Physics

Supermassive black hole roars to life as astronomers watch in real time

Sleeping Beauty —

A similar awakening may one day occur with the Milky Way’s supermassive black hole

Artist’s animation of the black hole at the center of SDSS1335+0728 awakening in real time—a first for astronomers.

In December 2019, astronomers were surprised to observe a long-quiet galaxy, 300 million light-years away, suddenly come alive, emitting ultraviolet, optical, and infrared light into space. Far from quieting down again, by February of this year the galaxy had begun emitting X-rays as well; it is becoming more active. Astronomers think it is most likely an active galactic nucleus (AGN), which gets its energy from the supermassive black hole at the galaxy’s center and/or from the black hole’s spin. That’s the conclusion of a new paper accepted for publication in the journal Astronomy & Astrophysics, although the authors acknowledge that it might instead be some kind of rare tidal disruption event (TDE).

The brightening of SDSS1335+0728, in the constellation Virgo, after decades of quietude was first detected by the Zwicky Transient Facility telescope. Its supermassive black hole is estimated to be about 1 million solar masses. To get a better understanding of what might be going on, the authors combed through archival data and combined it with data from new observations by various instruments, including the X-shooter spectrograph, part of the Very Large Telescope (VLT) in Chile’s Atacama Desert.

There are many reasons why a normally quiet galaxy might suddenly brighten, including a supernova or a TDE, in which part of a shredded star’s original mass is ejected violently outward. This, in turn, can form an accretion disk around the black hole that emits powerful X-rays and visible light. But such events don’t last nearly five years—usually no more than a few hundred days.

So the authors concluded that the galaxy has awakened and now hosts an AGN. First identified by Carl Seyfert in 1943, AGNs glow because the cold dust and gas surrounding the black hole can form orbiting accretion disks. Gravitational forces compress the matter in the disk and heat it to millions of kelvins, producing radiation across the electromagnetic spectrum.

Alternatively, the activity might be due to an especially long and faint TDE—the longest and faintest yet detected, if so. Or it could be a new phenomenon altogether. Either way, SDSS1335+0728 is a galaxy to watch. Astronomers are already preparing for follow-up observations with the VLT’s Multi Unit Spectroscopic Explorer (MUSE) and the Extremely Large Telescope, among others, and perhaps even the Vera Rubin Observatory, slated to come online next summer. Its Legacy Survey of Space and Time (LSST) will image the entire southern sky every few nights, potentially capturing many more galaxy awakenings.

“Regardless of the nature of the variations, [this galaxy] provides valuable information on how black holes grow and evolve,” said co-author Paula Sánchez Sáez, an astronomer at the European Southern Observatory in Germany. “We expect that instruments like [these] will be key in understanding [why the galaxy is brightening].”

There is also a supermassive black hole at the center of our own Milky Way galaxy (Sgr A*), but not enough material has accreted around it for astronomers to pick up any emitted radiation, even in the infrared. So its galactic nucleus is deemed inactive. It may have been active in the past, and it’s possible that it will reawaken in a few million (or even billion) years, when the Milky Way merges with the Andromeda Galaxy and their respective supermassive black holes combine. Only time will tell.

Astronomy and Astrophysics, 2024. DOI: 10.1051/0004-6361/202347957  (About DOIs).

Listing image by ESO/M. Kornmesser

Polarized light yields fresh insight into mysterious fast radio bursts

CHIME-ing in —

Scientists looked at how polarization changed direction to learn more about origins

Artist’s rendition of how the angle of polarized light from a fast radio burst changes as it journeys through space.

CHIME/Dunlap Institute

Astronomers have been puzzling over the origins of mysterious fast radio bursts (FRBs) since the first one was spotted in 2007. Researchers now have their first detailed look at the polarization of non-repeating FRBs, i.e., those that have produced only a single burst of light to date. The authors of a new paper published in The Astrophysical Journal examined the properties of the polarized light emitted by these FRBs, yielding further insight into the origins of the phenomenon. The analysis supports the hypothesis that repeating and non-repeating FRBs have different origins.

“This is a new way to analyze the data we have on FRBs. Instead of just looking at how bright something is, we’re also looking at the angle of the light’s vibrating electromagnetic waves,” said co-author Ayush Pandhi, a graduate student at the University of Toronto’s Dunlap Institute for Astronomy and Astrophysics. “It gives you additional information about how and where that light is produced and what it has passed through on its journey to us over many millions of light years.”

As we’ve reported previously, FRBs involve a sudden blast of radio-frequency radiation that lasts just a few microseconds. Astronomers have detected over a thousand of them to date; some come from sources that repeatedly emit FRBs, while others seem to burst once and go silent. You can produce this sort of sudden surge of energy by destroying something. But the existence of repeating sources suggests that at least some of them are produced by an object that survives the event. That has led to a focus on compact objects, like neutron stars and black holes—especially a class of neutron stars called magnetars—as likely sources.

There have also been many detected FRBs that don’t seem to repeat at all, suggesting that the conditions that produced them may destroy their source. That’s consistent with a blitzar—a bizarre astronomical event caused by the sudden collapse of an overly massive neutron star. The event is driven by an earlier merger of two neutron stars; this creates an unstable intermediate neutron star, which is kept from collapsing immediately by its rapid spin.

In a blitzar, the strong magnetic fields of the neutron star slow down its spin, causing it to collapse into a black hole several hours after the merger. That collapse suddenly deletes the dynamo powering the magnetic fields, releasing their energy in the form of a fast radio burst.

So the events we’ve been lumping together as FRBs could actually be the product of two different events. The repeating events occur in the environment around a magnetar. The one-shot events are triggered by the death of a highly magnetized neutron star within a few hours of its formation. Astronomers announced the detection of a possible blitzar potentially associated with an FRB last year.

Only about 3 percent of FRBs are of the repeating variety. Per Pandhi, this is the first analysis of the other 97 percent, the non-repeaters, using data from Canada’s CHIME (the Canadian Hydrogen Intensity Mapping Experiment). CHIME was built for other observations but is sensitive to many of the wavelengths that make up an FRB. Unlike most radio telescopes, which focus on small points in the sky, CHIME scans a huge area, allowing it to pick out FRBs even though they almost never happen in the same place twice.

Pandhi et al. investigated how the direction of the light polarization from 128 non-repeating FRBs changes, in order to learn more about the environments in which they originated. The team found that the polarized light from non-repeating FRBs changes both with time and with the frequency of the light. They concluded that this sample of non-repeating FRBs is either a separate population or a more evolved version of repeating FRBs, originating in less extreme environments with lower burst rates. That’s in keeping with the notion that non-repeating FRBs are quite different from their rarer repeating counterparts.
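
That frequency dependence is the key diagnostic. For context, the standard tool for this kind of analysis (a textbook relation, not a formula quoted from the paper) is Faraday rotation: magnetized plasma along the line of sight rotates the polarization angle of linearly polarized light by an amount proportional to the square of the wavelength,

$$\theta(\lambda) = \theta_0 + \mathrm{RM}\,\lambda^2, \qquad \mathrm{RM} \approx 0.81 \int n_e\, B_\parallel\, \mathrm{d}l \ \ \mathrm{rad\,m^{-2}},$$

where the rotation measure RM integrates the electron density $n_e$ (in cm⁻³) times the line-of-sight magnetic field $B_\parallel$ (in microgauss) along the path (in parsecs). A large or rapidly varying RM is the signature of a dense, strongly magnetized environment around the source.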

The Astrophysical Journal, 2024. DOI: 10.3847/1538-4357/ad40aa  (About DOIs).

How you can make cold-brew coffee in under 3 minutes using ultrasound

Save yourself a few hours —

A “sonication” time between 1 and 3 minutes is ideal to get the perfect cold brew.

UNSW Sydney engineers developed a new way to make cold brew coffee in under three minutes without sacrificing taste.

University of New South Wales, Sydney

Diehard fans of cold-brew coffee put in a lot of time and effort for their preferred caffeinated beverage. But engineers at the University of New South Wales, Sydney, have figured out a nifty hack. They rejiggered an existing espresso machine to accommodate an ultrasonic transducer that administers ultrasonic pulses, reducing the brewing time from the usual 12 to 24 hours to under three minutes, according to a new paper published in the journal Ultrasonics Sonochemistry.

As previously reported, rather than pouring boiling or near-boiling water over coffee grounds and steeping for a few minutes, the cold-brew method involves mixing coffee grounds with room-temperature water and letting the mixture steep for anywhere from several hours to two days. Then it is strained through a sieve to remove the sludge-like solids and passed through a finer filter. This can be done at home in a Mason jar, or you can get fancy and use a French press or a more elaborate Toddy system. It’s not necessarily served cold (although it can be)—just brewed cold.

The result is coffee that tastes less bitter than traditionally brewed coffee. “There’s nothing like it,” co-author Francisco Trujillo of UNSW Sydney told New Scientist. “The flavor is nice, the aroma is nice and the mouthfeel is more viscous and there’s less bitterness than a regular espresso shot. And it has a level of acidity that people seem to like. It’s now my favorite way to drink coffee.”

While there have been plenty of scientific studies delving into the chemistry of coffee, only a handful have focused specifically on cold-brew coffee. For instance, a 2018 study by scientists at Thomas Jefferson University in Philadelphia involved measuring levels of acidity and antioxidants in batches of cold- and hot-brew coffee. But those experiments only used lightly roasted coffee beans. The degree of roasting (temperature) makes a significant difference when it comes to hot-brew coffee. Might the same be true for cold-brew coffee?

To find out, the same team decided in 2020 to explore the extraction yields of light-, medium-, and dark-roast coffee beans during the cold-brew process. They used the cold-brew recipe from The New York Times for their experiments, with a water-to-coffee ratio of 10:1 for both cold- and hot-brew batches. (Hot brew normally has a water-to-coffee ratio of 20:1, but the team wanted to control variables as much as possible.) They carefully controlled when water was added to the coffee grounds, how long to shake (or stir) the solution, and how best to press the cold-brew coffee.
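For a sense of the quantities involved, those ratios translate directly into amounts. A minimal sketch (my own convenience helper, not code from either study):

```python
# A convenience helper (mine, not from either study) for turning the
# water-to-coffee ratios quoted above into actual amounts.

def coffee_grams(water_grams: float, water_to_coffee: float) -> float:
    """Grams of ground coffee for a given mass of water at a ratio like 10 (i.e., 10:1)."""
    return water_grams / water_to_coffee

# 1 liter of water weighs roughly 1,000 g.
print(coffee_grams(1000, 10))  # the study's 10:1 batches -> 100.0 g of grounds
print(coffee_grams(1000, 20))  # typical hot brew at 20:1 -> 50.0 g
```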

The team found that for the lighter roasts, caffeine content and antioxidant levels were roughly the same in both the hot- and cold-brew batches. However, there were significant differences between the two methods when medium- and dark-roast coffee beans were used. Specifically, the hot-brew method extracts more antioxidants from the grind; the darker the bean, the greater the difference. Both hot- and cold-brew batches become less acidic the darker the roast.

The new faster cold brew system subjects coffee grounds in the filter basket to ultrasonic sound waves from a transducer, via a specially adapted horn.

UNSW/Francisco Trujillo

That gives cold-brew fans a few handy tips, but the process remains incredibly time-consuming; only true aficionados have the patience required to cold brew their own morning cuppa. Many coffee houses now offer cold brew, but it requires expensive, large semi-industrial brewing units and a good deal of refrigeration space. According to Trujillo, the inspiration for using ultrasound to speed up the process arose from research attempts to extract more antioxidants. Those experiments ultimately failed, but the setup produced very good coffee.

Trujillo et al. used a Breville Dual Boiler BES920 espresso machine for their latest experiments, with a few key modifications. They connected a bolt-clamped transducer to the brewing basket with a metal horn. They then used the transducer to inject 38.8 kHz sound waves through the basket walls at several different points, thereby transforming the filter basket into a powerful ultrasonic reactor.

The team used the machine’s original boiler but set it up to be independently controlled with an integrated circuit to better manage the temperature of the water. As for the coffee beans, they picked Campos Coffee’s Caramel & Rich Blend (a medium roast). “This blend combines fresh, high-quality specialty coffee beans from Ethiopia, Kenya, and Colombia, and the roasted beans deliver sweet caramel, butterscotch, and milk chocolate flavors,” the authors wrote.

There were three types of samples for the experiments: cold brew hit with ultrasound at room temperature for one minute or for three minutes, and cold brew prepared with the usual 24-hour process. For the ultrasonic brews, the beans were ground into a fine grind typical for espresso, while a slightly coarser grind was used for the traditional cold-brew coffee.

Two seconds of hope for fusion power

The interior of the DIII-D tokamak.

Using nuclear fusion, the process that powers the stars, to produce electricity on Earth has famously been 30 years away for more than 70 years. But now, a breakthrough experiment done at the DIII-D National Fusion Facility in San Diego may finally push nuclear fusion power plants to be roughly 29 years away.

Nuclear fusion ceiling

The DIII-D facility is run by General Atomics for the Department of Energy. It includes an experimental tokamak, a donut-shaped nuclear fusion device that works by trapping astonishingly hot plasma in very strong, toroidal magnetic fields. Tokamaks, compared to other fusion reactor designs like stellarators, are the furthest along in their development; ITER, the world’s first power-plant-size fusion device now under construction in France, is scheduled to run its first tests with plasma in December 2025.

But tokamaks have always had some issues. Back in 1988, Martin Greenwald, a Massachusetts Institute of Technology expert on plasma physics, proposed an equation that described an apparent limit on how dense plasma could get in tokamaks. He argued that the maximum attainable density is dictated by the minor radius of a tokamak and the current induced in the plasma to maintain magnetic stability. Going beyond that limit was supposed to render the magnets incapable of holding the plasma, which is heated to north of 150 million degrees Celsius, away from the walls of the machine.
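
For reference, the limit in its standard form (the widely quoted empirical scaling, not notation taken from the new paper) is

$$n_G = \frac{I_p}{\pi a^2},$$

where $n_G$ is the maximum line-averaged plasma density in units of $10^{20}$ particles per cubic meter, $I_p$ is the plasma current in megaamperes, and $a$ is the tokamak’s minor radius in meters.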

Since the power output of a tokamak is proportional to the square of the fuel density, this limit didn’t bode well for fusion power plants. A commercial reactor would either need to be huge or drive absurdly high plasma currents. The former meant it would be catastrophically expensive to build; the latter, that it would be expensive to run.

But there has been hope. Since then, many research teams working at different tokamak facilities—including the Joint European Torus (JET) in Britain and the ASDEX Upgrade in Germany—have achieved plasma densities exceeding the Greenwald limit. In response, Martin Greenwald himself revised his claim a bit, saying that the limit applied not to the line-averaged plasma density in the entire reactor but only to the portion of the plasma occupying the outer 10 percent of the radius, near the reactor’s wall.

While the actual density numbers were pushed a little, the working principle behind the Greenwald limit still held—when the plasma density went up above the Greenwald line, the quality of confinement went down. “The major phenomenon people discovered in the high-density experiments was reduced energy confinement when plasma density was increased,” said Siye Ding, a researcher at General Atomics working at the DIII-D National Fusion Facility.

To use fusion for energy production, we need both high density and high confinement. “For the first time, we have experimentally demonstrated how to resolve this problem,” said Ding.

Self-organizing puzzle

“When you make a plasma in your reactor, there is a whole combination of parameters,” explained Andrea Garofalo, a sciences manager at General Atomics who worked on the experiment at DIII-D. “What is the plasma current, what is the toroidal field, what is the external heating versus time. Combinations of such parameters can vary in tokamaks—you can have plasma current higher or lower, you can start the heating early, you can start it later. All this comprises what we call a scenario.”

“We’re talking about optimizing the waveforms of power, fueling, etc. to achieve the right configuration,” he added.

The configuration he and his colleagues achieved (called the high-poloidal-beta scenario) worked like a charm.

People working on nuclear fusion use various metrics that integrate multiple parameters into simple numbers, making it easier to compare the performance of different fusion experiments. The H98Y metric tracks the quality of confinement; the high-confinement mode that will be used at ITER has an H98Y equal to 1. Plasma density is often expressed as FGR—the Greenwald fraction—which describes how far below or above the Greenwald limit the plasma density gets. An FGR equal to 1 means the density is exactly at the Greenwald limit.
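
In symbols (these are standard definitions from the fusion literature rather than anything unique to this paper):

$$H_{98Y} = \frac{\tau_E}{\tau_E^{\mathrm{IPB98(y,2)}}}, \qquad F_{GR} = \frac{\bar{n}_e}{n_G},$$

where $\tau_E$ is the measured energy confinement time, $\tau_E^{\mathrm{IPB98(y,2)}}$ is the confinement time predicted by the empirical ITER-98 scaling law, $\bar{n}_e$ is the line-averaged electron density, and $n_G$ is the Greenwald density defined above.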

A supernova caused the BOAT gamma ray burst, JWST data confirms

Still the BOAT —

But astronomers are puzzled by the lack of signatures of expected heavy elements.

Artist’s visualization of GRB 221009A showing the narrow relativistic jets—emerging from a central black hole—that gave rise to the brightest gamma-ray burst yet detected.

Aaron M. Geller/Northwestern/CIERA/ITRC&DS

In October 2022, several space-based detectors picked up a powerful gamma-ray burst so energetic that astronomers nicknamed it the BOAT (Brightest Of All Time). Now they’ve confirmed that the GRB came from a supernova, according to a new paper published in the journal Nature Astronomy. However, they did not find evidence of the heavy elements, like platinum and gold, that one would expect from such an explosion—a result that bears on the longstanding question of the origin of such elements in the universe.

As we’ve reported previously, gamma-ray bursts are extremely high-energy explosions in distant galaxies lasting from mere milliseconds to several hours. There are two classes of gamma-ray bursts. Most (70 percent) are long bursts lasting more than two seconds, often with a bright afterglow. These are usually linked to galaxies with rapid star formation. Astronomers think that long bursts are tied to the deaths of massive stars collapsing to form a neutron star or black hole (or, alternatively, a newly formed magnetar). The baby black hole would produce jets of highly energetic particles moving near the speed of light, powerful enough to pierce through the remains of the progenitor star, emitting X-rays and gamma rays.

Those gamma-ray bursts lasting less than two seconds (about 30 percent) are deemed short bursts, usually emitted from regions with very little star formation. Astronomers think these gamma-ray bursts result from mergers between two neutron stars, or between a neutron star and a black hole, producing a “kilonova.” That hypothesis was confirmed in 2017, when the LIGO collaboration picked up the gravitational wave signal of two neutron stars merging, accompanied by the powerful gamma-ray bursts associated with a kilonova.

The October 2022 gamma-ray burst falls into the long category, lasting over 300 seconds. GRB 221009A triggered detectors aboard NASA’s Fermi Gamma-ray Space Telescope, the Neil Gehrels Swift Observatory, and the Wind spacecraft, among others, just as gamma-ray astronomers had gathered for an annual meeting in Johannesburg, South Africa. The powerful signal came from the constellation Sagitta, traveling some 1.9 billion years to reach Earth.

Several papers were published last year reporting on the analytical results of all the observational data. Those findings confirmed that GRB 221009A was indeed the BOAT, appearing especially bright because its narrow jet was pointing directly at Earth. But the various analyses also yielded several surprising results that puzzled astronomers. Most notably, a supernova should have occurred a few weeks after the initial burst, but astronomers didn’t detect one, perhaps because it was very faint, and thick dust clouds in that part of the sky were dimming any incoming light.

Swift’s X-ray Telescope captured the afterglow of GRB 221009A about an hour after it was first detected.

NASA/Swift/A. Beardmore (University of Leicester)

That’s why Peter Blanchard of Northwestern University and his fellow co-authors decided to wait six months before undertaking their own analysis, relying on data collected during the GRB’s later phase by the Webb Space Telescope’s Near Infrared Spectrograph. They augmented that spectral data with observations from ALMA (the Atacama Large Millimeter/Submillimeter Array) in Chile so they could separate light from the supernova and light from the GRB afterglow. The most significant finding was the detection of telltale signatures of key elements like calcium and oxygen that one would expect to find with a supernova.

Yet the supernova wasn’t brighter than other supernovae associated with less energetic GRBs, which is puzzling. “You might expect that the same collapsing star producing a very energetic and bright GRB would also produce a very energetic and bright supernova,” said Blanchard. “But it turns out that’s not the case. We have this extremely luminous GRB, but a normal supernova.” The authors suggest that this might have something to do with the shape and structure of the relativistic jet, which was much narrower than other GRB jets, resulting in a more focused and brighter beam of light.

The data held another surprise for astronomers. The only confirmed source of heavy elements in the universe to date is the merging of binary neutron stars. But per Blanchard, there are far too few neutron star mergers to account for the abundance of heavy elements, so there must be another source. One hypothetical additional source is a rapidly spinning massive star that collapses and explodes into a supernova. Alas, there was no evidence of heavy elements in the JWST spectral data regarding the BOAT.

“When we confirmed that the GRB was generated by the collapse of a massive star, that gave us the opportunity to test a hypothesis for how some of the heaviest elements in the universe are formed,” said Blanchard. “We did not see signatures of these heavy elements, suggesting that extremely energetic GRBs like the BOAT do not produce these elements. That doesn’t mean that all GRBs do not produce them, but it’s a key piece of information as we continue to understand where these heavy elements come from. Future observations with JWST will determine if the BOAT’s ‘normal’ cousins produce these elements.”

Nature Astronomy, 2024. DOI: 10.1038/s41550-024-02237-4  (About DOIs).

RIP Peter Higgs, who laid foundation for the Higgs boson in the 1960s

A particle physics hero —

Higgs shared the 2013 Nobel Prize in Physics with François Englert.

A visibly emotional Peter Higgs was present when CERN announced the Higgs boson discovery in July 2012.

University of Edinburgh

Peter Higgs, the shy, somewhat reclusive physicist who won a Nobel Prize for his theoretical work on how the Higgs boson gives elementary particles their mass, has died at the age of 94. According to a statement from the University of Edinburgh, the physicist passed “peacefully at home on Monday 8 April following a short illness.”

“Besides his outstanding contributions to particle physics, Peter was a very special person, a man of rare modesty, a great teacher and someone who explained physics in a very simple and profound way,” Fabiola Gianotti, director general at CERN and former leader of one of the experiments that helped discover the Higgs particle in 2012, told The Guardian. “An important piece of CERN’s history and accomplishments is linked to him. I am very saddened, and I will miss him sorely.”

The Higgs boson is a manifestation of the Higgs field, an invisible entity that pervades the Universe. Interactions between the Higgs field and particles help provide particles with mass, with particles that interact more strongly having larger masses. The Standard Model of Particle Physics describes the fundamental particles that make up all matter, like quarks and electrons, as well as the particles that mediate their interactions through forces like electromagnetism and the weak force. Back in the 1960s, theorists extended the model to incorporate what has become known as the Higgs mechanism, which provides many of the particles with mass. One consequence of the Standard Model’s version of the Higgs mechanism is that there should be a particle, called a boson, associated with the Higgs field.

Despite its central role in the function of the Universe, the road to predicting the existence of the Higgs boson was bumpy, as was the process of discovering it. As previously reported, the idea of the Higgs boson was a consequence of studies on the weak force, which controls the decay of radioactive elements. The weak force only operates at very short distances, which suggests that the particles that mediate it (the W and Z bosons) are likely to be massive. While it was possible to use existing models of physics to explain some of their properties, these predictions had an awkward feature: just like another force-carrying particle, the photon, the resulting W and Z bosons were massless.

Schematic of the Standard Model of particle physics.

Over time, theoreticians managed to craft models that included massive W and Z bosons, but they invariably came with a hitch: a massless partner, which would imply a longer-range force. In 1964, however, a series of papers was published in rapid succession that described a way to get rid of this problematic particle. If a certain symmetry in the models was broken, the massless partner would go away, leaving only a massive one.
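
The textbook cartoon of that symmetry breaking (a pedagogical sketch, not the notation of the 1964 papers) is a field $\phi$ whose potential energy is shaped like a Mexican hat:

$$V(\phi) = -\mu^2 |\phi|^2 + \lambda |\phi|^4.$$

The symmetric point $\phi = 0$ sits on the crown of the hat and is unstable; the field instead settles into the circular valley at $|\phi| = \sqrt{\mu^2 / 2\lambda}$. Particles that interact with the field acquire masses proportional to that nonzero value, and the leftover radial oscillation of the field about the valley shows up as a massive particle: the Higgs boson.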

The first of these papers, by François Englert and Robert Brout, proposed the new model in terms of quantum field theory; the second, by Higgs (then 35), noted that a single quantum of the field would be detectable as a particle. A third paper, by Gerald Guralnik, Carl Richard Hagen, and Tom Kibble, provided an independent validation of the general approach, as did a completely independent derivation by students in the Soviet Union.

At that time, “There seemed to be excitement and concern about quantum field theory (the underlying structure of particle physics) back then, with some people beginning to abandon it,” David Kaplan, a physicist at Johns Hopkins University, told Ars. “There were new particles being regularly produced at accelerator experiments without any real theoretical structure to explain them. Spin-1 particles could be written down comfortably (the photon is spin-1) as long as they didn’t have a mass, but the massive versions were confusing to people at the time. A bunch of people, including Higgs, found this quantum field theory trick to give spin-1 particles a mass in a consistent way. These little tricks can turn out to be very useful, but also give the landscape of what is possible.”

“It wasn’t clear at the time how it would be applied in particle physics.”

Ironically, Higgs’ seminal paper was rejected by the European journal Physics Letters. He then added a crucial couple of paragraphs noting that his model also predicted the existence of what we now know as the Higgs boson. He submitted the revised paper to Physical Review Letters in the US, where it was accepted. He examined the properties of the boson in more detail in a 1966 follow-up paper.

Gravitational waves reveal “mystery object” merging with a neutron star

mind the gap —

The so-called “mass gap” might be less empty than physicists previously thought.

Artistic rendition of a black hole merging with a neutron star. LIGO/VIRGO/KAGRA detected a merger involving a neutron star and what might be a very light black hole falling within the “mass gap” range.

LIGO-India/Soheb Mandhai

The LIGO/VIRGO/KAGRA collaboration searches the universe for gravitational waves produced by the mergers of black holes and neutron stars. It has now announced the detection of a signal indicating a merger between two compact objects, one of which has an unusual intermediate mass—heavier than a neutron star and lighter than a black hole. The collaboration provided specifics of their analysis of the merger and the “mystery object” in a draft manuscript posted to the physics arXiv, suggesting that the object might be a very low-mass black hole.

LIGO detects gravitational waves via laser interferometry, using high-powered lasers to measure tiny changes in the distance between two objects positioned kilometers apart. LIGO has detectors in Hanford, Washington state, and in Livingston, Louisiana. A third detector in Italy, Advanced VIRGO, came online in 2016. In Japan, KAGRA is the first gravitational-wave detector in Asia and the first to be built underground. Construction began on LIGO-India in 2021, and physicists expect it will turn on sometime after 2025.

To date, the collaboration has detected dozens of merger events since its first Nobel Prize-winning discovery. Early detected mergers involved either two black holes or two neutron stars, but in 2021, LIGO/VIRGO/KAGRA confirmed the detection of two separate “mixed” mergers between black holes and neutron stars.

Most known black holes fall into two groups: stellar-mass black holes (ranging from a few solar masses to tens of solar masses) and supermassive black holes, like the one in the middle of our Milky Way galaxy (ranging from hundreds of thousands to billions of solar masses). The former are the result of massive stars dying in a core-collapse supernova, while the latter’s formation process remains something of a mystery. The range between the heaviest known neutron star and the lightest known black hole is known among scientists as the “mass gap.”

There have been gravitational wave hints of compact objects falling within a mass gap before. For instance, as reported previously, in 2019 LIGO/VIRGO picked up a gravitational wave signal from a black hole merger dubbed GW190521, which produced the most energetic signal detected thus far, showing up in the data as more of a “bang” than the usual “chirp.” Even weirder, the two black holes that merged were locked in an elliptical (rather than circular) orbit, and their axes of spin were tipped far more than usual compared to those orbits. And the new black hole resulting from the merger had an intermediate mass of 142 solar masses—falling within a different gap, the one between stellar-mass black holes and supermassive ones.

Masses in the stellar graveyard.

LIGO-Virgo-KAGRA/Aaron Geller/Northwestern

That same year, the collaboration detected another signal, GW190814, from a compact binary merger involving a mystery object that also fell within the mass gap. With no corresponding electromagnetic signal to accompany the gravitational wave signal, astrophysicists were unable to determine whether that object was an unusually heavy neutron star or an especially light black hole. And now we have a new mystery object within the mass gap, in a merger event dubbed GW230529.

“While previous evidence for mass-gap objects has been reported both in gravitational and electromagnetic waves, this system is especially exciting because it’s the first gravitational-wave detection of a mass-gap object paired with a neutron star,” said co-author Sylvia Biscoveanu of Northwestern University. “The observation of this system has important implications for both theories of binary evolution and electromagnetic counterparts to compact-object mergers.”

See where this discovery falls within the mass gap.

Shanika Galaudage / Observatoire de la Côte d’Azur

LIGO/VIRGO/KAGRA started its fourth observing run last spring and soon picked up GW230529’s signal. Scientists determined that one of the two merging objects had a mass between 1.2 and 2 times the mass of our sun—most likely a neutron star—while the other’s mass fell in the mass-gap range of 2.5 to 4.5 times the mass of our sun. As with GW190814, there were no accompanying bursts of electromagnetic radiation, so the team wasn’t able to conclusively identify the nature of the more massive mystery object, located some 650 million light-years from Earth, but they think it is probably a low-mass black hole. If so, the finding implies an increase in the expected rate of neutron star–black hole mergers with electromagnetic counterparts, per the authors.

“Before we started observing the universe in gravitational waves, the properties of compact objects like black holes and neutron stars were indirectly inferred from electromagnetic observations of systems in our Milky Way,” said co-author Michael Zevin, an astrophysicist at the Adler Planetarium. “The idea of a gap between neutron-star and black-hole masses, an idea that has been around for a quarter of a century, was driven by such electromagnetic observations. GW230529 is an exciting discovery because it hints at this ‘mass gap’ being less empty than astronomers previously thought, which has implications for the supernova explosions that form compact objects and for the potential light shows that ensue when a black hole rips apart a neutron star.”

arXiv, 2024. DOI: 10.48550/arXiv.2404.04248  (About DOIs).

Astronomers have solved the mystery of why this black hole has the hiccups

David vs. Goliath —

Blame it on a smaller orbiting black hole repeatedly punching through the accretion disk.

Scientists have found a large black hole that “hiccups,” giving off plumes of gas.

Jose-Luis Olivares, MIT

In December 2020, astronomers spotted an unusual burst of light in a galaxy roughly 848 million light-years away—a region with a supermassive black hole at the center that had been largely quiet until then. The energy of the burst mysteriously dipped about every 8.5 days before the black hole settled back down, akin to having a case of celestial hiccups.

Now scientists think they’ve figured out the reason for this unusual behavior. The supermassive black hole is orbited by a smaller black hole that periodically punches through the larger object’s accretion disk during its travels, releasing a plume of gas. This suggests that black hole accretion disks might not be as uniform as astronomers thought, according to a new paper published in the journal Science Advances.

Co-author Dheeraj “DJ” Pasham of MIT’s Kavli Institute for Astrophysics and Space Research noticed the community alert that went out after the All-Sky Automated Survey for Supernovae (ASAS-SN) detected the flare, dubbed ASASSN-20qc. He was intrigued and still had some allotted time on NICER (the Neutron star Interior Composition Explorer), an X-ray telescope on board the International Space Station. He directed the telescope to the galaxy of interest and gathered about four months of data, after which the flare faded.

Pasham noticed a strange pattern as he analyzed that four months’ worth of data: the bursts of energy dipped every 8.5 days in the X-ray regime, much as a star’s brightness can briefly dim whenever an orbiting planet crosses in front of it. Pasham was puzzled as to what kind of object could cause a similar effect in an entire galaxy. Then he stumbled across a theoretical paper by Czech physicists suggesting that a supermassive black hole at the center of a galaxy could host a smaller orbiting black hole; they predicted that, under the right circumstances, this could produce just such a periodic effect as Pasham had observed in his X-ray data.

Computer simulation of an intermediate-mass black hole orbiting a supermassive black hole and driving periodic gas plumes that can explain the observations.

Petra Sukova, Astronomical Institute of the CAS

“I was super excited about this theory, and I immediately emailed to say, ‘I think we’re observing exactly what your theory predicted,’” Pasham said. They joined forces to run simulations incorporating the data from NICER, and the results supported the theory. The black hole at the galaxy’s center is estimated to have a mass of 50 million suns. Since there was no burst before December 2020, the team thinks there was, at most, just a faint accretion disk around that black hole, and a smaller orbiting black hole of between 100 and 10,000 solar masses that eluded detection because of that.
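
Those two numbers (a 50-million-solar-mass primary and an 8.5-day dip period) are enough for a back-of-the-envelope feel for the geometry. A rough sketch (my own Keplerian estimate, assuming a circular Newtonian orbit and one disk crossing per dip; not a calculation from the paper):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
C = 2.998e8        # speed of light, m/s
AU = 1.496e11      # astronomical unit, m

M = 50e6 * M_SUN   # primary black hole: ~50 million suns (from the article)
T = 8.5 * 86400.0  # dip period: 8.5 days, in seconds

# Kepler's third law for a circular orbit: a^3 = G * M * T^2 / (4 * pi^2).
# If the secondary actually crosses the disk twice per orbit, the true period
# is ~17 days and the radius comes out ~60 percent larger.
a = (G * M * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

r_s = 2 * G * M / C**2  # Schwarzschild radius of the primary

print(f"orbital radius ~ {a / AU:.0f} AU (~{a / r_s:.0f} Schwarzschild radii)")
# -> roughly 30 AU, only a few tens of Schwarzschild radii from the primary:
#    deep enough in to plow through the inner accretion disk on every pass.
```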

So what changed? Pasham et al. suggest that a nearby star got caught in the gravitational pull of the supermassive black hole in December 2020 and was ripped to shreds, in what is known as a tidal disruption event (TDE). As previously reported, in a TDE, part of the shredded star’s original mass is ejected violently outward. This, in turn, can form an accretion disk around the black hole that emits powerful X-rays and visible light. Such outflows are one way astronomers can indirectly infer the presence of a black hole, and they typically occur soon after the TDE.

That seems to be what happened in the current system, causing the sudden flare in the primary supermassive black hole. Now it had a much brighter accretion disk, so when its smaller black hole partner passed through the disk, larger-than-usual gas plumes were emitted. As luck would have it, those plumes just happened to be pointed in the direction of an observing telescope.

Astronomers have known about so-called “David and Goliath” binary black hole systems for a while, but “this is a different beast,” said Pasham. “It doesn’t fit anything that we know about these systems. We’re seeing evidence of objects going in and through the disk, at different angles, which challenges the traditional picture of a simple gaseous disk around black holes. We think there is a huge population of these systems out there.”

Science Advances, 2024. DOI: 10.1126/sciadv.adj8898  (About DOIs).

Quantum computing progress: Higher temps, better error correction

Conceptual graphic of symbols representing quantum states floating above a stylized computer chip.

There’s a strong consensus that tackling most useful problems with a quantum computer will require that the computer be capable of error correction. There is absolutely no consensus, however, about which technology will allow us to get there. A large number of companies, including major players like Microsoft, Intel, Amazon, and IBM, have committed to different approaches, while a collection of startups is exploring an even wider range of potential solutions.

We probably won’t have a clearer picture of what’s likely to work for a few years. But there’s going to be lots of interesting research and development work between now and then, some of which may ultimately represent key milestones in the development of quantum computing. To give you a sense of that work, we’re going to look at three papers that were published within the last couple of weeks, each of which tackles a different aspect of quantum computing technology.

Hot stuff

Error correction will require connecting multiple hardware qubits to act as a single unit termed a logical qubit. This spreads a single bit of quantum information across multiple hardware qubits, making it more robust. Additional qubits are used to monitor the behavior of the ones holding the data and perform corrections as needed. Some error correction schemes require over a hundred hardware qubits for each logical qubit, meaning we’d need tens of thousands of hardware qubits before we could do anything practical.
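To make “tens of thousands” concrete, here is a minimal sketch (my own illustration using the rotated surface code, one widely studied scheme; the article itself doesn’t commit to any particular code):

```python
# Hardware-qubit overhead for the rotated surface code (illustrative only).
# A logical qubit of code distance d uses d*d data qubits to hold the
# spread-out quantum state, plus d*d - 1 ancilla qubits that repeatedly
# measure error syndromes on their neighbors.

def physical_per_logical(d: int) -> int:
    """Hardware qubits per logical qubit at code distance d."""
    data = d * d
    ancilla = d * d - 1
    return data + ancilla

for d in (5, 11, 17):
    per = physical_per_logical(d)
    print(f"distance {d}: {per} hardware qubits per logical qubit, "
          f"{100 * per:,} for a modest 100-logical-qubit machine")
# distance 17 already puts a 100-logical-qubit machine in the tens of
# thousands of hardware qubits, matching the scale quoted above.
```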

A number of companies have looked at that problem and decided we already know how to create hardware on that scale—just look at any silicon chip. So, if we could etch useful qubits through the same processes we use to make current processors, then scaling wouldn’t be an issue. Typically, this has meant fabricating quantum dots on the surface of silicon chips and using these to store single electrons that can hold a qubit in their spin. The rest of the chip holds more traditional circuitry that performs the initiation, control, and readout of the qubit.

This creates a notable problem. Like many other qubit technologies, quantum dots need to be kept below 1 kelvin to keep the environment from interfering with the qubit. And, as anyone who’s ever owned an x86-based laptop knows, all the other circuitry on the silicon generates heat. So there’s the very real prospect that trying to control the qubits will raise the temperature to the point where the qubits can’t hold onto their state.

That might not be the problem we thought it was, according to some work published in Wednesday’s Nature. A large international team that includes people from the startup Diraq has shown that a silicon quantum dot processor can work well at the relatively toasty temperature of 1 kelvin, up from the usual millikelvin temperatures at which these processors normally operate.

The work was done on a two-qubit prototype made with materials that were specifically chosen to improve noise tolerance; the experimental procedure was also optimized to limit errors. The team then performed normal operations starting at 0.1 K and gradually ramped up the temperature to 1.5 K, checking performance as they did so. They found that a major source of errors, state preparation and measurement (SPAM), didn’t change dramatically in this temperature range: “SPAM around 1 K is comparable to that at millikelvin temperatures and remains workable at least until 1.4 K.”

The error rates they did see depended on the state being prepared. One particular state (both spin-up) had a fidelity of over 99 percent, while the rest were less constrained, at somewhere above 95 percent. States had a lifetime of over a millisecond, which qualifies as long-lived in the quantum world.

All of which is pretty good, and suggests that the chips can tolerate reasonable operating temperatures, meaning on-chip control circuitry can be used without causing problems. The error rates of the hardware qubits are still well above those that would be needed for error correction to work. However, the researchers suggest that they’ve identified error processes that can potentially be compensated for. They expect that the ability to do industrial-scale manufacturing will ultimately lead to working hardware.

Event Horizon Telescope captures stunning new image of Milky Way’s black hole

A new image from the Event Horizon Telescope has revealed powerful magnetic fields spiraling from the edge of a supermassive black hole at the center of the Milky Way, Sagittarius A*.

EHT Collaboration

Physicists have been confident since the 1980s that there is a supermassive black hole at the center of the Milky Way galaxy, similar to those thought to be at the center of most spiral and elliptical galaxies. It has since been dubbed Sagittarius A* (pronounced A-star), or Sgr A* for short. The Event Horizon Telescope (EHT) captured the first image of Sgr A* two years ago. Now the collaboration has revealed a new polarized image (above) showcasing the black hole’s swirling magnetic fields. The technical details appear in two new papers published in The Astrophysical Journal Letters. The new image is strikingly similar to the EHT’s earlier image of a larger supermassive black hole, M87*, so this might be something that all such black holes share.

The only way to “see” a black hole is to image the shadow created by light as it bends in response to the object’s powerful gravitational field. As Ars Science Editor John Timmer reported in 2019, the EHT isn’t a telescope in the traditional sense. Instead, it’s a collection of telescopes scattered around the globe. The EHT is created by interferometry, which uses light in the microwave regime of the electromagnetic spectrum captured at different locations. These recorded images are combined and processed to build an image with a resolution similar to that of a telescope as wide as the distance between the most distant locations. Interferometry has been used at facilities like ALMA (the Atacama Large Millimeter/submillimeter Array) in northern Chile, where telescopes can be spread across 16 km of desert.
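
As a rough order-of-magnitude check (my own numbers, applying the usual diffraction limit rather than anything from the EHT papers): an interferometer’s angular resolution is about the observing wavelength divided by its longest baseline, so for the EHT’s 1.3 mm observing wavelength and an Earth-sized baseline,

$$\theta \approx \frac{\lambda}{B} = \frac{1.3\times10^{-3}\ \mathrm{m}}{1.3\times10^{7}\ \mathrm{m}} = 10^{-10}\ \mathrm{rad} \approx 20\ \mu\mathrm{as}.$$

Tens of microarcseconds is fine enough to resolve the ring around Sgr A*, which spans roughly 50 microarcseconds on the sky.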

In theory, there’s no upper limit on the size of the array, but to determine which photons originated simultaneously at the source, you need very precise location and timing information on each of the sites. And you still have to gather sufficient photons to see anything at all. So atomic clocks were installed at many of the locations, and exact GPS measurements were built up over time. For the EHT, the large collecting area of ALMA—combined with choosing a wavelength in which supermassive black holes are very bright—ensured sufficient photons.

In 2019, the EHT announced the first direct image taken of a black hole, at the center of an elliptical galaxy, Messier 87, located in the constellation of Virgo some 55 million light-years away. This image would have been impossible a mere generation ago, and it was made possible by technological breakthroughs, innovative new algorithms, and (of course) connecting several of the world’s best radio observatories. The image confirmed that the object at the center of M87 is indeed a black hole.

In 2021, the EHT collaboration released a new image of M87* showing what the black hole looks like in polarized light—a signature of the magnetic fields at the object’s edge—which yielded fresh insight into how black holes gobble up matter and emit powerful jets from their cores. A few months later, the EHT was back with images of the “dark heart” of a radio galaxy known as Centaurus A, enabling the collaboration to pinpoint the location of the supermassive black hole at that galaxy’s center.

Sgr A* is much smaller but also much closer than M87*. That made it a bit more challenging to capture an equally sharp image, because Sgr A* changes on time scales of minutes and hours, compared to days and weeks for M87*. Physicist Matt Strassler previously compared the feat to “taking a one-second exposure of a tree on a windy day. Things get blurred out, and it can be difficult to determine the true shape of what was captured in the image.”

Report: Superconductivity researcher found to have committed misconduct

Definitely not super —

Details of what the University of Rochester investigation found are not available.

Rush Rhees Library at the University of Rochester.

We’ve been following the saga of Ranga Dias since he first burst onto the scene with reports of a high-pressure, room-temperature superconductor, published in Nature in 2020. Even as that paper was being retracted due to concerns about the validity of some of its data, Dias published a second paper claiming a similar breakthrough: a superconductor that works at high temperatures but somewhat lower pressures. Shortly afterward, that got retracted as well.

On Wednesday, the University of Rochester, where Dias is based, announced that it had concluded an investigation into Dias and found that he had committed research misconduct. (The outcome was first reported by The Wall Street Journal.)

The outcome is likely to mean the end of Dias’ career, as well as the company he founded to commercialize the supposed breakthroughs. But it’s unlikely we’ll ever see the full details of the investigation’s conclusions.

Questionable research

Dias’ lab was focused on high-pressure superconductivity. At extreme pressures, the orbitals where electrons hang out get distorted, which can alter the chemistry and electronic properties of materials. This can mean the formation of chemical compounds that don’t exist at normal pressures, along with distinct conductivity. In a number of cases, these changes enabled superconductivity at unusually high temperatures, although still well below the freezing point of water.

Dias, however, supposedly found a combination of chemicals that would boost the transition to superconductivity to near room temperature, although only at extreme pressures. While the results were plausible, the details regarding how some of the data was processed to produce one of the paper’s key graphs were lacking, and Dias didn’t provide a clear explanation. Nature eventually pulled the paper, and the University of Rochester initiated investigations (plural!) of his work.

Those investigations cleared Dias of misconduct, and he was quickly back with a report of another high-temperature superconductor, this one forming at less extreme pressures—somewhat surprisingly, published again by Nature. This time, things fell apart much more rapidly, with potential problems quickly becoming apparent, and many of the paper’s authors, not including Dias, called for its retraction.

The University of Rochester started yet another investigation, which is the one that has now concluded that Dias engaged in research misconduct.

The extent of this misconduct, however, might never be revealed. These internal university investigations are generally not made public, even if it might be in the public’s interest to know. The only recent exception is a case where a researcher accused of misconduct sued her university for defamation over the outcome of the investigation. The university submitted its investigation report as evidence, allowing it to become part of the public record.

Behind the scenes

That said, we have learned a fair bit about what has happened inside Dias’ lab, thanks to Nature News, a sister publication of the scientific journal that published both of Dias’ papers. It conducted a tour de force of investigative journalism, talking to Dias’ grad students and obtaining the peer review evaluations of Dias’ two papers.

The investigation showed that, for the first paper, Dias simply told his graduate students that the key data came from before he had set up his own lab, which explains why they weren’t aware of it. The students claimed that the ensuing investigations didn’t contact any of them, suggesting the inquiries were extremely limited in scope. By contrast, the students say they were well aware that the results presented in the second paper didn’t match up with experiments, and in at least one case, Dias clearly misrepresented his lab’s work. (The paper claimed to have synthesized a chemical that the students say was simply purchased from a supplier.)

They were the ones who organized the effort to retract the second paper, and they said that the final investigation actually sought their input.

Meanwhile, on the peer review side, the reporting does not leave Nature looking especially good. Both papers required several rounds of revision and review before being accepted, and even after all this work, most of the reviewers were ambivalent at best about whether the papers should be published. It was an editorial decision to go ahead despite that.

While things seem to have worked out in the end, the major institutions involved here—Nature and the University of Rochester—aren’t coming out of this unscathed. Neither seems to have taken early indications of misconduct as seriously as it should have. As for Dias, the reporting in the Nature News piece should be career-ending. And it’s worth considering that, in the absence of the reporter’s work, the research community would probably remain unaware of most of the details of Dias’ misconduct.

This stretchy electronic material hardens upon impact just like “oobleck”

a flexible alternative —

Researchers likened the material’s structure to a big bowl of spaghetti and meatballs.

This flexible and conductive material has “adaptive durability,” meaning it gets stronger when hit.

Yue (Jessica) Wang

Scientists are keen to develop new materials for lightweight, flexible, and affordable wearable electronics so that, one day, dropping our smartphones won’t result in irreparable damage. One team at the University of California, Merced, has made conductive polymer films that actually toughen up in response to impact rather than breaking apart, much like mixing corn starch and water in appropriate amounts produces a slurry that is liquid when stirred slowly but hardens when you punch it (i.e., “oobleck”). They described their work in a talk at this week’s meeting of the American Chemical Society in New Orleans.

“Polymer-based electronics are very promising,” said Di Wu, a postdoc in materials science at UCM. “We want to make the polymer electronics lighter, cheaper, and smarter. [With our] system, [the polymers] can become tougher and stronger when you make a sudden movement, but… flexible when you just do your daily, routine movement. They are not constantly rigid or constantly flexible. They just respond to your body movement.”

As we’ve previously reported, oobleck is simple and easy to make. Mix one part water to two parts corn starch, add a dash of food coloring for fun, and you’ve got oobleck, which behaves as either a liquid or a solid, depending on how much stress is applied. Stir it slowly and steadily and it’s a liquid. Punch it hard and it turns more solid under your fist. It’s a classic example of a non-Newtonian fluid.

In a Newtonian fluid, the viscosity depends largely on temperature and pressure: water will continue to flow regardless of other forces acting upon it, such as being stirred or mixed. In a non-Newtonian fluid, the viscosity changes in response to an applied strain or shearing force, thereby straddling the boundary between liquid and solid behavior. Stirring a cup of water produces a shearing force, and the water shears to move out of the way; its viscosity remains unchanged. But for non-Newtonian fluids like oobleck, the viscosity changes when a shearing force is applied.

Ketchup, for instance, is a shear-thinning non-Newtonian fluid, which is one reason smacking the bottom of the bottle helps the ketchup come out faster: the application of force decreases the viscosity. Yogurt, gravy, mud, and pudding are other examples, as is non-drip paint, which brushes on easily but becomes more viscous once it’s on the wall. Oobleck does the opposite: it is shear-thickening, becoming more viscous under force. (The name derives from a 1949 Dr. Seuss children’s book, Bartholomew and the Oobleck.) Last year, MIT scientists confirmed that the friction between particles is critical to oobleck’s liquid-to-solid transition, identifying a tipping point at which the friction reaches a certain level and the viscosity abruptly increases.
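
One standard way to make “shear-thinning” and “shear-thickening” quantitative (a textbook rheology model, not one the researchers cite) is the Ostwald–de Waele power law relating shear stress $\tau$ to shear rate $\dot\gamma$:

$$\tau = K \dot\gamma^{\,n}, \qquad \eta_{\mathrm{eff}} = \frac{\tau}{\dot\gamma} = K \dot\gamma^{\,n-1}.$$

For $n = 1$ this reduces to a Newtonian fluid with constant viscosity $K$; $n < 1$ gives shear-thinning (ketchup, non-drip paint), where the effective viscosity drops as you shear harder; and $n > 1$ gives shear-thickening (oobleck), where it climbs.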

Wu works in the lab of materials scientist Yue (Jessica) Wang, who decided to try to mimic the shear-thickening behavior of oobleck in a polymer material. Flexible polymer electronics are usually made by linking together conjugated conductive polymers, which are long and thin, like spaghetti. But these materials will still break apart in response to particularly large and/or rapid impacts.

So Wu and Wang decided to combine the spaghetti-like polymers with shorter polyaniline molecules and poly(3,4-ethylenedioxythiophene) polystyrene sulfonate, or PEDOT:PSS—four different polymers in all. Two of the four have a positive charge, and two have a negative charge. They used that mixture to make stretchy films and then tested the mechanical properties.

Lo and behold, the films behaved very much like oobleck, deforming and stretching in response to impact rather than breaking apart. Wang likened the structure to a big bowl of spaghetti and meatballs since the positively charged molecules don’t like water and therefore cluster into ball-like microstructures. She and Wu suggest that those microstructures absorb impact energy, flattening without breaking apart. And it doesn’t take much PEDOT:PSS to get this effect: just 10 percent was sufficient.

Further experiments identified an even more effective additive: positively charged 1,3-propanediamine nanoparticles. These particles can weaken the “meatball” polymer interactions just enough so that they can deform even more in response to impacts, while strengthening the interactions between the entangled long spaghetti-like polymers.

The next step is to apply their polymer films to wearable electronics like smartwatch bands and sensors, as well as flexible electronics for monitoring health. Wang’s lab has also experimented with a new version of the material that would be compatible with 3D printing, opening up even more opportunities. “There are a number of potential applications, and we’re excited to see where this new, unconventional property will take us,” said Wang.
