
RIP Peter Higgs, who laid foundation for the Higgs boson in the 1960s

A particle physics hero —

Higgs shared the 2013 Nobel Prize in Physics with François Englert.

A visibly emotional Peter Higgs was present when CERN announced the Higgs boson discovery in July 2012.

University of Edinburgh

Peter Higgs, the shy, somewhat reclusive physicist who won a Nobel Prize for his theoretical work on how the Higgs boson gives elementary particles their mass, has died at the age of 94. According to a statement from the University of Edinburgh, the physicist passed “peacefully at home on Monday 8 April following a short illness.”

“Besides his outstanding contributions to particle physics, Peter was a very special person, a man of rare modesty, a great teacher and someone who explained physics in a very simple and profound way,” Fabiola Gianotti, director general at CERN and former leader of one of the experiments that helped discover the Higgs particle in 2012, told The Guardian. “An important piece of CERN’s history and accomplishments is linked to him. I am very saddened, and I will miss him sorely.”

The Higgs boson is a manifestation of the Higgs field, an invisible entity that pervades the Universe. Interactions between the Higgs field and particles help provide particles with mass, with particles that interact more strongly having larger masses. The Standard Model of Particle Physics describes the fundamental particles that make up all matter, like quarks and electrons, as well as the particles that mediate their interactions through forces like electromagnetism and the weak force. Back in the 1960s, theorists extended the model to incorporate what has become known as the Higgs mechanism, which provides many of the particles with mass. One consequence of the Standard Model’s version of the Higgs mechanism is that there should be a particle, called a boson, associated with the Higgs field.
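For the Standard Model’s matter particles, “interacting more strongly means more mass” has a precise form: a fermion’s mass is its Yukawa coupling to the Higgs field times the field’s vacuum value (about 246 GeV), divided by √2. A toy Python sketch, using rough, illustrative coupling values:

```python
import math

# Electroweak vacuum expectation value of the Higgs field, in GeV
HIGGS_VEV_GEV = 246.0

def fermion_mass_gev(yukawa_coupling):
    """Tree-level fermion mass: m = y * v / sqrt(2)."""
    return yukawa_coupling * HIGGS_VEV_GEV / math.sqrt(2)

# Approximate Yukawa couplings, for illustration only (not precision values)
for name, y in [("electron", 2.9e-6), ("muon", 6.1e-4), ("top quark", 0.94)]:
    print(f"{name}: ~{fermion_mass_gev(y):.3g} GeV")
```

Six orders of magnitude in coupling strength translate directly into six orders of magnitude in mass, from the half-MeV electron to the top quark at well over 100 GeV.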

Despite its central role in the function of the Universe, the road to predicting the existence of the Higgs boson was bumpy, as was the process of discovering it. As previously reported, the idea of the Higgs boson was a consequence of studies on the weak force, which controls the decay of radioactive elements. The weak force only operates at very short distances, which suggests that the particles that mediate it (the W and Z bosons) are likely to be massive. While it was possible to use existing models of physics to explain some of their properties, these predictions had an awkward feature: just like another force-carrying particle, the photon, the resulting W and Z bosons were massless.

Schematic of the Standard Model of particle physics.

Over time, theoreticians managed to craft models that included massive W and Z bosons, but they invariably came with a hitch: a massless partner, which would imply a longer-range force. In 1964, however, a series of papers was published in rapid succession that described a way to get rid of this problematic particle. If a certain symmetry in the models was broken, the massless partner would go away, leaving only a massive one.

The first of these papers, by François Englert and Robert Brout, proposed the new model in terms of quantum field theory; the second, by Higgs (then 35), noted that a single quantum of the field would be detectable as a particle. A third paper, by Gerald Guralnik, Carl Richard Hagen, and Tom Kibble, provided an independent validation of the general approach, as did a completely independent derivation by students in the Soviet Union.

At that time, “There seemed to be excitement and concern about quantum field theory (the underlying structure of particle physics) back then, with some people beginning to abandon it,” David Kaplan, a physicist at Johns Hopkins University, told Ars. “There were new particles being regularly produced at accelerator experiments without any real theoretical structure to explain them. Spin-1 particles could be written down comfortably (the photon is spin-1) as long as they didn’t have a mass, but the massive versions were confusing to people at the time. A bunch of people, including Higgs, found this quantum field theory trick to give spin-1 particles a mass in a consistent way. These little tricks can turn out to be very useful, but also give the landscape of what is possible.”

“It wasn’t clear at the time how it would be applied in particle physics.”

Ironically, Higgs’ seminal paper was rejected by the European journal Physics Letters. He then added a crucial couple of paragraphs noting that his model also predicted the existence of what we now know as the Higgs boson. He submitted the revised paper to Physical Review Letters in the US, where it was accepted. He examined the properties of the boson in more detail in a 1966 follow-up paper.



EPA seeks to cut “Cancer Alley” pollutants

Out of the air —

Chemical plants will have to monitor how much is escaping and stop leaks.

An oil refinery in Louisiana. Facilities such as this have led to a proliferation of petrochemical plants in the area.

On Tuesday, the US Environmental Protection Agency announced new rules that are intended to cut emissions of two chemicals that have been linked to elevated incidence of cancer: ethylene oxide and chloroprene. While production and use of these chemicals takes place in a variety of locations, they’re particularly associated with an area of petrochemical production in Louisiana that has become known as “Cancer Alley.”

The new regulations would require chemical manufacturers to monitor the emissions at their facilities and take steps to repair any problems that result in elevated emissions. Despite extensive evidence linking these chemicals to elevated risk of cancer, industry groups are signaling their opposition to these regulations, and the EPA has seen two previous attempts at regulation set aside by courts.

Dangerous stuff

The two chemicals at issue are primarily used as intermediates in the manufacture of common products. Chloroprene, for example, is used for the production of neoprene, a synthetic rubber-like substance that’s probably familiar from products like insulated sleeves and wetsuits. It’s a four-carbon chain with two double bonds that allow for polymerization and an attached chlorine that alters its chemical properties.

According to the National Cancer Institute (NCI), chloroprene “is a mutagen and carcinogen in animals and is reasonably anticipated to be a human carcinogen.” Given that cancers are driven by DNA damage, any mutagen would be “reasonably anticipated” to drive the development of cancer. Beyond that, it appears to be pretty nasty stuff, with the NCI noting that “exposure to this substance causes damage to the skin, lungs, CNS, kidneys, liver and depression of the immune system.”

The NCI’s take on ethylene oxide is even more definitive, with the Institute placing it on its list of cancer-causing substances. The chemical is very simple, with two carbons that are linked to each other directly, and also linked via an oxygen atom, which makes the molecule look a bit like a triangle. This configuration allows the molecule to participate in a broad range of reactions that break one of the oxygen bonds, making it useful in the production of a huge range of chemicals. Its reactivity also makes it useful for sterilizing items such as medical equipment.

Its sterilization function works by damaging DNA, which again makes it prone to causing cancers.

In addition to these two chemicals, the EPA’s new regulations will target a number of additional airborne pollutants, including benzene, 1,3-butadiene, ethylene dichloride, and vinyl chloride, all of which have similar entries at the NCI.

Despite the extensive record linking these chemicals to cancer, The New York Times quotes the US Chamber of Commerce, a pro-industry group, as saying that “EPA should not move forward with this rule-making based on the current record because there remains significant scientific uncertainty.”

A history of exposure

The petrochemical industry is the main source of these chemicals, so their release is associated with areas where the oil and gas industry has a major presence; the EPA notes that the regulations will target sources in Delaware, New Jersey, and the Ohio River Valley. But the primary focus will be on chemical plants in Texas and Louisiana. These include the area that has picked up the moniker Cancer Alley due to a high incidence of the disease in a stretch along the Mississippi River with a large concentration of chemical plants.

As is the case with many examples of chemical pollution, the residents of Cancer Alley are largely poor and belong to minority groups. As a result, the EPA had initially attempted to regulate the emissions under a civil rights provision of the Clean Air Act, but that has been bogged down due to lawsuits.

The new regulations simply set limits on permissible levels of release at what’s termed the “fencelines” of the facilities where these chemicals are made, used, or handled. If levels exceed an annual limit, the owners and operators “must find the source of the pollution and make repairs.” This gets rid of previous exemptions for equipment startup, shutdown, and malfunctions; those exemptions had been held to violate the Clean Air Act in a separate lawsuit.

The EPA estimates that the sites subject to regulation will see their collective emissions of these chemicals drop by nearly 80 percent, which works out to be 54 tons of ethylene oxide, 14 tons of chloroprene, and over 6,000 tons of the other pollutants. That in turn will reduce the cancer risk from these toxins by 96 percent among those subjected to elevated exposures. Collectively, the chemicals subject to these regulations also contribute to smog, so these reductions will have an additional health impact by reducing its levels as well.
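As a back-of-envelope check, the quoted tonnage cuts imply baseline emissions of roughly 68 tons of ethylene oxide per year. The sketch below assumes the “nearly 80 percent” figure applies to each chemical individually, which the EPA’s collective estimate doesn’t guarantee:

```python
# Post-rule emission reductions, in tons per year (figures quoted above)
reductions = {"ethylene oxide": 54.0, "chloroprene": 14.0, "other pollutants": 6000.0}
CUT_FRACTION = 0.80  # "nearly 80 percent" collective drop

for chem, cut in reductions.items():
    baseline = cut / CUT_FRACTION   # implied pre-rule emissions
    remaining = baseline - cut      # what would still escape after the rule
    print(f"{chem}: baseline ~{baseline:,.0f} t/yr, remaining ~{remaining:,.0f} t/yr")
```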

While the EPA says that “these emission reductions will yield significant reductions in lifetime cancer risk attributable to these air pollutants,” it was unable to come up with an estimate of the financial benefits that will result from that reduction. By contrast, it estimates that the cost of compliance will end up being approximately $150 million annually. “Most of the facilities covered by the final rule are owned by large corporations,” the EPA notes. “The cost of implementing the final rule is less than one percent of their annual national sales.”

This sort of cost-benefit analysis is a required step during the formulation of Clean Air Act regulations, so it’s worth taking a step back and considering what’s at stake here: the EPA is basically saying that companies that work with significant amounts of carcinogens need to take stronger steps to make sure that they don’t use the air people breathe as a dumping ground for them.

Unsurprisingly, The New York Times quotes a neoprene manufacturer that the EPA is currently suing over its chloroprene emissions as claiming the new regulations are “draconian.”



Moments of totality: How Ars experienced the eclipse

Total eclipse of the Ars —

The 2024 total eclipse is in the books. Here’s how it looked across the US.

Baily’s Beads are visible in this shot taken by Stephen Clark in Athens, Texas.

Stephen Clark

“And God said, Let there be light: and there was light. And God saw the light, that it was good: and God divided the light from the darkness. And God called the light Day, and the darkness he called Night. And the evening and the morning were the first day.”

The steady rhythm of the night-day, dark-light progression is a phenomenon acknowledged in ancient sacred texts as a given. When it’s interrupted, people take notice. In the days leading up to the eclipse, excitement within the Ars Orbiting HQ grew, and plans to experience the last total eclipse in the continental United States until 2045 were made. Here’s what we saw across the country.

Kevin Purdy (watched from Buffalo, New York)

  • 3:19 pm on April 8 in Buffalo overlooking Richmond Ave. near Symphony Circle.

    Kevin Purdy

  • A view of First Presbyterian Church from Richmond Avenue in Buffalo, NY.

    Kevin Purdy

  • The cloudy, strange skies at 3:12 pm Eastern time in Buffalo on April 8.

    Kevin Purdy

  • A kind of second sunrise at 3:21 pm on April 8 in Buffalo.

    Kevin Purdy

  • A clearer view of the total eclipse from Colden, New York, 30 minutes south of Buffalo on April 8, 2024.

    Sabrina May

Buffalo, New York, is a frequently passed-over city. Super Bowl victories, the shift away from Great Lakes shipping and American-made steel, being the second-largest city in a state that contains New York City: This city doesn’t get many breaks.

So, with Buffalo in the eclipse’s prime path, I, a former resident and booster, wanted to be there. So did maybe a million people, doubling the wider area’s population. With zero hotels, negative Airbnbs, and no flights below trust-fund prices, I arrived early, stayed late, and slept on sofas and air mattresses. I wanted to see if Buffalo’s moment of global attention would go better than last time.

The day started cloudy, as is typical in early April here. With one hour to go, I chatted with Donald Blank. He was filming an eclipse time-lapse as part of a larger documentary on Buffalo: its incredible history, dire poverty, heroes, mistakes, everything. The shot he wanted had the First Presbyterian Church, with its grand spire and Tiffany windows, in the frame. A 200-year-old stone church adds a certain context to a solar event many of us humans will never see again.

The sky darkened. Automatic porch lights flicked on at 3:15 pm, then street lights, then car lights, for those driving to somehow more important things. People on front lawns cheered, clapped, and quietly couldn’t believe it. When it was over, I heard a neighbor say they forgot their phone inside. Blank walked over and offered to email her some shots he took. It was very normal in Buffalo, even when it was strange.

Benj Edwards (Raleigh, North Carolina)

  • Benj’s low-tech, but creative way of viewing the eclipse.

    Benj Edwards

  • So many crescents.

    Benj Edwards

I’m in Raleigh, North Carolina, and we were lucky to have a clear day today. We reached peak eclipse at around 3:15 pm (but not total eclipse, sadly), and leading up to that time, the sun slowly began to dim as I looked out my home office window. Around 3 pm, I went outside on the back deck and began crafting makeshift pinhole lenses using cardboard and a steel awl, poking holes so that my kids and I could see the crescent shape of the eclipse projected indirectly on a dark surface.
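The geometry behind the pinhole trick is simple: the projected image of the Sun has a diameter of roughly the screen distance times the Sun’s angular diameter in radians. A quick back-of-envelope sketch:

```python
import math

SUN_ANGULAR_DIAMETER_DEG = 0.53  # the Sun's apparent diameter in the sky

def image_diameter_mm(screen_distance_mm):
    """Diameter of the Sun's projected pinhole image at a given screen distance."""
    return screen_distance_mm * math.radians(SUN_ANGULAR_DIAMETER_DEG)

# A pinhole held about a meter from a dark surface gives a ~9 mm crescent:
print(f"{image_diameter_mm(1000):.1f} mm")
```

This is why pinhole images are small and dim: getting a larger crescent means holding the card farther from the surface.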

My wife had also bought some eclipse glasses from a local toy store, and I very briefly tried them while squinting. I could see the eclipse well, but my eyes were still feeling a little blurry. I didn’t trust them enough to let the kids use them. For the 2017 eclipse, I had purchased very dark welder’s lenses that I have since lost. Even then, I think I got a little bit of eye damage at that time. A floater formed in my left eye that still plagues me to this day. I have the feeling I’ll never learn this lesson, and the next time an eclipse comes around, I’ll just continue to get progressively more blind. But oh what fun to see the sun eclipsed.

Beth Mole (Raleigh, North Carolina)

Another view from Raleigh.

Beth Mole

It was a perfect day for eclipse watching in North Carolina—crystal clear blue sky and a high of 75. Our peak was at 3:15 pm with 78.6 percent sun coverage. The first hints of the moon’s pass came just before 2 pm. The whole family was out in the backyard (alongside a lot of our neighbors!), ready with pinhole viewers, a couple of the NASA-approved cereal-box viewers, and eclipse glasses. We all watched as the moon progressively slipped in and stole the spotlight. At peak coverage, it was noticeably dimmer and it got remarkably cooler and quieter. It was not nearly as dramatic as being in the path of totality, but still really neat and fun. My 5-year-old had a blast watching the sun go from circle to bitten cookie to banana and back again.



Teen’s vocal cords act like coin slot in worst-case ingestion accident

What are the chances? —

Luckily his symptoms were relatively mild, but doctors noted ulceration of his airway.


Most of the time, when kids accidentally gulp down a non-edible object, it travels toward the stomach. In the best-case scenarios for these unfortunate events, it’s a small, benign object that safely sees itself out in a day or two. But in the worst-case scenarios, it can go down an entirely different path.

That was the case for a poor teen in California, who somehow swallowed a quarter. The quarter didn’t head down the esophagus and toward the stomach, but veered into the airway, sliding past the vocal cords like they were a vending-machine coin slot.

Radiographs of the chest (Panel A, posteroanterior view) and neck (Panel B, lateral view). Removal with optical forceps (Panel C and Video 1), and reinspection of ulceration (Panel D, asterisks)

In a clinical report published recently in the New England Journal of Medicine, doctors who treated the 14-year-old boy reported how they found—and later retrieved—the quarter from its unusual and dangerous resting place. Once it passed the vocal cords and the glottis, the coin got lodged in the subglottis, a small region between the vocal cords and the trachea.

Luckily, when the boy arrived at the emergency department, his main symptoms were hoarseness and difficulty swallowing. Surprisingly, he was breathing comfortably and not drooling, they noted. But imaging quickly revealed the danger his airway was in when the vertical coin lit up his scans.

“Airway foreign bodies—especially those in the trachea and larynx—necessitate immediate removal to reduce the risk of respiratory compromise,” they wrote in the NEJM report.

The teen was given a general anesthetic while doctors used long optical forceps, guided by a camera, to pluck the coin from its snug spot. After grabbing the coin, they re-inspected the boy’s airway, noting ulcerations on each side matching the coin’s ribbed edge.

After the coin’s retrieval, the boy’s symptoms improved and he was discharged home, the doctors reported.



Kamikaze bacteria explode into bursts of lethal toxins

The needs of the many… —

If you make a big enough toxin, it’s difficult to get it out of the cells.

The plague bacterium, Yersinia pestis, is a close relative of the toxin-producing species studied here.

Life-forms with no brain are capable of some astounding things. It might sound like sci-fi nightmare fuel, but some bacteria can wage kamikaze chemical warfare.

Pathogenic bacteria make us sick by secreting toxins. While the release of smaller toxin molecules is well understood, methods of releasing larger toxin molecules have mostly eluded us until now. Researcher Stefan Raunser, director of the Max Planck Institute of Molecular Physiology, and his team finally found out how the insect pathogen Yersinia entomophaga (which attacks beetles) releases its large-molecule toxin.

They found that designated “soldier cells” sacrifice themselves and explode to deploy the poison inside their victim. “YenTc appears to be the first example of an anti-eukaryotic toxin using this newly established type of secretion system,” the researchers said in a study recently published in Nature Microbiology.

Silent and deadly

Y. entomophaga is part of the Yersinia genus, relatives of the plague bacteria, which produce what are known as Tc toxins. Their molecules are huge as far as bacterial toxins go, but, like most smaller toxin molecules, they still need to make it through the bacteria’s three cell membranes before they escape to damage the host. Raunser had already found in a previous study that Tc toxin molecules do show up outside the bacteria. What he wanted to see next was how and when they exit the bacteria that make them.

To find out what kind of environment is ideal for Y. entomophaga to release YenTc, the bacteria were placed in acidic (pH below 7) and alkaline (pH above 7) media. While they did not release much in the acidic medium, the bacteria thrived in the high pH of the alkaline medium, and increasing the pH led them to release even more of the toxin. The higher-pH environment in a beetle is around the middle and end of its gut, so it is now thought that most of the toxin is liberated when the bacteria reach that area.
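For readers rusty on the chemistry, pH is just the negative base-10 log of the hydrogen-ion concentration, with 7 as the neutral point. A minimal sketch of the acidic/alkaline distinction used in the experiment (the thresholds are textbook values, not from the study):

```python
import math

def ph(hydrogen_ion_molar):
    """pH is the negative base-10 log of the hydrogen-ion concentration (mol/L)."""
    return -math.log10(hydrogen_ion_molar)

def classify(ph_value):
    if ph_value < 7:
        return "acidic"
    if ph_value > 7:
        return "alkaline"
    return "neutral"

print(classify(ph(1e-5)))  # prints "acidic"
print(classify(ph(1e-9)))  # prints "alkaline" -- the regime where YenTc release peaked
```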

How YenTc is released was more difficult to determine. When the research team used mass spectrometry to take a closer look at the toxin, they found that it was missing something: There was no signal sequence that indicated to the bacteria that the protein needed to be transported outside the bacterium. Signal sequences, also known as signal peptides, are kind of like built-in tags for secretion. They are in charge of connecting the proteins (toxins are proteins) to a complex at the innermost cell membrane that pushes them through. But YenTc apparently doesn’t need a signal sequence to export its toxins into the host.

About to explode

So how does this insect killer release YenTc, its most formidable toxin? The first test was a process of elimination. While YenTc has no signal sequence, the bacteria have different secretion systems for other toxins that they release. Raunser thought that knocking out these secretion systems using gene editing could possibly reveal which one was responsible for secreting YenTc. Every secretion system in Y. entomophaga was knocked out until no more were left, yet the bacteria were still able to secrete YenTc.

The researchers then used fluorescence microscopy to observe the bacteria releasing their toxin. They inserted a gene that encodes a fluorescent protein into the toxin gene so the bacteria would glow when making the toxin. While not all Y. entomophaga cells produced YenTc, those that did (and so glowed) tended to be larger and more sluggish. To induce secretion, the pH was raised to alkaline levels. Non-producing cells went about their business, but YenTc-expressing cells only took minutes to collapse and release the toxin.

This is what’s called a lytic secretion system, which involves the rupture of cell walls or membranes to release toxins.

“This prime example of self-destructive cooperation in bacteria demonstrates that YenTc release is the result of a controlled lysis strictly dedicated to toxin release rather than a typical secretion process, explaining our initially perplexing observation of atypical extracellular proteins,” the researchers said in the same study.

Yersinia also includes pathogenic bacteria that have devastated humans, most notoriously the species behind bubonic plague. Now that the secretion mechanism of one Yersinia species has been found out, Raunser wants to study more of them, along with other types of pathogens, to see if any others have kamikaze soldier cells that use the same lytic mechanism of releasing toxins.

The discovery of Y. entomophaga’s exploding cells could eventually mean human treatments that target kamikaze cells. In the meantime, we can at least be relieved we aren’t beetles.

Nature Microbiology, 2024. DOI: 10.1038/s41564-023-01571-z



Gravitational waves reveal “mystery object” merging with a neutron star

mind the gap —

The so-called “mass gap” might be less empty than physicists previously thought.

Artistic rendition of a black hole merging with a neutron star. LIGO/VIRGO/KAGRA detected a merger involving a neutron star and what might be a very light black hole falling within the “mass gap” range.

LIGO-India/ Soheb Mandhai

The LIGO/VIRGO/KAGRA collaboration searches the universe for gravitational waves produced by the mergers of black holes and neutron stars. It has now announced the detection of a signal indicating a merger between two compact objects, one of which has an unusual intermediate mass—heavier than a neutron star and lighter than a black hole. The collaboration provided specifics of their analysis of the merger and the “mystery object” in a draft manuscript posted to the physics arXiv, suggesting that the object might be a very low-mass black hole.

LIGO detects gravitational waves via laser interferometry, using high-powered lasers to measure tiny changes in the distance between two objects positioned kilometers apart. LIGO has detectors in Hanford, Washington state, and in Livingston, Louisiana. A third detector in Italy, Advanced VIRGO, came online in 2016. In Japan, KAGRA is the first gravitational-wave detector in Asia and the first to be built underground. Construction began on LIGO-India in 2021, and physicists expect it will turn on sometime after 2025.
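The scale of that measurement is worth spelling out. A gravitational wave produces a dimensionless strain h = ΔL/L, so for a typical detectable strain of about 10⁻²¹ across LIGO’s 4 km arms, the arm-length change is a few billionths of a billionth of a meter:

```python
# A gravitational wave stretches space by a dimensionless strain h = dL / L.
ARM_LENGTH_M = 4000.0    # LIGO's interferometer arms are 4 km long
typical_strain = 1e-21   # order of magnitude for a detectable signal

delta_l = typical_strain * ARM_LENGTH_M
print(f"arm length change: {delta_l:.1e} m")  # ~4e-18 m, far smaller than a proton
```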

To date, the collaboration has detected dozens of merger events since its first Nobel Prize-winning discovery. Early detected mergers involved either two black holes or two neutron stars, but in 2021, LIGO/VIRGO/KAGRA confirmed the detection of two separate “mixed” mergers between black holes and neutron stars.

Most objects involved in the mergers detected by the collaboration fall into two groups: stellar-mass black holes (ranging from a few solar masses to tens of solar masses) and supermassive black holes, like the one in the middle of our Milky Way galaxy (ranging from hundreds of thousands to billions of solar masses). The former are the result of massive stars dying in a core-collapse supernova, while the latter’s formation process remains something of a mystery. The range between the heaviest known neutron star and the lightest known black hole is known as the “mass gap” among scientists.

There have been gravitational wave hints of compact objects falling within a mass gap before. For instance, as reported previously, in 2019, LIGO/VIRGO picked up a gravitational wave signal from a black hole merger dubbed GW190521, which produced the most energetic signal detected thus far, showing up in the data as more of a “bang” than the usual “chirp.” Even weirder, the two black holes that merged were locked in an elliptical (rather than circular) orbit, and their axes of spin were tipped far more than usual compared to those orbits. And the new black hole resulting from the merger had an intermediate mass of 142 solar masses—smack in the middle of the mass gap.

Masses in the stellar graveyard.

LIGO-Virgo-KAGRA / Aaron Geller / Northwestern

That same year, the collaboration detected another signal, GW190814, a compact binary merger involving a mystery object that also fell within the mass gap. With no corresponding electromagnetic signal to accompany the gravitational wave signal, astrophysicists were unable to determine whether that object was an unusually heavy neutron star or an especially light black hole. And now we have a new mystery object within the mass gap in a merger event dubbed GW230529.

“While previous evidence for mass-gap objects has been reported both in gravitational and electromagnetic waves, this system is especially exciting because it’s the first gravitational-wave detection of a mass-gap object paired with a neutron star,” said co-author Sylvia Biscoveanu of Northwestern University. “The observation of this system has important implications for both theories of binary evolution and electromagnetic counterparts to compact-object mergers.”

See where this discovery falls within the mass gap.

Shanika Galaudage / Observatoire de la Côte d’Azur

LIGO/VIRGO/KAGRA started its fourth observing run last spring and soon picked up GW230529’s signal. Scientists determined that one of the two merging objects had a mass between 1.2 and 2 times the mass of our sun—most likely a neutron star—while the other’s mass fell in the mass-gap range of 2.5 to 4.5 times the mass of our sun. As with GW190814, there were no accompanying bursts of electromagnetic radiation, so the team wasn’t able to conclusively identify the nature of the more massive mystery object located some 650 million light-years from Earth, but they think it is probably a low-mass black hole. If so, the finding implies an increase in the expected rate of neutron star–black hole mergers with electromagnetic counterparts, per the authors.
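As a rough illustration of how the quoted mass ranges sort compact objects, here is a sketch using the 2.5–4.5 solar-mass window from the article as the “gap”; real classification depends on far more than the inferred mass:

```python
def classify_compact_object(mass_solar):
    """Rough labels based on the mass ranges quoted for GW230529.

    The 2.5-4.5 solar-mass window is the 'mass gap' discussed above;
    the boundaries here are illustrative, not physical constants.
    """
    if mass_solar <= 2.5:
        return "consistent with a neutron star"
    if mass_solar < 4.5:
        return "mass-gap object"
    return "consistent with a black hole"

# The two components of GW230529:
print(classify_compact_object(1.6))  # within the 1.2-2 solar-mass range
print(classify_compact_object(3.5))  # the mystery companion
```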

“Before we started observing the universe in gravitational waves, the properties of compact objects like black holes and neutron stars were indirectly inferred from electromagnetic observations of systems in our Milky Way,” said co-author Michael Zevin, an astrophysicist at the Adler Planetarium. “The idea of a gap between neutron-star and black-hole masses, an idea that has been around for a quarter of a century, was driven by such electromagnetic observations. GW230529 is an exciting discovery because it hints at this ‘mass gap’ being less empty than astronomers previously thought, which has implications for the supernova explosions that form compact objects and for the potential light shows that ensue when a black hole rips apart a neutron star.”

arXiv, 2024. DOI: 10.48550/arXiv.2404.04248  (About DOIs).



Why are there so many species of beetles?

The beetles outnumber us —

Diet played a key role in the evolution of the vast beetle family tree.

A box of beetles

Caroline Chaboo’s eyes light up when she talks about tortoise beetles. Like gems, they exist in myriad bright colors: shiny blue, red, orange, leaf green and transparent flecked with gold. They’re members of a group of 40,000 species of leaf beetles, the Chrysomelidae, one of the most species-rich branches of the vast beetle order, Coleoptera. “You have your weevils, longhorns, and leaf beetles,” she says. “That’s really the trio that dominates beetle diversity.”

An entomologist at the University of Nebraska, Lincoln, Chaboo has long wondered why the kingdom of life is so skewed toward beetles: The tough-bodied creatures make up about a quarter of all animal species. Many biologists have wondered the same thing for a long time. “Darwin was a beetle collector,” Chaboo notes.

Despite their kaleidoscopic variety, most beetles share the same three-part body plan. The insects’ ability to fold their flight wings, origami-like, under protective forewings called elytra allows beetles to squeeze into rocky crevices and burrow inside trees. Beetles’ knack for thriving in a large range of microhabitats could also help explain their abundance of species, scientists say.

Of the roughly 1 million named insect species on Earth, about 400,000 are beetles. And that’s just the beetles described so far. Scientists typically describe thousands of new species each year. So—why so many beetle species? “We don’t know the precise answer,” says Chaboo. But clues are emerging.

One hypothesis is that there are lots of them because they’ve been around so long. “Beetles are 350 million years old,” says evolutionary biologist and entomologist Duane McKenna of the University of Memphis in Tennessee. That’s a great deal of time in which existing species can speciate, or split into new, distinct genetic lineages. By way of comparison, modern humans have existed for only about 300,000 years.

Yet just because a group of animals is old doesn’t necessarily mean it will have more species. Some very old groups have very few species. Coelacanth fish, for example, have been swimming in the ocean for approximately 360 million years, reaching a maximum of around 90 species and then declining to the two species known to be living today. Similarly, the lizard-like reptile the tuatara is the only living member of a once globally diverse ancient order of reptiles that originated about 250 million years ago.

Another possible explanation for why beetles are so rich in species is that, in addition to being old, they have unusual staying power. “They have survived at least two mass extinctions,” says Cristian Beza-Beza, a University of Minnesota postdoctoral fellow. Indeed, a 2015 study using fossil beetles to explore extinctions as far back as the Permian, 284 million years ago, concluded that lack of extinction may be at least as important as diversification for explaining beetle species abundance. In past eras, at least, beetles have demonstrated a striking ability to shift their ranges in response to climate change, and this may explain their extinction resilience, the authors hypothesize.

Why are there so many species of beetles? Read More »

the-science-of-smell-is-fragrant-with-submolecules

The science of smell is fragrant with submolecules

Did you smell something? —

A chemical that we smell may be a composite of multiple smell-making pieces.

cartoon of roses being smelled, with the nasal passages, neurons, and brain visible through cutaways.

When we catch a whiff of perfume or indulge in a scented candle, we are smelling much more than Floral Fantasy or Lavender Vanilla. We are actually detecting odor molecules that enter our nose and interact with cells that send signals to be processed by our brain. While certain smells feel like they’re unchanging, the complexity of this system means that large odorant molecules are perceived as the sum of their parts—and we are capable of perceiving the exact same molecule as a different smell.

Smell is more complex than we might think. It doesn’t consist of simply detecting specific molecules. Researcher Wen Zhou and his team from the Institute of Psychology of the Chinese Academy of Sciences have now found that parts of our brains analyze smaller parts of the odor molecules that make things smell.

Smells like…

So how do we smell? Odor molecules that enter our noses stimulate olfactory sensory neurons by binding to odorant receptors on those neurons (each neuron makes only one of approximately 500 different odor receptors). Smelling something activates different neurons depending on which molecules are present and which receptors they interact with. Neurons in the piriform cortex of the brain then take the information from the sensory neurons and interpret it as a message that makes us smell vanilla. Or a bouquet of flowers. Or whatever else.

Odor molecules were previously thought to be coded only as whole molecules, but Zhou and his colleagues wanted to see whether the brain’s analysis of odor molecules could perceive something less than a complete molecule. They reasoned that, if only whole molecules work, then after being exposed to a part of an odorant molecule, the test subjects would smell the original molecule exactly the same way. If, by contrast, the brain was able to pick up on the smell of a molecule’s substructures, neurons would adapt to the substructure. When re-exposed to the original molecule, subjects would not sense it nearly as strongly.
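The team's whole-molecule-versus-substructure logic can be sketched as a toy model in which an odor is simply the set of receptor channels it activates. The channel names and sets below are invented for illustration; they are not the study's actual stimuli or receptor identities.

```python
# Toy model of substructure adaptation: an odor is the set of receptor
# channels it drives. Adapting to an odor suppresses its channels, so a
# composite minus an adapted part should smell like the remaining part.
ODORS = {
    "C":  {"r1", "r2", "r3"},              # one component of the composite
    "P":  {"r4", "r5"},                    # the other component
    "CP": {"r1", "r2", "r3", "r4", "r5"},  # composite activates both sets
    "U":  {"r6", "r7"},                    # unrelated control odor
}

def perceived(odor, adapted_to):
    """Channels still responding after adaptation to another odor."""
    return ODORS[odor] - ODORS[adapted_to]

# Adapting to P makes the composite CP read like C, as subjects reported:
print(perceived("CP", "P") == ODORS["C"])   # True
# Adapting to the unrelated U leaves the percept of CP unchanged:
print(perceived("CP", "U") == ODORS["CP"])  # True
```

A strict whole-molecule code, by contrast, would predict no shift at all after adapting to P alone, which is exactly the alternative the experiment was designed to rule out.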

“If [sub-molecular factors are part of our perception of an odor]—the percept[ion] and its neural representation would be shifted towards those of the unadapted part of that compound,” the researchers said in a study recently published in Nature Human Behaviour.

Doesn’t smell like…

To see whether their hypothesis held up, Zhou’s team presented test subjects with a compound abbreviated CP, its separate components C and P, and an unrelated component, U. P and U were supposed to have equal aromatic intensity despite being different scents.

In one session, subjects smelled CP and then sniffed P until they had adapted to it. When they smelled CP again, they reported it smelling more like C than P. Despite being exposed to the entire molecule, they were mostly smelling C, which was unadapted. In another session, subjects adapted to U, after which there was no change in how they perceived CP. So, the effect is specific to smelling a portion of the odorant molecule.

In yet another experiment, subjects were told to first smell CP and then adapt to the smell of P with just one nostril while they kept the other nostril closed. Once adapted, CP and C smelled similar, but only when sniffed through the nostril that had been open. The two smelled far less similar through the nostril that had been closed.

Previous research has shown that adaptation to odors takes place in the piriform cortex. Substructure adaptation causes this part of the brain to respond differently to the portions of a chemical that the nose has recently been exposed to.

This olfactory experiment showed that our brains perceive smells by doing more than just recognizing the presence of a whole odor molecule. Some molecules can be perceived as a collection of submolecular units that are perceived separately.

“The smells we perceived are the products of continuous analysis and synthesis in the olfactory system,” the team said in the same study, “breath by breath, of the structural features and relationships of volatile compounds in our ever-changing chemical environment.”

Nature Human Behaviour, 2024. DOI: 10.1038/s41562-024-01849-0

The science of smell is fragrant with submolecules Read More »

here-are-the-winners-and-losers-when-it-comes-to-clouds-for-monday’s-eclipse

Here are the winners and losers when it comes to clouds for Monday’s eclipse

Happy hunting —

News you can use in regard to chasing cloud-free skies.

Cloud cover forecast for 2 pm ET on Monday, April 8.

Tomer Burg

The best opportunity to view a total solar eclipse in the United States for the next two decades is nearly at hand. Aside from making sure you’re in the path of totality, the biggest question for most eclipse viewers has been: Will it be cloudy?

This has posed a challenge to the meteorological community because clouds are notoriously difficult to forecast, for a few reasons. The first is that they are localized features, sometimes only a few miles (or kilometers) across, which is smaller than the resolution of the global models that provide forecasts five, seven, or more days out.

Weather models also struggle with predicting clouds because clouds can form anywhere from a few thousand feet (about 1,000 meters) above the ground to 50,000 feet (15,000 meters), so the models require good information about conditions from the atmosphere near the surface all the way into the stratosphere. The problem is that the combination of ground-based observations, weather balloons, data from aircraft, and satellites does not provide the comprehensive atmospheric profile needed at locations around the world for completely accurate cloud forecasting.

Finally, there is the issue of partly cloudy skies and the transience of clouds themselves. Most places, most days, have a mixture of sunshine and cloudy skies. So let’s say the forecast looks pretty good for your location, with forecasters calling for just 30 percent sky cover on Monday afternoon. Sounds great! But if a large cloud moves over the Sun during the few minutes of totality, it won’t matter that the day was mostly sunny.

With that in mind, here’s the forecast at three days out, with some strategies for finding the clear skies on Monday.

The forecast

The cloud forecast has actually been remarkably consistent for the last several days, in general terms. Texas has looked rather poor for visibility; the central region of the United States, including bits of Missouri, Arkansas, Illinois, and Indiana, has looked fairly good; areas along Lake Erie have been iffy; and the northeastern United States has looked optimal.

Our highest confidence area is northern New York, Vermont, New Hampshire, and Maine. The reason is that high pressure will be firmly in place for these locations on Monday, virtually guaranteeing mostly sunny skies. If you want to be confident of seeing the eclipse in North America, this is the place to be. But there is a catch—isn’t there always? A snowstorm this week, which may persist into Saturday morning, has made travel difficult. Conditions should improve by Sunday, however.

Rising pressures in the central United States will also make for good viewing conditions. The band of totality running from Northern Arkansas through Indiana is not guaranteed to have clear skies, but the odds are favorable for most locations here.

The Lake Erie region, including Cleveland, is probably the biggest wildcard in the national forecast. The atmospheric setup here is fairly complex, with the region just on the edge of high pressure ridging that will help keep skies clear. I’d be cautiously optimistic.

Finally there’s Texas. The forecast has been consistently poor in the two weeks since I began tracking it. (And as I live in Texas, I’ve been following it closely.) The global models with the best predictive value—the European-based ECMWF and the US-based GFS—have shown consistently cloudy skies across much of the state on Monday, with a non-zero chance of rain. I do think there will be some breaks in the clouds at the time of the eclipse, perhaps in locations near Dallas or to the west of Austin, and hopefully some of the cloud cover will be thin, high clouds. But whereas the skies at night are big and bright in Texas, the solar eclipse viewing conditions might just bite.

Some strategies for Monday

There are a lot of helpful resources online for tracking cloud cover over the weekend. One of the best hacks is to search the web for the nearest city or town (e.g., “NWS Cleveland, Ohio”) and find the “forecast discussion” section of the National Weather Service website. This will give you a credible local forecaster’s outlook on conditions. Most have been doing a great job of providing eclipse context in their twice-daily discussions.

A meteorologist at the University of Oklahoma, Tomer Burg, has set up an excellent website to provide both an overview of the eclipse and a probabilistic outlook for localized conditions. Your best bets are the national blend of models forecast for average cloud cover (direct link), and a city dashboard that provides key information for more than 100 locations about precise eclipse timing and sky cover.

Good luck, Austin!

Tomer Burg

Finally, if you’re in the path of totality and are expected to have partly to mostly cloudy skies, don’t despair. There’s always a chance the forecast will change, even a few days out. There’s always a chance for a break in the clouds at the right time. There’s always a chance the clouds will be thin and high, with the disk of the Sun shining through.

And finally, if it is thickly overcast, it will still get eerily dark outside in the middle of the day. It will get noticeably colder. Animals will do nighttime things. So it will be special, but unfortunately not that special.

Here are the winners and losers when it comes to clouds for Monday’s eclipse Read More »

rocket-report:-blue-origin-to-resume-human-flights;-progress-for-polaris-dawn

Rocket Report: Blue Origin to resume human flights; progress for Polaris Dawn

The wait is over —

“The pacing item in our supply chain is the BE-4.”

Ed Dwight stands in front of an F-104 jet fighter in 1963.

Welcome to Edition 6.38 of the Rocket Report! Ed Dwight was close to joining NASA’s astronaut corps more than 60 years ago. With an aeronautical engineering degree and experience as an Air Force test pilot, Dwight met the qualifications to become an astronaut. He was one of 26 test pilots the Air Force recommended to NASA for the third class of astronauts in 1963, but he wasn’t selected. Now, the man who would have become the first Black astronaut will finally get a chance to fly to space.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Ed Dwight named to Blue Origin’s next human flight. Blue Origin, Jeff Bezos’ space company, announced Thursday that 90-year-old Ed Dwight, who almost became the first Black astronaut in 1963, will be one of six people to fly to suborbital space on the company’s next New Shepard flight. Dwight, a retired Air Force captain, piloted military fighter jets and graduated from test pilot school, following the same career track as many of the early astronauts. He was on a short list of astronaut candidates the Air Force provided to NASA, but the space agency didn’t include him. It took 20 more years for the first Black American to fly to space. Dwight’s ticket with Blue Origin is sponsored by Space for Humanity, a nonprofit that seeks to expand access to space for all people. Five paying passengers will join Dwight for the roughly 10-minute up-and-down flight to the edge of space over West Texas. Kudos to Space for Humanity and Blue Origin for making this happen.

Return to flight … This mission, named NS-25, will be the first time Blue Origin flies with human passengers since August 2022. Blue Origin hasn’t announced a launch date yet for NS-25. On an uncrewed launch the following month, an engine failure destroyed a New Shepard booster and grounded Blue Origin’s suborbital rocket program for more than 15 months. New Shepard returned to flight December 19 on another research flight, again without anyone onboard. As the mission name suggests, this will be the 25th flight of a New Shepard rocket and the seventh flight with people. Blue Origin has a history of flying aviation pioneers and celebrities. On the first human flight with New Shepard in 2021, the passengers included company founder Jeff Bezos and famed female aviator Wally Funk. (submitted by EllPeaTea)

The easiest way to keep up with Eric Berger’s space reporting is to sign up for his newsletter; we’ll collect his stories in your inbox.

Revisit Astra’s 2020 rocket explosion. In March 2020, as the world was in the grip of COVID, Astra blew up a rocket in remote Alaska and didn’t want anyone to see it. Newly published video from TechCrunch shows Astra’s Rocket 3 vehicle exploding on its launch pad. This was one of several setbacks that have brought the startup to its knees. The explosion, which occurred at Alaska’s Pacific Spaceport Complex, was reported simply as an “anomaly” at the time, an industry term for pretty much any issue that deviates from the expected outcome, TechCrunch reports. Satellite imagery of the launch site showed burn scars, suggesting an explosion, but the footage published this week confirms the reality of the event. This was Astra’s first orbital-class rocket, and it blew up during a fueling rehearsal.

A sign of things to come … Astra eventually flew its Rocket 3 small satellite launcher seven times, but only two of the flights actually reached orbit. This prompted Astra to abandon its Rocket 3 program and focus on developing a larger rocket, Rocket 4. But the future of this new rocket is in doubt. Astra’s co-founders are taking the company private after its market value and stock price tanked, and it’s not clear where the company will go from here. (submitted by Ken the Bin)

Russia’s plan to “restore” its launch industry. Yuri Borisov, chief of the Russian space agency Roscosmos, has outlined a strategy for Russia to regain a dominant position in the global launch market, Ars reports. This will include the development of a partially reusable replacement for the Soyuz rocket called Amur-CNG. The country’s spaceflight enterprise is also working on “ultralight” boosters that will incorporate an element of reusability. In an interview posted on the Roscosmos website, Borisov said he hopes Russia will have a “completely new fleet of space vehicles” by the 2028-2029 timeframe. Russia has previously discussed plans to develop the Amur rocket (the CNG refers to the propellant, liquefied methane). The multi-engine vehicle looks somewhat similar to SpaceX’s Falcon 9 rocket in that preliminary designs incorporated landing legs and grid fins to enable a powered first-stage landing.

Reason to doubt … Russia’s launch industry was a global leader a couple of decades ago when prices were cheap relative to Western rockets. But the heavy-lift Proton rocket is nearing retirement after concerns about its reliability, and the still-reliable Soyuz is now excluded from the global market after Russia’s invasion of Ukraine. In the 2000s and 2010s, Russia’s position in the market was supplanted by the European Ariane 5 rocket and then SpaceX’s Falcon 9. Roscosmos originally announced the medium-lift Amur rocket program in 2020 for a maiden flight in 2026. Since then, the rocket has encountered a nearly year-for-year delay in its first test launch. I’ll believe it when I see it. The only new, large rocket Russia has developed in nearly 40 years, the expendable Angara A5, is still launching dummy payloads on test flights a decade after its debut.

Rocket Report: Blue Origin to resume human flights; progress for Polaris Dawn Read More »

nasa-knows-what-knocked-voyager-1-offline,-but-it-will-take-a-while-to-fix

NASA knows what knocked Voyager 1 offline, but it will take a while to fix

Hope returns —

“Engineers are optimistic they can find a way for the FDS to operate normally.”

A Voyager space probe in a clean room at the Jet Propulsion Laboratory in 1977.

Engineers have determined why NASA’s Voyager 1 probe has been transmitting gibberish for nearly five months, raising hopes of recovering humanity’s most distant spacecraft.

Voyager 1, traveling outbound some 15 billion miles (24 billion km) from Earth, started beaming unreadable data down to ground controllers on November 14. For nearly four months, NASA knew Voyager 1 was still alive—it continued to broadcast a steady signal—but could not decipher anything it was saying.

Engineers at NASA’s Jet Propulsion Laboratory (JPL) in California have confirmed their hypothesis: A small portion of corrupted memory caused the problem. The faulty memory bank is located in Voyager 1’s Flight Data System (FDS), one of three computers on the spacecraft. The FDS operates alongside a command-and-control central computer and another device overseeing attitude control and pointing.

The FDS’s duties include packaging Voyager 1’s science and engineering data for relay to Earth through the craft’s Telemetry Modulation Unit and radio transmitter. According to NASA, about 3 percent of the FDS memory has been corrupted, preventing the computer from carrying out normal operations.

Optimism growing

Suzanne Dodd, NASA’s project manager for the twin Voyager probes, told Ars in February that this was one of the most serious problems the mission has ever faced. That is saying something because Voyager 1 and 2 are NASA’s longest-lived spacecraft. They launched 16 days apart in 1977, and after flying by Jupiter and Saturn, Voyager 1 is flying farther from Earth than any spacecraft in history. Voyager 2 is trailing Voyager 1 by about 2.5 billion miles, although the probes are heading out of the Solar System in different directions.

Normally, engineers would try to diagnose a spacecraft malfunction by analyzing the data it sent back to Earth. They couldn’t do that in this case because Voyager 1 has been transmitting data packages containing only a repeating pattern of ones and zeros. Still, Voyager 1’s ground team identified the FDS as the likely source of the problem.

The Flight Data System was an innovation in computing when it was developed five decades ago. It was the first computer on a spacecraft to use volatile memory. Most of NASA’s missions operate with redundancy, so each Voyager spacecraft launched with two FDS computers. But the backup FDS on Voyager 1 failed in 1982.

Due to the Voyagers’ age, engineers had to reference paper documents, memos, and blueprints to help understand the spacecraft’s design details. After months of brainstorming and planning, teams at JPL uplinked a command in early March to prompt the spacecraft to send back a readout of the FDS memory.

The command worked, and Voyager 1 responded with a signal different from the code the spacecraft had been transmitting since November. After several weeks of meticulous examination of the new code, engineers pinpointed the locations of the bad memory.

“The team suspects that a single chip responsible for storing part of the affected portion of the FDS memory isn’t working,” NASA said in an update posted Thursday. “Engineers can’t determine with certainty what caused the issue. Two possibilities are that the chip could have been hit by an energetic particle from space or that it simply may have worn out after 46 years.”

Voyager 1’s distance from Earth complicates the troubleshooting effort. The one-way travel time for a radio signal to reach Voyager 1 from Earth is about 22.5 hours, meaning it takes roughly 45 hours for engineers on the ground to learn how the spacecraft responded to their commands.
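The article's round numbers check out; a quick back-of-the-envelope script (using the distance figure quoted above) reproduces the delays:

```python
# Light-travel-time sanity check for Voyager 1's quoted communication delay.
SPEED_OF_LIGHT_KM_S = 299_792.458   # km/s, exact by definition
distance_km = 24e9                  # ~24 billion km (about 15 billion miles)

one_way_hours = distance_km / SPEED_OF_LIGHT_KM_S / 3600
round_trip_hours = 2 * one_way_hours

print(f"One-way light time: {one_way_hours:.1f} hours")     # ~22.2 hours
print(f"Command round trip: {round_trip_hours:.1f} hours")  # ~44.5 hours
```

The small difference from the article's figures comes only from rounding the distance; the spacecraft also recedes by roughly a million miles every three days, so the delay keeps growing.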

NASA also must use its largest communications antennas to contact Voyager 1. These 230-foot-diameter (70-meter) antennas are in high demand by many other NASA spacecraft, so the Voyager team has to compete with other missions to secure time for troubleshooting. This means it will take time to get Voyager 1 back to normal operations.

“Although it may take weeks or months, engineers are optimistic they can find a way for the FDS to operate normally without the unusable memory hardware, which would enable Voyager 1 to begin returning science and engineering data again,” NASA said.

NASA knows what knocked Voyager 1 offline, but it will take a while to fix Read More »

$158,000-als-drug-pulled-from-market-after-failing-in-large-clinical-trial

$158,000 ALS drug pulled from market after failing in large clinical trial

Off the market —

The drug is now unavailable to new patients; its maker to lay off 70% of employees.

Amylyx, the maker of a new drug to treat ALS, is pulling that drug from the market and laying off 70 percent of its workers after a large clinical trial found that the drug did not help patients, according to an announcement from the company Thursday.

The drug, Relyvrio, won approval from the Food and Drug Administration in September 2022 to slow the progression of ALS (amyotrophic lateral sclerosis, or Lou Gehrig’s disease). However, the data behind the controversial decision was shaky at best; it was based on a study of just 137 patients that had several weaknesses and questionable statistical significance, and FDA advisors initially voted against approval. Still, given the severity of the neurodegenerative disease and the lack of effective treatments, the FDA ultimately granted approval on the condition that the company complete a Phase III clinical trial to solidify the drug’s claimed benefits.

Relyvrio—a combination of two existing, generic drugs—went on the market with a list price of $158,000.

Last month, the company announced the top-line results from that 48-week, randomized, placebo-controlled trial involving 664 patients: Relyvrio failed to meet any of the trial’s goals. The drug did not improve patients’ physical functions, which were scored on a standardized ALS-specific test, nor did it improve quality of life, respiratory function, or overall survival. At that time, the co-CEOs of the company said they were “surprised and deeply disappointed” by the result, and the company acknowledged that it was considering voluntarily withdrawing the drug from the market.

In the announcement on Thursday, the company called Relyvrio’s market withdrawal a “difficult moment for the ALS community.” Patients already taking the medication who wish to continue taking it will be able to do so through a free drug program, the company said. It is no longer available to new patients, effective Thursday.

Amylyx is now “restructuring” to focus on two other drug candidates that treat different neurodegenerative diseases. The change will include laying off 70 percent of its workforce, which, according to The Washington Post, amounts to more than 350 employees.

Relyvrio is part of a series of similarly controversial drugs for devastating neurodegenerative diseases that have gained FDA approval despite questionable data. In January, drug maker Biogen announced it was abandoning Aduhelm, a highly contentious Alzheimer’s drug that failed two large trials prior to its heavily criticized approval.

$158,000 ALS drug pulled from market after failing in large clinical trial Read More »