Science

NOAA says ‘extreme’ solar storm will persist through the weekend

Bright lights —

So far disruptions from the geomagnetic storm appear to be manageable.

Pink lights appear in the sky above College Station, Texas.

ZoeAnn Bailey

After a night of stunning auroras across much of the United States and Europe on Friday, a severe geomagnetic storm is likely to continue through at least Sunday, forecasters said.

The Space Weather Prediction Center at the US National Oceanic and Atmospheric Administration observed that ‘Extreme’ G5 conditions were ongoing as of Saturday morning due to heightened solar activity.

“The threat of additional strong flares and CMEs (coronal mass ejections) will remain until the large and magnetically complex sunspot cluster rotates out of view over the next several days,” the agency posted in an update on the social media site X on Saturday morning.

Good and bad effects

For many observers on Friday night, the heightened solar activity was welcome. Large areas of the United States, Europe, and other locations unaccustomed to displays of the aurora borealis saw vivid lights as energetic charged particles from the solar storm passed through the Earth’s atmosphere. Brilliantly pink skies were observed as far south as Texas. Given the forecast for ongoing solar activity, another night of extended northern lights is possible on Saturday.

There were also some harmful effects. According to NOAA, there have been some irregularities in power grid transmission, as well as degraded satellite communications and GPS services. Users of SpaceX’s Starlink satellite internet constellation have reported slower download speeds. Early on Saturday morning, SpaceX founder Elon Musk said the company’s Starlink satellites were “under a lot of pressure, but holding up so far.”

This is the most intense solar storm recorded in more than two decades. The last G5 event—the most extreme category of such storms—occurred in October 2003, when power outages were reported in Sweden and transformers were damaged in South Africa.

Should this storm intensify over the next day or two, scientists say the major risks include more widespread power blackouts, disabled satellites, and long-term damage to GPS networks.

Cause of these storms

Such storms are triggered when the Sun ejects a significant amount of its magnetic field and plasma into the solar wind. The underlying causes of these coronal mass ejections, deeper in the Sun, are not fully understood. But it is hoped that data collected by NASA’s Parker Solar Probe and other observatories will help scientists better understand and predict such phenomena.

When these coronal mass ejections reach Earth’s magnetic field, they disturb it and can induce significant currents in power lines and transformers, leading to damage or outages.

The most intense geomagnetic storm on record occurred in 1859, during the so-called Carrington Event. It produced auroral lights around the world and caused fires in multiple telegraph stations—at the time, there were 125,000 miles of telegraph lines in the world.

According to one research paper on the Carrington Event, “At its height, the aurora was described as a blood or deep crimson red that was so bright that one ‘could read a newspaper by’.”

Is dark matter’s main rival theory dead?

Galaxy rotation has long perplexed scientists.

One of the biggest mysteries in astrophysics today is that the forces in galaxies do not seem to add up. Galaxies rotate much faster than predicted by applying Newton’s law of gravity to their visible matter, despite that law working well everywhere in the Solar System.

To prevent galaxies from flying apart, some additional gravity is needed. This is why the idea of an invisible substance called dark matter was first proposed. But nobody has ever seen the stuff. And there are no particles in the hugely successful Standard Model of particle physics that could be the dark matter—it must be something quite exotic.

This has led to the rival idea that the galactic discrepancies are caused instead by a breakdown of Newton’s laws. The most successful such idea is known as Milgromian dynamics or Mond, proposed by Israeli physicist Mordehai Milgrom in 1982. But our recent research shows this theory is in trouble.

The main postulate of Mond is that gravity starts behaving differently from what Newton expected when it becomes very weak, as at the edges of galaxies. Mond is quite successful at predicting galaxy rotation without any dark matter, and it has a few other successes. But many of these can also be explained with dark matter, preserving Newton’s laws.

So how do we put Mond to a definitive test? We have been pursuing this for many years. The key is that Mond only changes the behavior of gravity at low accelerations, not at a specific distance from an object. You’ll feel lower acceleration on the outskirts of any celestial object—a planet, star, or galaxy—than when you are close to it. But it is the amount of acceleration, rather than the distance, that predicts where gravity should be stronger.

This means that, although Mond effects would typically kick in several thousand light years away from a galaxy, if we look at an individual star, the effects would become highly significant at a tenth of a light year. That is only a few thousand times larger than an astronomical unit (AU)—the distance between the Earth and the Sun. But weaker Mond effects should also be detectable at even smaller scales, such as in the outer Solar System.
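To put rough numbers on those scales, here is a short back-of-the-envelope sketch of ours (not a calculation from the research), using the commonly quoted Mond acceleration constant a0 ≈ 1.2×10⁻¹⁰ m/s², a value the article itself does not state:

```python
# Back-of-the-envelope check of the Mond transition scale around the Sun.
# Milgrom's constant a0 is an assumed textbook value, not from the article.

import math

GM_SUN = 1.327e20      # gravitational parameter of the Sun, m^3/s^2
A0 = 1.2e-10           # Milgrom's acceleration constant, m/s^2 (assumed)
AU = 1.496e11          # astronomical unit, m
LIGHT_YEAR = 9.461e15  # meters per light year

# Newtonian acceleration GM/r^2 falls to a0 at r = sqrt(GM/a0)
r_mond = math.sqrt(GM_SUN / A0)
print(f"Mond transition radius: {r_mond / AU:,.0f} AU "
      f"= {r_mond / LIGHT_YEAR:.2f} light years")
# -> about 7,000 AU, i.e., a tenth of a light year, matching the text

# For comparison, the Sun's pull at Saturn (10 AU) is vastly stronger
# than a0, which is why any Mond anomaly there must be subtle.
a_saturn = GM_SUN / (10 * AU) ** 2
print(f"Acceleration at Saturn: {a_saturn:.1e} m/s^2 "
      f"({a_saturn / A0:,.0f} times a0)")
```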

This brings us to the Cassini mission, which orbited Saturn between 2004 and its final fiery crash into the planet in 2017. Saturn orbits the Sun at 10 AU. Due to a quirk of Mond, the gravity from the rest of our galaxy should cause Saturn’s orbit to deviate from the Newtonian expectation in a subtle way.

Cassini orbited Saturn from 2004 to 2017.

This can be tested by timing radio pulses between Earth and Cassini. Because Cassini was orbiting Saturn, those timings precisely measured the Earth-Saturn distance and allowed us to track Saturn’s orbit closely. But Cassini did not find any anomaly of the kind expected in Mond. Newton still works well for Saturn.

How the Moon got a makeover

Putting on a new face —

The Moon’s former surface sank to the depths, until volcanism brought it back.

Our Moon may appear to shine peacefully in the night sky, but billions of years ago, it was given a facial by volcanic turmoil.

One question that has gone unanswered for decades is why there is more titanium-rich volcanic rock, bearing minerals such as ilmenite, on the near side than on the far side. Now a team of researchers at the University of Arizona’s Lunar and Planetary Laboratory is proposing a possible explanation.

The lunar surface was once flooded by a bubbling magma ocean, and after the magma ocean had hardened, there was an enormous impact on the far side. Heat from this impact spread to the near side and made the crust unstable, causing sheets of heavier and denser minerals on the surface to gradually sink deep into the mantle. These melted again and were belched out by volcanoes. Lava from these eruptions (more of which happened on the near side) cooled into what are now titanium-rich flows of volcanic rock. In other words, the Moon’s old face vanished, only to resurface.

What lies beneath

The region of the Moon in question is known as the Procellarum KREEP Terrane (PKT). KREEP signifies high concentrations of potassium (K), rare earth elements (REE), and phosphorus (P). This is also where ilmenite-rich basalts are found. Both KREEP and the basalts are thought to have first formed when the Moon was cooling from its magma ocean phase. But the region stayed hot, as KREEP also contains high levels of radioactive uranium and thorium.

“The PKT region… represents the most volcanically active region on the Moon as a natural result of the high abundances of heat-producing elements,” the researchers said in a study recently published in Nature Geoscience.

Why is this region located on the near side, while the far side is lacking in KREEP and ilmenite-rich basalts? There was one existing hypothesis that caught the researchers’ attention: it proposed that after the magma ocean hardened on the near side, sheets of these KREEP minerals were too heavy to stay on the surface. They began to sink into the mantle and down to the border between the mantle and core. As they sank, these mineral sheets were thought to have left behind trace amounts of material throughout the mantle.

If the hypothesis was accurate, this would mean there should be traces of minerals from the hardened KREEP magma crust in sheet-like configurations beneath the lunar surface, which could reach all the way down to the edge of the core-mantle boundary.

How could that be tested? Gravity data from the GRAIL (Gravity Recovery and Interior Laboratory) mission to the Moon possibly had the answer. It would allow them to detect gravitational anomalies caused by the higher density of the KREEP rock compared to surrounding materials.

Coming to the surface

GRAIL data had previously revealed that there was a pattern of subsurface gravitational anomalies in the PKT region. This appeared similar to the pattern that the sheets of volcanic rock were predicted to have made as they sank, which is why the research team decided to run a computer simulation of sinking KREEP to see how well the hypothesis matched up with the GRAIL findings.

Sure enough, the simulation ended up forming just about the same pattern as the anomalies GRAIL found. The polygonal pattern seen in both the simulations and GRAIL data most likely means that traces of heavier KREEP and ilmenite-rich basalt layers were left behind beneath the surface as those layers sank due to their density, and GRAIL detected their residue due to their greater gravitational pull. GRAIL also suggested there were many lesser anomalies in the PKT region, which makes sense considering that a large part of the crust is made of volcanic rocks thought to have sunk and left behind residue before they melted and surfaced again through eruptions.

We now also have an idea of when this phenomenon occurred. Because there are impact basins dated to around 4.22 billion years ago (not to be confused with the earlier far-side impact), and the magma ocean is thought to have hardened before they formed, the researchers think the crust also began to sink before that time.

“The PKT border anomalies provide the most direct physical evidence for the nature of the post-magma ocean… mantle overturn and sinking of ilmenite into the deep interior,” the team said in the same study.

This is just one more bit of information regarding how the Moon evolved and why it is so uneven. The near side once raged with lava that is now volcanic rock, much of which exists in flows called mare (which translates to “sea” in Latin). Most of this volcanic rock, especially in the PKT region, contains rare earth elements.

We can only confirm that there really are traces of ancient crust inside the Moon by collecting actual lunar material from far beneath the surface. When Artemis astronauts are finally able to gather samples of volcanic material from the Moon in situ, who knows what will come to the surface?

Nature Geoscience, 2024.  DOI: 10.1038/s41561-024-01408-2

NASA wants a cheaper Mars Sample Return—Boeing proposes most expensive rocket

The Space Launch System rocket lifts off on the Artemis I mission.

NASA is looking for ways to get rock samples back from Mars for less than the $11 billion the agency would need under its own plan, so last month, officials put out a call to industry to propose ideas.

Boeing is the first company to release details about how it would attempt a Mars Sample Return mission. Its study involves a single flight of the Space Launch System (SLS) rocket, the super heavy-lift launcher designed to send astronauts to the Moon on NASA’s Artemis missions.

Jim Green, NASA’s former chief scientist and longtime head of the agency’s planetary science division, presented Boeing’s concept Wednesday at the Humans to Mars summit, an annual event sponsored primarily by traditional space companies. Boeing is the lead contractor for the SLS core stage and upper stage and has pitched the SLS, primarily a crew launch vehicle, as a rocket for military satellites and deep space probes.

All in one

Green, now retired, said the concept he and Boeing engineers propose would reduce the risks of Mars Sample Return. With one mission, there are fewer points of potential failure, he said.

“To reduce mission complexity, this new concept is doing one launch,” Green said.

This argument makes some sense, but the problem is SLS is the most expensive rocket flying today. Even if NASA and Boeing introduce cost-cutting measures, NASA’s inspector general reported last year it’s unlikely the cost of a single SLS launch would fall below $2 billion. The inspector general recommended NASA consider buying commercial rockets as an alternative to SLS for future Artemis missions.

NASA’s Perseverance rover, operating on Mars since February 2021, is collecting soil and rock core samples and sealing them in 43 cigar-size titanium tubes. The rover has dropped the first 10 of these tubes in a depot on the Martian surface that could be retrieved by a future sample return mission. The remaining tubes will likely remain stowed on Perseverance in hopes the rover will directly hand off the samples to the spacecraft that comes to Mars to get them.

Boeing says a single launch of the Space Launch System rocket could carry everything needed for a Mars Sample Return mission.

Boeing

In his remarks, Green touted the benefits of launching a Mars Sample Return mission with a single rocket and a single spacecraft. NASA’s baseline concept involves two launches: one with a US-built lander and a small rocket to boost the rock samples back off the surface of Mars, and another with a European spacecraft to rendezvous with the sample carrier in orbit around Mars, then bring the specimens back to Earth.

“This concept is one launch vehicle,” he said. “It’s the SLS. What does it do? It’s carrying a massive payload. What is the payload? It’s a Mars entry and descent aeroshell. It has a propulsive descent module.”

The lander would carry everything needed to get the samples back to Earth. A fetch rover onboard the lander would deploy to drive out and pick up the sample tubes collected by the Perseverance rover. Then, a robotic arm would transfer the sample tubes to a container at the top of a two-stage rocket called the Mars Ascent Vehicle (MAV) sitting on top of the lander. The MAV would have the oomph needed to boost the samples off the surface of Mars and into orbit, then fire engines to target a course back to Earth.

Boeing has no direct experience as a prime contractor for any Mars mission. SpaceX, with its giant Starship rocket designed for eventual Mars missions, and Lockheed Martin, which has built several Mars landers for NASA, are the companies with the technology and expertise that seem to be most useful for Mars Sample Return.

NASA is also collecting ideas for Mars Sample Return from its space centers across the United States. The agency also tasked the Jet Propulsion Laboratory, which was in charge of developing the original dead-on-arrival concept, to come up with a better idea. Later this year, NASA officials will reference these new proposals as they decide how to proceed with Mars Sample Return, with the goal of getting samples back from Mars in the 2030s.

More children gain hearing as gene therapy for profound deafness advances

Success —

The therapy treats a rare type of deafness, but experts hope it’s a “jumping point.”

Opal Sandy (center), who was born completely deaf because of a rare genetic condition, can now hear unaided for the first time after receiving gene therapy at 11 months old. She is shown with her mother, father, and sister at their home in Eynsham, Oxfordshire, on May 7, 2024.

There are few things more heartwarming than videos of children with deafness gaining the ability to hear, showing them happily turning their heads at the sound of their parents’ voices and joyfully bobbing to newly discovered music. Thanks to recent advances in gene therapy, more kids are getting those sweet and triumphant moments—with no hearing aids or cochlear implants needed.

At the annual conference of the American Society for Gene & Cell Therapy, held in Baltimore this week, researchers showed many of those videos to their audiences of experts. On Wednesday, Larry Lustig, an otolaryngologist at Columbia University, presented clinical trial data on two children with profound deafness—the most severe type of deafness—who are now able to hear at normal levels after receiving an experimental gene therapy. One of the children was 11 months old at the time of the treatment, making her the youngest child in the world to date to receive gene therapy for genetic deafness.

On Thursday, Yilai Shu, an otolaryngologist at Fudan University in Shanghai, provided a one-year progress report on six children who were treated in the first in-human trial of gene therapy for genetic deafness. Five of the six had their hearing restored.

That trial, like the one Lustig presented, involved treating just one ear in all of the children—a safety precaution for such early trials. But Shu and colleagues have already moved on to both ears, or bilateral treatment. After presenting a progress report on the first trial, Shu presented unpublished early data on five additional patients who participated in the first in-human trial of bilateral treatment. All had bilateral hearing restoration and speech perception improvement.

“The opportunity of providing the full complexity and spectrum of sound in children born with profound genetic deafness is a phenomenon I did not expect to see in my lifetime,” Lustig said in a statement.

Jumping point

Shu and Lustig’s trials are separate, but the treatments are, in broad strokes, similar. Both are aimed at restoring hearing loss caused by mutations in the OTOF gene, which codes for the protein otoferlin. Otoferlin is critical for transmitting sound signals to the brain, playing a key role in synaptic transmission between the ear’s inner hair cells and the auditory nerve. Using gutted adeno-associated viruses as vectors for gene delivery, the therapies provide the inner ear with a functional version of the OTOF gene. Once in the ear, the gene can be expressed to produce functional otoferlin, restoring auditory signaling.

In the trial Lustig presented, the two patients saw a gradual improvement in hearing as otoferlin protein built up after treatment. For the 11-month-old, normal levels of hearing were restored within 24 weeks of treatment. For the second patient, a 4-year-old, improvements were detected at a six-week assessment. In the trial Shu presented, children began seeing hearing improvements at three- and four-week assessments. The children will continue to be followed, and the future holds some uncertainties: it’s unclear if they will, at some point in their lives, need additional treatments to sustain their hearing. In mice, at least, the treatment lasts for the duration of the animals’ lives—but mice only live for a few years.

“We expect this to last a long time,” Lustig said Wednesday. But “we don’t know what’s going to happen and we don’t know whether we can do a second dose. But, probably, I would guess, at some point that would have to be done.”

For now, the treatment is considered low-hanging fruit for the burgeoning field of gene therapy since it targets a severe condition caused by recessive mutations in a single gene. Otoferlin mutations lead to a very specific type of deafness called auditory neuropathy, in which the ear fails to send signals to the brain but works perfectly fine otherwise. This is an ultra-rare form of deafness affecting 1–8 percent of people with deafness globally. Only about 30 to 50 people in the US are born with this type of deafness each year.

However, Lustig calls it a “jumping point.” Now that researchers have shown that this gene therapy can work, “This is going to really spark, we hope, the development of gene therapy for more common types of deafness,” he said.

How you can make cold-brew coffee in under 3 minutes using ultrasound

Save yourself a few hours —

A “sonication” time between 1 and 3 minutes is ideal to get the perfect cold brew.

UNSW Sydney engineers developed a new way to make cold brew coffee in under three minutes without sacrificing taste.

University of New South Wales, Sydney

Diehard fans of cold-brew coffee put in a lot of time and effort for their preferred caffeinated beverage. But engineers at the University of New South Wales, Sydney, figured out a nifty hack. They rejiggered an existing espresso machine to accommodate an ultrasonic transducer that administers ultrasonic pulses, reducing the brewing time from the typical 12 to 24 hours to just under three minutes, according to a new paper published in the journal Ultrasonics Sonochemistry.

As previously reported, rather than pouring boiling or near-boiling water over coffee grounds and steeping for a few minutes, the cold-brew method involves mixing coffee grounds with room-temperature water and letting the mixture steep for anywhere from several hours to two days. Then it is strained through a sieve to remove all the sludge-like solids, followed by a finer filtering step. This can be done at home in a Mason jar, or you can get fancy and use a French press or a more elaborate Toddy system. It’s not necessarily served cold (although it can be)—just brewed cold.

The result is coffee that tastes less bitter than traditionally brewed coffee. “There’s nothing like it,” co-author Francisco Trujillo of UNSW Sydney told New Scientist. “The flavor is nice, the aroma is nice and the mouthfeel is more viscous and there’s less bitterness than a regular espresso shot. And it has a level of acidity that people seem to like. It’s now my favorite way to drink coffee.”

While there have been plenty of scientific studies delving into the chemistry of coffee, only a handful have focused specifically on cold-brew coffee. For instance, a 2018 study by scientists at Thomas Jefferson University in Philadelphia involved measuring levels of acidity and antioxidants in batches of cold- and hot-brew coffee. But those experiments only used lightly roasted coffee beans. The degree of roasting (temperature) makes a significant difference when it comes to hot-brew coffee. Might the same be true for cold-brew coffee?

To find out, the same team decided in 2020 to explore the extraction yields of light-, medium-, and dark-roast coffee beans during the cold-brew process. They used the cold-brew recipe from The New York Times for their experiments, with a water-to-coffee ratio of 10:1 for both cold- and hot-brew batches. (Hot brew normally has a water-to-coffee ratio of 20:1, but the team wanted to control variables as much as possible.) They carefully controlled when water was added to the coffee grounds, how long to shake (or stir) the solution, and how best to press the cold-brew coffee.

The team found that for the lighter roasts, caffeine content and antioxidant levels were roughly the same in both the hot- and cold-brew batches. However, there were significant differences between the two methods when medium- and dark-roast coffee beans were used. Specifically, the hot-brew method extracts more antioxidants from the grind; the darker the bean, the greater the difference. Both hot- and cold-brew batches become less acidic the darker the roast.

The new faster cold brew system subjects coffee grounds in the filter basket to ultrasonic sound waves from a transducer, via a specially adapted horn.

UNSW/Francisco Trujillo

That gives cold brew fans a few handy tips, but the process remains incredibly time-consuming; only true aficionados have the patience required to cold brew their own morning cuppa. Many coffee houses now offer cold brews, but doing so requires expensive, large semi-industrial brewing units and a good deal of refrigeration space. According to Trujillo, the inspiration for using ultrasound to speed up the process came from research attempts to extract more antioxidants. Those experiments ultimately failed, but the setup produced very good coffee.

Trujillo et al. used a Breville Dual Boiler BES920 espresso machine for their latest experiments, with a few key modifications. They connected a bolt-clamped transducer to the brewing basket via a metal horn. They then used the transducer to inject 38.8 kHz sound waves through the basket walls at several different points, thereby transforming the filter basket into a powerful ultrasonic reactor.

The team used the machine’s original boiler but set it up to be independently controlled with an integrated circuit to better manage the temperature of the water. As for the coffee beans, they picked Campos Coffee’s Caramel & Rich Blend (a medium roast). “This blend combines fresh, high-quality specialty coffee beans from Ethiopia, Kenya, and Colombia, and the roasted beans deliver sweet caramel, butterscotch, and milk chocolate flavors,” the authors wrote.

There were three types of samples for the experiments: cold brew hit with ultrasound at room temperature for one minute or for three minutes, and cold brew prepared with the usual 24-hour process. For the ultrasonic brews, the beans were ground into a fine grind typical for espresso, while a slightly coarser grind was used for the traditional cold-brew coffee.

Exploration-focused training lets robotics AI immediately handle new tasks

Exploratory —

Maximum Diffusion Reinforcement Learning focuses training on end states, not process.

A woman performs maintenance on a robotic arm.

boonchai wedmakawand

Reinforcement-learning algorithms in systems like ChatGPT or Google’s Gemini can work wonders, but they usually need hundreds of thousands of shots at a task before they get good at it. That’s why it’s always been hard to transfer this performance to robots. You can’t let a self-driving car crash 3,000 times just so it can learn crashing is bad.

But now a team of researchers at Northwestern University may have found a way around it. “That is what we think is going to be transformative in the development of the embodied AI in the real world,” says Thomas Berrueta, who led the development of Maximum Diffusion Reinforcement Learning (MaxDiff RL), an algorithm tailored specifically for robots.

Introducing chaos

The problem with deploying most reinforcement-learning algorithms in robots starts with the built-in assumption that the data they learn from is independent and identically distributed. The independence, in this context, means the value of one variable does not depend on the value of another variable in the dataset—when you flip a coin two times, getting tails on the second attempt does not depend on the result of your first flip. Identical distribution means that the probability of seeing any specific outcome is the same. In the coin-flipping example, the probability of getting heads is the same as getting tails: 50 percent for each.
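To make the two requirements concrete, here is a minimal illustration of our own (not code from the study) contrasting independent, identically distributed coin flips with the correlated readings an embodied system generates as it moves:

```python
# Toy contrast between i.i.d. data and the correlated data an embodied
# system produces. Illustrative only; not code from the MaxDiff RL paper.

import random

random.seed(0)

# i.i.d.: each flip ignores every flip before it, and heads/tails
# always have the same 50/50 odds
flips = [random.choice(["H", "T"]) for _ in range(10)]
print("coin flips:", flips)

# Embodied: a robot's next position depends on where it is now, so
# successive samples are strongly correlated, violating independence.
position = 0.0
trajectory = []
for _ in range(10):
    position += random.uniform(-1.0, 1.0)  # small move from current state
    trajectory.append(round(position, 2))
print("trajectory:", trajectory)
```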

In virtual, disembodied systems, like YouTube recommendation algorithms, getting such data is easy because most of the time it meets these requirements right off the bat. “You have a bunch of users of a website, and you get data from one of them, and then you get data from another one. Most likely, those two users are not in the same household, they are not highly related to each other. They could be, but it is very unlikely,” says Todd Murphey, a professor of mechanical engineering at Northwestern.

The problem is that, if those two users were related to each other and were in the same household, it could be that the only reason one of them watched a video was that their housemate watched it and told them to watch it. This would violate the independence requirement and compromise the learning.

“In a robot, getting this independent, identically distributed data is not possible in general. You exist at a specific point in space and time when you are embodied, so your experiences have to be correlated in some way,” says Berrueta. To solve this, his team designed an algorithm that pushes robots to be as randomly adventurous as possible to get the widest set of experiences to learn from.

Two flavors of entropy

The idea itself is not new. Nearly two decades ago, people in AI figured out algorithms, like Maximum Entropy Reinforcement Learning (MaxEnt RL), that worked by randomizing actions during training. “The hope was that when you take as diverse a set of actions as possible, you will explore more varied sets of possible futures. The problem is that those actions do not exist in a vacuum,” Berrueta claims. Every action a robot takes has some kind of impact on its environment and on its own condition—disregarding those impacts completely often leads to trouble. To put it simply, an autonomous car that was teaching itself how to drive using this approach could elegantly park in your driveway but would be just as likely to hit a wall at full speed.

To solve this, Berrueta’s team moved away from maximizing the diversity of actions and went for maximizing the diversity of state changes. Robots powered by MaxDiff RL did not flail their robotic joints at random to see what that would do. Instead, they conceptualized goals like “can I reach this spot ahead of me” and then tried to figure out which actions would take them there safely.

Berrueta and his colleagues achieved that through something called ergodicity, a mathematical concept that says that a point in a moving system will eventually visit all parts of the space that the system moves in. Basically, MaxDiff RL encouraged the robots to achieve every available state in their environment. And the results of first tests in simulated environments were quite surprising.
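The state-coverage idea can be pictured with a toy example. The sketch below is our own simplification, not the actual MaxDiff RL algorithm (which handles continuous robot dynamics): an agent on a grid always steps toward its least-visited neighboring cell, and that preference alone drives it across the entire state space:

```python
# Toy sketch of coverage-driven exploration, in the spirit of (but far
# simpler than) MaxDiff RL: always move toward the least-visited cell.

import random

random.seed(1)
SIZE = 5
state = (0, 0)
visits = {state: 1}  # how many times each grid cell has been entered

def neighbors(s):
    x, y = s
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in candidates if 0 <= a < SIZE and 0 <= b < SIZE]

for _ in range(200):
    options = neighbors(state)
    fewest = min(visits.get(s, 0) for s in options)
    state = random.choice([s for s in options if visits.get(s, 0) == fewest])
    visits[state] = visits.get(state, 0) + 1

# The greedy coverage rule should reach all 25 cells well within 200 steps
print(f"cells visited: {len(visits)} of {SIZE * SIZE}")
```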

Racing pool noodles

“In reinforcement learning there are standard benchmarks that people run their algorithms on so we can have a good way of comparing different algorithms on a standard framework,” says Allison Pinosky, a researcher at Northwestern and co-author of the MaxDiff RL study. One of those benchmarks is a simulated swimmer: a three-link body resting on the ground in a viscous environment that needs to learn to swim as fast as possible in a certain direction.

In the swimmer test, MaxDiff RL outperformed two other state-of-the-art reinforcement learning algorithms (NN-MPPI and SAC). These two needed several resets to figure out how to move the swimmers. They followed the standard AI learning process, divided into a training phase, where an algorithm goes through multiple failed attempts to slowly improve its performance, and a testing phase, where it tries to perform the learned task. MaxDiff RL, by contrast, nailed it, immediately adapting its learned behaviors to the new task.

The earlier algorithms ended up failing to learn because they got stuck trying the same options, never progressing to where they could learn that alternatives work. “They experienced the same data repeatedly because they were locally doing certain actions, and they assumed that was all they could do and stopped learning,” Pinosky explains. MaxDiff RL, on the other hand, continued changing states, exploring, getting richer data to learn from, and finally succeeded. And because, by design, it seeks to achieve every possible state, it can potentially complete all possible tasks within an environment.

But does this mean we can take MaxDiff RL, upload it to a self-driving car, and let it out on the road to figure everything out on its own? Not really.

Chemical tweaks to a toad hallucinogen turn it into a potential drug

No licking toads! —

Targets a different serotonin receptor from other popular hallucinogens.

The Colorado River toad, also known as the Sonoran Desert toad.

It is becoming increasingly accepted that classic psychedelics like LSD, psilocybin, ayahuasca, and mescaline can act as antidepressants and anti-anxiety treatments in addition to causing hallucinations. They act by binding to a serotonin receptor. But there are 14 known types of serotonin receptors, and most of the research into these compounds has focused on only one of them—the one these molecules like, called 5-HT2A. (5-HT, short for 5-hydroxytryptamine, is the chemical name for serotonin.)

The Colorado River toad (Incilius alvarius), also known as the Sonoran Desert toad, secretes a psychedelic compound that likes to bind to a different serotonin receptor subtype called 5-HT1A. And that difference may be the key to developing an entirely distinct class of antidepressants.

Uncovering novel biology

Like other psychedelics, the one the toad produces decreases depression and anxiety and induces meaningful and spiritually significant experiences. It has been used clinically to treat veterans with post-traumatic stress disorder and is being developed as a treatment for other neurological disorders and drug abuse. 5-HT1A is a validated therapeutic target, as approved drugs, including the antidepressant Viibryd and the anti-anxiety med Buspar, bind to it. But little is known about how psychedelics engage with this receptor and which effects it mediates, so Daniel Wacker’s lab decided to look into it.

The researchers started by making chemical modifications to the toad psychedelic and noting how each of the tweaked molecules bound to both 5-HT2A and 5-HT1A. As a group, these psychedelics are known as “designer tryptamines”—that’s tryp with a “y”, mind you—because they are metabolites of the amino acid tryptophan.

The lab made 10 variants and found one that is more than 800-fold selective about sticking to 5-HT1A as compared to 5-HT2A. That makes it a great research tool for elucidating the structure-activity relationship of the 5-HT1A receptor, as well as the molecular mechanisms behind the pharmacology of the drugs on the market that bind to it. The lab used it to explore both of those avenues. However, the variant’s ultimate utility might be as a new therapeutic for psychiatric disorders, so they tested it in mice.

Improving the lives of mice

The compound did not induce hallucinations in mice, as measured by the “head-twitch response.” But it did alleviate depression, as measured by a “chronic social defeat stress model.” In this model, for 10 days in a row, the experimental mouse was introduced to an “aggressor mouse” for “10-minute defeat bouts”; essentially, it got beat up by a bully at recess for two weeks. Understandably, after this experience, the experimental mouse tended not to be that friendly with new mice, as controls usually are. But when injected with the modified toad psychedelic, the bullied mice were more likely to interact positively with new mice they met.

Depressed mice, like depressed people, also suffer from anhedonia: a reduced ability to experience pleasure. In mice, this manifests in not taking advantage of drinking sugar water when given the opportunity. But treated bullied mice regained their preference for the sweet drink. About a third of mice seem to be “stress-resilient” in this model; the bullying doesn’t seem to faze them. The drug increased the number of resilient mice.

The 5-HT2A receptor has hogged all of the research love because it mediates the hallucinogenic effects of many popular psychedelics, so people assumed that it must mediate their therapeutic effects, too. However, Wacker argues that there is little evidence supporting this assumption. Wacker’s new toad-based psychedelic variant and its preference for the 5-HT1A receptor will help elucidate the complementary roles these two receptor subtypes play in mediating the cellular and psychological effects of psychedelic molecules. And it might provide the basis for a new tryptamine-based mental health treatment as well—one without hallucinatory side effects, disappointing as that may be to some.

Nature, 2024.  DOI: 10.1038/s41586-024-07403-2

The wasps that tamed viruses

Xorides praecatorius is a parasitoid wasp.

If you puncture the ovary of a wasp called Microplitis demolitor, viruses squirt out in vast quantities, shimmering like iridescent blue toothpaste. “It’s very beautiful, and just amazing that there’s so much virus made in there,” says Gaelen Burke, an entomologist at the University of Georgia.

M. demolitor is a parasite that lays its eggs in caterpillars, and the particles in its ovaries are “domesticated” viruses that have been tuned to persist harmlessly in wasps and serve their purposes. The virus particles are injected into the caterpillar through the wasp’s stinger, along with the wasp’s own eggs. The viruses then dump their contents into the caterpillar’s cells, delivering genes that are unlike those in a normal virus. Those genes suppress the caterpillar’s immune system and control its development, turning it into a harmless nursery for the wasp’s young.

The insect world is full of species of parasitic wasps that spend their infancy eating other insects alive. And for reasons that scientists don’t fully understand, they have repeatedly adopted and tamed wild, disease-causing viruses, turning them into biological weapons. Half a dozen examples have already been described, and new research hints at many more.

By studying viruses at different stages of domestication, researchers today are untangling how the process unfolds.

Partners in diversification

The quintessential example of a wasp-domesticated virus involves a group called the bracoviruses, which are thought to be descended from a virus that infected a wasp, or its caterpillar host, about 100 million years ago. That ancient virus spliced its DNA into the genome of the wasp. From then on, it was part of the wasp, passed on to each new generation.

Over time, the wasps diversified into new species, and their viruses diversified with them. Bracoviruses are now found in some 50,000 wasp species, including M. demolitor. Other domesticated viruses are descended from different wild viruses that entered wasp genomes at various times.

Researchers debate whether domesticated viruses should be called viruses at all. “Some people say that it’s definitely still a virus; others say it’s integrated, and so it’s a part of the wasp,” says Marcel Dicke, an ecologist at Wageningen University in the Netherlands who described how domesticated viruses indirectly affect plants and other organisms in a 2020 paper in the Annual Review of Entomology.

As the wasp-virus composite evolves, the virus genome becomes scattered through the wasp’s DNA. Some genes decay, but a core set is preserved—those essential for making the original virus’s infectious particles. “The parts are all in these different locations in the wasp genome. But they still can talk to each other. And they still make products that cooperate with each other to make virus particles,” says Michael Strand, an entomologist at the University of Georgia. But instead of containing a complete viral genome, as a wild virus would, domesticated virus particles serve as delivery vehicles for the wasp’s weapons.

Here are the steps in the life of a parasitic wasp that harbors a bracovirus.

Those weapons vary widely. Some are proteins, while others are genes on short segments of DNA. Most bear little resemblance to anything found in wasps or viruses, so it’s unclear where they originated. And they are constantly changing, locked in evolutionary arms races with the defenses of the caterpillars or other hosts.

In many cases, researchers have yet to discover even what the genes and proteins do inside the wasps’ hosts or prove that they function as weapons. But they have untangled some details.

For example, M. demolitor wasps use bracoviruses to deliver a gene called glc1.8 into the immune cells of moth caterpillars. The glc1.8 gene causes the infected immune cells to produce mucus that prevents them from sticking to the wasp’s eggs. Other genes in M. demolitor’s bracoviruses force immune cells to kill themselves, while still others prevent caterpillars from smothering parasites in sheaths of melanin.

Analyst on Starlink’s rapid rise: “Nothing short of mind-blowing”

$tarlink —

Starlink’s estimated free cash flow this year is about $600 million.

60 Starlink satellites stacked for launch at a SpaceX facility in Cape Canaveral, Florida, in 2019.

According to the research firm Quilty Space, SpaceX’s Starlink satellite Internet business is now profitable.

During a webinar on Thursday, analysts from the firm outlined the reasons why they think SpaceX has been able to achieve a positive cash flow in its space Internet business just five years after the first batch of 60 satellites were launched.

The co-founder of the firm, Chris Quilty, said the rapidity of Starlink’s rise surprised a lot of people, including himself. “A lot of industry veterans kind of scoffed at the idea,” he said. “We’d seen this before.”

Some history

Both SpaceX and another company, OneWeb, announced plans to build satellite megaconstellations in 2015 to deliver broadband Internet from low-Earth orbit. There was a lot of skepticism in the space community at the time because such plans had come and gone before, including a $9 billion constellation proposed by Teledesic with about 800 satellites that only ever managed to put a single demonstration satellite into space.

The thinking was that it would be too difficult to launch that many spacecraft and too technically challenging to get them all to communicate. Quilty recalled his own comments on the proposals back in 2015.

Analysis of Starlink financials in the last three years.

Quilty Space

“I correctly forecast that there would be no near term impact on the industry, but boy, was I wrong on the long-term impact,” he said. “I think I called for possibly a partial impact on certain segments of the industry. Incorrect. But remember the context back in 2015, the largest constellation in existence was Iridium with 66 satellites, and back in 2015, it wasn’t even entirely clear that they were going to make it successfully without a second dip into bankruptcy.”

It is clear that SpaceX has been successful on the launch and technical challenges. The company has deployed nearly 6,000 satellites, with more than 5,200 still operational and delivering Internet to 2.7 million customers in 75 different countries. But is the service profitable? That’s the question Quilty and his research team sought to address.

Build a model

Because Starlink is part of SpaceX’s portfolio, the company’s true financial situation is private. So Quilty built a model to assess the company’s profitability. First, the researchers assessed revenue. The firm estimates this will grow to $6.6 billion in 2024, up from essentially zero just four years ago.

“What Starlink achieved in the past three years is nothing short of mind-blowing,” Quilty said. “If you want to put that in context, SES and Intelsat announced in the last two weeks—these are the two largest geo-satellite operators—that they’re going to combine. They’ll have combined revenues of about $4.1 billion.”

In addition to rapidly growing its subscriber base, SpaceX has managed to control costs. It has built its satellites, which are connected to Internet hubs on Earth and beam connectivity to user terminals, for far less money than historical rivals. The version 1.0 satellites are estimated to have cost just $200,000.

Building satellites for less.

Quilty Space

How has SpaceX done this? Caleb Henry, director of research for Quilty, pointed to three major factors.

“One is, they really, really aggressively vertically integrate, and that allows them to keep costs down by not having to absorb the profit margins from outside suppliers,” he said. “They really designed for manufacture and for cheap manufacture. And you can kind of see that in some of the component selections and designs that they’ve used. And then they’ve also built really high volume, so a production cadence and rate that the industry has not seen before.”

Getting to a profit

Quilty estimates that Starlink will have an EBITDA of $3.8 billion this year. EBITDA stands for earnings before interest, taxes, depreciation, and amortization, and it indicates how well a company is managing its day-to-day operations. Additionally, Quilty estimates that capital expenditures for Starlink will be $3.1 billion this year. This leaves an estimated free cash flow from the business of about $600 million. In other words, Starlink is making money for SpaceX. It is self-sustaining.
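The arithmetic behind that bottom line is simple subtraction; the sketch below is our rough reconstruction, not Quilty’s actual (private) model, which evidently also nets out cash items beyond the two the article quotes:

```python
# Rough reconstruction of the free-cash-flow arithmetic from the figures
# quoted in the article. Treating FCF as EBITDA minus capex is our
# simplification; Quilty's full model is private.

ebitda_2024 = 3.8e9  # Quilty's Starlink EBITDA estimate for this year
capex_2024 = 3.1e9   # Quilty's Starlink capital-expenditure estimate

fcf = ebitda_2024 - capex_2024
print(f"EBITDA - capex = ${fcf / 1e9:.1f} billion")
# -> $0.7 billion; Quilty quotes roughly $600 million, so the firm's model
# presumably subtracts further cash costs (e.g., interest or taxes) that
# the article does not break out.
```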

According to Quilty’s analysis, the Starlink business has also addressed some concerns about its long-term financial viability. For example, it no longer subsidizes the cost of user terminals in the United States, and the replenishment costs for satellites in orbit are manageable.

These figures, it should be noted, do not include SpaceX’s Starshield business, which is building custom satellites for the US military for observation purposes and will likely leverage its Starlink technology.

There is also room for significant growth as the larger Starship rocket comes online and begins to launch version 3.0 Starlink satellites. These are significantly chunkier, likely about 1.5 metric tons each, and will offer far more broadband capacity as well as direct-to-cell communications, which removes the need for user terminals.

Outdoing the dinosaurs: What we can do if we spot a threatening asteroid

We’d like to avoid this.

Science Photo Library/Andrzej Wojcicki/Getty Images

In 2005, the United States Congress laid out a clear mandate: To protect our civilization and perhaps our very species, by 2020, the nation should be able to detect, track, catalog, and characterize no less than 90 percent of all near-Earth objects at least 140 meters across.

As of today, four years after that deadline, we have identified less than half of those possible threats and characterized only a small percentage of them. Even if we did have a full census of all threatening space rocks, we do not have the capability to rapidly respond to an Earth-intersecting asteroid (despite the success of NASA’s Double Asteroid Redirection Test, or DART, mission).

Some day in the finite future, an object will pose a threat to us—it’s an inevitability of life in our Solar System. The good news is that it’s not too late to do something about it. But it will take some work.

Close encounters

The dangers are, to put it bluntly, everywhere around us. The International Astronomical Union’s Minor Planet Center, which maintains a list of (no points awarded for guessing correctly) minor planets within the Solar System, has a running tally. At the time of writing, the Center has recorded 34,152 asteroids with orbits that come within 0.05 AU of the Earth (an AU is one astronomical unit, the average distance between the Earth and the Sun).

These near-Earth asteroids (or NEAs for short, sometimes called NEOs, for near-Earth objects) aren’t necessarily going to impact the Earth. But they’re the most likely ones to do it; in all the billions of kilometers that encompass the wide expanse of our Solar System, these are the ones that live in our neighborhood.

And impact they do. The larger planets and moons of our Solar System are littered with the craterous scars of past violent collisions. The only reason the Earth doesn’t have the same amount of visible damage as, say, the Moon is that our planet constantly reshapes its surface through erosion and plate tectonics.

It’s through craters elsewhere that astronomers have built up a sense of how often a planet like the Earth experiences a serious impact and the typical sizes of those impactors.

Tiny things happen all the time. When you see a beautiful shooting star streaking across the night sky, that’s from the “impact” of an object somewhere between the size of a grain of sand and a tiny pebble striking our atmosphere at a few tens of thousands of kilometers per hour.

Every few years or so, an object 10 meters across hits us; when it does, it delivers energy roughly equivalent to that of our earliest atomic weapons. Thankfully, most of the Earth is open ocean, and most impactors of this class burst apart in the upper atmosphere, so we typically don’t have to worry too much about them.

The much larger—but thankfully much rarer—asteroids are what cause us heartburn. This is where we get into the delightful mathematics of attempting to calculate an existential risk to humanity.

At one end of the scale, we have the kind of stuff that kills dinosaurs and envelops the globe in a shroud of ash. These rocks are several kilometers across but only come into Earth-crossing trajectories every few million years. One of them would doom us—certainly our civilization and likely our species. The combination of the unimaginable scale of devastation and the incredibly small likelihood of it occurring puts this kind of threat almost beyond human comprehension—and intervention. For now, we just have to hope that our time isn’t up.

Then there are the in-betweeners. These are the space rocks starting at a hundred meters across. Upon impact, they release a minimum of 30 megatons of energy, which is capable of leaving a crater a couple of kilometers across. Those kinds of dangers present themselves roughly every 10,000 years.
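Those energy figures follow from simple kinetic-energy arithmetic. The sketch below is our own order-of-magnitude calculation, using assumed typical values for a stony asteroid’s density and impact speed (neither number comes from the article), and it reproduces the scale of both the 10-meter and 100-meter cases:

```python
# Order-of-magnitude impact energies for stony asteroids. Density and
# impact speed are assumed typical values, not figures from the article.

import math

DENSITY = 3000.0   # kg/m^3, typical stony asteroid (assumed)
SPEED = 13_000.0   # m/s, near the low end of Earth-impact speeds (assumed)
JOULES_PER_KILOTON = 4.184e12

def impact_energy_kt(diameter_m):
    """Kinetic energy of a spherical impactor, in kilotons of TNT."""
    radius = diameter_m / 2
    mass = DENSITY * (4 / 3) * math.pi * radius ** 3
    return 0.5 * mass * SPEED ** 2 / JOULES_PER_KILOTON

# ~30 kt: roughly the scale of the earliest atomic weapons, as the
# text says of 10-meter rocks
print(f"10 m object:  {impact_energy_kt(10):,.0f} kt of TNT")

# ~30 Mt: consistent with the 'minimum of 30 megatons' quoted for 100 m
print(f"100 m object: {impact_energy_kt(100) / 1000:,.0f} Mt of TNT")
```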

That’s an interesting time scale. Our written history stretches back thousands of years, and our institutions have existed for thousands of years. We can envision our civilization, our ways of life, and our humanity continuing into the future for thousands of years.

This means that at some point, either we or our descendants will have to deal with a threat of this magnitude. Not a rock large enough to hit the big reset button on life but powerful enough to present a scale of disaster not yet seen in human history.

NASA confirms “independent review” of Orion heat shield issue

The Orion spacecraft after splashdown in the Pacific Ocean at the end of the Artemis I mission.

NASA has asked a panel of outside experts to review the agency’s investigation into the unexpected loss of material from the heat shield of the Orion spacecraft on a test flight in 2022.

Chunks of charred material cracked and chipped away from Orion’s heat shield during reentry at the end of the 25-day unpiloted Artemis I mission in December 2022. Engineers inspecting the capsule after the flight found more than 100 locations where the stresses of reentry stripped away pieces of the heat shield as temperatures built up to 5,000° Fahrenheit.

This was the most significant discovery on Artemis I, an unpiloted test flight that took the Orion capsule around the Moon for the first time. The next mission in NASA’s Artemis program, Artemis II, is scheduled for launch late next year on a test flight to send four astronauts around the far side of the Moon.

Another set of eyes

The heat shield, made of a material called Avcoat, is attached to the base of the Orion spacecraft in 186 blocks. Avcoat is designed to ablate, or erode, in a controlled manner during reentry. Instead, fragments fell off the heat shield, leaving cavities resembling potholes.

Investigators are still looking for the root cause of the heat shield problem. Since the Artemis I mission, engineers conducted sub-scale tests of the Orion heat shield in wind tunnels and high-temperature arcjet facilities. NASA has recreated the phenomenon observed on Artemis I in these ground tests, according to Rachel Kraft, an agency spokesperson.

“The team is currently synthesizing results from a variety of tests and analyses that inform the leading theory for what caused the issues,” Kraft said.

Last week, nearly a year and a half after the Artemis I flight, the public got its first look at the condition of the Orion heat shield with post-flight photos released in a report from NASA’s inspector general. Cameras aboard the Orion capsule also recorded pieces of the heat shield breaking off the spacecraft during reentry.

NASA’s inspector general said the char loss issue “creates a risk that the heat shield may not sufficiently protect the capsule’s systems and crew from the extreme heat of reentry on future missions.”

“Those pictures, we’ve seen them since they were taken, but more importantly… we saw it,” said Victor Glover, pilot of the Artemis II mission, in a recent interview with Ars. “More than any picture or report, I’ve seen that heat shield, and that really set the bit for how interested I was in the details.”
