Science


How the Moon got a makeover

Putting on a new face —

The Moon’s former surface sank to the depths, until volcanism brought it back.


Our Moon may appear to shine peacefully in the night sky, but billions of years ago, it was given a facial by volcanic turmoil.

One question that has gone unanswered for decades is why there are more titanium-rich volcanic rocks, such as ilmenite-bearing basalts, on the near side than on the far side. Now a team of researchers at the University of Arizona’s Lunar and Planetary Laboratory is proposing a possible explanation.

The lunar surface was once flooded by a bubbling magma ocean, and after the magma ocean had hardened, there was an enormous impact on the far side. Heat from this impact spread to the near side and made the crust unstable, causing sheets of heavier and denser minerals on the surface to gradually sink deep into the mantle. These melted again and were belched out by volcanoes. Lava from these eruptions (more of which happened on the near side) ended up in what are now titanium-rich flows of volcanic rock. In other words, the Moon’s old face vanished, only to resurface.

What lies beneath

The region of the Moon in question is known as the Procellarum KREEP Terrane (PKT). KREEP signifies high concentrations of potassium (K), rare earth elements (REE), and phosphorus (P). This is also where ilmenite-rich basalts are found. Both KREEP and the basalts are thought to have first formed when the Moon was cooling from its magma ocean phase. But the region stayed hot, as KREEP also contains high levels of radioactive uranium and thorium.

“The PKT region… represents the most volcanically active region on the Moon as a natural result of the high abundances of heat-producing elements,” the researchers said in a study recently published in Nature Geoscience.

Why is this region located on the near side, while the far side is lacking in KREEP and ilmenite-rich basalts? There was one existing hypothesis that caught the researchers’ attention: it proposed that after the magma ocean hardened on the near side, sheets of these KREEP minerals were too heavy to stay on the surface. They began to sink into the mantle and down to the border between the mantle and core. As they sank, these mineral sheets were thought to have left behind trace amounts of material throughout the mantle.

If the hypothesis was accurate, there should be traces of minerals from the hardened KREEP magma crust in sheet-like configurations beneath the lunar surface, possibly reaching all the way down to the core-mantle boundary.

How could that be tested? Gravity data from the GRAIL (Gravity Recovery and Interior Laboratory) mission to the Moon possibly held the answer: it would allow the researchers to detect gravitational anomalies caused by the higher density of the KREEP rock compared to surrounding materials.

Coming to the surface

GRAIL data had previously revealed that there was a pattern of subsurface gravitational anomalies in the PKT region. This appeared similar to the pattern that the sheets of volcanic rock were predicted to have made as they sank, which is why the research team decided to run a computer simulation of sinking KREEP to see how well the hypothesis matched up with the GRAIL findings.

Sure enough, the simulation ended up forming just about the same pattern as the anomalies GRAIL found. The polygonal pattern seen in both the simulations and the GRAIL data most likely means that traces of the heavier KREEP and ilmenite-rich basalt layers were left behind beneath the surface as those layers sank, and GRAIL detected the residue due to its greater gravitational pull. GRAIL also revealed many smaller anomalies in the PKT region, which makes sense considering that a large part of the crust there is made of volcanic rocks thought to have sunk and left behind residue before melting and surfacing again through eruptions.

We now also have an idea of when this happened. There are impact basins dated to around 4.22 billion years ago (not to be confused with the earlier far-side impact), and the magma ocean is thought to have hardened before that, so the researchers think the crust also began to sink before that time.

“The PKT border anomalies provide the most direct physical evidence for the nature of the post-magma ocean… mantle overturn and sinking of ilmenite into the deep interior,” the team said in the same study.

This is just one more bit of information regarding how the Moon evolved and why its two sides are so different. The near side once raged with lava that is now volcanic rock, much of which lies in vast flows called maria (Latin for “seas”). Most of this volcanic rock, especially in the PKT region, contains rare earth elements.

The only way to confirm that traces of the ancient crust really do lie inside the Moon is to collect actual lunar material from far beneath the surface. When Artemis astronauts are finally able to gather samples of volcanic material from the Moon in situ, who knows what will come to the surface?

Nature Geoscience, 2024.  DOI: 10.1038/s41561-024-01408-2



NASA wants a cheaper Mars Sample Return—Boeing proposes most expensive rocket

The Space Launch System rocket lifts off on the Artemis I mission.

NASA is looking for ways to get rock samples back from Mars for less than the $11 billion the agency would need under its own plan, so last month, officials put out a call to industry to propose ideas.

Boeing is the first company to release details about how it would attempt a Mars Sample Return mission. Its study involves a single flight of the Space Launch System (SLS) rocket, the super heavy-lift launcher designed to send astronauts to the Moon on NASA’s Artemis missions.

Jim Green, NASA’s former chief scientist and longtime head of the agency’s planetary science division, presented Boeing’s concept Wednesday at the Humans to Mars summit, an annual event sponsored primarily by traditional space companies. Boeing is the lead contractor for the SLS core stage and upper stage and has pitched the SLS, primarily a crew launch vehicle, as a rocket for military satellites and deep space probes.

All in one

Green, now retired, said the concept he and Boeing engineers propose would reduce the risks of Mars Sample Return. With one mission, there are fewer points of potential failure, he said.

“To reduce mission complexity, this new concept is doing one launch,” Green said.

This argument makes some sense, but the problem is SLS is the most expensive rocket flying today. Even if NASA and Boeing introduce cost-cutting measures, NASA’s inspector general reported last year it’s unlikely the cost of a single SLS launch would fall below $2 billion. The inspector general recommended NASA consider buying commercial rockets as an alternative to SLS for future Artemis missions.

NASA’s Perseverance rover, operating on Mars since February 2021, is collecting soil and rock core samples and sealing them in 43 cigar-size titanium tubes. The rover has dropped the first 10 of these tubes in a depot on the Martian surface that could be retrieved by a future sample return mission. The remaining tubes will likely remain stowed on Perseverance in hopes the rover will directly hand off the samples to the spacecraft that comes to Mars to get them.

Boeing says a single launch of the Space Launch System rocket could carry everything needed for a Mars Sample Return mission.

Boeing

In his remarks, Green touted the benefits of launching a Mars Sample Return mission with a single rocket and a single spacecraft. NASA’s baseline concept involves two launches: one with a US-built lander and a small rocket to boost the rock samples back off the surface of Mars, and another with a European spacecraft to rendezvous with the sample carrier in orbit around Mars, then bring the specimens back to Earth.

“This concept is one launch vehicle,” he said. “It’s the SLS. What does it do? It’s carrying a massive payload. What is the payload? It’s a Mars entry and descent aeroshell. It has a propulsive descent module.”

The lander would carry everything needed to get the samples back to Earth. A fetch rover onboard the lander would deploy to drive out and pick up the sample tubes collected by the Perseverance rover. Then, a robotic arm would transfer the sample tubes to a container at the top of a two-stage rocket called the Mars Ascent Vehicle (MAV) sitting on top of the lander. The MAV would have the oomph needed to boost the samples off the surface of Mars and into orbit, then fire engines to target a course back to Earth.

Boeing has no direct experience as a prime contractor for any Mars mission. SpaceX, with its giant Starship rocket designed for eventual Mars missions, and Lockheed Martin, which has built several Mars landers for NASA, are the companies with the technology and expertise that seem to be most useful for Mars Sample Return.

NASA is also collecting ideas for Mars Sample Return from its space centers across the United States. The agency also tasked the Jet Propulsion Laboratory, which was in charge of developing the original dead-on-arrival concept, to come up with a better idea. Later this year, NASA officials will reference these new proposals as they decide how to proceed with Mars Sample Return, with the goal of getting samples back from Mars in the 2030s.



More children gain hearing as gene therapy for profound deafness advances

Success —

The therapy treats a rare type of deafness, but experts hope it’s a “jumping point.”

Opal Sandy (center), who was born completely deaf because of a rare genetic condition, can now hear unaided for the first time after receiving gene therapy at 11 months old. She is shown with her mother, father, and sister at their home in Eynsham, Oxfordshire, on May 7, 2024.

There are few things more heartwarming than videos of children with deafness gaining the ability to hear, showing them happily turning their heads at the sound of their parents’ voices and joyfully bobbing to newly discovered music. Thanks to recent advances in gene therapy, more kids are getting those sweet and triumphant moments—with no hearing aids or cochlear implants needed.

At the annual conference of the American Society for Gene & Cell Therapy held in Baltimore this week, researchers showed many of those videos to their audiences of experts. On Wednesday, Larry Lustig, an otolaryngologist at Columbia University, presented clinical trial data of two children with profound deafness—the most severe type of deafness—who are now able to hear at normal levels after receiving an experimental gene therapy. One of the children was 11 months old at the time of the treatment, marking her as the youngest child in the world to date to receive gene therapy for genetic deafness.

On Thursday, Yilai Shu, an otolaryngologist at Fudan University in Shanghai, provided a one-year progress report on six children who were treated in the first in-human trial of gene therapy for genetic deafness. Five of the six had their hearing restored.

That trial, like the one Lustig presented, involved treating just one ear in all of the children—a safety precaution for such early trials. But Shu and colleagues have already moved on to both ears, or bilateral treatment. After presenting a progress report on the first trial, Shu presented unpublished early data on five additional patients who participated in the first in-human trial of bilateral treatment. All had bilateral hearing restoration and speech perception improvement.

“The opportunity of providing the full complexity and spectrum of sound in children born with profound genetic deafness is a phenomenon I did not expect to see in my lifetime,” Lustig said in a statement.

Jumping point

Shu and Lustig’s trials are separate, but the treatments are, in broad strokes, similar. Both aim to reverse hearing loss caused by mutations in the OTOF gene, which codes for the protein otoferlin. Otoferlin is a critical protein for transmitting sound signals to the brain, playing a key role in synaptic transmission between the ear’s inner hair cells and the auditory nerve. Using gutted adeno-associated viruses as vectors for gene delivery, the therapies provide the inner ear with a functional version of the OTOF gene. Once in the ear, the gene can be translated into functional otoferlin, restoring auditory signaling.

In the trial Lustig presented, the two patients saw a gradual improvement of hearing as otoferlin protein built up after treatment. For the 11-month-old, normal levels of hearing were restored within 24 weeks of treatment. For the second patient, a 4-year-old, improvements were detected at a six-week assessment. In the trial Shu presented, children began seeing hearing improvements at three- and four-week assessments. The children will continue to be followed into the future, which holds some uncertainties. It’s unclear if they will, at some point in their lives, need additional treatments to sustain their hearing. In mice, at least, the treatment lasts for the duration of the animals’ lives—but they only live for a few years.

“We expect this to last a long time,” Lustig said Wednesday. But “we don’t know what’s going to happen and we don’t know whether we can do a second dose. But, probably, I would guess, at some point that would have to be done.”

For now, the treatment is considered low-hanging fruit for the burgeoning field of gene therapy since it targets a severe condition caused by recessive mutations in a single gene. Otoferlin mutations lead to a very specific type of deafness called auditory neuropathy, in which the ear fails to send signals to the brain but works perfectly fine otherwise. This is an ultra-rare form of deafness affecting 1–8 percent of people with deafness globally. Only about 30 to 50 people in the US are born with this type of deafness each year.

However, Lustig calls it a “jumping point.” Now that researchers have shown that this gene therapy can work, “This is going to really spark, we hope, the development of gene therapy for more common types of deafness,” he said.



How you can make cold-brew coffee in under 3 minutes using ultrasound

Save yourself a few hours —

A “sonication” time between 1 and 3 minutes is ideal to get the perfect cold brew.

UNSW Sydney engineers developed a new way to make cold brew coffee in under three minutes without sacrificing taste.

University of New South Wales, Sydney

Diehard fans of cold-brew coffee put in a lot of time and effort for their preferred caffeinated beverage. But engineers at the University of New South Wales, Sydney, figured out a nifty hack. They rejiggered an existing espresso machine to accommodate an ultrasonic transducer that administers ultrasonic pulses, reducing the brewing time from the usual 12 to 24 hours to just under three minutes, according to a new paper published in the journal Ultrasonics Sonochemistry.

As previously reported, rather than pouring boiling or near-boiling water over coffee grounds and steeping for a few minutes, the cold-brew method involves mixing coffee grounds with room-temperature water and letting the mixture steep for anywhere from several hours to two days. The mixture is then strained through a sieve to remove the sludge-like solids and filtered. This can be done at home in a Mason jar, or you can get fancy and use a French press or a more elaborate Toddy system. It’s not necessarily served cold (although it can be)—just brewed cold.

The result is coffee that tastes less bitter than traditionally brewed coffee. “There’s nothing like it,” co-author Francisco Trujillo of UNSW Sydney told New Scientist. “The flavor is nice, the aroma is nice and the mouthfeel is more viscous and there’s less bitterness than a regular espresso shot. And it has a level of acidity that people seem to like. It’s now my favorite way to drink coffee.”

While there have been plenty of scientific studies delving into the chemistry of coffee, only a handful have focused specifically on cold-brew coffee. For instance, a 2018 study by scientists at Thomas Jefferson University in Philadelphia involved measuring levels of acidity and antioxidants in batches of cold- and hot-brew coffee. But those experiments only used lightly roasted coffee beans. The degree of roasting (temperature) makes a significant difference when it comes to hot-brew coffee. Might the same be true for cold-brew coffee?

To find out, the same team decided in 2020 to explore the extraction yields of light-, medium-, and dark-roast coffee beans during the cold-brew process. They used the cold-brew recipe from The New York Times for their experiments, with a water-to-coffee ratio of 10:1 for both cold- and hot-brew batches. (Hot brew normally has a water-to-coffee ratio of 20:1, but the team wanted to control variables as much as possible.) They carefully controlled when water was added to the coffee grounds, how long to shake (or stir) the solution, and how best to press the cold-brew coffee.

The team found that for the lighter roasts, caffeine content and antioxidant levels were roughly the same in both the hot- and cold-brew batches. However, there were significant differences between the two methods when medium- and dark-roast coffee beans were used. Specifically, the hot-brew method extracts more antioxidants from the grind; the darker the bean, the greater the difference. Both hot- and cold-brew batches become less acidic the darker the roast.

The new faster cold brew system subjects coffee grounds in the filter basket to ultrasonic sound waves from a transducer, via a specially adapted horn.

UNSW/Francisco Trujillo

That gives cold brew fans a few handy tips, but the process remains incredibly time-consuming; only true aficionados have the patience required to cold brew their own morning cuppa. Many coffee houses now offer cold brews, but doing so requires expensive, large semi-industrial brewing units and a good deal of refrigeration space. According to Trujillo, the inspiration for using ultrasound to speed up the process came from earlier research attempts to extract more antioxidants. Those experiments ultimately failed, but the setup produced very good coffee.

Trujillo et al. used a Breville Dual Boiler BES920 espresso machine for their latest experiments, with a few key modifications. They connected a bolt-clamped transducer to the brewing basket with a metal horn, then used the transducer to inject 38.8 kHz sound waves through the basket walls at several different points, transforming the filter basket into a powerful ultrasonic reactor.

The team used the machine’s original boiler but set it up to be independently controlled with an integrated circuit to better manage the temperature of the water. As for the coffee beans, they picked Campos Coffee’s Caramel & Rich Blend (a medium roast). “This blend combines fresh, high-quality specialty coffee beans from Ethiopia, Kenya, and Colombia, and the roasted beans deliver sweet caramel, butterscotch, and milk chocolate flavors,” the authors wrote.

There were three types of samples for the experiments: cold brew hit with ultrasound at room temperature for one minute or for three minutes, and cold brew prepared with the usual 24-hour process. For the ultrasonic brews, the beans were ground into a fine grind typical for espresso, while a slightly coarser grind was used for the traditional cold-brew coffee.



Exploration-focused training lets robotics AI immediately handle new tasks

Exploratory —

Maximum Diffusion Reinforcement Learning focuses training on end states, not process.

A woman performs maintenance on a robotic arm.

boonchai wedmakawand

Reinforcement-learning algorithms in systems like ChatGPT or Google’s Gemini can work wonders, but they usually need hundreds of thousands of shots at a task before they get good at it. That’s why it’s always been hard to transfer this performance to robots. You can’t let a self-driving car crash 3,000 times just so it can learn crashing is bad.

But now a team of researchers at Northwestern University may have found a way around it. “That is what we think is going to be transformative in the development of the embodied AI in the real world,” says Thomas Berrueta, who led the development of Maximum Diffusion Reinforcement Learning (MaxDiff RL), an algorithm tailored specifically for robots.

Introducing chaos

The problem with deploying most reinforcement-learning algorithms in robots starts with the built-in assumption that the data they learn from is independent and identically distributed. The independence, in this context, means the value of one variable does not depend on the value of another variable in the dataset—when you flip a coin two times, getting tails on the second attempt does not depend on the result of your first flip. Identical distribution means that the probability of seeing any specific outcome is the same. In the coin-flipping example, the probability of getting heads is the same as getting tails: 50 percent for each.

In virtual, disembodied systems, like YouTube recommendation algorithms, getting such data is easy because most of the time it meets these requirements right off the bat. “You have a bunch of users of a website, and you get data from one of them, and then you get data from another one. Most likely, those two users are not in the same household, they are not highly related to each other. They could be, but it is very unlikely,” says Todd Murphey, a professor of mechanical engineering at Northwestern.

The problem is that, if those two users were related to each other and were in the same household, it could be that the only reason one of them watched a video was that their housemate watched it and told them to watch it. This would violate the independence requirement and compromise the learning.

“In a robot, getting this independent, identically distributed data is not possible in general. You exist at a specific point in space and time when you are embodied, so your experiences have to be correlated in some way,” says Berrueta. To solve this, his team designed an algorithm that pushes robots to be as randomly adventurous as possible to get the widest set of experiences to learn from.
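To make the contrast concrete, here is a minimal Python sketch of the difference the researchers describe; it is purely illustrative (the numbers and names are assumptions for the example, not code from the Northwestern team). The coin flips are independent and identically distributed, while a robot-like random walk produces strongly correlated consecutive samples.

```python
# Illustrative sketch only: i.i.d. data vs. the correlated data an embodied
# robot collects. Not from the MaxDiff RL paper.
import numpy as np

rng = np.random.default_rng(0)

# i.i.d. samples: each coin flip ignores every other flip, and heads/tails
# are equally likely every time.
iid_flips = rng.integers(0, 2, size=10_000)

# Correlated samples: a robot's next state depends on its current state,
# modeled here as a random walk taking small steps.
steps = rng.normal(0.0, 0.1, size=10_000)
robot_states = np.cumsum(steps)

# Correlation between consecutive samples: near 0 for the flips,
# close to 1 for the random walk.
print(np.corrcoef(iid_flips[:-1], iid_flips[1:])[0, 1])
print(np.corrcoef(robot_states[:-1], robot_states[1:])[0, 1])
```

Running this prints a correlation near zero for the flips and close to 1 for the walk, which is the sense in which an embodied robot's experiences violate the i.i.d. assumption.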

Two flavors of entropy

The idea itself is not new. Nearly two decades ago, people in AI figured out algorithms, like Maximum Entropy Reinforcement Learning (MaxEnt RL), that worked by randomizing actions during training. “The hope was that when you take as diverse a set of actions as possible, you will explore more varied sets of possible futures. The problem is that those actions do not exist in a vacuum,” Berrueta claims. Every action a robot takes has some kind of impact on its environment and on its own condition—disregarding those impacts completely often leads to trouble. To put it simply, an autonomous car that was teaching itself how to drive using this approach could elegantly park in your driveway but would be just as likely to hit a wall at full speed.

To solve this, Berrueta’s team moved away from maximizing the diversity of actions and went for maximizing the diversity of state changes. Robots powered by MaxDiff RL did not flail their robotic joints at random to see what that would do. Instead, they conceptualized goals like “can I reach this spot ahead of me” and then tried to figure out which actions would take them there safely.

Berrueta and his colleagues achieved that through something called ergodicity, a mathematical concept that says a point in a moving system will eventually visit all parts of the space that the system moves in. Basically, MaxDiff RL encouraged the robots to achieve every available state in their environment. And the results of the first tests in simulated environments were quite surprising.
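For readers who want the formal flavor of that idea, ergodicity is commonly stated as the long-run time average of a quantity along a single trajectory equaling its average over the whole space of states (this is the standard textbook formulation; the paper's precise definition may differ):

$$\lim_{T\to\infty}\frac{1}{T}\int_0^T f\bigl(x(t)\bigr)\,dt \;=\; \int_{\mathcal{X}} f(x)\,d\mu(x)$$

Intuitively, a trajectory that is ergodic with respect to its environment eventually samples every reachable state, which matches the description of MaxDiff RL encouraging robots to reach every available state.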

Racing pool noodles

“In reinforcement learning there are standard benchmarks that people run their algorithms on so we can have a good way of comparing different algorithms on a standard framework,” says Allison Pinosky, a researcher at Northwestern and co-author of the MaxDiff RL study. One of those benchmarks is a simulated swimmer: a three-link body resting on the ground in a viscous environment that needs to learn to swim as fast as possible in a certain direction.

In the swimmer test, MaxDiff RL outperformed two other state-of-the-art reinforcement learning algorithms (NN-MPPI and SAC). These two needed several resets to figure out how to move the swimmers. They followed the standard AI learning process, which is divided into a training phase, where an algorithm goes through multiple failed attempts to slowly improve its performance, and a testing phase, where it tries to perform the learned task. MaxDiff RL, by contrast, nailed it, immediately adapting its learned behaviors to the new task.

The earlier algorithms ended up failing to learn because they got stuck trying the same options and never progressed to the point where they could learn that alternatives work. “They experienced the same data repeatedly because they were locally doing certain actions, and they assumed that was all they could do and stopped learning,” Pinosky explains. MaxDiff RL, on the other hand, kept changing states, exploring, and gathering richer data to learn from, and it finally succeeded. And because, by design, it seeks to achieve every possible state, it can potentially complete all possible tasks within an environment.

But does this mean we can take MaxDiff RL, upload it to a self-driving car, and let it out on the road to figure everything out on its own? Not really.



Chemical tweaks to a toad hallucinogen turn it into a potential drug

No licking toads! —

Targets a different serotonin receptor from other popular hallucinogens.

The Colorado River toad, also known as the Sonoran Desert toad.

It is becoming increasingly accepted that classic psychedelics like LSD, psilocybin, ayahuasca, and mescaline can act as antidepressants and anti-anxiety treatments in addition to causing hallucinations. They act by binding to a serotonin receptor. But there are 14 known types of serotonin receptors, and most of the research into these compounds has focused on only one of them—the one these molecules like, called 5-HT2A. (5-HT, short for 5-hydroxytryptamine, is the chemical name for serotonin.)

The Colorado River toad (Incilius alvarius), also known as the Sonoran Desert toad, secretes a psychedelic compound that likes to bind to a different serotonin receptor subtype called 5-HT1A. And that difference may be the key to developing an entirely distinct class of antidepressants.

Uncovering novel biology

Like other psychedelics, the one the toad produces decreases depression and anxiety and induces meaningful and spiritually significant experiences. It has been used clinically to treat vets with post-traumatic stress disorder and is being developed as a treatment for other neurological disorders and drug abuse. 5-HT1A is a validated therapeutic target, as approved drugs, including the antidepressant Viibryd and the anti-anxiety med Buspar, bind to it. But little is known about how psychedelics engage with this receptor and which effects it mediates, so Daniel Wacker’s lab decided to look into it.

The researchers started by making chemical modifications to the toad psychedelic and noting how each of the tweaked molecules bound to both 5-HT2A and 5-HT1A. As a group, these psychedelics are known as “designer tryptamines”—that’s tryp with a “y”, mind you—because they are metabolites of the amino acid tryptophan.

The lab made 10 variants and found one that is more than 800-fold more selective for 5-HT1A than for 5-HT2A. That makes it a great research tool for elucidating the structure-activity relationship of the 5-HT1A receptor, as well as the molecular mechanisms behind the pharmacology of the drugs on the market that bind to it. The lab used it to explore both of those avenues. However, the variant’s ultimate utility might be as a new therapeutic for psychiatric disorders, so they tested it in mice.

Improving the lives of mice

The compound did not induce hallucinations in mice, as measured by the “head-twitch response.” But it did alleviate depression, as measured by a “chronic social defeat stress model.” In this model, for 10 days in a row, the experimental mouse was introduced to an “aggressor mouse” for “10-minute defeat bouts”; essentially, it got beat up by a bully at recess for two weeks. Understandably, after this experience, the experimental mouse tended not to be that friendly with new mice, as controls usually are. But when injected with the modified toad psychedelic, the bullied mice were more likely to interact positively with new mice they met.

Depressed mice, like depressed people, also suffer from anhedonia: a reduced ability to experience pleasure. In mice, this manifests in not taking advantage of drinking sugar water when given the opportunity. But treated bullied mice regained their preference for the sweet drink. About a third of mice seem to be “stress-resilient” in this model; the bullying doesn’t seem to faze them. The drug increased the number of resilient mice.

The 5-HT2A receptor has hogged all of the research love because it mediates the hallucinogenic effects of many popular psychedelics, so people assumed that it must mediate their therapeutic effects, too. However, Wacker argues that there is little evidence supporting this assumption. Wacker’s new toad-based psychedelic variant and its preference for the 5-HT1A receptor will help elucidate the complementary roles these two receptor subtypes play in mediating the cellular and psychological effects of psychedelic molecules. And it might provide the basis for a new tryptamine-based mental health treatment as well—one without hallucinatory side effects, disappointing as that may be to some.

Nature, 2024.  DOI: 10.1038/s41586-024-07403-2



The wasps that tamed viruses

Xorides praecatorius is a parasitoid wasp.

If you puncture the ovary of a wasp called Microplitis demolitor, viruses squirt out in vast quantities, shimmering like iridescent blue toothpaste. “It’s very beautiful, and just amazing that there’s so much virus made in there,” says Gaelen Burke, an entomologist at the University of Georgia.

M. demolitor  is a parasite that lays its eggs in caterpillars, and the particles in its ovaries are “domesticated” viruses that have been tuned to persist harmlessly in wasps and serve their purposes. The virus particles are injected into the caterpillar through the wasp’s stinger, along with the wasp’s own eggs. The viruses then dump their contents into the caterpillar’s cells, delivering genes that are unlike those in a normal virus. Those genes suppress the caterpillar’s immune system and control its development, turning it into a harmless nursery for the wasp’s young.

The insect world is full of species of parasitic wasps that spend their infancy eating other insects alive. And for reasons that scientists don’t fully understand, they have repeatedly adopted and tamed wild, disease-causing viruses and turned them into biological weapons. Half a dozen examples have already been described, and new research hints at many more.

By studying viruses at different stages of domestication, researchers today are untangling how the process unfolds.

Partners in diversification

The quintessential example of a wasp-domesticated virus involves a group called the bracoviruses, which are thought to be descended from a virus that infected a wasp, or its caterpillar host, about 100 million years ago. That ancient virus spliced its DNA into the genome of the wasp. From then on, it was part of the wasp, passed on to each new generation.

Over time, the wasps diversified into new species, and their viruses diversified with them. Bracoviruses are now found in some 50,000 wasp species, including M. demolitor. Other domesticated viruses are descended from different wild viruses that entered wasp genomes at various times.

Researchers debate whether domesticated viruses should be called viruses at all. “Some people say that it’s definitely still a virus; others say it’s integrated, and so it’s a part of the wasp,” says Marcel Dicke, an ecologist at Wageningen University in the Netherlands who described how domesticated viruses indirectly affect plants and other organisms in a 2020 paper in the Annual Review of Entomology.

As the wasp-virus composite evolves, the virus genome becomes scattered through the wasp’s DNA. Some genes decay, but a core set is preserved—those essential for making the original virus’s infectious particles. “The parts are all in these different locations in the wasp genome. But they still can talk to each other. And they still make products that cooperate with each other to make virus particles,” says Michael Strand, an entomologist at the University of Georgia. But instead of containing a complete viral genome, as a wild virus would, domesticated virus particles serve as delivery vehicles for the wasp’s weapons.

Here are the steps in the life of a parasitic wasp that harbors a bracovirus.

Those weapons vary widely. Some are proteins, while others are genes on short segments of DNA. Most bear little resemblance to anything found in wasps or viruses, so it’s unclear where they originated. And they are constantly changing, locked in evolutionary arms races with the defenses of the caterpillars or other hosts.

In many cases, researchers have yet to discover even what the genes and proteins do inside the wasps’ hosts or prove that they function as weapons. But they have untangled some details.

For example, M. demolitor  wasps use bracoviruses to deliver a gene called glc1.8  into the immune cells of moth caterpillars. The glc1.8  gene causes the infected immune cells to produce mucus that prevents them from sticking to the wasp’s eggs. Other genes in M. demolitor’s bracoviruses force immune cells to kill themselves, while still others prevent caterpillars from smothering parasites in sheaths of melanin.



Analyst on Starlink’s rapid rise: “Nothing short of mind-blowing”

$tarlink —

Starlink’s estimated free cash flow this year is about $600 million.

60 Starlink satellites stacked for launch at SpaceX’s facility in Cape Canaveral, Florida, in 2019.

According to the research firm Quilty Space, SpaceX’s Starlink satellite Internet business is now profitable.

During a webinar on Thursday, analysts from the firm outlined the reasons why they think SpaceX has been able to achieve a positive cash flow in its space Internet business just five years after the first batch of 60 satellites were launched.

The co-founder of the firm, Chris Quilty, said the rapidity of Starlink’s rise surprised a lot of people, including himself. “A lot of industry veterans kind of scoffed at the idea,” he said. “We’d seen this before.”

Some history

Both SpaceX and another company, OneWeb, announced plans to build satellite megaconstellations in 2015 to deliver broadband Internet from low-Earth orbit. There was a lot of skepticism in the space community at the time because such plans had come and gone before, including a $9 billion constellation proposed by Teledesic with about 800 satellites that only ever managed to put a single demonstration satellite into space.

The thinking was that it would be too difficult to launch that many spacecraft and too technically challenging to get them all to communicate. Quilty recalled his own comments on the proposals back in 2015.

Analysis of Starlink financials in the last three years.

Quilty Space

“I correctly forecast that there would be no near term impact on the industry, but boy, was I wrong on the long-term impact,” he said. “I think I called for possibly a partial impact on certain segments of the industry. Incorrect. But remember the context back in 2015, the largest constellation in existence was Iridium with 66 satellites, and back in 2015, it wasn’t even entirely clear that they were going to make it successfully without a second dip into bankruptcy.”

It is clear that SpaceX has been successful on the launch and technical challenges. The company has deployed nearly 6,000 satellites, with more than 5,200 still operational and delivering Internet to 2.7 million customers in 75 different countries. But is the service profitable? That’s the question Quilty and his research team sought to address.

Build a model

Because Starlink is part of SpaceX’s portfolio, the company’s true financial situation is private. So Quilty built a model to assess the company’s profitability. First, the researchers assessed revenue. The firm estimates this will grow to $6.6 billion in 2024, up from essentially zero just four years ago.

“What Starlink achieved in the past three years is nothing short of mind-blowing,” Quilty said. “If you want to put that in context, SES and Intelsat announced in the last two weeks—these are the two largest geo-satellite operators—that they’re going to combine. They’ll have combined revenues of about $4.1 billion.”

In addition to rapidly growing its subscriber base, SpaceX has managed to control costs. It has built its satellites, which are connected to Internet hubs on Earth and beam connectivity to user terminals, for far less money than historical rivals. The version 1.0 satellites are estimated to have cost just $200,000.

Building satellites for less.

Quilty Space

How has SpaceX done this? Caleb Henry, director of research for Quilty, pointed to three major factors.

“One is, they really, really aggressively vertically integrate, and that allows them to keep costs down by not having to absorb the profit margins from outside suppliers,” he said. “They really designed for manufacture and for cheap manufacture. And you can kind of see that in some of the component selections and designs that they’ve used. And then they’ve also built really high volume, so a production cadence and rate that the industry has not seen before.”

Getting to a profit

Quilty estimates that Starlink will have an EBITDA of $3.8 billion this year. EBITDA stands for earnings before interest, taxes, depreciation, and amortization, and it indicates how well a company is managing its day-to-day operations. Additionally, Quilty estimates that capital expenditures for Starlink will be $3.1 billion this year. This leaves an estimated free cash flow from the business of about $600 million. In other words, Starlink is making money for SpaceX. It is self-sustaining.
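As a rough sanity check on those figures (simple arithmetic, not Quilty's model), free cash flow is approximately operating earnings minus capital spending; the small gap between this subtraction and the roughly $600 million estimate presumably reflects rounding and other cash items not broken out in the webinar:

$$\text{FCF} \approx \text{EBITDA} - \text{capex} \approx \$3.8\ \text{billion} - \$3.1\ \text{billion} \approx \$0.7\ \text{billion}$$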

According to Quilty’s analysis, the Starlink business has also addressed some concerns about its long-term financial viability. For example, it no longer subsidizes the cost of user terminals in the United States, and the replenishment costs for satellites in orbit are manageable.

These figures, it should be noted, do not include SpaceX’s Starshield business, which is building custom satellites for the US military for observation purposes and will likely leverage its Starlink technology.

There is also room for significant growth for Starlink as the larger Starship rocket comes online and begins to launch version 3.0 Starlink satellites. These are considerably chunkier, likely about 1.5 metric tons each, and will offer far more broadband capacity as well as direct-to-cell communications, which removes the need for user terminals.



Outdoing the dinosaurs: What we can do if we spot a threatening asteroid

We’d like to avoid this.

Science Photo Library/Andrzej Wojcicki/Getty Images

In 2005, the United States Congress laid out a clear mandate: To protect our civilization and perhaps our very species, by 2020, the nation should be able to detect, track, catalog, and characterize no less than 90 percent of all near-Earth objects at least 140 meters across.

As of today, four years after that deadline, we have identified less than half and characterized only a small percentage of those possible threats. Even if we did have a full census of all threatening space rocks, we do not have the capabilities to rapidly respond to an Earth-intersecting asteroid (despite the success of NASA’s Double-Asteroid Redirection Test (DART) mission).

Some day in the finite future, an object will pose a threat to us—it’s an inevitability of life in our Solar System. The good news is that it’s not too late to do something about it. But it will take some work.

Close encounters

The dangers are, to put it bluntly, everywhere around us. The International Astronomical Union’s Minor Planet Center, which maintains a list of (no points awarded for guessing correctly) minor planets within the Solar System, has a running tally. At the time of writing, the Center has recorded 34,152 asteroids with orbits that come within 0.05 AU of the Earth (an AU is one astronomical unit, the average distance between the Earth and the Sun).

These near-Earth asteroids (or NEAs for short, sometimes called NEOs, for near-Earth objects) aren’t necessarily going to impact the Earth. But they’re the most likely ones to do it; in all the billions of kilometers that encompass the wide expanse of our Solar System, these are the ones that live in our neighborhood.

And impact they do. The larger planets and moons of our Solar System are littered with the craterous scars of past violent collisions. The only reason the Earth doesn’t have the same amount of visible damage as, say, the Moon is that our planet constantly reshapes its surface through erosion and plate tectonics.

It’s through craters elsewhere that astronomers have built up a sense of how often a planet like the Earth experiences a serious impact and the typical sizes of those impactors.

Tiny things happen all the time. When you see a beautiful shooting star streaking across the night sky, that’s from the “impact” of an object somewhere between the size of a grain of sand and a tiny pebble striking our atmosphere at a few tens of thousands of kilometers per hour.

Every few years or so, an object 10 meters across hits us; when it does, it delivers energy roughly equivalent to that of our earliest atomic weapons. Thankfully, most of the Earth is open ocean, and most impactors of this class burst apart in the upper atmosphere, so we typically don’t have to worry too much about them.

The much larger—but thankfully much rarer—asteroids are what cause us heartburn. This is where we get into the delightful mathematics of attempting to calculate an existential risk to humanity.

At one end of the scale, we have the kind of stuff that kills dinosaurs and envelops the globe in a shroud of ash. These rocks are several kilometers across but only come into Earth-crossing trajectories every few million years. One of them would doom us—certainly our civilization and likely our species. The combination of the unimaginable scale of devastation and the incredibly small likelihood of it occurring puts this kind of threat almost beyond human comprehension—and intervention. For now, we just have to hope that our time isn’t up.

Then there are the in-betweeners. These are the space rocks starting at a hundred meters across. Upon impact, they release a minimum of 30 megatons of energy, which is capable of leaving a crater a couple of kilometers across. Those kinds of dangers present themselves roughly every 10,000 years.
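For a rough sense of where a figure like 30 megatons comes from, here is a back-of-the-envelope estimate using generic assumptions (a rocky density of about 3,000 kg/m³ and an impact speed of roughly 12.5 km/s, near the minimum for anything hitting Earth), not numbers taken from the article:

$$m \approx \rho\,\tfrac{4}{3}\pi r^{3} \approx 3000\ \text{kg/m}^{3}\times\tfrac{4}{3}\pi\,(50\ \text{m})^{3} \approx 1.6\times10^{9}\ \text{kg}$$

$$E = \tfrac{1}{2}mv^{2} \approx \tfrac{1}{2}\,(1.6\times10^{9}\ \text{kg})\,(1.25\times10^{4}\ \text{m/s})^{2} \approx 1.2\times10^{17}\ \text{J} \approx 30\ \text{Mt TNT}$$

(using 1 megaton of TNT ≈ 4.2 × 10¹⁵ joules; faster or denser impactors push the yield well above that minimum).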

That’s an interesting time scale. Our written history stretches back thousands of years, and our institutions have existed for thousands of years. We can envision our civilization, our ways of life, and our humanity continuing into the future for thousands of years.

This means that at some point, either we or our descendants will have to deal with a threat of this magnitude. Not a rock large enough to hit the big reset button on life but powerful enough to present a scale of disaster not yet seen in human history.



NASA confirms “independent review” of Orion heat shield issue

The Orion spacecraft after splashdown in the Pacific Ocean at the end of the Artemis I mission.

NASA has asked a panel of outside experts to review the agency’s investigation into the unexpected loss of material from the heat shield of the Orion spacecraft on a test flight in 2022.

Chunks of charred material cracked and chipped away from Orion’s heat shield during reentry at the end of the 25-day unpiloted Artemis I mission in December 2022. Engineers inspecting the capsule after the flight found more than 100 locations where the stresses of reentry stripped away pieces of the heat shield as temperatures built up to 5,000° Fahrenheit.

This was the most significant discovery of the Artemis I mission, an unpiloted test flight that took the Orion capsule around the Moon for the first time. The next mission in NASA’s Artemis program, Artemis II, is scheduled for launch late next year on a test flight to send four astronauts around the far side of the Moon.

Another set of eyes

The heat shield, made of a material called Avcoat, is attached to the base of the Orion spacecraft in 186 blocks. Avcoat is designed to ablate, or erode, in a controlled manner during reentry. Instead, fragments fell off the heat shield, leaving cavities resembling potholes.

Investigators are still looking for the root cause of the heat shield problem. Since the Artemis I mission, engineers conducted sub-scale tests of the Orion heat shield in wind tunnels and high-temperature arcjet facilities. NASA has recreated the phenomenon observed on Artemis I in these ground tests, according to Rachel Kraft, an agency spokesperson.

“The team is currently synthesizing results from a variety of tests and analyses that inform the leading theory for what caused the issues,” Kraft said.

Last week, nearly a year and a half after the Artemis I flight, the public got its first look at the condition of the Orion heat shield with post-flight photos released in a report from NASA’s inspector general. Cameras aboard the Orion capsule also recorded pieces of the heat shield breaking off the spacecraft during reentry.

NASA’s inspector general said the char loss issue “creates a risk that the heat shield may not sufficiently protect the capsule’s systems and crew from the extreme heat of reentry on future missions.”

“Those pictures, we’ve seen them since they were taken, but more importantly… we saw it,” said Victor Glover, pilot of the Artemis II mission, in a recent interview with Ars. “More than any picture or report, I’ve seen that heat shield, and that really set the bit for how interested I was in the details.”



DeepMind adds a diffusion engine to latest protein-folding software

Added complexity —

Major under-the-hood changes let AlphaFold handle protein-DNA complexes and more.

Prediction of the structure of a coronavirus Spike protein from a virus that causes the common cold.

Google DeepMind

Most of the activities that go on inside cells—the activities that keep us living, breathing, thinking animals—are handled by proteins. They allow cells to communicate with each other, run a cell’s basic metabolism, and help convert the information stored in DNA into even more proteins. And all of that depends on the ability of the protein’s string of amino acids to fold up into a complicated yet specific three-dimensional shape that enables it to function.

Up until this decade, understanding that 3D shape meant purifying the protein and subjecting it to a time- and labor-intensive process to determine its structure. But that changed with the work of DeepMind, one of Google’s AI divisions, which released AlphaFold in 2021, and with a similar academic effort shortly afterward. The software wasn’t perfect; it struggled with larger proteins and didn’t offer high-confidence solutions for every protein. But many of its predictions turned out to be remarkably accurate.

Even so, these structures only told half of the story. To function, almost every protein has to interact with something else—other proteins, DNA, chemicals, membranes, and more. And, while the initial version of AlphaFold could handle some protein-protein interactions, the rest remained black boxes. Today, DeepMind is announcing the availability of version 3 of AlphaFold, which has seen parts of its underlying engine either heavily modified or replaced entirely. Thanks to these changes, the software now handles various additional protein interactions and modifications.

Changing parts

The original AlphaFold relied on two underlying software functions. One of those took evolutionary limits on a protein into account. By looking at the same protein in multiple species, you can get a sense for which parts are always the same, and therefore likely to be central to its function. That centrality implies that they’re always likely to be in the same location and orientation in the protein’s structure. To do this, the original AlphaFold found as many versions of a protein as it could and lined up their sequences to look for the portions that showed little variation.

Doing so, however, is computationally expensive since the more proteins you line up, the more constraints you have to resolve. In the new version, the AlphaFold team still identified multiple related proteins but switched to largely performing alignments using pairs of protein sequences from within the set of related ones. This probably isn’t as information-rich as a multi-alignment, but it’s far more computationally efficient, and the lost information doesn’t appear to be critical to figuring out protein structures.
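As a toy illustration of the conservation idea described above, the fraction of aligned sequences that share the most common residue at each position is one crude way to spot the parts that are always the same. The sequences and the scoring below are made up for the example and bear no resemblance to AlphaFold's actual pipeline.

```python
# Toy conservation scoring over a hypothetical alignment -- an illustration of
# the idea in the text, not AlphaFold code.
from collections import Counter

alignment = [
    "MKTAYIAKQR",  # hypothetical homolog 1
    "MKTAYLAKQR",  # hypothetical homolog 2
    "MKSAYIAKHR",  # hypothetical homolog 3
    "MKTAFIAKQR",  # hypothetical homolog 4
]

def column_conservation(seqs):
    """Fraction of sequences sharing the most common residue at each position."""
    scores = []
    for column in zip(*seqs):
        most_common_count = Counter(column).most_common(1)[0][1]
        scores.append(most_common_count / len(seqs))
    return scores

# Positions scoring 1.0 never vary in this (tiny) sample, hinting they may be
# central to the protein's structure or function.
print(column_conservation(alignment))
```

The point of the sketch is only to show why each additional aligned sequence adds both information and computational cost, which is the trade-off the pairwise approach in AlphaFold 3 addresses.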

Using these alignments, a separate software module figured out the spatial relationships among pairs of amino acids within the target protein. Those relationships were then translated into spatial coordinates for each atom by code that took into account some of the physical properties of amino acids, like which portions of an amino acid could rotate relative to others, etc.

In AlphaFold 3, the prediction of atomic positions is handled by a diffusion module, which is trained by being given both a known structure and versions of that structure where noise (in the form of shifting the positions of some atoms) has been added. This allows the diffusion module to take the inexact locations described by relative positions and convert them into exact predictions of the location of every atom in the protein. It doesn’t need to be told the physical properties of amino acids, because it can figure out what they normally do by looking at enough structures.

(DeepMind had to train on two different levels of noise to get the diffusion module to work: one in which the locations of atoms were shifted while the general structure was left intact and a second where the noise involved shifting the large-scale structure of the protein, thus affecting the location of lots of atoms.)
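Below is a minimal, hypothetical sketch of that two-level noising scheme paired with a generic denoising objective. It illustrates the idea only; the placeholder model, noise scales, and chunking are assumptions, not DeepMind's implementation.

```python
# Hypothetical sketch: local atom jitter plus rigid shifts of whole segments,
# with a model trained to recover the clean coordinates. Not AlphaFold 3 code.
import numpy as np

rng = np.random.default_rng(0)

def add_local_noise(coords, sigma=0.3):
    """Jitter each atom independently (small, angstrom-scale assumption)."""
    return coords + rng.normal(0.0, sigma, coords.shape)

def add_global_noise(coords, chunk=32, sigma=3.0):
    """Shift whole chunks of the structure rigidly, moving many atoms at once."""
    noisy = coords.copy()
    for start in range(0, len(coords), chunk):
        noisy[start:start + chunk] += rng.normal(0.0, sigma, 3)
    return noisy

def denoising_loss(model, clean_coords):
    """Score how well the model recovers clean coordinates from noisy ones."""
    noisy = add_global_noise(add_local_noise(clean_coords))
    predicted = model(noisy)  # placeholder: any coordinates-to-coordinates model
    return np.mean((predicted - clean_coords) ** 2)

# Usage with a trivial identity "model" on fake coordinates (128 atoms, xyz):
fake_structure = rng.normal(0.0, 10.0, (128, 3))
print(denoising_loss(lambda x: x, fake_structure))
```

Training would repeat this over many known structures, nudging the model to undo both kinds of noise at once.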

During training, the team found that it took about 20,000 instances of protein structures for AlphaFold 3 to get about 97 percent of a set of test structures right. By 60,000 instances, it started getting protein-protein interfaces correct at that frequency, too. And, critically, it started getting proteins complexed with other molecules right, as well.



No one has seen the data behind Tyson’s “climate friendly beef” claim

The Environmental Working Group published a new analysis on Wednesday outlining its efforts to push the USDA for more transparency, including asking for specific rationale in allowing brands to label beef as “climate friendly.”

Carolyn Van Houten/Washington Post via Getty

About five miles south of Broken Bow, in the heart of central Nebraska, thousands of cattle stand in feedlots at Adams Land & Cattle Co., a supplier of beef to the meat giant Tyson Foods.

From the air, the feedlots look dusty brown and packed with cows—not a vision of happy animals grazing on open pastureland, enriching the soil with carbon. But when the animals are slaughtered, processed, and sent onward to consumers, labels on the final product can claim that they were raised in a “climate friendly” way.

In late 2022, Tyson—one of the country’s “big four” meat packers—applied to the US Department of Agriculture (USDA), seeking a “climate friendly” label for its Brazen Beef brand. The production of Brazen Beef, the label claims, achieves a “10 percent greenhouse gas reduction.” Soon after, the USDA approved the label.

Immediately, environmental groups questioned the claim and petitioned the agency to stop using it, citing livestock’s significant greenhouse gas emissions and the growing pile of research that documents them. These groups and journalism outlets, including Inside Climate News, have asked the agency for the data it used to support its rubber-stamping of Tyson’s label but have essentially gotten nowhere.

“There are lots of misleading claims on food, but it’s hard to imagine a claim that’s more misleading than ‘climate friendly’ beef,” said Scott Faber, a senior vice president at the Environmental Working Group (EWG). “It’s like putting a cancer-free label on a cigarette. There’s no worse food choice for the climate than beef.”

The USDA has since confirmed that it has approved similar labels for other livestock companies and is considering more, but it would not say which ones.

On Wednesday, the EWG, a longtime watchdog of the USDA, published a new analysis, outlining its efforts over the last year to push the agency for more transparency, including asking it to provide the specific rationale for allowing Brazen Beef to carry the “climate friendly” label. Last year, the group filed a Freedom of Information Act request, seeking the data that Tyson supplied to the agency in support of its application, but received only a heavily redacted response. EWG also petitioned the agency to not allow climate friendly or low carbon claims on beef.

To earn the “climate friendly” label, Tyson requires ranchers to meet the criteria of its internal “Climate-Smart Beef” program, but EWG notes that the company fails to provide information about the practices that farmers are required to adopt or about which farmers participate in the program. The only farm it has publicly identified is the Adams company in Nebraska.

A USDA spokesperson told Inside Climate News it can only rely on a third-party verification company to substantiate a label claim and could not provide the data Tyson submitted for its review.

“Because Congress did not provide USDA with on-farm oversight authority that would enable it to verify these types of labeling claims, companies must use third-party certifying organizations to substantiate these claims,” the spokesperson wrote in an email, directing Inside Climate News to the third-party verifier or Tyson for more information.

The third-party verification company, Where Food Comes From, did not respond to emailed questions from Inside Climate News, and Tyson did not respond to emails seeking comment.

The USDA said it is reviewing EWG’s petitions and announced in June 2023 that it’s working on strengthening the “substantiation of animal-raising claims, which includes the type of claim affixed to the Brazen Beef product.”

The agency said other livestock companies were seeking similar labels and that it has approved them, but it would not identify those companies, saying Inside Climate News would have to seek the information through a Freedom of Information Act request.

“They’re being incredibly obstinate about sharing anything right now,” said Matthew Hayek, a researcher with New York University who studies the environmental and climate impacts of the food system. “Speaking as a scientist, it’s not transparent and it’s a scandal in its own right that the government can’t provide this information.”

This lack of transparency from the agency worries environmental and legal advocacy groups, especially now that billions of dollars in taxpayer funds are available for agricultural practices deemed to have benefits for the climate. The Biden administration’s signature climate legislation, the Inflation Reduction Act, appropriated nearly $20 billion for these practices; another $3.1 billion is available through a Biden-era program called the Partnership for Climate-Smart Commodities.

“This is an important test case for USDA,” Faber said. “If they can’t say no to a clearly misleading climate claim like ‘climate friendly’ beef, why should they be trusted to say no to other misleading climate claims? There’s a lot of money at stake.”
