

The wasps that tamed viruses

Xorides praecatorius is a parasitoid wasp.

If you puncture the ovary of a wasp called Microplitis demolitor, viruses squirt out in vast quantities, shimmering like iridescent blue toothpaste. “It’s very beautiful, and just amazing that there’s so much virus made in there,” says Gaelen Burke, an entomologist at the University of Georgia.

M. demolitor is a parasite that lays its eggs in caterpillars, and the particles in its ovaries are “domesticated” viruses that have been tuned to persist harmlessly in wasps and serve their purposes. The virus particles are injected into the caterpillar through the wasp’s stinger, along with the wasp’s own eggs. The viruses then dump their contents into the caterpillar’s cells, delivering genes that are unlike those in a normal virus. Those genes suppress the caterpillar’s immune system and control its development, turning it into a harmless nursery for the wasp’s young.

The insect world is full of species of parasitic wasps that spend their infancy eating other insects alive. And for reasons that scientists don’t fully understand, they have repeatedly adopted and tamed wild, disease-causing viruses and turned them into biological weapons. Half a dozen examples have already been described, and new research hints at many more.

By studying viruses at different stages of domestication, researchers today are untangling how the process unfolds.

Partners in diversification

The quintessential example of a wasp-domesticated virus involves a group called the bracoviruses, which are thought to be descended from a virus that infected a wasp, or its caterpillar host, about 100 million years ago. That ancient virus spliced its DNA into the genome of the wasp. From then on, it was part of the wasp, passed on to each new generation.

Over time, the wasps diversified into new species, and their viruses diversified with them. Bracoviruses are now found in some 50,000 wasp species, including M. demolitor. Other domesticated viruses are descended from different wild viruses that entered wasp genomes at various times.

Researchers debate whether domesticated viruses should be called viruses at all. “Some people say that it’s definitely still a virus; others say it’s integrated, and so it’s a part of the wasp,” says Marcel Dicke, an ecologist at Wageningen University in the Netherlands who described how domesticated viruses indirectly affect plants and other organisms in a 2020 paper in the Annual Review of Entomology.

As the wasp-virus composite evolves, the virus genome becomes scattered through the wasp’s DNA. Some genes decay, but a core set is preserved—those essential for making the original virus’s infectious particles. “The parts are all in these different locations in the wasp genome. But they still can talk to each other. And they still make products that cooperate with each other to make virus particles,” says Michael Strand, an entomologist at the University of Georgia. But instead of containing a complete viral genome, as a wild virus would, domesticated virus particles serve as delivery vehicles for the wasp’s weapons.

Here are the steps in the life of a parasitic wasp that harbors a bracovirus.

Those weapons vary widely. Some are proteins, while others are genes on short segments of DNA. Most bear little resemblance to anything found in wasps or viruses, so it’s unclear where they originated. And they are constantly changing, locked in evolutionary arms races with the defenses of the caterpillars or other hosts.

In many cases, researchers have yet to discover even what the genes and proteins do inside the wasps’ hosts or prove that they function as weapons. But they have untangled some details.

For example, M. demolitor wasps use bracoviruses to deliver a gene called glc1.8 into the immune cells of moth caterpillars. The glc1.8 gene causes the infected immune cells to produce mucus that prevents them from sticking to the wasp’s eggs. Other genes in M. demolitor’s bracoviruses force immune cells to kill themselves, while still others prevent caterpillars from smothering parasites in sheaths of melanin.



Analyst on Starlink’s rapid rise: “Nothing short of mind-blowing”

$tarlink —

Starlink’s estimated free cash flow this year is about $600 million.

60 Starlink satellites stacked for launch at SpaceX facility in Cape Canaveral, Florida in 2019.

According to the research firm Quilty Space, SpaceX’s Starlink satellite Internet business is now profitable.

During a webinar on Thursday, analysts from the firm outlined the reasons why they think SpaceX has been able to achieve a positive cash flow in its space Internet business just five years after the first batch of 60 satellites was launched.

The co-founder of the firm, Chris Quilty, said the rapidity of Starlink’s rise surprised a lot of people, including himself. “A lot of industry veterans kind of scoffed at the idea,” he said. “We’d seen this before.”

Some history

Both SpaceX and another company, OneWeb, announced plans to build satellite megaconstellations in 2015 to deliver broadband Internet from low-Earth orbit. There was a lot of skepticism in the space community at the time because such plans had come and gone before, including a $9 billion constellation proposed by Teledesic with about 800 satellites that only ever managed to put a single demonstration satellite into space.

The thinking was that it would be too difficult to launch that many spacecraft and too technically challenging to get them all to communicate. Quilty recalled his own comments on the proposals back in 2015.

Analysis of Starlink financials in the last three years. (Quilty Space)

“I correctly forecast that there would be no near term impact on the industry, but boy, was I wrong on the long-term impact,” he said. “I think I called for possibly a partial impact on certain segments of the industry. Incorrect. But remember the context back in 2015, the largest constellation in existence was Iridium with 66 satellites, and back in 2015, it wasn’t even entirely clear that they were going to make it successfully without a second dip into bankruptcy.”

It is clear that SpaceX has overcome the launch and technical challenges. The company has deployed nearly 6,000 satellites, with more than 5,200 still operational and delivering Internet to 2.7 million customers in 75 countries. But is the service profitable? That’s the question Quilty and his research team sought to address.

Build a model

Because Starlink is part of SpaceX’s portfolio, the company’s true financial situation is private. So Quilty built a model to assess the company’s profitability. First, the researchers assessed revenue. The firm estimates this will grow to $6.6 billion in 2024, up from essentially zero just four years ago.

“What Starlink achieved in the past three years is nothing short of mind-blowing,” Quilty said. “If you want to put that in context, SES and Intelsat announced in the last two weeks—these are the two largest geo-satellite operators—that they’re going to combine. They’ll have combined revenues of about 4.1 billion.”

In addition to rapidly growing its subscriber base, SpaceX has managed to control costs. It has built its satellites, which are connected to Internet hubs on Earth and beam connectivity to user terminals, for far less money than historical rivals. The version 1.0 satellites are estimated to have cost just $200,000.

Building satellites for less. (Quilty Space)

How has SpaceX done this? Caleb Henry, director of research for Quilty, pointed to three major factors.

“One is, they really, really aggressively vertically integrate, and that allows them to keep costs down by not having to absorb the profit margins from outside suppliers,” he said. “They really designed for manufacture and for cheap manufacture. And you can kind of see that in some of the component selections and designs that they’ve used. And then they’ve also built really high volume, so a production cadence and rate that the industry has not seen before.”

Getting to a profit

Quilty estimates that Starlink will have an EBITDA of $3.8 billion this year. EBITDA stands for earnings before interest, taxes, depreciation, and amortization, and it indicates how well a company is managing its day-to-day operations. Additionally, Quilty estimates that capital expenditures for Starlink will be $3.1 billion this year. This leaves an estimated free cash flow from the business of about $600 million. In other words, Starlink is making money for SpaceX. It is self-sustaining.
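The arithmetic behind these estimates is straightforward: free cash flow is EBITDA minus capital expenditures. A minimal sketch using the rounded figures quoted above (the rounded inputs give $0.7 billion, consistent with the roughly $600 million Quilty cites from unrounded numbers):

```python
# Quilty Space's 2024 Starlink estimates, in billions of US dollars.
revenue = 6.6   # estimated 2024 revenue
ebitda = 3.8    # earnings before interest, taxes, depreciation, amortization
capex = 3.1     # estimated 2024 capital expenditures

# Free cash flow: operating earnings left after capital spending.
free_cash_flow = ebitda - capex

print(f"EBITDA margin:  {ebitda / revenue:.0%}")
print(f"Free cash flow: ${free_cash_flow:.1f}B")
```

A positive result here is what "self-sustaining" means in practice: the business funds its own satellite build-out from operating earnings.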

According to Quilty’s analysis, the Starlink business has also addressed some concerns about its long-term financial viability. For example, it no longer subsidizes the cost of user terminals in the United States, and the replenishment costs for satellites in orbit are manageable.

These figures, it should be noted, do not include SpaceX’s Starshield business, which is building custom satellites for the US military for observation purposes and will likely leverage its Starlink technology.

There is also room for significant growth for Starlink as the larger Starship rocket comes online and begins to launch version 3.0 Starlink satellites. These are significantly chunkier, likely about 1.5 metric tons each, and will deliver far more broadband capacity and enable direct-to-cell communications, removing the need for user terminals.



Outdoing the dinosaurs: What we can do if we spot a threatening asteroid

We’d like to avoid this. (Science Photo Library/Andrzej Wojcicki/Getty Images)

In 2005, the United States Congress laid out a clear mandate: to protect our civilization and perhaps our very species, by 2020, the nation should be able to detect, track, catalog, and characterize no less than 90 percent of all near-Earth objects at least 140 meters across.

As of today, four years after that deadline, we have identified fewer than half and characterized only a small percentage of those possible threats. Even if we did have a full census of all threatening space rocks, we do not have the capabilities to rapidly respond to an Earth-intersecting asteroid (despite the success of NASA’s Double-Asteroid Redirection Test, or DART, mission).

Some day in the finite future, an object will pose a threat to us—it’s an inevitability of life in our Solar System. The good news is that it’s not too late to do something about it. But it will take some work.

Close encounters

The dangers are, to put it bluntly, everywhere around us. The International Astronomical Union’s Minor Planet Center, which maintains a list of (no points awarded for guessing correctly) minor planets within the Solar System, has a running tally. At the time of writing, the Center has recorded 34,152 asteroids with orbits that come within 0.05 AU of the Earth (an AU is one astronomical unit, the average distance between the Earth and the Sun).
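For a sense of scale, that 0.05 AU cutoff can be converted into more familiar units. A quick check (the AU and Earth-Moon distance constants below are standard astronomical values, not figures from the article):

```python
AU_KM = 149_597_870.7        # one astronomical unit, in kilometers
LUNAR_DISTANCE_KM = 384_400  # average Earth-Moon distance, in kilometers

# The Minor Planet Center's near-Earth cutoff, converted to kilometers.
threshold_km = 0.05 * AU_KM

print(f"0.05 AU = {threshold_km:,.0f} km "
      f"= {threshold_km / LUNAR_DISTANCE_KM:.1f} lunar distances")
```

So "near-Earth" here means passing within roughly 7.5 million kilometers, about 19 times farther out than the Moon.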

These near-Earth asteroids (or NEAs for short, sometimes called NEOs, for near-Earth objects) aren’t necessarily going to impact the Earth. But they’re the most likely ones to do it; in all the billions of kilometers that encompass the wide expanse of our Solar System, these are the ones that live in our neighborhood.

And impact they do. The larger planets and moons of our Solar System are littered with the craterous scars of past violent collisions. The only reason the Earth doesn’t have the same amount of visible damage as, say, the Moon is that our planet constantly reshapes its surface through erosion and plate tectonics.

It’s through craters elsewhere that astronomers have built up a sense of how often a planet like the Earth experiences a serious impact and the typical sizes of those impactors.

Tiny things happen all the time. When you see a beautiful shooting star streaking across the night sky, that’s from the “impact” of an object somewhere between the size of a grain of sand and a tiny pebble striking our atmosphere at a few tens of thousands of kilometers per hour.

Every few years, an object 10 meters across hits us; when it does, it delivers energy roughly equivalent to that of our earliest atomic weapons. Thankfully, most of the Earth is open ocean, and most impactors of this class burst apart in the upper atmosphere, so we typically don’t have to worry too much about them.

The much larger—but thankfully much rarer—asteroids are what cause us heartburn. This is where we get into the delightful mathematics of attempting to calculate an existential risk to humanity.

At one end of the scale, we have the kind of stuff that kills dinosaurs and envelops the globe in a shroud of ash. These rocks are several kilometers across but only come into Earth-crossing trajectories every few million years. One of them would doom us—certainly our civilization and likely our species. The combination of the unimaginable scale of devastation and the incredibly small likelihood of it occurring puts this kind of threat almost beyond human comprehension—and intervention. For now, we just have to hope that our time isn’t up.

Then there are the in-betweeners. These are the space rocks starting at a hundred meters across. Upon impact, they release a minimum of 30 megatons of energy, which is capable of leaving a crater a couple of kilometers across. Those kinds of dangers present themselves roughly every 10,000 years.
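The energy figures above follow from basic kinetic-energy arithmetic. A hedged sketch, assuming a rocky density of 3,000 kg/m³ and an impact speed of 17 km/s (typical values for near-Earth asteroids; these assumptions are mine, not the article's):

```python
import math

MEGATON_J = 4.184e15  # joules per megaton of TNT equivalent

def impact_energy_mt(diameter_m, density=3000.0, speed_m_s=17_000.0):
    """Kinetic energy of a spherical rocky impactor, in megatons of TNT."""
    radius = diameter_m / 2
    volume = (4 / 3) * math.pi * radius**3
    mass = density * volume            # kg
    energy_j = 0.5 * mass * speed_m_s**2
    return energy_j / MEGATON_J

# A ~10 m object lands in the tens-of-kilotons range: early-atomic-weapon
# territory, as the article notes.
print(f"10 m:  {impact_energy_mt(10) * 1000:.0f} kt")
# A ~100 m object clears the article's 30-megaton floor under these
# assumptions.
print(f"100 m: {impact_energy_mt(100):.0f} Mt")
```

Because mass scales with the cube of diameter, a tenfold increase in size means a thousandfold increase in impact energy, which is why the jump from shooting stars to city-killers happens over such a narrow size range.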

That’s an interesting time scale. Our written history stretches back thousands of years, and our institutions have existed for thousands of years. We can envision our civilization, our ways of life, and our humanity continuing into the future for thousands of years.

This means that at some point, either we or our descendants will have to deal with a threat of this magnitude. Not a rock large enough to hit the big reset button on life but powerful enough to present a scale of disaster not yet seen in human history.



NASA confirms “independent review” of Orion heat shield issue

The Orion spacecraft after splashdown in the Pacific Ocean at the end of the Artemis I mission.

NASA has asked a panel of outside experts to review the agency’s investigation into the unexpected loss of material from the heat shield of the Orion spacecraft on a test flight in 2022.

Chunks of charred material cracked and chipped away from Orion’s heat shield during reentry at the end of the 25-day unpiloted Artemis I mission in December 2022. Engineers inspecting the capsule after the flight found more than 100 locations where the stresses of reentry stripped away pieces of the heat shield as temperatures built up to 5,000° Fahrenheit.

This was the most significant discovery from Artemis I, an unpiloted test flight that took the Orion capsule around the Moon for the first time. The next mission in NASA’s Artemis program, Artemis II, is scheduled for launch late next year on a test flight to send four astronauts around the far side of the Moon.

Another set of eyes

The heat shield, made of a material called Avcoat, is attached to the base of the Orion spacecraft in 186 blocks. Avcoat is designed to ablate, or erode, in a controlled manner during reentry. Instead, fragments fell off the heat shield, leaving cavities resembling potholes.

Investigators are still looking for the root cause of the heat shield problem. Since the Artemis I mission, engineers conducted sub-scale tests of the Orion heat shield in wind tunnels and high-temperature arcjet facilities. NASA has recreated the phenomenon observed on Artemis I in these ground tests, according to Rachel Kraft, an agency spokesperson.

“The team is currently synthesizing results from a variety of tests and analyses that inform the leading theory for what caused the issues,” Kraft said.

Last week, nearly a year and a half after the Artemis I flight, the public got its first look at the condition of the Orion heat shield with post-flight photos released in a report from NASA’s inspector general. Cameras aboard the Orion capsule also recorded pieces of the heat shield breaking off the spacecraft during reentry.

NASA’s inspector general said the char loss issue “creates a risk that the heat shield may not sufficiently protect the capsule’s systems and crew from the extreme heat of reentry on future missions.”

“Those pictures, we’ve seen them since they were taken, but more importantly… we saw it,” said Victor Glover, pilot of the Artemis II mission, in a recent interview with Ars. “More than any picture or report, I’ve seen that heat shield, and that really set the bit for how interested I was in the details.”



DeepMind adds a diffusion engine to latest protein-folding software

Added complexity —

Major under-the-hood changes let AlphaFold handle protein-DNA complexes and more.

Prediction of the structure of a coronavirus Spike protein from a virus that causes the common cold. (Google DeepMind)

Most of the activities that go on inside cells—the activities that keep us living, breathing, thinking animals—are handled by proteins. They allow cells to communicate with each other, run a cell’s basic metabolism, and help convert the information stored in DNA into even more proteins. And all of that depends on the ability of the protein’s string of amino acids to fold up into a complicated yet specific three-dimensional shape that enables it to function.

Up until this decade, understanding that 3D shape meant purifying the protein and subjecting it to a time- and labor-intensive process to determine its structure. But that changed with the work of DeepMind, one of Google’s AI divisions, which released AlphaFold in 2021, and a similar academic effort shortly afterward. The software wasn’t perfect; it struggled with larger proteins and didn’t offer high-confidence solutions for every protein. But many of its predictions turned out to be remarkably accurate.

Even so, these structures only told half of the story. To function, almost every protein has to interact with something else—other proteins, DNA, chemicals, membranes, and more. And, while the initial version of AlphaFold could handle some protein-protein interactions, the rest remained black boxes. Today, DeepMind is announcing the availability of version 3 of AlphaFold, which has seen parts of its underlying engine either heavily modified or replaced entirely. Thanks to these changes, the software now handles various additional protein interactions and modifications.

Changing parts

The original AlphaFold relied on two underlying software functions. One of those took evolutionary limits on a protein into account. By looking at the same protein in multiple species, you can get a sense for which parts are always the same, and therefore likely to be central to its function. That centrality implies that they’re always likely to be in the same location and orientation in the protein’s structure. To do this, the original AlphaFold found as many versions of a protein as it could and lined up their sequences to look for the portions that showed little variation.

Doing so, however, is computationally expensive since the more proteins you line up, the more constraints you have to resolve. In the new version, the AlphaFold team still identified multiple related proteins but switched to largely performing alignments using pairs of protein sequences from within the set of related ones. This probably isn’t as information-rich as a multi-alignment, but it’s far more computationally efficient, and the lost information doesn’t appear to be critical to figuring out protein structures.

Using these alignments, a separate software module figured out the spatial relationships among pairs of amino acids within the target protein. Those relationships were then translated into spatial coordinates for each atom by code that took into account some of the physical properties of amino acids, like which portions of an amino acid could rotate relative to others, etc.

In AlphaFold 3, the prediction of atomic positions is handled by a diffusion module, which is trained by being given both a known structure and versions of that structure where noise (in the form of shifting the positions of some atoms) has been added. This allows the diffusion module to take the inexact locations described by relative positions and convert them into exact predictions of the location of every atom in the protein. It doesn’t need to be told the physical properties of amino acids, because it can figure out what they normally do by looking at enough structures.

(DeepMind had to train on two different levels of noise to get the diffusion module to work: one in which the locations of atoms were shifted while the general structure was left intact and a second where the noise involved shifting the large-scale structure of the protein, thus affecting the location of lots of atoms.)
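The two noise regimes can be illustrated with a toy example (a hypothetical five-atom "structure," not real training code): independent per-atom jitter distorts local geometry while leaving the overall shape roughly intact, whereas a rigid shift of the whole structure moves every atom together but preserves inter-atom geometry exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
coords = rng.normal(size=(5, 3))  # toy structure: 5 atoms in 3-D space

# Regime 1: small independent jitter applied to each atom's position.
jittered = coords + 0.1 * rng.normal(size=coords.shape)

# Regime 2: one rigid translation applied to the whole structure.
shifted = coords + rng.normal(size=(1, 3))

def pairwise_distances(x):
    # Matrix of inter-atom distances; invariant under rigid translation.
    diffs = x[:, None, :] - x[None, :, :]
    return np.linalg.norm(diffs, axis=-1)

# The rigid shift leaves internal geometry untouched...
assert np.allclose(pairwise_distances(coords), pairwise_distances(shifted))
# ...while per-atom jitter perturbs it.
assert not np.allclose(pairwise_distances(coords), pairwise_distances(jittered))
```

This is only an illustration of the two corruption types, not AlphaFold 3's actual training pipeline; the real diffusion module learns to reverse such corruptions across a continuum of noise scales, and the two regimes expose different structural information for it to learn from.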

During training, the team found that it took about 20,000 instances of protein structures for AlphaFold 3 to get about 97 percent of a set of test structures right. By 60,000 instances, it started getting protein-protein interfaces correct at that frequency, too. And, critically, it started getting proteins complexed with other molecules right, as well.



No one has seen the data behind Tyson’s “climate friendly beef” claim

The Environmental Working Group published a new analysis on Wednesday outlining its efforts to push the USDA for more transparency, including asking for specific rationale in allowing brands to label beef as “climate friendly.” (Carolyn Van Houten/Washington Post via Getty)

About five miles south of Broken Bow, in the heart of central Nebraska, thousands of cattle stand in feedlots at Adams Land & Cattle Co., a supplier of beef to the meat giant Tyson Foods.

From the air, the feedlots look dusty brown and packed with cows—not a vision of happy animals grazing on open pastureland, enriching the soil with carbon. But when the animals are slaughtered, processed, and sent onward to consumers, labels on the final product can claim that they were raised in a “climate friendly” way.

In late 2022, Tyson—one of the country’s “big four” meat packers—applied to the US Department of Agriculture (USDA), seeking a “climate friendly” label for its Brazen Beef brand. The production of Brazen Beef, the label claims, achieves a “10 percent greenhouse gas reduction.” Soon after, the USDA approved the label.

Immediately, environmental groups questioned the claim and petitioned the agency to stop using it, citing livestock’s significant greenhouse gas emissions and the growing pile of research that documents them. These groups and journalism outlets, including Inside Climate News, have asked the agency for the data it used to support its rubber-stamping of Tyson’s label but have essentially gotten nowhere.

“There are lots of misleading claims on food, but it’s hard to imagine a claim that’s more misleading than ‘climate friendly’ beef,” said Scott Faber, a senior vice president at the Environmental Working Group (EWG). “It’s like putting a cancer-free label on a cigarette. There’s no worse food choice for the climate than beef.”

The USDA has since confirmed that it is considering similar labels for more livestock companies and has approved some, but it would not say which ones.

On Wednesday, the EWG, a longtime watchdog of the USDA, published a new analysis, outlining its efforts over the last year to push the agency for more transparency, including asking it to provide the specific rationale for allowing Brazen Beef to carry the “climate friendly” label. Last year, the group filed a Freedom of Information Act request, seeking the data that Tyson supplied to the agency in support of its application, but received only a heavily redacted response. EWG also petitioned the agency to not allow climate friendly or low carbon claims on beef.

To earn the “climate friendly” label, Tyson requires ranchers to meet the criteria of its internal “Climate-Smart Beef” program, but EWG notes that the company fails to provide information about the practices that farmers are required to adopt or about which farmers participate in the program. The only farm it has publicly identified is the Adams company in Nebraska.

A USDA spokesperson told Inside Climate News it can only rely on a third-party verification company to substantiate a label claim and could not provide the data Tyson submitted for its review.

“Because Congress did not provide USDA with on-farm oversight authority that would enable it to verify these types of labeling claims, companies must use third-party certifying organizations to substantiate these claims,” the spokesperson wrote in an email, directing Inside Climate News to the third-party verifier or Tyson for more information.

The third-party verification company, Where Food Comes From, did not respond to emailed questions from Inside Climate News, and Tyson did not respond to emails seeking comment.

The USDA said it is reviewing EWG’s petitions and announced in June 2023 that it’s working on strengthening the “substantiation of animal-raising claims, which includes the type of claim affixed to the Brazen Beef product.”

The agency said other livestock companies were seeking similar labels and that the agency has approved them, but would not identify those companies, saying Inside Climate News would have to seek the information through a Freedom of Information Act request.

“They’re being incredibly obstinate about sharing anything right now,” said Matthew Hayek, a researcher with New York University who studies the environmental and climate impacts of the food system. “Speaking as a scientist, it’s not transparent and it’s a scandal in its own right that the government can’t provide this information.”

This lack of transparency from the agency worries environmental and legal advocacy groups, especially now that billions of dollars in taxpayer funds are available for agricultural practices deemed to have benefits for the climate. The Biden administration’s signature climate legislation, the Inflation Reduction Act, appropriated nearly $20 billion for these practices; another $3.1 billion is available through a Biden-era program called the Partnership for Climate-Smart Commodities.

“This is an important test case for USDA,” Faber said. “If they can’t say no to a clearly misleading climate claim like ‘climate friendly’ beef, why should they be trusted to say no to other misleading climate claims? There’s a lot of money at stake.”



Amid two wrongful death lawsuits, Panera to pull the plug on “charged” drinks

Zapped —

A large serving previously contained nearly as much caffeine as the FDA’s daily safe limit.

Dispensers for Charged Lemonade, a caffeinated lemonade drink, at Panera Bread, Walnut Creek, California, March 27, 2023.

Panera Bread will stop selling its highly caffeinated “Charged” drinks, which have been the subject of at least three lawsuits and linked to at least two deaths.

It is unclear when exactly the company will pull the plug on the potent potables, but in a statement to Ars Tuesday, Panera said it was undergoing a “menu transformation” that includes an “enhanced beverage portfolio.” The company plans to roll out various new drinks, including a lemonade and tea, but a spokesperson confirmed that the new flavors would not contain added caffeine as the “charged” drinks did.

The fast-casual cafe-style chain drew national attention in 2022 for the unexpectedly high caffeine levels in the drinks, which were initially offered as self-serve with free refills.

The versions of the drinks at the time were labeled as containing 389 mg to 390 mg of caffeine in a large, 30-ounce drink, while the other option, a 20-ounce regular, contained 260 mg. According to the Food and Drug Administration, a limit of 400 mg of caffeine per day is generally considered safe for healthy adults, but a smaller amount is advised for adults with certain medical conditions or who are pregnant or breastfeeding. A standard 8-ounce cup of coffee generally contains between 80 and 100 mg of caffeine, while a Red Bull energy drink contains 80 mg.
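The dose arithmetic is stark when laid out. A quick check using the figures above (the coffee equivalence uses the article's 80-100 mg per-cup range):

```python
FDA_DAILY_LIMIT_MG = 400  # generally considered safe for healthy adults
LARGE_CHARGED_MG = 390    # labeled caffeine in a 30-ounce large, as sold then

# One large drink consumed nearly the entire daily limit on its own.
print(f"Share of daily limit: {LARGE_CHARGED_MG / FDA_DAILY_LIMIT_MG:.0%}")

# Equivalent cups of coffee at 80-100 mg per 8-ounce cup.
print(f"Coffee equivalent: {LARGE_CHARGED_MG / 100:.1f} to "
      f"{LARGE_CHARGED_MG / 80:.1f} cups")
```

That single large, in other words, delivered the caffeine of roughly four to five cups of coffee, before any free refills.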

In September 2022, Sarah Katz, a 21-year-old with a heart condition, died after allegedly drinking one of the highly caffeinated lemonades from a restaurant in Philadelphia. In a wrongful death lawsuit filed against Panera in October 2023, Katz’s parents alleged that she didn’t know the drink contained potentially dangerous amounts of caffeine. Rather, she was “reasonably confident it was a traditional lemonade and/or electrolyte sports drink containing a reasonable amount of caffeine safe for her to drink,” the lawsuit stated.

Also in October, Dennis Brown, a 46-year-old man in Florida, went into cardiac arrest while walking home from a Panera, where he allegedly drank a charged lemonade and then had two refills. His family filed a lawsuit against Panera in December.

According to CNN, a third lawsuit was filed in January by a woman who claims she developed an irregularly fast heartbeat and palpitations after drinking two-and-a-half caffeinated lemonades in April 2023. “The primary reason she ordered this drink was because it was advertised as ‘plant-based’ and ‘clean,’” the complaint states.

In a statement to Ars in December, Panera said it “stands firmly by the safety of our products.” However, the company increased warnings on the drinks last year and moved containers behind the counter in some stores. Most notably, it also reduced the labeled amount of caffeine in the drinks. The current menu lists the “Charged Sips” drinks as having between 155 mg and 302 mg, depending on the flavor and size.



The surprise is not that Boeing lost commercial crew but that it finished at all

Boeing really is going —

“The structural inefficiency was a huge deal.”

Boeing’s Starliner spacecraft is lifted to be placed atop an Atlas V rocket for its first crewed launch. (United Launch Alliance)

NASA’s senior leaders in human spaceflight gathered for a momentous meeting at the agency’s headquarters in Washington, DC, almost exactly ten years ago.

These were the people who, for decades, had developed and flown the Space Shuttle. They oversaw the construction of the International Space Station. Now, with the shuttle’s retirement, these princely figures in the human spaceflight community were tasked with selecting a replacement vehicle to send astronauts to the orbiting laboratory.

Boeing was the easy favorite. The majority of engineers and other participants in the meeting argued that Boeing alone should win a contract worth billions of dollars to develop a crew capsule. Only toward the end did a few voices speak up in favor of a second contender, SpaceX. At the meeting’s conclusion, NASA’s chief of human spaceflight at the time, William Gerstenmaier, decided to hold off on making a final decision.

A few months later, NASA publicly announced its choice. Boeing would receive $4.2 billion to develop a “commercial crew” transportation system, and SpaceX would get $2.6 billion. It was not a total victory for Boeing, which had lobbied hard to win all of the funding. But the company still walked away with nearly two-thirds of the money and the widespread presumption that it would easily beat SpaceX to the space station.

The sense of triumph would prove to be fleeting. Boeing decisively lost the commercial crew space race, and it proved to be a very costly affair.

With Boeing’s Starliner spacecraft finally due to take flight this week with astronauts on board, we know the extent of the loss, both in time and money. SpaceX’s Crew Dragon first carried people to the space station nearly four years ago. In that span, the vehicle has flown 13 public and private missions to orbit. Because of this success, Dragon will end up flying 14 operational missions to the station for NASA, earning a tidy fee each time, compared to just six for Starliner. Through last year, Boeing has taken $1.5 billion in charges due to delays and overruns with its spacecraft development.

So what happened? How did Boeing, the gold standard in human spaceflight for decades, fall so far behind on crew? This story, based largely on interviews with unnamed current and former employees of Boeing and contractors who worked on Starliner, attempts to provide some answers.

The early days

When the contracts were awarded, SpaceX had the benefit of working with NASA to develop a cargo variant of Dragon, which by 2014 was flying regular missions to the space station. But the company had no experience with human spaceflight. Boeing, by contrast, had decades of spaceflight experience, but it had to start from scratch with Starliner.

Each faced a deeper cultural challenge. A decade ago, SpaceX was deep into several major projects, including developing a new version of the Falcon 9 rocket, flying more frequently, experimenting with landing and reuse, and doing cargo supply missions. This new contract meant more money but a lot more work. A NASA engineer who worked closely with both SpaceX and Boeing in this time frame recalls visiting SpaceX and the atmosphere being something like a frenzied graduate school, where all of the employees were being pulled in different directions. Getting engineers to focus on Crew Dragon was difficult.

But at least SpaceX was in its natural environment. Boeing’s space division had never won a large fixed-price contract. Its leaders were used to operating in a cost-plus environment, in which Boeing could bill the government for all of its expenses and earn a fee. Cost overruns and delays were not the company’s problem—they were NASA’s. Now Boeing had to deliver a flyable spacecraft for a firm, fixed price.

Boeing struggled to adjust to this environment. When it came to complicated space projects, Boeing was used to spending other people’s money. Now, every penny spent on Starliner meant one less penny in profit (or, ultimately, greater losses). This meant that Boeing allocated fewer resources to Starliner than it needed to thrive.

“The difference between the two companies’ cultures, design philosophies, and decision-making structures allowed SpaceX to excel in a fixed-price environment, where Boeing stumbled, even after receiving significantly more funding,” said Lori Garver in an interview. She was deputy administrator of NASA from 2009 to 2013, during the formative years of the commercial crew program, and is the author of Escaping Gravity.

So Boeing faced financial pressure from the beginning. At the same time, it was confronting major technical challenges. Building a human spacecraft is very difficult. Some of the biggest hurdles would be flight software and propulsion.

The surprise is not that Boeing lost commercial crew but that it finished at all Read More »

mayans-burned-and-buried-dead-political-regimes

Mayans burned and buried dead political regimes

Winning isn’t everything! —

After burning, the remains were dumped in construction fill.

A long, rectangular stone building.

Enlarge / Mayans built impressive structures and occasionally put interesting items in the construction fill.

As civilizations evolve, so do the political regimes that govern them. But the transition from one era to another is not always quiet. Some ancient Mayan rulers made a very fiery public statement about who was in charge.

When archaeologists dug up the burned fragments of royal bodies and artifacts at the Mayan archaeological site of Ucanal in Guatemala, they realized they were looking at the last remnants of a fallen regime. There was no scorching on the walls of the structure they were found beneath. This could only have meant that the remains (which had already been in their tombs for a hundred years) were consumed by flames in one place and buried in another. But why?

The team of archaeologists, led by Christina T. Halperin of the University of Montreal, think this was the doing of a new leader who wanted to annihilate all traces of the old regime. He couldn’t just burn them. He also had to bury them where they would be forgotten.

Into the fire

While there is other evidence of Mayans burning bodies and objects from old regimes, a ritual known as och-i k’ak’ t-u-muk-il (“the fire entered his/her tomb”), this is the first time burnt royal remains have been discovered somewhere other than their original tomb. They were found underneath construction fill at the base of a temple whose upper portions are thought to have been made from perishable materials that have not survived.

Radiocarbon dating revealed these remains were burned around the same time as the ascent of the ruler Papmalil, who assumed the title of ochk’in kaloomte’ or “western overlord,” suggesting he may have been foreign. Inscriptions of his name were seen at the same site where the burnt fragments were unearthed. Papmalil’s rise meant the fall of the K’anwitznal dynasty—the one that the bones and ornaments most likely belonged to. It also marked the start of a period of great prosperity.

“Papmalil’s rule was not only seminal because of his possible foreign origins—perhaps breaking the succession of ruling dynasts at the site—but also because his rule shifted political dynamics in the southern Maya Lowlands,” the archaeologists said in a study recently published in the journal Antiquity.

The overthrow of the K’anwitznal dynasty is evidenced on the wall of a temple at Caracol, a site not far from Ucanal. An engraving on a Caracol altar shows a captive K’anwitznal ruler in bondage. Other engravings made only two decades later depict Papmalil as the ruling figure, and the way he is pictured giving gifts to other kings is a testament to his regime’s increased strength in foreign relations.

Ashes to ashes

The archaeological team sees Papmalil’s accession as a pivotal point after which the city of Ucanal would go on to thrive. As other rulers had done before him, he apparently wanted to dismantle the old regime and make the fall of the K’anwitznal rulers known to everyone. Though the location of the K’anwitznal tombs is unknown, the team used a map of the site they had already made to determine that the temple where the burnt remains were found stood in what was once a public plaza.

Halperin thinks that the bones of these royals and the lavish ornaments they were buried with were believed to have had some sort of life force or spirit that needed to be conquered before the new regime would be secure. Shrinkage, warping, and discoloration made it evident that the human bones, which belonged to four individuals (three of whom were determined to be male), had been burned, suggesting temperatures of at least 800° C (1,472° F). Fractures and fissures on the jade and greenstone ornaments were also signs of burning at high temperatures.

“Because the fire-burning event itself had the potential to be highly ceremonial, public, and charged with emotion, it could dramatically mark the dismantling of an ancient regime,” the team said in the same study.

To the archaeologists, there is almost no doubt that the burning of the bones and artifacts found at the Ucanal site was an act of desecration, even though the location where they were thrown into the fire is still a mystery. What convinces them is that the remains were treated no differently than construction debris, deposited at the base of a temple during construction.

Other findings from cremations have shown a level of reverence for the bones of deposed rulers and dynasties. At another site that Halperin also investigated, the cremated bones of a queen were arranged carefully along with her jewelry. That was apparently not enough for Papmalil. Even today, some leaders just feel the need to be heard more loudly than others.

Antiquity, 2024.  DOI: 10.15184/aqy.2024.38

Mayans burned and buried dead political regimes Read More »

two-seconds-of-hope-for-fusion-power

Two seconds of hope for fusion power

image of a person in protective clothing, standing in a circular area with lots of mirrored metal panels.

Enlarge / The interior of the DIII-D tokamak.

Using nuclear fusion, the process that powers the stars, to produce electricity on Earth has famously been 30 years away for more than 70 years. But now, a breakthrough experiment done at the DIII-D National Fusion Facility in San Diego may finally push nuclear fusion power plants to be roughly 29 years away.

Nuclear fusion ceiling

The DIII-D facility is run by General Atomics for the Department of Energy. It includes an experimental tokamak, a donut-shaped nuclear fusion device that works by trapping astonishingly hot plasma in very strong, toroidal magnetic fields. Tokamaks, compared to other fusion reactor designs like stellarators, are the furthest along in their development; ITER, the world’s first power-plant-size fusion device now under construction in France, is scheduled to run its first tests with plasma in December 2025.

But tokamaks have always had some issues. Back in 1988, Martin Greenwald, a Massachusetts Institute of Technology expert on plasma physics, proposed an equation that described an apparent limit on how dense plasma could get in tokamaks. He argued that the maximum attainable density is dictated by the minor radius of a tokamak and the current induced in the plasma to maintain magnetic stability. Going beyond that limit was supposed to make the magnets incapable of holding the plasma, heated to north of 150 million degrees Celsius, away from the walls of the machine.

Since the power output of a tokamak is proportional to the square of fuel density, this limit didn’t bode well for fusion power plants. A commercial reactor would either need to be huge or drive absurdly high plasma currents. The former meant it would be catastrophically expensive to build, and the latter that it would be expensive to run.

But there has been hope. Since then, many research teams working at different tokamak facilities—including the Joint European Torus (JET) in Britain and the ASDEX Upgrade in Germany—achieved plasma densities exceeding the Greenwald limit. In response, Martin Greenwald himself revised his claim a bit, saying that the limit applied not to the line-averaged plasma density in the entire reactor but only to the portion of the plasma occupying less than 10 percent of the radius near the reactor’s wall.

While the actual density numbers were pushed a little, the working principle behind the Greenwald limit still held—when the plasma density went up above the Greenwald line, the quality of confinement went down. “The major phenomenon people discovered in the high-density experiments was reduced energy confinement when plasma density was increased,” said Siye Ding, a researcher at General Atomics working at the DIII-D National Fusion Facility.

To use fusion for energy production, we need both high density and high confinement. “For the first time, we have experimentally demonstrated how to resolve this problem,” said Ding.

Self-organizing puzzle

“When you make a plasma in your reactor, there is a whole combination of parameters,” explained Andrea Garofalo, a sciences manager at General Atomics who worked on the experiment at DIII-D. “What is the plasma current, what is the toroidal field, what is the external heating versus time. Combinations of such parameters can vary in tokamaks—you can have plasma current higher or lower, you can start the heating early, you can start it later. All this comprises what we call a scenario.”

“We’re talking about optimizing the waveforms of power, fueling, etc. to achieve the right configuration,” he added.

The configuration he and his colleagues achieved (called the high-poloidal-beta scenario) worked like a charm.

People working on nuclear fusion use various metrics that integrate multiple parameters into simple numbers to make it easier to compare the performance of different fusion experiments. The H98Y metric tracks the quality of confinement; the high-confinement mode that will be used at ITER has H98Y equal to 1. The Greenwald fraction, FGR, describes how far below or above the Greenwald limit the plasma density is; FGR equal to 1 means a density exactly at the Greenwald limit.
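For reference, the Greenwald limit is conventionally written as n_G = I_p / (πa²), with the plasma current I_p in megaamperes and the minor radius a in meters, giving n_G in units of 10²⁰ particles per cubic meter; the Greenwald fraction is then just the measured line-averaged density divided by n_G. Here is a minimal sketch, with illustrative numbers rather than values from the DIII-D experiment:

```python
import math

def greenwald_density(plasma_current_MA: float, minor_radius_m: float) -> float:
    """Greenwald density limit n_G = I_p / (pi * a^2),
    in units of 10^20 particles per cubic meter."""
    return plasma_current_MA / (math.pi * minor_radius_m ** 2)

def greenwald_fraction(line_avg_density_1e20: float,
                       plasma_current_MA: float,
                       minor_radius_m: float) -> float:
    """FGR = measured line-averaged density / Greenwald limit.
    FGR > 1 means the plasma is denser than the Greenwald limit."""
    return line_avg_density_1e20 / greenwald_density(plasma_current_MA, minor_radius_m)

# Illustrative numbers only: a 1 MA plasma in a device
# with a 0.6 m minor radius.
n_g = greenwald_density(1.0, 0.6)          # ~0.88 x 10^20 m^-3
fgr = greenwald_fraction(1.2, 1.0, 0.6)    # ~1.36, i.e. above the limit
```

Note how strongly the limit rewards either a bigger machine or a larger driven current, which is exactly the cost trade-off described above.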

Two seconds of hope for fusion power Read More »

glow-of-an-exoplanet-may-be-from-starlight-reflecting-off-liquid-iron

Glow of an exoplanet may be from starlight reflecting off liquid iron

For all the glory —

A phenomenon called a “glory” may be happening on a hellishly hot giant planet.

Image of a planet on a dark background, with an iridescent circle on the right side of the planet.

Enlarge / Artist impression of a glory on exoplanet WASP-76b.

Do rainbows exist on distant worlds? Many phenomena that happen on Earth—such as rain, hurricanes, and auroras—also occur on other planets in our Solar System if the conditions are right. Now we have evidence from outside our Solar System that one particularly strange exoplanet might even be displaying something close to a rainbow.

Appearing in the sky as a halo of colors, a phenomenon called a “glory” occurs when light hits clouds made up of a homogeneous substance in the form of spherical droplets. It might be the explanation for a mystery regarding observations of exoplanet WASP-76b. This planet, a scorching gas giant that experiences molten iron rain, has also been observed to have more light on its eastern terminator (the line that separates the day side from the night side) than its western terminator. Why was there more light on one side of the planet?

After observing it with the CHEOPS space telescope, then combining that with previous observations from Hubble, Spitzer, and TESS, a team of researchers from ESA and the University of Bern in Switzerland now think that the most likely reason for the extra light is a glory.

Seeing the light

Over three years, CHEOPS made 23 observations of WASP-76b in both visible and infrared light. These included phase curves, transits, and secondary eclipses. Phase curves are continuous observations that track a planet’s complete revolution and show changes in its phase, that is, in how much of its illuminated side faces the telescope. The telescope sees more or less of that side as the planet orbits its star, so phase curves can track the change in the combined brightness of the planet and star over an orbit.

Secondary eclipses happen when a planet passes behind its host star and is eclipsed by it. The light seen during such an eclipse can later be compared with the total light both before and after the occultation to give us a sense of the light that’s reflected off the planet. Hot Jupiters like WASP-76b are commonly observed through secondary eclipses.

Phase-curve observations can continue right up to and through these eclipses. While it was observing the phase curve of WASP-76b, CHEOPS saw a pre-eclipse excess of light on its night side. This had also been seen in earlier TESS phase-curve and secondary-eclipse observations.
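To make the geometry concrete, here is a minimal sketch of how a reflected-light phase curve behaves. It assumes a simple Lambertian (diffusely reflecting) sphere and illustrative numbers for the albedo and planet-to-orbit size ratio; these are not WASP-76b’s measured parameters, and real phase curves also include thermal emission from the planet.

```python
import math

def lambertian_phase_function(alpha_rad: float) -> float:
    """Fraction of reflected light visible at phase angle alpha for a
    Lambertian sphere: 1 at full phase (alpha = 0, planet about to pass
    behind the star), 0 when only the night side faces the telescope."""
    return (math.sin(alpha_rad) + (math.pi - alpha_rad) * math.cos(alpha_rad)) / math.pi

def planet_star_flux_ratio(geometric_albedo: float,
                           radius_over_orbit: float,
                           alpha_rad: float) -> float:
    """Reflected-light contrast of planet to star:
    A_g * (Rp/a)^2 * phase function."""
    return geometric_albedo * radius_over_orbit ** 2 * lambertian_phase_function(alpha_rad)

# Sweep the orbit: the contrast peaks just before and after secondary
# eclipse (planet near full phase) and vanishes near transit.
curve = [planet_star_flux_ratio(0.2, 0.02, math.radians(d))
         for d in range(0, 181, 45)]
```

An unexpected bump on top of this smooth curve just before eclipse, like the one CHEOPS saw, is the kind of excess that the glory hypothesis tries to explain.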

End of the rainbow?

An advantage of WASP-76b is that it is an ultra-hot Jupiter, so at least its day side lacks the clouds and hazes that often obscure the atmospheres of cooler hot Jupiters. This makes atmospheric emissions much easier to detect. The planet was especially intriguing because a previous study had found an asymmetry in iron content between the day-side and night-side terminators: there was not much gaseous iron in the upper atmosphere of the day-side limb compared to that of the night-side limb. This is probably because it rains iron on the day side of WASP-76b, and the iron then condenses into clouds on the night side.

Observations from Hubble suggested that a thermal inversion, in which the air near the surface of a planet begins cooling, was occurring on the night side. Cooling on that side would cause iron, which had previously condensed into clouds, rained down onto the day side, and then evaporated in the intense heat, to condense yet again. Drops of liquid iron can then form clouds.

These clouds are critical since light from the host star, reflecting off these drops in those clouds, can create the effect of a glory.

“Explaining the observation with the glory effect would require spherical droplets of highly reflective, spherically shaped aerosols and clouds on the planet’s eastern hemisphere,” the researchers said in a paper recently published in Astronomy & Astrophysics.

Glories have been seen beyond Earth before; they are known to form in the clouds of Venus. As on WASP-76b, an excess of pre-eclipse light was observed on Venus, but a glory on the exoplanet is far from certain, so future observations with a more powerful telescope could help determine how similar the phenomenon on WASP-76b is to that on Venus. If they match, this will be the first glory ever observed on an exoplanet.

If future research figures out a definite way to tell whether this is really a glory, these phenomena could tell us more about the atmospheric makeup of exoplanets, depending on the kinds of elements or molecules light is reflecting off of. They might even give away the presence of water, which could mean habitability. While the hypothesized glory on WASP-76b has not been definitively demonstrated, it is anything but a rainbow in the dark.

Astronomy & Astrophysics, 2024. DOI: 10.1051/0004-6361/202348270

Glow of an exoplanet may be from starlight reflecting off liquid iron Read More »

we-still-don’t-understand-how-one-human-apparently-got-bird-flu-from-a-cow

We still don’t understand how one human apparently got bird flu from a cow

Holstein dairy cows in a freestall barn.

Enlarge / Holstein dairy cows in a freestall barn.

The US Department of Agriculture this week posted an unpublished version of its genetic analysis into the spillover and spread of bird flu into US dairy cattle, offering the most complete look yet at the data state and federal investigators have amassed in the unexpected and worrisome outbreak—and what it might mean.

The preprint analysis provides several significant insights into the outbreak: when it may have actually started, just how much transmission we’re missing, stunning unknowns about the only human infection linked to the outbreak, and how much the virus continues to evolve in cows. The information is critical as flu experts fear the outbreak is heightening the ever-present risk that this wily flu virus will evolve to spread among humans and spark a pandemic.

But the information hasn’t been easy to come by. Since March 25—when the USDA confirmed for the first time that a herd of US dairy cows had contracted the highly pathogenic avian influenza H5N1 virus—the agency has garnered international criticism for not sharing data quickly or completely. On April 21, the agency dumped over 200 genetic sequences into public databases amid pressure from outside experts. However, many of those sequences lack descriptive metadata, which normally contains basic and key bits of information, like when and where the viral sample was taken. Without that crucial information, outside experts’ independent analyses are frustratingly limited. Thus, the new USDA analysis—which presumably includes that data—offers the best glimpse yet of the complete information on the outbreak.

Undetected spread

One of the big takeaways is that USDA researchers think the spillover of bird flu from wild birds to cattle began late last year, likely in December. Thus, the virus likely circulated undetected in dairy cows for around four months before the USDA’s March 25 confirmation of an infection in a Texas herd.

This timeline conclusion largely aligns with what outside experts previously gleaned from the limited publicly available data. So, it may not surprise those following the outbreak, but it is worrisome. Months of undetected spread raise significant concerns about the country’s ability to identify and swiftly respond to emerging infectious disease outbreaks—and whether public health responses have moved past the missteps seen in the early stages of the COVID-19 pandemic.

But another big finding from the preprint is how many gaps still exist in our current understanding of the outbreak. To date, the USDA has identified 36 herds in nine states that have been infected with H5N1. The good news from the genetic analysis is that the USDA can draw lines connecting most of them. USDA researchers reported that “direct movement of cattle based upon production practices” seems to explain how H5N1 hopped from the Texas panhandle region—where the initial spillover is thought to have occurred—to nine other states, some as far-flung as North Carolina, Michigan, and Idaho.

Bayes factors for inferred movement between different discrete traits of H5N1 clade 2.3.4.4b viruses demonstrating the frequency of movement.

Enlarge / Bayes factors for inferred movement between different discrete traits of H5N1 clade 2.3.4.4b viruses demonstrating the frequency of movement.

Putative transmission pathways of HPAI H5N1 clade 2.3.4.4b genotype B3.13 supported by epidemiological links, animal movements, and genomic analysis.

Enlarge / Putative transmission pathways of HPAI H5N1 clade 2.3.4.4b genotype B3.13 supported by epidemiological links, animal movements, and genomic analysis.

The bad news is that those lines connecting the herds aren’t solid. There are gaps where the genetic data suggests unidentified transmission occurred, maybe in unsampled cows, maybe in other animals entirely. The genetic data is clear that once this strain of bird flu—H5N1 clade 2.3.4.4b genotype B3.13—hopped into cattle, it could readily spread to other mammals. The genetic data links viruses from cattle moving many times into other animals: there were five cattle-to-poultry jumps, one cattle-to-raccoon transmission, two events where the virus moved from cattle to domestic cats, and three times when the virus from cattle spilled back into wild birds.

“We cannot exclude the possibility that this genotype is circulating in unsampled locations and hosts as the existing analysis suggests that data are missing and undersurveillance may obscure transmission inferred using phylogenetic methods,” the USDA researchers wrote in their preprint.

We still don’t understand how one human apparently got bird flu from a cow Read More »