Science


ULA’s second Vulcan rocket lost part of its booster and kept going


The US Space Force says this test flight was critical for certifying Vulcan for military missions.

United Launch Alliance’s Vulcan rocket, under contract for dozens of flights for the US military and Amazon’s Kuiper broadband network, lifted off from Florida on its second test flight Friday, suffered an anomaly with one of its strap-on boosters, and still achieved a successful mission, the company said in a statement.

This test flight, known as Cert-2, is the second certification mission for the new Vulcan rocket, a milestone that paves the way for the Space Force to clear ULA’s new rocket to begin launching national security satellites in the coming months.

While ULA said the Vulcan rocket continued to hit its marks during the climb into orbit Friday, engineers are investigating what happened with one of its solid rocket boosters shortly after liftoff.

After a last-minute aborted countdown earlier in the morning, the 202-foot-tall (61.6-meter) Vulcan rocket lit its twin methane-fueled BE-4 engines and two side-mounted solid rocket boosters to climb away from Cape Canaveral Space Force Station, Florida, at 7:25 am EDT (11:25 UTC) Friday.

A little tilt

As the rocket arced east from Cape Canaveral, a shower of sparks suddenly appeared at the base of the Vulcan rocket around 37 seconds into the mission. The exhaust plume from one of the strap-on boosters, made by Northrop Grumman, changed significantly, and the rocket slightly tilted on its axis before the guidance system and main engines made a steering correction.

Videos from the launch show the booster’s nozzle, the bell-shaped exhaust exit cone at the bottom of the booster, falling away from the rocket.

“It looks dramatic, like all things on a rocket,” Tory Bruno, ULA’s chief executive, wrote on X. “But it’s just the release of the nozzle. No explosions occurred.”

“During the ascent of the Vulcan rocket on the #Cert2 mission, there appeared to be an issue with the solid rocket booster on the right side of the vehicle as seen from the KSC Press Site. However, the Centaur was able to reach orbit,” Spaceflight Now (@SpaceflightNow) posted on X on October 4, 2024, alongside launch video shot by @ABernNYC.

The Federal Aviation Administration, which licenses commercial space launches in the United States, said in a statement that it assessed the booster anomaly and “determined no investigation is warranted at this time.” The FAA is not responsible for regulating launch vehicle anomalies unless they impact public safety.

The Vulcan rocket comes in several configurations, with zero, two, four, or six solid-fueled boosters clustered around the liquid-fueled core stage. ULA can tailor the configuration based on the parameters of each mission, such as payload mass and target orbit.

The boosters, which Northrop Grumman calls graphite epoxy motors, are 63 inches (1.6 meters) in diameter and 72 feet (22 meters) long. Their nozzles are made of a composite heat-resistant carbon-phenolic material.

Bruno added that the rest of the damaged booster’s composite casing held up fine during its roughly 90-second burn, but the anomaly caused “reduced, asymmetric thrust” that the rocket compensated for during the rest of its ascent into space.

Earlier in the day, the FAA had said only that it was “assessing the operation and will issue an updated statement if the agency determines an investigation is warranted.”

Remarkably, the Vulcan rocket soldiered on and jettisoned both strap-on boosters to fall into the Atlantic Ocean. They’re not designed for recovery, so ULA and Northrop Grumman engineers will have to piece together what happened from imagery and performance data beamed down from the rocket in flight.

The BE-4 main engines, supplied by Jeff Bezos’ space company Blue Origin, appeared to work flawlessly for the first five minutes of the flight. The core stage shut down its engines and separated from Vulcan’s Centaur upper stage, which ignited two Aerojet Rocketdyne RL10 engines to propel the rocket into orbit.

The second Vulcan rocket lifts off from Cape Canaveral Space Force Station, Florida, powered by two methane-fueled BE-4 engines and two solid rocket boosters. Credit: United Launch Alliance

Live data displayed on ULA’s webcast of the launch suggested the RL10 engines fired for approximately 20 seconds longer than planned, apparently to compensate for the lower thrust from the damaged booster during the first phase of the flight. The Centaur upper stage completed a second burn about a half-hour into the mission.

The rocket did not carry a real satellite. Earlier this year, ULA decided to launch a dummy payload to simulate the mass of a spacecraft, when it became clear the original payload for Vulcan’s second flight—Sierra Space’s first Dream Chaser spaceplane—would not be ready to fly this fall. ULA says it self-funded most of the cost of the Cert-2 test flight, which Bruno suggested was somewhere below $100 million.

Bullseye insertion

“Orbital insertion was perfect,” Bruno wrote on X.

The Centaur engines were supposed to fire a third time later Friday to send the rocket on a trajectory to escape Earth orbit and head into the Solar System. ULA also planned to perform experiments with the Centaur upper stage to demonstrate technologies and capabilities for longer-duration missions that could eventually last days, weeks, or months. The company did not provide an update on the results of these experiments.

Friday morning’s launch follows the debut test flight of the Vulcan rocket on January 8, which sent a commercial lunar lander from Astrobotic on a trajectory toward the Moon. The launch in January was nearly perfect.

ULA is a 50-50 joint venture between Boeing and Lockheed Martin, which merged their rocket divisions to form a single company in 2006. SpaceX, with its Falcon 9 and Falcon Heavy rockets, is ULA’s main competitor in the market for launching large US military satellites into orbit.

In 2020, the Pentagon awarded ULA and SpaceX multibillion-dollar “Phase 2” contracts to share responsibilities for launching dozens of national security space missions through 2027. Defense officials selected ULA’s Vulcan rocket to launch 25 national security missions, the majority of the launches up for competition. The rest went to SpaceX’s Falcon 9 and Falcon Heavy, which started delivering on its Phase 2 contract in January 2023.

Later this year, the Space Force is expected to select up to three companies—almost certainly ULA, SpaceX, and perhaps Blue Origin with its soon-to-debut New Glenn rocket—in a fresh competition to be eligible for contracts to launch the military’s largest spacecraft through 2029.

The Space Force required ULA to complete two successful Vulcan test flights before clearing the new rocket for launching military satellites. Despite the booster malfunction, ULA officials clearly believe the Vulcan rocket did enough Friday for the Space Force to certify it.

“The success of Vulcan’s second certification flight heralds a new age of forward-looking technology committed to meeting the ever-growing requirements of space launch and supporting our nation’s assured access to space,” Bruno said in a statement. “We had an observation on one of our solid rocket boosters (SRBs) that we are reviewing, but we are overall pleased with the rocket’s performance and had a bullseye insertion.”

A closer view of the Vulcan rocket’s BE-4 main engines and twin solid-fueled boosters. Credit: United Launch Alliance

In a press release after Friday’s launch, the Space Force hailed the test flight as a “certification milestone.”

“This is a significant achievement for both ULA and an important milestone for the nation’s strategic space lift capability,” said Brig. Gen. Kristin Panzenhagen, Space Systems Command’s program executive officer for assured access to space. “The Space Force’s partnership with launch companies, such as ULA, are absolutely critical in deploying on-orbit capabilities that protect our national interests.

“We are already starting to review the performance data from this launch, and we look forward to Vulcan meeting the certification requirements for a range of national security space missions,” Panzenhagen said in a statement.

The Space Force is eager for Vulcan to become operational. Some of the military’s most critical reconnaissance, communications, and missile warning satellites are slated to fly on Vulcan rockets.

Ramping up

Going into Friday’s test flight, ULA and the Space Force hoped to launch one or two more Vulcan rockets by the end of the year, both with US Space Force payloads. The timing of the next Vulcan launch, assuming the Space Force certifies the new rocket, will likely hinge on the outcome of the investigation into the booster anomaly.

ULA has already transported all major components of the next Vulcan rocket from its factory in Alabama to Cape Canaveral for final launch preparations. The company has a backlog of 69 Vulcan flights, counting missions for the Space Force, the National Reconnaissance Office, Amazon’s Kuiper network, and Sierra Space’s Dream Chaser spaceplane to resupply the International Space Station.

In a prelaunch briefing with reporters, Bruno said ULA aims to launch up to 20 times next year. Roughly half of that number will be Vulcan flights, and the rest will be Atlas V rockets, which ULA is retiring in favor of Vulcan.

There are 15 Atlas V rockets left to fly, primarily for Amazon and Boeing’s Starliner crew capsule. The nozzle failure Friday may also affect the schedule for Atlas V launches because the soon-to-retire rocket uses a similar booster design from Northrop Grumman.

ULA eventually wants to launch up to 25 Vulcan rockets per year from its launch pads at Cape Canaveral and at Vandenberg Space Force Base, California. The launch provider is outfitting a second assembly building in Florida to stack Vulcan rockets, a capability that will shorten the time between liftoffs. ULA is modifying its Atlas V launch pad in California to support Vulcan flights there next year.

ULA announced the Vulcan rocket in 2015 to replace the Atlas V and Delta IV rockets, which had stellar success records but were not cost-competitive with SpaceX’s partially reusable Falcon 9. The Atlas V also uses a Russian main engine, a situation that became politically untenable after Russia’s annexation of Crimea in 2014, and more so after the Russian invasion of Ukraine in 2022. The final Russian engines for the Atlas V arrived in the United States in 2021.

The Vulcan rocket is somewhat less expensive than the Atlas V, and significantly cheaper than the Delta IV, but still more costly than SpaceX’s Falcon 9. There is a closer price parity between Vulcan and SpaceX’s Falcon Heavy rocket.

Bruno hinted at the cost of developing the rocket in his roundtable discussion with reporters earlier this week.

“Developing a rocket, and then the infrastructure to develop a new space launch vehicle, the rule of thumb is it costs you somewhere between $5 billion and $7 billion,” Bruno said. “Vulcan is not outside the rule of thumb.”

Updated at 5:15 pm EDT (21:15 UTC) with new FAA statement.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



The more sophisticated AI models get, the more likely they are to lie


Human feedback training may incentivize providing any answer—even wrong ones.


When a research team led by Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, evaluated ChatGPT’s performance in diagnosing medical cases back in August 2024, one of the things that surprised them was the AI’s propensity to give well-structured, eloquent but blatantly wrong answers.

Now, in a study recently published in Nature, a different group of researchers tried to explain why ChatGPT and other large language models tend to do this. “To speak confidently about things we do not know is a problem of humanity in a lot of ways. And large language models are imitations of humans,” says Wout Schellaert, an AI researcher at the University of Valencia, Spain, and co-author of the paper.

Smooth operators

Early large language models like GPT-3 had a hard time answering simple questions about geography or science. They even struggled with performing simple math such as “how much is 20 + 183?” But in most cases where they couldn’t identify the correct answer, they did what an honest human being would do: They avoided answering the question.

The problem with the non-answers is that large language models were intended to be question-answering machines. For commercial companies like OpenAI or Meta that were developing advanced LLMs, a question-answering machine that answered “I don’t know” more than half the time was simply a bad product. So, they got busy solving this problem.

The first thing they did was scale the models up. “Scaling up refers to two aspects of model development. One is increasing the size of the training data set, usually a collection of text from websites and books. The other is increasing the number of language parameters,” says Schellaert. When you think about an LLM as a neural network, the number of parameters can be compared to the number of synapses connecting its neurons. LLMs like GPT-3 used absurd amounts of text data, exceeding 45 terabytes, for training. The number of parameters used by GPT-3 was north of 175 billion.

But it was not enough.

Scaling up alone made the models more powerful, but they were still bad at interacting with humans—slight variations in how you phrased your prompts could lead to drastically different results. The answers often didn’t feel human-like and sometimes were downright offensive.

Developers working on LLMs wanted them to parse human questions better and make answers more accurate, more comprehensible, and consistent with generally accepted ethical standards. To try to get there, they added an additional step: supervised training methods, including reinforcement learning with human feedback. This was meant primarily to reduce sensitivity to prompt variations and to provide a level of output-filtering moderation intended to curb hate-spewing, Tay-chatbot-style answers.

In other words, we got busy adjusting the AIs by hand. And it backfired.

AI people pleasers

“The notorious problem with reinforcement learning is that an AI optimizes to maximize reward, but not necessarily in a good way,” Schellaert says. Some of the reinforcement learning involved human supervisors who flagged answers they were not happy with. Since it’s hard for humans to be happy with “I don’t know” as an answer, one thing this training told the AIs was that saying “I don’t know” was a bad thing. So, the AIs mostly stopped doing that. But another, more important thing human supervisors flagged was incorrect answers. And that’s where things got a bit more complicated.

AI models are not really intelligent, not in a human sense of the word. They don’t know why something is rewarded and something else is flagged; all they are doing is optimizing their performance to maximize reward and minimize red flags. When incorrect answers were flagged, getting better at giving correct answers was one way to optimize things. The problem was that getting better at hiding incompetence worked just as well. Human supervisors simply didn’t flag wrong answers that appeared good and coherent enough to them.

In other words, if a human didn’t know whether an answer was correct, they wouldn’t be able to penalize wrong but convincing-sounding answers.
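To see why that dynamic pushes a model away from abstaining, here is a deliberately toy sketch—not the paper’s method or any real RLHF pipeline—of a preference-style reward in which raters dislike “I don’t know” and only catch a fraction of wrong answers. The catch rate and other numbers are invented for illustration.

```python
# Toy illustration only (not the paper's method or any real RLHF pipeline):
# a preference-style reward in which raters dislike "I don't know" and only
# notice a fraction of wrong answers. All numbers are invented.
import random

random.seed(0)

P_RATER_CATCHES_ERROR = 0.4  # assumption: raters spot 40% of wrong answers


def rater_reward(answer_kind: str) -> float:
    """Simulated human feedback for one answer."""
    if answer_kind == "abstain":   # "I don't know"
        return 0.0                 # rarely rewarded
    if answer_kind == "correct":
        return 1.0
    # Plausible but wrong: penalized only when the rater notices the error.
    return -1.0 if random.random() < P_RATER_CATCHES_ERROR else 1.0


def expected_reward(when_unsure: str, p_model_knows: float, trials: int = 100_000) -> float:
    """Average reward for a policy that either abstains or guesses when unsure."""
    total = 0.0
    for _ in range(trials):
        if random.random() < p_model_knows:
            total += rater_reward("correct")
        else:
            total += rater_reward("abstain" if when_unsure == "abstain" else "incorrect")
    return total / trials


p_knows = 0.6  # assumption: the model genuinely knows 60% of the answers
print("honest policy (abstain when unsure):", round(expected_reward("abstain", p_knows), 3))
print("people-pleaser (guess when unsure): ", round(expected_reward("guess", p_knows), 3))
# The guessing policy scores higher, so reward maximization discourages "I don't know."
```

With these made-up numbers, the always-answer policy earns a higher average reward than the honest one, which is exactly the incentive Schellaert describes.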

Schellaert’s team looked into three major families of modern LLMs: OpenAI’s ChatGPT, the LLaMA series developed by Meta, and the BLOOM suite made by BigScience. They found what’s called ultracrepidarianism, the tendency to give opinions on matters we know nothing about. It started to appear in the AIs as a consequence of increasing scale, and in all of them it grew linearly and predictably with the amount of training data. Supervised feedback “had a worse, more extreme effect,” Schellaert says. The first model in the GPT family that almost completely stopped avoiding questions it didn’t have the answers to was text-davinci-003. It was also the first GPT model trained with reinforcement learning from human feedback.

The AIs lie because we told them that doing so was rewarding. One key question is when and how often we get lied to.

Making it harder

To answer this question, Schellaert and his colleagues built a set of questions in different categories like science, geography, and math. Then, they rated those questions based on how difficult they were for humans to answer, using a scale from 1 to 100. The questions were then fed into subsequent generations of LLMs, starting from the oldest to the newest. The AIs’ answers were classified as correct, incorrect, or evasive, meaning the AI refused to answer.
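As a rough illustration of that bookkeeping—not the study’s actual pipeline, and with made-up records—the analysis boils down to binning labeled answers by difficulty and tracking how the correct, incorrect, and avoidant shares shift:

```python
# Illustrative sketch only (not the study's actual pipeline), with made-up
# records: bin labeled answers by their human difficulty rating (1-100) and
# report the share of correct, incorrect, and avoidant responses per bin.
from collections import Counter, defaultdict

records = [  # (difficulty rating, label) -- invented examples
    (12, "correct"), (35, "correct"), (48, "incorrect"),
    (55, "avoidant"), (71, "incorrect"), (88, "incorrect"),
]

BIN_WIDTH = 25
LABELS = ("correct", "incorrect", "avoidant")


def rates_by_difficulty(rows):
    bins = defaultdict(Counter)
    for difficulty, label in rows:
        bins[(difficulty - 1) // BIN_WIDTH][label] += 1
    result = {}
    for b in sorted(bins):
        total = sum(bins[b].values())
        band = f"{b * BIN_WIDTH + 1}-{(b + 1) * BIN_WIDTH}"
        result[band] = {label: bins[b][label] / total for label in LABELS}
    return result


for band, rates in rates_by_difficulty(records).items():
    print(band, rates)
# In the paper's data, the telling trend is the "avoidant" share shrinking and
# the "incorrect" share growing in the hardest bins for newer, tuned models.
```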

The first finding was that the questions that appeared more difficult to us also proved more difficult for the AIs. The latest versions of ChatGPT gave correct answers to nearly all science-related prompts and the majority of geography-oriented questions up until they were rated roughly 70 on Schellaert’s difficulty scale. Addition was more problematic, with the frequency of correct answers falling dramatically after the difficulty rose above 40. “Even for the best models, the GPTs, the failure rate on the most difficult addition questions is over 90 percent. Ideally we would hope to see some avoidance here, right?” says Schellaert. But we didn’t see much avoidance.

Instead, in more recent versions of the AIs, the evasive “I don’t know” responses were increasingly replaced with incorrect ones. And due to the supervised training used in later generations, the AIs developed the ability to sell those incorrect answers quite convincingly. Of the three LLM families Schellaert’s team tested, BLOOM and Meta’s LLaMA have released the same versions of their models with and without supervised learning. In both cases, supervised learning resulted in a higher number of correct answers, but also in a higher number of incorrect answers and reduced avoidance. The more difficult the question and the more advanced the model you use, the more likely you are to get well-packaged, plausible nonsense as your answer.

Back to the roots

One of the last things Schellaert’s team did in their study was to check how likely people were to take the incorrect AI answers at face value. They did an online survey and asked 300 participants to evaluate multiple prompt-response pairs coming from the best performing models in each family they tested.

ChatGPT emerged as the most effective liar. The incorrect answers it gave in the science category were qualified as correct by over 19 percent of participants. It managed to fool nearly 32 percent of people in geography and over 40 percent in transforms, a task where an AI had to extract and rearrange information present in the prompt. ChatGPT was followed by Meta’s LLaMA and BLOOM.

“In the early days of LLMs, we had at least a makeshift solution to this problem. The early GPT interfaces highlighted parts of their responses that the AI wasn’t certain about. But in the race to commercialization, that feature was dropped,” said Schellaert.

“There is an inherent uncertainty present in LLMs’ answers. The most likely next word in the sequence is never 100 percent likely. This uncertainty could be used in the interface and communicated to the user properly,” says Schellaert. Another thing he thinks can be done to make LLMs less deceptive is handing their responses over to separate AIs trained specifically to search for deceptions. “I’m not an expert in designing LLMs, so I can only speculate what exactly is technically and commercially viable,” he adds.

It’s going to take some time, though, before the companies that are developing general-purpose AIs do something about it, either of their own accord or because future regulations force them to. In the meantime, Schellaert has some suggestions on how to use them effectively. “What you can do today is use AI in areas where you are an expert yourself or at least can verify the answer with a Google search afterwards. Treat it as a helping tool, not as a mentor. It’s not going to be a teacher that proactively shows you where you went wrong. Quite the opposite. When you nudge it enough, it will happily go along with your faulty reasoning,” Schellaert says.

Nature, 2024.  DOI: 10.1038/s41586-024-07930-y


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



Human case of H5N1 suspected in California amid rapid dairy spread

California’s infections bring the country’s total number of affected herds to 255 in 14 states, according to the USDA.

In a news release Thursday, California health officials worked to ease alarm about the human case, emphasizing that the risk to the general public remains low.

“Ongoing health checks of individuals who interact with potentially infected animals helped us quickly detect and respond to this possible human case. Fortunately, as we’ve seen in other states with human infections, the individual has experienced mild symptoms,” Tomás Aragón, director of California’s Department of Public Health, said. “We want to emphasize that the risk to the general public is low, and people who interact with potentially infected animals should take prevention measures.”

The release noted that in the past four months, the health department has distributed more than 340,000 respirators, 1.3 million gloves, 160,000 goggles and face shields, and 168,000 bouffant caps to farm workers. The state has also received 5,000 doses of seasonal flu vaccine earmarked for farm workers and is working to distribute those vaccines to local health departments.

Still, herd infections and human cases continue to tick up. Influenza researchers and other health experts are anxiously following the unusual dairy outbreak—the first time an avian influenza is known to have spilled over to and caused an outbreak in cattle. The more opportunities the virus has to spread and adapt to mammals, the more chances it could begin spreading among humans, potentially sparking an outbreak or even a pandemic.



Strange “biotwang” ID’d as Bryde’s whale call

In 2014, researchers monitoring acoustic recordings from the Mariana Archipelago picked up an unusual whale vocalization with both low- and high-frequency components. It seemed to be a whale call, but it sounded more mechanical than biological and has since been dubbed a “biotwang.”

Now a separate team of scientists has developed a machine-learning model to scan a dataset of recordings of whale vocalizations from various species to help identify the source of such calls. Combining that analysis with visual observations allowed the team to identify the source of the biotwang: a species of baleen whales called Bryde’s (pronounced “broodus”) whales. This should help researchers track populations of these whales as they migrate to different parts of the world, according to a recent paper published in the journal Frontiers in Marine Science.

Marine biologists often rely on a powerful tool called passive acoustic monitoring for long-term data collection of the ocean’s acoustic environment, including whale vocalizations. Bryde’s whale calls tend to be regionally specific, per the authors. For instance, calls in the eastern North Pacific are pretty well documented, with frequencies typically falling below 100 Hz, augmented by harmonic frequencies as high as 400 Hz. Far less is known about the sounds made by Bryde’s whales in the western and central North Pacific, since for many years there were only three known recordings of those vocalizations—including a call dubbed “Be8” (starting at 45 Hz with multiple harmonics) and mother-calf calls.

That changed with the detection of the biotwang in 2014. It’s quite a distinctive, complex call that typically lasts about 3.5 seconds, with five stages, starting at around 30 Hz and ending with a metallic sound that can reach as high as 8,000 Hz. “It’s a real weird call,” co-author Ann Allen, a scientist at NOAA Fisheries, told Ars. “Anybody who wasn’t familiar with whales would think it was some sort of artificial sound, made by a naval ship.” The 2014 team was familiar with whale vocalizations and originally attributed the strange sound to baleen whales. But that particular survey was autonomous, and without accompanying visual observations, the scientists could not definitively confirm their hypothesis.
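For a sense of how such a distinctive signature could be screened for automatically, here is a purely illustrative sketch—not the NOAA team’s model—that flags biotwang-like windows in a recording by requiring both strong low-frequency energy and a high-frequency “metallic” component. The band edges, thresholds, and function name are invented for the example, and it assumes the librosa and numpy packages.

```python
# Purely illustrative sketch (not the NOAA team's model): flag biotwang-like
# windows by requiring both strong low-frequency energy (the ~30 Hz start) and
# a high-frequency "metallic" component reaching into the kilohertz range.
# Band edges, thresholds, and the function name are invented for the example.
# Assumes librosa and numpy, and a recording sampled well above 16 kHz.
import librosa
import numpy as np


def biotwang_candidates(path, win_s=3.5, hop_s=1.0,
                        low_band=(20, 120), high_band=(2000, 8000)):
    n_fft, hop = 4096, 1024
    y, sr = librosa.load(path, sr=None)                      # keep native sample rate
    spec = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))  # magnitude spectrogram
    freqs = librosa.fft_frequencies(sr=sr, n_fft=n_fft)
    times = librosa.frames_to_time(np.arange(spec.shape[1]),
                                   sr=sr, hop_length=hop, n_fft=n_fft)

    low = spec[(freqs >= low_band[0]) & (freqs < low_band[1])].mean(axis=0)
    high = spec[(freqs >= high_band[0]) & (freqs < high_band[1])].mean(axis=0)

    candidates, t = [], 0.0
    while t + win_s <= times[-1]:
        idx = (times >= t) & (times < t + win_s)
        # Crude heuristic: the ~3.5-second window must stand out in BOTH bands.
        if low[idx].mean() > 3 * np.median(low) and high[idx].mean() > 3 * np.median(high):
            candidates.append(round(t, 2))
        t += hop_s
    return candidates  # start times (seconds) worth passing to a real classifier
```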



Ants learned to farm fungi during a mass extinction

Timing is everything

Tracing the lineages of agricultural ants to their most recent common ancestor revealed that the ancestor probably lived through the end-Cretaceous mass extinction—the one that killed off the dinosaurs. The researchers argue that the two were almost certainly related. Current models suggest that there was so much dust in the atmosphere after the impact that set off the mass extinction that photosynthesis shut down for nearly two years, meaning minimal plant life. By contrast, the huge amount of dead material would allow fungi to flourish. So, it’s not surprising that ants started to adapt to use what was available to them.

That explains the huge cluster of species that cooperate with fungi. However, most of the species that engage in organized farming don’t appear until roughly 35 million years after the mass extinction, at the end of the Eocene (about 33 million years ago). The researchers suggest that the climate changes that accompanied the transition to the Oligocene included a drying out of the tropical Americas, where the fungus-farming ants had evolved. This would cut down on the availability of fungi in the wild, potentially selecting for species that could propagate fungi on their own.

This also corresponds to the origins of the yeast strains used by farming ants, as well as the most specialized agricultural fungal species. But it doesn’t account for the origin of coral fungus farmers, which seems to have occurred roughly 10 million years later.

The work gives us a much clearer picture of the origin of agriculture in ants and some reasonable hypotheses regarding the selective pressures that might have led to its evolution. In the long term, however, the biggest advance here may be the resources generated during this study. Ultimately, we’d like to understand the genetic basis for the changes in the ants’ behavior, as well as how the fungi have adapted to better provide for their farmers. To do that, we’ll need to compare the genomes of agricultural species with their free-living relatives. The DNA gathered for this study will ultimately be needed to pursue those questions.

Science, 2024. DOI: 10.1126/science.adn7179



Popular gut probiotic completely craps out in randomized controlled trial

Any striking marketing claims in companies’ ads about the gut benefits of a popular probiotic may be full of, well, the same thing that has their target audience backed up.

The probiotic Bifidobacterium animalis subsp. lactis—used in many probiotic products, including Dannon’s Activia yogurts—did nothing to improve bowel health in people with constipation, according to data from a randomized, triple-blind, placebo-controlled clinical trial published Wednesday in JAMA Network Open.

The study adds to a mixed and mostly unconvincing body of scientific literature on the bowel benefits of the bacterium, substrains of which are sometimes sold with faux scientific-sounding names in products. Dannon, for instance, previously marketed its substrain, DN-173 010, as “Bifidus regularis.”

Digested data

For the new study, researchers in China recruited 228 middle-aged adults, 85 percent of whom were women. The participants, all from Shanghai, were considered healthy based on medical testing and records, except for reporting functional constipation. This is a condition defined by having two or more signs of difficulty evacuating the bowels, such as frequent straining and having rock-like stool. For the study, the researchers included the additional criterion that participants have three or fewer complete, spontaneous bowel movements (CSBMs) per week.

The participants were randomized to take either a placebo (117 participants) or the probiotic (112 participants) every day for eight weeks. Both groups got packets of sweetened powder that participants added to a glass of water taken before breakfast each morning. In addition to a sweetener, the daily probiotic packets contained freeze-dried Bifidobacterium animalis subsp. lactis substrain HN019, which is used in some commercial probiotic products. The first dose had a concentration of 7 × 10⁹ colony-forming units (CFUs), then participants shifted to a daily dose of 4.69 × 10⁹ CFUs. Many probiotic products have doses of B. lactis ranging from 1 × 10⁹ to 17 × 10⁹ CFUs.



ULA hasn’t given up on developing a long-lived cryogenic space tug


On Friday’s launch, United Launch Alliance will test the limits of its Centaur upper stage.

United Launch Alliance’s second Vulcan rocket underwent a countdown dress rehearsal Tuesday. Credit: United Launch Alliance

The second flight of United Launch Alliance’s Vulcan rocket, planned for Friday morning, has a primary goal of validating the launcher’s reliability for delivering critical US military satellites to orbit.

Tory Bruno, ULA’s chief executive, told reporters Wednesday that he is “supremely confident” the Vulcan rocket will succeed in accomplishing that objective. The Vulcan’s second test flight, known as Cert-2, follows a near-flawless debut launch of ULA’s new rocket on January 8.

“As I come up on Cert-2, I’m pretty darn confident I’m going to have a good day on Friday, knock on wood,” Bruno said. “These are very powerful, complicated machines.”

The Vulcan launcher, a replacement for ULA’s Atlas V and Delta IV rockets, is on contract to haul the majority of the US military’s most expensive national security satellites into orbit over the next several years. The Space Force is eager to certify Vulcan to launch these payloads, but military officials want to see two successful test flights before committing one of its satellites to flying on the new rocket.

If Friday’s test flight goes well, ULA is on track to launch at least one—and perhaps two—operational missions for the Space Force by the end of this year. The Space Force has already booked 25 launches on ULA’s Vulcan rocket for military payloads and spy satellites for the National Reconnaissance Office. Including the launch Friday, ULA has 70 Vulcan rockets in its backlog, mostly for the Space Force, the NRO, and Amazon’s Kuiper satellite broadband network.

The Vulcan rocket is powered by two methane-fueled BE-4 engines produced by Jeff Bezos’ space company Blue Origin, and ULA can mount zero, two, four, or six strap-on solid rocket boosters from Northrop Grumman around the Vulcan’s first stage to propel heavier payloads to space. The rocket’s Centaur V upper stage is fitted with a pair of hydrogen-burning RL10 engines from Aerojet Rocketdyne.

The second Vulcan rocket will fly in the same configuration as the first launch earlier this year, with two strap-on solid-fueled boosters. The only noticeable modification to the rocket is the addition of some spray-on foam insulation around the outside of the first stage methane tank, which will keep the cryogenic fuel at the proper temperature as Vulcan encounters aerodynamic heating on its ascent through the atmosphere.

“This will give us just over one second more usable propellant,” Bruno wrote on X.

There is one more change from Vulcan’s first launch, which boosted a commercial lunar lander for Astrobotic on a trajectory toward the Moon. This time, there are no real spacecraft on the Vulcan rocket. Instead, ULA mounted a dummy payload to the Centaur V upper stage to simulate the mass of a functioning satellite.

ULA originally planned to launch Sierra Space’s first Dream Chaser spaceplane on the second Vulcan rocket. But the Dream Chaser won’t be ready to fly its first mission to resupply the International Space Station until next year. Under pressure from the Pentagon, ULA decided to move ahead with the second Vulcan launch without a payload at the company’s own expense, which Bruno tallied in the “high tens of millions of dollars.”

Heliocentricity

The test flight will begin with liftoff from Cape Canaveral Space Force Station, Florida, during a three-hour launch window opening at 6 am EDT (10:00 UTC). The 202-foot-tall (61.6-meter) Vulcan rocket will head east over the Atlantic Ocean, shedding its boosters, first stage, and payload fairing in the first few minutes of flight.

The Centaur upper stage will fire its RL10 engines two times, completing the primary mission within about 35 minutes of launch. The rocket will then continue on for a series of technical demonstrations before ending up on an Earth escape trajectory into a heliocentric orbit around the Sun.

“We have a number of experiments that we’re conducting that are really technology demonstrations and measurements that are associated with our high-performance, longer-duration version of Centaur V that we’ll be introducing in the future,” Bruno said. “And these will help us go a little bit faster on that development. And, of course, because we don’t have an active spacecraft as a payload, we also have more instrumentation that we’re able to use for just characterizing the vehicle.”

The Centaur V upper stage for the Vulcan rocket. Credit: United Launch Alliance

ULA engineers have worked on the design of a long-lived upper stage for more than a decade. Their vision was to develop an upper stage fed by super-efficient cryogenic liquid hydrogen and liquid oxygen propellants that could generate its own power and operate in space for days, weeks, or longer rather than an upper stage’s usual endurance limit of several hours. This would allow the rocket to not only deliver satellites into bespoke high-altitude orbits but also continue on to release more payloads at different altitudes or provide longer-term propulsion in support of other missions.

The concept was called the Advanced Cryogenic Evolved Stage (ACES). ULA’s corporate owners, Boeing and Lockheed Martin, never authorized the full development of ACES, and the company said in 2020 that it was no longer pursuing the ACES concept.

The Centaur V upper stage currently used on the Vulcan rocket is a larger version of the thin-walled, pressure-stabilized Centaur upper stage that has been flying since the 1960s. Bruno said the Centaur V design, as it is today, offers as much as 12 hours of operating life in space. This is longer than any other existing rocket using cryogenic propellants, which can boil off over time.

ULA’s chief executive still harbors an ambition for regaining some of the same capabilities promised by ACES.

“What we are looking to do is to extend that by orders of magnitude,” Bruno said. “And what that would allow us to do is have an in-space transportation capability for in-space mobility and servicing and things like that.”

Space Force leaders have voiced a desire for future spacecraft to freely maneuver between different orbits, a concept the military calls “dynamic space operations.” This would untether spacecraft operations from fuel limitations and eventually require the development of in-orbit refueling, propellant depots, or novel propulsion technologies.

No one has tried to store large amounts of super-cold propellants in space for weeks or longer. Accomplishing this is a non-trivial thermal problem, requiring insulation to keep heat from the Sun from reaching the liquid cryogenic propellant, stored at temperatures of several hundred degrees below zero.

Bruno hesitated to share details of the experiments ULA plans for the Centaur V upper stage on Friday’s test flight, citing proprietary concerns. He said the experiments will confirm analytical models about how the upper stage performs in space.

“Some of these are devices, some of these are maneuvers because maneuvers make a difference, and some are related to performance in a way,” he said. “In some cases, those maneuvers are helping us with the thermal load that tries to come in and boil off the propellants.”

Eventually, ULA would like to eliminate hydrazine attitude control fuel and battery power from the Centaur V upper stage, Bruno said Wednesday. This sounds a lot like what ULA wanted to do with ACES, which would have used an internal combustion engine called Integrated Vehicle Fluids (IVF) to recycle gasified waste propellants to pressurize its propellant tanks, generate electrical power, and feed thrusters for attitude control. This would mean the upper stage wouldn’t need to rely on hydrazine, helium, or batteries.

ULA hasn’t talked much about the IVF system in recent years, but Bruno said the company is still developing it. “It’s part of all of this, but that’s all I will say, or I’ll start revealing what all the gadgets are.”

A comparison between ULA’s legacy Centaur upper stage and the new Centaur V. Credit: United Launch Alliance

George Sowers, former vice president and chief scientist at ULA, was one of the company’s main advocates for extending the lifetime of upper stages and developing technologies for refueling and propellant depots. He retired from ULA in 2017 and is now a professor at the Colorado School of Mines and an independent aerospace industry consultant.

In an interview with Ars earlier this year, Sowers said ULA solved many of the problems with keeping cryogenic propellants at the right temperature in space.

“We had a lot of data on boil-off, just from flying Centaurs all the way to geosynchronous orbit, which doesn’t involve weeks, but it involves maybe half a day or so, which is plenty of time to get all the temperatures to stabilize at deep space levels,” Sowers said. “So you have to understand the heat transfer very well. Good models are very important.”

ULA experimented with different types of insulation and with vapor cooling, which involves taking the cold gas that boils off of cryogenic fuel and blowing it over the points where heat penetrates into the tanks.

“There are tricks to managing boil-off,” he said. “One of the tricks is that you never want to boil oxygen. You always want to boil hydrogen. So you size your propellant tanks and your propellant loads, assuming you’re going to have that extra hydrogen boil-off. Then what you can do is use the hydrogen to keep the oxygen cold to keep it from boiling.

“The amount of heat that you can reject by boiling off one kilogram of hydrogen is about five times what you would reject by boiling off one kilogram of oxygen. So those are some of the thermodynamic tricks,” Sowers said. “The way ULA accomplished that is by having a common bulkhead, so the hydrogen tank and the oxygen tank are in thermal contact. So hydrogen keeps the oxygen cold.”
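A rough back-of-envelope check, using approximate handbook values rather than anything from ULA, shows where a factor of roughly five can come from once the vented hydrogen is also used for vapor cooling. The specific heat figure and the 90 K endpoint below are assumptions.

```python
# Rough back-of-envelope (not ULA's numbers) using approximate handbook values.
# Assumption: the vented hydrogen is also used for vapor cooling, absorbing heat
# as it warms from ~20 K up to roughly the oxygen tank's ~90 K.
L_H2 = 446.0           # kJ/kg, approx. heat of vaporization of liquid hydrogen
L_O2 = 213.0           # kJ/kg, approx. heat of vaporization of liquid oxygen
CP_H2_GAS = 11.0       # kJ/(kg*K), rough specific heat of cold hydrogen gas
DELTA_T = 90.0 - 20.0  # K, warming the vented hydrogen toward LOX temperature

heat_per_kg_h2 = L_H2 + CP_H2_GAS * DELTA_T  # latent heat + vapor cooling
heat_per_kg_o2 = L_O2                        # boiling oxygen alone

print(f"per kg H2 (boiled + warmed): ~{heat_per_kg_h2:.0f} kJ")
print(f"per kg O2 (boiled only):     ~{heat_per_kg_o2:.0f} kJ")
print(f"ratio: ~{heat_per_kg_h2 / heat_per_kg_o2:.1f}x")
```

With these rough inputs the ratio lands between five and six, in line with Sowers’ figure; the exact value depends on how warm the vented hydrogen gets before it leaves the vehicle.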

ULA’s experiments showed it could get the hydrogen boil-off rate down to about 10 percent per year, based on thermodynamic models calibrated by data from flying older versions of the Centaur upper stage on Atlas V rockets, according to Sowers.

“In my mind, that kind of cemented the idea that distribution depots and things like that are very well in hand without having to have exotic cryocoolers, which tend to use a lot of power,” Sowers said. “It’s about efficiency. If you can do it passively, you don’t have to expend energy on cryocoolers.”

“We’re going to go to days, and then we’re going to go to weeks, and then we think it’s possible to take us to months,” Bruno said. “That’s a game changer.”

However, ULA’s corporate owners haven’t yet fully bought into this vision. Bruno said the Vulcan rocket and its supporting manufacturing and launch infrastructure cost between $5 billion and $7 billion to develop. ULA also plans to eventually recover and reuse BE-4 main engines from the Vulcan rocket, but that is still at least several years away.

But ULA is reportedly up for sale, and a well-capitalized buyer might find the company’s long-duration cryogenic upper stage more attractive and worth the investment.

“There’s a whole lot of missions that enables,” Bruno said. “So that’s a big step in capability, both for the United States and also commercially.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Despite stricter regulations, Europe has issues with tattoo ink ingredients

Swierk et al. use various methods, including Raman spectroscopy, nuclear magnetic resonance spectroscopy, and electron microscopy, to analyze a broad range of commonly used tattoo inks. This enables them to identify specific pigments and other ingredients in the various inks.

Earlier this year, Swierk’s team identified 45 out of 54 inks (83 percent) with major labeling discrepancies in the US. Allergic reactions to the pigments, especially red inks, have already been documented. For instance, a 2020 study found a connection between contact dermatitis and how tattoos degrade over time. But additives can also have adverse effects. More than half of the tested inks contained unlisted polyethylene glycol—repeated exposure could cause organ damage—and 15 of the inks contained a potential allergen called propylene glycol.

Meanwhile, across the pond…

That’s a major reason why the European Commission has recently begun to crack down on harmful chemicals in tattoo ink, including banning two widely used blue and green pigments (Pigment Blue 15 and Pigment Green 7), claiming they are often of low purity and can contain hazardous substances. (US regulations are less strict than those adopted by the EU.) Swierk’s team has now expanded its chemical analysis to include 10 different tattoo inks from five different manufacturers supplying the European market.

According to Swierk et al., nine of those 10 inks did not meet EU regulations; five simply failed to list all the components, but four contained prohibited ingredients. The other main finding was that Raman spectroscopy is not very reliable for figuring out which of three common structures of Pigment Blue 15 has been used. (Only one has been banned.) Different instruments failed to reliably distinguish between the three forms, so the authors concluded that the current ban on Pigment Blue 15 is simply unenforceable.

“There are regulations on the book that are not being complied with, at least in part because enforcement is lagging,” said Swierk. “Our work cannot determine whether the issues with inaccurate tattoo ink labeling is intentional or unintentional, but at a minimum, it highlights the need for manufacturers to adopt better manufacturing standards. At the same time, the regulations that are on the books need to be enforced and if they cannot be enforced, like we argue in the case of Pigment Blue 15, they need to be reevaluated.”

Analyst, 2024. DOI: 10.1039/D4AN00793J



Lab owner pleads guilty to faking COVID test results during pandemic


Ill-gotten millions bought a Bentley, Lamborghini, Tesla X, and crypto, among other things.

Residents line up for COVID-19 testing on November 30, 2020 in Chicago.

The co-owner of a Chicago-based lab has pleaded guilty for his role in a COVID testing scam that raked in millions—which he used to buy stocks, cryptocurrency, and several luxury cars while still squirreling away over $6 million in his personal bank account.

Zishan Alvi, 45, of Inverness, Illinois, co-owned LabElite, which federal prosecutors say billed the federal government for COVID-19 tests that were either never performed or were performed with purposefully inadequate components to render them futile. Customers who sought testing from LabElite—sometimes for clearance to travel or have contact with vulnerable people—received either no results or results indicating they were negative for the deadly virus.

The scam, which ran from around February 2021 to about February 2022, made over $83 million total in fraudulent payments from the federal government’s Health Resources and Services Administration (HRSA), which covered the cost of COVID-19 testing for people without insurance during the height of the pandemic. Local media coverage indicated that people who sought testing at LabElite were discouraged from providing health insurance information.

In February 2022, the FBI raided LabElite’s Chicago testing site amid a crackdown on several large-scale fraudulent COVID testing schemes. In March 2023, Alvi was indicted by a federal grand jury on 10 counts of wire fraud and one count of theft of government funds. The indictment sought forfeiture of his ill-gotten riches, which were listed in the indictment.

The list included five vehicles: a 2021 Mercedes-Benz, a 2021 Land Rover Range Rover HSE, a 2021 Lamborghini Urus, a 2021 Bentley, and a 2022 Tesla X. There was also about $810,000 in an E*Trade account, approximately $500,000 in a Fidelity Investments account, and $245,814 in a Coinbase account. Last, there was $6,825,089 in Alvi’s personal bank account.

On Monday, the Department of Justice announced a deal in which Alvi pleaded guilty to one count of wire fraud, taking responsibility for $14 million worth of fraudulent HRSA claims. He now faces up to 20 years in prison and will be sentenced on February 7, 2025.



Toxic chemicals from Ohio train derailment lingered in buildings for months

This video screenshot released by the US National Transportation Safety Board (NTSB) shows the site of a derailed freight train in East Palestine, Ohio.

On February 3, 2023, a train carrying chemicals jumped the tracks in East Palestine, Ohio, rupturing railcars filled with hazardous materials and fueling chemical fires at the foothills of the Appalachian Mountains.

The disaster drew global attention as the governors of Ohio and Pennsylvania urged evacuations for a mile around the site. Flames and smoke billowed from burning chemicals, and an acrid odor radiated from the derailment area as chemicals entered the air and spilled into a nearby creek.

Three days later, at the urging of the rail company Norfolk Southern, about 1 million pounds of vinyl chloride, a chemical that can be toxic to humans at high doses, was released from the damaged train cars and set aflame.

Federal investigators later concluded that the open burn and the black mushroom cloud it produced were unnecessary, but it was too late. Railcar chemicals spread into Ohio and Pennsylvania.

As environmental engineers, my colleagues and I are often asked by government agencies and communities to assist with public health decisions after disasters. After the evacuation order was lifted, community members asked for help.

In a new study, we describe the contamination we found, along with problems with the response and cleanup that, in some cases, increased the chances that people would be exposed to hazardous chemicals. It offers important lessons to better protect communities in the future.

How chemicals get into homes and water

When large amounts of chemicals are released into the environment, the air can become toxic. Chemicals can also wash into waterways and seep into the ground, contaminating groundwater and wells. Some chemicals can travel below ground into nearby buildings and make the indoor air unsafe.

A computer model shows how chemicals from the train may have spread, given wind patterns. The star on the Ohio-Pennsylvania line is the site of the derailment.

Air pollution can find its way into buildings through cracks, windows, doors, and other portals. Once inside, the chemicals can penetrate home items like carpets, drapes, furniture, counters, and clothing. When the air is stirred up, those chemicals can be released again.

Evacuation order lifted, but buildings were contaminated

Three weeks after the derailment, we began investigating the safety of the area, focusing on 17 buildings in Ohio and Pennsylvania. The highest concentration of air pollution occurred in the 1-mile evacuation zone and a shelter-in-place band another mile beyond that. But the chemical plume also traveled outside these areas.

In and outside East Palestine, evidence indicated that chemicals from the railcars had entered buildings. Many residents complained about headaches, rashes, and other health symptoms after reentering the buildings.

At one building 0.2 miles away from the derailment site, the indoor air was still contaminated more than four months later.

Nine days after the derailment, sophisticated air testing by a business owner showed the building’s indoor air was contaminated with butyl acrylate and other chemicals carried by the railcars. Butyl acrylate was found above the two-week exposure level, a level at which measures should be taken to protect human health.

When rail company contractors visited the building 11 days after the wreck, their team left after just 10 minutes. They reported an “overwhelming/unpleasant odor” even though their government-approved handheld air pollution detectors detected no chemicals. This building was located directly above Sulphur Run creek, which had been heavily contaminated by the spill. Chemicals likely entered from the initial smoke plumes and also rose from the creek into the building.

Our tests weeks later revealed that railcar chemicals had even penetrated the business’s silicone wristband products on its shelves. We also detected several other chemicals that may have been associated with the spill.

Homes and businesses were mere feet from the contaminated waterways in East Palestine.

Weeks after the derailment, government officials discovered that air in the East Palestine Municipal Building, about 0.7 miles away from the derailment site, was also contaminated. Airborne chemicals had entered that building through an open drain pipe from Sulphur Run.

More than a month after the evacuation order was lifted, the Ohio Environmental Protection Agency acknowledged that multiple buildings in East Palestine were being contaminated as contractors cleaned contaminated culverts under and alongside them, allowing chemicals to enter the buildings.



CEO of “health care terrorists” sues senators after contempt of Congress charges


Suing an entire Senate panel seen as a “Hail Mary play” unlikely to succeed.

The empty chair of Steward Health Care System CEO Dr. Ralph de la Torre, who did not show up for the Senate Committee on Health, Education, Labor, and Pensions hearing, “Examining the Bankruptcy of Steward Health Care: How Management Decisions Have Impacted Patient Care.”

The infamous CEO of a failed hospital system is suing an entire Senate committee after being held in contempt of Congress, with civil and criminal charges unanimously approved by the full Senate last week.

In a federal lawsuit filed Monday, Steward CEO Ralph de la Torre claimed the senators “bulldozed over [his] constitutional rights” as they tried to “pillory and crucify him as a loathsome criminal” in a “televised circus.”

The Senate committee—the Committee on Health, Education, Labor, and Pensions (HELP), led by Bernie Sanders (I-Vt.)—issued a rare subpoena to de la Torre in July, compelling him to testify before the lawmakers. They sought to question the CEO on the deterioration of his hospital system, which previously included more than 30 hospitals across eight states. Steward filed for bankruptcy in May.

Imperiled patients

The committee alleges that de la Torre and Steward executives reaped millions in personal profits by hollowing out the health care facilities, even selling the land out from under them. The mismanagement left them so financially burdened that one doctor in a Steward-owned hospital in Louisiana said they were forced to perform “third-world medicine.” A lawmaker in that state who investigated the conditions at the hospital described Steward executives as “health care terrorists.”

Further, the financial strain on the hospitals is alleged to have led to the preventable deaths of 15 patients and put more than 2,000 other patients in “immediate peril.” As hospitals cut services, closed wards, or shuttered entirely, hundreds of health care workers were laid off, and communities were left without access to care. Nurses who remained in faltering facilities testified of harrowing conditions, including running out of basic supplies like beds. In one Massachusetts hospital, nurses were forced to place the remains of newborns in cardboard shipping boxes because Steward failed to pay a vendor for bereavement boxes.

Meanwhile, records indicate de la Torre and his companies were paid at least $250 million in recent years and he bought a 190-foot yacht for $40 million. Steward also owned two private jets collectively worth $95 million.

While de la Torre initially agreed to testify before the committee at the September 12 hearing, the wealthy CEO backed out the week beforehand. He claimed that a federal court order linked to the bankruptcy case prevented him from speaking on the matter; additionally, he invoked his Fifth Amendment right to avoid self-incrimination.

The HELP committee rejected de la Torre’s arguments, saying there were still relevant topics he could safely discuss without violating the order and that his Fifth Amendment rights did not permit him to refuse to appear before Congress when summoned by a subpoena. Still, the CEO was a no-show, and the Senate moved forward with the contempt charges.

“Not the way this works”

In the lawsuit, filed Monday, de la Torre argues that the senators are attempting to punish him for invoking his Constitutional rights and that the hearing “was simply a device for the Committee to attack [him] and try to publicly humiliate and condemn him.”

The suit describes de la Torre as having a “distinguished career, bedecked by numerous accomplishments,” while accusing the senators of painting him as “a villain and scapegoat[ing] him for the company’s problems, even those caused by systemic deficiencies in Massachusetts’ health care system.” If he had appeared at the Congressional hearing, he would not have been able to defend himself from the personal attacks without being forced to abandon his Constitutional rights, the suit argues.

“Indeed, the Committee made it abundantly clear that they would put Dr. de la Torre’s invocation [of the Fifth Amendment] itself at the heart of their televised circus and paint him as guilty for the sin of remaining silent in the face of these assaults on his character and integrity,” the suit reads.

De la Torre seeks to have the federal court quash the Senate committee’s subpoena, enjoin both contempt charges, and declare that the Senate committee violated his Fifth Amendment rights.

Outside lawyers are skeptical that will occur. The lawsuit is a “Hail Mary play,” according to Stan M. Brand, an attorney who represented former Trump White House official Peter Navarro in a contempt of Congress case. De la Torre’s case “has very little chance of succeeding—I would say no chance of succeeding,” Brand told the Boston Globe.

“Every time that someone has tried to sue the House or Senate directly to challenge a congressional subpoena, the courts have said, ‘That’s not the way this works,’” Brand said.

CEO of “health care terrorists” sues senators after contempt of Congress charges Read More »

for-the-first-time-since-1882,-uk-will-have-no-coal-fired-power-plants

For the first time since 1882, UK will have no coal-fired power plants

Into the black —

A combination of government policy and economics spells the end of UK’s coal use.

The Ratcliffe-on-Soar plant is set to shut down for good today.

On Monday, the UK will see the closure of its last operational coal power plant, Ratcliffe-on-Soar, which has been operating since 1968. The shutdown of the 2,000-megawatt plant will end the country’s history of coal-fired power generation, which began with the opening of the first coal-fired power station in 1882. Coal played a central part in the UK’s power system in the interim, in some years providing over 90 percent of its total electricity.

But a number of factors combined to place coal in a long-term decline: the growth of natural gas-powered plants and renewables, pollution controls, carbon pricing, and a government goal to hit net-zero greenhouse gas emissions by 2050.

From boom to bust

It’s difficult to overstate the importance of coal to the UK grid. Coal was providing over 90 percent of the UK’s electricity as recently as 1956, and the total amount of power it generated continued to climb well after that, reaching a peak of 212 terawatt hours of production by 1980. New coal plants were still under consideration as recently as the late 2000s; according to the organization Carbon Brief’s excellent timeline of coal use in the UK, continuing to burn coal with carbon capture was among the options considered.

But several factors slowed the use of the fuel ahead of any climate goals set out by the UK, some of which have parallels to the US’s situation. The European Union, which included the UK at the time, instituted new rules to address acid rain that raised the cost of running coal plants. In addition, the exploitation of oil and gas deposits in the North Sea provided access to an alternative fuel. Meanwhile, major gains in efficiency and the shift of some heavy industry overseas cut the UK’s electricity demand significantly.

Through their effect on coal use, these changes also lowered employment in coal mining. The mining sector has at times been a significant force in UK politics, but as its workforce shrank, so did its political influence.

These factors had all reduced the use of coal even before governments started taking aggressive steps to limit climate change. Then, in 2005, the EU implemented a carbon trading system that put a price on emissions. In 2008, the UK government adopted national emissions targets, which have been maintained and strengthened since then by both Labour and Conservative governments, up until Rishi Sunak, who was voted out of office before he had altered the UK’s trajectory. What started as a pledge for a 60 percent reduction in greenhouse gas emissions by 2050 now requires the UK to hit net zero by that date.

Renewables, natural gas, and efficiency have all squeezed coal off the UK grid.

Those policies have included a floor on the price of carbon, which ensures that fossil-fueled plants pay a cost for their emissions that’s significant enough to promote the transition to renewables, even when prices in the EU’s carbon trading scheme are too low to do so. And that transition has been rapid, with total generation from renewables nearly tripling in the decade since 2013, heavily aided by the growth of offshore wind.

How to clean up the power sector

The trends were significant enough that, in 2015, the UK announced it would target an end to coal use by 2025, even though the grid’s first coal-free day wouldn’t come until two years later. Two years after that landmark, however, the UK was seeing entire weeks in which no coal-fired plants were active.

To limit the worst impacts of climate change, it will be critical for other countries to follow the UK’s lead. So it’s worthwhile to consider how a country that was committed to coal relatively recently could manage such a rapid transition. There are a few UK-specific factors that won’t be possible to replicate everywhere. The first is that most of its coal infrastructure was quite old—Ratcliffe-on-Soar dates from the 1960s—and so it required replacement in any case. Part of the reason for its aging coal fleet was the local availability of relatively cheap natural gas, which put economic pressure on coal generation; that cheap gas is something that might not be available elsewhere.

Another key factor is that the ever-shrinking number of people employed by coal power didn’t exert significant pressure on government policies. Despite the existence of a vocal group of climate contrarians in the UK, the issue never became heavily politicized. Both Labour and Conservative governments maintained a fact-based approach to climate change and set policies accordingly. That’s notably not the case in countries like the US and Australia.

But other factors are going to be applicable to a wide variety of countries. As the UK was moving away from coal, renewables became the cheapest way to generate power in much of the world. Coal is also the most polluting source of electrical power, providing ample reasons for regulation that have little to do with climate. Forcing coal users to pay even a fraction of the fuel’s externalized costs to human health and the environment serves to make it even less economical compared to the alternatives.

If these latter factors can drive a move away from coal despite government inertia, they can pay significant dividends in the fight to limit climate change. Inspired in part by the success in moving its grid off coal, the new Labour government in the UK has moved up its timeline for decarbonizing the power sector to 2030 (from the previous Conservative government’s target of 2035).

For the first time since 1882, UK will have no coal-fired power plants Read More »