Science

Peer review is essential for science. Unfortunately, it’s broken.

Aurich Lawson | Getty Images

Rescuing Science: Restoring Trust in an Age of Doubt was the most difficult book I’ve ever written. I’m a cosmologist—I study the origins, structure, and evolution of the Universe. I love science. I live and breathe science. If science were a breakfast cereal, I’d eat it every morning. And at the height of the COVID-19 pandemic, I watched in alarm as public trust in science disintegrated.

But I don’t know how to change people’s minds. I don’t know how to convince someone to trust science again. So as I started writing my book, I flipped the question around: is there anything we can do to make the institution of science more worthy of trust?

The short answer is yes. The long answer takes an entire book. In the book, I explore several different sources of mistrust—the disincentives scientists face when they try to communicate with the public, the lack of long-term careers, the complicity of scientists when their work is politicized, and much more—and offer proactive steps we can take to address these issues to rebuild trust.

The section below is taken from a chapter discussing the relentless pressure to publish that scientists face, and the corresponding explosion in fraud that this pressure creates. Fraud can take many forms, from the “hard fraud” of outright fabrication of data, to many kinds of “soft fraud” that include plagiarism, manipulation of data, and careful selection of methods to achieve a desired result. The more that fraud thrives, the more that the public loses trust in science. Addressing this requires a fundamental shift in the incentive and reward structures that scientists work in. A difficult task to be sure, but not an impossible one—and one that I firmly believe will be worth the effort.

Modern science is hard, complex, and built from many layers and many years of hard work. And modern science, almost everywhere, is based on computation. Save for a few (and I mean very few) die-hard theorists who insist on writing things down with pen and paper, it is almost guaranteed that a computer was involved in some step of the process behind any paper you could possibly read, in any field of science.

Whether it’s studying bird droppings or the collisions of galaxies, modern-day science owes its very existence—and continued persistence—to the computer. From the laptop sitting on an unkempt desk to a giant machine that fills up a room, “S. Transistor” should be the coauthor on basically all three million journal articles published every year.

The sheer complexity of modern science, and its reliance on customized software, renders one of the frontline defenses against soft and hard fraud useless. That defense is peer review.

The practice of peer review was developed in a different era, when the arguments and analysis that led to a paper’s conclusion could be succinctly summarized within the paper itself. Want to know how the author arrived at that conclusion? The derivation would be right there. It was relatively easy to judge the “wrongness” of an article because you could follow the document from beginning to end and have all the information you needed to evaluate it right there at your fingertips.

That’s now largely impossible with the modern scientific enterprise so reliant on computers.

To make matters worse, many of the software codes used in science are not publicly available. I’ll say this again because it’s kind of wild to even contemplate: there are millions of papers published every year that rely on computer software to make the results happen, and that software is not available for other scientists to scrutinize to see if it’s legit or not. We simply have to trust it, but the word “trust” is very near the bottom of the scientist’s priority list.

Why don’t scientists make their code available? It boils down to the same reason that scientists don’t do many things that would improve the process of science: there’s no incentive. In this case, you don’t get any h-index points for releasing your code on a website. You only get them for publishing papers.
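To make that incentive concrete, here is a minimal sketch of how the h-index is computed. The citation counts are made up for illustration; the point is simply that releasing code appears nowhere in the calculation, while published papers that accrue citations do.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers
print(h_index([42, 18, 9, 5, 5, 2, 0]))  # -> 5 (five papers cited at least 5 times)
```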

This infinitely agitates me when I peer-review papers. How am I supposed to judge the correctness of an article if I can’t see the entire process? What’s the point of searching for fraud when the computer code that’s sitting behind the published result can be shaped and molded to give any result you want, and nobody will be the wiser?

I’m not even talking about intentional computer-based fraud here; this is even a problem for detecting basic mistakes. If you make a mistake in a paper, a referee or an editor can spot it. And science is better off for it. If you make a mistake in your code… who checks it? As long as the results look correct, you’ll go ahead and publish it and the peer reviewer will go ahead and accept it. And science is worse off for it.

Science is getting more complex over time and is becoming increasingly reliant on software code to keep the engine going. This makes fraud of both the hard and soft varieties easier to accomplish. From mistakes that you pass over because you’re going too fast, to using sophisticated tools that you barely understand but use to get the result that you wanted, to just totally faking it, science is becoming increasingly wrong.

Peer review is essential for science. Unfortunately, it’s broken. Read More »

Rocket Report: Chinese firm suffers another failure; Ariane 6 soars in debut

The Ariane 6 rocket takes flight for the first time on July 9, 2024.

ESA – S. Corvaja

Welcome to Edition 7.02 of the Rocket Report! The highlight of this week was the hugely successful debut of Europe’s Ariane 6 rocket. Given Europe’s commitment to zero debris, stranding the second stage is not great, but I am sure they will address the upper stage issue. For a debut launch of a large new vehicle, this was really promising.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Chinese launch company suffers another setback. Chinese commercial rocket firm iSpace suffered a launch failure late Wednesday in a fresh setback for the company, Space News reports. The four-stage Hyperbola-1 solid rocket lifted off from Jiuquan spaceport in the Gobi Desert at 7:40 pm ET (23:40 UTC) on Wednesday. Beijing-based iSpace later issued a release stating that the rocket’s fourth stage suffered an anomaly. The statement did not reveal the names or nature of the payloads lost on the flight.

Early troubles are perhaps to be expected … Beijing Interstellar Glory Space Technology Ltd., or iSpace, made history in 2019 as the first privately funded Chinese company to reach orbit, with the solid-fueled Hyperbola-1. However, the rocket suffered three consecutive failures following that feat. The company recovered with two successful flights in 2023 before the latest failure. The loss could add to reliability concerns over China’s commercial launch industry as it follows Space Pioneer’s recent catastrophic static-fire explosion. (submitted by EllPeaTea)

Feds backtrack on former Firefly investor. A long, messy affair between US regulators and a Ukrainian businessman named Max Polyakov seems to have finally been resolved, Ars reports. On Tuesday, Polyakov’s venture capital firm Noosphere Venture Partners announced that the US government has released him and his related companies from all conditions imposed upon them in the run-up to the Russian invasion of Ukraine. This decision comes more than two years after the Committee on Foreign Investment in the United States and the US Air Force forced Polyakov to sell his majority stake in the Texas-based launch company Firefly.

Not a spy … This rocket company was founded in 2014 by an engineer named Tom Markusic, who ran into financial difficulty as he sought to develop the Alpha rocket. Markusic had to briefly halt Firefly’s operations before Polyakov, a colorful and controversial Ukrainian businessman, swooped in and provided a substantial infusion of cash into the company. “The US government quite happily allowed Polyakov to pump $200 million into Firefly only to decide he was a potential spy just as the company’s first rocket was ready to launch,” Ashlee Vance, a US journalist who chronicled Polyakov’s rise, told Ars. It turns out, Polyakov wasn’t a spy.

The easiest way to keep up with Eric Berger’s space reporting is to sign up for his newsletter; we’ll collect his stories in your inbox.

Pentagon ICBM costs soar. The price tag for the Pentagon’s next-generation nuclear-tipped Sentinel ICBMs has ballooned by 81 percent in less than four years, The Register reports. This triggered a mandatory congressional review. On Monday, the Department of Defense released the results of this review, with Under Secretary of Defense for Acquisition and Sustainment William LaPlante saying the Sentinel missile program met established criteria for being allowed to continue after his “comprehensive, unbiased review of the program.”

Trust us, the military says … The Sentinel project is the DoD’s attempt to replace its aging fleet of ground-based nuclear-armed Minuteman III missiles (first deployed in 1970) with new hardware. When it passed its Milestone B decision (authorization to enter the engineering and manufacturing phase) in September 2020, the cost was a fraction of the $141 billion the Pentagon now estimates Sentinel will cost, LaPlante said. To give that some perspective, the proposed annual budget for the Department of Defense for its fiscal 2025 is nearly $850 billion. (submitted by EllPeaTea)

Rocket Report: Chinese firm suffers another failure; Ariane 6 soars in debut Read More »

SpaceX’s unmatched streak of perfection with the Falcon 9 rocket is over

Numerous pieces of ice fell off the second stage of the Falcon 9 rocket during its climb into orbit from Vandenberg Space Force Base, California.

SpaceX

A SpaceX Falcon 9 rocket suffered an upper stage engine failure and deployed a batch of Starlink Internet satellites into a perilously low orbit after launch from California Thursday night, the first blemish on the workhorse launcher’s record in more than 300 missions since 2016.

Elon Musk, SpaceX’s founder and CEO, posted on X that the rocket’s upper stage engine failed when it attempted to reignite nearly an hour after the Falcon 9 lifted off from Vandenberg Space Force Base, California, at 7:35 pm PDT (02:35 UTC).

Frosty evidence

After departing Vandenberg to begin SpaceX’s Starlink 9-3 mission, the rocket’s reusable first stage booster propelled the Starlink satellites into the upper atmosphere, then returned to Earth for an on-target landing on a recovery ship parked in the Pacific Ocean. A single Merlin Vacuum engine on the rocket’s second stage fired for about six minutes to reach a preliminary orbit.

A few minutes after liftoff of SpaceX’s Starlink 9-3 mission, veteran observers of SpaceX launches noticed an unusual build-up of ice around the top of the Merlin Vacuum engine, which consumes a propellant mixture of super-chilled kerosene and cryogenic liquid oxygen. The liquid oxygen is stored at a temperature of several hundred degrees Fahrenheit below zero.

Numerous chunks of ice fell away from the rocket as the upper stage engine powered into orbit, but the Merlin Vacuum, or M-Vac, engine appeared to complete its first burn as planned. A leak in the oxidizer system or a problem with insulation could lead to ice accumulation, although the exact cause, and its possible link to the engine malfunction later in flight, will be the focus of SpaceX’s investigation into the failure.

A second burn with the upper stage engine was supposed to raise the perigee, or low point, of the rocket’s orbit well above the atmosphere before releasing 20 Starlink satellites to continue climbing to their operational altitude with their own propulsion.

“Upper stage restart to raise perigee resulted in an engine RUD for reasons currently unknown,” Musk wrote in an update two hours after the launch. RUD (rapid unscheduled disassembly) is a term of art in rocketry that usually signifies a catastrophic or explosive failure.

“Team is reviewing data tonight to understand root cause,” Musk continued. “Starlink satellites were deployed, but the perigee may be too low for them to raise orbit. Will know more in a few hours.”

Telemetry from the Falcon 9 rocket indicated it released the Starlink satellites into an orbit with a perigee just 86 miles (138 kilometers) above Earth, roughly 100 miles (150 kilometers) lower than expected, according to Jonathan McDowell, an astrophysicist and trusted tracker of spaceflight activity. Detailed orbital data from the US Space Force was not immediately available.

Ripple effects

While ground controllers scrambled to salvage the 20 Starlink satellites, SpaceX engineers began probing what went wrong with the second stage’s M-Vac engine. For SpaceX and its customers, the investigation into the rocket malfunction is likely the more pressing matter.

SpaceX could absorb the loss of 20 Starlink satellites relatively easily. The company’s satellite assembly line can produce 20 Starlink spacecraft in a few days. But the Falcon 9 rocket’s dependability and high flight rate have made it a workhorse for NASA, the US military, and the wider space industry. An investigation will probably delay several upcoming SpaceX flights.

The first in-flight failure for SpaceX’s Falcon rocket family since June 2015, a streak of 344 consecutive successful launches until tonight.

A lot of unusual ice was observed on the Falcon 9’s upper stage during its first burn tonight, some of it falling into the engine plume.

— Stephen Clark (@StephenClark1) July 12, 2024

Depending on the cause of the problem and what SpaceX must do to fix it, it’s possible the company can recover from the upper stage failure and resume launching Starlink satellites soon. Most of SpaceX’s launches aren’t for external customers, but deploy satellites for the company’s own Starlink network. This gives SpaceX a unique flexibility to quickly return to flight with the Falcon 9 without needing to satisfy customer concerns.

The Federal Aviation Administration, which licenses all commercial space launches in the United States, will require SpaceX to conduct a mishap investigation before resuming Falcon 9 flights.

“The FAA will be involved in every step of the investigation process and must approve SpaceX’s final report, including any corrective actions,” an FAA spokesperson said. “A return to flight is based on the FAA determining that any system, process, or procedure related to the mishap does not affect public safety.”

Two crew missions are supposed to launch on SpaceX’s human-rated Falcon 9 rocket in the next six weeks, but those launch dates are now in doubt.

The all-private Polaris Dawn mission, commanded by billionaire Jared Isaacman, is scheduled to launch on a Falcon 9 rocket on July 31 from NASA’s Kennedy Space Center in Florida. Isaacman and three commercial astronaut crewmates will spend five days in orbit on a mission that will include the first commercial spacewalk outside their Crew Dragon capsule, using new pressure suits designed and built by SpaceX.

NASA’s next crew mission with SpaceX is slated to launch from Florida aboard a Falcon 9 rocket around August 19. This team of four astronauts will replace a crew of four who have been on the International Space Station since March.

Some customers, especially NASA’s commercial crew program, will likely want to see the results of an in-depth inquiry and require SpaceX to string together a series of successful Falcon 9 flights with Starlink satellites before clearing their own missions for launch. SpaceX has already launched 70 flights with its Falcon family of rockets since January 1, an average cadence of one launch every 2.7 days, more than the combined number of orbital launches by all other nations this year.
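As a rough check on that cadence figure (dates approximated; the failure came on the night of July 11, Pacific time):

```python
from datetime import date

# Rough check of the stated launch cadence; dates are approximate
flights = 70
days_elapsed = (date(2024, 7, 11) - date(2024, 1, 1)).days  # ~192 days
print(f"{days_elapsed / flights:.1f} days per launch")  # ~2.7
```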

With this rapid-fire launch cadence, SpaceX could quickly demonstrate the fitness of any fixes engineers recommend to resolve the problem that caused Thursday night’s failure. But investigations into rocket failures often take weeks or months. It was too soon, early on Friday, to know the true impact of the upper stage malfunction on SpaceX’s launch schedule.

SpaceX’s unmatched streak of perfection with the Falcon 9 rocket is over Read More »

Scientists built real-life “stillsuit” to recycle astronaut urine on space walks

The Fremen on Arrakis wear full-body “stillsuits” that recycle absorbed sweat and urine into potable water.

Warner Bros.

The Fremen who inhabit the harsh desert world of Arrakis in Frank Herbert’s Dune must rely on full-body “stillsuits” for their survival, which recycle absorbed sweat and urine into potable water. Now science fiction is on the verge of becoming science fact: Researchers from Cornell University have designed a prototype stillsuit for astronauts that will recycle their urine into potable water during spacewalks, according to a new paper published in the journal Frontiers in Space Technologies.

Herbert provided specific details about the stillsuit’s design when planetologist Liet Kynes explained the technology to Duke Leto Atreides I:

It’s basically a micro-sandwich—a high-efficiency filter and heat-exchange system. The skin-contact layer’s porous. Perspiration passes through it, having cooled the body … near-normal evaporation process. The next two layers … include heat exchange filaments and salt precipitators. Salt’s reclaimed. Motions of the body, especially breathing and some osmotic action provide the pumping force. Reclaimed water circulates to catchpockets from which you draw it through this tube in the clip at your neck… Urine and feces are processed in the thigh pads. In the open desert, you wear this filter across your face, this tube in the nostrils with these plugs to ensure a tight fit. Breathe in through the mouth filter, out through the nose tube. With a Fremen suit in good working order, you won’t lose more than a thimbleful of moisture a day…

The Illustrated Dune Encyclopedia interpreted the stillsuit as something akin to a hazmat suit, without the full face covering. In David Lynch’s 1984 film, Dune, the stillsuits were organic and very form-fitting compared to the book description, almost like a second skin. The stillsuits in Denis Villeneuve’s most recent film adaptations (Dune Part 1 and Part 2) tried to hew more closely to the source material, with “micro-sandwiches” of acrylic fibers and porous cottons and embedded tubes for better flexibility.

In David Lynch’s 1984 film, Dune, the stillsuits were organic and very form-fitting.

Universal Pictures

The Cornell team is not the first to try to build a practical stillsuit. Hacksmith Industries did a “one day build” of a stillsuit just last month, having previously tackled Thor’s Stormbreaker ax, Captain America’s electromagnetic shield, and a plasma-powered lightsaber, among other projects. The Hacksmith team dispensed with the icky urine and feces recycling aspects and focused on recycling sweat and moisture from breath.

Their version consists of a waterproof baggy suit (switched out for a more form-fitting bunny suit in the final version) with a battery-powered heat exchanger in the back. Any humidity condenses on the suit’s surface and drips into a bottle attached to a CamelBak bladder. There’s a filter mask attached to a tube that allows the wearer to breathe in filtered air, but it’s one way; the exhaled air is redirected to the condenser so the water content can be harvested into the CamelBak bladder and then sent back to the mask so the user can drink it. It’s not even close to achieving Herbert’s stated thimbleful a day in terms of efficiency since it mostly recycles moisture from sweat on the wearer’s back. But it worked.

Scientists built real-life “stillsuit” to recycle astronaut urine on space walks Read More »

NASA’s flagship mission to Europa has a problem: Vulnerability to radiation

Tripping transistors —

“What keeps me awake right now is the uncertainty.”

An artist’s illustration of the Europa Clipper spacecraft during a flyby close to Jupiter’s icy moon.

The launch date for the Europa Clipper mission to study the intriguing moon orbiting Jupiter, which ranks alongside the Cassini spacecraft to Saturn as NASA’s most expensive and ambitious planetary science mission, is now in doubt.

The $4.25 billion spacecraft had been due to launch in October on a Falcon Heavy rocket from Kennedy Space Center in Florida. However, NASA revealed that transistors on board the spacecraft may not be as radiation-hardened as they were believed to be.

“The issue with the transistors came to light in May when the mission team was advised that similar parts were failing at lower radiation doses than expected,” the space agency wrote in a blog post Thursday afternoon. “In June 2024, an industry alert was sent out to notify users of this issue. The manufacturer is working with the mission team to support ongoing radiation test and analysis efforts in order to better understand the risk of using these parts on the Europa Clipper spacecraft.”

The moons orbiting Jupiter, a massive gas giant planet, exist in one of the harshest radiation environments in the Solar System. NASA’s initial testing indicates that some of the transistors, which regulate the flow of energy through the spacecraft, could fail in this environment. NASA is currently evaluating the possibility of maximizing the transistor lifetime at Jupiter and expects to complete a preliminary analysis in late July.

To delay or not to delay

NASA’s update is silent on whether the spacecraft could still make its approximately three-week launch window this year, which gets Clipper to the Jovian system in 2030.

Ars reached out to several experts familiar with the Clipper mission to gauge the likelihood that it would make the October launch window, and opinions were mixed. The consensus view was that there is a 40 to 60 percent chance of NASA becoming comfortable enough with the issue to launch this fall. If NASA engineers cannot become confident with the existing setup, the transistors would need to be replaced.

The Clipper mission has launch opportunities in 2025 and 2026, but these would come with additional delays because of the need for multiple gravitational assists. The 2024 launch follows a “MEGA” (Mars-Earth Gravitational Assist) trajectory, with a Mars flyby in 2025 and an Earth flyby in late 2026. If Clipper launches a year late, it would need a second Earth flyby; a launch in 2026 would revert to a MEGA trajectory. Ars has asked NASA for timelines of launches in 2025 and 2026 and will update if they provide this information.

Another negative result of delays would be costs, as keeping the mission on the ground for another year likely would result in another few hundred million dollars in expenses for NASA, which would blow a hole in its planetary science budget.

NASA’s blog post this week is not the first time the space agency has publicly mentioned these issues with the metal-oxide-semiconductor field-effect transistor, or MOSFET. At a meeting of the Space Studies Board in early June, Jordan Evans, project manager for the Europa Clipper Mission, said it was his No. 1 concern ahead of launch.

“What keeps me awake at night”

“The most challenging thing we’re dealing with right now is an issue associated with these transistors, MOSFETs, that are used as switches in the spacecraft,” he said. “Five weeks ago today, I got an email that a non-NASA customer had done some testing on these rad-hard parts and found that they were going before (the specifications), at radiation levels significantly lower than what we qualified them to as we did our parts procurement, and others in the industry had as well.”

At the time, Evans said things were “trending in the right direction” with regard to the agency’s analysis of the issue. It seems unlikely that NASA would have put out a blog post five weeks later if the issue were still moving steadily toward a resolution.

“What keeps me awake right now is the uncertainty associated with the MOSFETs and the residual risk that we will take on with that,” Evans said in June. “It’s difficult to do the kind of low-dose rate testing in the timeframes that we have until launch. So we’re gathering as much data as we can, including from missions like Juno, to better understand what residual risk we might launch with.”

These are precisely the kinds of issues that scientists and engineers don’t want to find in the final months before the launch of such a consequential mission. The stakes are incredibly high—imagine making the call to launch Clipper only to have the spacecraft fail six years later upon arrival at Jupiter.

NASA’s flagship mission to Europa has a problem: Vulnerability to radiation Read More »

Much of Neanderthal genetic diversity came from modern humans

A large, brown-colored skull seen in profile against a black background.

The basic outline of the interactions between modern humans and Neanderthals is now well established. The two came in contact as modern humans began their major expansion out of Africa, which occurred roughly 60,000 years ago. Humans picked up some Neanderthal DNA through interbreeding, while the Neanderthal population, always fairly small, was swept away by the waves of new arrivals.

But there are some aspects of this big-picture view that don’t entirely line up with the data. While it nicely explains the fact that Neanderthal sequences are far more common in non-African populations, it doesn’t account for the fact that every African population we’ve looked at has some DNA that matches up with Neanderthal DNA.

A study published on Thursday argues that much of this match came about because an early modern human population also left Africa and interbred with Neanderthals. But in this case, the result was to introduce modern human DNA to the Neanderthal population. The study shows that this DNA accounts for a lot of Neanderthals’ genetic diversity, suggesting that their population was even smaller than earlier estimates had suggested.

Out of Africa early

This study isn’t the first to suggest that modern humans and their genes met Neanderthals well in advance of our major out-of-Africa expansion. The key to understanding this is the genome of a Neanderthal from the Altai region of Siberia, which dates from roughly 120,000 years ago. That’s well before modern humans expanded out of Africa, yet its genome has some regions that have excellent matches to the human genome but are absent from the Denisovan lineage.

One explanation for this is that these are segments of Neanderthal DNA that were later picked up by the population that expanded out of Africa. The problem with that view is that most of these sequences also show up in African populations. So, researchers advanced the idea that an ancestral population of modern humans left Africa about 200,000 years ago, and some of its DNA was retained by Siberian Neanderthals. That’s consistent with some fossil finds that place anatomically modern humans in the Mideast at roughly the same time.

There is, however, an alternative explanation: Some of the population that expanded out of Africa 60,000 years ago and picked up Neanderthal DNA migrated back to Africa, taking the Neanderthal DNA with them. That has led to a small bit of the Neanderthal DNA persisting within African populations.

To sort this all out, a research team based at Princeton University focused on the Neanderthal DNA found in Africans, taking advantage of the fact that we now have a much larger array of completed human genomes (approximately 2,000 of them).

The work was based on a simple hypothesis. All of our work on Neanderthal DNA indicates that their population was relatively small, and thus had less genetic diversity than modern humans did. If that’s the case, then the addition of modern human DNA to the Neanderthal population should have boosted its genetic diversity. If so, then the stretches of “Neanderthal” DNA found in African populations should include some of the more diverse regions of the Neanderthal genome.
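That comparison is simple to sketch. The toy example below is not the study’s pipeline; the haplotype windows and the diversity measure (average pairwise differences per site) are purely illustrative of the logic: windows of the Neanderthal genome that also show up as “Neanderthal” DNA in Africans should score higher.

```python
from itertools import combinations

def nucleotide_diversity(haplotypes):
    """Average pairwise differences per site among sampled haplotype windows."""
    n_sites = len(haplotypes[0])
    pairs = list(combinations(haplotypes, 2))
    diffs = sum(sum(a != b for a, b in zip(h1, h2)) for h1, h2 in pairs)
    return diffs / (len(pairs) * n_sites)

# Toy Neanderthal haplotypes for one genomic window (0/1 = ancestral/derived allele)
window_shared_with_africans = [[0, 1, 0, 1, 1], [1, 0, 0, 1, 0], [0, 1, 1, 0, 1]]
window_not_shared           = [[0, 0, 0, 1, 0], [0, 0, 0, 1, 0], [0, 0, 1, 1, 0]]

print(nucleotide_diversity(window_shared_with_africans))  # higher, ~0.67
print(nucleotide_diversity(window_not_shared))            # lower, ~0.13
```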

Much of Neanderthal genetic diversity came from modern humans Read More »

Giant salamander species found in what was thought to be an icy ecosystem

Feeding time —

Found after its kind were thought extinct, and where it was thought to be too cold.

A black background with a brown fossil at the center, consisting of the head and a portion of the vertebral column.

C. Marsicano

Gaiasia jennyae, a newly discovered freshwater apex predator with a body length reaching 4.5 meters, lurked in the swamps and lakes around 280 million years ago. Its wide, flattened head had powerful jaws full of huge fangs, ready to capture any prey unlucky enough to swim past.

The problem is, to the best of our knowledge, it shouldn’t have been that large, should have been extinct tens of millions of years before the time it apparently lived, and shouldn’t have been found in northern Namibia. “Gaiasia is the first really good look we have at an entirely different ecosystem we didn’t expect to find,” says Jason Pardo, a postdoctoral fellow at Field Museum of Natural History in Chicago. Pardo is co-author of a study on the Gaiasia jennyae discovery recently published in Nature.

Common ancestry

“Tetrapods were the animals that crawled out of the water around 380 million years ago, maybe a little earlier,” Pardo explains. These ancient creatures, also known as stem tetrapods, were the common ancestors of modern reptiles, amphibians, mammals, and birds. “Those animals lived up to what we call the end of the Carboniferous, about 370–300 million years ago. Few made it through, and they lasted longer, but they mostly went extinct around 370 million years ago,” he adds.

This is why the discovery of Gaiasia jennyae in the 280 million-year-old rocks of Namibia was so surprising. Not only wasn’t it extinct when the rocks it was found in were laid down, but it was dominating its ecosystem as an apex predator. By today’s standards, it was like stumbling upon a secluded island hosting animals that should have been dead for 70 million years, like a living, breathing T-rex.

“The skull of gaiasia we have found is about 67 centimeters long. We also have a front end of her upper body. We know she was at minimum 2.5 meters long, probably 3.5, 4.5 meters—big head and a long, salamander-like body,” says Pardo. He told Ars that gaiasia was a suction feeder: she opened her jaws under water, which created a vacuum that sucked her prey right in. But the large, interlocked fangs reveal that a powerful bite was also one of her weapons, probably used to hunt bigger animals. “We suspect gaiasia fed on bony fish, freshwater sharks, and maybe even other, smaller gaiasia,” says Pardo, suggesting it was a rather slow, ambush-based predator.

But considering where it was found, the fact that it had enough prey to ambush is perhaps even more of a shocker than the animal itself.

Location, location, location

“Continents were organized differently 270–280 million years ago,” says Pardo. Back then, one megacontinent called Pangea had already broken into two supercontinents. The northern supercontinent called Laurasia included parts of modern North America, Russia, and China. The southern supercontinent, the home of gaiasia, was called Gondwana, which consisted of today’s India, Africa, South America, Australia, and Antarctica. And Gondwana back then was pretty cold.

“Some researchers hypothesize that the entire continent was covered in glacial ice, much like we saw in North America and Europe during the ice ages 10,000 years ago,” says Pardo. “Others claim that it was more patchy—there were those patches where ice was not present,” he adds. Still, 280 million years ago, northern Namibia was around 60 degrees southern latitude—roughly where the northernmost reaches of Antarctica are today.

“Historically, we thought tetrapods [of that time] were living much like modern crocodiles. They were cold-blooded, and if you are cold-blooded the only way to get large and maintain activity would be to be in a very hot environment. We believed such animals couldn’t live in colder environments. Gaiasia shows that it is absolutely not the case,” Pardo claims. And this turned much of what we thought we knew about life on Earth in gaiasia’s time upside down.

Giant salamander species found in what was thought to be an icy ecosystem Read More »

Frozen mammoth skin retained its chromosome structure

Artist's depiction of a large mammoth with brown fur and huge, curving tusks in an icy, tundra environment.

One of the challenges of working with ancient DNA samples is that damage accumulates over time, breaking up the structure of the double helix into ever smaller fragments. In the samples we’ve worked with, these fragments scatter and mix with contaminants, making reconstructing a genome a large technical challenge.

But a dramatic paper released on Thursday shows that this isn’t always true. Damage does create progressively smaller fragments of DNA over time. But, if they’re trapped in the right sort of material, they’ll stay right where they are, essentially preserving some key features of ancient chromosomes even as the underlying DNA decays. Researchers have now used that to detail the chromosome structure of mammoths, with some implications for how these mammals regulated some key genes.

DNA meets Hi-C

The backbone of DNA’s double helix consists of alternating sugars and phosphates, chemically linked together (the bases of DNA are chemically linked to these sugars). Damage from things like radiation can break these chemical linkages, with fragmentation increasing over time. When samples reach the age of something like a Neanderthal, very few fragments are longer than 100 base pairs. Since chromosomes are millions of base pairs long, it was thought that this would inevitably destroy their structure, as many of the fragments would simply diffuse away.

But that will only be true if the medium they’re in allows diffusion. And some scientists suspected that permafrost, which preserves the tissue of some now-extinct Arctic animals, might block that diffusion. So, they set out to test this using mammoth tissues, obtained from a sample termed YakInf that’s roughly 50,000 years old.

The challenge is that the molecular techniques we use to probe chromosomes take place in liquid solutions, where fragments would just drift away from each other in any case. So, the team focused on an approach termed Hi-C, which specifically preserves information about which bits of DNA were close to each other. It does this by exposing chromosomes to a chemical that will link any pieces of DNA that are in close physical proximity. So, even if those pieces are fragments, they’ll be stuck to each other by the time they end up in a liquid solution.

A few enzymes are then used to convert these linked molecules to a single piece of DNA, which is then sequenced. This data, which will contain sequence information from two different parts of the genome, then tells us that those parts were once close to each other inside a cell.

Interpreting Hi-C

On its own, a single bit of data like this isn’t especially interesting; two bits of genome might end up next to each other at random. But when you have millions of bits of data like this, you can start to construct a map of how the genome is structured.

There are two basic rules governing the pattern of interactions we’d expect to see. The first is that interactions within a chromosome are going to be more common than interactions between two chromosomes. And, within a chromosome, parts that are physically closer to each other on the molecule are more likely to interact than those that are farther apart.

So, if you are looking at a specific segment of, say, chromosome 12, most of the locations Hi-C will find it interacting with will also be on chromosome 12. And the frequency of interactions will go up as you move to sequences that are ever closer to the one you’re interested in.

On its own, you can use Hi-C to help reconstruct a chromosome even if you start with nothing but fragments. But the exceptions to the expected pattern also tell us things about biology. For example, genes that are active tend to be on loops of DNA, with the two ends of the loop held together by proteins; the same is true for inactive genes. Interactions within these loops tend to be more frequent than interactions between them, subtly altering the frequency with which two fragments end up linked together during Hi-C.
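A toy sketch of that bookkeeping (the contact pairs below are invented for illustration, not the mammoth data): tally linked pairs into counts and note that pairs on the same chromosome, close together, dominate.

```python
from collections import Counter

# Each Hi-C read pair says two loci were physically close in the nucleus.
# Loci are (chromosome, bin) tuples; these example pairs are invented.
contacts = [
    (("chr12", 3), ("chr12", 4)),  # nearby on the same chromosome: common
    (("chr12", 3), ("chr12", 4)),
    (("chr12", 3), ("chr12", 5)),
    (("chr12", 3), ("chr12", 9)),  # same chromosome, farther apart: rarer
    (("chr12", 3), ("chr1", 7)),   # different chromosomes: rarest
]

counts = Counter(tuple(sorted(pair)) for pair in contacts)
for (locus_a, locus_b), n in counts.most_common():
    same_chrom = locus_a[0] == locus_b[0]
    distance = abs(locus_a[1] - locus_b[1]) if same_chrom else None
    print(locus_a, locus_b, "| contacts:", n,
          "| same chromosome:", same_chrom, "| bin distance:", distance)
```

With millions of such pairs, the decay of contact frequency with distance lets you order fragments along a chromosome, and systematic departures from that decay flag features like loops.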

Frozen mammoth skin retained its chromosome structure Read More »

To help with climate change, carbon capture will have to evolve

gotta catch more —

The technologies are useful tools but have yet to move us away from fossil fuels.

Bioreactors that host algae would be one option for carbon sequestration—as long as the carbon is stored somehow.

More than 200 kilometers off Norway’s coast in the North Sea sits the world’s first offshore carbon capture and storage project. Built in 1996, the Sleipner project strips carbon dioxide from natural gas—largely made up of methane—to make it marketable. But instead of releasing the CO2 into the atmosphere, the greenhouse gas is buried.

The effort stores around 1 million metric tons of CO2 per year—and is praised by many as a pioneering success in global attempts to cut greenhouse gas emissions.

Last year, total global CO2 emissions hit an all-time high of around 35.8 billion tons, or gigatons. At these levels, scientists estimate, we have roughly six years left before we emit so much CO2 that global warming will consistently exceed 1.5° Celsius above average preindustrial temperatures, an internationally agreed-upon limit. (Notably, the global average temperature for the past 12 months has exceeded this threshold.)
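The “roughly six years” figure is just a remaining carbon budget divided by the current emission rate. The budget value below is an assumption chosen to match the article’s framing, not an official number, though published 1.5° C budgets are of the same few-hundred-gigaton order.

```python
# Back-of-the-envelope: years left = remaining carbon budget / annual emissions.
# The budget value is an assumption for illustration, not an official figure.
annual_emissions_gt = 35.8   # gigatons of CO2 emitted in 2023 (from the article)
remaining_budget_gt = 215.0  # assumed remaining budget for 1.5 degrees C, in Gt CO2

years_left = remaining_budget_gt / annual_emissions_gt
print(f"{years_left:.1f} years at current emissions")  # ~6.0
```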

Phasing out fossil fuels is key to cutting emissions and fighting climate change. But a suite of technologies collectively known as carbon capture, utilization and storage, or CCUS, are among the tools available to help meet global targets to cut CO2 emissions in half by 2030 and to reach net-zero emissions by 2050. These technologies capture, use or store away CO2 emitted by power generation or industrial processes, or suck it directly out of the air. The Intergovernmental Panel on Climate Change (IPCC), the United Nations body charged with assessing climate change science, includes carbon capture and storage among the actions needed to slash emissions and meet temperature targets.

Carbon capture, utilization and storage technologies often capture CO2 from coal or natural gas power generation or industrial processes, such as steel manufacturing. The CO2 is compressed into a liquid under high pressure and transported through pipelines to sites where it may be stored, in porous sedimentary rock formations containing saltwater, for example, or used for other purposes. The captured CO2 can be injected into the ground to extract oil dregs or used to produce cement and other products.

Governments and industry are betting big on such projects. Last year, for example, the British government announced 20 billion pounds (more than $25 billion) in funding for CCUS, often shortened to CCS. The United States allocated more than $5 billion between 2011 and 2023 and committed an additional $8.2 billion from 2022 to 2026. Globally, public funding for CCUS projects rose to $20 billion in 2023, according to the International Energy Agency (IEA), which works with countries around the world to forge energy policy.

Given the urgency of the situation, many people argue that CCUS is necessary to move society toward climate goals. But critics don’t see the technology, in its current form, shifting the world away from oil and gas: In a lot of cases, they point out, the captured CO2 is used to extract more fossil fuels in a process known as enhanced oil recovery. They contend that other existing solutions such as renewable energy offer deeper and quicker CO2 emissions cuts. “It’s better not to emit in the first place,” says Grant Hauber, an energy finance adviser at the Institute for Energy Economics and Financial Analysis, a nonpartisan organization in Lakewood, Ohio.

What’s more, fossil fuel companies provide funds to universities and researchers—which some say could shape what is studied and what is not, even if the work of individual scientists is legitimate. For these reasons, some critics say CCUS shouldn’t be pursued at all.

“Carbon capture and storage essentially perpetuates fossil fuel reliance. It’s a distraction and a delay tactic,” says Jennie Stephens, a climate justice researcher at Northeastern University in Boston. She adds that there is little focus on understanding the psychological, social, economic, and political barriers that prevent communities from shifting away from fossil fuels and forging solutions to those obstacles.

According to the Global CCS Institute, an industry-led think tank headquartered in Melbourne, Australia, of the 41 commercial projects operational as of July 2023, most were part of efforts that produce, extract, or burn fossil fuels, such as coal- and gas-fired power plants. That’s true of the Sleipner project, run by the energy company Equinor. It’s the case, too, with the world’s largest CCUS facility, operated by ExxonMobil in Wyoming, in the United States, which also captures CO2 as part of the production of methane.

Granted, not all CCUS efforts further fossil fuel production, and many projects now in the works have the sole goal of capturing and locking up CO2. Still, some critics doubt whether these greener approaches could ever lock away enough CO2 to meaningfully contribute to climate mitigation, and they are concerned about the costs.

Others are more circumspect. Sally Benson, an energy researcher at Stanford University, doesn’t want to see CCUS used as an excuse to carry on with fossil fuels. But she says the technology is essential for capturing some of the CO2 from fossil fuel production and usage, as well as from industrial processes, as society transitions to new energy sources. “If we can get rid of those emissions with carbon capture and sequestration, that sounds like success to me,” says Benson, who codirects an institute that receives funding from fossil fuel companies.

To help with climate change, carbon capture will have to evolve Read More »

NASA update on Starliner thruster issues: This is fine

Boeing’s Starliner spacecraft on final approach to the International Space Station last month.

Before clearing Boeing’s Starliner crew capsule to depart the International Space Station and head for Earth, NASA managers want to ensure the spacecraft’s problematic control thrusters can help guide the ship’s two-person crew home.

The two astronauts who launched June 5 on the Starliner spacecraft’s first crew test flight agree with the managers, although they said Wednesday that they’re comfortable with flying the capsule back to Earth if there’s any emergency that might require evacuation of the space station.

NASA astronauts Butch Wilmore and Suni Williams were supposed to return to Earth weeks ago, but managers are keeping them at the station as engineers continue probing thruster problems and helium leaks that have plagued the mission since its launch.

“This is a tough business that we’re in,” Wilmore, Starliner’s commander, told reporters Wednesday in a news conference from the space station. “Human spaceflight is not easy in any regime, and there have been multiple issues with any spacecraft that’s ever been designed, and that’s the nature of what we do.”

Five of the 28 reaction control system thrusters on Starliner’s service module dropped offline as the spacecraft approached the space station last month. Starliner’s flight software disabled the five control jets when they started overheating and losing thrust. Four of the thrusters were later recovered, although some couldn’t reach their full power levels as Starliner came in for docking.

Wilmore, who took over manual control for part of Starliner’s approach to the space station, said he could sense the spacecraft’s handling qualities diminish as thrusters temporarily failed. “You could tell it was degraded, but still, it was impressive,” he said. Starliner ultimately docked to the station in autopilot mode.

In mid-June, the Starliner astronauts hot-fired the thrusters again, and their thrust levels were closer to normal.

“What we want to know is that the thrusters can perform; if whatever their percentage of thrust is, we can put it into a package that will get us a deorbit burn,” said Williams, a NASA astronaut serving as Starliner’s pilot. “That’s the main purpose that we need [for] the service module: to get us a good deorbit burn so that we can come back.”

These small thrusters aren’t necessary for the deorbit burn itself, which will use a different set of engines to slow Starliner’s velocity enough for it to drop out of orbit and head for landing. But Starliner needs enough of the control jets working to maneuver into the proper orientation for the deorbit firing.

This test flight is the first time astronauts have flown in space on Boeing’s Starliner spacecraft, following years of delays and setbacks. Starliner is NASA’s second human-rated commercial crew capsule, and it’s poised to join SpaceX’s Crew Dragon in a rotation of missions ferrying astronauts to and from the space station through the rest of the decade.

But first, Boeing and NASA need to safely complete the Starliner test flight and resolve the thruster problems and helium leaks plaguing the spacecraft before moving forward with operational crew rotation missions. There’s a Crew Dragon spacecraft currently docked to the station, but Steve Stich, NASA’s commercial crew program manager, told reporters Wednesday that, right now, Wilmore and Williams still plan to come home on Starliner.

“The beautiful thing about the commercial crew program is that we have two vehicles, two different systems, that we could use to return crew,” Stich said. “So we have a little bit more time to go through the data and then make a decision as to whether we need to do anything different. But the prime option today is to return Butch and Suni on Starliner. Right now, we don’t see any reason that wouldn’t be the case.”

Mark Nappi, Boeing’s Starliner program manager, said officials identified more than 30 actions to investigate five “small” helium leaks and the thruster problems on Starliner’s service module. “All these items are scheduled to be completed by the end of next week,” Nappi said.

“It’s a test flight, and the first with crew, and we’re just taking a little extra time to make sure that we understand everything before we commit to deorbit,” Stich said.

NASA update on Starliner thruster issues: This is fine Read More »

Congress apparently feels a need for “reaffirmation” of SLS rocket

Stuart Smalley is here to help with daily affirmations of SLS.

Aurich Lawson | SNL

There is a curious section in the new congressional reauthorization bill for NASA that concerns the agency’s large Space Launch System rocket.

The section is titled “Reaffirmation of the Space Launch System,” and in it Congress asserts its commitment to a flight rate of twice per year for the rocket. The reauthorization legislation, which cleared a House committee on Wednesday, also said NASA should identify other customers for the rocket.

“The Administrator shall assess the demand for the Space Launch System by entities other than NASA and shall break out such demand according to the relevant Federal agency or nongovernment sector,” the legislation states.

Congress directs NASA to report back, within 180 days of the legislation passing, on several topics. First, the legislators want an update on NASA’s progress toward achieving a flight rate of twice per year for the SLS rocket, and the Artemis mission by which this capability will be in place.

Additionally, Congress is asking for NASA to study demand for the SLS rocket and estimate “cost and schedule savings for reduced transit times” for deep space missions due to the “unique capabilities” of the rocket. The space agency also must identify any “barriers or challenges” that could impede use of the rocket by entities other than NASA, and estimate the cost of overcoming those barriers.

Is someone afraid?

There is a fair bit to unpack here, but the inclusion of this section—there is no “reaffirmation” of the Orion spacecraft, for example—suggests that either the legacy space companies building the SLS rocket, local legislators, or both feel the need to protect the SLS rocket. As one source on Capitol Hill familiar with the legislation told Ars, “It’s a sign that somebody’s afraid.”

Congress created the SLS rocket 14 years ago with the NASA Authorization Act of 2010. The large rocket kept a river of contracts flowing to large aerospace companies, including Boeing and Northrop Grumman, who had been operating the Space Shuttle. Congress then lavished tens of billions of dollars on the contractors over the years for development, often authorizing more money than NASA said it needed. Congressional support was unwavering, at least in part because the SLS program boasts that it has jobs in every state.

Under the original law, the SLS rocket was supposed to achieve “full operational capability” by the end of 2016. The first launch of the SLS vehicle did not take place until late 2022, six years later. It was entirely successful. However, for various reasons, the rocket will not fly again until September 2025 at the earliest.

Congress apparently feels a need for “reaffirmation” of SLS rocket Read More »

Nearby star cluster houses unusually large black hole

Big, but not that big —

Fast-moving stars imply that there’s an intermediate-mass black hole there.

From left to right, zooming in from the globular cluster to the site of its black hole.

ESA/Hubble & NASA, M. Häberle

Supermassive black holes appear to reside at the center of every galaxy and to have done so since galaxies formed early in the history of the Universe. Currently, however, we can’t entirely explain their existence, since it’s difficult to understand how they could have grown to supermassive scales as quickly as they did.

A possible bit of evidence was recently found using about 20 years of data from the Hubble Space Telescope. The data comes from a globular cluster of stars that’s thought to be the remains of a dwarf galaxy and shows that a group of stars near the cluster’s core are moving so fast that they should have been ejected from it entirely. That implies that something massive is keeping them there, which the researchers argue is a rare intermediate-mass black hole, weighing in at over 8,000 times the mass of the Sun.

Moving fast

The fast-moving stars reside in Omega Centauri, the largest globular cluster in the Milky Way. With an estimated 10 million stars, it’s a crowded environment, but observations are aided by its relative proximity, at “only” 17,000 light-years away. Those observations have been hinting that there might be a central black hole within the globular cluster, but the evidence has not been decisive.

The new work, done by a large international team, used over 500 images of Omega Centauri, taken by the Hubble Space Telescope over the course of 20 years. This allowed them to track the motion of stars within the cluster and estimate their speed relative to the cluster’s center of mass. While this has been done previously, the most recent data allowed an update that reduced the uncertainty in the stars’ velocity.
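For a sense of why two decades of imaging were needed: Hubble measures proper motions (angular drift on the sky), which convert to a transverse speed via v ≈ 4.74 × μ × d, with μ in arcseconds per year and d in parsecs. The values below are illustrative assumptions, not the study’s measurements.

```python
# Transverse speed from proper motion: v [km/s] ~ 4.74 * mu [arcsec/yr] * d [pc].
# The proper motion here is an assumed, illustrative value.
K = 4.74                     # km/s per (arcsec/yr * parsec)
distance_pc = 17_000 / 3.26  # ~5,200 pc, from Omega Centauri's ~17,000 light-years
mu_arcsec_per_yr = 1.5e-3    # assumed proper motion of 1.5 milliarcseconds per year

v_kms = K * mu_arcsec_per_yr * distance_pc
print(f"transverse speed ~ {v_kms:.0f} km/s")  # ~37 km/s
```

At that rate, a star shifts by only a few hundredths of an arcsecond over two decades, which is why such a long baseline of precise imaging is needed.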

Within the updated data, a number of stars near the cluster’s center stood out for their extreme velocities: seven of them were moving fast enough that the gravitational pull of the cluster isn’t enough to keep them there. All seven should have been lost from the cluster within 1,000 years, although the uncertainties remained large for two of them. Based on the size of the cluster, there shouldn’t even be a single foreground star between Hubble and Omega Centauri, so these really seem to be within the cluster despite their velocity.

The simplest explanation for that is that there’s an additional mass holding them in place. That could potentially be several massive objects, but the close proximity of all these stars to the center of the cluster favors a single, compact object. Which means a black hole.

Based on the velocities, the researchers estimate that the object has a mass of at least 8,200 times that of the Sun. A couple of stars appear to be accelerating; if that holds up based on further observations, it would indicate that the black hole is over 20,000 solar masses. That places it firmly within black hole territory, though smaller than supermassive black holes, which are viewed as those with roughly a million solar masses or more. And it’s considerably larger than you’d expect from black holes formed through the death of a star, which aren’t expected to be much larger than 100 times the Sun’s mass.
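The mass bound comes from requiring that the fast stars remain gravitationally bound: a star at distance r from a central mass M escapes if its speed exceeds sqrt(2GM/r), so keeping a star of speed v bound requires M of at least v²r/(2G). The speed and radius below are assumptions for illustration, not the paper’s measured values.

```python
# Minimum central mass needed to keep a fast-moving star bound:
# bound requires v < sqrt(2*G*M/r), i.e. M >= v**2 * r / (2*G).
# The speed and radius are illustrative assumptions.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
PARSEC = 3.086e16  # meters per parsec

v = 40e3           # assumed stellar speed: 40 km/s
r = 0.05 * PARSEC  # assumed distance from the cluster center: 0.05 parsec

m_min = v**2 * r / (2 * G)
print(f"minimum enclosed mass ~ {m_min / M_SUN:,.0f} solar masses")  # ~9,300
```

Plausible combinations of speed and radius give a few thousand to roughly ten thousand solar masses, the same ballpark as the paper’s lower limit.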

This places it in the category of intermediate-mass black holes, of which there are only a handful of potential sightings, none of them universally accepted. So, this is a significant finding if for no other reason than it may be the least controversial spotting of an intermediate-mass black hole yet.

What’s this telling us?

For now, there are still considerable uncertainties in some of the details here—but prospects for improving the situation exist. Observations with the Webb Space Telescope could potentially pick up the faint emissions from gas that’s falling into the black hole. And it can track the seven stars identified here. Its spectrographs could also potentially pick up the red and blue shifts in light caused by the star’s motion. Its location at a considerable distance from Hubble could also provide a more detailed three-dimensional picture of Omega Centauri’s central structure.

Figuring this out could potentially tell us more about how black holes grow to supermassive scales. Earlier potential sightings of intermediate-mass black holes have also come in globular clusters, which may suggest that they’re a general feature of large gatherings of stars.

But Omega Centauri differs from many other globular clusters, which often contain large populations of stars that all formed at roughly the same time, suggesting the clusters formed from a single giant cloud of materials. Omega Centauri has stars with a broad range of ages, which is one of the reasons why people think it’s the remains of a dwarf galaxy that was sucked into the Milky Way.

If that’s the case, then its central black hole is an analog of the supermassive black holes found in actual dwarf galaxies—which raises the question of why it’s only intermediate-mass. Did something about its interactions with the Milky Way interfere with the black hole’s growth?

And, in the end, none of this sheds light on how any black hole grows to be so much more massive than any star it could conceivably have formed from. Getting a better sense of this black hole’s history could provide more perspective on some questions that are currently vexing astronomers.

Nature, 2024. DOI: 10.1038/s41586-024-07511-z  (About DOIs).

Nearby star cluster houses unusually large black hole Read More »