Science

NASA’s flagship mission to Europa has a problem: Vulnerability to radiation

Tripping transistors —

“What keeps me awake right now is the uncertainty.”

An artist's illustration of the Europa Clipper spacecraft during a flyby close to Jupiter's icy moon.

The launch date for the Europa Clipper mission to study the intriguing moon orbiting Jupiter, which ranks alongside the Cassini mission to Saturn as one of NASA’s most expensive and ambitious planetary science missions, is now in doubt.

The $4.25 billion spacecraft had been due to launch in October on a Falcon Heavy rocket from Kennedy Space Center in Florida. However, NASA revealed that transistors on board the spacecraft may not be as radiation-hardened as they were believed to be.

“The issue with the transistors came to light in May when the mission team was advised that similar parts were failing at lower radiation doses than expected,” the space agency wrote in a blog post Thursday afternoon. “In June 2024, an industry alert was sent out to notify users of this issue. The manufacturer is working with the mission team to support ongoing radiation test and analysis efforts in order to better understand the risk of using these parts on the Europa Clipper spacecraft.”

The moons orbiting Jupiter, a massive gas giant planet, exist in one of the harshest radiation environments in the Solar System. NASA’s initial testing indicates that some of the transistors, which regulate the flow of energy through the spacecraft, could fail in this environment. NASA is currently evaluating the possibility of maximizing the transistor lifetime at Jupiter and expects to complete a preliminary analysis in late July.

To delay or not to delay

NASA’s update is silent on whether the spacecraft could still make its approximately three-week launch window this year, which gets Clipper to the Jovian system in 2030.

Ars reached out to several experts familiar with the Clipper mission to gauge the likelihood that it would make the October launch window, and opinions were mixed. The consensus put the odds at 40 to 60 percent that NASA would become comfortable enough with the issue to launch this fall. If engineers cannot gain that confidence in the existing hardware, the transistors would need to be replaced.

The Clipper mission has launch opportunities in 2025 and 2026, but these could lead to additional delays because of the need for more gravitational assists. The 2024 launch follows a “MEGA” trajectory (Mars-Earth Gravitational Assist), with a Mars flyby in 2025 and an Earth flyby in late 2026. If Clipper launches a year late, it would need a second Earth flyby. A launch in 2026 would revert to a MEGA trajectory. Ars has asked NASA for timelines of launches in 2025 and 2026 and will update this story if the agency provides that information.

Another consequence of a delay would be cost: keeping the mission on the ground for another year would likely add a few hundred million dollars in expenses for NASA, blowing a hole in its planetary science budget.

NASA’s blog post this week is not the first time the space agency has publicly mentioned these issues with the metal-oxide-semiconductor field-effect transistor, or MOSFET. At a meeting of the Space Studies Board in early June, Jordan Evans, project manager for the Europa Clipper Mission, said it was his No. 1 concern ahead of launch.

“What keeps me awake at night”

“The most challenging thing we’re dealing with right now is an issue associated with these transistors, MOSFETs, that are used as switches in the spacecraft,” he said. “Five weeks ago today, I got an email that a non-NASA customer had done some testing on these rad-hard parts and found that they were going before (the specifications), at radiation levels significantly lower than what we qualified them to as we did our parts procurement, and others in the industry had as well.”

At the time, Evans said things were “trending in the right direction” with regard to the agency’s analysis of the issue. It seems unlikely that NASA would have put out a blog post five weeks later if the issue were still moving steadily toward a resolution.

“What keeps me awake right now is the uncertainty associated with the MOSFETs and the residual risk that we will take on with that,” Evans said in June. “It’s difficult to do the kind of low-dose rate testing in the timeframes that we have until launch. So we’re gathering as much data as we can, including from missions like Juno, to better understand what residual risk we might launch with.”

These are precisely the kinds of issues that scientists and engineers don’t want to find in the final months before the launch of such a consequential mission. The stakes are incredibly high—imagine making the call to launch Clipper only to have the spacecraft fail six years later upon arrival at Jupiter.

Much of Neanderthal genetic diversity came from modern humans

A large, brown-colored skull seen in profile against a black background.

The basic outline of the interactions between modern humans and Neanderthals is now well established. The two came in contact as modern humans began their major expansion out of Africa, which occurred roughly 60,000 years ago. Humans picked up some Neanderthal DNA through interbreeding, while the Neanderthal population, always fairly small, was swept away by the waves of new arrivals.

But there are some aspects of this big-picture view that don’t entirely line up with the data. While it nicely explains the fact that Neanderthal sequences are far more common in non-African populations, it doesn’t account for the fact that every African population we’ve looked at has some DNA that matches up with Neanderthal DNA.

A study published on Thursday argues that much of this match came about because an early modern human population also left Africa and interbred with Neanderthals. But in this case, the result was to introduce modern human DNA to the Neanderthal population. The study shows that this DNA accounts for a lot of Neanderthals’ genetic diversity, suggesting that their population was even smaller than earlier estimates had suggested.

Out of Africa early

This study isn’t the first to suggest that modern humans and their genes met Neanderthals well in advance of our major out-of-Africa expansion. The key to understanding this is the genome of a Neanderthal from the Altai region of Siberia, which dates from roughly 120,000 years ago. That’s well before modern humans expanded out of Africa, yet its genome has some regions that have excellent matches to the human genome but are absent from the Denisovan lineage.

One explanation for this is that these are segments of Neanderthal DNA that were later picked up by the population that expanded out of Africa. The problem with that view is that most of these sequences also show up in African populations. So, researchers advanced the idea that an ancestral population of modern humans left Africa about 200,000 years ago, and some of its DNA was retained by Siberian Neanderthals. That’s consistent with some fossil finds that place anatomically modern humans in the Mideast at roughly the same time.

There is, however, an alternative explanation: Some of the population that expanded out of Africa 60,000 years ago and picked up Neanderthal DNA migrated back to Africa, taking the Neanderthal DNA with them. That has led to a small bit of the Neanderthal DNA persisting within African populations.

To sort this all out, a research team based at Princeton University focused on the Neanderthal DNA found in Africans, taking advantage of the fact that we now have a much larger array of completed human genomes (approximately 2,000 of them).

The work was based on a simple hypothesis. All of our work on Neanderthal DNA indicates that their population was relatively small, and thus had less genetic diversity than modern humans did. If that’s the case, then the addition of modern human DNA to the Neanderthal population should have boosted its genetic diversity. If so, then the stretches of “Neanderthal” DNA found in African populations should include some of the more diverse regions of the Neanderthal genome.

Giant salamander species found in what was thought to be an icy ecosystem

Feeding time —

Found after its kind were thought extinct, and where it was thought to be too cold.

A black background with a brown fossil at the center, consisting of the head and a portion of the vertebral column.

C. Marsicano

Gaiasia jennyae, a newly discovered freshwater apex predator with a body length reaching 4.5 meters, lurked in the swamps and lakes around 280 million years ago. Its wide, flattened head had powerful jaws full of huge fangs, ready to capture any prey unlucky enough to swim past.

The problem is, to the best of our knowledge, it shouldn’t have been that large, should have been extinct tens of millions of years before the time it apparently lived, and shouldn’t have been found in northern Namibia. “Gaiasia is the first really good look we have at an entirely different ecosystem we didn’t expect to find,” says Jason Pardo, a postdoctoral fellow at Field Museum of Natural History in Chicago. Pardo is co-author of a study on the Gaiasia jennyae discovery recently published in Nature.

Common ancestry

“Tetrapods were the animals that crawled out of the water around 380 million years ago, maybe a little earlier,” Pardo explains. These ancient creatures, also known as stem tetrapods, were the common ancestors of modern reptiles, amphibians, mammals, and birds. “Those animals lived up to what we call the end of the Carboniferous, about 370–300 million years ago. Few made it through, and they lasted longer, but they mostly went extinct around 370 million years ago,” he adds.

This is why the discovery of Gaiasia jennyae in the 280 million-year-old rocks of Namibia was so surprising. Not only was it not extinct when the rocks it was found in were laid down, but it was dominating its ecosystem as an apex predator. By today’s standards, it was like stumbling upon a secluded island hosting animals that should have been dead for 70 million years, like a living, breathing T. rex.

“The skull of gaiasia we have found is about 67 centimeters long. We also have a front end of her upper body. We know she was at minimum 2.5 meters long, probably 3.5, 4.5 meters—big head and a long, salamander-like body,” says Pardo. He told Ars that gaiasia was a suction feeder: she opened her jaws under water, which created a vacuum that sucked her prey right in. But the large, interlocked fangs reveal that a powerful bite was also one of her weapons, probably used to hunt bigger animals. “We suspect gaiasia fed on bony fish, freshwater sharks, and maybe even other, smaller gaiasia,” says Pardo, suggesting it was a rather slow, ambush-based predator.

But considering where it was found, the fact that it had enough prey to ambush is perhaps even more of a shocker than the animal itself.

Location, location, location

“Continents were organized differently 270–280 million years ago,” says Pardo. Back then, one megacontinent called Pangea had already broken into two supercontinents. The northern supercontinent called Laurasia included parts of modern North America, Russia, and China. The southern supercontinent, the home of gaiasia, was called Gondwana, which consisted of today’s India, Africa, South America, Australia, and Antarctica. And Gondwana back then was pretty cold.

“Some researchers hypothesize that the entire continent was covered in glacial ice, much like we saw in North America and Europe during the ice ages 10,000 years ago,” says Pardo. “Others claim that it was more patchy—there were those patches where ice was not present,” he adds. Still, 280 million years ago, northern Namibia was around 60 degrees southern latitude—roughly where the northernmost reaches of Antarctica are today.

“Historically, we thought tetrapods [of that time] were living much like modern crocodiles. They were cold-blooded, and if you are cold-blooded the only way to get large and maintain activity would be to be in a very hot environment. We believed such animals couldn’t live in colder environments. Gaiasia shows that it is absolutely not the case,” Pardo claims. And this has turned much of what we thought we knew about life on Earth in gaiasia’s time upside down.

Frozen mammoth skin retained its chromosome structure

Artist's depiction of a large mammoth with brown fur and huge, curving tusks in an icy, tundra environment.

One of the challenges of working with ancient DNA samples is that damage accumulates over time, breaking up the structure of the double helix into ever smaller fragments. In the samples we’ve worked with, these fragments scatter and mix with contaminants, making reconstructing a genome a large technical challenge.

But a dramatic paper released on Thursday shows that this isn’t always true. Damage does create progressively smaller fragments of DNA over time. But, if they’re trapped in the right sort of material, they’ll stay right where they are, essentially preserving some key features of ancient chromosomes even as the underlying DNA decays. Researchers have now used that to detail the chromosome structure of mammoths, with some implications for how these mammals regulated some key genes.

DNA meets Hi-C

The backbone of DNA’s double helix consists of alternating sugars and phosphates, chemically linked together (the bases of DNA are chemically linked to these sugars). Damage from things like radiation can break these chemical linkages, with fragmentation increasing over time. When samples reach the age of something like a Neanderthal, very few fragments are longer than 100 base pairs. Since chromosomes are millions of base pairs long, it was thought that this would inevitably destroy their structure, as many of the fragments would simply diffuse away.

But that will only be true if the medium they’re in allows diffusion. And some scientists suspected that permafrost, which preserves the tissue of some now-extinct Arctic animals, might block that diffusion. So, they set out to test this using mammoth tissues, obtained from a sample termed YakInf that’s roughly 50,000 years old.

The challenge is that the molecular techniques we use to probe chromosomes take place in liquid solutions, where fragments would just drift away from each other in any case. So, the team focused on an approach termed Hi-C, which specifically preserves information about which bits of DNA were close to each other. It does this by exposing chromosomes to a chemical that will link any pieces of DNA that are in close physical proximity. So, even if those pieces are fragments, they’ll be stuck to each other by the time they end up in a liquid solution.

A few enzymes are then used to convert these linked molecules to a single piece of DNA, which is then sequenced. This data, which will contain sequence information from two different parts of the genome, then tells us that those parts were once close to each other inside a cell.

Interpreting Hi-C

On its own, a single bit of data like this isn’t especially interesting; two bits of genome might end up next to each other at random. But when you have millions of bits of data like this, you can start to construct a map of how the genome is structured.

There are two basic rules governing the pattern of interactions we’d expect to see. The first is that interactions within a chromosome are going to be more common than interactions between two chromosomes. And, within a chromosome, parts that are physically closer to each other on the molecule are more likely to interact than those that are farther apart.

So, if you are looking at a specific segment of, say, chromosome 12, most of the locations Hi-C will find it interacting with will also be on chromosome 12. And the frequency of interactions will go up as you move to sequences that are ever closer to the one you’re interested in.
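
The bookkeeping behind this is simple enough to sketch. The snippet below is a minimal, hypothetical illustration (not the pipeline used in the study): it bins linked read pairs into a contact map, so that pairs from nearby positions on the same chromosome accumulate the highest counts, following the pattern described above.

```python
# Minimal Hi-C bookkeeping sketch (illustrative only): tally linked read
# pairs into a binned contact map. The bin size and toy read pairs below
# are hypothetical, not values from the study.
from collections import defaultdict

BIN_SIZE = 1_000_000  # 1 Mb bins


def contact_map(read_pairs):
    """read_pairs: iterable of ((chrom_a, pos_a), (chrom_b, pos_b)) tuples."""
    counts = defaultdict(int)
    for (chrom_a, pos_a), (chrom_b, pos_b) in read_pairs:
        bin_a = (chrom_a, pos_a // BIN_SIZE)
        bin_b = (chrom_b, pos_b // BIN_SIZE)
        # Sort so (a, b) and (b, a) land in the same cell of the map.
        key = tuple(sorted([bin_a, bin_b]))
        counts[key] += 1
    return counts


# Toy example: most pairs join nearby bins on the same chromosome,
# mirroring the expected intra-chromosome pattern.
pairs = [
    (("chr12", 1_200_000), ("chr12", 1_900_000)),
    (("chr12", 1_300_000), ("chr12", 2_400_000)),
    (("chr12", 1_250_000), ("chr3", 5_600_000)),
]
for (bin_a, bin_b), n in contact_map(pairs).items():
    print(bin_a, bin_b, n)
```

With millions of real pairs, the cells with unusually high or low counts relative to this baseline are what reveal chromosome boundaries and loops.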

On its own, you can use Hi-C to help reconstruct a chromosome even if you start with nothing but fragments. But the exceptions to the expected pattern also tell us things about biology. For example, genes that are active tend to be on loops of DNA, with the two ends of the loop held together by proteins; the same is true for inactive genes. Interactions within these loops tend to be more frequent than interactions between them, subtly altering the frequency with which two fragments end up linked together during Hi-C.

To help with climate change, carbon capture will have to evolve

gotta catch more —

The technologies are useful tools but have yet to move us away from fossil fuels.

Image of a facility filled with green-colored tubes.

Bioreactors that host algae would be one option for carbon sequestration—as long as the carbon is stored somehow.

More than 200 kilometers off Norway’s coast in the North Sea sits the world’s first offshore carbon capture and storage project. Built in 1996, the Sleipner project strips carbon dioxide from natural gas—largely made up of methane—to make it marketable. But instead of releasing the CO2 into the atmosphere, the greenhouse gas is buried.

The effort stores around 1 million metric tons of CO2 per year—and is praised by many as a pioneering success in global attempts to cut greenhouse gas emissions.

Last year, total global CO2 emissions hit an all-time high of around 35.8 billion tons, or gigatons. At these levels, scientists estimate, we have roughly six years left before we emit so much CO2 that global warming will consistently exceed 1.5° Celsius above average preindustrial temperatures, an internationally agreed-upon limit. (Notably, the global average temperature for the past 12 months has exceeded this threshold.)
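
As a rough sanity check, those two figures imply a remaining budget of a bit over 200 gigatons of CO2. The calculation below is a back-of-the-envelope sketch using only the numbers quoted above; the implied budget is an inference, not a figure reported in the article.

```python
# Back-of-the-envelope: years of headroom times annual emissions gives the
# implied remaining 1.5°C carbon budget (an inference from the quoted figures).
annual_emissions_gt = 35.8   # global CO2 emissions in 2023, gigatons
years_remaining = 6          # rough estimate quoted above

implied_budget_gt = annual_emissions_gt * years_remaining
print(f"Implied remaining budget: ~{implied_budget_gt:.0f} Gt CO2")  # ~215 Gt
```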

Phasing out fossil fuels is key to cutting emissions and fighting climate change. But a suite of technologies collectively known as carbon capture, utilization and storage, or CCUS, are among the tools available to help meet global targets to cut CO2 emissions in half by 2030 and to reach net-zero emissions by 2050. These technologies capture, use or store away CO2 emitted by power generation or industrial processes, or suck it directly out of the air. The Intergovernmental Panel on Climate Change (IPCC), the United Nations body charged with assessing climate change science, includes carbon capture and storage among the actions needed to slash emissions and meet temperature targets.

Carbon capture, utilization and storage technologies often capture CO2 from coal or natural gas power generation or industrial processes, such as steel manufacturing. The CO2 is compressed into a liquid under high pressure and transported through pipelines to sites where it may be stored, in porous sedimentary rock formations containing saltwater, for example, or used for other purposes. The captured CO2 can be injected into the ground to extract oil dregs or used to produce cement and other products.

Governments and industry are betting big on such projects. Last year, for example, the British government announced 20 billion pounds (more than $25 billion) in funding for CCUS, often shortened to CCS. The United States allocated more than $5 billion between 2011 and 2023 and committed an additional $8.2 billion from 2022 to 2026. Globally, public funding for CCUS projects rose to $20 billion in 2023, according to the International Energy Agency (IEA), which works with countries around the world to forge energy policy.

Given the urgency of the situation, many people argue that CCUS is necessary to move society toward climate goals. But critics don’t see the technology, in its current form, shifting the world away from oil and gas: In a lot of cases, they point out, the captured CO2 is used to extract more fossil fuels in a process known as enhanced oil recovery. They contend that other existing solutions such as renewable energy offer deeper and quicker CO2 emissions cuts. “It’s better not to emit in the first place,” says Grant Hauber, an energy finance adviser at the Institute for Energy Economics and Financial Analysis, a nonpartisan organization in Lakewood, Ohio.

What’s more, fossil fuel companies provide funds to universities and researchers—which some say could shape what is studied and what is not, even if the work of individual scientists is legitimate. For these reasons, some critics say CCUS shouldn’t be pursued at all.

“Carbon capture and storage essentially perpetuates fossil fuel reliance. It’s a distraction and a delay tactic,” says Jennie Stephens, a climate justice researcher at Northeastern University in Boston. She adds that there is little focus on understanding the psychological, social, economic, and political barriers that prevent communities from shifting away from fossil fuels and forging solutions to those obstacles.

According to the Global CCS Institute, an industry-led think tank headquartered in Melbourne, Australia, of the 41 commercial projects operational as of July 2023, most were part of efforts that produce, extract, or burn fossil fuels, such as coal- and gas-fired power plants. That’s true of the Sleipner project, run by the energy company Equinor. It’s the case, too, with the world’s largest CCUS facility, operated by ExxonMobil in Wyoming, in the United States, which also captures CO2 as part of the production of methane.

Granted, not all CCUS efforts further fossil fuel production, and many projects now in the works have the sole goal of capturing and locking up CO2. Still, some critics doubt whether these greener approaches could ever lock away enough CO2 to meaningfully contribute to climate mitigation, and they are concerned about the costs.

Others are more circumspect. Sally Benson, an energy researcher at Stanford University, doesn’t want to see CCUS used as an excuse to carry on with fossil fuels. But she says the technology is essential for capturing some of the CO2 from fossil fuel production and usage, as well as from industrial processes, as society transitions to new energy sources. “If we can get rid of those emissions with carbon capture and sequestration, that sounds like success to me,” says Benson, who codirects an institute that receives funding from fossil fuel companies.

NASA update on Starliner thruster issues: This is fine

Boeing's Starliner spacecraft on final approach to the International Space Station last month.

Before clearing Boeing’s Starliner crew capsule to depart the International Space Station and head for Earth, NASA managers want to ensure the spacecraft’s problematic control thrusters can help guide the ship’s two-person crew home.

The two astronauts who launched June 5 on the Starliner spacecraft’s first crew test flight agree with the managers, although they said Wednesday that they’re comfortable with flying the capsule back to Earth if there’s any emergency that might require evacuation of the space station.

NASA astronauts Butch Wilmore and Suni Williams were supposed to return to Earth weeks ago, but managers are keeping them at the station as engineers continue probing thruster problems and helium leaks that have plagued the mission since its launch.

“This is a tough business that we’re in,” Wilmore, Starliner’s commander, told reporters Wednesday in a news conference from the space station. “Human spaceflight is not easy in any regime, and there have been multiple issues with any spacecraft that’s ever been designed, and that’s the nature of what we do.”

Five of the 28 reaction control system thrusters on Starliner’s service module dropped offline as the spacecraft approached the space station last month. Starliner’s flight software disabled the five control jets when they started overheating and losing thrust. Four of the thrusters were later recovered, although some couldn’t reach their full power levels as Starliner came in for docking.

Wilmore, who took over manual control for part of Starliner’s approach to the space station, said he could sense the spacecraft’s handling qualities diminish as thrusters temporarily failed. “You could tell it was degraded, but still, it was impressive,” he said. Starliner ultimately docked to the station in autopilot mode.

In mid-June, the Starliner astronauts hot-fired the thrusters again, and their thrust levels were closer to normal.

“What we want to know is that the thrusters can perform; if whatever their percentage of thrust is, we can put it into a package that will get us a deorbit burn,” said Williams, a NASA astronaut serving as Starliner’s pilot. “That’s the main purpose that we need [for] the service module: to get us a good deorbit burn so that we can come back.”

These small thrusters aren’t necessary for the deorbit burn itself, which will use a different set of engines to slow Starliner’s velocity enough for it to drop out of orbit and head for landing. But Starliner needs enough of the control jets working to maneuver into the proper orientation for the deorbit firing.

This test flight is the first time astronauts have flown in space on Boeing’s Starliner spacecraft, following years of delays and setbacks. Starliner is NASA’s second human-rated commercial crew capsule, and it’s poised to join SpaceX’s Crew Dragon in a rotation of missions ferrying astronauts to and from the space station through the rest of the decade.

But first, Boeing and NASA need to safely complete the Starliner test flight and resolve the thruster problems and helium leaks plaguing the spacecraft before moving forward with operational crew rotation missions. There’s a Crew Dragon spacecraft currently docked to the station, but Steve Stich, NASA’s commercial crew program manager, told reporters Wednesday that, right now, Wilmore and Williams still plan to come home on Starliner.

“The beautiful thing about the commercial crew program is that we have two vehicles, two different systems, that we could use to return crew,” Stich said. “So we have a little bit more time to go through the data and then make a decision as to whether we need to do anything different. But the prime option today is to return Butch and Suni on Starliner. Right now, we don’t see any reason that wouldn’t be the case.”

Mark Nappi, Boeing’s Starliner program manager, said officials identified more than 30 actions to investigate five “small” helium leaks and the thruster problems on Starliner’s service module. “All these items are scheduled to be completed by the end of next week,” Nappi said.

“It’s a test flight, and the first with crew, and we’re just taking a little extra time to make sure that we understand everything before we commit to deorbit,” Stich said.

Congress apparently feels a need for “reaffirmation” of SLS rocket

Stuart Smalley is here to help with daily affirmations of SLS.

Aurich Lawson | SNL

There is a curious section in the new congressional reauthorization bill for NASA that concerns the agency’s large Space Launch System rocket.

The section is titled “Reaffirmation of the Space Launch System,” and in it Congress asserts its commitment to a flight rate of twice per year for the rocket. The reauthorization legislation, which cleared a House committee on Wednesday, also said NASA should identify other customers for the rocket.

“The Administrator shall assess the demand for the Space Launch System by entities other than NASA and shall break out such demand according to the relevant Federal agency or nongovernment sector,” the legislation states.

Congress directs NASA to report back, within 180 days of the legislation passing, on several topics. First, the legislators want an update on NASA’s progress toward achieving a flight rate of twice per year for the SLS rocket, and which Artemis mission will mark the point when this capability is in place.

Additionally, Congress is asking for NASA to study demand for the SLS rocket and estimate “cost and schedule savings for reduced transit times” for deep space missions due to the “unique capabilities” of the rocket. The space agency also must identify any “barriers or challenges” that could impede use of the rocket by other entities other than NASA, and estimate the cost of overcoming those barriers.

Is someone afraid?

There is a fair bit to unpack here, but the inclusion of this section—there is no “reaffirmation” of the Orion spacecraft, for example—suggests that either the legacy space companies building the SLS rocket, local legislators, or both feel the need to protect the SLS rocket. As one source on Capitol Hill familiar with the legislation told Ars, “It’s a sign that somebody’s afraid.”

Congress created the SLS rocket 14 years ago with the NASA Authorization Act of 2010. The large rocket kept a river of contracts flowing to large aerospace companies, including Boeing and Northrop Grumman, which had been operating the Space Shuttle. Congress then lavished tens of billions of dollars on the contractors over the years for development, often authorizing more money than NASA said it needed. Congressional support was unwavering, at least in part because the SLS program boasts that it has jobs in every state.

Under the original law, the SLS rocket was supposed to achieve “full operational capability” by the end of 2016. The first launch of the SLS vehicle did not take place until late 2022, six years later. It was entirely successful. However, for various reasons, the rocket will not fly again until September 2025 at the earliest.

Nearby star cluster houses unusually large black hole

Big, but not that big —

Fast-moving stars imply that there’s an intermediate-mass black hole there.

Three panel image, with zoom increasing from left to right. Left most panel is a wide view of the globular cluster; right is a zoom in to the area where its central black hole must reside.

From left to right, zooming in from the globular cluster to the site of its black hole.

ESA/Hubble & NASA, M. Häberle

Supermassive black holes appear to reside at the center of every galaxy and to have done so since galaxies formed early in the history of the Universe. Currently, however, we can’t entirely explain their existence, since it’s difficult to understand how they could have grown large enough to qualify as supermassive as quickly as they did.

A possible bit of evidence was recently found by using about 20 years of data from the Hubble Space Telescope. The data comes from a globular cluster of stars that’s thought to be the remains of a dwarf galaxy and shows that a group of stars near the cluster’s core are moving so fast that they should have been ejected from it entirely. That implies that something massive is keeping them there, which the researchers argue is a rare intermediate-mass black hole, weighing in at over 8,000 times the mass of the Sun.

Moving fast

The fast-moving stars reside in Omega Centauri, the largest globular cluster in the Milky Way. With an estimated 10 million stars, it’s a crowded environment, but observations are aided by its relative proximity, at “only” 17,000 light-years away. Those observations have been hinting that there might be a central black hole within the globular cluster, but the evidence has not been decisive.

The new work, done by a large international team, used over 500 images of Omega Centauri, taken by the Hubble Space Telescope over the course of 20 years. This let the researchers track the motion of stars within the cluster and estimate their speeds relative to the cluster’s center of mass. While this has been done previously, the most recent data allowed an update that reduced the uncertainty in the stars’ velocities.

Within the updated data, a number of stars near the cluster’s center stood out for their extreme velocities: seven of them were moving fast enough that the gravitational pull of the cluster isn’t enough to keep them there. All seven should have been lost from the cluster within 1,000 years, although the uncertainties remained large for two of them. Based on the size of the cluster, there shouldn’t even be a single foreground star between Hubble and Omega Centauri, so these really seem to be within the cluster despite their velocities.

The simplest explanation for that is that there’s an additional mass holding them in place. That could potentially be several massive objects, but the close proximity of all these stars to the center of the cluster favors a single, compact object. Which means a black hole.

Based on the velocities, the researchers estimate that the object has a mass of at least 8,200 times that of the Sun. A couple of stars appear to be accelerating; if that holds up based on further observations, it would indicate that the black hole is over 20,000 solar masses. That places it firmly within black hole territory, though smaller than supermassive black holes, which are viewed as those with roughly a million solar masses or more. And it’s considerably larger than you’d expect from black holes formed through the death of a star, which aren’t expected to be much larger than 100 times the Sun’s mass.
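
The logic behind that lower limit is essentially an escape-velocity argument. In its simplest textbook form (a simplification, not the detailed modeling in the paper), a star observed moving at speed v at a distance r from the cluster’s center can only stay bound if the enclosed mass M satisfies:

```latex
v \le v_{\mathrm{esc}} = \sqrt{\frac{2GM}{r}}
\quad\Longrightarrow\quad
M \ge \frac{v^{2} r}{2G}
```

So the fastest stars seen closest to the center set a floor on the central mass, and any measured accelerations push the estimate higher still.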

This places it in the category of intermediate-mass black holes, of which there are only a handful of potential sightings, none of them universally accepted. So, this is a significant finding if for no other reason than it may be the least controversial spotting of an intermediate-mass black hole yet.

What’s this telling us?

For now, there are still considerable uncertainties in some of the details here—but prospects for improving the situation exist. Observations with the Webb Space Telescope could potentially pick up the faint emissions from gas that’s falling into the black hole. And it can track the seven stars identified here. Its spectrographs could also potentially pick up the red and blue shifts in light caused by the star’s motion. Its location at a considerable distance from Hubble could also provide a more detailed three-dimensional picture of Omega Centauri’s central structure.

Figuring this out could potentially tell us more about how black holes grow to supermassive scales. Earlier potential sightings of intermediate-mass black holes have also come in globular clusters, which may suggest that they’re a general feature of large gatherings of stars.

But Omega Centauri differs from many other globular clusters, which often contain large populations of stars that all formed at roughly the same time, suggesting the clusters formed from a single giant cloud of materials. Omega Centauri has stars with a broad range of ages, which is one of the reasons why people think it’s the remains of a dwarf galaxy that was sucked into the Milky Way.

If that’s the case, then its central black hole is an analog of the supermassive black holes found in actual dwarf galaxies—which raises the question of why it’s only intermediate-mass. Did something about its interactions with the Milky Way interfere with the black hole’s growth?

And, in the end, none of this sheds light on how any black hole grows to be so much more massive than any star it could conceivably have formed from. Getting a better sense of this black hole’s history could provide more perspective on some questions that are currently vexing astronomers.

Nature, 2024. DOI: 10.1038/s41586-024-07511-z  (About DOIs).

Beryl is just the latest disaster to strike the energy capital of the world

Don’t know what you’ve got until it’s gone —

It’s pretty weird to use something I’ve written about in the abstract for so long.

Why yes, that Starlink dish is precariously perched to get around tree obstructions.

Eric Berger

I’ll readily grant you that Houston might not be the most idyllic spot in the world. The summer heat is borderline unbearable. The humidity is super sticky. We don’t have mountains or pristine beaches—we have concrete.

But we also have a pretty amazing melting pot of culture, wonderful cuisine, lots of jobs, and upward mobility. Most of the year, I love living here. Houston is totally the opposite of, “It’s a nice place to visit, but you wouldn’t want to live there.” Houston is not a particularly nice place to visit, but you might just want to live here.

Except for the hurricanes.

Houston is the largest city in the United States to be highly vulnerable to hurricanes. At a latitude of 29.7 degrees, the city is solidly in the subtropics, and much of it is built within 25 to 50 miles of the Gulf of Mexico. Every summer, with increasing dread, we watch tropical systems develop over the Atlantic Ocean and then move into the Gulf.

For some meteorologists and armchair forecasters, tracking hurricanes is fulfilling work and a passionate hobby. For those of us who live near the water along the upper Texas coast, following the movements of these storms is gut-wrenching stuff. A few days before a potential landfall, I’ll find myself jolted awake in the middle of the night by the realization that new model data must be available. When you see a storm turning toward you, or intensifying, it’s psychologically difficult to process.

Beryl the Bad

It felt like we were watching Beryl forever. It formed into a tropical depression on June 28, became a hurricane the next day, and by June 30, it was a major hurricane storming into the Caribbean Sea. Beryl set all kinds of records for a hurricane in late June and early July. Put simply, we have never seen an Atlantic storm intensify so rapidly, or so much, this early in the hurricane season. Beryl behaved as if it were the peak of the Atlantic season, in September, rather than the beginning of July—normally a pretty sleepy time for Atlantic hurricane activity. I wrote about this for Ars Technica a week ago.

At the time, it looked as though the greater Houston area would be completely spared by Beryl, as the most reliable modeling data took the storm across the Yucatan Peninsula and into the southern Gulf of Mexico before a final landfall in northern Mexico. But over time, the forecast began to change, with the track moving steadily up the Texas coast.

I was at a dinner to celebrate the birthday of my cousin’s wife last Friday when I snuck a peek at my phone. It was about 7 pm local time. We were at a Mexican restaurant in Galveston, and I knew the latest operational run of the European model was about to come out. This was a mistake, as the model indicated a landfall about 80 miles south of Houston, which would bring the core of the storm’s strongest winds over Houston.

I had to fake joviality for the rest of the night, while feeling sick to my stomach.

Barreling inland

The truth is, Beryl could have been much worse. After weakening due to interaction with the Yucatan Peninsula on Friday, Beryl moved into the Gulf of Mexico just about when I was having that celebratory dinner on Friday evening. At that point, it was a strong tropical storm with 60 mph sustained winds. It had nearly two and a half days over open water to re-organize, and that seemed likely. Beryl had Saturday to shrug off dry air and was expected to intensify significantly on Sunday. It was due to make landfall on Monday morning.

The track for Beryl continued to look grim over the weekend—although its landfall would occur well south of Houston, Beryl’s track inland would bring its center and core of strongest winds over the most densely populated part of the city. However, we took some solace from a lack of serious intensification on Saturday and Sunday. Even at 10 pm local time on Sunday, less than six hours before Beryl’s landfall near Matagorda, it was still not a hurricane.

However, in those final hours Beryl did finally start to get organized in a serious way. We have seen this before as hurricanes start to run up on the Texas coast, where frictional effects from their outer bands aid intensification. In those last six hours, Beryl intensified into a Category 1 hurricane with 80-mph sustained winds. The eyewall of the storm closed, and Beryl was poised for rapid intensification. Then it ran aground.

Normally, as a hurricane traverses land it starts to weaken fairly quickly. But Beryl didn’t. Instead, the storm maintained much of its strength and bulldozed right into the heart of Houston with near hurricane-force sustained winds and higher gusts. I suspect what happened is that Beryl, beginning to deepen, had a ton of momentum at landfall, and it took time for interaction with land to reverse that momentum and begin slowing down its winds.

First the lights went out. Then the Internet soon followed. Except for storm chasers, hurricanes are miserable experiences. There is the torrential rainfall and rising water. But most ominous of all, at least for me, are the howling winds. When stronger gusts come through, even sturdily built houses shake. Trees whip around violently. It is such an uncontrolled, violent fury that one must endure. Losing a connection to the outside world magnifies one’s sense of helplessness.

In the end, Beryl knocked out power to about 2.5 million customers across the Houston region, including yours truly. Because broadband Internet service providers generally rely on these electricity services to deliver Internet, many customers lost connectivity. Even cell phone towers, reduced to batteries or small generators, were often only capable of delivering text and voice services.

Why every quantum computer will need a powerful classical computer

Image of a set of spheres with arrows within them, with all the arrows pointing in the same direction.

A single logical qubit is built from a large collection of hardware qubits.

One of the more striking things about quantum computing is that the field, despite not having proven itself especially useful, has already spawned a collection of startups that are focused on building something other than qubits. It might be easy to dismiss this as opportunism—trying to cash in on the hype surrounding quantum computing. But it can be useful to look at the things these startups are targeting, because they can be an indication of hard problems in quantum computing that haven’t yet been solved by any one of the big companies involved in that space—companies like Amazon, Google, IBM, or Intel.

In the case of a UK-based company called Riverlane, the unsolved piece that is being addressed is the huge amount of classical computations that are going to be necessary to make the quantum hardware work. Specifically, it’s targeting the huge amount of data processing that will be needed for a key part of quantum error correction: recognizing when an error has occurred.

Error detection vs. the data

All qubits are fragile, tending to lose their state during operations, or simply over time. No matter what the technology—cold atoms, superconducting transmons, whatever—these error rates put a hard limit on the amount of computation that can be done before an error is inevitable. That rules out doing almost every useful computation operating directly on existing hardware qubits.

The generally accepted solution to this is to work with what are called logical qubits. These involve linking multiple hardware qubits together and spreading the quantum information among them. Additional hardware qubits are linked in so that they can be measured to monitor errors affecting the data, allowing them to be corrected. It can take dozens of hardware qubits to make a single logical qubit, meaning even the largest existing systems can only support about 50 robust logical qubits.

Riverlane’s founder and CEO, Steve Brierley, told Ars that error correction doesn’t only stress the qubit hardware; it stresses the classical portion of the system as well. Each of the measurements of the qubits used for monitoring the system needs to be processed to detect and interpret any errors. We’ll need roughly 100 logical qubits to do some of the simplest interesting calculations, meaning monitoring thousands of hardware qubits. Doing more sophisticated calculations may mean thousands of logical qubits.

That error-correction data (termed syndrome data in the field) needs to be read between each operation, which makes for a lot of data. “At scale, we’re talking a hundred terabytes per second,” said Brierley. “At a million physical qubits, we’ll be processing about a hundred terabytes per second, which is Netflix global streaming.”

It also has to be processed in real time; otherwise, computations will get held up waiting for error correction to happen. For transmon-based qubits, syndrome data is generated roughly every microsecond, so real time means processing each round of that data, part of a stream that can reach terabytes per second, at a cadence of around a megahertz. And Riverlane was founded to provide hardware that’s capable of handling it.
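
To make those throughput figures concrete, here is a rough back-of-the-envelope calculation, a sketch based only on the numbers quoted above rather than on Riverlane’s published specifications:

```python
# Rough arithmetic: at ~1 MHz syndrome rounds and an aggregate stream of
# ~100 TB/s, each round leaves about a microsecond to digest ~100 MB of data.
stream_bytes_per_s = 100e12   # ~100 TB/s, per the quote above
round_rate_hz = 1e6           # one syndrome round per microsecond

bytes_per_round = stream_bytes_per_s / round_rate_hz
print(f"~{bytes_per_round / 1e6:.0f} MB of syndrome data per microsecond round")
```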

Handling the data

The system the company has developed is described in a paper that it has posted on the arXiv. It’s designed to handle syndrome data after other hardware has already converted the analog signals into digital form. This allows Riverlane’s hardware to sit outside any low-temperature hardware that’s needed for some forms of physical qubits.

That data is run through an algorithm the paper terms a “Collision Clustering decoder,” which handles the error detection. To demonstrate its effectiveness, the company implemented it on a typical field-programmable gate array (FPGA) from Xilinx, where it occupies only about 5 percent of the chip yet can handle a logical qubit built from nearly 900 hardware qubits (simulated, in this case).

The company also demonstrated a custom chip that handled an even larger logical qubit, while only occupying a tiny fraction of a square millimeter and consuming just 8 milliwatts of power.

Both of these versions are highly specialized; they simply feed the error information for other parts of the system to act on. So, it is a highly focused solution. But it’s also quite flexible in that it works with various error-correction codes. Critically, it also integrates with systems designed to control a qubit based on very different physics, including cold atoms, trapped ions, and transmons.

“I think early on it was a bit of a puzzle,” Brierley said. “You’ve got all these different types of physics; how are we going to do this?” It turned out not to be a major challenge. “One of our engineers was in Oxford working with the superconducting qubits, and in the afternoon he was working with the ion trap qubits. He came back to Cambridge and he was all excited. He was like, ‘They’re using the same control electronics.’” It turns out that, regardless of the physics involved in controlling the qubits, everybody had borrowed the same hardware from a different field. (Brierley said it was a Xilinx radiofrequency system-on-a-chip built for 5G base station prototyping.) That makes it relatively easy to integrate Riverlane’s custom hardware with a variety of systems.

New weight-loss and diabetes drugs linked to lower risk of 10 cancers

Secondary benefits —

For diabetes patients, GLP-1 drugs linked to lower cancer risks compared to insulin.

Ozempic is a GLP-1 drug for adults with type 2 diabetes.

For patients with Type 2 diabetes, taking one of the new GLP-1 drugs, such as Ozempic, is associated with lower risks of developing 10 out of 13 obesity-associated cancers as compared with taking insulin, according to a recent study published in JAMA Network Open.

The study was retrospective, capturing data from over 1.6 million patients with Type 2 diabetes but no history of obesity-associated cancers prior to the study period. Using electronic health records, researchers had follow-up data for up to 15 years after the patients started taking either a GLP-1 drug, insulin, or metformin between 2008 and 2015.

This type of study can’t prove that the GLP-1 drugs caused the lower associated risks, but the results fit with some earlier findings. That includes results from one trial that found a 32 percent overall lower risk of obesity-associated cancers following bariatric surgery for weight loss.

In the new study, led by researchers at Case Western Reserve University School of Medicine, some of the GLP-1-associated risk reductions were quite substantial. Compared with patients taking insulin, patients taking a GLP-1 drug had a 65 percent lower associated risk of gall bladder cancer, a 63 percent lower associated risk of meningioma (a type of brain tumor), a 59 percent lower associated risk for pancreatic cancer, and a 53 percent lower associated risk of hepatocellular carcinoma (liver cancer). The researchers also found lower associated risks for esophageal cancer, colorectal cancer, kidney cancer, ovarian cancer, endometrial cancer, and multiple myeloma.

Compared with insulin, the researchers saw no lowered associated risk for thyroid and breast cancers. The calculations did show a lower risk of stomach cancer, but that finding was not statistically significant.

Gaps and goals

The GLP-1 drugs did not show such promising results against metformin in the study. Compared with patients taking metformin, patients on GLP-1 drugs saw lower associated risks of colorectal cancer, gall bladder cancer, and meningioma, but those calculations were not statistically significant. The results also unexpectedly indicated a higher risk of kidney cancer for those taking GLP-1 drugs, but the cause of that potentially higher risk (which was not seen in the comparison with insulins) is unclear. The researchers called for more research to investigate that possible association.

Overall, the researchers call for far more studies to try to confirm a link between GLP-1 drugs and lower cancer risks, as well as studies to try to understand the mechanisms behind those potential risk reductions. It’s unclear if the lower risks may be driven simply by weight loss, or if insulin resistance, blood sugar levels, or some other mechanisms are at play.

The current study had several limitations given its retrospective, records-based design. Perhaps the biggest one is that the data didn’t allow the researchers to track individual patients’ weights throughout the study period. As such, researchers couldn’t examine associated cancer risk reductions with actual weight loss. It’s one more aspect that warrants further research.

Still, the study provides another promising result for the blockbuster, albeit pricey, drugs. The researchers suggest extending their work to assess whether GLP-1 drugs could be used to improve outcomes in patients with Type 2 diabetes or obesity who are already diagnosed with cancer, in addition to understanding whether the drugs can help prevent cancer in the first place.

Alaska’s top-heavy glaciers are approaching an irreversible tipping point

meltdown —

As the plateau of the icefield thins, ice and snow reserves at higher altitudes are lost.

Taku Glacier is one of many that begin in the Juneau Icefield.

The melting of one of North America’s largest ice fields has accelerated and could soon reach an irreversible tipping point. That’s the conclusion of new research colleagues and I have published on the Juneau Icefield, which straddles the Alaska-Canada border near the Alaskan capital of Juneau.

In the summer of 2022, I skied across the flat, smooth, and white plateau of the icefield, accompanied by other researchers, sliding in the tracks of the person in front of me under a hot sun. From that plateau, around 40 huge, interconnected glaciers descend towards the sea, with hundreds of smaller glaciers on the mountain peaks all around.

Our work, now published in Nature Communications, has shown that Juneau is an example of a climate “feedback” in action: as temperatures are rising, less and less snow is remaining through the summer (technically: the “end-of-summer snowline” is rising). This in turn leads to ice being exposed to sunshine and higher temperatures, which means more melt, less snow, and so on.

Like many Alaskan glaciers, Juneau’s are top-heavy, with lots of ice and snow at high altitudes above the end-of-summer snowline. This previously sustained the glacier tongues lower down. But when the end-of-summer snowline does creep up to the top plateau, then suddenly a large amount of a top-heavy glacier will be newly exposed to melting.

That’s what’s happening now, each summer, and the glaciers are melting much faster than before, causing the icefield to get thinner and thinner and the plateau to get lower and lower. Once a threshold is passed, these feedbacks can accelerate melt and drive a self-perpetuating loss of snow and ice which would continue even if the world were to stop warming.

Ice is melting faster than ever

Using satellites, photos and old piles of rocks, we were able to measure the ice loss across Juneau Icefield from the end of the last “Little Ice Age” (about 250 years ago) to the present day. We saw that the glaciers began shrinking after that cold period ended in about 1770. This ice loss remained constant until about 1979, when it accelerated. It accelerated again in 2010, doubling the previous rate. Glaciers there shrank five times faster between 2015 and 2019 than from 1979 to 1990.

Our data shows that as the snow decreases and the summer melt season lengthens, the icefield is darkening. Fresh, white snow is very reflective, and much of that strong solar energy that we experienced in the summer of 2022 is reflected back into space. But the end of summer snowline is rising and is now often occurring right on the plateau of the Juneau Icefield, which means that older snow and glacier ice is being exposed to the sun. These slightly darker surfaces absorb more energy, increasing snow and ice melt.

As the plateau of the icefield thins, ice and snow reserves at higher altitudes are lost, and the surface of the plateau lowers. This will make it increasingly hard for the icefield to ever stabilise or even recover. That’s because warmer air at low elevations drives further melt, leading to an irreversible tipping point.

Longer-term data like these are critical to understand how glaciers behave, and the processes and tipping points that exist within individual glaciers. These complex processes make it difficult to predict how a glacier will behave in future.

The world’s hardest jigsaw

We used satellite records to reconstruct how big the glacier was and how it behaved, but this really limits us to the past 50 years. To go back further, we need different methods. To go back 250 years, we mapped the ridges of moraines, which are large piles of debris deposited at the glacier snout, and places where glaciers have scoured and polished the bedrock.

To check and build on our mapping, we spent two weeks on the icefield itself and two weeks in the rainforest below. We camped among the moraine ridges, suspending our food high in the air to keep it safe from bears, shouting to warn off the moose and bears as we bushwhacked through the rainforest, and battling mosquitoes thirsty for our blood.

We used aerial photographs to reconstruct the icefield in the 1940s and 1970s, in the era before readily available satellite imagery. These are high-quality photos but they were taken before global positioning systems made it easy to locate exactly where they were taken.

A number also had some minor damage in the intervening years—some Sellotape, a tear, a thumbprint. As a result, the individual images had to be stitched together to make a 3D picture of the whole icefield. It was all rather like doing the world’s hardest jigsaw puzzle.

Work like this is crucial as the world’s glaciers are melting fast—taken together, they are currently losing more mass than the Greenland or Antarctic ice sheets, and the thinning rates of these glaciers worldwide have doubled over the past two decades.

Our longer time series shows just how stark this acceleration is. Understanding how and where “feedbacks” are making glaciers melt even faster is essential to make better predictions of future change in this important region.

Bethan Davies, Senior Lecturer in Physical Geography, Newcastle University. This article is republished from The Conversation under a Creative Commons license. Read the original article.
