Science

Frozen mammoth skin retained its chromosome structure

Artist's depiction of a large mammoth with brown fur and huge, curving tusks in an icy, tundra environment.

One of the challenges of working with ancient DNA samples is that damage accumulates over time, breaking up the structure of the double helix into ever smaller fragments. In the samples we’ve worked with, these fragments scatter and mix with contaminants, making reconstructing a genome a large technical challenge.

But a dramatic paper released on Thursday shows that this isn’t always true. Damage does create progressively smaller fragments of DNA over time. But if they’re trapped in the right sort of material, they’ll stay right where they are, essentially preserving some key features of ancient chromosomes even as the underlying DNA decays. Researchers have now used that to detail the chromosome structure of mammoths, with implications for how these mammals regulated some key genes.

DNA meets Hi-C

The backbone of DNA’s double helix consists of alternating sugars and phosphates, chemically linked together (the bases of DNA are chemically linked to these sugars). Damage from things like radiation can break these chemical linkages, with fragmentation increasing over time. When samples reach the age of something like a Neanderthal, very few fragments are longer than 100 base pairs. Since chromosomes are millions of base pairs long, it was thought that this would inevitably destroy their structure, as many of the fragments would simply diffuse away.

But that will only be true if the medium they’re in allows diffusion. And some scientists suspected that permafrost, which preserves the tissue of some now-extinct Arctic animals, might block that diffusion. So, they set out to test this using mammoth tissues, obtained from a sample termed YakInf that’s roughly 50,000 years old.

The challenge is that the molecular techniques we use to probe chromosomes take place in liquid solutions, where fragments would just drift away from each other in any case. So, the team focused on an approach termed Hi-C, which specifically preserves information about which bits of DNA were close to each other. It does this by exposing chromosomes to a chemical that will link any pieces of DNA that are in close physical proximity. So, even if those pieces are fragments, they’ll be stuck to each other by the time they end up in a liquid solution.

A few enzymes are then used to convert these linked molecules to a single piece of DNA, which is then sequenced. This data, which will contain sequence information from two different parts of the genome, then tells us that those parts were once close to each other inside a cell.

Interpreting Hi-C

On its own, a single bit of data like this isn’t especially interesting; two bits of genome might end up next to each other at random. But when you have millions of bits of data like this, you can start to construct a map of how the genome is structured.

There are two basic rules governing the pattern of interactions we’d expect to see. The first is that interactions within a chromosome are going to be more common than interactions between two chromosomes. And, within a chromosome, parts that are physically closer to each other on the molecule are more likely to interact than those that are farther apart.

So, if you are looking at a specific segment of, say, chromosome 12, most of the locations Hi-C will find it interacting with will also be on chromosome 12. And the frequency of interactions will go up as you move to sequences that are ever closer to the one you’re interested in.
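
To make those two rules concrete, here is a minimal, hypothetical Python sketch (not from the study; the chromosome names, fragment counts, and weighting scheme are invented for illustration) that simulates Hi-C-style contact counts and then checks that a fragment's strongest partners are its near neighbors on the same chromosome.

```python
# Toy illustration of the two Hi-C rules described above: contacts are far more
# frequent within a chromosome than between chromosomes, and within a
# chromosome they fall off with genomic distance. All values are invented.
import random
from collections import defaultdict

random.seed(0)

# Hypothetical genome: two chromosomes, each split into 50 fragments.
genome = {"chr1": [f"chr1_{i}" for i in range(50)],
          "chr2": [f"chr2_{i}" for i in range(50)]}

def simulate_contacts(n_events=200000):
    """Draw random ligation events with an intra-chromosomal, distance-weighted bias."""
    counts = defaultdict(int)
    frags = [(c, i) for c, lst in genome.items() for i in range(len(lst))]
    for _ in range(n_events):
        c1, i1 = random.choice(frags)
        c2, i2 = random.choice(frags)
        if (c1, i1) == (c2, i2):
            continue
        # Rule 1 and 2: same-chromosome pairs are kept more often, and more
        # often still when the two fragments sit close together.
        weight = 1.0 / (1 + abs(i1 - i2)) if c1 == c2 else 0.01
        if random.random() < weight:
            key = tuple(sorted([genome[c1][i1], genome[c2][i2]]))
            counts[key] += 1
    return counts

counts = simulate_contacts()

# A fragment's strongest partners should be its nearby neighbors on the same chromosome.
target = "chr1_25"
partners = sorted(((n, pair) for pair, n in counts.items() if target in pair),
                  reverse=True)[:5]
for n, pair in partners:
    print(n, pair)
```

In real data, the same logic applied across millions of read pairs is what lets researchers assign fragments to chromosomes and put them in order.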

On its own, you can use Hi-C to help reconstruct a chromosome even if you start with nothing but fragments. But the exceptions to the expected pattern also tell us things about biology. For example, genes that are active tend to be on loops of DNA, with the two ends of the loop held together by proteins; the same is true for inactive genes. Interactions within these loops tend to be more frequent than interactions between them, subtly altering the frequency with which two fragments end up linked together during Hi-C.

To help with climate change, carbon capture will have to evolve

gotta catch more —

The technologies are useful tools but have yet to move us away from fossil fuels.

Bioreactors that host algae would be one option for carbon sequestration—as long as the carbon is stored somehow.

More than 200 kilometers off Norway’s coast in the North Sea sits the world’s first offshore carbon capture and storage project. Built in 1996, the Sleipner project strips carbon dioxide from natural gas—largely made up of methane—to make it marketable. But instead of releasing the CO2 into the atmosphere, the greenhouse gas is buried.

The effort stores around 1 million metric tons of CO2 per year—and is praised by many as a pioneering success in global attempts to cut greenhouse gas emissions.

Last year, total global CO2 emissions hit an all-time high of around 35.8 billion tons, or gigatons. At these levels, scientists estimate, we have roughly six years left before we emit so much CO2 that global warming will consistently exceed 1.5° Celsius above average preindustrial temperatures, an internationally agreed-upon limit. (Notably, the global average temperature for the past 12 months has exceeded this threshold.)
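
Taken together, those two figures imply a rough remaining carbon budget, which a quick back-of-envelope calculation makes explicit (a sketch based only on the numbers quoted above; formal budget estimates carry large uncertainties):

```python
# Back-of-envelope arithmetic implied by the figures above: ~35.8 gigatons of
# CO2 emitted last year, and roughly six years left at that rate before
# warming consistently exceeds 1.5 degrees Celsius.
annual_emissions_gt = 35.8   # global CO2 emissions, gigatons per year
years_remaining = 6          # approximate years quoted by scientists

remaining_budget_gt = annual_emissions_gt * years_remaining
print(f"Implied remaining budget: ~{remaining_budget_gt:.0f} Gt CO2")
# -> Implied remaining budget: ~215 Gt CO2
```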

Phasing out fossil fuels is key to cutting emissions and fighting climate change. But a suite of technologies collectively known as carbon capture, utilization and storage, or CCUS, are among the tools available to help meet global targets to cut CO2 emissions in half by 2030 and to reach net-zero emissions by 2050. These technologies capture, use or store away CO2 emitted by power generation or industrial processes, or suck it directly out of the air. The Intergovernmental Panel on Climate Change (IPCC), the United Nations body charged with assessing climate change science, includes carbon capture and storage among the actions needed to slash emissions and meet temperature targets.

Carbon capture, utilization and storage technologies often capture CO2 from coal or natural gas power generation or industrial processes, such as steel manufacturing. The CO2 is compressed into a liquid under high pressure and transported through pipelines to sites where it may be stored, in porous sedimentary rock formations containing saltwater, for example, or used for other purposes. The captured CO2 can be injected into the ground to extract oil dregs or used to produce cement and other products.

Governments and industry are betting big on such projects. Last year, for example, the British government announced 20 billion pounds (more than $25 billion) in funding for CCUS, often shortened to CCS. The United States allocated more than $5 billion between 2011 and 2023 and committed an additional $8.2 billion from 2022 to 2026. Globally, public funding for CCUS projects rose to $20 billion in 2023, according to the International Energy Agency (IEA), which works with countries around the world to forge energy policy.

Given the urgency of the situation, many people argue that CCUS is necessary to move society toward climate goals. But critics don’t see the technology, in its current form, shifting the world away from oil and gas: In a lot of cases, they point out, the captured CO2 is used to extract more fossil fuels in a process known as enhanced oil recovery. They contend that other existing solutions such as renewable energy offer deeper and quicker CO2 emissions cuts. “It’s better not to emit in the first place,” says Grant Hauber, an energy finance adviser at the Institute for Energy Economics and Financial Analysis, a nonpartisan organization in Lakewood, Ohio.

What’s more, fossil fuel companies provide funds to universities and researchers—which some say could shape what is studied and what is not, even if the work of individual scientists is legitimate. For these reasons, some critics say CCUS shouldn’t be pursued at all.

“Carbon capture and storage essentially perpetuates fossil fuel reliance. It’s a distraction and a delay tactic,” says Jennie Stephens, a climate justice researcher at Northeastern University in Boston. She adds that there is little focus on understanding the psychological, social, economic, and political barriers that prevent communities from shifting away from fossil fuels and forging solutions to those obstacles.

According to the Global CCS Institute, an industry-led think tank headquartered in Melbourne, Australia, of the 41 commercial projects operational as of July 2023, most were part of efforts that produce, extract, or burn fossil fuels, such as coal- and gas-fired power plants. That’s true of the Sleipner project, run by the energy company Equinor. It’s the case, too, with the world’s largest CCUS facility, operated by ExxonMobil in Wyoming, in the United States, which also captures CO2 as part of the production of methane.

Granted, not all CCUS efforts further fossil fuel production, and many projects now in the works have the sole goal of capturing and locking up CO2. Still, some critics doubt whether these greener approaches could ever lock away enough CO2 to meaningfully contribute to climate mitigation, and they are concerned about the costs.

Others are more circumspect. Sally Benson, an energy researcher at Stanford University, doesn’t want to see CCUS used as an excuse to carry on with fossil fuels. But she says the technology is essential for capturing some of the CO2 from fossil fuel production and usage, as well as from industrial processes, as society transitions to new energy sources. “If we can get rid of those emissions with carbon capture and sequestration, that sounds like success to me,” says Benson, who codirects an institute that receives funding from fossil fuel companies.

NASA update on Starliner thruster issues: This is fine

Boeing's Starliner spacecraft on final approach to the International Space Station last month.

Before clearing Boeing’s Starliner crew capsule to depart the International Space Station and head for Earth, NASA managers want to ensure the spacecraft’s problematic control thrusters can help guide the ship’s two-person crew home.

The two astronauts who launched June 5 on the Starliner spacecraft’s first crew test flight agree with the managers, although they said Wednesday that they’re comfortable with flying the capsule back to Earth if there’s any emergency that might require evacuation of the space station.

NASA astronauts Butch Wilmore and Suni Williams were supposed to return to Earth weeks ago, but managers are keeping them at the station as engineers continue probing thruster problems and helium leaks that have plagued the mission since its launch.

“This is a tough business that we’re in,” Wilmore, Starliner’s commander, told reporters Wednesday in a news conference from the space station. “Human spaceflight is not easy in any regime, and there have been multiple issues with any spacecraft that’s ever been designed, and that’s the nature of what we do.”

Five of the 28 reaction control system thrusters on Starliner’s service module dropped offline as the spacecraft approached the space station last month. Starliner’s flight software disabled the five control jets when they started overheating and losing thrust. Four of the thrusters were later recovered, although some couldn’t reach their full power levels as Starliner came in for docking.

Wilmore, who took over manual control for part of Starliner’s approach to the space station, said he could sense the spacecraft’s handling qualities diminish as thrusters temporarily failed. “You could tell it was degraded, but still, it was impressive,” he said. Starliner ultimately docked to the station in autopilot mode.

In mid-June, the Starliner astronauts hot-fired the thrusters again, and their thrust levels were closer to normal.

“What we want to know is that the thrusters can perform; if whatever their percentage of thrust is, we can put it into a package that will get us a deorbit burn,” said Williams, a NASA astronaut serving as Starliner’s pilot. “That’s the main purpose that we need [for] the service module: to get us a good deorbit burn so that we can come back.”

These small thrusters aren’t necessary for the deorbit burn itself, which will use a different set of engines to slow Starliner’s velocity enough for it to drop out of orbit and head for landing. But Starliner needs enough of the control jets working to maneuver into the proper orientation for the deorbit firing.

This test flight is the first time astronauts have flown in space on Boeing’s Starliner spacecraft, following years of delays and setbacks. Starliner is NASA’s second human-rated commercial crew capsule, and it’s poised to join SpaceX’s Crew Dragon in a rotation of missions ferrying astronauts to and from the space station through the rest of the decade.

But first, Boeing and NASA need to safely complete the Starliner test flight and resolve the thruster problems and helium leaks plaguing the spacecraft before moving forward with operational crew rotation missions. There’s a Crew Dragon spacecraft currently docked to the station, but Steve Stich, NASA’s commercial crew program manager, told reporters Wednesday that, right now, Wilmore and Williams still plan to come home on Starliner.

“The beautiful thing about the commercial crew program is that we have two vehicles, two different systems, that we could use to return crew,” Stich said. “So we have a little bit more time to go through the data and then make a decision as to whether we need to do anything different. But the prime option today is to return Butch and Suni on Starliner. Right now, we don’t see any reason that wouldn’t be the case.”

Mark Nappi, Boeing’s Starliner program manager, said officials identified more than 30 actions to investigate five “small” helium leaks and the thruster problems on Starliner’s service module. “All these items are scheduled to be completed by the end of next week,” Nappi said.

“It’s a test flight, and the first with crew, and we’re just taking a little extra time to make sure that we understand everything before we commit to deorbit,” Stich said.

Congress apparently feels a need for “reaffirmation” of SLS rocket

Stuart Smalley is here to help with daily affirmations of SLS.

Aurich Lawson | SNL

There is a curious section in the new congressional reauthorization bill for NASA that concerns the agency’s large Space Launch System rocket.

The section is titled “Reaffirmation of the Space Launch System,” and in it Congress asserts its commitment to a flight rate of twice per year for the rocket. The reauthorization legislation, which cleared a House committee on Wednesday, also said NASA should identify other customers for the rocket.

“The Administrator shall assess the demand for the Space Launch System by entities other than NASA and shall break out such demand according to the relevant Federal agency or nongovernment sector,” the legislation states.

Congress directs NASA to report back, within 180 days of the legislation passing, on several topics. First, the legislators want an update on NASA’s progress toward achieving a flight rate of twice per year for the SLS rocket, and the Artemis mission by which this capability will be in place.

Additionally, Congress is asking for NASA to study demand for the SLS rocket and estimate “cost and schedule savings for reduced transit times” for deep space missions due to the “unique capabilities” of the rocket. The space agency also must identify any “barriers or challenges” that could impede use of the rocket by entities other than NASA, and estimate the cost of overcoming those barriers.

Is someone afraid?

There is a fair bit to unpack here, but the inclusion of this section—there is no “reaffirmation” of the Orion spacecraft, for example—suggests that either the legacy space companies building the SLS rocket, local legislators, or both feel the need to protect the SLS rocket. As one source on Capitol Hill familiar with the legislation told Ars, “It’s a sign that somebody’s afraid.”

Congress created the SLS rocket 14 years ago with the NASA Authorization Act of 2010. The large rocket kept a river of contracts flowing to large aerospace companies, including Boeing and Northrop Grumman, who had been operating the Space Shuttle. Congress then lavished tens of billions of dollars on the contractors over the years for development, often authorizing more money than NASA said it needed. Congressional support was unwavering, at least in part because the SLS program boasts that it has jobs in every state.

Under the original law, the SLS rocket was supposed to achieve “full operational capability” by the end of 2016. The first launch of the SLS vehicle did not take place until late 2022, six years later. It was entirely successful. However, for various reasons, the rocket will not fly again until September 2025 at the earliest.

Nearby star cluster houses unusually large black hole

Big, but not that big —

Fast-moving stars imply that there’s an intermediate-mass black hole there.

From left to right, zooming in from the globular cluster to the site of its black hole.

ESA/Hubble & NASA, M. Häberle

Supermassive black holes appear to reside at the center of every galaxy and to have done so since galaxies formed early in the history of the Universe. Currently, however, we can’t entirely explain their existence, since it’s difficult to understand how they could have grown to supermassive scales as quickly as they did.

A possible bit of evidence was recently found by using about 20 years of data from the Hubble Space Telescope. The data comes from a globular cluster of stars that’s thought to be the remains of a dwarf galaxy and shows that a group of stars near the cluster’s core are moving so fast that they should have been ejected from it entirely. That implies that something massive is keeping them there, which the researchers argue is a rare intermediate-mass black hole, weighing in at over 8,000 times the mass of the Sun.

Moving fast

The fast-moving stars reside in Omega Centauri, the largest globular cluster in the Milky Way. With an estimated 10 million stars, it’s a crowded environment, but observations are aided by its relative proximity, at “only” 17,000 light-years away. Those observations have been hinting that there might be a central black hole within the globular cluster, but the evidence has not been decisive.

The new work, done by a large international team, used over 500 images of Omega Centauri, taken by the Hubble Space Telescope over the course of 20 years. This allowed them to track the motion of stars within the cluster and estimate their speeds relative to the cluster’s center of mass. While this has been done previously, the most recent data allowed an update that reduced the uncertainty in the stars’ velocities.

Within the updated data, a number of stars near the cluster’s center stood out for their extreme velocities: seven of them were moving fast enough that the gravitational pull of the cluster isn’t enough to keep them there. All seven should have been lost from the cluster within 1,000 years, although the uncertainties remained large for two of them. Based on the size of the cluster, there shouldn’t even be a single foreground star between Hubble and Omega Centauri, so these really seem to be within the cluster despite their velocities.

The simplest explanation for that is that there’s an additional mass holding them in place. That could potentially be several massive objects, but the close proximity of all these stars to the center of the cluster favors a single, compact object. Which means a black hole.

Based on the velocities, the researchers estimate that the object has a mass of at least 8,200 times that of the Sun. A couple of stars appear to be accelerating; if that holds up based on further observations, it would indicate that the black hole is over 20,000 solar masses. That places it firmly within black hole territory, though smaller than supermassive black holes, which are viewed as those with roughly a million solar masses or more. And it’s considerably larger than you’d expect from black holes formed through the death of a star, which aren’t expected to be much larger than 100 times the Sun’s mass.

This places it in the category of intermediate-mass black holes, of which there are only a handful of potential sightings, none of them universally accepted. So, this is a significant finding if for no other reason than it may be the least controversial spotting of an intermediate-mass black hole yet.
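
The mass estimate above follows from what is essentially an escape-velocity argument: if stars this close to the center are moving faster than the cluster's visible mass could gravitationally retain, the unseen enclosed mass has to be at least large enough to hold them. Here's a minimal Python sketch of that reasoning; the speed and radius below are purely illustrative placeholders, not values from the paper.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
PARSEC = 3.086e16  # meters per parsec

def min_enclosed_mass(speed_km_s: float, radius_pc: float) -> float:
    """Smallest central mass (in solar masses) for which a star at this
    radius and speed is still bound, i.e. its speed <= escape velocity."""
    v = speed_km_s * 1e3
    r = radius_pc * PARSEC
    return v**2 * r / (2 * G) / M_SUN

# Hypothetical example: a star moving at 60 km/s, 0.01 parsecs from the center.
print(f"~{min_enclosed_mass(60, 0.01):,.0f} solar masses required to keep it bound")
```

The roughly 8,200-solar-mass lower bound quoted above comes from applying this kind of constraint to the measured speeds and positions of the fast-moving stars.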

What’s this telling us?

For now, there are still considerable uncertainties in some of the details here—but prospects for improving the situation exist. Observations with the Webb Space Telescope could potentially pick up the faint emissions from gas that’s falling into the black hole. And it can track the seven stars identified here. Its spectrographs could also potentially pick up the red and blue shifts in light caused by the star’s motion. Its location at a considerable distance from Hubble could also provide a more detailed three-dimensional picture of Omega Centauri’s central structure.

Figuring this out could potentially tell us more about how black holes grow to supermassive scales. Earlier potential sightings of intermediate-mass black holes have also come in globular clusters, which may suggest that they’re a general feature of large gatherings of stars.

But Omega Centauri differs from many other globular clusters, which often contain large populations of stars that all formed at roughly the same time, suggesting the clusters formed from a single giant cloud of materials. Omega Centauri has stars with a broad range of ages, which is one of the reasons why people think it’s the remains of a dwarf galaxy that was sucked into the Milky Way.

If that’s the case, then its central black hole is an analog of the supermassive black holes found in actual dwarf galaxies—which raises the question of why it’s only intermediate-mass. Did something about its interactions with the Milky Way interfere with the black hole’s growth?

And, in the end, none of this sheds light on how any black hole grows to be so much more massive than any star it could conceivably have formed from. Getting a better sense of this black hole’s history could provide more perspective on some questions that are currently vexing astronomers.

Nature, 2024. DOI: 10.1038/s41586-024-07511-z

Beryl is just the latest disaster to strike the energy capital of the world

Don’t know what you’ve got until it’s gone —

It’s pretty weird to use something I’ve written about in the abstract for so long.

Why yes, that Starlink dish is precariously perched to get around tree obstructions.

Eric Berger

I’ll readily grant you that Houston might not be the most idyllic spot in the world. The summer heat is borderline unbearable. The humidity is super sticky. We don’t have mountains or pristine beaches—we have concrete.

But we also have a pretty amazing melting pot of culture, wonderful cuisine, lots of jobs, and upward mobility. Most of the year, I love living here. Houston is totally the opposite of, “It’s a nice place to visit, but you wouldn’t want to live there.” Houston is not a particularly nice place to visit, but you might just want to live here.

Except for the hurricanes.

Houston is the largest city in the United States to be highly vulnerable to hurricanes. At a latitude of 29.7 degrees, the city is solidly in the subtropics, and much of it is built within 25 to 50 miles of the Gulf of Mexico. Every summer, with increasing dread, we watch tropical systems develop over the Atlantic Ocean and then move into the Gulf.

For some meteorologists and armchair forecasters, tracking hurricanes is fulfilling work and a passionate hobby. For those of us who live near the water along the upper Texas coast, following the movements of these storms is gut-wrenching stuff. A few days before a potential landfall, I’ll find myself jolted awake in the middle of the night by the realization that new model data must be available. When you see a storm turning toward you, or intensifying, it’s psychologically difficult to process.

Beryl the Bad

It felt like we were watching Beryl forever. It formed into a tropical depression on June 28, became a hurricane the next day, and by June 30, it was a major hurricane storming into the Caribbean Sea. Beryl set all kinds of records for a hurricane in late June and early July. Put simply, we have never seen an Atlantic storm intensify so rapidly, or so much, this early in the hurricane season. Beryl behaved as if it were the peak of the Atlantic season, in September, rather than the beginning of July—normally a pretty sleepy time for Atlantic hurricane activity. I wrote about this for Ars Technica a week ago.

At the time, it looked as though the greater Houston area would be completely spared by Beryl, as the most reliable modeling data took the storm across the Yucatan Peninsula and into the southern Gulf of Mexico before a final landfall in northern Mexico. But over time, the forecast began to change, with the track moving steadily up the Texas coast.

I was at a dinner to celebrate the birthday of my cousin’s wife last Friday when I snuck a peek at my phone. It was about 7 pm local time. We were at a Mexican restaurant in Galveston, and I knew the latest operational run of the European model was about to come out. This was a mistake, as the model indicated a landfall about 80 miles south of Houston, which would bring the core of the storm’s strongest winds over Houston.

I had to fake joviality for the rest of the night, while feeling sick to my stomach.

Barreling inland

The truth is, Beryl could have been much worse. After weakening due to interaction with the Yucatan Peninsula on Friday, Beryl moved into the Gulf of Mexico just about when I was having that celebratory dinner on Friday evening. At that point, it was a strong tropical storm with 60 mph sustained winds. It had nearly two and a half days over open water to re-organize, and that seemed likely. Beryl had Saturday to shrug off dry air and was expected to intensify significantly on Sunday. It was due to make landfall on Monday morning.

The track for Beryl continued to look grim over the weekend—although its landfall would occur well south of Houston, Beryl’s track inland would bring its center and core of strongest winds over the most densely populated part of the city. However, we took some solace from a lack of serious intensification on Saturday and Sunday. Even at 10 pm local time on Sunday, less than six hours before Beryl’s landfall near Matagorda, it was still not a hurricane.

However, in those final hours Beryl did finally start to get organized in a serious way. We have seen this before as hurricanes start to run up on the Texas coast, where frictional effects from their outer bands aid intensification. In those last six hours, Beryl intensified into a Category 1 hurricane, with 80-mph sustained winds. The eyewall of the storm closed, and Beryl was poised for rapid intensification. Then it ran aground.

Normally, as a hurricane traverses land it starts to weaken fairly quickly. But Beryl didn’t. Instead, the storm maintained much of its strength and bulldozed right into the heart of Houston with near hurricane-force sustained winds and higher gusts. I suspect what happened is that Beryl, beginning to deepen, had a ton of momentum at landfall, and it took time for interaction with land to reverse that momentum and begin slowing down its winds.

First the lights went out. Then the Internet soon followed. Except for storm chasers, hurricanes are miserable experiences. There is the torrential rainfall and rising water. But most ominous of all, at least for me, are the howling winds. When stronger gusts come through, even sturdily built houses shake. Trees whip around violently. It is such an uncontrolled, violent fury that one must endure. Losing a connection to the outside world magnifies one’s sense of helplessness.

In the end, Beryl knocked out power to about 2.5 million customers across the Houston region, including yours truly. Because broadband Internet service providers generally rely on these electricity services to deliver Internet, many customers lost connectivity. Even cell phone towers, reduced to batteries or small generators, were often only capable of delivering text and voice services.

Why every quantum computer will need a powerful classical computer

A single logical qubit is built from a large collection of hardware qubits.

One of the more striking things about quantum computing is that the field, despite not having proven itself especially useful, has already spawned a collection of startups that are focused on building something other than qubits. It might be easy to dismiss this as opportunism—trying to cash in on the hype surrounding quantum computing. But it can be useful to look at the things these startups are targeting, because they can be an indication of hard problems in quantum computing that haven’t yet been solved by any one of the big companies involved in that space—companies like Amazon, Google, IBM, or Intel.

In the case of a UK-based company called Riverlane, the unsolved piece that is being addressed is the huge amount of classical computations that are going to be necessary to make the quantum hardware work. Specifically, it’s targeting the huge amount of data processing that will be needed for a key part of quantum error correction: recognizing when an error has occurred.

Error detection vs. the data

All qubits are fragile, tending to lose their state during operations, or simply over time. No matter what the technology—cold atoms, superconducting transmons, whatever—these error rates put a hard limit on the amount of computation that can be done before an error is inevitable. That rules out doing almost any useful computation directly on existing hardware qubits.

The generally accepted solution to this is to work with what are called logical qubits. These involve linking multiple hardware qubits together and spreading the quantum information among them. Additional hardware qubits are linked in so that they can be measured to monitor errors affecting the data, allowing them to be corrected. It can take dozens of hardware qubits to make a single logical qubit, meaning even the largest existing systems can only support about 50 robust logical qubits.

Riverlane’s founder and CEO, Steve Brierley, told Ars that error correction doesn’t only stress the qubit hardware; it stresses the classical portion of the system as well. Each of the measurements of the qubits used for monitoring the system needs to be processed to detect and interpret any errors. We’ll need roughly 100 logical qubits to do some of the simplest interesting calculations, meaning monitoring thousands of hardware qubits. Doing more sophisticated calculations may mean thousands of logical qubits.

That error-correction data (termed syndrome data in the field) needs to be read between each operation, which makes for a lot of data. “At scale, we’re talking a hundred terabytes per second,” said Brierley. “At a million physical qubits, we’ll be processing about a hundred terabytes per second, which is Netflix global streaming.”

It also has to be processed in real time; otherwise, computations will get held up waiting for error correction to happen. To avoid that, errors must be detected in real time. For transmon-based qubits, syndrome data is generated roughly every microsecond, so real time means completing the processing of the data—possibly terabytes of it—at a frequency of around a megahertz. And Riverlane was founded to provide hardware that’s capable of handling it.
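
Riverlane's own decoding algorithm is described below, but the first step of most real-time decoding pipelines is the same: compare each round of syndrome measurements with the previous round and flag the stabilizers that flipped. The following sketch is a generic illustration of that step only (it is not Riverlane's algorithm), using a made-up four-stabilizer syndrome stream.

```python
from typing import List

def detection_events(rounds: List[List[int]]) -> List[List[int]]:
    """For each round of syndrome bits, flag the stabilizers whose value
    flipped relative to the previous round; these flips ("detection events")
    are what the decoder must explain in real time."""
    events = []
    previous = [0] * len(rounds[0])  # reference round: assume no prior errors
    for syndrome in rounds:
        events.append([a ^ b for a, b in zip(syndrome, previous)])
        previous = syndrome
    return events

# Hypothetical syndrome stream: four stabilizers measured over three rounds.
stream = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],  # an error flips two neighboring stabilizers
    [0, 1, 1, 0],  # the flip persists, so no new event this round
]
print(detection_events(stream))
# -> [[0, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
```

At a million physical qubits and microsecond measurement rounds, even this simple comparison has to chew through syndrome bits at megahertz rates, which is why dedicated hardware becomes attractive.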

Handling the data

The system the company has developed is described in a paper that it has posted on the arXiv. It’s designed to handle syndrome data after other hardware has already converted the analog signals into digital form. This allows Riverlane’s hardware to sit outside any low-temperature hardware that’s needed for some forms of physical qubits.

That data is run through an algorithm the paper terms a “Collision Clustering decoder,” which handles the error detection. To demonstrate its effectiveness, the company implemented it on a typical field-programmable gate array (FPGA) from Xilinx, where it occupies only about 5 percent of the chip but can handle a logical qubit built from nearly 900 hardware qubits (simulated, in this case).

The company also demonstrated a custom chip that handled an even larger logical qubit, while only occupying a tiny fraction of a square millimeter and consuming just 8 milliwatts of power.

Both of these versions are highly specialized; they simply feed the error information for other parts of the system to act on. So, it is a highly focused solution. But it’s also quite flexible in that it works with various error-correction codes. Critically, it also integrates with systems designed to control a qubit based on very different physics, including cold atoms, trapped ions, and transmons.

“I think early on it was a bit of a puzzle,” Brierley said. “You’ve got all these different types of physics; how are we going to do this?” It turned out not to be a major challenge. “One of our engineers was in Oxford working with the superconducting qubits, and in the afternoon he was working with the ion trap qubits. He came back to Cambridge and he was all excited. He was like, ‘They’re using the same control electronics.'” It turns out that, regardless of the physics involved in controlling the qubits, everybody had borrowed the same hardware from a different field. (Brierley said it was a Xilinx radiofrequency system-on-a-chip built for 5G base station prototyping.) That makes it relatively easy to integrate Riverlane’s custom hardware with a variety of systems.

New weight-loss and diabetes drugs linked to lower risk of 10 cancers

Secondary benefits —

For diabetes patients, GLP-1 drugs linked to lower cancer risks compared to insulin.

Ozempic is a GLP-1 drug for adults with type 2 diabetes.

For patients with Type 2 diabetes, taking one of the new GLP-1 drugs, such as Ozempic, is associated with lower risks of developing 10 out of 13 obesity-associated cancers as compared with taking insulin, according to a recent study published in JAMA Network Open.

The study was retrospective, capturing data from over 1.6 million patients with Type 2 diabetes but no history of obesity-associated cancers prior to the study period. Using electronic health records, researchers had follow-up data for up to 15 years after the patients started taking either a GLP-1 drug, insulin, or metformin between 2008 and 2015.

This type of study can’t prove that the GLP-1 drugs caused the lower associated risks, but the results fit with some earlier findings. That includes results from one trial that found a 32 percent overall lower risk of obesity-associated cancers following bariatric surgery for weight loss.

In the new study, led by researchers at Case Western Reserve University School of Medicine, some of the GLP-1-associated risk reductions were quite substantial. Compared with patients taking insulin, patients taking a GLP-1 drug had a 65 percent lower associated risk of gall bladder cancer, a 63 percent lower associated risk of meningioma (a type of brain tumor), a 59 percent lower associated risk for pancreatic cancer, and a 53 percent lower associated risk of hepatocellular carcinoma (liver cancer). The researchers also found lower associated risks for esophageal cancer, colorectal cancer, kidney cancer, ovarian cancer, endometrial cancer, and multiple myeloma.

Compared with insulin, the researchers saw no lowered associated risk for thyroid and breast cancers. They did calculate a lower risk of stomach cancer, but that finding was not statistically significant.
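
For readers parsing those percentages: results like these are usually reported as hazard ratios, and a “percent lower associated risk” is simply one minus the hazard ratio. A small illustrative conversion (the hazard ratios below are back-calculated from the percentages quoted above, not taken from the paper itself):

```python
def percent_lower_risk(hazard_ratio: float) -> float:
    """Convert a hazard ratio (values below 1 mean lower risk) into the
    '% lower associated risk' figure used in the text above."""
    return (1 - hazard_ratio) * 100

# Hazard ratios implied by the reductions quoted above (illustrative only).
for cancer, hr in [("gall bladder", 0.35), ("meningioma", 0.37),
                   ("pancreatic", 0.41), ("liver", 0.47)]:
    print(f"{cancer}: ~{percent_lower_risk(hr):.0f}% lower associated risk")
```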

Gaps and goals

The GLP-1 drugs did not show such promising results against metformin in the study. Compared with patients taking metformin, patients on GLP-1 drugs saw lower associated risks of colorectal cancer, gall bladder cancer, and meningioma, but those calculations were not statistically significant. The results also unexpectedly indicated a higher risk of kidney cancer for those taking GLP-1 drugs, but the cause of that potentially higher risk (which was not seen in the comparison with insulins) is unclear. The researchers called for more research to investigate that possible association.

Overall, the researchers call for far more studies to try to confirm a link between GLP-1 drugs and lower cancer risks, as well as studies to try to understand the mechanisms behind those potential risk reductions. It’s unclear if the lower risks may be driven simply by weight loss, or if insulin resistance, blood sugar levels, or some other mechanisms are at play.

The current study had several limitations given its retrospective, records-based design. Perhaps the biggest one is that the data didn’t allow the researchers to track individual patients’ weights throughout the study period. As such, researchers couldn’t examine associated cancer risk reductions with actual weight loss. It’s one more aspect that warrants further research.

Still, the study provides another promising result for the blockbuster, albeit pricy, drugs. The researchers suggest extending their work to assess whether GLP-1 drugs could be used to improve outcomes in patients with Type 2 diabetes or obesity who are already diagnosed with cancer, in addition to understanding if the drugs can help prevent the cancer.

Alaska’s top-heavy glaciers are approaching an irreversible tipping point

meltdown —

As the plateau of the icefield thins, ice and snow reserves at higher altitudes are lost.

Taku Glacier is one of many that begin in the Juneau Icefield.

The melting of one of North America’s largest ice fields has accelerated and could soon reach an irreversible tipping point. That’s the conclusion of new research that colleagues and I have published on the Juneau Icefield, which straddles the Alaska-Canada border near the Alaskan capital of Juneau.

In the summer of 2022, I skied across the flat, smooth, and white plateau of the icefield, accompanied by other researchers, sliding in the tracks of the person in front of me under a hot sun. From that plateau, around 40 huge, interconnected glaciers descend towards the sea, with hundreds of smaller glaciers on the mountain peaks all around.

Our work, now published in Nature Communications, has shown that Juneau is an example of a climate “feedback” in action: as temperatures are rising, less and less snow is remaining through the summer (technically: the “end-of-summer snowline” is rising). This in turn leads to ice being exposed to sunshine and higher temperatures, which means more melt, less snow, and so on.

Like many Alaskan glaciers, Juneau’s are top-heavy, with lots of ice and snow at high altitudes above the end-of-summer snowline. This previously sustained the glacier tongues lower down. But when the end-of-summer snowline does creep up to the top plateau, then suddenly a large amount of a top-heavy glacier will be newly exposed to melting.

That’s what’s happening now, each summer, and the glaciers are melting much faster than before, causing the icefield to get thinner and thinner and the plateau to get lower and lower. Once a threshold is passed, these feedbacks can accelerate melt and drive a self-perpetuating loss of snow and ice which would continue even if the world were to stop warming.

Ice is melting faster than ever

Using satellites, photos and old piles of rocks, we were able to measure the ice loss across Juneau Icefield from the end of the last “Little Ice Age” (about 250 years ago) to the present day. We saw that the glaciers began shrinking after that cold period ended in about 1770. This ice loss remained constant until about 1979, when it accelerated. It accelerated again in 2010, doubling the previous rate. Glaciers there shrank five times faster between 2015 and 2019 than from 1979 to 1990.

Our data shows that as the snow decreases and the summer melt season lengthens, the icefield is darkening. Fresh, white snow is very reflective, and much of that strong solar energy that we experienced in the summer of 2022 is reflected back into space. But the end of summer snowline is rising and is now often occurring right on the plateau of the Juneau Icefield, which means that older snow and glacier ice is being exposed to the sun. These slightly darker surfaces absorb more energy, increasing snow and ice melt.

As the plateau of the icefield thins, ice and snow reserves at higher altitudes are lost, and the surface of the plateau lowers. This will make it increasingly hard for the icefield to ever stabilise or even recover. That’s because warmer air at low elevations drives further melt, leading to an irreversible tipping point.
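
To see why this elevation feedback is self-reinforcing, here is a deliberately simple toy loop in Python. Every number in it is invented for illustration (it is not the study's model): the point is only that a lower surface sits in warmer air, which speeds thinning, which lowers the surface further.

```python
# Toy elevation-melt feedback; all parameters are invented for illustration.
LAPSE_RATE = 0.0065        # degrees C of extra warmth per meter of lowering
MELT_PER_DEGREE = 0.5      # extra meters of thinning per year per degree C

surface_lowering = 0.0     # cumulative lowering of the plateau, meters
thinning_rate = 1.0        # starting thinning rate, meters per year

for year in range(1, 51):
    surface_lowering += thinning_rate
    # Feedback: the lower the surface, the warmer the air, the faster the melt.
    thinning_rate = 1.0 + MELT_PER_DEGREE * (LAPSE_RATE * surface_lowering)
    if year % 10 == 0:
        print(f"year {year}: surface {surface_lowering:5.1f} m lower, "
              f"thinning {thinning_rate:.2f} m/yr")
```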

Longer-term data like these are critical to understand how glaciers behave, and the processes and tipping points that exist within individual glaciers. These complex processes make it difficult to predict how a glacier will behave in future.

The world’s hardest jigsaw

We used satellite records to reconstruct how big the glacier was and how it behaved, but this really limits us to the past 50 years. To go back further, we need different methods. To go back 250 years, we mapped the ridges of moraines, which are large piles of debris deposited at the glacier snout, and places where glaciers have scoured and polished the bedrock.

To check and build on our mapping, we spent two weeks on the icefield itself and two weeks in the rainforest below. We camped among the moraine ridges, suspending our food high in the air to keep it safe from bears, shouting to warn off the moose and bears as we bushwhacked through the rainforest, and battling mosquitoes thirsty for our blood.

We used aerial photographs to reconstruct the icefield in the 1940s and 1970s, in the era before readily available satellite imagery. These are high-quality photos but they were taken before global positioning systems made it easy to locate exactly where they were taken.

A number also had some minor damage in the intervening years—some Sellotape, a tear, a thumbprint. As a result, the individual images had to be stitched together to make a 3D picture of the whole icefield. It was all rather like doing the world’s hardest jigsaw puzzle.

Work like this is crucial as the world’s glaciers are melting fast—all together they are currently losing more mass than the Greenland or Antarctic ice sheets, and thinning rates of these glaciers worldwide have doubled over the past two decades.

Our longer time series shows just how stark this acceleration is. Understanding how and where “feedbacks” are making glaciers melt even faster is essential to make better predictions of future change in this important region.

Bethan Davies, Senior Lecturer in Physical Geography, Newcastle University. This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Yellowstone supervolcano destroyed an ecosystem but saved it for us

Set in stone —

50 years of excavation unveiled the story of a catastrophic event and its aftermath.

Interior view of the Rhino Barn. Exposed fossil skeletons left in-situ for research and public viewing.

Rick E. Otto, University of Nebraska State Museum

Death was everywhere. Animal corpses littered the landscape and were mired in the local waterhole as ash swept around everything in its path. For some, death happened quickly; for others, it was slow and painful.

This was the scene in the aftermath of a supervolcanic eruption in Idaho, approximately 1,600 kilometers (900 miles) away. It was an eruption so powerful that it obliterated the volcano itself, leaving a crater 80 kilometers (50 miles) wide and spewing clouds of ash that the wind carried over long distances, killing almost everything that inhaled it. This was particularly true here, in this location in Nebraska, where animals large and small succumbed to the eruption’s deadly emissions.

Eventually, all traces of this horrific event were buried; life continued, evolved, and changed. That’s why, millions of years later in the summer of 1971, Michael Voorhies was able to enjoy another delightful day of exploring.

Finding rhinos

He was, as he had been each summer between academic years, creating a geologic map of his hometown in Nebraska. This meant going from farm to farm and asking if he could walk through the property to survey the rocks and look for fossils. “I’m basically just a kid at heart, and being a paleontologist in the summer was my idea of heaven,” Voorhies, now retired from the University of Georgia, told Ars.

What caught his eye on one particular farm was a layer of volcanic ash—something treasured by geologists and paleontologists, who use it to get the age of deposits. But as he got closer, he also noticed exposed bone. “Finding what was obviously a lower jaw which was still attached to the skull, now that was really quite interesting!” he said. “Mostly what you find are isolated bones and teeth.”

That skull belonged to a juvenile rhino. Voorhies and some of his students returned to the site to dig further, uncovering the rest of the rhino’s completely articulated remains (meaning the bones of its skeleton were connected as they would be in life). More digging produced the intact skeletons of another five or six rhinos. That was enough to get National Geographic funding for a massive excavation that took place between 1978 and 1979. Crews amassed, among numerous other animals, the remarkable total of 70 complete rhino skeletons.

To put this into perspective, most fossil sites—even spectacular locations preserving multiple animals—are composed primarily of disarticulated skeletons, puzzle pieces that paleontologists painstakingly put back together. Here, however, was something no other site had ever before produced: vast numbers of complete skeletons preserved where they died.

Realizing there was still more yet to uncover, Voorhies and others appealed to the larger Nebraska community to help preserve the area. Thanks to hard work and substantial local donations, the Ashfall Fossil Beds park opened to the public in 1991, staffed by two full-time employees.

Fossils discovered are now left in situ, meaning they remain exposed exactly where they are found, protected by a massive structure called the Hubbard Rhino Barn. Excavations are conducted within the barn at a much slower and steadier pace than those in the ’70s due in large part to the small, rotating number of seasonal employees—mostly college students—who excavate further each summer.

The Rhino Barn protects the fossil bed from the elements.

Photos by Rick E. Otto, University of Nebraska State Museum

A full ecosystem

Almost 50 years of excavation and research have unveiled the story of a catastrophic event and its aftermath, which took place in a Nebraska that nobody would recognize—one where species like rhinoceros, camels, and saber-toothed deer were a common sight.

But to understand that story, we have to set the stage. The area we know today as Ashfall Fossil Beds was actually a waterhole during the Miocene, one frequented by a diversity of animals. We know this because there are fossils of those animals in a layer of sand at the very bottom of the waterhole, a layer that was not impacted by the supervolcanic eruption.

Rick Otto was one of the students who excavated fossils in 1978. He became Ashfall’s superintendent in 1991 and retired in late 2023. “There were animals dying a natural death around the Ashfall waterhole before the volcanic ash storm took place,” Otto told Ars, which explains the fossils found in that sand. After being scavenged, their bodies may have been trampled by some of the megafauna visiting the waterhole, which would have “worked those bones into the sand.”

Egalitarian oddity found in the Neolithic

Eat up! —

Men, women, and immigrants all seemed to have similar dietary inputs.

A skeleton found during 1950s excavations at the Barmaz site.

Did ancient people practice equality? While stereotypes may suggest otherwise, the remains of one Neolithic society reveal evidence that both men and women, as well as locals and foreigners, were all equal in at least a critical aspect of life: what they ate.

The Neolithic saw the dawn of agriculture and animal husbandry some 6,000 years ago. In what is now Valais, Switzerland, the type and amount of food people ate were the same regardless of sex or where they had come from. Researchers led by Déborah Rosselet-Christ of the University of Geneva (UNIGE) learned this by analyzing isotopes in the bones and teeth of adults buried in what is now called the Barmaz necropolis. Based on the 49 individuals studied, people at the Barmaz site enjoyed dietary equality.

“Unlike other similar studies of Neolithic burials, the Barmaz population appears to have drawn its protein resources from a similar environment, with the same access to resources for adults, whether male or female,” the researchers said in a study recently published in the Journal of Archaeological Science: Reports.

Down to the bone

To determine whether food was equal among the people buried at Barmaz, Rosselet-Christ and her team needed to examine certain isotopes in the bones and others in the teeth. Some of these tissues renew throughout life and some do not, which allows their isotope content to be tied either to someone’s place of birth or to what they ate in their final years.

The team could tell whether an individual was local or foreign by analyzing strontium isotope ratios in the enamel of their teeth. Tooth enamel is formed at a young age and does not self-renew, so the isotopes found in enamel, which enter it through the food someone eats, reflect the environment their food came from. This can be used to distinguish whether an individual was born in an area or moved there after the early years of their life. If you know what the strontium ratios are at a given site, you can compare them to the ratios in tooth enamel and determine whether the owner of the tooth came from that area.

While strontium in tooth enamel can give away whether someone was born in or moved to a certain location at a young age, various isotopes of carbon, nitrogen, and sulfur that also come from food told the research team what and how much people ate during the last years of their lives. Bones such as the humerus (which was the best-preserved bone in most individuals) are constantly renewed with new material. This means that the most recently deposited bone tissue was put in place rather close to death.
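
As a concrete (and entirely hypothetical) illustration of the enamel comparison described above, the sketch below flags individuals as local or non-local depending on whether the strontium isotope ratio in their tooth enamel falls within a locally measured baseline range. All of the numbers are invented; they are not values from the study.

```python
# Hypothetical 87Sr/86Sr baseline for the area around a site; real studies
# derive this range from local plants, soils, water, or animal remains.
LOCAL_SR_RANGE = (0.7085, 0.7105)

def classify(enamel_ratio: float, local_range=LOCAL_SR_RANGE) -> str:
    """Call an individual 'local' if the enamel ratio (fixed in childhood,
    since enamel does not renew) falls inside the local baseline range."""
    low, high = local_range
    return "local" if low <= enamel_ratio <= high else "non-local"

# Invented enamel measurements for three individuals.
for person, ratio in [("individual A", 0.7092), ("individual B", 0.7101),
                      ("individual C", 0.7138)]:
    print(person, classify(ratio))
```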

Something for everyone

Near the valley of the Rhone River in the Swiss Alps, the Barmaz necropolis is located in an area that was once covered in deciduous forests that villages and farmland replaced. Most of the Barmaz people are thought to be locals. The strontium isotopes found in their teeth showed that only a few had not lived in the area during the first few years of their lives, when the enamel formed, though whether other individuals moved there later in life was more difficult to determine.

Analysis of the Barmaz diet showed that it was heavy on animal protein, supplemented with some plant products such as peas and barley. The isotopes analyzed were mostly from young goats and pigs. Based on higher levels of particular carbon and nitrogen isotopes found in their bones, the researchers think these juvenile animals might not have even been weaned yet, which means that the people of this agrarian society were willing to accept less meat yield for higher quality meat.

Rosselet-Christ’s most significant find was that the same median fractions of certain carbon, nitrogen, and sulfur isotopes were found in the bones of both men and women. Whether these people were local or foreign also did not matter—the values of these isotopes in those with different strontium isotope content in their tooth enamel were also the same. It seems that all adults ate equal amounts of the same foods, which was not always the case in Neolithic societies.

“The individuals buried at Barmaz—whether male or female—appear to have lived with equal opportunities, painting a picture of a society with egalitarian reflections,” the research team said in the same study.

Other things in this society were also equal. The dead were buried the same way, with mostly the same materials, regardless of sex or if they were locals or foreigners. While a society this egalitarian is not often associated with Neolithic people, it shows that some of our ancestors believed that nobody should be left out. Maybe they were much more like us than we think.

Journal of Archaeological Science: Reports, 2024. DOI: 10.1016/j.jasrep.2024.104585

The greening of planes, trains, and automobiles

Getting greener —

We need new fuels as society moves away from coal, natural gas and oil.

As the world races to decarbonize everything from the electricity grid to industry, it faces particular problems with transportation—which alone is responsible for about a quarter of our planet’s energy-related greenhouse gas emissions. The fuels for transport need to be not just green, cheap, and powerful, but also lightweight and safe enough to be carried around.

Fossil fuels—mainly gasoline and diesel—have been extraordinarily effective at powering a diverse range of mobile machines. Since the Industrial Revolution, humanity has perfected the art of dredging these up, refining them, distributing them and combusting them in engines, creating a vast and hard-to-budge industry. Now we have to step away from fossil fuels, and the world is finding no one-size-fits-all replacement.

Each type of transportation has its own peculiarities—which is one reason we have different formulations of hydrocarbons today, from gasoline to diesel, bunker fuel to jet fuel. Cars need a convenient, lightweight power source; container ships need enough oomph to last months; planes absolutely need to be reliable and to work at subzero temperatures. As the fossil fuels are phased out, the transport fuel landscape is “getting more diverse,” says Timothy Lipman, co-director of the Transportation Sustainability Research Center at the University of California, Berkeley.

Every energy solution has its pros and cons. Batteries are efficient but struggle with their weight. Hydrogen—the lightest element in the universe—packs a huge energy punch, but it’s expensive to make in a “green” way and, as a gas, it takes up a lot of space. Liquid fuels that carry hydrogen can be easier to transport or drop into an existing engine, but ammonia is toxic, biofuels are in short supply, and synthetic hydrocarbons are hard to produce.
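
The weight trade-off in that paragraph is easiest to see as energy stored per kilogram. The figures below are rough, commonly cited approximations (lower heating values for the fuels, a pack-level estimate for batteries), included only to show the orders of magnitude involved rather than as sourced measurements.

```python
# Approximate gravimetric energy densities in MJ per kg (rough, commonly
# cited figures; treat as orders of magnitude, not precise data).
energy_density_mj_per_kg = {
    "hydrogen": 120.0,
    "gasoline": 44.0,
    "diesel": 43.0,
    "lithium-ion battery pack": 0.7,
}

baseline = energy_density_mj_per_kg["gasoline"]
for carrier, density in energy_density_mj_per_kg.items():
    print(f"{carrier:>24}: {density:6.1f} MJ/kg "
          f"({density / baseline:.2f}x gasoline)")
```

Per kilogram, hydrogen beats everything, which is why it keeps coming up for aviation and shipping; its problems, as the text notes, are volume and the cost of making it cleanly, while batteries' problem is the weight they add for the energy they carry.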

The scale of this energy transition is massive, and the amount of renewable energy the world will require to make the needed electricity and alternative fuels is “a little bit mind-blowing,” says mechanical engineer Keith Wipke, manager of the fuel cell and hydrogen technologies program at the National Renewable Energy Laboratory in Colorado. Everything, from the electrical grid to buildings and industry, is also thirsty for renewable power: It’s estimated that overall, the global demand for electricity could more than double by 2050. Fortunately, analyses suggest that renewables are up to the task. “We need our foot on the accelerator pedal of renewables 100 percent, as fast as we can, and it will all get used,” says Wipke.

Each mode of transport has its specific fuel needs. Much is still to be settled, but here are some likely possibilities.

In order to stay below 1.5° of planetary warming and limit some of the worst effects of climate change, the Intergovernmental Panel on Climate Change recommends that the world hit net-zero emissions by 2050—meaning that whatever greenhouse gases we still put into the air we take out in other ways, such as through forests or carbon capture. Groups including the International Energy Agency (IEA)—a Paris-based intergovernmental organization that analyzes the global energy sector—have laid out pathways that can get the world to net zero.

The IEA’s pathway describes a massive, hard-to-enact shift across the entire world, including all kinds of transport. Their goal: to replace fossil fuels (which release long-captured carbon into the air, where it wreaks havoc on the climate) with something more sustainable, like green hydrogen or biofuels (which either don’t produce greenhouse gases at all or recycle the ones that are already in the air).

Although some transportation sectors are still in flux, we can now get a pretty good glimpse of what will likely be powering the ships, planes, trains, and automobiles of tomorrow. Here’s a peek into that future.
