Science

During a town hall Wednesday, NASA officials on stage looked like hostages


A Trump appointee suggests NASA may not have a new administrator until next year.

NASA press secretary Bethany Stevens, acting administrator Janet Petro, chief of staff Brian Hughes, associate administrator Vanessa Wyche, and deputy associate administrator Casey Swails held a town hall with NASA employees Wednesday. Credit: NASA

The four people at the helm of America’s space agency held a town hall meeting with employees Wednesday, fielding questions about downsizing, layoffs, and proposed budget cuts that threaten to undermine NASA’s mission and prestige.

Janet Petro, NASA’s acting administrator, addressed questions from an auditorium at NASA Headquarters in Washington. She was joined by Brian Hughes, the agency’s chief of staff, a political appointee who was formerly a Florida-based consultant active in city politics and in Donald Trump’s 2024 presidential campaign. Two other senior career managers, Vanessa Wyche and Casey Swails, were also on the stage.

They tried to put a positive spin on the situation at NASA. Petro, Wyche, and Swails are civil servants, not Trump loyalists. None of them looked like they wanted to be there. The town hall was not publicized outside of NASA ahead of time, but live video of the event was available—unadvertised—on an obscure NASA streaming website. The video has since been removed.

8 percent down

NASA’s employees are feeling the pain after the White House proposed slashing the agency’s topline budget by nearly 25 percent in fiscal year 2026, which begins October 1—from $24.8 billion to $18.8 billion. Adjusted for inflation, this would be the smallest NASA budget since 1961, when the first American launched into space.

“The NASA brand is really strong still, and we have a lot of exciting missions ahead of us,” Petro said. “So, I know it’s a hard time that we’re going to be navigating, but again, you have my commitment that I’m here and I will share all of the information that I have when I get it.”

It’s true that NASA employees, along with industry officials and scientists who regularly work with the agency, are navigating through what would most generously be described as a period of great uncertainty. The perception among NASA’s workforce is far darker. “NASA is f—ed,” one current leader in the agency told Ars a few weeks ago, soon after President Trump rescinded his nomination of billionaire businessman and commercial astronaut Jared Isaacman to be the agency’s next administrator.

Janet Petro, NASA’s acting administrator, is seen in 2020 at Kennedy Space Center in Florida. Credit: NASA/Kim Shiflett

Before the White House released its detailed budget proposal in May, NASA and other federal agencies were already scrambling to respond to the Trump administration’s directives to shrink the size of the government. While NASA escaped the mass layoffs of probationary employees that affected other departments, the space agency offered buyouts and incentives for civil servants to retire early or voluntarily leave their posts.

About 900 NASA employees signed up for the first round of the government’s “deferred resignation” program. Casey Swails, NASA’s deputy associate administrator, said Wednesday that number is now up to 1,500 after NASA announced another chance for employees to take the government’s deferred resignation offer. This represents about 8 percent of NASA’s workforce, and the window for employees to apply runs until July 25.

One takeaway from Wednesday’s town hall is that at least some NASA leaders want to motivate more employees to resign voluntarily. Hughes said a “major reason” for luring workers to leave the agency is to avoid “being in a spot where we have to do the involuntary options.”

Rumors of these more significant layoffs, or reductions in force, have hung over NASA for several months. If that happens, workers may not get the incentives the government is offering today to those who leave the agency on their own. Swails said NASA isn’t currently planning any such layoff, although she left the door open for the situation to change: “We’re doing everything we can to avoid going down that path.”

Ultimately, it will depend on how many employees NASA can get to resign on their own. If it’s not enough, layoffs may still be an option.

Many questions, few answers

Nearly all of the questions employees addressed to NASA leadership Wednesday were submitted anonymously, and in writing: When might Trump nominate someone for NASA administrator to take Isaacman’s place? Will any of NASA’s 10 field centers be closed? What is NASA going to do about Trump’s budget proposal, particularly its impact on science missions?

Their responses to these questions, in order: Probably not any time soon, maybe, and nothing.

The Trump administration selected Petro, an engineer and former Army helicopter pilot, to become acting head of NASA on Inauguration Day in January. Bill Nelson, who served as a Florida senator until 2019, resigned the NASA administrator job when former President Biden left the White House.

Petro had been director of NASA’s Kennedy Space Center since 2021, and before that, she was deputy director of the Florida spaceport for 14 years. She leapfrogged NASA’s top civil servant, associate administrator Jim Free, to become acting administrator in January. Free retired from the agency in February. Before the presidential election last year, Free advocated for the next administration to stay the course with NASA’s Artemis program.

But that’s not what the Trump administration wants to do. The White House seeks to cancel the Space Launch System rocket and Orion spacecraft, both core elements of the Artemis program to return astronauts to the Moon after two more flights. Under the new plan, NASA would procure commercial transportation to ferry crews to the Moon and Mars in a similar way to how the agency buys rides for its astronauts to the International Space Station in low-Earth orbit.

NASA’s Curiosity rover captured images to create this selfie mosaic on the surface of Mars in 2015. If implemented as written, the Trump budget proposal would mark the first time in 30 years that NASA does not have a Mars lander in development. The agency would instead turn to commercial companies to demonstrate they can deliver payloads, and eventually humans, to the red planet.

The Trump administration’s statements on space policy have emphasized the longer-term goal of human missions to Mars. The White House’s plans for what NASA will do at the Moon after the Artemis program’s first landing are still undefined.

Petro has kept a low profile since becoming NASA’s temporary chief executive five months ago. Had Trump moved forward with Isaacman’s nomination, he would likely be NASA administrator today. The Senate was a few days away from confirming Isaacman when Trump pulled his nomination, apparently for political reasons. The White House withdrew the nomination the day after Elon Musk, who backed Isaacman to take the top job at NASA, left the Trump administration.

Who’s running NASA?

Now, Petro could serve out the year as NASA’s acting administrator. Petro is well-regarded at Kennedy Space Center, where she was a fixture in the center’s headquarters building for nearly 20 years. But she lacks a political constituency in the Trump administration and isn’t empowered to make major policy decisions. The budget cuts proposed for NASA came from the White House’s Office of Management and Budget, not from within the agency itself.

President Trump holds the reins of the process to select the next NASA administrator. Trump named Isaacman for the office in December, more than a month before his inauguration—the earliest any incoming president has named a pick to lead NASA. Musk had close ties to Trump then, and a human mission to Mars got a mention in Trump’s inauguration speech.

But space issues seem to have fallen far down Trump’s list of priorities. Hughes, who got his job at NASA in part due to his political connections, suggested it might be a while before Trump gets around to selecting another NASA administrator nominee.

“I think the best guess would tell you that it’s hard to imagine it happening before the next six months, and could perhaps go longer than that into the eight- or nine-month range, but that’s purely speculation,” Hughes said, foreseeing impediments such as the large number of other pending nominations for posts across the federal government and high-priority negotiations with Congress over the federal budget.

Congress is also expected to go on recess in August, so the earliest a NASA nominee might get a confirmation hearing is this fall. Then, the Senate must vote to confirm the nominee before they can take office.

The timeline of Isaacman’s nomination for NASA administrator is instructive. Trump nominated Isaacman in December, and his confirmation hearing was in April. He was on the cusp of a confirmation vote when Trump withdrew the nomination on May 31.

As NASA awaits a leader with political backing, Petro said the agency is undergoing an overhaul to make it “leaner and more agile.” This is likely to result in office closures, and Hughes indicated NASA might end up shuttering entire field centers.

“To the specific question, will they be closed or consolidated? I don’t think we’re there yet to answer that question, but it is actively a part of the conversation we’re having as we go step-by-step through this,” Hughes said.

What can $4 billion buy you?

While Trump’s budget proposal includes robust funding for human space exploration, it’s a different story for most of the rest of NASA. The agency’s science budget would be cut in half to approximately $3.9 billion. NASA’s technology development division would also be reduced by 50 percent.

If the White House gets its way, NASA would scale back research on the International Space Station and cancel numerous robotic missions in development or already in space. The agency would terminate missions currently exploring Jupiter, on the way to study an asteroid, and approaching interstellar space. It would shut down the largest X-ray space telescope ever built and the only one in its class likely to be operating for the next 10 years.

“There’s a lot of science that can still be done with $4 billion,” Petro said. “How we do science, and how we do partnerships, may change in the future to sort of multiply what we’re doing.”

These partnerships might include asking academic institutions or wealthy benefactors to pitch in money to fund science projects at NASA. The agency might also invite commercial companies to play bigger roles in NASA robotic missions, which are typically owned by the government.

This view of Jupiter’s turbulent atmosphere from NASA’s Juno spacecraft includes several of the planet’s southern jet streams. Juno is one of the missions currently in space that NASA would shut down under Trump’s budget request. Credit: NASA

One employee asked what NASA could do to secure more funding in the president’s budget request. But that ship has sailed. The options now available to NASA’s leadership are to support the budget proposal, stay silent, or leave. NASA is an executive agency and part of the Trump administration, and the White House’s budget request is NASA’s, too.

“It’s not our job to advocate, but let’s try to look at this in a positive way,” Petro said. “We’ve still got a lot of money. Let’s see how much mission we can do.”

Ultimately, it’s up to Congress to appropriate funding for NASA and other parts of the government. Lawmakers haven’t signaled where they might land on NASA’s budget, but Sen. Ted Cruz (R-Texas), who is influential on space-related matters, released the text of a proposed bill a few weeks ago that would restore funding for the International Space Station and forgo cancellation of the Space Launch System rocket, among other things. But Cruz did not have much to say about adding more money for NASA’s science programs.

NASA’s senior leaders did acknowledge Wednesday that the pain of the downsizing will extend far beyond the agency’s walls.

“Eighty-five percent of our budget goes out the door to contractors,” Petro said. “So, with a reduced budget, absolutely, our contractors will also be impacted. In fact, they’re probably the bigger driver that will be impacted.”

It’s clearly a turbulent time for America’s space agency, and NASA employees have another month to decide if they want to be part of it.

“I know there’s a lot to consider,” Swails said. “There’s a lot that people are thinking about. I would encourage you to talk it out. Tap into your support systems. Talk to your spouse, your partner, your friend, your financial advisor, whomever you consider those trusted advisors for you.”

This sounds like hollow advice, but it seems like it’s all NASA’s workers can do. The Trump administration isn’t waiting for Congress to finalize the budget for 2026. The downsizing is here.

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.


The axion may help clean up the messy business of dark matter


We haven’t found evidence of the theoretical particle, but it’s still worth investigating.

In recent years, a curious hypothetical particle called the axion, invented to address challenging problems with the strong nuclear force, has emerged as a leading candidate to explain dark matter. Although the potential for axions to explain dark matter has been around for decades, cosmologists have only recently begun to seriously search for them. Not only might they be able to resolve some issues with older hypotheses about dark matter, but they also offer a dizzying array of promising avenues for finding them.

But before digging into what the axion could be and why it’s so useful, we have to explore why the vast majority of physicists, astronomers, and cosmologists accept the evidence that dark matter exists and that it’s some new kind of particle. While it’s easy to dismiss the dark matter hypothesis as some sort of modern-day epicycle, the reality is much more complex (to be fair to epicycles, it was an excellent idea that fit the data extremely well for many centuries).

The short version is that nothing in the Universe adds up.

We have many methods available to measure the mass of large objects like galaxies and clusters. We also have various methods to assess the effects of matter in the Universe, like the details of the cosmic microwave background or the evolution of the cosmic web. There are two broad categories: methods that rely solely on estimating the amount of light-emitting matter and methods that estimate the total amount of matter, whether it’s visible or not.

For example, if you take a picture of a generic galaxy, you’ll see that most of the light-emitting matter is concentrated in the core. But when you measure the rotation rate of the galaxy and use that to estimate the total amount of matter, you get a much larger number, plus some hints that it doesn’t perfectly overlap with the light-emitting stuff. The same thing happens for clusters of galaxies—the dynamics of galaxies within a cluster suggest the presence of much more matter than what we can see, and the two types of matter don’t always align. When we use gravitational lensing to measure a cluster’s contents, we again see evidence for much more matter than is plainly visible.

The tiny variations in the cosmic microwave background tell us about the influence of both matter that interacts with light and matter that doesn’t. It clearly shows that some invisible component dominated the early Universe. When we look at the large-scale structure, invisible matter rules the day. Matter that doesn’t interact with light can form structures much more quickly than matter that gets tangled up by interacting with itself. Without invisible matter, galaxies like the Milky Way can’t form quickly enough to match observations of the early Universe.

The calculations of Big Bang nucleosynthesis, which correctly predict the abundances of hydrogen and helium in the Universe, put strict constraints on how much ordinary (baryonic) matter there can be, and that number simply isn’t large enough to accommodate all these disparate results.

Across cosmic scales in time and space, the evidence just piles up: There’s more stuff out there than meets the eye, and it can’t simply be dim-but-otherwise-regular matter.

Weakness of WIMPs

Since pioneering astronomer Vera Rubin first revealed dark matter in a big way in the 1970s, the astronomical community has tried every idea it could think of to explain these observations. One tantalizing possibility is that dark matter is entirely the wrong approach and that we’re instead misunderstanding gravity itself. But so far, half a century later, all attempts to modify gravity ultimately fail one observational test or another. In fact, the most popular modified gravity theory, known as MOND, still requires the existence of dark matter, just less of it.

As the evidence piled up for dark matter in the 1980s and ’90s, astronomers began to favor a particular explanation known as WIMPs, for weakly interacting massive particles. WIMPs weren’t just made up on the spot. They were motivated by particle physics and our attempts to create theories beyond the Standard Model. Many extensions to the Standard Model predicted the existence of WIMP-like particles that could be made in abundance in the early Universe, generating a population of heavy-ish particles that remained largely in the cosmic background.

WIMPs seemed like a good idea, as they could both explain the dark matter problem and bring us to a new understanding of fundamental physics. The idea is that we are swimming in an invisible sea of dark matter particles that almost always simply pass through us undetected. But every once in a while, a WIMP should interact via the weak nuclear force (hence the origin of its name) and give off a shower of byproducts. One problem: We needed to detect one of these rare interactions. So experiments sprang up around the world to catch an elusive dark matter candidate.

With amazing names like CRESST and XENON, these experiments—run deep underground at laboratories like Gran Sasso and SNOLAB—have spent years searching for a WIMP to no avail. They’re not an outright failure, though; instead, with every passing year, we know more and more about what the WIMP can’t be—what mass ranges and interaction strengths are now excluded.

By now, that list of what the WIMP can’t be is rather long, and large regions within the space of possibilities are now hard-and-fast ruled out.

OK, that’s fine. I mean, it’s a huge bummer that our first best guess didn’t pan out, but nature is under no obligation to make this easy for us. Maybe the dark matter isn’t a WIMP at all.

More entities are sitting around the particle physics attic that we might be able to use to explain this deep cosmic mystery. And one of those hypothetical particles is called the axion.

Cleaning up with axions

It was the late 1970s, and physicist Frank Wilczek was shopping for laundry detergent. He found one brand standing out among the bottles: Axion. He thought that would make an excellent name for a particle.

He was right.

For decades, physicists had been troubled by a little detail of the theory used to explain the strong nuclear force, known as quantum chromodynamics. By all measurements, that force obeys charge-parity symmetry, which means if you take an interaction, flip all the charges around, and run it in a mirror, you’ll get the same result. But quantum chromodynamics doesn’t enforce that symmetry on its own.

It seemed to be a rather fine-tuned state of affairs, with the strong force unnaturally maintaining a symmetry when there was nothing in the theory to explain why.
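For those who want to see the puzzle in equation form, here is a brief sketch (standard textbook physics, stated for context rather than drawn from the work discussed here). Quantum chromodynamics permits a term in its equations of the form

$$\mathcal{L}_{\theta} \;=\; \theta\,\frac{g_s^{2}}{32\pi^{2}}\,G^{a}_{\mu\nu}\,\tilde{G}^{a\,\mu\nu},$$

which violates charge-parity symmetry for any nonzero value of the angle $\theta$. Measurements of the neutron’s electric dipole moment require $|\theta|$ to be smaller than roughly $10^{-10}$, yet nothing in the theory explains why it should be so tiny. That unexplained smallness is the fine-tuning problem.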

In 1977, Roberto Peccei and Helen Quinn discovered an elegant solution: introducing a new field into the Universe that naturally restores charge-parity symmetry in the equations of quantum chromodynamics. The next year, Wilczek and Steven Weinberg independently realized that this new field would imply the existence of a particle.

The axion.

Dark matter was just coming on the cosmic scene. Axions weren’t invented to solve that problem, but physicists very quickly realized that the complex physics of the early Universe could absolutely flood the cosmos with axions. What’s more, they would largely ignore regular matter and sit quietly in the background. In other words, the axion was an excellent dark matter candidate.

But axions were pushed aside as the WIMPs hypothesis gained more steam. Back-of-the-envelope calculations showed that the natural mass range of the WIMP would precisely match the abundances needed to explain the amount of dark matter in the Universe, with no other fine-tuning or adjustments required.

Never ones to let the cosmologists get in the way of a good time, the particle physics community kept up interest in the axion, finding different variations on the particle and devising clever experiments to see if the axion existed. One experiment requires nothing more than a gigantic magnet since, in an extremely strong magnetic field, axions can spontaneously convert into photons.

To date, no hard evidence for the axion has shown up. But WIMPs have proven to be elusive, so cosmologists are showing more love to the axion and identifying surprising ways that it might be found.

A sloshy Universe

Axions are tiny, even for subatomic particles. The lightest known massive particle is the neutrino, which weighs no more than 0.086 electron-volts (or eV). Compare that to, say, the electron, which weighs over half a million eV. The exact mass of the axion isn’t known, and there are many models and versions of the particle, but it can have a mass all the way down to a trillionth of an eV… and even lower.

In fact, axions belong to a much broader class of “ultra-light” dark matter particle candidates, which can have masses down to 10^-24 eV. That is tens of orders of magnitude lighter than typical WIMP candidates—and indeed most of the particles of the Standard Model.

That means axions and their friends act nothing like most of the particles of the Standard Model.

First off, it may not even be appropriate to refer to them as particles. They have such little mass that their de Broglie wavelength—the size of the quantum wave associated with every particle—can stretch into macroscopic proportions. In some cases, this wavelength can be a few meters across. In others, it’s comparable to a star or a solar system. In still others, a single axion “particle” can stretch across an entire galaxy.
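To put rough numbers on that (a back-of-the-envelope sketch, using an illustrative axion-like mass of $10^{-22}$ eV and a typical galactic orbital speed of about 200 km/s—assumed values, not measurements):

$$\lambda \;=\; \frac{h}{mv} \;\approx\; \frac{6.6\times10^{-34}\ \mathrm{J\,s}}{(1.8\times10^{-58}\ \mathrm{kg})\,(2\times10^{5}\ \mathrm{m/s})} \;\approx\; 2\times10^{19}\ \mathrm{m} \;\approx\; 600\ \mathrm{pc},$$

or roughly 2,000 light-years—a single quantum wave comparable in size to the core of a galaxy.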

In this view, the individual axion particles would be subsumed into a larger quantum wave, like an ocean of dark matter so large and vast that it doesn’t make sense to talk about its individual components.

And because axions are bosons, they can synchronize their quantum wave nature, becoming a distinct state of matter: a Bose-Einstein condensate. In a Bose-Einstein condensate, most of the particles share the same low-energy state. When this happens, the de Broglie wavelength is larger than the average separation between the particles, and the waves of the individual particles all add up together, creating, in essence, a super-particle.

This way, we may get axion “stars”—clumps of axions acting as a single particle. Some of these axion stars may be a few thousand kilometers across, wandering across interstellar space. Still others may be the size of galactic cores, which might explain an issue with the traditional WIMP picture.

The best description of dark matter in general is that it is “cold,” meaning that the individual particles do not move fast compared to the speed of light. This allows them to gravitationally interact and form the seeds of structures like galaxies and clusters. But this process is a bit too efficient. According to simulations, cold dark matter tends to form more small, sub-galactic clumps than we observe, and it tends to make the cores of galaxies much, much denser than we see.

Axions, and ultra-light dark matter in general, can provide a solution here because they would operate in two modes. At large scales, they can act like regular cold dark matter. But inside galaxies, they can condense, forming tight clumps. Critically, these clumps have uniform densities within them. This smooths out the distribution of axions within galaxies, preventing the formation of smaller clumps and ultra-dense cores.

A messy affair

Over the decades, astronomers and physicists have found an astounding variety of ways that axions might reveal their presence in the Universe. Because of their curious ability to transmute into photons in the presence of strong magnetic fields, any place that features strong fields—think neutron stars or even the solar corona—could produce extra radiation due to axions. That makes them excellent hunting grounds for the particles.

Axion stars—also sometimes known provocatively as dark stars—would be all but invisible under most circumstances. That is, until they destabilize in a cascading chain reaction of axion-to-photon conversion and blow themselves up.

Even the light from distant galaxies could betray the existence of axions. If they exist in a dense swarm surrounding a galaxy, their conversion to photons will contribute to the galaxy’s light, creating a signal that the James Webb Space Telescope can pick up.

To date, despite all these ideas, there hasn’t been a single shred of solid evidence for the existence of axions, which naturally drops them down a peg or two on the credibility scale. But that doesn’t mean that axions aren’t worth investigating further. The experiments conducted so far only place limits on what properties they might have; there’s still plenty of room for viable axion and axion-like candidates, unlike their WIMPy cousins.

There’s definitely something funny going on with the Universe. The dark matter hypothesis—that there is a large, invisible component to matter in the Universe—isn’t that great of an idea, but it’s the best one we have that fits the widest range of available evidence. For a while, we thought we knew what the identity of that matter might be, and we spent decades (and small fortunes) on that search.

But while WIMPs were the mainstay hypothesis, that didn’t snuff out alternative paths. Dozens of researchers have investigated modified forms of gravity, with equally little success. And a small cadre has kept the axion flame alive. It’s a good thing, too, since their obscure explorations of the corners of particle physics laid the groundwork to flesh out axions into a viable competitor to WIMPs.

No, we haven’t found any axions. And we still don’t know what the dark matter is. But it’s only by pushing forward—advancing new ideas, testing them against the reality of observations, and, when they fail, trying again—that we will come to a new understanding. Axions may or may not be dark matter; the best we can say is that they are promising. But who wouldn’t want to live in a Universe filled with dark stars, invisible Bose-Einstein condensates, and strange new particles?


Discovery of HMS Endeavour wreck confirmed

By 2016, the volunteers of the Rhode Island Marine Archaeology Project (RIMAP), operating on grants and private donations, had located 10 of the 13 wrecks, almost exactly where historical charts said they should be. And the search had gotten a boost from the 1998 discovery of a 200-year-old paper trail linking the troop transport Lord Sandwich to its former life as HMS Endeavour.

Narrowing the field

One candidate was found just 500 meters off the coast of Rhode Island (designated RI 2394), 14 meters below the surface and buried in nearly 250 years’ worth of sediment and silt. RIMAP’s team concluded in 2018 that this was likely the wreck of the Endeavour, although the researchers emphasized that they needed to accumulate more evidence to support their conclusions. That’s because only about 15 percent of the ship survived. Any parts of the hull that weren’t quickly buried by silt have long since decomposed in the water.

The Australian National Maritime Museum (ANMM) felt confident enough in its own research by 2022 to hold that controversial news conference announcing the discovery, against RIMAP’s objections. But the evidence is now strong enough for RIMAP to reach the same conclusion. “In 1999 and again in 2019, RIMAP and ANMM agreed on a set of criteria that, if satisfied, would permit identification of RI 2394 as Lord Sandwich,” the authors wrote in the report’s introduction. “Based on the agreed preponderance of evidence approach, enough of these criteria have now been met… to positively identify RI 2394 as the remnants of Lord Sandwich, formerly James Cook’s HM Bark Endeavour.”

The Rhode Island Historical Preservation and Heritage Commission and the ANMM are now collaborating to ensure that the wreck site is protected in the future.


Researchers get viable mice by editing DNA from two sperm


Altering chemical modifications of DNA lets the DNA from two sperm make a mouse.

For many species, producing an embryo is a bit of a contest between males and females. Males want as many offspring as possible and want the females to devote as many resources as possible to each of them. Females do better by keeping their options open and distributing resources in a way to maximize the number of offspring they can produce over the course of their lives.

In mammals, this plays out through the chemical modification of DNA, a process called imprinting. Males imprint their DNA by adding methyl modifications to it in a way that alters the activity of genes in order to promote the growth of embryos. Females do similar things chemically but focus on shutting down genes that promote embryonic growth. In a handful of key regions of the genome, having only the modifications specific to one sex is lethal, as the embryo can’t grow to match its stage of development.

One consequence of this is that you normally can’t produce embryos using only the DNA from eggs or from sperm. But over the last few years, researchers have gradually worked around the need for imprinted sites to have one copy from each parent. Now, in a very sophisticated demonstration, researchers have used targeted editing of methylation to produce mice from the DNA of two sperm.

Imprinting and same-sex parents

There’s a long history of studying imprinting in mice. Long before the genome was sequenced, people had identified specific parts of the chromosomes that, if deleted, were lethal—but only if inherited from one of the two sexes. They correctly inferred that this meant that the genes in the region are normally inactivated in the germ cells of one of the sexes. If they’re deleted in the other sex, then the combination that results in the offspring—missing on one chromosome, inactivated in the other—is lethal.

Over time, seven critical imprinted regions were identified, scattered throughout the genome. And, roughly 20 years ago, a team managed to find the right deletion to enable a female mouse to give birth to offspring that received a set of chromosomes from each of two unfertilized eggs. The researchers drew parallels to animals that can reproduce through parthenogenesis, where the female gives birth using unfertilized eggs. But the mouse example obviously took a big assist via the manipulation of egg cells in culture before being implanted in a mouse.

By 2016, researchers were specifically editing in deletions of imprinted genes in order to allow the creation of embryos by fusing stem cell lines that only had a single set of chromosomes. This was far more focused than the original experiment, as the deletions were smaller and affected only a few genes. By 2018, they had expanded the repertoire by figuring out how to get the genomes of two sperm together in an unfertilized egg with its own genome eliminated.

The products of two male parents, however, died the day after birth. This is either due to improperly compensating for imprinting or simply because the deletions had additional impacts on the embryo’s health. It took until earlier this year, when a very specific combination of 20 different gene edits and deletions enabled mice generated using the chromosomes from two sperm cells to survive to adulthood.

The problem with all of these efforts is that the deletions may have health impacts on the animals and may still cause problems if inherited from the opposite sex. So, while it’s an interesting way to confirm our understanding of the role of imprinting in reproduction, it’s not necessarily the route to using this as a reliable reproductive tool. Which finally brings us to the present research.

Roll your own imprinting

Left out of the above is the nature of the imprinting itself: How does a chunk of chromosome and all the genes on it get marked as coming from a male or female? The secret is to chemically modify that region of the DNA in a way that doesn’t alter base pairing but does allow it to be recognized as distinct by proteins. The most common way of doing this is to link a single carbon atom (a methyl group) to the base cytosine. This tends to shut nearby genes down, and it can be inherited through cell division, since there are enzymes that recognize when one of the two DNA strands is unmodified and add a methyl to it.

Methylation turns out to explain imprinting. The key regions for imprinting are methylated differently in males and females, which influences nearby gene activity and can be maintained throughout all of embryonic development.

So, to make up for the imprinting problems caused when both sets of chromosomes come from the same sex, what you need to do is a targeted reprogramming of methylation. And that’s what the researchers behind the new paper have done.

First, they needed to tell the two sets of chromosomes apart. To do that, they used two distantly related strains of mice, one standard lab strain that originated in Europe and a second that was caught in the wild in Thailand less than a century ago. These two strains have been separated for long enough that they have a lot of small differences in DNA sequences scattered throughout the genome. So, it was possible to use these to target one or the other of the genomes.

This was done using parts of the DNA editing systems that have been developed, the most famous of which is CRISPR/CAS. These systems have a protein that pairs with an RNA sequence to find a matching sequence in DNA. In this case, those RNAs could be made so that they target imprinting regions in just one of the two mouse strains. The protein/RNA combinations could also be linked to enzymes that modify DNA, either adding methyls or removing them.

To bring all this together, the researchers started with an egg and deleted the genome from it. They then injected the heads of sperm, one from the lab strain, one from the recently wild mouse. This left them with an egg with two sets of chromosomes, although a quarter of them would have two Y chromosomes and thus be inviable (unlike the Y, the X has essential genes). Arbitrarily, they chose one set of chromosomes to be female and targeted methylation and de-methylation enzymes to it in order to reprogram the pattern of methylation on it. Once that was done, they could allow the egg to start dividing and implant it into female mice.

Rare success

The researchers spent time ensuring that the enzymes they had were modifying the methylation as expected and that development started as usual. Their general finding is that the enzymes did change the methylation state for about 500 bases on either side of the targeted site and did so pretty consistently. But there are seven different imprinting sites that need to be modified, each of which controls multiple nearby genes. So, while the modifications were consistent, they weren’t always thorough enough to result in the expected changes to all of the nearby genes.

This limited efficiency showed up in the rate of survival. Starting with over 250 reprogrammed embryos that carried DNA from two males, the researchers ended up with 16 pregnancies, which yielded just four pups that died at birth and three live ones; based on other experiments, most of the rest died during the second half of embryonic development. Of the three live ones, one was nearly 40 percent larger than the typical pup, suggesting problems regulating growth—it died the day after birth.

All three live births were male, although the numbers are small enough that it’s impossible to tell if that’s significant or not.

The researchers suggest several potential reasons for the low efficiency. One is simply that, while the probability of properly reprogramming at least one of the sites is high, reprogramming all seven is considerably more challenging. There’s also the risk of off-target effects, where the modification takes place in locations with similar sequences to the ones targeted. They also concede that there could be other key imprinted regions that we simply haven’t identified yet.

We would need to sort that out if we want to use this approach as a tool, which might be potentially useful as a way to breed mice that carry mutations that affect female viability or fertility. But this work has already been useful even in its inefficient state, because it serves as a pretty definitive validation of our ideas about the function of imprinting in embryonic development, as well as the critical role methylation plays in this process. If we weren’t largely right about both of those, the efficiency of this approach wouldn’t be low—it would be zero.

PNAS, 2025. DOI: 10.1073/pnas.2425307122  (About DOIs).

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


Sailing the fjords like the Vikings yields unexpected insights


“On we sweep with threshing oar”

Greer Jarrett has identified four possible small ports, or “havens,” used by Vikings along the Norwegian coast.

Experimental archaeologist Greer Jarrett of Lund University in Sweden has been sailing in the footsteps of Vikings for the last three years.

If you want to learn more about how and where the Vikings sailed, making the journey through the fjords yourself in replica boats is a practical, hands-on approach to achieving that end. Greer Jarrett, an archaeologist at Lund University in Sweden, has spent the last three years doing just that, sailing more than 5,000 kilometers along known Viking trade routes in open, spare-rigged clinker boats similar to those used by the Vikings.

Not only has Jarrett learned a great deal about the boats themselves, but he has also identified four possible havens along the Norwegian coast, part of what may have been a decentralized network that played a crucial role in trade and travel during that period. And those ports are located farther out to sea than other major ports and hubs known to date, according to a paper he published in the Journal of Archaeological Method and Theory.

It’s just the latest intriguing discovery enabled by the growing field of experimental archaeology, whereby researchers seek to reverse-engineer all manner of ancient technologies. Experimental archaeologists have, for instance, built their own versions of Early Upper Paleolithic adzes, axes, and chisels. The resulting fractures and wear enabled them to develop new criteria for identifying the likely functions of ancient tools. Others have tried to cook like the Neanderthals, concluding that flint flakes were surprisingly effective for butchering birds, and that roasting the birds damages the bones to such an extent that it’s unlikely they would be preserved in the archaeological record.

Kent State University’s Metin Eren has done practical experiments to study, for instance, the trajectories of atlatls attached to spears tipped with replica Clovis points, and how their performance compares to javelins used by Neanderthals. He even fashioned rudimentary blades out of his own frozen feces to test whether they could cut through pig hide, muscle, and tendon—solely to test a famous anthropological legend about an elderly Inuit man in the 1950s who purportedly did the same to kill and skin a dog, using its rib cage as a makeshift sled to venture off into the Arctic. (It did not work, so myth: busted. But it did snag Eren an Ig Nobel prize.)

Taking a hands-on, experimental archaeological approach to studying the Vikings makes sense in light of the dearth of contemporary written sources. “We have a few things written by outsiders, but there’s very, very few accounts written or delivered by people from Scandinavia during that period,” Jarrett told Ars. “We normally rely on indirect forms of evidence, be that genetics or archaeology or linguistics, which show strong, very frequent connections across maritime areas in the North Atlantic. But because traveling by boat is kind of an archaeologically invisible act, you don’t leave any footprints. So we have very little information about the voyages between these points.”

The sailing voyages made by Greer Jarrett during the research project, as well as the four possible Viking harbors he identified. Credit: Greer Jarrett

Jarrett and his crew used four or five different replica boats for their test voyages. Most were built by volunteers, enthusiasts, or students Jarrett had met during his considerable time in the field. They then sailed along the west coast of the Scandinavian Peninsula, a core area of Viking seafaring.

“These are reconstructions of traditional Norwegian boats from the 1800s and early 1900s,” said Jarrett. “My idea was, because of this really long-term continuity in traditional boat building practices, especially in Norway, it might be possible to use these later boats which have lots of similarities to try and work out the potentials of where people might have gotten out. It’s the idea of suggesting potentials based on practical experience to try and join those dots between the different evidence we have across the Viking world.”

That decision has led to some criticism from colleagues because of the enormous gap in time, but Jarrett defends his choice. “The Viking Age ends in the 11th century, and we’re talking about boats from 800 years later,” he said. “But the construction techniques and the way they are rigged and their general performance characteristics are similar enough. Because this is a project about voyages and not a project about boat building, it seemed like a defensible analogy.”

Seeking safe harbor

“On the long-range voyages, we worked in watches of four hours on and four hours off, and that is just about long enough to get some sleep on your off watch, but also just about short enough that you don’t get really, really, really cold, which is obviously a risk,” said Jarrett. “It was manageable, but we looked like penguins. I mean, we’re wearing six layers of wool at any time and sleeping all stacked together for warmth. But other times it’s really nice. The spring and the autumn in Scandinavia, there’s much more likelihood of high-pressure cycles, which means that it’s clearer and sunnier than in the summer itself.”

Nonetheless, there were some rough moments, such as when the mast spar holding up the mainsail snapped, forcing the crew to improvise and lash two oars together to hold the sail so they could continue their journey. It took several days to repair the boat so it could sail again. There was no safety boat following along in case the crew got into trouble, and no engine, although they did have a life raft, which the crew has yet to use.

Based on his sailing trials, Jarrett believes that the Vikings had no need for navigational tools like maps, a compass, or a sextant, relying instead on what he calls “mental maps”—or a “maritime cultural mindscape”—based on sailors’ memories and experiences passed down orally through generations. Those maps might also be informed by the myths linked to well-known coastal landmarks, such as skerries, small islets, or reefs.

“People had been moving by boat along the west coast of Scandinavia for a really, really, really long time, probably since the late Neolithic, if not earlier—thousands of years before the Viking age,” said Jarrett. “There are big trading networks in place beforehand, and that is reflected in the names, place names along the west coast. My primary argument is if you spend 3,000 years traveling up and down a coastline in which you can use the coast at all times for navigation, then it’s unnecessary to develop instrumentation.”

“Instruments are used when you are in a place out in the open sea that you don’t know,” Jarrett continued. “We definitely know they didn’t have compasses because those don’t arrive from China until the 1200s. There are these ideas about sunstones and sundials, or little sun compasses, which are entirely possible. But there’s no legitimate proof of either of them archaeologically yet. I may well be proved wrong if we find them at some point, but I don’t think they’re necessary for this at all.”

Based on the sailing trials, archaeological and documentary evidence of Viking Age maritime centers, and digital reconstructions of past sea levels, Jarrett was able to develop a useful set of criteria for evaluating potential havens. For instance, the site should be reachable in low visibility, with land or sea marks that sailors could use as bearings; large enough to accommodate multiple vessels of at least the size of a fyring (which can house a crew of four to 10 people); provide good protection from sea swell and storm surges; and have access to fresh water, among other criteria. Four sites scored sufficiently high by those criteria to qualify as possible Viking havens.

The four sites are Smørhamn, located at the confluence of Oldersund and the Frøysjø, where an inn and trading post are known to have existed since at least the late 17th century; the archipelago of Sørøyane between Stad and Ålesund, near where the sea battle of Hjörungavágr was fought circa 986 CE; Bjørnsund, a number of small islands off the southwestern tip of Hustadvika; and the island of Storfosna, which appears on 16th and 17th century charts.

“I’m not saying, ‘This is where they went,'” said Jarrett. “I’m saying that, with these kinds of boats under these conditions, it would be possible to go to these places. And it’s much more difficult—not impossible, but much more difficult—to go to these other places or to sail in these other conditions.”

Pining for the fjords

The next step is for Jarrett and other archaeologists to hunt for evidence in support of his hypothesis. “Most of these sites have never been excavated,” said Jarrett. “There’s been a long assumption that these are landing places with the idea that you are dragging your boat ashore. I’m very opposed to that idea because these are two-and-a-half-ton boats, let alone the cargo. Unless you have a team of oxen and 20 people at your command, there is no way you’re getting them on the beach. I’m very convinced that these places have jetties and mooring posts likely preserved underwater. All of that organic material survives much better underwater than it does on land. So I think that’s very possible.”

They might also find smaller items suggestive of a thriving harbor community. “Whenever you go into land, you’ve got something that’s broken, so you need to do repairs,” said Jarrett. “So things like clink nails or piles of ballast stones or signs of smithing—the typical kind of things you’d use for repairing your ship, I think are possible to find.” Jarrett’s methodology might also prove useful for studying other seafaring communities.

The practical experience of sailing the same seas as the Vikings naturally led to some surprising insights. “You are able to ask very different questions the minute you walk away from your desk and get on a boat,” said Jarrett. “I think it’s essential to do that because you think in new ways. In terms of the results themselves, the boats are extremely seaworthy crafts. When you get in them for the first time, you don’t think that, because they’re very, very light. They feel very flimsy, and they’re very low in the water compared to a modern sailing boat. So you feel really in touch with the wave, which is kind of scary. But because they’re so flexible and because of the way they’re rigged, they’re actually really stable, even in big waves.”

“We kept going out thinking, ‘Oh, this is maybe the limit of what this boat can tolerate,’ and then it would be fine, and we’d be, ‘Okay, let’s go a little bit in slightly bigger waves with slightly stronger wind,'” Jarrett continued. “So I think our comfort zones definitely visibly expanded during that period. And I had the chance to work with the same crews over three years. By the end of those three years, we were doing stuff that we would never have been able to do at the beginning.”

Another big difference from modern boats, Jarrett discovered, is that one cannot sail a traditional Viking craft alone. “It has to be a collaborative effort because of how you need a person at the front and the back of the boat basically at all times,” he said. “So developing the crew together and gaining not only skills, but also trust between us meant that we could do things in 2024 that seemed completely insane just a couple of years earlier. I cannot imagine what that is like if you have an entire lifetime of Viking sailors working together for 30 years. It must be an incredible way of creating social bonds.”

DOI: Journal of Archaeological Method and Theory, 2025. 10.1007/s10816-025-09708-6  (About DOIs).

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


How a grad student got LHC data to play nice with quantum interference


New approach is already having an impact on the experiment’s plans for future work.

The ATLAS particle detector of the Large Hadron Collider (LHC) at the European Nuclear Research Center (CERN) in Geneva, Switzerland. Credit: EThamPhoto/Getty Images

Measurements at the Large Hadron Collider have been stymied by one of the most central phenomena of the quantum world. But now, a young researcher has championed a new method to solve the problem using deep neural networks.

The Large Hadron Collider is one of the biggest experiments in history, but it’s also one of the hardest to interpret. Unlike seeing an image of a star in a telescope, saying anything at all about the data that comes out of the LHC requires careful statistical modeling.

“If you gave me a theory [that] the Higgs boson is this way or that way, I think people imagine, ‘Hey, you built the experiment, you should be able to tell me what you’re going to see under various hypotheses!’” said Daniel Whiteson, a professor at the University of California, Irvine. “But we don’t.”

One challenge with interpreting LHC data is interference, a core implication of quantum mechanics. Interference allows two possible events to inhibit each other, weakening the likelihood of seeing the result of either. In the presence of interference, physicists needed to use a fuzzier statistical method to analyze data, losing the data’s full power and increasing its uncertainty.

However, a recent breakthrough suggests a different way to tackle the problem. The ATLAS collaboration, one of two groups studying proton collisions at the LHC, released two papers last December that describe new ways of exploring data from their detector. One describes how to use a machine learning technique called Neural Simulation-Based Inference to maximize the potential of particle physics data. The other demonstrates its effectiveness with the ultimate test: re-doing a previous analysis with the new technique and seeing dramatic improvement.

The papers are the culmination of a young researcher’s six-year quest to convince the collaboration of the value of the new technique. Its success is already having an impact on the experiment’s plans for future work.

Making sense out of fusing bosons

Each particle collision at the LHC involves many possible pathways in which different particles combine to give rise to the spray of debris that experimenters see. In 2017, David Rousseau at IJCLab in Orsay, a member of the ATLAS collaboration, asked one of his students, Aishik Ghosh, to improve his team’s ability to detect a specific pathway. That particular pathway is quite important since it’s used to measure properties of the Higgs boson, a particle (first measured in 2012) that helps explain the mass of all other fundamental particles.

It was a pretty big ask. “When a grad student gets started in ATLAS, they’re a tiny cog in a giant, well-oiled machine of 3,500 physicists, who all seem to know exactly what they’re doing,” said Ghosh.

The pathway Ghosh was asked to study occurs via several steps. First, the two colliding protons each emit a W boson, a particle associated with the weak nuclear force. These two bosons fuse together, changing their identity to form a Higgs boson. The Higgs boson then decays, forming a pair of Z bosons, another particle associated with the weak force. Finally, those Z bosons themselves each decay into a lepton, like an electron, and its antimatter partner, like a positron.

A Feynman diagram for the pathway studied by Aishik Ghosh. Credit: ATLAS

Measurements like the one Ghosh was studying are a key way of investigating the properties of the Higgs boson. By precisely measuring how long it takes the Higgs boson to decay, physicists could find evidence of it interacting with new, undiscovered particles that are too massive for the LHC to produce directly.
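A one-line sketch of why that works (standard quantum mechanics, included here for context): the Higgs boson’s lifetime is tied to its total decay width by

$$\tau \;=\; \frac{\hbar}{\Gamma_{\text{total}}}, \qquad \Gamma_{\text{total}} \;=\; \sum_{i}\Gamma_{i},$$

so any additional, unseen decay channel adds to the sum and shortens the lifetime—which is why a precise lifetime measurement is sensitive to particles the LHC cannot produce directly.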

Ghosh started on the project, hoping to find a small improvement in the collaboration’s well-tested methods. Instead, he noticed a larger issue. The goal he was given, of detecting a single pathway by itself, didn’t actually make sense.

“I was doing that and I realized, ‘What am I doing?’ There’s no clear objective,” said Ghosh.

The problem was quantum interference.

How quantum histories interfere

One of the most famous demonstrations of the mysterious nature of quantum mechanics is called the double-slit experiment. In this demonstration, electrons are shot through a screen with two slits that allow them to pass through to a photographic plate on the other side. With one slit covered, the electrons form a pattern centered on the opening. The photographic plate lights up bright right across from the slit and dims further away from it.

With both slits open, you would expect the pattern to get brighter as more electrons reach the photographic plate. Instead, the effect varies. The two slits do not give rise to two nice bright peaks; instead, you see a rippling pattern in which some areas get brighter while others get dimmer, even though the dimmer areas should, in principle, be easier for electrons to reach.

The effect happens even if the electrons are shot at the screen one by one to stop them from influencing each other directly. It’s as if each electron carries with it two possible histories, one in which it goes through one slit and another where it goes through the other before both end up at the same place. These two histories interfere with each other so that some destinations become less likely instead of more likely.

Results of the double-slit experiment. Credit: Jordgette (CC BY-SA 3.0)

For electrons in the double-slit experiment, the two different histories are two different paths through space. For a measurement at the Large Hadron Collider, the histories are more abstract—paths that lead through transformations of fields. One history might be like the pathway Ghosh was asked to study, in which two W bosons fuse to form a Higgs boson before the Higgs boson splits into two Z bosons. But in another history, the two W bosons might fuse and immediately split into two Z bosons without ever producing a Higgs.

Both histories have the same beginning, with two W bosons, and the same end, with two Z bosons. And just as the two histories of electrons in the double-slit experiment can interfere, so can the two histories for these particles.

Another possible history for colliding particles at the Large Hadron Collider, which interferes with the measurement Ghosh was asked to do. Credit: ATLAS

That interference makes the effect of the Higgs boson much more challenging to spot. ATLAS scientists wanted to look for two pairs of electrons and positrons, which would provide evidence that two Z bosons were produced. They would classify their observations into two types: observations that are evidence for the signal they were looking for (that of a decaying Higgs boson) and observations of events that generate this pattern of particles without the Higgs boson acting as an intermediate (the latter are called the background). But the two types of observations, signal and background, interfere. With a stronger signal, corresponding to more Higgs bosons decaying, you might expect to observe more pairs of electrons and positrons… but because the two histories interfere, you might instead see some of those pairs disappear.
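
The same rule explains why the signal can seem to vanish. Writing A_H for the amplitude of the history that passes through a Higgs boson and A_bkg for the one that skips it (a schematic sketch, not the full ATLAS calculation), the expected event count goes as:

```latex
N \;\propto\; \bigl|A_H + A_{\mathrm{bkg}}\bigr|^2
  \;=\; |A_H|^2 + |A_{\mathrm{bkg}}|^2 + 2\,\mathrm{Re}\!\left[A_H^{*}\,A_{\mathrm{bkg}}\right]
```

A larger Higgs contribution grows the first term, but it can also make the cross term more negative, so the number of observed electron-positron pairs does not simply rise with "more signal."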

Learning to infer

In traditional approaches, those disappearances are hard to cope with, even when using methods that already incorporate machine learning.

One of the most common uses of machine learning is classification—for example, distinguishing between pictures of dogs and cats. You train the machine on pictures of cats and pictures of dogs, and it tells you, given a picture, which animal is the most likely match. Physicists at the LHC were already using this kind of classification method to characterize the products of collisions, but it functions much worse when interference is involved.

“If you have something that disappears, you don’t quite know what to train on,” said David Rousseau. “Usually, you’re training signal versus background, exactly like you’re training cats versus dogs. When there is something that disappears, you don’t see what you trained on.”
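
For concreteness, here is a toy version of that signal-versus-background classification step. The data and feature values are synthetic, meant only to show the shape of the approach rather than anything ATLAS actually runs.

```python
# Toy signal-vs-background classifier, in the spirit of the "cats versus
# dogs" analogy above. Synthetic data only -- not real ATLAS events.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
signal = rng.normal(loc=1.0, scale=1.0, size=(10_000, 2))      # simulated "Higgs-like" events
background = rng.normal(loc=0.0, scale=1.0, size=(10_000, 2))  # simulated background events

X = np.vstack([signal, background])
y = np.concatenate([np.ones(len(signal)), np.zeros(len(background))])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300).fit(X, y)

# For a new event, the network reports how signal-like it looks.
new_event = rng.normal(loc=0.5, scale=1.0, size=(1, 2))
print("P(signal):", clf.predict_proba(new_event)[0, 1])
```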

At first, Ghosh tried a few simple tricks, but as time went on, he realized he needed to make a more fundamental change. He reached out to others in the community and learned about a method called Neural Simulation-Based Inference, or NSBI.

In older approaches, people had trained machine learning models to classify observations into signal and background, using simulations of particle collisions to make the training data. Then they used that classification to infer the most likely value of a number, like the amount of time it takes a Higgs boson to decay, based on data from an actual experiment. Neural Simulation-Based Inference skips the classification and goes directly to the inference.

Instead of trying to classify observations into signal and background, NSBI uses simulations to teach an artificial neural network to guess a formula called a likelihood ratio. Someone using NSBI would run several simulations that describe different situations, such as letting the Higgs boson decay at different rates, and then check how many of each type of simulation yielded a specific observation. The fraction of these simulations with a certain decay rate would provide the likelihood ratio, a quantity that indicates which decay rate is more likely given the experimental evidence. If the neural network is good at guessing this ratio, it will be good at finding how long the Higgs takes to decay.

Because NSBI doesn’t try to classify observations into different categories, it handles quantum interference more effectively. Instead of trying to find the Higgs based on a signal that disappears, it examines all the data, trying to guess which decay time is the most likely.
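
A common way to realize this in practice is the so-called likelihood-ratio trick: train a network to distinguish events simulated under two different parameter values, then convert its output into an estimate of the ratio. The sketch below is a hedged illustration of that idea with a toy one-dimensional "simulator"; the parameter names and setup are hypothetical and not ATLAS's actual implementation.

```python
# Minimal sketch of neural likelihood-ratio estimation, the core idea
# behind NSBI. The toy simulator and parameters are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def simulate(theta, n):
    """Toy simulator: the observable's distribution shifts with theta
    (standing in for something like a Higgs decay rate)."""
    return rng.normal(loc=theta, scale=1.0, size=(n, 1))

theta_0, theta_1 = 0.0, 0.5            # two hypotheses to compare
x0 = simulate(theta_0, 50_000)         # events simulated under theta_0
x1 = simulate(theta_1, 50_000)         # events simulated under theta_1

# Train a network to tell the two simulation sets apart.
X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300).fit(X, y)

def likelihood_ratio(x):
    """Convert the network output s(x) into an estimate of
    p(x | theta_1) / p(x | theta_0) via r = s / (1 - s)."""
    s = clf.predict_proba(x)[:, 1]
    return s / (1.0 - s)

# For observed events, sum the log-ratios: a positive total favors
# theta_1, a negative total favors theta_0 -- no signal/background
# classification of individual events required.
x_obs = simulate(0.5, 1_000)           # pretend these are real events
print("total log-likelihood ratio:", np.log(likelihood_ratio(x_obs)).sum())
```

In a full analysis, the same machinery is repeated across many parameter values and folded into the complete statistical treatment, including the uncertainties discussed below.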

Ghosh tested the method, which showed promising results on test data, and presented the results at a conference in 2019. But if he was going to convince the ATLAS collaboration that the method was safe to use, he still had a lot of work ahead of him.

Shifting the weight on ATLAS’ shoulders

Experiments like ATLAS have high expectations attached to them. A collaboration of thousands of scientists, ATLAS needs to not only estimate the laws of physics but also have a clear idea of just how uncertain those estimates are. At the time, NSBI hadn’t been tested in that way.

“None of this has actually been used on data,” said Ghosh. “Nobody knew how to quantify the uncertainties. So you have a neural network that gives you a likelihood. You don’t know how good the likelihood is. Is it well-estimated? What if it’s wrongly estimated just in some weird corner? That would completely bias your results.”

Checking those corners was too big a job for a single PhD student and too complex to complete within a single PhD degree. Ghosh would have to build a team, and he would need time to do it. That’s tricky in the academic world, where students go on to short-term postdoc jobs with the expectation that they quickly publish new results to improve their CV for the next position.

“We’re usually looking to publish the next paper within two to three years—no time to overhaul our methods,” said Ghosh. Fortunately, Ghosh had support. After completing his PhD with Rousseau, he went to work with Daniel Whiteson, who encouraged him to pursue his ambitious project.

“I think it’s really important that postdocs learn to take those risks because that’s what science is,” Whiteson said.

Ghosh gathered his team. Another student of Rousseau’s, Arnaud Maury, worked to calibrate the machine’s confidence in its answers. A professor at the University of Massachusetts, Rafael Coelho Lopes de Sa, joined the project. His student Jay Sandesara would have a key role in getting the calculation to work at full scale on a computer cluster. IJCLab emeritus RD Schaffer and University of Liège professor Gilles Louppe provided cross-checks and advice.

The team wanted a clear demonstration that their method worked, so they took an unusual step. They took data that ATLAS had already analyzed and performed a full analysis using their method instead, showing that it could pass every check the collaboration could think of. They would publish two papers, one describing the method and the other giving the results of their upgraded analysis. Zach Marshall, who was the computing coordinator for ATLAS at the time, helped get the papers through, ensuring that they were vetted by experts in multiple areas.

“It was a very small subset of our community that had that overlap between this technical understanding and the physics analysis experience and understanding that were capable of really speaking to whether that paper was sufficient and intelligible and useful. So we really had to make sure that we engaged that little group of humans by name,” said Marshall.

The new method showed significant improvements, getting a much more precise result than the collaboration’s previous analysis. That improvement, and the thorough checks, persuaded ATLAS to use NSBI more broadly going forward. It will give the collaboration much more precision than it expected as it uses the Higgs boson to search for new particles and clarify our understanding of the quantum world. When ATLAS discusses its future plans, it projects the precision it expects to reach years down the road. Those projections are now being upended.

“One of the fun things about this method that Aishik pushed hard is each time it feels like now we do that projection—here’s how well we’ll do in 15 years—we absolutely crush those projections,” said Marshall. “So we are just now having to redo a set of projections because we matched our old projections for 15 years out already today. It’s a very fun problem to have.”

How a grad student got LHC data to play nice with quantum interference Read More »

psyche-keeps-its-date-with-an-asteroid,-but-now-it’s-running-in-backup-mode

Psyche keeps its date with an asteroid, but now it’s running in backup mode

The spacecraft, built by Maxar Space Systems, will operate its electric thrusters for the equivalent of three months between now and November to keep the mission on track for arrival at asteroid Psyche in 2029.

“Through comprehensive testing and analysis, the team narrowed down the potential causes to a valve that may have malfunctioned in the primary line,” NASA said in a statement Friday. “The switch to the identical backup propellant line in late May restored full functionality to the propulsion system.”

The next waypoint on Psyche’s voyage will be a flyby of Mars in May 2026. Officials expect Psyche to keep that date, which is critical for using Mars’ gravity to slingshot the spacecraft deeper into the Solar System, eventually reaching the asteroid belt about four years from now.

NASA’s Psyche spacecraft takes a spiral path to the asteroid Psyche, as depicted in this graphic that shows the path from above the plane of the planets, labeled with key milestones of the prime mission. Credit: NASA/JPL-Caltech

At Psyche, the spacecraft will enter orbit and progressively move closer to the asteroid, using a suite of sensors to map its surface, measure its shape, mass, and gravity field, and determine its elemental composition. Observations through telescopes suggest Psyche is roughly 140 miles (226 kilometers) in diameter, or about the width of Massachusetts. But it’s likely not spherical in shape. Scientists describe its shape as more akin to a potato.

Potatoes come in lots of shapes, and researchers won’t know exactly what Psyche looks like until NASA’s asteroid explorer arrives in 2029. Psyche will be the first metallic, or M-type, asteroid visited by any spacecraft, and scientists are eager to study an object that’s largely made of metals—probably iron, nickel, and perhaps some rarer elements instead of rocky minerals.

With the Psyche spacecraft’s plasma thrusters back in action, these goals of NASA’s billion-dollar science mission remain achievable.

“The mission team’s dedication and systematic approach to this investigation exemplifies the best of NASA engineering,” said Bob Mase, Psyche project manager at  JPL, in a statement. “Their thorough diagnosis and recovery, using the backup system, demonstrates the value of robust spacecraft design and exceptional teamwork.”

But there’s still a lingering concern that whatever problem caused the valve to malfunction in the primary fuel line might also eventually affect the same kind of valve in the backup line.

“We are doing a lot of good proactive work around that possible issue,” wrote Lindy Elkins-Tanton, Psyche’s principal investigator at Arizona State University, in a post on X.

Psyche keeps its date with an asteroid, but now it’s running in backup mode Read More »

new-body-size-database-for-marine-animals-is-a-“library-of-life”

New body size database for marine animals is a “library of life”

The ocean runs on size

McClain officially launched MOBS as a passion project while on sabbatical in 2022, but he had been informally collecting data on body size for various marine groups for several years before that. So he had a small set of data already available to kick off the project, and he incorporated it all into a single large database with a consistent format and style.


Craig McClain holding a giant isopod (Bathynomus giganteus), one of the deep sea’s most iconic crustaceans Credit: Craig McClain

“One of the things that had prevented me from doing this before was the taxonomy issue,” said McClain. “Say you wanted to get the body size for all [species] of octopuses. That was not something that was very well known unless some taxonomist happened to publish [that data]. And that data was likely not up-to-date because new species are [constantly] being described.”

However, in the last five to ten years, the World Register of Marine Species (WoRMS) was established with the objective of cataloging all marine life, with taxonomy experts assigned to specific groups to determine valid new species, which are then added to the data set with a specific numerical code. McClain tied his own dataset to that same code, making it quite easy to update MOBS as new species are added to WoRMS. McClain and his team were also able to gather body size data from various museum collections.
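
As a rough illustration of what tying every record to a shared identifier buys you, here is a hedged sketch of that kind of join using WoRMS’s numerical identifiers (AphiaIDs). The file and column names below are hypothetical.

```python
# Sketch of joining a body-size table to a WoRMS taxonomy export on a
# shared numeric identifier. File and column names are hypothetical.
import pandas as pd

sizes = pd.read_csv("mobs_body_sizes.csv")   # columns: aphia_id, length_mm, source
worms = pd.read_csv("worms_export.csv")      # columns: aphia_id, scientific_name, status

# Keep only names WoRMS currently treats as valid, then attach the sizes.
valid = worms[worms["status"] == "accepted"]
merged = valid.merge(sizes, on="aphia_id", how="left")

# Newly described species show up as rows with no measurement yet,
# which makes it easy to see what still needs to be added to MOBS.
missing = merged["length_mm"].isna().sum()
print(f"{missing} accepted species still lack a body-size record")
```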

The MOBS database focuses on body length (a linear measurement) as opposed to body mass. “Almost every taxonomic description of a new species has some sort of linear measurement,” said McClain. “For most organisms, it’s a length, maybe a width, and if you’re really lucky you might get a height. It’s very rare for anything to be weighed unless it’s an objective of the study. So that data simply doesn’t exist.”

Mammals generally have similar densities, but that isn’t true across marine life. “If you compare the density of a sea slug, a nudibranch, versus a jellyfish, even though they have the same masses, their carbon contents are much different,” he said. “And a one-meter worm that’s a cylinder and a one-meter sea urchin that’s a sphere are fundamentally different weights and different kinds of organisms.” One solution for the latter is to convert to volume to account for shape differences. Length-to-weight ratios can also differ substantially for different marine animal groups. That’s why McClain hopes to compile a separate database for length-to-weight conversions.
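
A quick back-of-the-envelope calculation shows why shape matters so much in the worm-versus-urchin comparison; the 1-centimeter worm radius below is an arbitrary assumption for illustration.

```python
# Same nominal length, very different enclosed volume.
import math

length = 1.0                      # meters, for both animals
worm_radius = 0.01                # assumed: a worm-like cylinder 1 cm in radius
urchin_radius = length / 2        # a 1-meter-wide urchin modeled as a sphere

worm_volume = math.pi * worm_radius**2 * length        # cylinder: pi * r^2 * h
urchin_volume = (4 / 3) * math.pi * urchin_radius**3   # sphere: (4/3) * pi * r^3

print(f"worm:   {worm_volume * 1000:.2f} liters")      # roughly 0.3 liters
print(f"urchin: {urchin_volume * 1000:.0f} liters")    # roughly 524 liters
```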

New body size database for marine animals is a “library of life” Read More »

how-a-data-center-company-uses-stranded-renewable-energy

How a data center company uses stranded renewable energy

“Decisions around where data centers get built have shifted dramatically over the last six months, with access to power now playing the most significant role in location scouting,” Joshi said. “The grid can’t keep pace with AI demands, so the industry is taking control with onsite power generation.”

Soluna, like other data center developers looking to rely on renewable energy, buys the excess power from wind, hydro, and solar plants that they can’t sell to the grid. By the end of the year, Soluna will have three facilities totaling 123 megawatts of capacity in Kentucky and Texas and seven projects in the works with upwards of 800 total megawatts.

Belizaire and I talked about how in Texas, where I report from, there’s plenty of curtailed energy from wind and solar farms because of the region’s limited transmission capacity. In West Texas, other data center developers are also taking advantage of the unused wind energy, far from major load centers like Dallas and Houston, by co-locating their giant warehouses full of advanced computers and high-powered cooling systems with the excess energy.

One data center developer using curtailed renewable power in Texas is IREN. The firm owns and operates facilities optimized for Bitcoin mining and AI. It developed a 7.5-gigawatt facility in Childress and broke ground on a 1.4-gigawatt data center in Sweetwater.

IREN purchases power through the state grid’s wholesale market during periods of oversupply, said Kent Draper, the company’s chief commercial officer, and reduces its consumption when prices are high. It’s able to do that by turning off its computers and minimizing power demand from its data centers.
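
The logic Draper describes amounts to a simple demand-response loop: draw power while wholesale prices signal oversupply, and pause when they spike. Here is a hedged sketch of that idea; the price threshold and the hourly prices are made up for illustration.

```python
# Toy demand-response loop: run the computing load only when wholesale
# power is cheap. Threshold and prices are hypothetical.
PRICE_CEILING = 30.0   # $/MWh above which the facility powers down

def should_run(price_per_mwh: float) -> bool:
    """Run only while cheap (often curtailed) energy is on offer."""
    return price_per_mwh <= PRICE_CEILING

hourly_prices = [12.0, 8.5, 25.0, 95.0, 410.0, 22.0]   # made-up wholesale prices
for hour, price in enumerate(hourly_prices):
    state = "running" if should_run(price) else "paused"
    print(f"hour {hour}: ${price:>7.2f}/MWh -> {state}")
```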

But curtailment is an issue all over the world, Belizaire said, from Oklahoma, North Dakota, South Dakota, California, and Arizona in the US, to Northern Ireland, Germany, Portugal, and Australia.

“Anywhere where you have large utility-scale renewable development that’s been built out, you’re going to find it,” Belizaire said.

In a March analysis, the US Energy Information Administration reported that solar and wind power curtailments are increasing in California. In 2024, the grid operator for most of California curtailed 3.4 million megawatt hours of utility-scale wind and solar output, a 29 percent increase from the amount of electricity curtailed in 2023.
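
Working backward from those figures gives a sense of the 2023 baseline:

```python
# Implied 2023 curtailment in California, from the numbers above.
curtailed_2024_mwh = 3_400_000
yearly_increase = 0.29
curtailed_2023_mwh = curtailed_2024_mwh / (1 + yearly_increase)
print(f"2023: about {curtailed_2023_mwh / 1e6:.1f} million MWh curtailed")  # ~2.6 million
```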

How a data center company uses stranded renewable energy Read More »

a-shark-scientist-reflects-on-jaws-at-50

A shark scientist reflects on Jaws at 50


We’re still afraid to go in the water

Ars chats with marine biologist David Shiffman about the film’s legacy—both good and bad.

Roy Scheider starred as Chief Martin Brody in the 1975 blockbuster Jaws. Credit: Universal Pictures

Today marks the 50th anniversary of Jaws, Steven Spielberg’s blockbuster horror movie based on the bestselling novel by Peter Benchley. We’re marking the occasion with a tribute to this classic film and its enduring impact on the popular perception of sharks, shark conservation efforts, and our culture at large.

(Many spoilers below.)

Jaws tells the story of Chief Martin Brody (Roy Scheider), the new police chief for Amity Island, a New England beach town and prime summer tourist attraction. But that thriving industry is threatened by a series of shark attacks, although the local mayor, Larry Vaughn (Murray Hamilton), initially dismisses the possibility, ridiculing the findings of visiting marine biologist Matt Hooper (Richard Dreyfuss). The attacks keep escalating and the body count grows, until the town hires a grizzled shark hunter named Quint (Robert Shaw) to hunt down and kill the great white shark, with the help of Brody and Hooper.

Benchley wrote his novel after reading about a sports fisherman named Frank Mundus, who captured a very large shark in 1964; in fact, the character of Quint is loosely based on Mundus. Benchley wrote an early draft of the screenplay, which underwent multiple revisions during production. In the end, he estimated that his contributions amounted to the basic storyline and the mechanics. Spielberg wasn’t the studio’s first choice for director; initially they hired Dick Richards, but Richards kept referring to the shark as a whale. Eventually, he was fired and replaced with the 26-year-old Spielberg, who had just finished his first feature film (The Sugarland Express).

Spielberg was given a $3.5 million shooting budget and a timeframe of 55 days for filming. However, the production was troubled from the start, largely due to the director’s insistence on shooting on location in Martha’s Vineyard; Jaws was the first major film to be shot on the ocean. Spielberg later admitted, “I was pretty naive about Mother Nature and the hubris of a filmmaker who thinks he can conquer the elements was foolhardy.” Unwanted boats kept drifting into the frame; cameras kept getting waterlogged; Carl Gottlieb (who played the local news editor Meadows) was nearly decapitated by a propeller; Dreyfuss nearly got stuck in the shark cage; and several actors suffered from seasickness. Frustrated crew members took to calling the movie “Flaws.”

A shark strikes

“duh-duh-duh-duh-duh-duh….” Universal Pictures

There were three pneumatically powered full-sized mechanical sharks built for the shoot, nicknamed “Bruce,” and they kept malfunctioning. The pneumatic hoses kept taking on seawater; the skin was made of neoprene foam, which soaked up water and became bloated; and one of the models kept getting tangled up in seaweed. In the end, Spielberg opted to shoot most of the early scenes without ever showing the actual shark, which actually heightened the tension and suspense, especially when combined with John Williams’ ominous theme music (“duh-duh-duh-duh-duh-duh…”).

In the end, shooting ran for 159 days, and the budget ballooned to $9 million. All the delays gave Spielberg and his writers (especially Gottlieb) extra time to refine the script, often just prior to filming the scenes. A lot of the dialogue was improvised by the actors. And it was all worth it in the end, because Jaws went on to become a major summer box office success. All told, it grossed $476 million globally across all its theatrical releases and won three Oscars, although it lost Best Picture to One Flew Over the Cuckoo’s Nest.

Jaws inspired many, many subsequent films, including Ridley Scott’s Alien in 1979, described in pitch meetings as “Jaws in space.” Audience reactions were often extreme, with many people becoming afraid to swim in the ocean for fear of sharks. And while the sequels were, shall we say, underwhelming, the original Jaws has stood the test of time. Ars spoke with marine biologist and shark conservationist David Shiffman, author of Why Sharks Matter, to discuss the film’s depiction of sharks and its enduring place in popular culture.

Ars Technica: Let’s start by talking about the enormous impact of the film, both good and bad, on the general public’s awareness of sharks.

David Shiffman: A lot of folks in both the marine science world and the ocean conservation communities have reported that Jaws in a lot of ways changed our world. It’s not that people used to think that sharks were cute, cuddly, adorable animals, and then after Jaws, they thought that they were bloodthirsty killing machines. They just weren’t on people’s minds. Fishermen knew about them, surfers thought about them, but that was about it. Most people who went to the beach didn’t pay much mind to what could be there. Jaws absolutely shattered that. My parents both reported that the summer that Jaws came out, they were afraid to go swimming in their community swimming pools.

No, really, the water’s fine!

“You knew.” The young boy’s mother (Lee Fierro) confronts Brody. Universal Pictures

David Shiffman: I have encountered people who were so scared that they were afraid to go in the bathtub. A lot of movies are very scary, but they don’t have that real-world impact. I love Jurassic Park, but I’m not afraid that a T. rex is going to eat me when I go into an outhouse, even though that’s about as realistic as what’s portrayed in Jaws. There’s something called the “Jaws Effect” in public policy literature, which is a way of measuring how fictional portrayals of real-world issues affect what citizens think about that issue and what policy preferences they support as a result. It’s fascinating how a fictional portrayal can do that, because I cannot stress enough: That is not what sharks look like or how they behave.

The movie also was the first time that a scientist was the hero. People half a generation above me have reported that seeing Richard Dreyfuss’ Hooper on the big screen as the one who saves the day changed their career trajectory. “You can be a scientist who studies fish. Cool. I want to do that.” In the time since Jaws came out, a lot of major changes have happened. One is that shark populations have declined globally by about 50 percent, and many species are now critically endangered.

And shark science has become much more professionalized. The American Elasmobranch Society—I’m on the board of directors—was founded in 1983, and now we have about 500 members in the US, Canada, and Mexico. There have since been subsequent organizations founded in Australia and the Pacific Islands, Europe, South America, and a new one starting this year in Asia.

And then, from a cultural standpoint, we now have a whole genre of bad shark movies.

Ars Technica: Sharknado!

David Shiffman: Yes! Sharknado is one of the better of the bunch. Sitting on my desk here, we’ve got Sharkenstein, Raiders of the Lost Shark, and, of course, Shark Exorcist, all from the 2010s. I’ve been quoted as saying there’s two types of shark movie: There’s Jaws and there’s bad shark movies.

Ars Technica: Populations of the tiger shark, the great white, and a couple of other species have declined so dramatically that many are on the verge of extinction. Is it just a coincidence that those declines started shortly after Jaws came out?

David Shiffman: The short answer is not that Jaws caused this, but that perhaps Jaws made it easier for it to happen because people weren’t outraged the way they might’ve been if it happened to, say, whales, whose populations were also declining around the same time. The number one threat to shark species as a whole is unsustainable overfishing practices. People are killing too many sharks. Sustainable fisheries for sharks can and do exist, and the US largely has done a good job with this, but around the world, it’s a bad scene.

“A whole genre of bad shark movies”

For instance, shark fin soup started to be a problem around the 1980s thanks to the economic boom in China and the emergence of a new middle class there. Shark fin soup is a traditional Chinese and Southeast Asian delicacy. It’s associated with the emperor and his court. It’s not shark meat that’s used. It’s the little skeletal fin rays from the fins that are basically a bland, noodle-like substance when they’re dried and boiled. The purpose of this was for people to say, “I have so much money that I can eat these incredibly rare delicacies.” That was not caused by Jaws. But perhaps it was allowed to happen because there was less public sympathy for sharks.

It’s worth noting that shark fin soup and the shark fin trade is no longer the biggest or only threat to sharks. It hasn’t been in about 20 years. Ironically, a lot of that has to do with Chinese government efforts not to save the ocean, but to crack down on public corruption. A lot of government officials used to throw extravagant banquets for their friends and family. The new Chinese government said, “We’re not doing that anymore.” That alone saved a lot of endangered species. It was not motivated by concern about the state of the ocean, but it had that effect.

Ars Technica: People have a tendency to think that sharks are simply brutal killing machines. Why are they so important to the ecosystem?

David Shiffman: The title of my book is Why Sharks Matter because sharks do matter and people don’t think about them that way. These are food chains that provide billions of humans with food, including some of the poorest humans on Earth. They provide tens of millions of humans with jobs. When those food chains are disrupted, that’s bad for coastal communities, bad for food security and livelihoods. If we want to have healthy ocean food chains, we need a healthy top of the food chain, because when you lose the top of the food chain, the whole thing can unravel in unpredictable, but often quite devastating ways.

 So sharks play important ecological roles by holding the food chain that we all depend on in place. They’re also not a significant threat to you and your family. More people in a typical year die from flower pots falling on their head when they walk down the street. More people in a typical year die falling off a cliff when they’re trying to take a selfie of the scenery behind them, than are killed by sharks. Any human death or injury is a tragedy, and I don’t want to minimize that. But when we’re talking about global-scale policy responses, the relative risk versus reward needs to be considered.

Ars Technica:  There’s a scene in Jaws where Hooper is talking about his personal theory: territoriality, the idea that this rogue great white came in and made this his personal territory and now he’ll just keep feeding until the food runs out. Is that a real scientific premise from the 1970s and how valid is it?

The hunt begins

The town hires grizzled shark hunter Quint (Robert Shaw) to kill the great white shark. Universal Pictures

David Shiffman: Rogue sharks are nonsense. It is nonsense that is still held by some kooks who are ostensibly in my field, but it is not supported by any evidence whatsoever. In all of recorded human history, there is proof that exactly one shark bit more than one human. That was the Sharm el-Sheikh attacks around Christmas in Egypt a few years ago. Generally speaking, a lot of times it’s hard to predict why wild animals do or don’t do anything. But if this was a behavior that was real, there would be evidence that it happens and there isn’t any, despite a lot of people looking.

Was it commonly believed in the 1970s? No. Did Peter Benchley make it up? No. It’s a thing in some animals for sure. In some neighborhoods, people will pick up gators and move them hundreds of miles away; the gators will move back to that exact same spot. I think the same thing has been shown with bears. Wolves certainly have a home range. But for sharks, it’s not a thing.

Ars Technica: Quint has a famous monologue about surviving the USS Indianapolis sinking and witnessing crew members being eaten by sharks. How historically accurate is that?

David Shiffman: We don’t really know how many of the people who were killed following the sinking of the Indianapolis were killed by sharks. Certainly, firsthand accounts report that sharks were present. But those people were in the water because they were on a boat that exploded after being hit by a torpedo. That is not good for your health. So a lot of those people were either mortally wounded or killed by that initial explosion, and then perhaps were scavenged by sharks. Those are also people who are in the water bleeding, making a lot of noise. That’s an incredible scene in the movie. But the deaths Quint attributes to sharks is more people than have been reliably documented as killed by sharks in the history of the world ever.

Ars Technica: How accurate is Jaws in terms of how and why sharks attack humans? For instance, someone says that people splashing in the water mimics what sharks want to hunt. 

David Shiffman: Anyone who tells you they know exactly why a wild animal does or does not do something is someone who you should be a little skeptical of. But a leading theory, which I think makes sense, is this idea of mistaken identity. Some of the people who are most commonly bitten by sharks, though it’s still astronomically rare, are surfers. These are people who are cutting through the water with a silhouette that resembles a seal, wearing black neoprene, which is modeled after seal blubber. Sharks have been patrolling the ocean since before there were trees on land, and it’s only in the last hundred years or so that they’ve had to wonder, is that my preferred prey, or is it a human using technology to mimic my preferred prey for recreational purposes?

If you’ve been in the ocean, there’s been a shark not that far from you, and it knew you were there, and you probably had no idea it was there and had a pleasant day in the water. The sharks that do bite people, they take a little bite and they go, what is that? And swim away. That can be real bad if it hits a major artery or if you’re far from shore. Again, I don’t want to minimize the real harm. But it is not a shark hunting you because it has a taste for human flesh. They don’t have hands. They explore their environment with their mouths and most things in their environment they can eat.

I think Mythbusters tested fish blood versus mammal blood versus chicken blood. And the sharks were attracted to fish blood and had no reaction to the others. So these are animals that are very, very, very well adapted for environmental conditions that in some cases don’t really exist anymore.

Man vs. great white

Brody fights off an increasingly aggressive great white. Universal Pictures

With humans, most of the time, what happens is an immediate bite, and then they swim away. With seals or large prey, they’ll often hit it really hard from below, sometimes knocking it completely out of the water. Or if they’re hunting whales or something that they can’t fit in their mouth, they just take a huge bite and swim away. With fish, they swallow them whole to the extent possible. Sometimes there’s a shaking motion to snap a neck or whatever. You see that with some land predators, too. It’s nothing like what’s seen there—but what an awesome scene.

Ars Technica: What is your favorite scene in Jaws and the one that makes you cringe the most?

David Shiffman: Oh, man. It’s really a great movie, and it holds up well. It was hailed as revolutionary at the time because you hardly ever see the shark. But the reason they did that was because the model of the shark that they built kept breaking. So they decided, let’s just shoot it from the shark’s eye view and save money and annoyance. I love the scene when Hooper realizes that the tiger shark that they’ve caught is obviously not the right species and the reaction that people have to that—just this idea that science and expertise can be used to solve problems. Whenever a shark bites someone, there are people who go out and kill any shark they can find and think that they’re helping.

One of my favorite professional experiences is the American Elasmobranch Society conference. One year it was in Austin, Texas, near the original Alamo Drafthouse. Coincidentally, while we were there, the cinema held a “Jaws on the Water” event. They had a giant projector screen, and we were sitting in a lake in inner tubes while there were scuba divers in the water messing with us from below. I did that with 75 professional shark scientists. It was absolutely amazing. It helped knowing that it was a lake.

Ars Technica: If you wanted to make another really good shark movie, what would that look like today? 

David Shiffman: I often say that there are now three main movie plots: a man goes on a quest, a stranger comes to town, or there’s a shark somewhere you would not expect a shark to be. It depends if you want to make a movie that’s actually good, or one of the more fun “bad” movies like Sharknado or Sharktopus or Avalanche Sharks—the tagline of which is “snow is just frozen water.” These movies are just off the rails and absolutely incredible. The ones that don’t take themselves too seriously and are in on the joke tend to be very fun. But then you get movies like Netflix’s Under Paris (2024); they absolutely thought they were making a good movie and took themselves very seriously, and it was painful to watch.

I would love to see actual science and conservation portrayed. I’d love to see species that are not typically found in these movies featured. The Sharknado series actually did a great job of this because they talked with me and other scientists after the success of the first one. Sharknado II is thanked in my PhD dissertation, because they funded one of my chapters. In that movie, it’s not just great whites and tiger sharks and bull sharks. They have a whale shark that falls out of the sky and hits someone. They have a cookie-cutter shark that falls out of the sky and burrows through someone’s leg. There’s a lot of shark diversity out there, and it’d be nice to get that featured more.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

A shark scientist reflects on Jaws at 50 Read More »

microsoft-lays-out-its-path-to-useful-quantum-computing

Microsoft lays out its path to useful quantum computing


Its platform needs error correction that works with different hardware.

Some of the optical hardware needed to make Atom Computing’s machines work. Credit: Atom Computing

On Thursday, Microsoft’s Azure Quantum group announced that it has settled on a plan for getting error correction on quantum computers. While the company pursues its own hardware efforts, the Azure team is a platform provider that currently gives access to several distinct types of hardware qubits. So it has chosen a scheme that is suitable for several different quantum computing technologies (notably excluding its own). The company estimates that the system it has settled on can take hardware qubits with an error rate of about 1 in 1,000 and use them to build logical qubits where errors are instead 1 in 1 million.

While it’s describing the scheme in terms of mathematical proofs and simulations, it hasn’t shown that it works using actual hardware yet. But one of its partners, Atom Computing, is accompanying the announcement with a description of how its machine is capable of performing all the operations that will be needed.

Arbitrary connections

There are similarities and differences between what the company is talking about today and IBM’s recent update of its roadmap, which described another path to error-resistant quantum computing. In IBM’s case, it makes both the software stack that will perform the error correction and the hardware needed to implement it. It uses chip-based hardware, with the connections among qubits mediated by wiring that’s laid out when the chip is fabricated. Since error correction schemes require a very specific layout of connections among qubits, once IBM decides on a quantum error correction scheme, it can design chips with the wiring needed to implement that scheme.

Microsoft’s Azure, in contrast, provides its users with access to hardware from several different quantum computing companies, each based on different technology. Some of them, like Rigetti and Microsoft’s own planned processor, are similar to IBM’s in that they have a fixed layout during manufacturing, and so can only handle codes that are compatible with their wiring layout. But others, such as those provided by Quantinuum and Atom Computing, store their qubits in atoms that can be moved around and connected in arbitrary ways. Those arbitrary connections allow very different types of error correction schemes to be considered.

It can be helpful to think of this using an analogy to geometry. A chip is like a plane, where it’s easiest to form the connections needed for error correction among neighboring qubits; longer connections are possible, but not as easy. Things like trapped ions and atoms provide a higher-dimensional system where far more complicated patterns of connections are possible. (Again, this is an analogy. IBM is using three-dimensional wiring in its processing chips, while Atom Computing stores all its atoms in a single plane.)

Microsoft’s announcement is focused on the sorts of processors that can form the more complicated, arbitrary connections. And, well, it’s taking full advantage of that, building an error correction system with connections that form a four-dimensional hypercube. “We really have focused on the four-dimensional codes due to their amenability to current and near term hardware designs,” Microsoft’s Krysta Svore told Ars.

The code not only describes the layout of the qubits and their connections, but also the purpose of each hardware qubit. Some of them are used to hang on to the value of the logical qubit(s) stored in a single block of code. Others are used for what are called “weak measurements.” These measurements tell us something about the state of the ones that are holding on to the data—not enough to know their values (a measurement that would end the entanglement), but enough to tell if something has changed. The details of the measurement allow corrections to be made that restore the original value.

Microsoft’s error correction system is described in a preprint that the company recently released. It includes a family of related geometries, each of which provides different degrees of error correction, based on how many simultaneous errors they can identify and fix. The descriptions are about what you’d expect for complicated math and geometry—“Given a lattice Λ with an HNF L, the code subspace of the 4D geometric code C_Λ is spanned by the second homology H₂(T⁴_Λ, F₂) of the 4-torus T⁴_Λ”—but the gist is that all of them convert collections of physical qubits into six logical qubits that can be error corrected.

The more hardware qubits you add to host those six logical qubits, the greater error protection each of them gets. That becomes important because some more sophisticated algorithms will need more than the one-in-a-million error protection that Svore said Microsoft’s favored version will provide. That favorite is what’s called the Hadamard version, which bundles 96 hardware qubits to form six logical qubits, and has a distance of eight (distance being a measure of how many simultaneous errors it can tolerate). You can compare that with IBM’s announcement, which used 144 hardware qubits to host 12 logical qubits at a distance of 12 (so, more hardware, but more logical qubits and greater error resistance).
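
Put side by side, the two sets of numbers in this paragraph imply different overheads. The sketch below simply restates the article’s figures, using the standard rule that a distance-d code can correct up to (d - 1)/2 simultaneous errors, rounded down.

```python
# Comparing the code parameters quoted above: hardware overhead per
# logical qubit and how many simultaneous errors each distance implies.
codes = {
    "Microsoft Hadamard 4D code": {"physical": 96,  "logical": 6,  "distance": 8},
    "IBM's announced code":       {"physical": 144, "logical": 12, "distance": 12},
}

for name, c in codes.items():
    overhead = c["physical"] / c["logical"]     # hardware qubits per logical qubit
    correctable = (c["distance"] - 1) // 2      # standard distance-to-errors rule
    print(f"{name}: {overhead:.0f} physical qubits per logical qubit, "
          f"distance {c['distance']} (corrects up to {correctable} errors)")
```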

The other good stuff

On its own, a description of the geometry is not especially exciting. But Microsoft argues that this family of error correction codes has a couple of significant advantages. “All of these codes in this family are what we call single shot,” Svore said. “And that means that, with a very low constant number of rounds of getting information about the noise, one can decode and correct the errors. This is not true of all codes.”

Limiting the number of measurements needed to detect errors is important. For starters, measurements themselves can create errors, so making fewer makes the system more robust. In addition, in things like neutral atom computers, the atoms have to be moved to specific locations where measurements take place, and the measurements heat them up so that they can’t be reused until cooled. So, limiting the measurements needed can be very important for the performance of the hardware.

The second advantage of this scheme, as described in the draft paper, is the fact that you can perform all the operations needed for quantum computing on the logical qubits these schemes host. Just like in regular computers, all the complicated calculations performed on a quantum computer are built up from a small number of simple logical operations. But not every possible logical operation works well with any given error correction scheme. So it can be non-trivial to show that an error correction scheme is compatible with enough of the small operations to enable universal quantum computation.

So, the paper describes how some logical operations can be performed relatively easily, while a few others require manipulations of the error correction scheme in order to work. (These manipulations have names like lattice surgery and magic state distillation, which are good signs that the field doesn’t take itself that seriously.)

So, in sum, Microsoft feels that it has identified an error correction scheme that is fairly compact, can be implemented efficiently on hardware that stores qubits in photons, atoms, or trapped ions, and enables universal computation. What it hasn’t done, however, is show that it actually works. And that’s because it simply doesn’t have the hardware right now. Azure is offering trapped ion machines from IonQ and Quantinuum, but these top out at 56 qubits—well below the 96 needed for its favored version of these 4D codes. The largest machine it has access to is a 100-qubit system from a company called PASQAL, which barely fits the 96 qubits needed, leaving no room for error.

While it should be possible to test smaller versions of codes in the same family, the Azure team has already demonstrated its ability to work with error correction codes based on hypercubes, so it’s unclear whether there’s anything to gain from that approach.

More atoms

Instead, it appears to be waiting for another partner, Atom Computing, to field its next-generation machine, one it’s designing in partnership with Microsoft. “This first generation that we are building together between Atom Computing and Microsoft will include state-of-the-art quantum capabilities, will have 1,200 physical qubits,” Svore said. “And then the next upgrade of that machine will have upwards of 10,000. And so you’re looking at then being able to go to upwards of a hundred logical qubits with deeper and more reliable computation available.”

So, today’s announcement was accompanied by an update on progress from Atom Computing, focusing on a process called “midcircuit measurement.” Normally, during quantum computing algorithms, you have to resist performing any measurements of the value of qubits until the entire calculation is complete. That’s because quantum calculations depend on things like entanglement and each qubit being in a superposition between its two values; measurements can cause all that to collapse, producing definitive values and ending entanglement.

Quantum error correction schemes, however, require that some of the hardware qubits undergo weak measurements multiple times while the computation is in progress. Those are quantum measurements taking place in the middle of a computation—midcircuit measurements, in other words. To show that its hardware will be up to the task that Microsoft expects of it, the company decided to demonstrate mid-circuit measurements on qubits implementing a simple error correction code.

The process reveals a couple of notable features that are specific to doing this with neutral atoms. To begin with, the atoms being used for error correction have to be moved to a location—the measurement zone—where they can be measured without disturbing anything else. Then, the measurement typically heats up the atom slightly, meaning it has to be cooled back down afterward. Neither of these processes is perfect, and so sometimes an atom gets lost and needs to be replaced with one from a reservoir of spares. Finally, the atom’s value needs to be reset, and it has to be sent back to its place in the logical qubit.

Testing revealed that about 1 percent of the atoms get lost each cycle, but the system successfully replaces them. In fact, they set up a system where the entire collection of atoms is imaged during the measurement cycle, and any atom that goes missing is identified by an automated system and replaced.

Overall, without all these systems in place, the fidelity of a qubit is about 98 percent in this hardware. With error correction turned on, even this simple logical qubit saw its fidelity rise to over 99.5 percent. All of which suggests their next computer should be up to some significant tests of Microsoft’s error correction scheme.

Waiting for the lasers

The key questions are when that machine will be released and when its successor, which should be capable of performing some real calculations, will follow. Those are challenging questions to answer because, more so than some other quantum computing technologies, neutral atom computing is dependent on something that’s not made by the people who build the computers: lasers. Everything about this system—holding atoms in place, moving them around, measuring, performing manipulations—is done with a laser. The lower the noise of the laser (in terms of things like frequency drift and energy fluctuations), the better the hardware will perform.

So, while Atom can explain its needs to its suppliers and work with them to get things done, it has less control over its fate than some other companies in this space.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Microsoft lays out its path to useful quantum computing Read More »

mit-student-prints-ai-polymer-masks-to-restore-paintings-in-hours

MIT student prints AI polymer masks to restore paintings in hours

MIT graduate student Alex Kachkine once spent nine months meticulously restoring a damaged baroque Italian painting, which left him plenty of time to wonder if technology could speed things up. Last week, MIT News announced his solution: a technique that uses AI-generated polymer films to physically restore damaged paintings in hours rather than months. The research appears in Nature.

Kachkine’s method works by printing a transparent “mask” containing thousands of precisely color-matched regions that conservators can apply directly to an original artwork. Unlike traditional restoration, which permanently alters the painting, these masks can reportedly be removed whenever needed, making the process fully reversible.

“Because there’s a digital record of what mask was used, in 100 years, the next time someone is working with this, they’ll have an extremely clear understanding of what was done to the painting,” Kachkine told MIT News. “And that’s never really been possible in conservation before.”


Figure 1 from the paper. Credit: MIT

Nature reports that up to 70 percent of institutional art collections remain hidden from public view due to damage—a large amount of cultural heritage sitting unseen in storage. Traditional restoration methods, where conservators painstakingly fill damaged areas one at a time while mixing exact color matches for each region, can take weeks to decades for a single painting. It’s skilled work that requires both artistic talent and deep technical knowledge, but there simply aren’t enough conservators to tackle the backlog.

The mechanical engineering student conceived the idea during a 2021 cross-country drive to MIT, when gallery visits revealed how much art remains hidden due to damage and restoration backlogs. As someone who restores paintings as a hobby, he understood both the problem and the potential for a technological solution.

To demonstrate his method, Kachkine chose a challenging test case: a 15th-century oil painting requiring repairs in 5,612 separate regions. An AI model identified damage patterns and generated 57,314 different colors to match the original work. The entire restoration process reportedly took 3.5 hours—about 66 times faster than traditional hand-painting methods.


Alex Kachkine, who developed the AI-printed film technique. Credit: MIT

Notably, Kachkine avoided using generative AI models like Stable Diffusion or the “full-area application” of generative adversarial networks (GANs) for the digital restoration step. According to the Nature paper, these models cause “spatial distortion” that would prevent proper alignment between the restored image and the damaged original.

MIT student prints AI polymer masks to restore paintings in hours Read More »