astronomy

researchers-spot-saturn-sized-planet-in-the-“einstein-desert”

Researchers spot Saturn-sized planet in the “Einstein desert”


Rogue, free-floating planets appear to have two distinct origins.

Most of the exoplanets we’ve discovered have been in relatively tight orbits around their host stars, allowing us to track them as they repeatedly loop around them. But we’ve also discovered a handful of planets through a phenomenon called microlensing. This occurs when a planet passes across the line of sight between Earth and another star, creating a gravitational lens that distorts the star’s light and causes it to briefly brighten.

The key thing about microlensing compared to other methods of finding planets is that the lensing planet can be nearly anywhere on the line between the star and Earth. So, in many cases, these events are driven by what are called rogue planets: those that aren’t part of any exosolar system at all but instead drift through interstellar space. Now, researchers have used microlensing and the fortuitous orientation of the Gaia space telescope to spot a Saturn-sized planet that’s the first found in what’s called the “Einstein desert,” which may be telling us something about the origin of rogue planets.

Going rogue

Most of the planets we’ve identified are in orbit around stars and formed from the disks of gas and dust that surrounded the star early in its history. We’ve imaged many of these disks and even seen some with evidence of planets forming within them. So how do you get a planet that’s not bound to any stars? There are two possible routes.

The first involves gravitational interactions, either among the planets of the system or due to an encounter between the exosolar system and a passing star. Under the right circumstances, these interactions can eject a planet from its orbit and send it hurtling through interstellar space. As such, we should expect them to be like any typical planet, ranging in mass from small, rocky bodies up to gas giants. An alternative method of making a rogue planet starts with the same process of gravitational collapse that builds a star—but in this case, the process literally runs out of gas. What’s left is likely to be a large gas giant, possibly somewhere between Jupiter and a brown dwarf star in mass.

Since these objects are not bound to any exosolar system, they’re not going to have any regular interactions with stars; our only way of spotting them is through microlensing. And microlensing, by itself, tells us very little about the size of the planet. To figure that out, we would need some indication of things like how distant the star and planet are, and how big the star is.

That doesn’t mean that microlensing events have told us nothing. We can identify the size of the Einstein ring, the circular ring of light that forms when the planet and star are perfectly lined up from Earth’s perspective. Given that information and some of the remaining pieces of info mentioned above, we can figure out the planet’s mass. But even without that, we can make some inferences using statistical models.
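
For reference, the angular size of that ring follows the standard point-lens relation (a textbook formula, not quoted in the article):

$$\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_S - D_L}{D_L\,D_S}}$$

where $M$ is the mass of the lens, $D_L$ is its distance, and $D_S$ is the distance to the background star. The ring size alone mixes the mass together with the geometry, which is why the distances mentioned above are needed to pin the mass down.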

Studies of collections of microlensing events (these collections are small, typically in the dozens, because these events are rare and hard to spot) have identified a distinctive pattern. There’s a cluster of relatively small Einstein rings that are likely to have come from relatively small planets. Then, there’s a gap, followed by a second cluster that’s likely to be made by far larger planets. The gap between the two has been termed the “Einstein desert,” and there has been considerable discussion regarding its significance and whether it’s even real or simply a product of the relatively small sample size.

Sometimes you get lucky

All of which brings us to the latest microlensing event, which was picked up by two projects that each gave it a different but equally compelling name. To the Korea Microlensing Telescope Network, the event was KMT-2024-BLG-0792. For the Optical Gravitational Lensing Experiment, or OGLE, it was OGLE-2024-BLG-0516. We’ll just call it “the microlensing event” and note that everyone agrees that it happened in early May 2024.

Both of those networks are composed of Earth-based telescopes, and so they only provide a single perspective on the microlensing event. But we got lucky that the European Space Agency’s space telescope Gaia was oriented in a way that made it very easy to capture images. “Serendipitously, the KMT-2024-BLG-0792/OGLE-2024-BLG-0516 microlensing event was located nearly perpendicular to the direction of Gaia’s precession axis,” the researchers who describe this event write. “This rare geometry caused the event to be observed by Gaia six times over a 16-hour period.”

Gaia is also located at the L2 Lagrange point, which is a considerable distance from Earth. That’s far enough away that the peak of the event’s brightness, as seen from Gaia’s perspective, occurred nearly two hours later than it did for telescopes on Earth. This let us determine the parallax of the microlensing event, and thus its distance. Other images of the star from before or after the event indicated it was a red giant in the galactic bulge, which also gave us a separate check on its likely distance and size.

Using the parallax and the size of the Einstein ring, the researchers determined that the planet involved was roughly 0.2 times the mass of Jupiter, making it a bit less massive than Saturn. Those estimates are consistent with a statistical model that took the other properties into account. The measurements also placed it squarely in the middle of the Einstein desert—the first microlensing event we’ve seen there.
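
In terms of the standard microlensing relations (again, textbook formulas rather than anything specific to this paper), the parallax is what closes the loop on the mass:

$$M = \frac{\theta_E}{\kappa\,\pi_E}, \qquad \kappa \equiv \frac{4G}{c^2\,\mathrm{au}} \approx 8.14\ \mathrm{mas}\,M_\odot^{-1},$$

where $\theta_E$ is the angular Einstein radius and $\pi_E$ is the microlens parallax that the Earth–Gaia timing offset makes measurable.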

That’s significant because it means we can tie the Einstein desert to a specific planet mass within it. Because of the variability of things like distance and the star’s size, not every planet that produces a similar-sized Einstein ring will be similar in size, but statistics suggest that this will typically be the case. And that’s in keeping with one of the potential explanations for the Einstein desert: that it represents the gap in size between the two different methods of making a rogue planet.

For the normal planet formation scenario, the lighter the planet, the easier it is to eject, so you’d expect a bias toward small, rocky bodies. The Saturn-sized planet seen here may be near the upper limit of the sorts of bodies we’d typically see being ejected from an exosolar system. By contrast, the rogue planets that form through the same mechanisms that give us brown dwarfs would typically be Jupiter-sized or larger.

That said, the low number of total microlensing events still leaves the reality of the Einstein desert an open question. Sticking with the data from the Korea Microlensing Telescope Network, the researchers find that the frequency of other detections suggests we’d have a 27 percent chance of detecting just one item in the area of the Einstein desert even if the desert weren’t real and detections were equally probable across the size range. So, as is often the case, we’re going to need to let the network do its job for a few more years before we have the data to say anything definitive.
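
As a rough illustration of where a figure like that 27 percent can come from (the paper’s actual calculation is more involved, and the numbers below are hypothetical), you can ask for the binomial probability that exactly one of N detected events lands in a “desert” covering a fraction f of the observable range when detections are uniformly distributed:

```python
from math import comb

def prob_exactly_one(n_events: int, desert_fraction: float) -> float:
    """Binomial probability that exactly one of n_events falls in a region
    covering desert_fraction of the range, assuming detections are equally
    probable everywhere (i.e., the desert isn't real)."""
    f = desert_fraction
    return comb(n_events, 1) * f * (1 - f) ** (n_events - 1)

# Hypothetical inputs, purely for illustration -- not the paper's values.
print(prob_exactly_one(n_events=30, desert_fraction=0.1))  # ~0.14
```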

Science, 2026. DOI: 10.1126/science.adv9266 (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


the-$4.3-billion-space-telescope-trump-tried-to-cancel-is-now-complete

The $4.3 billion space telescope Trump tried to cancel is now complete


“We’re going to be making 3D movies of what is going on in the Milky Way galaxy.”

Artist’s concept of the Nancy Grace Roman Space Telescope. Credit: NASA Goddard Space Flight Center Scientific Visualization Studio

A few weeks ago, technicians inside a cavernous clean room in Maryland made the final connection to complete assembly of NASA’s Nancy Grace Roman Space Telescope.

Parts of this new observatory, named for NASA’s first chief astronomer, recently completed a spate of tests to ensure it can survive the shaking and intense sound of a rocket launch. Engineers placed the core of the telescope inside a thermal vacuum chamber, where it withstood the airless conditions and extreme temperature swings it will see in space.

Then, on November 25, teams at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, joined the inner and outer portions of the Roman Space Telescope. With this milestone, NASA declared the observatory complete and on track for launch as soon as fall 2026.

“The team is ecstatic,” said Jackie Townsend, the observatory’s deputy project manager at Goddard, in a recent interview with Ars. “It has been a long road, but filled with lots of successes and an ordinary amount of challenges, I would say. It’s just so rewarding to get to this spot.”

An ordinary amount of challenges is not something you usually hear a NASA official say about a one-of-a-kind space mission. NASA does hard things, and they usually take more time than originally predicted. Astronomers endured more than 10 years of delays, fixes, and setbacks before the James Webb Space Telescope finally launched in 2021.

Webb is the largest telescope ever put into space. After launch, Webb had to perform a sequence of more than 50 major deployment steps, with 178 release mechanisms that had to work perfectly. Any one of the more than 300 single points of failure could have doomed the mission. In the end, Webb unfolded its giant segmented mirror and delicate sunshield without issue. After a quarter-century of development and more than $11 billion spent, the observatory is finally delivering images and science results. And they’re undeniably spectacular.

The completed Nancy Grace Roman Space Telescope, seen here with its solar panels deployed inside a clean room at NASA’s Goddard Space Flight Center in Maryland. Credit: NASA/Jolearra Tshiteya

Seeing far and wide

Roman is far less complex, with a 7.9-foot (2.4-meter) primary mirror roughly a third the diameter of Webb’s. While it lacks Webb’s deep vision, Roman will see wider swaths of the sky, enabling a cosmic census of billions of stars and galaxies near and far (on the scale of the Universe). This broad vision will support research into dark matter and dark energy, which are thought to make up about 95 percent of the Universe. The rest of the Universe is made of regular atoms and molecules that we can see and touch.

It is also illustrative to compare Roman with the Hubble Space Telescope, whose primary mirror is the same size. This means Roman will produce images with resolution similar to Hubble’s. The distinction lies deep inside Roman, where technicians have delicately laid an array of detectors to register the faint infrared light coming through the telescope’s aperture.

“Things like night vision goggles will use the same basic detector device, just tuned to a different wavelength,” Townsend said.

These detectors are located in Roman’s Wide Field Instrument, the mission’s primary imaging camera. There are 18 of them, each 4,096×4,096 pixels, combining to form a roughly 300-megapixel camera sensitive to visible and near-infrared light. Teledyne, the company that produced the detectors, says this is the largest infrared focal plane ever made.

The near-infrared channel on Hubble’s Wide Field Camera 3, which covers much the same part of the spectrum as Roman, has a single 1,024×1,024-pixel detector.
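
The raw pixel counts make that gap concrete (simple arithmetic on the detector formats described above):

```python
# Roman's Wide Field Instrument: 18 detectors, each 4,096 x 4,096 pixels.
roman_pixels = 18 * 4096 * 4096        # ~302 million pixels
# Hubble WFC3 IR channel: a single 1,024 x 1,024 detector.
hubble_ir_pixels = 1024 * 1024         # ~1 million pixels

print(f"Roman:  {roman_pixels / 1e6:.0f} MP")              # Roman:  302 MP
print(f"Hubble: {hubble_ir_pixels / 1e6:.1f} MP")           # Hubble: 1.0 MP
print(f"Ratio:  ~{roman_pixels / hubble_ir_pixels:.0f}x")   # Ratio:  ~288x
```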

“That’s how you get to a much higher field-of-view for the Roman Space Telescope, and it was one of the key enabling technologies,” Townsend told Ars. “That was one place where Roman invested significant dollars, even before we started as a mission, to mature that technology so that it was ready to infuse into this mission.”

With these detectors in its bag, Roman will cover much more cosmic real estate than Hubble. For example, Roman will be able to re-create Hubble’s famous Ultra Deep Field image with the same sharpness, but expand it to show countless stars and galaxies over an area of the sky at least 100 times larger.

This infographic illustrates the differences between the sizes of the primary mirrors and detectors on the Hubble, Roman, and Webb telescopes. Credit: NASA

Roman has a second instrument, the Roman Coronagraph, with masks, filters, and adaptive optics to block out the glare from stars and reveal the faint glow from objects around them. It is designed to photograph planets 100 million times fainter than their stars, or 100 to 1,000 times better than similar instruments on Webb and Hubble. Roman can also detect exoplanets using the tried-and-true transit method, but scientists expect the new telescope will find a lot more than past space missions, thanks to its wider vision.

“With Roman’s construction complete, we are poised at the brink of unfathomable scientific discovery,” said Julie McEnery, Roman’s senior project scientist at NASA Goddard, in a press release. “In the mission’s first five years, it’s expected to unveil more than 100,000 distant worlds, hundreds of millions of stars, and billions of galaxies. We stand to learn a tremendous amount of new information about the universe very rapidly after Roman launches.”

Big numbers are crucial for learning how the Universe works, and Roman will feed vast volumes of data down to astronomers on Earth. “So much of what physics is trying to understand about the nature of the Universe today needs large number statistics in order to understand,” Townsend said.

In one of Roman’s planned sky surveys, the telescope will cover in nine months what would take Hubble between 1,000 and 2,000 years. In another survey, Roman will cover an area equivalent to 3,455 full moons in about three weeks, then go back and observe a smaller portion of that area repeatedly over five-and-a-half days—jobs that Hubble and Webb can’t do.

“We will do fundamentally different science,” Townsend said. “In some subset of our observations, we’re going to be making 3D movies of what is going on in the Milky Way galaxy and in distant galaxies. That is just something that’s never happened before.”

Getting here and getting there

Roman’s promised scientific bounty will come at a cost of $4.3 billion, including expenses for development, manufacturing, launch, and five years of operations.

This is about $300 million more than NASA expected when it formally approved Roman for development in 2020, an overrun the agency blamed on complications related to the coronavirus pandemic. Otherwise, Roman’s budget has been stable since NASA officials finalized the mission’s architecture in 2017, when it was still known by a bulky acronym: WFIRST, the Wide Field InfraRed Survey Telescope.

At that time, the agency reclassified the Roman Coronagraph as a technology demonstration, allowing managers to relax their requirements for the instrument and stave off concerns about cost growth.

Roman survived multiple attempts by the first Trump administration to cancel the mission. Each time, Congress restored funding to keep the observatory on track for launch in the mid-2020s. With Donald Trump back in the White House, the administration’s budget office earlier this year again wanted to cancel Roman. Eventually, the Trump administration released its fiscal year 2026 budget request in May, calling for a drastic cut to Roman, but not total cancellation.

Once again, both houses of Congress signaled their opposition to the cuts, and the mission remains on track for launch next year, perhaps as soon as September. This is eight months ahead of the schedule NASA has publicized for Roman for the last few years.

Townsend told Ars the mission escaped the kind of crippling cost overruns and delays that afflicted Webb through careful planning and execution. “Roman was under a cost cap, and we operated to that,” she said. “We went through reasonable efforts to preclude those kinds of highly complex deployments that lead you to having trouble in integration and test.”

The outer barrel section of the Roman Space Telescope inside a thermal vacuum chamber at NASA’s Goddard Space Flight Center, Maryland. Credit: NASA/Sydney Rohde

There are only a handful of mechanisms that must work after Roman’s launch. They include a deployable cover designed to shield the telescope’s mirror during launch and solar array wings that will unfold once Roman is in space. The observatory will head to an observing post about a million miles (1.5 million kilometers) from Earth.

“We don’t have moments of terror for the deployment,” Townsend said. “Obviously, launch is always a risk, the tip-off rates that you have when you separate from the launch vehicle… Then, obviously, getting the aperture door open so that it’s deployed is another one. But these feel like normal aerospace risks, not unusual, harrowing moments for Roman.”

It also helps that Roman will use a primary mirror gifted to NASA by the National Reconnaissance Office, the US government’s spy satellite agency. The NRO originally ordered the mirror for a telescope that would peer down on the Earth, but the spy agency no longer needed it. Before NASA got its hands on the surplus mirror in 2012, scientists working on the preliminary design for what became Roman were thinking of a smaller telescope.

The larger telescope will make Roman a more powerful tool for science, and the NRO’s donation eliminated the risk of a problem or delay in manufacturing a new mirror. But the bigger mirror also meant NASA had to build a more massive spacecraft and use a bigger rocket to accommodate it, adding to the observatory’s cost.

Tests of Roman’s components have gone well this year. Work on Roman continued at Goddard through the government shutdown in the fall. On Webb, engineers uncovered one problem after another as they tried to verify the observatory would perform as intended in space. There were leaky valves, tears in Webb’s sunshield, a damaged transducer, and loose screws. With Roman, engineers so far have found no “significant surprises” during ground testing, Townsend said.

“What we always hope when you’re doing this final round of environmental tests is that you’ve wrung out the hardware at lower levels of assembly, and it looks like, in Roman’s case, we did a spectacular job at the lower level,” she said.

With Roman now fully assembled, attention at Goddard will turn to an end-to-end functional test of the observatory early next year, followed by electromagnetic interference testing, and another round of acoustic and vibration tests. Then, perhaps around June of next year, NASA will ship the observatory to Kennedy Space Center, Florida, to prepare for launch on a SpaceX Falcon Heavy rocket.

“We’re really down to the last stretch of environmental testing for the system,” Townsend said. “It’s definitely already seen the worst environment until we get to launch.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.


formation-of-oceans-within-icy-moons-could-cause-the-waters-to-boil

Formation of oceans within icy moons could cause the waters to boil

That can have significant consequences on the stresses experienced by the icy shells of these moons. Water is considerably more dense than ice. So, as a moon’s ocean freezes up, its interior will expand, creating outward forces that press against the gravity holding the moon together. The potential of this transition to shape the surface geology of a number of moons, including Europa and Enceladus, has already been explored. So, the researchers behind the new work decided to look at the opposite issue: what happens when the interior starts to melt?

Rather than focus on a specific moon, the team did a general model of an ice-covered ocean. This model treated the ice shell as an elastic surface, meaning it wouldn’t just snap, and placed viscous ice below that. Further down, there was a liquid ocean and eventually a rocky core. As the ice melted and the ocean expanded, the researchers tracked the stresses on the ice shell and the changes in pressure that occurred at the ice-ocean interface. They also tracked the spread of thermal energy through the ice shell.

Pressure drop

Obviously, there are limits to how much the outer shell can flex to accommodate the shrinking of the inner portions of the moon that are melting. This creates a low-pressure area under the shell. The consequences of this depend on the moon’s size. For larger moons—and this includes most of the moons the team looked at, including Europa—there were two options. For some, gravity is sufficiently strong to keep the pressure at a point where the water at the interface remains liquid. For others, it is enough to cause even an elastic shell to fail, leading to surface collapse.

For smaller moons, however, this doesn’t work out; the pressure gets low enough that water will boil even at the ambient temperatures (just above the freezing point of water). In addition, the low pressure will likely cause any gases dissolved in the water to be released. The result is that gas bubbles should form at the ice-water interface. “Boiling is possible on these bodies—and not others—because they are small and have a relatively low gravitational acceleration,” the researchers conclude. “Consequently, less ocean underpressure is needed to counterbalance the [crustal] pressure.”
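
A rough way to see the size dependence (illustrative numbers only, not the study’s model): the crustal pressure that the ocean’s underpressure would need to offset scales with the moon’s surface gravity and ice shell thickness, while boiling near 0° Celsius kicks in once the pressure drops below water’s triple-point vapor pressure of roughly 612 pascals. Mimas stands in here for a generic small icy moon, and the shell thicknesses are rough published estimates.

```python
# Hydrostatic pressure at the base of an ice shell: P ~ rho_ice * g * thickness.
RHO_ICE = 920.0          # kg/m^3
TRIPLE_POINT_PA = 611.7  # vapor pressure of water near 0 deg C, in pascals

moons = {
    # name: (surface gravity in m/s^2, assumed ice shell thickness in m)
    "Europa (large moon)": (1.315, 20_000),
    "Mimas (small moon)":  (0.064, 25_000),
}

for name, (g, thickness) in moons.items():
    p_crust = RHO_ICE * g * thickness
    print(f"{name}: crustal pressure ~{p_crust / 1e6:.1f} MPa "
          f"vs. boiling threshold ~{TRIPLE_POINT_PA:.0f} Pa")
```

The small moon’s shell presses down with roughly an order of magnitude less pressure, so far less underpressure is needed before the interface approaches boiling conditions, which is the qualitative point the researchers make.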


nasa-really-wants-you-to-know-that-3i/atlas-is-an-interstellar-comet

NASA really wants you to know that 3I/ATLAS is an interstellar comet

The HiRISE camera, meant to image Mars’ surface, was repurposed to capture 3I/ATLAS. Credit: NASA/JPL-Caltech/University of Arizona

As eccentricity continues to rise from there, the question shifts from “what shape is its trajectory?” to “how much does the Sun alter its path through the Solar System?” For 3I/ATLAS, with an eccentricity of over six, the answer is “not very much at all.” The object has approached the inner Solar System along a reasonable approximation of a straight line, experienced a gentle bend around the Sun near Mars’ orbit, and now will be zipping straight out of the Solar System again.

So, the object clearly did not originate here, which means getting a better look at it is a high priority. Unfortunately, 3I/ATLAS’s closest approach to Earth’s orbit happened when it was on the far side of the Sun from Earth. We’ve been getting closer to it since, but the hardware that got the best views was all orbiting Mars and is designed largely to point down. NASA’s Nicky Fox, the associate administrator for Science, praised the operators for getting NASA’s hardware “pushed beyond their designed capabilities” when imaging the object.

That includes using the MAVEN mission (designed to study Mars’ atmosphere) to get spectral information, and the HiRISE camera, which captured the image below. Other images came from a solar observatory and two separate missions that are on their way to visit asteroids. Other hardware that can normally image objects like this, such as the Hubble and JWST, pivoted to image 3I/ATLAS as well.

What we now know

Hubble has gotten the best view of 3I/ATLAS; its data suggests that the comet is, at most, just a couple of kilometers across. It doesn’t show much variability over time, suggesting that, if it’s rotating, it’s doing so very slowly. It has shown some differences as it warmed up, first producing a jet of material on its side facing the Sun before radiation pressure pushed that behind it to form a tail. There is some indication that, as we saw during the Rosetta mission’s visit to one of our Solar System’s comets, most of the material may be jetting out of distinct “hotspots” on the comet’s surface.


runaway-black-hole-mergers-may-have-built-supermassive-black-holes

Runaway black hole mergers may have built supermassive black holes

The researchers used cosmological simulations to recreate the first 700 million years of cosmic history, focusing on the formation of a single dwarf galaxy. In their virtual galaxy, waves of stars were born in short, explosive bursts as cold gas clouds collapsed inside a dark matter halo. Instead of a single starburst episode followed by a steady drizzle of star formation as Garcia expected, there were two major rounds of stellar birth. Whole swarms of stars flared to life like Christmas tree lights.

“The early Universe was an incredibly crowded place,” Garcia said. “Gas clouds were denser, stars formed faster, and in those environments, it’s natural for gravity to gather stars into these tightly bound systems.”

Those clusters started out scattered around the galaxy but fell in toward the center like water swirling down a drain. Once there, they merged to create one megacluster, called a nuclear star cluster (so named because it lies at the nucleus of the galaxy). The young galactic heart shone with the light of a million suns and may have set the stage for a supermassive black hole to form.

A simulation of the formation of the super-dense star clusters.

A seemingly simple tweak was needed to make the simulation more precise than previous ones. “Most simulations simplify things to make calculations more practical, but then you sacrifice realism,” Garcia said. “We used an improved model that allowed star formation to vary depending on local conditions rather than just go at a constant rate like with previous models.”

Using the University of Maryland’s supercomputing facility Zaratan, Garcia accomplished in six months what would have taken 12 years on a MacBook.

Some clouds converted as much as 80 percent of their gas into stars—a ferocious rate compared to the 2 percent typically seen in nearby galaxies today. The clouds sparkled to life, becoming clusters of newborn stars held together by their mutual gravity and lighting a new pathway for supermassive black holes to form extremely early in the Universe.

Chicken or egg?

Most galaxies, including our own, are anchored by a nuclear star cluster nestled around a supermassive black hole. But the connection between the two has been a bit murky—did the monster black hole form and then draw stars close, or did the cluster itself give rise to the black hole?


there-could-be-“dark-main-sequence”-stars-at-the-galactic-center

There could be “dark main sequence” stars at the galactic center


Dark matter particle and antiparticle collisions could make some stars immortal.

For a star, its initial mass is everything. It determines how quickly it burns through its hydrogen and how it will evolve once it starts fusing heavier elements. It’s so well understood that scientists have devised a “main sequence” that acts a bit like a periodic table for stars, correlating their mass and age with their properties.

The main sequence, however, is based on an assumption that’s almost always true: All of the energy involved comes from the gravity-driven fusion of lighter elements into heavier ones. Now, three astrophysicists have considered an alternative source of energy that may apply at the very center of our galaxy—energy released when dark matter particles and antiparticles collide and annihilate. While we don’t even know that dark matter can do that, it’s a hypothetical with some interesting consequences, like seemingly immortal stars and others that move backward along the main sequence path.

Dark annihilations

We haven’t figured out what dark matter is, but there are lots of reasons to think that it’s made of elementary particles. And, if those behave like all of the particles we understand well, then there will be both regular and antimatter versions. Should those collide, they should annihilate each other, releasing energy in the process. Given dark matter’s general propensity not to interact with anything, these collisions will be extremely rare except in locations with very high dark matter concentrations.

The only place that’s likely to happen is at the very center of our galaxy. And, for a while, there was an excess of radiation coming from the galactic core that people thought might be due to dark matter annihilations, although it eventually turned out to have a more mundane explanation.

At the extreme densities found within a light year of the supermassive black hole at the center of our galaxy, concentrations are high enough that these collisions could be a major source of energy. And so astronomers have considered what all that energy might do to stars that end up in a black hole’s orbit, finding that under the right circumstances, dark matter destruction could provide more energy to a star than fusion.

That prompted three astrophysicists (Isabelle John, Rebecca Leane, and Tim Linden) to try to look at things in an organized fashion, modeling a “dark main sequence” of stars as they might exist in close proximity to the Milky Way’s center.

The intense gravity and radiation found near the galaxy’s core mean that stars can’t form there. So, anything that’s in a tight orbit formed somewhere else before gravitational interactions pushed it into the grasp of the galaxy’s central black hole. The researchers used a standard model of stellar evolution to build a collection of moderate-sized stars, from one to 20 solar masses at 0.05 solar mass intervals. These are allowed to ignite fusion at their cores and then shift into a dark-matter-rich environment.

Since we have no idea how often dark matter particles might run into each other, John, Leane, and Linden use two different collision frequencies. These determine how much energy is imparted into these stars by dark matter, which the researchers simply add as a supplement to the amount of fusion energy the stars are producing. Then, the stars are allowed to evolve forward in time.

(The authors note that stars that are thrown into the grasp of a supermassive black hole tend to have very eccentric orbits, so they spend a lot of time outside the zone where dark matter collisions take place with a significant frequency. So, what they’ve done is the equivalent of having these stars experience the energy input given their average orbital distance from the galaxy’s core. In reality, a star would spend some years with higher energy input and some years with lower input as it moves about its orbit.)

Achieving immortality

The physics of what happens is based on the same balance of forces that governs fusion-powered stars, but it produces some very strange results. Given only fusion power, a star will exist at a balance point. If gravity compresses it, fusion speeds up, more energy is released, and that energy causes the star to expand outward again. That causes the density to drop, slowing fusion back down again.

The dark matter annihilations essentially provide an additional source of energy that stays constant regardless of what happens to the star’s density. At the low end of the mass range the researchers considered, this can cause the star to nearly shut off fusion, essentially looking like a far younger star than it actually is. That has the effect of causing the star to move backward along the main sequence diagram.
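
A toy energy balance (purely an illustration, not the authors’ stellar-evolution code) captures why a constant external heat source pushes a star “backward”: if the star’s structure demands a roughly fixed luminosity for its mass, then whatever dark matter supplies, fusion no longer has to.

```python
def fusion_share(required_luminosity: float, dark_matter_power: float) -> float:
    """Fraction of a star's required output that fusion must supply in a toy
    model where the total is fixed and dark matter annihilation contributes
    a constant term (arbitrary units)."""
    remaining = max(required_luminosity - dark_matter_power, 0.0)
    return remaining / required_luminosity

L_required = 1.0  # arbitrary units: the luminosity the star must radiate
for L_dm in (0.0, 0.5, 0.9, 1.2):
    print(f"dark matter power {L_dm:.1f} -> fusion supplies "
          f"{fusion_share(L_required, L_dm):.0%}")
# Past 1.0 the star needs no fusion at all -- the "immortal" regime the
# article describes -- and with still more input it may not hold together.
```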

The researchers note that even lighter stars could essentially get so much additional energy that they can’t hold together and end up dissipating, something that’s been seen in models run by other researchers.

As the mass gets higher, stars reach the point where they essentially give up on fusion and get by with nothing but dark matter annihilations. They have enough mass to hold together gravitationally, but end up too diffuse for fusion to continue. And they’ll stay that way as long as they continue to get additional injections of energy. “A star like this might look like a young, still-forming star,” the authors write, “but has features of a star that has undergone nuclear fusion in the past and is effectively immortal.”

John, Leane, and Linden find that the higher-mass stars remain dense enough for fusion to continue even in proximity to the galaxy’s black hole. But the additional energy keeps that fusion happening at only a moderate rate. These stars proceed through the main sequence, but at such an exceptionally slow pace that running the simulation for a total of 10 billion years didn’t see them change significantly.

The other strange thing here is that all of this is very sensitive to how much dark matter annihilation is taking place. A star that’s “immortal” at one average distance will progress slowly through the main sequence if its average distance is a light year further out. Similarly, stars that are too light to survive at one location will hold together if they are a bit further from the supermassive black hole.

Is there anything to this?

The big caution is that this work only looks at the average input from dark matter annihilation. In reality, a star that might be immortal at its average distance will likely spend a few years too hot to hold together, and then several years cooling off in conditions that should allow fusion to reignite. It would be nice to see a model run with this sort of pulsed input, perhaps basing it on the orbits of some of the stars we’ve seen that get close to the Milky Way’s central black hole.

In the meantime, John, Leane, and Linden write that their results are consistent with some of the oddities that are apparent in the stars we’ve observed at the galaxy’s center. These have two distinctive properties: They appear heavier than the average star in the Milky Way, and all seem to be quite young. If there is a “dark main sequence,” then the unusual heft can be explained simply by the fact that lower mass stars end up dissipating due to the additional energy. And the model would suggest that these stars simply appear to be young because they haven’t undergone much fusion.

The researchers suggest that we could have a clearer picture if we were able to spend enough time observing the stars at our galaxy’s core with a large enough telescope, allowing us to understand their nature and orbits.

Physical Review D, 2025. DOI: Not yet available  (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


merger-of-two-massive-black-holes-is-one-for-the-record-books

Merger of two massive black holes is one for the record books

Physicists with the LIGO/Virgo/KAGRA collaboration have detected the gravitational wave signal (dubbed GW231123) of the most massive merger between two black holes yet observed, resulting in a new black hole that is 225 times more massive than our Sun. The results were presented at the Edoardo Amaldi Conference on Gravitational Waves in Glasgow, Scotland.

The LIGO/Virgo/KAGRA collaboration searches the universe for gravitational waves produced by the mergers of black holes and neutron stars. LIGO detects gravitational waves via laser interferometry, using high-powered lasers to measure tiny changes in the distance between two objects positioned kilometers apart. LIGO has detectors in Hanford, Washington, and in Livingston, Louisiana. A third detector in Italy, Advanced Virgo, came online in 2016. In Japan, KAGRA is the first gravitational-wave detector in Asia and the first to be built underground. Construction began on LIGO-India in 2021, and physicists expect it will turn on sometime after 2025.
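
For a sense of the scale involved (standard LIGO numbers, not specific to this event), a typical gravitational-wave strain of about $10^{-21}$ across a 4-kilometer arm corresponds to a length change of

$$\Delta L = h\,L \approx 10^{-21} \times 4{,}000\ \mathrm{m} = 4 \times 10^{-18}\ \mathrm{m},$$

far smaller than the diameter of a proton.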

To date, the collaboration has detected dozens of merger events since its first Nobel Prize-winning discovery. Early detected mergers involved either two black holes or two neutron stars.  In 2021, LIGO/Virgo/KAGRA confirmed the detection of two separate “mixed” mergers between black holes and neutron stars.

A tour of Virgo. Credit: EGO-Virgo

LIGO/Virgo/KAGRA started its fourth observing run in 2023, and by the following year had announced the detection of a signal indicating a merger between two compact objects, one of which was most likely a neutron star. The other had an intermediate mass—heavier than a neutron star and lighter than a black hole. It was the first gravitational-wave detection of a mass-gap object paired with a neutron star and hinted that the mass gap might be less empty than astronomers previously thought.


new-evidence-that-some-supernovae-may-be-a-“double-detonation”

New evidence that some supernovae may be a “double detonation”

Type Ia supernovae are critical tools in astronomy, since they all appear to explode with the same intensity, allowing us to use their brightness as a measure of distance. The distance measures they’ve given us have been critical to tracking the expansion of the Universe, which led to the recognition that there’s some sort of dark energy hastening the Universe’s expansion. Yet there are ongoing arguments over exactly how these events are triggered.

There’s widespread agreement that type Ia supernovae are the explosions of white dwarf stars. Normally, these stars are composed primarily of moderately heavy elements like carbon and oxygen, and lack the mass to trigger additional fusion. But if some additional material is added, the white dwarf can reach a critical mass and reignite a runaway fusion reaction, blowing the star apart. But the source of the additional mass has been somewhat controversial.

But there’s an additional hypothesis that doesn’t require as much mass: a relatively small explosion on a white dwarf’s surface can compress the interior enough to restart fusion in stars that haven’t yet reached a critical mass. Now, observations of the remains of a supernova provide some evidence of the existence of these so-called “double detonation” supernovae.

Deconstructing white dwarfs

White dwarfs are the remains of stars with a similar mass to our Sun. After having gone through periods during which hydrogen and helium were fused, these tend to end up as carbon and oxygen-rich embers: hot due to their history, but incapable of reaching the densities needed to fuse these elements. Left on their own, these stellar remnants will gradually cool.

But many stars are not left on their own; they exist in binary systems with a companion, or even larger systems. These companions can provide the material needed to boost white dwarfs to the masses that can restart fusion. There are two potential pathways for this to happen. Many stars go through periods where they are so large that their gravitational pull is barely enough to hold on to their outer layers. If the white dwarf orbits closely enough, it can pull in material from the other star, boosting its mass until it passes a critical threshold, at which point fusion can restart.


simulations-find-ghostly-whirls-of-dark-matter-trailing-galaxy-arms

Simulations find ghostly whirls of dark matter trailing galaxy arms

“Basically what you do is you set up a bunch of particles that represent things like stars, gas, and dark matter, and you let them evolve for millions of years,” Bernet says. “Human lives are much too short to witness this happening in real time. We need simulations to help us see more than the present, which is like a single snapshot of the Universe.”

Several other groups already had galaxy simulations they were using to do other science, so the team asked one of them to share its data. When they found the dark matter imprint they were looking for, they checked for it in another group’s simulation. They found it again, and then in a third simulation as well.

The dark matter spirals are much less pronounced than their stellar counterparts, but the team noted a distinct imprint on the motions of dark matter particles in the simulations. The dark spiral arms lag behind the stellar arms, forming a sort of unseen shadow.

These findings add a new layer of complexity to our understanding of how galaxies evolve, suggesting that dark matter is more than a passive, invisible scaffolding holding galaxies together. Instead, it appears to react to the gravity from stars in galaxies’ spiral arms in a way that may even influence star formation or galactic rotation over cosmic timescales. It could also explain the recently discovered excess of mass along a nearby spiral arm in the Milky Way.

The fact that they saw the same effect in differently structured simulations suggests that these dark matter spirals may be common in galaxies like the Milky Way. But tracking them down in the real Universe may be tricky.

Bernet says scientists could measure dark matter in the Milky Way’s disk. “We can currently measure the density of dark matter close to us with a huge precision,” he says. “If we can extend these measurements to the entire disk with enough precision, spiral patterns should emerge if they exist.”

“I think these results are very important because it changes our expectations for where to search for dark matter signals in galaxies,” Brooks says. “I could imagine that this result might influence our expectation for how dense dark matter is near the solar neighborhood and could influence expectations for lab experiments that are trying to directly detect dark matter.” That’s a goal scientists have been chasing for nearly 100 years.

Ashley writes about space for a contractor for NASA’s Goddard Space Flight Center by day and freelances in her free time. She holds master’s degrees in space studies from the University of North Dakota and science writing from Johns Hopkins University. She writes most of her articles with a baby on her lap.


milky-way-galaxy-might-not-collide-with-andromeda-after-all

Milky Way galaxy might not collide with Andromeda after all

100,000 computer simulations reveal Milky Way’s fate—and it might not be what we thought.

It’s been textbook knowledge for over a century that our Milky Way galaxy is doomed to collide with another large spiral galaxy, Andromeda, in the next 5 billion years and merge into one even bigger galaxy. But a fresh analysis published in the journal Nature Astronomy is casting that longstanding narrative in a more uncertain light. The authors conclude that the likelihood of this collision and merger is closer to the odds of a coin flip, with a roughly 50 percent probability that the two galaxies will avoid such an event during the next 10 billion years.

Both the Milky Way and the Andromeda galaxies (M31) are part of what’s known as the Local Group (LG), which also hosts other smaller galaxies (some not yet discovered) as well as dark matter (per the prevailing standard cosmological model). Both already have remnants of past mergers and interactions with other galaxies, according to the authors.

“Predicting future mergers requires knowledge about the present coordinates, velocities, and masses of the systems partaking in the interaction,” the authors wrote. That involves not just the gravitational force between them but also dynamical friction. It’s the latter that dominates when galaxies are headed toward a merger, since it causes galactic orbits to decay.

This latest analysis is the result of combining data from the Hubble Space Telescope and the European Space Agency’s (ESA) Gaia space telescope to perform 100,000 Monte Carlo computer simulations, taking into account not just the Milky Way and Andromeda but the full LG system. Those simulations yielded a very different prediction: There is approximately a 50/50 chance of the galaxies colliding within the next 10 billion years. There is still a 2 percent chance that they will collide in the next 4 to 5 billion years. “Based on the best available data, the fate of our galaxy is still completely open,” the authors concluded.
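
A stripped-down sketch of the Monte Carlo approach (a conceptual illustration only; the published analysis models the full Local Group, including dark matter halos and dynamical friction, and none of the numbers below are taken from it): draw the poorly constrained quantities from their measurement uncertainties, apply some merger criterion to each draw, and report the fraction of outcomes that end in a collision.

```python
import random

def merger_fraction(n_draws: int = 100_000) -> float:
    """Toy Monte Carlo: fraction of random parameter draws that satisfy a
    placeholder merger criterion. Distributions and criterion are invented
    for illustration, not taken from the Nature Astronomy paper."""
    mergers = 0
    for _ in range(n_draws):
        radial_velocity = random.gauss(-110.0, 5.0)           # km/s, toward us
        transverse_velocity = abs(random.gauss(50.0, 30.0))   # km/s, sideways
        # Placeholder criterion: the approach must be sufficiently head-on.
        if transverse_velocity < 0.5 * abs(radial_velocity):
            mergers += 1
    return mergers / n_draws

print(f"merger probability ~{merger_fraction():.0%}")
```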


testing-a-robot-that-could-drill-into-europa-and-enceladus

Testing a robot that could drill into Europa and Enceladus


We don’t currently have a mission to put it on, but NASA is making sure it’s ready.

Geysers on Saturn’s moon Enceladus Credit: NASA

Europa and Enceladus are two moons that scientists have concluded harbor liquid water oceans underneath their outer icy shells. The Europa Clipper mission should reach Europa around April of 2030. If it collects data hinting at the moon’s potential habitability, robotic lander missions could be the only way to confirm whether there’s really life in there.

To make these lander missions happen, NASA’s Jet Propulsion Laboratory team has been working on a robot that could handle the search for life and already tested it on the Matanuska Glacier in Alaska. “At this point this is a pretty mature concept,” says Kevin Hand, a planetary scientist at JPL who led this effort.

Into the unknown

There are only a few things we know for sure about conditions on the surface of Europa, and nearly all of them don’t bode well for lander missions. First, Europa is exposed to very harsh radiation, which is a problem for electronics. The window of visibility—when a potential robotic lander could contact Earth—lasts less than half of the 85 hours it takes for the moon to complete its day-night cycle due to the Europa-Jupiter orbit. So, for more than half the mission, the robot would need to fend for itself, with no human ground teams to get it out of trouble. The lander would also need to run on non-rechargeable batteries, because the vast distance to the Sun would make solar panels prohibitively massive.

And that’s just the beginning. Unlike on Mars, we don’t have any permanent orbiters around Europa that could provide a communication infrastructure, and we don’t have high-resolution imagery of the surface, which would make the landing particularly tricky. “We don’t know what Europa’s surface looks like at the centimeter to meter scale. Even with the Europa Clipper imagery, the highest resolution will be about half a meter per pixel across a few select regions,” Hand explains.

Because Europa has an extremely thin atmosphere that doesn’t provide any insulation, the temperatures on top of its ice shell are estimated to vary between minus-160° Celsius during the daytime maximum and minus-220° C during the night, which means the ice the lander would be there to sample is most likely hard as concrete. In building their robot, Hand’s team had to figure out a design that could deal with all of these issues.

The work on the robotic system for the Europa lander mission began more than 10 years ago. Back then, the 2013–2022 decadal strategy for planetary science cited the Europa Clipper as the second-highest priority large-scale planetary mission, so a lander seemed like a natural follow-up.

Autonomy and ice drilling

The robot developed by Hand’s team has legs that enable it to stabilize itself on various types of surfaces, from rock-hard ice to loose, soft snow. To orient itself in the environment, it uses a stereoscopic camera with an LED light source for illumination hooked to computer-vision algorithms—a system similar to the one currently used by the Perseverance rover on Mars. “Stereoscopic cameras can triangulate points in an image and build a digital surface topography model,” explains Joseph Bowkett, a JPL researcher and engineer who worked on the robot’s design.

The team built an entirely new robotic arm with seven degrees of freedom. Force torque sensors installed in most of its joints act a bit like a nervous system, informing the robot when key components sustain excessive loads to prevent it from damaging the arm or the drill. “As we press down on the surface [and] conduct drilling and sampling, we can measure the forces and react accordingly,” Bowkett says. The finishing touch was the ICEPICK, a drilling and sampling tool the robot uses to excavate samples from the ice up to 20 centimeters deep.

Because of the long periods the lander would need to operate without any human supervision, the team also gave it a wide range of autonomous systems, which operate at two different levels. High-level autonomy is responsible for scheduling and prioritizing tasks within a limited energy budget. The robot can drill into a sampling site, analyze samples with onboard instruments, and decide whether it makes sense to keep drilling at the same spot or choose a different sampling site. The high-level system is also tasked with choosing the most important results for downlink back to Earth.

Low-level autonomy breaks all these high-level tasks down into step-by-step decisions on how to operate the drill and how to move the arm in the safest and most energy-efficient way.
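
One way to picture the two-level split (a conceptual sketch only; the function and task names here are invented, not JPL’s flight software):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    energy_cost: float  # watt-hours, hypothetical units
    priority: int       # higher = more important

def plan_high_level(tasks: list[Task], energy_budget: float) -> list[Task]:
    """High-level autonomy: choose the most valuable tasks that fit within
    the remaining non-rechargeable battery budget."""
    selected = []
    for task in sorted(tasks, key=lambda t: t.priority, reverse=True):
        if task.energy_cost <= energy_budget:
            selected.append(task)
            energy_budget -= task.energy_cost
    return selected

def execute_low_level(task: Task) -> None:
    """Low-level autonomy: break a task into individual drill and arm motions,
    checking force-torque limits at each step (stubbed out here)."""
    print(f"executing {task.name} ({task.energy_cost} Wh)")

# A hypothetical sampling campaign within a fixed battery budget.
campaign = [
    Task("drill_site_A", energy_cost=40, priority=3),
    Task("analyze_sample", energy_cost=25, priority=5),
    Task("downlink_best_results", energy_cost=15, priority=4),
    Task("drill_site_B", energy_cost=40, priority=2),
]
for task in plan_high_level(campaign, energy_budget=90):
    execute_low_level(task)
```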

The robot was tested in simulation software first, then indoors at JPL’s facilities, and finally at the Matanuska Glacier in Alaska, where it was lowered from a helicopter that acted as a proxy for a landing vehicle. It was tested at three different sites, ranked from the easiest to the most challenging. It completed all the baseline activities as well as all of the extras. The latter included a task like drilling 27 centimeters deep into ice at the most difficult site, where it was awkwardly positioned on an eight-to-12-degree slope. The robot passed all the tests with flying colors.

And then it got shelved.

Switching the ocean worlds

Hand’s team put their Europa landing robot through the Alaskan field test campaign between July and August 2022. But when the new decadal strategy for planetary science came out in 2023, it turned out that the Europa lander was not among the missions selected. The National Academies committee responsible for formulating these decadal strategies did not recommend giving it a go, mainly because they believed harsh radiation in the Jovian system would make detecting biosignatures “challenging” for a lander.

An Enceladus lander, on the other hand, remained firmly on the table. “I was also on the team developing EELS, a robot intended for a potential Enceladus mission, so thankfully I can speak about both. The radiation challenges are indeed far greater for Europa,” Bowkett says.

Another argument for changing our go-to ocean world is that water plumes containing salts along with carbon- and nitrogen-bearing molecules have already been observed on Enceladus, which means there is a slight chance biosignatures could be detected by a flyby mission. The surface of Enceladus, according to the decadal strategy document, should be capable of preserving biogenic evidence for a long time and seems more conducive to a lander mission. “Luckily, many of the lessons on how to conduct autonomous sampling on Europa, we believe, will transfer to Enceladus, with the benefit of a less damaging radiation environment,” Bowkett told Ars.

The dream of a Europa landing is not completely dead, though. “I would love to get into Europa’s ocean with a submersible and further down to the seafloor. I would love for that to happen,” Hand says. “But technologically it’s quite a big leap, and you always have to balance your dream missions with the number of technological miracles that need to be solved to make these missions possible.”

Science Robotics, 2025.  DOI: 10.1126/scirobotics.adi5582


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


new-data-confirms:-there-really-is-a-planet-squeezed-in-between-two-stars

New data confirms: There really is a planet squeezed in between two stars

And, critically, the entire orbit is within the orbit of the smaller companion star. The gravitational forces of a tight binary should prevent any planets from forming within this space early in the system’s history. So, how did the planet end up in such an unusual configuration?

A confused past

The fact that one of the stars present in ν Octantis is a white dwarf suggests some possible explanations. White dwarfs are formed by Sun-like stars that have advanced through a late helium-burning period that causes them to swell considerably, leaving the outer surface of the star weakly bound to the rest of its mass. At the distances within ν Octantis, that would allow considerable material to be drawn off the outer companion and pulled onto the surface of what’s now the central star. The net result is a considerable mass transfer.

This could have done one of two things to place a planet in the interior of the system. One is that the transferred material isn’t likely to make an immediate dive onto the surface of the nearby star. If the process is slow enough, it could have produced a planet-forming disk for a brief period—long enough to produce a planet on the interior of the system.

Alternatively, if there were planets orbiting exterior to both stars, the change in the mass distribution of the system could have potentially destabilized their orbits. That might be enough to cause interactions among the planets to send one of them spiraling inward, where it was eventually captured in the stable retrograde orbit we now find it in.

Either case, the authors emphasize, should be pretty rare, meaning we’re unlikely to have imaged many other systems like this at this stage of our study of exoplanets. They do point to another tight binary, HD 59686, that appears to have a planet in a retrograde orbit. But, as with ν Octantis, the data isn’t clear enough to rule out alternative configurations yet. So, once again, more data is needed.

Nature, 2025. DOI: 10.1038/s41586-025-09006-x  (About DOIs).
