The first stars may not have been as uniformly massive as we thought


Collapsing gas clouds in the early universe may have formed lower-mass stars as well.

Stars form in the universe from massive clouds of gas. Credit: European Southern Observatory, CC BY-SA

For decades, astronomers have wondered what the very first stars in the universe were like. These stars formed new chemical elements, which enriched the universe and allowed the next generations of stars to form the first planets.

The first stars were initially composed of pure hydrogen and helium, and they were massive—hundreds to thousands of times the mass of the Sun and millions of times more luminous. Their short lives ended in enormous explosions called supernovae, so they had neither the time nor the raw materials to form planets, and they should no longer exist for astronomers to observe.

At least that’s what we thought.

Two studies published in the first half of 2025 suggest that collapsing gas clouds in the early universe may have formed lower-mass stars as well. One study uses a new astrophysical computer simulation that models how turbulence within the cloud causes it to fragment into smaller, star-forming clumps. The other study—an independent laboratory experiment—demonstrates how molecular hydrogen, a molecule essential for star formation, may have formed earlier and in larger abundances. The process involves a catalyst that may surprise chemistry teachers.

As an astronomer who studies star and planet formation and their dependence on chemical processes, I am excited at the possibility that chemistry in the first 50 million to 100 million years after the Big Bang may have been more active than we expected.

These findings suggest that the second generation of stars—the oldest stars we can currently observe and possibly the hosts of the first planets—may have formed earlier than astronomers thought.

Primordial star formation

Video illustration of the star and planet formation process. Credit: Space Telescope Science Institute.

Stars form when massive clouds of hydrogen many light-years across collapse under their own gravity. The collapse continues until a luminous sphere surrounds a dense core that is hot enough to sustain nuclear fusion.

Nuclear fusion happens when two or more atomic nuclei gain enough energy to fuse together. This process creates a new element and releases an incredible amount of energy, which heats the stellar core. In the first stars, hydrogen nuclei fused together to create helium.
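
For reference, the net result of that first fusion chain (a standard textbook figure, not a number from the studies discussed here) is that four hydrogen nuclei become one helium nucleus:

4 ¹H → ⁴He + 2 e⁺ + 2 ν + energy (about 26.7 MeV per helium nucleus)

Roughly 0.7 percent of the original mass is converted into energy in the process.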

The new star shines because its surface is hot, but the energy fueling that luminosity percolates up from its core. The luminosity of a star is its total energy output in the form of light. The star’s brightness is the small fraction of that luminosity that we directly observe.

This process where stars form heavier elements by nuclear fusion is called stellar nucleosynthesis. It continues in stars after they form as their physical properties slowly change. The more massive stars can produce heavier elements such as carbon, oxygen, and nitrogen, all the way up to iron, in a sequence of fusion reactions that end in a supernova explosion.

Supernovae can create even heavier elements, completing the periodic table of elements. Lower-mass stars like the Sun, with their cooler cores, can sustain fusion only up to carbon. As they exhaust the hydrogen and helium in their cores, nuclear fusion stops, and the stars slowly evaporate.

The remnant of a high-mass star supernova explosion imaged by the Chandra X-ray Observatory, left, and the remnant of a low-mass star evaporating in a blue bubble, right. Credit: CC BY 4.0

High-mass stars have high pressure and temperature in their cores, so they burn bright and use up their gaseous fuel quickly. They last only a few million years, whereas low-mass stars—those less than two times the Sun’s mass—evolve much more slowly, with lifetimes of billions or even trillions of years.
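
As a rough rule of thumb (a common approximation, not a figure from the studies discussed here), a star's lifetime falls steeply with its mass because luminosity grows roughly as mass to the 3.5 power:

lifetime ≈ 10 billion years × (mass of the Sun ÷ star's mass)^2.5

By that estimate, a 20-solar-mass star burns out in roughly 6 million years, while a star with half the Sun's mass lasts on the order of 60 billion years.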

If the earliest stars were all high-mass stars, then they would have exploded long ago. But if low-mass stars also formed in the early universe, they may still exist for us to observe.

Chemistry that cools clouds

The first star-forming gas clouds, called protostellar clouds, were warm—roughly room temperature. Warm gas has internal pressure that pushes outward against the inward force of gravity trying to collapse the cloud. A hot air balloon stays inflated by the same principle. If the flame heating the air at the base of the balloon stops, the air inside cools, and the balloon begins to collapse.

Stars form when clouds of dust collapse inward and condense around a small, bright, dense core. Credit: NASA, ESA, CSA, and STScI, J. DePasquale (STScI), CC BY-ND

Only the most massive protostellar clouds with the most gravity could overcome the thermal pressure and eventually collapse. In this scenario, the first stars were all massive.

The only way to form the lower-mass stars we see today is for the protostellar clouds to cool. Gas in space cools by radiation, which transforms thermal energy into light that carries the energy out of the cloud. Hydrogen and helium atoms are not efficient radiators below several thousand degrees, but molecular hydrogen, H₂, is great at cooling gas at low temperatures.

When energized, H₂ emits infrared light, which cools the gas and lowers the internal pressure. That process would make gravitational collapse more likely in lower-mass clouds.

For decades, astronomers have reasoned that a low abundance of H₂ early on resulted in hotter clouds whose internal pressure was too high for them to collapse easily into stars. They concluded that only clouds with enormous masses, and therefore stronger gravity, would collapse, leaving only more massive stars.

Helium hydride

In a July 2025 journal article, physicist Florian Grussie and collaborators at the Max Planck Institute for Nuclear Physics demonstrated that the first molecule to form in the universe, helium hydride, HeH⁺, could have been more abundant in the early universe than previously thought. They used a computer model and conducted a laboratory experiment to verify this result.

Helium hydride? In high school science you probably learned that helium is a noble gas, meaning it does not react with other atoms to form molecules or chemical compounds. As it turns out, it does—but only under the extremely sparse and dark conditions of the early universe, before the first stars formed.

HeH⁺ reacts with hydrogen deuteride—HD, which is one normal hydrogen atom bonded to a heavier deuterium atom—to form H₂. In the process, HeH⁺ also acts as a coolant and releases heat in the form of light. So the high abundance of both molecular coolants earlier on may have allowed smaller clouds to cool faster and collapse to form lower-mass stars.

Gas flow also affects stellar initial masses

In another study, published in July 2025, astrophysicist Ke-Jung Chen led a research group at the Academia Sinica Institute of Astronomy and Astrophysics using a detailed computer simulation that modeled how gas in the early universe may have flowed.

The team’s model demonstrated that turbulence, or irregular motion, in giant collapsing gas clouds can form lower-mass cloud fragments from which lower-mass stars condense.

The study concluded that turbulence may have allowed these early gas clouds to form stars ranging from roughly the mass of the Sun up to 40 times more massive.

The galaxy NGC 1140 is small and contains large amounts of primordial gas with far fewer elements heavier than hydrogen and helium than are present in our Sun. This composition makes it similar to the intensely star-forming galaxies found in the early universe. These early universe galaxies were the building blocks for large galaxies such as the Milky Way. Credit: ESA/Hubble & NASA, CC BY-ND

The two new studies both predict that the first population of stars could have included low-mass stars. Now, it is up to us observational astronomers to find them.

This is no easy task. Low-mass stars have low luminosities, so they are extremely faint. Several observational studies have recently reported possible detections, but none are yet confirmed with high confidence. If they are out there, though, we will find them eventually.

Luke Keller is a professor of physics and astronomy at Ithaca College.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation is an independent source of news and views, sourced from the academic and research community. Our team of editors work with these experts to share their knowledge with the wider public. Our aim is to allow for better understanding of current affairs and complex issues, and hopefully improve the quality of public discourse on them.

How the cavefish lost its eyes—again and again


Mexican tetras in pitch-black caverns had no use for the energetically costly organs.

Photographs of Astyanax mexicanus, surface form with eyes (top) and cave form without eyes (bottom). Credit: Daniel Castranova, NICHD/NIH

Time and again, whenever a population was swept into a cave and survived long enough for natural selection to have its way, the eyes disappeared. “But it’s not that everything has been lost in cavefish,” says geneticist Jaya Krishnan of the Oklahoma Medical Research Foundation. “Many enhancements have also happened.”

Though the demise of their eyes continues to fascinate biologists, in recent years, attention has shifted to other intriguing aspects of cavefish biology. It has become increasingly clear that they haven’t just lost sight but also gained many adaptations that help them to thrive in their cave environment, including some that may hold clues to treatments for obesity and diabetes in people.

Casting off expensive eyes

It has long been debated why the eyes were lost. Some biologists used to argue that they just withered away over generations because cave-dwelling animals with faulty eyes experienced no disadvantage. But another explanation is now considered more likely, says evolutionary physiologist Nicolas Rohner of the University of Münster in Germany: “Eyes are very expensive in terms of resources and energy. Most people now agree that there must be some advantage to losing them if you don’t need them.”

Scientists have observed that mutations in different genes involved in eye formation have led to eye loss. In other words, says Krishnan, “different cavefish populations have lost their eyes in different ways.”

Meanwhile, the fishes’ other senses tend to have been enhanced. Studies have found that cave-dwelling fish can detect lower levels of amino acids than surface fish can. They also have more tastebuds and a higher density of sensitive cells along their bodies that let them sense water pressure and flow.

Regions of the brain that process other senses are also expanded, says developmental biologist Misty Riddle of the University of Nevada, Reno, who coauthored a 2023 article on Mexican tetra research in the Annual Review of Cell and Developmental Biology. “I think what happened is that you have to, sort of, kill the eye program in order to expand the other areas.”

Killing the processes that support the formation of the eye is quite literally what happens. Just like non-cave-dwelling members of the species, all cavefish embryos start making eyes. But after a few hours, cells in the developing eye start dying, until the entire structure has disappeared. Riddle thinks this apparent inefficiency may be unavoidable. “The early development of the brain and the eye are completely intertwined—they happen together,” she says. That means the least disruptive way for eyelessness to evolve may be to start making an eye and then get rid of it.

In what Krishnan and Rohner have called “one of the most striking experiments performed in the field of vertebrate evolution,” a study published in 2000 showed that the fate of the cavefish eye is heavily influenced by its lens. Scientists showed this by transplanting the lens of a surface fish embryo to a cavefish embryo, and vice versa. When they did this, the eye of the cavefish grew a retina, rod cells, and other important parts, while the eye of the surface fish stayed small and underdeveloped.

Starving and bingeing

It’s easy to see why cavefish would be at a disadvantage if they were to maintain expensive tissues they aren’t using. Since relatively little lives or grows in their caves, the fish are likely surviving on a meager diet of mostly bat feces and organic waste that washes in during the rainy season. Researchers keeping cavefish in labs have discovered that, genetically, the creatures are exquisitely adapted to absorbing and storing nutrients. “They’re constantly hungry, eating as much as they can,” Krishnan says.

Intriguingly, the fish have at least two mutations that are associated with diabetes and obesity in humans. In the cavefish, though, they may be the basis of some traits that are very helpful to a fish that occasionally has a lot of food but often has none. When scientists compare cavefish and surface fish kept in the lab under the same conditions, cavefish fed regular amounts of standard fish food “get fat. They get high blood sugar,” Rohner says. “But remarkably, they do not develop obvious signs of disease.”

Fats can be toxic for tissues, Rohner explains, so they are stored in fat cells. “But when these cells get too big, they can burst, which is why we often see chronic inflammation in humans and other animals that have stored a lot of fat in their tissues.” Yet a 2020 study by Rohner, Krishnan, and their colleagues revealed that even very well-fed cavefish had fewer signs of inflammation in their fat tissues than surface fish do.

Even in their sparse cave conditions, wild cavefish can sometimes get very fat, says Riddle. This is presumably because, whenever food ends up in the cave, the fish eat as much of it as possible, since there may be nothing else for a long time to come. Intriguingly, Riddle says, their fat is usually bright yellow, because of high levels of carotenoids, the substance in the carrots that your grandmother used to tell you were good for your… eyes.

“The first thing that came to our mind, of course, was that they were accumulating these because they don’t have eyes,” says Riddle. In this species, such ideas can be tested: Scientists can cross surface fish (with eyes) and cavefish (without eyes) and look at what their offspring are like. When that’s done, Riddle says, researchers see no link between eye presence or size and the accumulation of carotenoids. Some eyeless cavefish had fat that was practically white, indicating lower carotenoid levels.

Instead, Riddle thinks these carotenoids may be another adaptation to suppress inflammation, which might be important in the wild, as cavefish are likely overeating whenever food arrives.

Studies by Krishnan, Rohner, and colleagues published in 2020 and 2022 have found other adaptations that seem to help tamp down inflammation. Cavefish cells produce lower levels of certain molecules called cytokines that promote inflammation, as well as lower levels of reactive oxygen species — tissue-damaging byproducts of the body’s metabolism that are often elevated in people with obesity or diabetes.

Krishnan is investigating this further, hoping to understand how the well-fed cavefish remain healthy. Rohner, meanwhile, is increasingly interested in how cavefish survive not just overeating, but long periods of starvation, too.

No waste

On a more fundamental level, researchers still hope to figure out why the Mexican tetra evolved into cave forms while any number of other Mexican river fish that also regularly end up in caves did not. (Globally, there are more than 200 cave-adapted fish species, but species that also still have populations on the surface are quite rare.) “Presumably, there is something about the tetras’ genetic makeup that makes it easier for them to adapt,” says Riddle.

Though cavefish are now well-established lab animals used in research and are easy to purchase for that purpose, preserving them in the wild will be important to safeguard the lessons they still hold for us. “There are hundreds of millions of the surface fish,” says Rohner, but cavefish populations are smaller and more vulnerable to pressures like pollution and people drawing water from caves during droughts.

One of Riddle’s students, David Perez Guerra, is now involved in a committee to support cavefish conservation. And researchers themselves are increasingly careful, too. “The tissues of the fish collected during our lab’s last field trip benefited nine different labs,” Riddle says. “We wasted nothing.”

This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all. Sign up for Knowable Magazine’s newsletter.

Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.

Why wind farms attract so much misinformation and conspiracy theory

The recent resistance

Academic work on the question of anti-wind farm activism is revealing a pattern: Conspiracy thinking is a stronger predictor of opposition than age, gender, education, or political leaning.

In Germany, the academic Kevin Winter and colleagues found that belief in conspiracies had many times more influence on wind opposition than any demographic factor. Worryingly, presenting opponents with facts was not particularly successful.

In a more recent article, based on surveys in the US, UK, and Australia that looked at people’s propensity to give credence to conspiracy theories, Winter and colleagues argued that opposition is “rooted in people’s worldviews.”

If you think climate change is a hoax or a beat-up by hysterical eco-doomers, you’re going to be easily persuaded that wind turbines are poisoning groundwater, causing blackouts, or, in Trump’s words, “driving [the whales] loco.”

Wind farms are fertile ground for such theories. They are highly visible symbols of climate policy, and complex enough to be mysterious to non-specialists. A row of wind turbines can become a target for fears about modernity, energy security, or government control.

This, say Winter and colleagues, “poses a challenge for communicators and institutions committed to accelerating the energy transition.” It’s harder to take on an entire worldview than to correct a few made-up talking points.

What is it all about?

Beneath the misinformation, often driven by money or political power, there’s a deeper issue. Some people—perhaps Trump among them—don’t want to deal with the fact that fossil technologies, which brought prosperity and a sense of control, are also causing environmental crises. And these are problems that aren’t solved with the addition of more technology. It offends their sense of invulnerability, of dominance. This “anti-reflexivity,” as some academics call it, is a refusal to reflect on the costs of past successes.

It is also bound up with identity. In some corners of the online “manosphere,” concerns over climate change are being painted as effeminate.

Many boomers, especially white heterosexual men like Trump, have felt disoriented as their world has shifted and changed around them. The clean energy transition symbolizes part of this change. Perhaps this is a good way to understand why Trump is lashing out at “windmills.”

Marc Hudson, Visiting Fellow, SPRU, University of Sussex Business School, University of Sussex. This article is republished from The Conversation under a Creative Commons license. Read the original article.

A geothermal network in Colorado could help a rural town diversify its economy


Town pitches companies to take advantage of “reliable, cost-effective heating and cooling.”

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.

Hayden, a small town in the mountains of northwest Colorado, is searching for ways to diversify its economy, much like other energy communities across the Mountain West.

For decades, a coal-fired power plant, now scheduled to shut down in the coming years, served as a reliable source of tax revenue, jobs, and electricity.

When town leaders in the community just west of Steamboat Springs decided to create a new business park, harnessing geothermal energy to heat and cool the buildings simply made sense.

The technology aligns with Colorado’s sustainability goals and provides access to grants and tax credits that make the project financially feasible for a town with around 2,000 residents, said Matthew Mendisco, town manager.

“We’re creating the infrastructure to attract employers, support local jobs, and give our community reliable, cost-effective heating and cooling for decades to come,” Mendisco said in a statement.

Bedrock Energy, a geothermal drilling startup company that employs advanced drilling techniques developed by the oil and gas industry, is currently drilling dozens of boreholes that will help heat and cool the town’s Northwest Colorado Business District.

The 1,000-foot-deep boreholes, or wells, will connect buildings in the industrial park to steady underground temperatures. Near the surface, the Earth is approximately 51° F year round. As the drills go deeper, the temperature slowly increases to approximately 64° F near the bottom of the boreholes. Pipes looping down into each well will draw on this thermal energy for heating in the winter and cooling in the summer, significantly reducing energy needs.

Ground source heat pumps located in each building will provide additional heating or cooling depending on the time of year.

The project, one of the first in the region, drew the interest of some of the state’s top political leaders, who attended an open house hosted by town officials and company executives on Wednesday.

“Our energy future is happening right now—right here in Hayden,” US Senator John Hickenlooper (D-Colo.) said in a prepared statement prior to the event.

“Projects like this will drive rural economic growth while harnessing naturally occurring energy to provide reliable, cost-effective heating and cooling to local businesses,” said US Senator Michael Bennet (D-Colo.) in a written statement.

In an interview with Inside Climate News, Mendisco said that extreme weather snaps, which are not uncommon in a town over 6,000 feet above sea level, will not force companies to pay higher prices for fossil fuels to meet energy demands, like they do elsewhere in the country. He added that the system’s rates will be “fairly sustainable, and they will be as competitive as any of our other providers, natural gas, etcetera.”

The geothermal system under construction for Hayden’s business district will be owned by the town and will initially consist of separate systems for each building that will be connected into a larger network over time. Building out the network as the business park grows will help reduce initial capital costs.

Statewide interest

Hayden received two state grants totaling $300,000 to help design and build its geothermal system.

“It wasn’t completely clear to us how much interest was really going to be out there,” Will Toor, executive director of the Colorado Energy Office, said of a grant program the state launched in 2022.

In the past few years, the program has seen significant interest, with approximately 80 communities across the state exploring similar projects, said Bryce Carter, the geothermal program manager for the state’s Energy Office.

Two projects under development are by Xcel Energy, the largest electricity and gas provider in the state. A law passed in Colorado in 2023 required large gas utilities to develop at least one geothermal heating and cooling network in the state. The networks, which connect individual buildings and boreholes into a shared thermal loop, offer high efficiency and an economy of scale, but also have high upfront construction costs.

There are now 26 utility-led geothermal heating and cooling projects under development or completed nationwide, Jessica Silber-Byrne of the Building Decarbonization Coalition, a nonprofit based in Delaware, said.

Utility companies are widely seen as a natural developer of such projects as they can shoulder multi-million dollar expenses and recoup those costs in ratepayer fees over time. The first, and so far only, geothermal network completed by a gas utility was built by Eversource Energy in Framingham, Massachusetts, last year.

Grid stress concerns heat up geothermal opportunities

Twelve states have legislation supporting or requiring the development of thermal heating and cooling networks. Regulators are interested in the technology because its high efficiency can reduce demand on electricity grids.

Geothermal heating and cooling is roughly twice as efficient as air source heat pumps, a common electric heating and cooling alternative that relies on outdoor air. During periods of extreme heat or extreme cold, air source heat pumps have to work harder, requiring approximately four times more electricity than ground source heat pumps.
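
To put those ratios in concrete terms (illustrative numbers, not measurements from the Hayden project): a ground-source heat pump operating at a coefficient of performance of about 4 delivers 10 kilowatt-hours of heat from roughly 2.5 kilowatt-hours of electricity. An air-source unit at a coefficient of about 2 needs roughly 5 kilowatt-hours for the same heat, and in a deep cold snap, when its efficiency can drop toward 1, it needs close to 10 kilowatt-hours, about four times the ground-source draw.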

As more power-hungry data centers come online, the ability of geothermal heating and cooling to reduce the energy needs of other users of the grid, particularly at periods of peak demand, could become increasingly important, geothermal proponents say.

“The most urgent conversation about energy right now is the stress on the grid,” Joselyn Lai, Bedrock Energy’s CEO, said. “Geothermal’s role in the energy ecosystem will actually increase because of the concerns about meeting load growth.”

The geothermal system will be one of the larger drilling projects to date for Bedrock, a company founded in Austin, Texas, in 2022. Bedrock, which is working on another similarly sized project in Crested Butte, Colorado, seeks to reduce the cost of relatively shallow-depth geothermal drilling through the use of robotics and data analytics that rely on artificial intelligence.

By using a single, continuous steel pipe for drilling, rather than dozens of shorter pipe segments that need to be attached as they go, Bedrock can drill faster and transmit data more easily from sensors near the drill head to the surface.

In addition to shallow, low-temperature geothermal heating and cooling networks, deep, hot-rock geothermal systems that generate steam for electricity production are also seeing increased interest. New, enhanced geothermal systems that draw on hydraulic fracturing techniques developed by the oil and gas industry and other advanced drilling methods are quickly expanding geothermal energy’s potential.

“We’re also very bullish on geothermal electricity,” said Toor, of the Colorado Energy Office, adding that the state has a goal of reducing carbon emissions from the electricity sector by 80 percent by 2030. He said geothermal power that produces clean, round-the-clock electricity will likely play a key role in meeting that target.

The University of Colorado, Boulder, is currently considering the use of geothermal energy for heating, cooling, and electricity production and has received grants for initial feasibility studies through the state’s energy office.

For town officials in Hayden, the technology’s appeal is simple.

“Geothermal works at night, it works in the day, it works whenever you want it to work,” Mendisco said. “It doesn’t matter if there’s a giant snowstorm [or] a giant rainstorm. Five hundred feet to 1,000 feet below the surface, the Earth doesn’t care. It just generates heat.”

Using pollen to make paper, sponges, and more

Softening the shell

To begin working with pollen, scientists can remove the sticky coating around the grains in a process called defatting. Stripping away these lipids and allergenic proteins is the first step in creating the empty capsules for drug delivery that Csaba seeks. Beyond that, however, pollen’s seemingly impenetrable shell—made up of the biopolymer sporopollenin—had long stumped researchers and limited its use.

A breakthrough came in 2020, when Cho and his team reported that incubating pollen in an alkaline solution of potassium hydroxide at 80° Celsius (176° Fahrenheit) could significantly alter the surface chemistry of pollen grains, allowing them to readily absorb and retain water.

The resulting pollen is as pliable as Play-Doh, says Shahrudin Ibrahim, a research fellow in Cho’s lab who helped to develop the technique. Before the treatment, pollen grains are more like marbles: hard, inert, and largely unreactive. After, the particles are so soft they stick together easily, allowing more complex structures to form. This opens up numerous applications, Ibrahim says, proudly holding up a vial of the yellow-brown slush in the lab.

When cast onto a flat mold and dried out, the microgel assembles into a paper or film, depending on the final thickness, that is strong yet flexible. It is also sensitive to external stimuli, including changes in pH and humidity. Exposure to the alkaline solution causes pollen’s constituent polymers to become more hydrophilic, or water-loving, so depending on the conditions, the gel will swell or shrink due to the absorption or expulsion of water, explains Ibrahim.

For technical applications, pollen grains are first stripped of their allergy-inducing sticky coating, in a process called defatting. Next, if treated with acid, they form hollow sporopollenin capsules that can be used to deliver drugs. If treated instead with an alkaline solution, the defatted pollen grains are transformed into a soft microgel that can be used to make thin films, paper, and sponges. Credit: Knowable Magazine

This winning combination of properties, the Singaporean researchers believe, makes pollen-based film a prospect for many future applications: smart actuators that allow devices to detect and respond to changes in their surroundings, wearable health trackers to monitor heart signals, and more. And because pollen is naturally UV-protective, there’s the possibility it could substitute for certain photonically active substrates in perovskite solar cells and other optoelectronic devices.

The West Texas measles outbreak has ended

A large measles outbreak in Texas that affected 762 people has now ended, according to an announcement Monday by the Texas Department of State Health Services. The agency says it has been more than 42 days since a new case was reported in any of the counties that previously showed evidence of ongoing transmission.

The outbreak has contributed to the worst year for measles cases in the United States in more than 30 years. As of August 5, the most recent update from the Centers for Disease Control and Prevention, a total of 1,356 confirmed measles cases have been reported across the country this year. For comparison, there were just 285 measles cases in 2024.

The Texas outbreak began in January in a rural Mennonite community with low vaccination rates. More than two-thirds of the state’s reported cases were in children, and two children in Texas died of the virus. Both were unvaccinated and had no known underlying conditions. Over the course of the outbreak, a total of 99 people were hospitalized, representing 13 percent of cases.

Measles is a highly contagious respiratory illness that can temporarily weaken the immune system, leaving individuals vulnerable to secondary infections such as pneumonia. In rare cases, it can also lead to swelling of the brain and long-term neurological damage. It can also cause pregnancy complications, such as premature birth and babies with low birth weight. The best way to prevent the disease is the measles, mumps, and rubella (MMR) vaccine. One dose of the vaccine is 93 percent effective against measles, while two doses are 97 percent effective.

How a mysterious particle could explain the Universe’s missing antimatter


New experiments focused on understanding the enigmatic neutrino may offer insights.

An artist’s composition of the Milky Way seen with a neutrino lens (blue). Credit: IceCube Collaboration/NSF/ESO

Everything we see around us, from the ground beneath our feet to the most remote galaxies, is made of matter. For scientists, that has long posed a problem: According to physicists’ best current theories, matter and its counterpart, antimatter, ought to have been created in equal amounts at the time of the Big Bang. But antimatter is vanishingly rare in the universe. So what happened?

Physicists don’t know the answer to that question yet, but many think the solution must involve some subtle difference in the way that matter and antimatter behave. And right now, the most promising path into that unexplored territory centers on new experiments involving the mysterious subatomic particle known as the neutrino.

“It’s not to say that neutrinos are definitely the explanation of the matter-antimatter asymmetry, but a very large class of models that can explain this asymmetry are connected to neutrinos,” says Jessica Turner, a theoretical physicist at Durham University in the United Kingdom.

Let’s back up for a moment: When physicists talk about matter, that’s just the ordinary stuff that the universe is made of—mainly protons and neutrons (which make up the nuclei of atoms), along with lighter particles like electrons. Although the term “antimatter” has a sci-fi ring to it, antimatter is not all that different from ordinary matter. Typically, the only difference is electric charge: For example, the positron—the first antimatter particle to be discovered—matches an electron in its mass but carries a positive rather than a negative charge. (Things are a bit more complicated with electrically neutral particles. For example, a photon is considered to be its own antiparticle, but an antineutron is distinct from a neutron in that it’s made up of antiquarks rather than ordinary quarks.)

Various antimatter particles can exist in nature; they occur in cosmic rays and in thunderclouds, and are produced by certain kinds of radioactive decay. (Because people—and bananas—contain a small amount of radioactive potassium, they emit minuscule amounts of antimatter in the form of positrons.)

Small amounts of antimatter have also been created by scientists in particle accelerators and other experiments, at great effort and expense—putting a damper on science fiction dreams of rockets propelled by antimatter or planet-destroying weapons energized by it.

When matter and antimatter meet, they annihilate, releasing energy in the form of radiation. Such encounters are governed by Einstein’s famous equation, E=mc²—energy equals mass times the square of the speed of light — which says you can convert a little bit of matter into a lot of energy, or vice versa. (The positrons emitted by bananas and bodies have so little mass that we don’t notice the teeny amounts of energy released when they annihilate.) Because matter and antimatter annihilate so readily, it’s hard to make a chunk of antimatter much bigger than an atom, though in theory you could have everything from antimatter molecules to antimatter planets and stars.
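
To get a sense of the scale (a standard illustrative calculation, not a figure from the article), converting a single gram of matter entirely into energy yields

E = mc² = (0.001 kg) × (3 × 10⁸ m/s)² ≈ 9 × 10¹³ joules,

roughly the energy released by a 20-kiloton nuclear explosion.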

But there’s a puzzle: If matter and antimatter were created in equal amounts at the time of the Big Bang, as theory suggests, shouldn’t they have annihilated, leaving a universe made up of pure energy? Why is there any matter left?

Physicists’ best guess is that some process in the early universe favored the production of matter compared to the production of antimatter — but exactly what that process was is a mystery, and the question of why we live in a matter-dominated universe is one of the most vexing problems in all of physics.

Crucially, physicists haven’t been able to think of any such process that would mesh with today’s leading theory of matter and energy, known as the Standard Model of particle physics. That leaves theorists seeking new ideas, some as-yet-unknown physics that goes beyond the Standard Model. This is where neutrinos come in.

A neutral answer

Neutrinos are tiny particles without any electric charge. (The name translates as “little neutral one.”) According to the Standard Model, they ought to be massless, like photons, but experiments beginning in the 1990s showed that they do in fact have a tiny mass. (They’re at least a million times lighter than electrons, the extreme lightweights among normal matter.) Since physicists already know that neutrinos violate the Standard Model by having mass, their hope is that learning more about these diminutive particles might yield insights into whatever lies beyond.

Neutrinos have been slow to yield their secrets, however, because they barely interact with other particles. About 60 billion neutrinos from the Sun pass through every square centimeter of your skin each second. If those neutrinos interacted with the atoms in our bodies, they would probably destroy us. Instead, they pass right through. “You most likely will not interact with a single neutrino in your lifetime,” says Pedro Machado, a physicist at Fermilab near Chicago. “It’s just so unlikely.”

Experiments, however, have shown that neutrinos “oscillate” as they travel, switching among three different identities—physicists call them “flavors”: electron neutrino, muon neutrino, and tau neutrino. Oscillation measurements have also revealed that different-flavored neutrinos have slightly different masses.

Neutrinos are known to oscillate, switching between three varieties or “flavors.” Exactly how they oscillate is governed by the laws of quantum mechanics, and the probability of finding that an electron neutrino has transformed into a muon neutrino, for example, varies as a function of the distance traveled. (The third flavor state, the tau neutrino, is very rare.) Credit: Knowable Magazine
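
In the simplest two-flavor picture, the probability sketched above follows a standard formula (a textbook expression, not one specific to any experiment described here):

P(νμ → νe) ≈ sin²(2θ) × sin²(1.27 Δm² L / E)

where θ is the mixing angle between the two flavors, Δm² is the difference of their squared masses in eV², L is the distance traveled in kilometers, and E is the neutrino energy in giga-electron-volts. The probability rises and falls with distance, which is why experiments compare measurements at a near detector and a far detector.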

Neutrino oscillation is weird, but it may be weird in a useful way, because it might allow physicists to probe certain fundamental symmetries in nature—and these in turn may illuminate the most troubling of asymmetries, namely the universe’s matter-antimatter imbalance.

For neutrino researchers, a key symmetry is called charge-parity or CP symmetry. It’s actually a combination of two distinct symmetries: Changing a particle’s charge flips matter into antimatter (or vice versa), while changing a particle’s parity flips a particle into its mirror image (like turning a right-handed glove into a left-handed glove). So the CP-opposite version of a particle of ordinary matter is a mirror image of the corresponding antiparticle. But does this opposite particle behave exactly the same as the original one? If not, physicists say that CP symmetry is violated—a fancy way of saying that matter and antimatter behave slightly differently from one another. So any examples of CP symmetry violation in nature could help to explain the matter-antimatter imbalance.

In fact, CP violation has already been observed in some mesons, a type of subatomic particle typically made up of one quark and one antiquark, a surprising result first found in the 1960s. But it’s an extremely small effect, and it falls far short of being able to account for the universe’s matter-antimatter asymmetry.

In July 2025, scientists working at the Large Hadron Collider at CERN near Geneva reported clear evidence for a similar violation by one type of particle from a different family of subatomic particles known as baryons—but this newly observed CP violation is similarly believed to be much too small to account for the matter-antimatter imbalance.

Charge-parity or CP symmetry is a combination of two distinct symmetries: Changing a particle’s charge from positive to negative, for example, flips matter into antimatter (or vice versa), while changing a particle’s parity flips a particle into its mirror image (like turning a right-handed glove into a left-handed glove). Consider an electron: Flip its charge and you end up with a positron; flip its “handedness”—in particle physics, this is actually a quantum-mechanical property known as spin—and you get an electron with opposite spin. Flip both properties, and you get a positron that’s like a mirror image of the original electron. Whether this CP-flipped particle behaves the same way as the original electron is a key question: If it doesn’t, physicists say that CP symmetry is “violated.” Any examples of CP symmetry violation in nature could help to explain the matter-antimatter imbalance observed in the universe today. Credit: Knowable Magazine

Experiments on the horizon

So what about neutrinos? Do they violate CP symmetry—and if so, do they do it in a big enough way to explain why we live in a matter-dominated universe? This is precisely the question being addressed by a new generation of particle physics experiments. Most ambitious among them is the Deep Underground Neutrino Experiment (DUNE), which is now under construction in the United States; data collection could begin as early as 2029.

DUNE will employ the world’s most intense neutrino beam, which will fire both neutrinos and antineutrinos from Fermilab to the Sanford Underground Research Facility, located 800 miles away in South Dakota. (There’s no tunnel; the neutrinos and antineutrinos simply zip through the earth, for the most part hardly noticing that it’s there.) Detectors at each end of the beam will reveal how the particles oscillate as they traverse the distance between the two labs—and whether the behavior of the neutrinos differs from that of the antineutrinos.

DUNE won’t pin down the precise amount of neutrinos’ CP symmetry violation (if there is any), but it will set an upper limit on it. The larger the possible effect, the greater the discrepancy in the behavior of neutrinos versus antineutrinos, and the greater the likelihood that neutrinos could be responsible for the matter-antimatter asymmetry in the early universe.

The Deep Underground Neutrino Experiment (DUNE), now under construction, will see both neutrinos and antineutrinos fired from below Fermilab near Chicago to the Sanford Underground Research Facility some 800 miles away in South Dakota. Neutrinos can pass through earth unaltered, with no need of a tunnel. The ambitious experiment may reveal how the behavior of neutrinos differs from that of their antimatter counterparts, antineutrinos. Credit: Knowable Magazine

For Shirley Li, a physicist at the University of California, Irvine, the issue of neutrino CP violation is an urgent question, one that could point the way to a major rethink of particle physics. “If I could have one question answered by the end of my lifetime, I would want to know what that’s about,” she says.

Aside from being a major discovery in its own right, CP symmetry violation in neutrinos could challenge the Standard Model by pointing the way to other novel physics. For example, theorists say it would mean there could be two kinds of neutrinos—left-handed ones (the normal lightweight ones observed to date) and much heavier right-handed neutrinos, which are so far just a theoretical possibility. (The particles’ “handedness” refers to their quantum properties.)

These right-handed neutrinos could be as much as 10¹⁵ times heavier than protons, and they’d be unstable, decaying almost instantly after coming into existence. Although they’re not found in today’s universe, physicists suspect that right-handed neutrinos may have existed in the moments after the Big Bang — possibly decaying via a process that mimicked CP violation and favored the creation of matter over antimatter.

It’s even possible that neutrinos can act as their own antiparticles—that is, that neutrinos could turn into antineutrinos and vice versa. This scenario, which the discovery of right-handed neutrinos would support, would make neutrinos fundamentally different from more familiar particles like quarks and electrons. If antineutrinos can turn into neutrinos, that could help explain where the antimatter went during the universe’s earliest moments.

One way to test this idea is to look for an unusual type of radioactive decay — theorized but thus far never observed—known as “neutrinoless double-beta decay.” In regular double-beta decay, two neutrons in a nucleus simultaneously decay into protons, releasing two electrons and two antineutrinos in the process. But if neutrinos can act as their own antiparticles, then the two neutrinos could annihilate each other, leaving only the two electrons and a burst of energy.
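
Written out schematically (standard nuclear physics notation, not tied to any particular detector), the two processes differ only in what leaves the nucleus:

Ordinary double-beta decay: (A, Z) → (A, Z+2) + 2 e⁻ + 2 antineutrinos

Neutrinoless double-beta decay: (A, Z) → (A, Z+2) + 2 e⁻

Here A is the nucleus’s mass number and Z is its number of protons; observing the second process would show that neutrinos and antineutrinos are the same particle.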

A number of experiments are underway or planned to look for this decay process, including the KamLAND-Zen experiment, at the Kamioka neutrino detection facility in Japan; the nEXO experiment at the SNOLAB facility in Ontario, Canada; the NEXT experiment at the Canfranc Underground Laboratory in Spain; and the LEGEND experiment at the Gran Sasso laboratory in Italy. KamLAND-Zen, NEXT, and LEGEND are already up and running.

While these experiments differ in the details, they all employ the same general strategy: They use a giant vat of dense, radioactive material with arrays of detectors that look for the emission of unusually energetic electrons. (The electrons’ expected neutrino companions would be missing, with the energy they would have had instead carried by the electrons.)

While the neutrino remains one of the most mysterious of the known particles, it is slowly but steadily giving up its secrets. As it does so, it may crack the puzzle of our matter-dominated universe — a universe that happens to allow inquisitive creatures like us to flourish. The neutrinos that zip silently through your body every second are gradually revealing the universe in a new light.

“I think we’re entering a very exciting era,” says Turner.

This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all. Sign up for Knowable Magazine’s newsletter.

Upcoming DeepSeek AI model failed to train using Huawei’s chips

DeepSeek is still working with Huawei to make the model compatible with Ascend for inference, the people said.

Founder Liang Wenfeng has said internally he is dissatisfied with R2’s progress and has been pushing to spend more time to build an advanced model that can sustain the company’s lead in the AI field, they said.

The R2 launch was also delayed because of longer-than-expected data labeling for its updated model, another person added. Chinese media reports have suggested that the model may be released in the coming weeks.

“Models are commodities that can be easily swapped out,” said Ritwik Gupta, an AI researcher at the University of California, Berkeley. “A lot of developers are using Alibaba’s Qwen3, which is powerful and flexible.”

Gupta noted that Qwen3 adopted DeepSeek’s core concepts, such as its training algorithm that makes the model capable of reasoning, but made them more efficient to use.

Gupta, who tracks Huawei’s AI ecosystem, said the company is facing “growing pains” in using Ascend for training, though he expects the Chinese national champion to adapt eventually.

“Just because we’re not seeing leading models trained on Huawei today doesn’t mean it won’t happen in the future. It’s a matter of time,” he said.

Nvidia, a chipmaker at the center of a geopolitical battle between Beijing and Washington, recently agreed to give the US government a cut of its revenues in China in order to resume sales of its H20 chips to the country.

“Developers will play a crucial role in building the winning AI ecosystem,” said Nvidia about Chinese companies using its chips. “Surrendering entire markets and developers would only hurt American economic and national security.”

DeepSeek and Huawei did not respond to a request for comment.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

OpenAI, cofounder Sam Altman to take on Neuralink with new startup

The company aims to raise $250 million from OpenAI and other investors, although the talks are at an early stage. Altman will not personally invest.

The new venture would be in direct competition with Neuralink, founded by Musk in 2016, which seeks to wire brains directly to computers.

Musk and Altman cofounded OpenAI, but Musk left the board in 2018 after clashing with Altman, and the two have since become fierce rivals in their pursuit of AI.

Musk launched his own AI start-up, xAI, in 2023 and has been attempting to block OpenAI’s conversion from a nonprofit in the courts. Musk donated much of the initial capital to get OpenAI off the ground.

Neuralink is one of a pack of so-called brain-computer interface companies, while a number of start-ups, such as Precision Neuroscience and Synchron, have also emerged on the scene.

Neuralink earlier this year raised $650 million at a $9 billion valuation, and it is backed by investors including Sequoia Capital, Thrive Capital, and Vy Capital. Altman had previously invested in Neuralink.

Brain implants are a decades-old technology, but recent leaps forward in AI and in the electronic components used to collect brain signals have offered the prospect that they can become more practically useful.

Altman has backed a number of other companies in markets adjacent to ChatGPT-maker OpenAI, which is valued at $300 billion. In addition to cofounding World, he has also invested in the nuclear fission group Oklo and nuclear fusion project Helion.

OpenAI declined to comment.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

China tells Alibaba, ByteDance to justify purchases of Nvidia AI chips

Beijing is demanding tech companies including Alibaba and ByteDance justify their orders of Nvidia’s H20 artificial intelligence chips, complicating the US chipmaker’s business in China after striking an export arrangement with the Trump administration.

The tech companies have been asked by regulators such as the Ministry of Industry and Information Technology (MIIT) to explain why they need to order Nvidia’s H20 chips instead of using domestic alternatives, said three people familiar with the situation.

Some tech companies, which were the main buyers of Nvidia’s H20 chips before their sale in China was restricted, were planning to downsize their orders as a result of the questions from regulators, said two of the people.

“It’s not banned but has kind of become a politically incorrect thing to do,” said one Chinese data center operator about purchasing Nvidia’s H20 chips.

Alibaba, ByteDance, and MIIT did not immediately respond to a request for comment.

Chinese regulators have expressed growing disapproval of companies using Nvidia’s chips for any government or security related projects. Bloomberg reported on Tuesday that Chinese authorities had sent notices to a range of companies discouraging the use of the H20 chips, particularly for government-related work.

Experiment will attempt to counter climate change by altering ocean


Gulf of Maine will be site of safety and effectiveness testing.

Woods Hole researchers Adam Subhas (left) and Chris Murray conducted a series of lab experiments earlier this year to test the impact of an alkaline substance known as sodium hydroxide on copepods in the Gulf of Maine. Credit: Daniel Hentz/Woods Hole Oceanographic Institution

Later this summer, a fluorescent reddish-pink spiral will bloom across the Wilkinson Basin in the Gulf of Maine, about 40 miles northeast of Cape Cod. Scientists from the Woods Hole Oceanographic Institution will release the nontoxic water tracer dye behind their research vessel, where it will unfurl into a half-mile wide temporary plume, bright enough to catch the attention of passing boats and even satellites.

As it spreads, the researchers will track its movement to monitor a tightly controlled, federally approved experiment testing whether the ocean can be engineered to absorb more carbon, and in turn, help combat the climate crisis.

As the world struggles to stay below the 1.5° Celsius global warming threshold—a goal set out in the Paris Agreement to avoid the most severe impacts of climate change—experts agree that reducing greenhouse gas emissions won’t be enough to avoid overshooting this target. The latest Intergovernmental Panel on Climate Change report, published in 2023, emphasizes the urgent need to actively remove carbon from the atmosphere, too.

“If we really want to have a shot at mitigating the worst effects of climate change, carbon removal needs to start scaling to the point where it can supplement large-scale emissions reductions,” said Adam Subhas, an associate scientist in marine chemistry and geochemistry at the Woods Hole Oceanographic Institution, who will oversee the week-long experiment.

The test is part of the LOC-NESS project—short for Locking away Ocean Carbon in the Northeast Shelf and Slope—which Subhas has been leading since 2023. The ongoing research initiative is evaluating the effectiveness and environmental impact of a marine carbon dioxide removal approach called ocean alkalinity enhancement (OAE).

This method of marine carbon dioxide removal involves adding alkaline substances to the ocean to boost its natural ability to neutralize acids produced by greenhouse gases. It’s promising, Subhas said, because it has the potential to lock away carbon permanently.

“Ocean alkalinity enhancement does have the potential to reach sort of gigatons per year of carbon removal, which is the scale at which you would need to supplement emissions reductions,” Subhas said. “Once the alkalinity is dissolved in seawater, it reacts with carbon dioxide and forms bicarbonate—essentially dissolved baking soda. That bicarbonate is one of the most stable forms of carbon in the ocean, and it can stay locked away for tens of thousands, even hundreds of thousands of years.”
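
With sodium hydroxide, the alkaline substance the team used in its earlier lab tests, the overall chemistry can be sketched as a single simplified net reaction (real seawater chemistry is more involved):

NaOH + CO₂ → NaHCO₃

That is, lye plus dissolved carbon dioxide yields sodium bicarbonate, the “dissolved baking soda” Subhas describes.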

But it will be a long time before this could happen at the magnitude needed to mitigate climate change.

According to Wil Burns, co-director of the Institute for Responsible Carbon Removal at American University, between 6 and 10 gigatons of carbon need to be removed from the atmosphere annually by 2050 in order to meet the Paris Agreement climate target. “It’s a titanic task,” he said.

Most marine carbon dioxide removal initiatives, including those involving OAE, are still in a nascent stage.

“We’re really far from having any of these technologies be mature,” said Lisa Levin, an oceanographer and professor at the Scripps Institution of Oceanography at the University of California San Diego, who spoke on a panel at the United Nations Ocean Conference in June about the potential environmental risks of mining and carbon dioxide removal on deep-sea ecosystems. “We’re looking at a decade until any serious, large-scale marine carbon removal is going to be able to happen—or more.”

“In the meantime, everybody acknowledges that what we have to do is to reduce emissions, right, and not rely on taking carbon out of the atmosphere,” she said.

Marine carbon dioxide removal

So far, most carbon removal efforts have centered on land-based strategies, such as planting trees, restoring soils, and building machines that capture carbon dioxide directly from the air. Increasingly, researchers are exploring whether the oceans might help.

“Looking at the oceans makes a lot of sense when it comes to carbon removal, because the oceans sequester 70 times more CO2 than terrestrial sources,” Burns said. What if they could hold even more?

That question is drawing growing attention, not only from scientists. In recent years, a wave of private companies has started piloting various methods of removing carbon from the oceans.

“It’s really the private sector that’s pushing the scaling of this very quickly,” Subhas said. In the US and Canada, he said, there are at least four companies piloting varied ocean alkalinity enhancement techniques.

Last year, Ebb Carbon, a California-based startup focused on marine carbon dioxide removal, signed a deal with Microsoft to remove up to 350,000 metric tons of CO2 over the next decade using an ocean alkalinity enhancement process that splits seawater into acidic and alkaline streams. The alkaline stream is then returned to the sea where it reacts with CO2 and stores it as bicarbonate, enabling the ocean to absorb more carbon dioxide from the atmosphere. In return, Microsoft will purchase carbon removal credits from the startup.

Another company called Vesta, which has headquarters in San Francisco, is using an approach called Coastal Carbon Capture. This involves adding finely ground olivine—a naturally occurring, olive-green mineral—to sandy beaches. From there, ocean tides and waves carry it into the sea. Olivine reacts quickly with seawater in a process known as enhanced weathering, increasing ocean alkalinity. The company piloted one of its projects in Duck, North Carolina, last year; according to its website, it estimates that approximately 5,000 metric tons of carbon dioxide will be removed through coastal carbon capture after accounting for project emissions.
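As a rough sketch of that weathering chemistry, using forsterite (the magnesium end-member of olivine) as a simplified stand-in for the natural mineral, dissolution consumes carbon dioxide and releases dissolved bicarbonate:

\[
\mathrm{Mg_2SiO_4} + 4\,\mathrm{CO_2} + 4\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{Mg^{2+}} + 4\,\mathrm{HCO_3^-} + \mathrm{H_4SiO_4}
\]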

But these efforts are not without risk, AU’s Burns said. “We have to proceed in an extremely precautionary manner,” he said.

Some scientists are concerned that OAE initiatives that involve olivine, which contains heavy metals like nickel and chromium, may harm marine life, he said. Another concern is that the olivine could cloud certain ocean areas and block light from reaching deeper waters. If too much alkalinity is introduced too quickly in concentrated areas, he said, some animals might not be able to adjust.

Other marine carbon dioxide removal projects use methods besides OAE. Some involve adding iron to the ocean to stimulate growth in microscopic plants called phytoplankton, which absorb carbon dioxide through photosynthesis. Others include the cultivation of large-scale farms of kelp and seaweed, which also absorb carbon dioxide through photosynthesis. The marine plants can then be sunk in the deep ocean to store the carbon they absorbed.

In 2023, researchers from Woods Hole Oceanographic Institution conducted their first OAE-related field experiment from the 90-foot research vessel R/V Connecticut south of Massachusetts. As part of this first experiment, nontoxic water tracer dye was released into the ocean. Researchers tracked its movement through the water for 72 hours to model the dispersion of a plume of alkalinity over time. Credit: Woods Hole Oceanographic Institution

One technique that has not yet been tried, but may be piloted in the future, according to the science-based conservation nonprofit Ocean Visions, would employ new technology to accelerate the ocean’s natural process of transferring surface water and carbon to the deep ocean. That’s called artificial downwelling. In a reverse process—artificial upwelling—cooler, nutrient-rich waters from the deep ocean would be pumped to the surface to spur phytoplankton growth.

So far, UC San Diego’s Levin said she is not convinced that these trials will lead to impactful carbon removal.

“I do not think the ocean is ever going to be a really large part of that solution,” she said. However, she added, “It might be part of the storage solution. Right now, people are looking at injecting carbon dioxide that’s removed from industry activities on land and transporting it to the ocean and injecting it into basalt.”

Levin said she’s also worried that we don’t know enough yet about the consequences of altering natural ocean processes.

“I am concerned about how many field trials would be required to actually understand what would happen, and whether we could truly understand the environmental risk of a fully scaled-up operation,” she said.

The experiment

Most marine carbon dioxide removal projects that have already kicked off are significantly larger in scale than the LOC-NESS experiment, which Subhas estimates will remove around 50 tons of CO2.

But, he emphasized, the goal of this project is not to compete in size or scale. He said the aim is to provide independent academic research that can help guide and inform the future of this industry and ensure it does not have negative repercussions on the marine environment.

There is some concern, he said, that commercial entities may pursue large-scale OAE initiatives to capitalize on the growing voluntary carbon market without first conducting adequate testing for safety and efficacy. Unlike those initiatives, there is no profit to be made from LOC-NESS. No carbon credits will be sold, Subhas said.

The project is funded by a collection of government and philanthropic sources, including the National Oceanic and Atmospheric Administration and the Carbon to Sea Initiative, a nonprofit that brings funders and scientists together to support marine carbon dioxide removal research and technology.

“We really feel like it’s necessary for the scientific community to be delivering transparent, trusted, and rigorous science to evaluate these things as these activities are currently happening and scaling in the ocean by the private sector,” Subhas said.

The LOC-NESS field trial in Wilkinson Basin will be the first “academic only” OAE experiment conducted from a ship in US waters. It is also the first of its kind to receive a permit from the Environmental Protection Agency under the Marine Protection, Research, and Sanctuaries Act.

“There’s no research in the past or planned that gets even close to providing a learning opportunity that this research is providing for OAE in the pelagic environment,” said Carbon to Sea Initiative’s Antonius Gagern, referring to the open sea experiment.

The permit was granted in April after a year of consultations between the EPA and other federal agencies.

During the public comment periods, commenters expressed concerns about the potential impact on marine life, including critically endangered North Atlantic right whales; the small crustaceans they eat, called copepods; and the larvae of commercially important squid and mackerel. In a written response to some of these comments, the EPA stated that the small-scale project “demonstrates scientific rigor” and is “not expected to significantly affect human health, the marine environment, or other uses of the ocean.”

Subhas and his interdisciplinary team of chemists, biologists, engineers, and physicists from Woods Hole have spent the last few years planning this experiment and conducting a series of trials at their lab on Cape Cod to ensure they can safely execute and effectively monitor the results of the open-water test they will conduct this summer in the Gulf of Maine.

They specifically tested the effects of sodium hydroxide—an alkaline substance also known as lye or caustic soda—on marine microbes, phytoplankton, and copepods, a crucial food source for the right whales and many other marine species in the region. “We chose sodium hydroxide because it’s incredibly pure,” Subhas said. It’s widely used in the US to reduce acidity in drinking water.

It also helps counter ocean acidification, according to Subhas. “It’s like Tums for the ocean,” he said.

Ocean acidification occurs when the ocean absorbs excess carbon dioxide, causing its pH to drop. This makes it harder for corals, krill, and shellfish like oysters and clams to develop their hard calcium carbonate shells or skeletons.
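In simplified form, the carbonate chemistry behind acidification looks like this: dissolved CO2 forms carbonic acid, which releases hydrogen ions and lowers pH, while also shifting the balance away from the carbonate ions that shell-building organisms rely on. Adding alkalinity pushes these coupled equilibria back toward bicarbonate.

\[
\mathrm{CO_2} + \mathrm{H_2O} \;\rightleftharpoons\; \mathrm{H_2CO_3} \;\rightleftharpoons\; \mathrm{H^+} + \mathrm{HCO_3^-} \;\rightleftharpoons\; 2\,\mathrm{H^+} + \mathrm{CO_3^{2-}}
\]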

This month, the team plans to release 50 tons of sodium hydroxide into a designated area of the Wilkinson Basin from the back of one of two research vessels participating in the LOC-NESS operation.
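For a rough sense of scale (an illustrative back-of-envelope estimate, not the project’s own carbon accounting): 50 metric tons of sodium hydroxide is about 1.25 million moles, and if each mole ultimately neutralizes roughly one mole of CO2 as bicarbonate, that corresponds to on the order of 55 metric tons of CO2, consistent with the roughly 50-ton removal figure the team cites.

\[
\frac{5.0\times10^{7}\ \mathrm{g\ NaOH}}{40\ \mathrm{g/mol}} \approx 1.25\times10^{6}\ \mathrm{mol},
\qquad
1.25\times10^{6}\ \mathrm{mol} \times 44\ \mathrm{g/mol} \approx 5.5\times10^{7}\ \mathrm{g} \approx 55\ \mathrm{t\ CO_2}
\]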

The basin is an ideal test site, according to Subhas, because phytoplankton, zooplankton, commercial fish larvae, and endangered species, including some whales, are largely absent during this season. Still, as a precautionary measure, Woods Hole has contracted a protected species observer to keep a lookout for marine species and mitigate potential harm if any are spotted. That person will be on board as the vessel travels to and from the field trial site, including while the team releases the sodium hydroxide into the ocean.

The alkaline substance will be dispersed over four to 12 hours off the back of one of the research vessels, along with the nontoxic fluorescent red water tracer dye called rhodamine. The dye will help track the location and spread of the sodium hydroxide once it is released into the ocean, and the vessel’s wake will help mix the solution into the ocean water.

After about an hour, Subhas said, it will form into a “pinkish” patch of water that can be picked up on satellites. “We’re going to be taking pictures from space and looking at how this patch sort of evolves, dilutes, and stretches and disperses over time.”

For a week after that, scientists aboard the vessels will take rotating shifts to collect data around the clock. They will deploy drones and analyze over 20 types of samples from the research vessel to monitor how the surrounding waters and marine life respond to the experiment. They’ll track changes in ocean chemistry, nutrient levels, plankton populations, and water clarity, while also measuring acidity and dissolved CO2.

In March, the team did a large-scale dry run of the dispersal at an open-air testing facility on a naval base in New Jersey. According to Subhas, the trial demonstrated the team’s ability to safely and effectively deliver alkalinity to surface seawater.

“The next step is being able to measure the carbon uptake from seawater—from the atmosphere into seawater,” he said. That is a slower process. He said he expects to have some preliminary results on carbon uptake, as well as environmental impacts, early next year.

This story originally appeared on Inside Climate News.


Experiment will attempt to counter climate change by altering ocean Read More »

nasa-plans-to-build-a-nuclear-reactor-on-the-moon—a-space-lawyer-explains-why

NASA plans to build a nuclear reactor on the Moon—a space lawyer explains why

These sought-after regions are scientifically vital and geopolitically sensitive, as multiple countries want to build bases or conduct research there. Building infrastructure in these areas would cement a country’s ability to access the resources there and potentially exclude others from doing the same.

Critics may worry about radiation risks. Even if designed for peaceful use and contained properly, reactors introduce new environmental and operational hazards, particularly in a dangerous setting such as space. But the UN guidelines do outline rigorous safety protocols, and following them could potentially mitigate these concerns.

Why nuclear? Because solar has limits

The Moon has little atmosphere and experiences 14-day stretches of darkness. In some shadowed craters, where ice is likely to be found, sunlight never reaches the surface at all. These issues make solar energy unreliable, if not impossible, in some of the most critical regions.

A small lunar reactor could operate continuously for a decade or more, powering habitats, rovers, 3D printers, and life-support systems. Nuclear power could be the linchpin for long-term human activity. And it’s not just about the Moon – developing this capability is essential for missions to Mars, where solar power is even more constrained.

The UN Committee on the Peaceful Uses of Outer Space sets guidelines to govern how countries act in outer space. Credit: United States Mission to International Organizations in Vienna, CC BY-NC-ND

A call for governance, not alarm

The United States has an opportunity to lead not just in technology but in governance. If it commits to sharing its plans publicly, following Article IX of the Outer Space Treaty and reaffirming a commitment to peaceful use and international participation, it will encourage other countries to do the same.

The future of the Moon won’t be determined by who plants the most flags. It will be determined by who builds what, and how. Nuclear power may be essential for that future. Building transparently and in line with international guidelines would allow countries to more safely realize that future.

A reactor on the Moon isn’t a territorial claim or a declaration of war. But it is infrastructure. And infrastructure will be how countries display power—of all kinds—in the next era of space exploration.

Michelle L.D. Hanlon, Professor of Air and Space Law, University of Mississippi. This article is republished from The Conversation under a Creative Commons license. Read the original article.

NASA plans to build a nuclear reactor on the Moon—a space lawyer explains why Read More »