Science


Researchers engineer bacteria to produce plastics


One of the enzymes used in this system takes an amino acid (left) and links it to Coenzyme A. The second takes these items and links them into a polymer. Credit: Chae et al.

Normally, PHA synthase forms links between molecules that run through an oxygen atom. But it’s also possible to form a related chemical link that instead runs through a nitrogen atom, like those found on amino acids. There were, however, no known enzymes that catalyzed these reactions. So the researchers decided to test whether any existing enzymes could be induced to do something they don’t normally do.

The researchers started with an enzyme from Clostridium that links chemicals to Coenzyme A and has a reputation for not being picky about the chemicals it works on. It did a reasonable job of linking amino acids to Coenzyme A. To link the amino acids together, they used an enzyme from Pseudomonas carrying four mutations that expanded the range of molecules it accepts as raw materials. In a test tube, the system worked: Amino acids were linked together into a polymer.

The question was whether it would work in cells. Unfortunately, one of the two enzymes turned out to be mildly toxic to E. coli, slowing its growth. So the researchers evolved a strain of E. coli that could tolerate the protein. With both proteins in place, the cells produced small amounts of an amino acid polymer. If the researchers added an excess of one amino acid to the growth media, the polymer was biased toward incorporating that amino acid.

Boosting polymer production

However, the yield of the polymer by weight of bacteria was fairly low. “It was reasoned that these [amino acids] might be more efficiently incorporated into the polymer if generated within the cells from a suitable carbon source,” the researchers write. So, the researchers put in extra copies of the genes needed to produce one specific amino acid (lysine). That worked, producing more polymer, with a higher percentage of the polymer being lysine.



Physicists unlock another clue to brewing the perfect espresso

The team initially tried to use a simple home coffee machine for their experiments but eventually partnered with Coffeelab, a major roaster in Poland, and CoffeeMachineSale, the largest global distributor of roasting gear. This brought industrial-grade equipment and a wealth of professional coffee expertise to the project: state-of-the-art grinders, for instance, and a cafe-grade espresso machine tricked out with a pressure sensor, flow meter, and a set of scales. The entire setup was connected to laboratory laptops via a microchip and controlled with custom software that allowed the scientists to precisely monitor the pressure, mass, and flow of water through the coffee.

The scientists measured total dissolved solids to determine the rate at which coffee dissolves, comparing brews without a channel to those with artificially induced channels. They found that channeling did indeed reduce extraction yields. However, it did not affect the rate at which water flowed through the espresso puck.

“That is mostly due to the structural rearrangement of coffee grounds under pressure,” Lisicki said. “When the dry coffee puck is hit with water under high pressure—as high as 10 times the atmospheric pressure, so roughly the pressure 100 meters below the sea surface—it compacts and swells up. So even though water can find a preferential path, there is still significant resistance limiting the flow.”
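Lisicki’s depth comparison checks out against the basic hydrostatic relation p = ρgh. The short sketch below is just a sanity check on the quoted numbers, not part of the study’s methods; the seawater density value is an outside assumption:

```python
# Sanity check: pressure added by a column of seawater, p = rho * g * h.
RHO_SEAWATER = 1025.0  # kg/m^3, typical seawater density (assumed)
G = 9.81               # m/s^2, gravitational acceleration
ATM = 101_325.0        # Pa, one standard atmosphere

def gauge_pressure_at_depth(depth_m: float) -> float:
    """Pressure above atmospheric at a given depth, in pascals."""
    return RHO_SEAWATER * G * depth_m

extra_atm = gauge_pressure_at_depth(100.0) / ATM
print(f"~{extra_atm:.1f} atm above surface pressure at 100 m")
```

At 100 meters this works out to roughly 10 atmospheres above surface pressure, matching the comparison in the quote.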

The team is now factoring their results into numerical and theoretical models of porous bed extraction. They are also compiling an atlas of the different kinds of espresso pucks based on micro-CT imaging of the coffee.

“What we have found can help the coffee industry brew with more knowledge,” said Myck. “Many people follow procedures based on unconfirmed intuitions or claims which prove to have no confirmation. What’s more, we have really interesting data regarding pressure-induced flow in coffee, the results of which have been a surprise to us as well. Our approach may let us finally understand the magic that happens inside your coffee machine.”



A “biohybrid” robotic hand built using real human muscle cells

Biohybrid robots work by combining biological components like muscles, plant material, and even fungi with non-biological materials. While we are pretty good at making the non-biological parts work, we’ve always had a problem with keeping the organic components alive and well. This is why machines driven by biological muscles have always been rather small and simple—up to a couple centimeters long and typically with only a single actuating joint.

“Scaling up biohybrid robots has been difficult due to the weak contractile force of lab-grown muscles, the risk of necrosis in thick muscle tissues, and the challenge of integrating biological actuators with artificial structures,” says Shoji Takeuchi, a professor at the University of Tokyo in Japan. Takeuchi led a research team that built a full-size, 18-centimeter-long biohybrid human-like hand with all five fingers driven by lab-grown human muscles.

Keeping the muscles alive

Out of all the roadblocks that keep us from building large-scale biohybrid robots, necrosis has probably been the most difficult to overcome. Growing muscles in a lab usually means using a liquid medium to supply nutrients and oxygen to muscle cells seeded on petri dishes or applied to gel scaffolds. Since these cultured muscles are small and ideally flat, nutrients and oxygen from the medium can easily reach every cell in the growing culture.

When we try to make the muscles thicker and therefore more powerful, cells buried deeper in those thicker structures are cut off from nutrients and oxygen, so they die, undergoing necrosis. In living organisms, this problem is solved by the vascular network. But building artificial vascular networks in lab-grown muscles is still something we can’t do very well. So, Takeuchi and his team had to find their way around the necrosis problem. Their solution was sushi rolling.

The team started by growing thin, flat muscle fibers arranged side by side on a petri dish. This gave all the cells access to nutrients and oxygen, so the muscles turned out robust and healthy. Once all the fibers were grown, Takeuchi and his colleagues rolled them into tubes called MuMuTAs (multiple muscle tissue actuators) like they were preparing sushi rolls. “MuMuTAs were created by culturing thin muscle sheets and rolling them into cylindrical bundles to optimize contractility while maintaining oxygen diffusion,” Takeuchi explains.



For climate and livelihoods, Africa bets big on solar mini-grids


Nigeria is pioneering the development of small, off-grid solar panel installations.

A general view of a hybrid mini-grid station, mainly powered by solar energy, in Doma Town, Nasarawa State, Nigeria, on October 16, 2023. Credit: Kola Sulaimon/AFP via Getty Images

To the people of Mbiabet Esieyere and Mbiabet Udouba in Nigeria’s deep south, sundown would mean children doing their homework by the glow of kerosene lamps, and the faint thrum of generators emanating from homes that could afford to run them. Like many rural communities, these two villages of fishermen and farmers in the community of Mbiabet, tucked away in clearings within a dense palm forest, had never been connected to the country’s national electricity grid.

Most of the residents had never heard of solar power either. When, in 2021, a renewable-energy company proposed installing a solar “mini-grid” in their community, the villagers scoffed at the idea of the sun powering their homes. “We didn’t imagine that something [like this] can exist,” says Solomon Andrew Obot, a resident in his early 30s.

The small installation of solar panels, batteries and transmission lines proposed by the company Prado Power would service 180 households in Mbiabet Esieyere and Mbiabet Udouba, giving them significantly more reliable electricity for a fraction of the cost of diesel generators. Village leaders agreed to the installation, though many residents remained skeptical. But when the panels were set up in 2022, lights blinked on in the brightly painted two-room homes and tan mud huts dotted sparsely through the community. At a village meeting in September, locals erupted into laughter as they recalled walking from house to house, turning on lights and plugging in phone chargers. “I [was] shocked,” Andrew Obot says.

Like many African nations, Nigeria has lagged behind Global North countries in shifting away from planet-warming fossil fuels and toward renewable energy. Solar power contributes just around 3 percent of the total electricity generated in Africa—though it is the world’s sunniest continent—compared to nearly 12 percent in Germany and 6 percent in the United States.

At the same time, in many African countries, solar power now stands to offer much more than environmental benefits. About 600 million Africans lack reliable access to electricity; in Nigeria specifically, almost half of the 230 million people have no access to electricity grids. Today, solar has become cheap and versatile enough to help bring affordable, reliable power to millions—creating a win-win for lives and livelihoods as well as the climate.

That’s why Nigeria is placing its bets on solar mini-grids—small installations that produce up to 10 megawatts of electricity, enough to power over 1,700 American homes—that can be set up anywhere. Crucially, the country has pioneered mini-grid development through smart policies to attract investment, setting an example for other African nations.
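The “10 megawatts powers over 1,700 American homes” conversion can be sanity-checked with a rough calculation. The capacity factor and household-consumption figures below are outside assumptions (typical published values), not numbers from this article:

```python
# Back-of-envelope check: how many average US homes can 10 MW of solar supply?
CAPACITY_MW = 10.0             # mini-grid capacity, from the article
CAPACITY_FACTOR = 0.25         # assumed typical solar capacity factor
HOURS_PER_YEAR = 8760
KWH_PER_US_HOME_YEAR = 10_700  # assumed average US household consumption

annual_output_kwh = CAPACITY_MW * 1_000 * CAPACITY_FACTOR * HOURS_PER_YEAR
homes_powered = annual_output_kwh / KWH_PER_US_HOME_YEAR
print(f"{homes_powered:,.0f} homes")
```

With these assumptions the estimate lands around 2,000 homes, consistent with the article’s “over 1,700.”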

Nearly 120 mini-grids are now installed, powering roughly 50,000 households and reaching about 250,000 people. “Nigeria is actually like a poster child for mini-grid development across Africa,” says energy expert Rolake Akinkugbe-Filani, managing director of EnergyInc Advisors, an energy infrastructure consulting firm.

Though it will take more work—and funding—to expand mini-grids across the continent, Nigeria’s experience demonstrates that they could play a key role in weaning African communities off fossil-fuel-based power. But the people who live there are more concerned with another, immediate benefit: improving livelihoods. Affordable, reliable power from Mbiabet’s mini-grid has already supercharged local businesses, as it has in many places where nonprofits like Clean Technology Hub have supported mini-grid development, says Ifeoma Malo, the organization’s founder. “We’ve seen how that has completely transformed those communities.”

The African energy transition takes shape

Together, Africa’s countries account for less than 5 percent of global carbon dioxide emissions, and many experts, like Malo, take issue with the idea that they need to rapidly phase out fossil fuels; that task should be more urgent for the United States, China, India, the European countries and Russia, which create the bulk of emissions. Nevertheless, many African countries have set ambitious phase-out goals. Some have already turned to locally abundant renewable energy sources, like geothermal power from the Earth’s crust, which supplies nearly half of the electricity produced in Kenya, and hydropower, which creates more than 80 percent of the electricity in the Democratic Republic of Congo, Ethiopia and Uganda.

But hydropower and geothermal work only where those resources naturally exist. And development of more geographically versatile power sources, like solar and wind, has progressed more slowly in Africa. Though solar is cheaper than fossil-fuel-derived electricity in the long term, upfront construction costs are often higher than they are for building new fossil-fuel power plants.

Thanks to its sunny, equatorial position, the African continent has an immense potential for solar power, shown here in kilowatt-hours. However, solar power contributes less than 3 percent of the electricity generated in Africa. Credit: Knowable Magazine

Getting loans to finance big-ticket energy projects is especially hard in Africa, too. Compared to Europe or the United States, interest rates for loans can be two to three times higher due to perceived risks—for instance, that cash-strapped utility companies, already struggling to collect bills from customers, won’t be able to pay back the loans. Rapid political shifts and currency fluctuations add to the uncertainty. To boot, some West African nations such as Nigeria charge high tariffs on imported technologies such as solar panels. “There are challenges that are definitely hindering the pace at which renewable energy development could be scaling in the region,” says renewable energy expert Tim Reber of the Colorado-based US National Renewable Energy Laboratory.

Some African countries are beginning to overcome these barriers and spur renewable energy development, notes Bruno Merven, an expert in energy systems modeling at the University of Cape Town in South Africa, coauthor of a look at renewable energy development in the Annual Review of Resource Economics. Super-sunny Morocco, for example, has phased out subsidies for gasoline and industrial fuel. South Africa is agreeing to buy power from new, renewable infrastructure that is replacing many coal plants that are now being retired.

Nigeria, where only about a quarter of the national grid generates electricity and where many turn to generators for power, is leaning on mini-grids—since expanding the national grid to its remote communities, scattered across an area 1.3 times the size of Texas, would cost a prohibitive sum in the tens of billions of dollars. Many other countries are in the same boat. “The only way by which we can help to electrify the entire continent is to invest heavily in renewable energy mini-grids,” says Stephen Kansuk, the United Nations Development Program’s regional technical advisor for Africa on climate change mitigation and energy issues.

Experts praise the steps Nigeria has taken to spur such development. In 2016, the country’s Electricity Regulatory Commission provided legal guidelines on how developers, electricity distribution companies, regulators and communities can work together to develop the small grids. This was accompanied by a program through which organizations like the World Bank, the Global Energy Alliance for People and Planet, Bezos Earth Fund and the Rockefeller Foundation could contribute funds, making mini-grid investments less financially risky for developers.

Solar power was also made more attractive by a recent decision by Nigerian President Bola Ahmed Tinubu to remove a long-standing government subsidy on petroleum products. Fossil-fuel costs have been soaring since, for vehicles as well as the generators that many communities rely on. Nigeria has historically been Africa’s largest crude oil producer, but fuel is now largely unaffordable for the average Nigerian, including those living in rural areas, who often live on less than $2 a day. In the crude-oil-rich state of Akwa Ibom, where the Mbiabet villages are located, gasoline was 1,500 naira per liter (around $1) at the time of publishing. “Now that subsidies have come off petrol,” says Akinkugbe-Filani, “we’re seeing a lot more people transition to alternative sources of energy.”

Mini-grids take off

To plan a mini-grid in Nigeria, developers often work with government agencies that have mapped out ideal sites: sunny places where there are no plans to extend the national grid, ensuring that there’s a real power need.

More than 500 million Africans lack access to electricity, and where there is electricity, much of it comes from fossil fuels. Countries are taking different approaches to bring more renewable energy into the mix. Nigeria is focusing on mini-grids, which are especially useful in areas that lack national electricity grids. Morocco and South Africa are building large-scale solar power installations, while Kenya and the Democratic Republic of the Congo are making use of local renewable energy sources like geothermal and hydropower, respectively. Credit: Knowable Magazine

The next step is getting communities on board, which can take months. Malo recalls a remote Indigenous village in the hills of Adamawa state in Nigeria’s northeast, where locals have preserved their way of life for hundreds of years and are wary of outsiders. Her team had almost given up trying to liaise with reluctant male community leaders and decided to try reaching out to the women. The women, it turned out, were fascinated by the technology and how it could help them, especially at night — to fetch water from streams, to use the bathroom and to keep their children safe from snakes. “We find that if we convince them, they’re able to go and convince their husbands,” Malo says.

The Mbiabet community took less convincing. Residents were drawn to the promise of cheap, reliable electricity and its potential to boost local businesses.

Like many other mini-grids, the one in Mbiabet benefited from a small grant, this one from the Rocky Mountain Institute, a US-based nonprofit focused on renewable energy adoption. The funds allowed residents to retain 20 percent ownership of the mini-grid and reduced upfront costs for Prado Power, which built the panels with the help of local laborers.

It is a sunny afternoon in late September, though downpours over the preceding days have left their mark on the ground. There are no paved roads, and today the dirt road leading through the tropical forest into the cluster of villages is unnavigable by car. At one point, we build an impromptu bridge of grass and vegetation across a sludgy impasse; the last stretch of the journey is made on foot. It would be costly and labor-intensive to extend the national grid here.

Palm trees give way to tin roofs propped up by wooden poles, and Andrew Obot is waiting at the meeting point. He was Mbiabet’s vice youth president when Prado Power first contacted the community; now he’s the site manager. He steers his okada—a local motorbike—up the bumpy red dirt road to go see the solar panels.

Along the way, we see transmission lines threading through thick foliage. “That’s the solar power,” shouts Andrew Obot over the drone of the okada engine. All the lines were built by Prado Power to supply households in the two villages.

We enter a grassy clearing where three rows of solar panels sit behind wire gates. Collectively, the 39 panels have a capacity of over 20 kilowatts—enough to power just one large, energy-intensive American household but more than enough for the lightbulbs, cooker plates and fans in the 180 households in Mbiabet Esieyere and Mbiabet Udouba.
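For a sense of scale, the array’s numbers work out to a modest but sufficient share per household. A quick sketch using the figures in the paragraph above (treating “over 20 kilowatts” as exactly 20 kW):

```python
# Per-panel and per-household share of the Mbiabet array.
PANELS = 39
ARRAY_KW = 20.0   # "over 20 kilowatts", taken as 20 kW here
HOUSEHOLDS = 180

watts_per_panel = ARRAY_KW * 1_000 / PANELS          # roughly 510 W each
watts_per_household = ARRAY_KW * 1_000 / HOUSEHOLDS  # roughly 110 W each
print(f"{watts_per_panel:.0f} W per panel, "
      f"{watts_per_household:.0f} W per household")
```

Around 110 watts per household is ample for lightbulbs, cooker plates, and fans, but an order of magnitude below the average draw of a large, energy-intensive American home, which is the contrast the article draws.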

Whereas before, electricity was more conservatively used, now it is everywhere. An Afrobeats tune blares from a small barbershop on the main road winding through Mbiabet Esieyere. Inside, surrounded by walls plastered with shiny posters of trending hairstyles — including a headshot of popular musician Davido with the tagline “BBC—Big Boyz Cutz”—two young girls sit on a bench near a humming fan, waiting for their heads to be shaved.

The salon owner, Christian Aniefiok Asuquo, started his business two years ago when he was 16, just before the panels were installed. Back then, his appliances were powered by a diesel generator, which he would fill with 2,000 naira (around $1.20) worth of fuel daily; that would last around an hour. Now, he spends just 2,000 naira a month on electricity. “I feel so good,” he says, and his customers, too, are happy. He used to charge 500 naira ($0.30) per haircut but now charges 300 naira ($0.18) and still makes a profit. He has more customers these days.
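The barbershop’s numbers imply a striking saving. A minimal sketch of the arithmetic, assuming the generator was refueled every day of a 30-day month:

```python
# Monthly energy cost before and after the mini-grid, in naira.
DIESEL_NAIRA_PER_DAY = 2_000   # daily fuel spend, from the article
DAYS_PER_MONTH = 30            # assumed
GRID_NAIRA_PER_MONTH = 2_000   # monthly electricity bill, from the article

diesel_monthly = DIESEL_NAIRA_PER_DAY * DAYS_PER_MONTH
savings_factor = diesel_monthly / GRID_NAIRA_PER_MONTH
print(f"{diesel_monthly:,} naira/month on fuel vs "
      f"{GRID_NAIRA_PER_MONTH:,} naira/month on grid power: "
      f"{savings_factor:.0f}x cheaper")
```

A roughly 30-fold drop in energy costs explains how a 40 percent price cut on haircuts can still leave a profit.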

For many Mbiabet residents, “it’s an overall boost in their economic development,” says Suleiman Babamanu, the Rocky Mountain Institute’s program director in Nigeria. Also helping to encourage residents to take full advantage of their newly available power is the installation of an “agro-processing hub,” equipped with crop-processing machines and a community freezer to store products like fish. Provided by the company Farm Warehouse in partnership with Prado Power, the hub is leased out to locals. It includes a grinder and fryer to process cassava—the community’s primary crop—into garri, a local food staple, which many of the village women sell to neighboring communities and at local markets.

The women are charged around 200 naira ($0.12) to process a small basin of garri from beginning to end. Sarah Eyakndue Monday, a 24-year-old cassava farmer, used to spend three to four hours processing cassava each day; it now takes her less than an hour. “It’s very easy,” she says with a laugh. She produces enough garri during that time to earn up to 50,000 naira ($30.25) a week—almost five times what she was earning before.

Prado Power also installed a battery system to save some power for nighttime (there’s a backup diesel generator should batteries become depleted during multiple overcast days). That has proved especially valuable to women in Mbiabet Esieyere and Mbiabet Udouba, who now feel safer. “Everywhere is … brighter than before,” says Eyakndue Monday.

Other African communities have experienced similar benefits, according to Renewvia Energy, a US-based solar company. In a recent company-funded survey, 2,658 Nigerian and Kenyan households and business owners were interviewed before and after they got access to Renewvia’s mini-grids. Remarkably, the median income of Kenyan households had quadrupled. Instead of spending hours each day walking kilometers to collect drinking water, many communities were able to install electricity-powered wells or pumps, along with water purifiers.

“With all of that extra time, women in the community were able to either start their own businesses or just participate in businesses that already exist,” says Renewvia engineer Nicholas Selby, “and, with that, gain some income for themselves.”

Navigating mini-grid challenges

Solar systems require regular maintenance over the 20- to 25-year lifetime of a panel: replacing worn-out batteries, cleaning the panels, and repairing technical glitches. Unless plans for that upkeep are built into a project, it risks failure. In some parts of India, for example, thousands of mini-grids installed by the government in recent decades have fallen into disrepair, according to a report provided to The Washington Post. Typically, state agencies have little long-term incentive to maintain solar infrastructure, Kansuk says.

Kansuk says this is less likely in situations where private companies that make money off the grids help to fund them, encouraging them to install high-quality devices and maintain them. It also helps to train locals with engineering skills so they can maintain the panels themselves—companies like Renewvia have done this at their sites. Although Prado Power hasn’t been able to provide such training to locals in Mbiabet or their other sites, they recruit locals like Andrew Obot to work as security guards, site managers and construction workers.

Over the longer term, demographic shifts may also leave some mini-grids in isolated areas abandoned—as in northern Nigeria, for instance, where banditry and kidnapping are forcing rural populations toward more urban settings. “That’s become a huge issue,” Malo says. Partly for this reason, some developers are focusing on building mini-grids in regions that are less prone to violence and have higher economic activity—often constructing interconnected mini-grids that supply multiple communities.

Eventually, those close enough to the national grid will likely be connected to the larger system, says Chibuikem Agbaegbu, a Nigeria-based climate and energy expert at the Africa Policy Research Institute. They can send their excess solar-sourced electricity into the main grid, making a region’s overall energy system greener and more reliable.

The biggest challenge for mini-grids, however, is cost. Although they tend to offer cheaper, more reliable electricity than fossil-fuel-powered generators, mini-grid power is still quite expensive for many people — and often much more costly than power from national grids, which African governments frequently subsidize. Costs can be even higher when communities sprawl across large areas that are expensive to connect.

Mini-grid companies have to charge relatively high rates in order to break even, and many communities may not be buying enough power to make a mini-grid worthwhile for the developers — for instance, Kansuk says, if residents want electricity only for lighting and to run small household appliances.

Kansuk adds that this is why developers like Prado Power still rely on grants or other funding sources to subsidize construction costs so they can charge locals affordable prices for electricity. Another solution, as evidenced in Mbiabet, is to introduce industrial machinery and equipment in tandem with mini-grids to increase local incomes so that people can afford the electricity tariffs.

“For you to be able to really transform lives in rural communities, you need to be able to improve the business viability—both for the mini-grid and for the community,” says Babamanu. The Rocky Mountain Institute is part of an initiative that identifies suitable electrical products, from cold storage to rice mills to electric vehicle chargers, and supports their installation in communities with the mini-grids.

Spreading mini-grids across the continent

Energy experts believe that these kinds of solutions will be key for expanding mini-grids across Africa. Around 60 million people on the continent gained access to electricity through mini-grids between 2009 and 2019, in countries such as Kenya, Tanzania and Senegal. The United Nations Development Program is working with 21 African countries, Kansuk says, including Mali, Niger and Somalia, to incentivize private companies to develop mini-grids there.

But it takes more than robust policies to help mini-grids thrive. Malo says it would help if West African countries removed import tariffs for solar panels, as many governments in Eastern Africa have done. And though Agbaegbu estimates that Nigeria has seen over $900 million in solar investments since 2018—and the nation recently announced $750 million more through a multinationally funded program that aims to provide over 17.5 million Nigerians with electricity access—it needs more. “If you look at what is required versus what is available,” says Agbaegbu, “you find that there’s still a significant gap.”

Many in the field argue that such money should come from more industrialized, carbon-emitting countries to help pay for energy development in Global South countries in ways that don’t add to the climate problem; some also argue for funds to compensate for damages caused by climate impacts, which hit these countries hardest. At the 2024 COP29 climate change conference, wealthy nations set a target of $300 billion in annual funding for climate initiatives in other countries by 2035—three times more than what they had previously pledged. But African countries alone need an estimated $200 billion per year by 2030 to meet their energy goals, according to the International Energy Agency.

Meanwhile, Malo adds, it’s important that local banks in countries like Nigeria also invest in mini-grid development, to lessen dependence on foreign financing. That’s especially the case in light of the current freeze in USAID funding, she says, which has resulted in a loss of money for solar projects in Nigeria and other nations.

With enough support, Reber says, mini-grids—along with rooftop and larger solar projects—could make a sizable contribution to lowering carbon emissions in Africa. Those who already have the mini-grids seem convinced they’re on the path toward a better, economically richer future, and Babamanu knows of communities that have written letters to policymakers to express their interest.

Eyakndue Monday, the cassava farmer from Mbiabet, doesn’t keep her community’s news a secret. Those she has told now come to her village to charge their phones and watch television. “I told a lot of my friends that our village is … better because of the light,” she says. “They were just happy.”

This story was originally published by Knowable Magazine.


Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.



Small charges in water spray can trigger the formation of key biochemicals

Once his team nailed how droplets become electrically charged and how the micro-lightning phenomenon works, they recreated the Miller-Urey experiment. Only without the spark plugs.

Ingredients of life

After micro-lightnings started jumping between droplets in a mixture of gases similar to that used by Miller and Urey, the team examined their chemical composition with a mass spectrometer. They confirmed glycine, uracil, urea, cyanoethylene, and lots of other chemical compounds were made. “Micro-lightnings made all organic molecules observed previously in the Miller-Urey experiment without any external voltage applied,” Zare claims.

But does it really bring us any closer to explaining the beginnings of life? After all, Miller and Urey already demonstrated those molecules could be produced by electrical discharges in a primordial Earth’s atmosphere—does it matter all that much where those discharges came from? Zare argues that it does.

“Lightning is intermittent, so it would be hard for these molecules to concentrate. But if you look at waves crashing into rocks, you can think the spray would easily go into the crevices in these rocks,” Zare suggests. He suggests that the water in these crevices would evaporate, new spray would enter and evaporate again and again. The cyclic drying would allow the chemical precursors to build into more complex molecules. “When you go through such a dry cycle, it causes polymerization, which is how you make DNA,” Zare argues. Since sources of spray were likely common on the early Earth, Zare thinks this process could produce far more organic chemicals than potential alternatives like lightning strikes, hydrothermal vents, or impacting comets.

But even if micro-lightning really produced the basic building blocks of life on Earth, we’re still not sure how those combined into living organisms. “We did not make life. We just demonstrated a possible mechanism that gives us some chemical compounds you find in life,” Zare says. “It’s very important to have a lot of humility with this stuff.”

Science Advances, 2025. DOI: 10.1126/sciadv.adt8979



In one dog breed, selection for utility may have selected for obesity

High-risk Labradors also tended to pester their owners for food more often. Dogs with low genetic risk scores, on the other hand, stayed slim regardless of whether their owners paid attention to how they were fed.

But other findings proved less obvious. “We’ve long known chocolate-colored Labradors are prone to being overweight, and I’ve often heard people say that’s because they’re really popular as pets for young families with toddlers that throw food on the floor all the time and where dogs are just not given that much attention,” Raffan says. Her team’s data showed that chocolate Labradors actually had a much higher genetic obesity risk than yellow or black ones.

Some of the Labradors particularly prone to obesity, the study found, were guide dogs, which were included in the initial group. Training a guide dog in the UK usually takes around two years, during which the dogs learn multiple skills, like avoiding obstacles, stopping at curbs, navigating complex environments, and responding to emergency scenarios. Not all dogs are able to successfully finish this training, which is why guide dogs are often selectively bred with other guide dogs in the hope that their offspring will have a better chance of making it through the same training.

But it seems that this selective breeding among guide dogs might have had unexpected consequences. “Our results raise the intriguing possibility that we may have inadvertently selected dogs prone to obesity, dogs that really like their food, because that makes them a little bit more trainable. They would do anything for a biscuit,” Raffan says.

The study also found that genes responsible for obesity in dogs are also responsible for obesity in humans. “The impact high genetic risk has on dogs leads to increased appetite. It makes them more interested in food,” Raffan claims. “Exactly the same is true in humans. If you’re at high genetic risk you aren’t inherently lazy or rubbish about overeating—it’s just you are more interested in food and get more reward from it.”

Science, 2025.  DOI: 10.1126/science.ads2145



No, that’s not a cosmic cone of shame—it’s NASA’s newest space telescope


A filter for the Universe

“SPHEREx is going to produce an enormous three-dimensional map of the entire night sky.”

NASA’s SPHEREx observatory after completion of environmental testing at BAE Systems in Boulder, Colorado, last year. Credit: NASA/JPL-Caltech/BAE Systems

Satellites come in all shapes and sizes, but there aren’t any that look quite like SPHEREx, an infrared observatory NASA launched Tuesday night in search of answers to simmering questions about how the Universe, and ultimately life, came to be.

The mission launched aboard a SpaceX Falcon 9 rocket from Vandenberg Space Force Base in California at 8:10 pm local time (11:10 pm EDT) Tuesday. Less than 45 minutes later, the Falcon 9’s upper stage released SPHEREx into a polar orbit at an altitude of roughly 420 miles (675 kilometers). Ground controllers received the first signals from the spacecraft, confirming its health after reaching space.

As soon as next month, once engineers verify the observatory is ready, SPHEREx will begin a two-year science mission surveying the sky in 102 colors invisible to the human eye. The observatory’s infrared detectors will collect data on the chemical composition of asteroids, hazy star-forming clouds, and faraway galaxies.

A Falcon 9 rocket lifted SPHEREx into orbit. Credit: NASA/Jim Ross

“SPHEREx is going to produce an enormous three-dimensional map of the entire night sky, and with this immense and novel dataset, we’re going to address some of the most fundamental questions in astrophysics,” said Phil Korngut, the mission’s instrument scientist at Caltech.

“Using a technique called linear variable filter spectroscopy, we’re going to produce 102 maps in 102 wavelengths every six months, and our baseline mission is to do this four times over the course of two years,” Korngut said.

Boiling it down

The acronym for the SPHEREx mission is a mouthful—it stands for the Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer. Scientists sum up the $488 million mission by saying it seeks answers to three basic questions:

• How did the Universe begin?

• How did galaxies begin?

• What are the conditions for life outside the Solar System?

While it’s possible to sum up these objectives in an elevator pitch, the details touch on esoteric topics like cosmic inflation, quantum physics, and the flatness of spacetime. Philosophically, these questions are existential. SPHEREx will try to punch above its weight.

Built by BAE Systems, SPHEREx is about the size of a subcompact car, and it lacks the power and resolution of a flagship observatory like the James Webb Space Telescope. Webb’s primary mirror spans more than 21 feet (6.5 meters) across, while SPHEREx’s primary mirror has an effective diameter of just 7.9 inches (20 centimeters), comparable to a consumer-grade backyard telescope.
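To put those mirror figures in perspective, light-gathering power scales with the square of a mirror’s diameter. A quick back-of-the-envelope calculation in Python, using only the diameters quoted above (the ratio itself is our own arithmetic, not a figure from NASA), shows how stark the difference is:

```python
# Light-collecting area is proportional to mirror diameter squared.
# Diameters are the ones quoted in the article.
webb_diameter = 6.5      # meters (James Webb Space Telescope)
spherex_diameter = 0.20  # meters (SPHEREx effective diameter)

ratio = (webb_diameter / spherex_diameter) ** 2
# Webb collects on the order of a thousand times more light per exposure,
# which is why SPHEREx competes on sky coverage rather than depth.
print(f"Webb gathers roughly {ratio:.0f}x more light than SPHEREx")
```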

SPHEREx will test the inflationary model, a theory to explain the unimaginably violent moments after the Big Bang. Credit: NASA

But NASA’s newest space telescope has a few advantages. While Webb is designed to peer deep into small slivers of the sky, SPHEREx’s wider field of view will observe the sky in all directions. Like its name might suggest, SPHEREx will capture a spherical view of the cosmos. Color filters overlay the instrument’s detector array to separate light coming into the telescope into its component wavelengths, a process known as spectroscopy. NASA says SPHEREx’s unique design allows it to conduct infrared spectroscopy on hundreds of thousands of objects simultaneously, taking more than 600 exposures per day.

“SPHEREx is a testament to doing big science with a small telescope,” said Beth Fabinsky, the mission’s project manager at NASA’s Jet Propulsion Laboratory in California.

Because SPHEREx orbits hundreds of miles above the Earth, the telescope flies above the discernible atmosphere, which can absorb faint thermal energy coming from distant astronomical sources. Its detectors must be kept cold, below minus 360 degrees Fahrenheit (55 Kelvin), or the telescope would be blinded by its own light. This is the reason the spacecraft has such an unusual look.

Many past infrared telescopes used cryogenic coolant to chill their detectors, but this is a finite resource that gradually boils off in space, limiting mission lifetimes. Webb uses a complicated tennis court-sized sunshield to block heat and light from the Sun from its infrared instruments. Engineers came up with a simpler solution for SPHEREx.

Three concentric photon shields extend from the top of the spacecraft to insulate the telescope’s optics and detectors from light from the Sun and the Earth. This design requires no moving parts, boosting the mission’s reliability and longevity. The photon shields look like an Elizabethan collar. Pet owners may know it as the “cone of shame” given to animals after surgeries.

Like NASA’s new half-billion-dollar space telescope, this cheery canine wears his collar with pride. Credit: Michael Macor/San Francisco Chronicle via Getty Images

For SPHEREx, this cone is an enabler, allowing astronomers to map hundreds of millions of galaxies to study inflation, a cosmological theory that suggests the Universe underwent a mind-boggling expansion just after the Big Bang nearly 13.8 billion years ago. Through the process of inflation, the Universe grew a “trillion-trillion-fold” in a fraction of a second, Korngut said.

The theory suggests inflation left behind the blueprint for the largest-scale structures of the Universe, called the cosmic web. Inflation “expanded tiny fluctuations, smaller than an atom, to enormous cosmological scales that we see today, traced out by galaxies and clusters of galaxies,” said Jamie Bock, a cosmologist at Caltech who leads the SPHEREx science team.

“Even though inflation (theory) was invented in the 1980s, it’s been tested over the intervening decades and has been consistent with the data,” Bock said. “While we have this general picture, we still don’t know what drove inflation, why it happened. So what SPHEREx will do will test certain models of inflation by tracing out the three dimensions, hundreds of millions of galaxies, over the entire sky. And those galaxies trace out the initial fluctuations set up by inflation.”

SPHEREx’s telescope will also collect the combined light emitted by all galaxies, all the way back to the cosmic dawn, when the first stars and galaxies shined through the foggy aftermath of the Big Bang. Scientists believe star formation peaked in the Universe some 10 billion years ago, but their understanding of cosmic history is based on observations of a relatively small population of galaxies.

“SPHEREx, with its small telescope, is going to address this subject in a novel way,” Bock said. “Instead of really counting, very deeply, individual galaxies, SPHEREx is going to look at the total glow produced by all galaxies. This cosmological glow captures all light emitted over cosmic history from galaxies, as well as anything else that emits light. So it’s a very different way of looking at the Universe, and in particular, that first stage of star and galaxy formation must also be in this cosmic glow.”

Bock and his science team will match the aggregate data from SPHEREx with what they know about the Universe’s early galaxies from missions like Webb and the Hubble Space Telescope. “We can compare to counts that have been built up with large telescopes and see if we’ve missed any sources of light,” Bock said.

Closer to home

In our own galaxy, SPHEREx will use its infrared sensitivity to investigate the origins and abundance of water and ice in molecular clouds, the precursors to alien solar systems where gas and dust collapse to form stars and planets.

“We think that most of the water and ice in the universe is in places like this,” said Rachel Akeson, SPHEREx science data center lead at Caltech. “It’s also likely that the water in Earth’s oceans originated in the molecular cloud. So how will SPHEREx map the ice in our galaxy? While other space telescopes have found reservoirs of water in hundreds of locations, SPHEREx observations of our galaxy will give us more than 9 million targets, a much bigger sample than we have now.”

As the telescope scans across these millions of targets, its detectors will make measurements of each point in the sky in 102 infrared wavelengths. With the help of spectroscopy, SPHEREx will measure how much water is bound up in these star-forming clouds.

“Knowing the water content around the galaxy is a clue to how many locations could potentially host life,” Akeson said.

The SPHEREx observatory (top) was joined on its ride to space by four small NASA satellites (bottom) setting out to study the solar wind. Credit: Benjamin Fry/BAE Systems

All-sky surveys like SPHEREx’s often turn up surprises because they ingest immense amounts of data. They leave behind enduring legacies by building up catalogs of galaxies and stars. Astronomers use these archives to plan follow-up observations by more powerful telescopes like Webb and Hubble, or with future observatories employing technologies unavailable today.

As it pans across the sky observing distant galaxies, SPHEREx’s telescope will also catch glimpses of targets within our own Solar System. These include planets and thousands of asteroids, comets, icy worlds beyond Pluto, and interstellar objects that occasionally transit through the Solar System. SPHEREx will measure water, iron, carbon dioxide, and multiple types of ices (water, methane, nitrogen, ammonia, and others) on the surface of these worlds closer to home.

Finding savings where possible

A second NASA mission hitched a ride to space with SPHEREx, deploying into a similar orbit a few minutes after the Falcon 9 released its primary payload.

This secondary mission, called PUNCH, consists of four suitcase-sized satellites that will study the solar corona, or outer atmosphere, a volatile sheath of super-heated gas extending millions of miles from the Sun’s surface. NASA expects PUNCH’s $150 million mission will reveal information about how the corona generates the solar wind, a continuous stream of charged particles streaming out in all directions from the Sun.

There are tangible reasons to study the solar wind. These particles travel through space at speeds close to 1 million mph and, upon reaching Earth, interact with our planet’s magnetic field. Bursts of energy erupting from the Sun, like solar flares, can generate shocks in the solar wind, leading to higher risks of geomagnetic storms. These have a range of effects on Earth, from colorful but benign auroras to disruptions to satellite operations, navigation, and communication.

Other NASA spacecraft have zoomed in to observe second-by-second changes in the Sun’s atmosphere, and a fleet of sentinels closer to Earth measures the solar wind after it has traveled through space for three days. PUNCH will combine the imaging capabilities of four small satellites to create a single “virtual instrument” with a view broad enough to monitor the solar wind as it leaves the Sun and courses farther into the Solar System.

Hailing a ride to space is not as simple as opening up Uber on your phone, but sharing rides offers a more cost-effective way to launch small satellites like PUNCH. SpaceX regularly launches rideshare flights, called Transporter missions, on its Falcon 9 rocket, sometimes with more than 100 satellites on a single launch going to a standard orbit. Missions like SPHEREx and PUNCH aren’t usually a good fit for SpaceX’s Transporter missions because they have more stringent demands for cleanliness and must launch into bespoke orbits to achieve their science goals.

Matching SPHEREx and PUNCH to the same rocket required both missions to go to the same orbit, and be ready for launch at the same time. That’s a luxury not often available to NASA’s mission planners, but where possible, the agency wants to take advantage of rideshare opportunities.

Launching the PUNCH mission on its own dedicated rocket would have likely cost at least $15 million. This is the approximate price of a mission on Firefly Aerospace’s Alpha rocket, the cheapest US launcher with the muscle to lift the PUNCH satellites into orbit.

“This is a real change in how we do business,” said Mark Clampin, the acting deputy administrator for NASA’s Science Mission Directorate, or SMD. “It’s a new strategy that SMD is working where we can maximize the efficiency of launches by flying two payloads at once, so we maximize the science return.”

Photo of Stephen Clark

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Large study shows drinking alcohol is good for your cholesterol levels

The good and the bad

For reference, the optimal LDL level for adults is less than 100 mg/dL, and optimal HDL is 60 mg/dL or higher. Higher LDL levels can increase the risk of heart disease, stroke, peripheral artery disease, and other health problems, while higher HDL has a protective effect against cardiovascular disease. Though some of the changes reported in the study were small, the researchers note that they could be meaningful in some cases. For instance, an increase of 5 mg/dL in LDL is enough to raise the risk of a cardiovascular event by 2 percent to 3 percent.

The researchers ran three different models to adjust for a variety of factors, including basics like age, sex, body mass index, as well as medical conditions, such as hypertension and diabetes, and lifestyle factors, such as exercise, dietary habits, and smoking. All the models showed the same associations. They also broke out the data by what kinds of alcohol people reported drinking—wine, beer, sake, other liquors and spirits. The results were the same across the categories.

The study isn’t the first to find good news for drinkers’ cholesterol levels, though it’s one of the larger studies with longer follow-up time. And it’s long been found that alcohol drinking seems to have some benefits for cardiovascular health. A recent review and meta-analysis by the National Academies of Sciences, Engineering, and Medicine found that moderate drinkers had lower relative risks of heart attacks and strokes. The analysis also found that drinkers had a lower risk of all-cause mortality (death by any cause). The study did, however, find increased risks of breast cancer. Another recent review found increased risk of colorectal, female breast, liver, oral cavity, pharynx, larynx, and esophagus cancers.

In all, the new cholesterol findings aren’t an invitation for nondrinkers to start drinking or for heavy drinkers to keep hitting the bottle hard, the researchers caution. There are a lot of other risks to consider. For drinkers who aren’t interested in quitting, the researchers recommend taking it easy. And those who do want to quit should keep a careful eye on their cholesterol levels.

In their words: “Public health recommendations should continue to emphasize moderation in alcohol consumption, but cholesterol levels should be carefully monitored after alcohol cessation to mitigate potential [cardiovascular disease] risks,” the researchers conclude.



D-Wave quantum annealers solve problems classical algorithms struggle with


The latest claim of a clear quantum supremacy solves a useful problem.

Right now, quantum computers are small and error-prone compared to where they’ll likely be in a few years. Even within those limitations, however, there have been regular claims that the hardware can perform in ways that are impossible to match with classical computation (one of the more recent examples coming just last year). In most cases to date, however, those claims were quickly followed by some tuning and optimization of classical algorithms that boosted their performance, making them competitive once again.

Today, we have a new entry into the claims department—or rather a new claim by an old entry. D-Wave is a company that makes quantum annealers, specialized hardware that is most effective when applied to a class of optimization problems. The new work shows that the hardware can track the behavior of a quantum system called an Ising model far more efficiently than any of the current state-of-the-art classical algorithms.

Knowing what will likely come next, however, the team behind the work writes, “We hope and expect that our results will inspire novel numerical techniques for quantum simulation.”

Real physics vs. simulation

Most of the claims regarding quantum computing superiority have come from general-purpose quantum hardware, like that of IBM and Google. These machines can execute a wide range of algorithms but have been limited by the frequency of errors in their qubits. Those errors also turned out to be the reason classical algorithms have often been able to catch up with the claims from the quantum side. They limit the size of the collection of qubits that can be entangled at once, allowing algorithms that focus on interactions among neighboring qubits to perform reasonable simulations of the hardware’s behavior.

In any case, most of these claims have involved quantum computers that weren’t running any particular algorithm, but rather simply behaving like a quantum computer. Google’s claims, for example, are based on what are called “random quantum circuits,” which is exactly what it sounds like.

Off in its own corner is a company called D-Wave, which makes hardware that relies on quantum effects to perform calculations, but isn’t a general-purpose quantum computer. Instead, its collections of qubits, once configured and initialized, are left to find their way to a ground energy state, which will correspond to a solution to a problem. This approach, called quantum annealing, is best suited to solving problems that involve finding optimal solutions to complex scheduling problems.

D-Wave was likely the first company to experience the “we can outperform classical” claim being followed by an “oh no you can’t” from algorithm developers, and it has typically been far more circumspect since then. In the meantime, a number of companies have put D-Wave’s computers to use on problems that align with where the hardware is most effective.

But on Thursday, D-Wave will release a paper that will once again claim, as its title indicates, “beyond classical computation.” And it will be doing it on a problem that doesn’t involve random circuits.

You sing, Ising

The new paper describes using D-Wave’s hardware to compute the evolution over time of something called an Ising model. A simple version of this model is a two-dimensional grid of objects, each of which can be in two possible states. The state that any one of these objects occupies is influenced by the state of its neighbors. So, it’s easy to put an Ising model into an unstable state, after which values of the objects within it will flip until it reaches a low-energy, stable state. Since this is also a quantum system, however, random noise can sometimes flip bits, so the system will continue to evolve over time. You can also connect the objects into geometries that are far more complicated than a grid, allowing more complex behaviors.
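For readers who want a concrete picture, the behavior described above can be sketched as a minimal classical Monte Carlo simulation using the standard Metropolis rule. This is an illustration only, under our own simplifying assumptions (a small square grid, uniform couplings); it is not D-Wave’s hardware, its annealing process, or the quantum dynamics simulated in the paper:

```python
import math
import random

def metropolis_sweep(grid, beta):
    """One Metropolis sweep over a 2D Ising grid with periodic boundaries.

    Each site holds +1 or -1, and its energy depends on its four neighbors.
    A flip is always accepted if it lowers the energy; otherwise it is
    accepted with probability exp(-beta * dE), which is how thermal noise
    keeps flipping bits even after the system has mostly settled down.
    """
    n = len(grid)
    for _ in range(n * n):
        i, j = random.randrange(n), random.randrange(n)
        # Sum of the four nearest neighbors, wrapping around the edges.
        neighbors = (grid[(i + 1) % n][j] + grid[(i - 1) % n][j] +
                     grid[i][(j + 1) % n] + grid[i][(j - 1) % n])
        delta_e = 2 * grid[i][j] * neighbors  # energy cost of flipping
        if delta_e <= 0 or random.random() < math.exp(-beta * delta_e):
            grid[i][j] *= -1

# Start from a disordered (high-energy, unstable) state and let it relax.
random.seed(0)
n = 16
grid = [[random.choice([1, -1]) for _ in range(n)] for _ in range(n)]
for _ in range(200):
    metropolis_sweep(grid, beta=1.0)

# Magnetization near 1 means the grid has settled into an ordered state.
magnetization = abs(sum(sum(row) for row in grid)) / (n * n)
```

Classical methods like this scale poorly as the model's geometry and entanglement grow, which is exactly the regime the new paper probes.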

Someone took great notes from a physics lecture on Ising models that explains their behavior and role in physics in more detail. But there are two things you need to know to understand this news. One is that Ising models don’t involve a quantum computer merely acting like an array of qubits—it’s a problem that people have actually tried to find solutions to. The second is that D-Wave’s hardware, which provides a well-connected collection of quantum devices that can flip between two values, is a great match for Ising models.

Back in 2023, D-Wave used its 5,000-qubit annealer to demonstrate that its output when performing Ising model evolution was best described using Schrödinger’s equation, a central way of describing the behavior of quantum systems. And, as quantum systems become increasingly complex, Schrödinger’s equation gets much, much harder to solve using classical hardware—the implication being that modeling the behavior of 5,000 of these qubits could quite possibly be beyond the capacity of classical algorithms.

Still, having been burned before by improvements to classical algorithms, the D-Wave team was very cautious about voicing that implication. As they write in their latest paper, “It remains important to establish that within the parametric range studied, despite the limited correlation length and finite experimental precision, approximate classical methods cannot match the solution quality of the [D-Wave hardware] in a reasonable amount of time.”

So it’s important that they now have a new paper that indicates that classical methods in fact cannot do that in a reasonable amount of time.

Testing alternatives

The team, which is primarily based at D-Wave but includes researchers from a handful of high-level physics institutions from around the world, focused on three different methods of simulating quantum systems on classical hardware. They were put up against a smaller version of what will be D-Wave’s Advantage 2 system, designed to have a higher qubit connectivity and longer coherence times than its current Advantage. The work essentially involved finding where the classical simulators bogged down as either the simulation went on for too long, or the complexity of the Ising model’s geometry got too high (all while showing that D-Wave’s hardware could perform the same calculation).

Three different classical approaches were tested. Two of them involved a tensor network, one called MPS, for matrix product states, and the second called projected entangled-pair states (PEPS). They also tried a neural network, as a number of these have been trained successfully to predict the output of Schrödinger’s equation for different systems.

These approaches were first tested on a simple 8×8 grid of objects rolled up into a cylinder, which increases the connectivity by eliminating two of the edges. And, for this simple system that evolved over a short period, the classical methods and the quantum hardware produced answers that were effectively indistinguishable.

Two of the classical algorithms, however, were relatively easy to eliminate from serious consideration. The neural network provided good results for short simulations but began to diverge rapidly once the system was allowed to evolve for longer times. And PEPS works by focusing on local entanglement and failed as entanglement was spread to ever-larger systems. That left MPS as the classical representative as more complex geometries were run for longer times.

By identifying where MPS started to fail, the researchers could estimate the amount of classical hardware that would be needed to allow the algorithm to keep pace with the Advantage 2 hardware on the most complex systems. And, well, it’s not going to be realistic any time soon. “On the largest problems, MPS would take millions of years on the Frontier supercomputer per input to match [quantum hardware] quality,” they conclude. “Memory requirements would exceed its 700PB storage, and electricity requirements would exceed annual global consumption.” By contrast, it took a few minutes on D-Wave’s hardware.

Again, in the paper, the researchers acknowledge that this may lead to another round of optimizations that bring classical algorithms back into competition. Apparently, those optimizations began as soon as a draft of the paper was placed on the arXiv. At a press conference held as this report was being prepared, one of D-Wave’s scientists, Andrew King, noted that two preprints describing improvements to classical algorithms had already appeared there.

While these allow classical simulations to reproduce more of the results demonstrated in the new paper, they don’t involve simulating the most complicated geometries, and they require shorter times and fewer total qubits. Nature talked to one of the people behind these algorithm improvements, who was optimistic that they could eventually replicate all of D-Wave’s results using non-quantum algorithms. D-Wave, obviously, is skeptical. And King said that a new, larger Advantage 2 test chip with over 4,000 qubits available had recently been calibrated, and he had already tested even larger versions of these same Ising models on it—ones that would be considerably harder for classical methods to catch up to.

In any case, the company is acting like things are settled. During the press conference describing the new results, people frequently referred to D-Wave having achieved quantum supremacy, and its CEO, Alan Baratz, in responding to skepticism sparked by the two draft manuscripts, said, “Our work should be celebrated as a significant milestone.”

Science, 2025. DOI: 10.1126/science.ado6285  (About DOIs).

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



How whale urine benefits the ocean ecosystem

A “great whale conveyor belt”

illustration showing how whale urine spreads throughout the ocean ecosystem

Credit: A. Boersma

Migrating whales typically gorge in summers at higher latitudes to build up energy reserves to make the long migration to lower latitudes. It’s still unclear exactly why the whales migrate, but it’s likely that pregnant females in particular find it more beneficial to give birth and nurse their young in warm, shallow, sheltered areas—perhaps to protect their offspring from predators like killer whales. Warmer waters also keep the whale calves warm as they gradually develop their insulating layers of blubber. Some scientists think that whales might also migrate to molt their skin in those same warm, shallow waters.

Roman et al. examined publicly available spatial data for whale feeding and breeding grounds, augmented with sightings from airplane and ship surveys to fill in gaps in the data, then fed that data into their models for calculating nutrient transport. They focused on six species known to migrate seasonally over long distances from higher latitudes to lower latitudes: blue whales, fin whales, gray whales, humpback whales, and North Atlantic and southern right whales.

They found that whales can transport some 4,000 tons of nitrogen each year during their migrations, along with 45,000 tons of biomass—and those numbers could have been three times larger in earlier eras before industrial whaling depleted populations. “We call it the ‘great whale conveyor belt,’” Roman said. “It can also be thought of as a funnel, because whales feed over large areas, but they need to be in a relatively confined space to find a mate, breed, and give birth. At first, the calves don’t have the energy to travel long distances like the moms can.” The study did not include any effects from whales releasing feces or sloughing their skin, which would also contribute to the overall nutrient flux.

“Because of their size, whales are able to do things that no other animal does. They’re living life on a different scale,” said co-author Andrew Pershing, an oceanographer at the nonprofit organization Climate Central. “Nutrients are coming in from outside—and not from a river, but by these migrating animals. It’s super-cool, and changes how we think about ecosystems in the ocean. We don’t think of animals other than humans having an impact on a planetary scale, but the whales really do.” 

Nature Communications, 2025. DOI: 10.1038/s41467-025-56123-2  (About DOIs).



NCI employees can’t publish information on these topics without special approval

The list is “an unusual mix of words that are tied to activities that this administration has been at war with—like equity, but also words that they purport to be in favor of doing something about, like ultraprocessed food,” Tracey Woodruff, director of the Program on Reproductive Health and the Environment at the University of California, San Francisco, said in an email.

The guidance states that staffers “do not need to share content describing the routine conduct of science if it will not get major media attention, is not controversial or sensitive, and does not touch on an administration priority.”

A longtime senior employee at the institute said that the directive was circulated by the institute’s communications team, and the content was not discussed at the leadership level. It is not clear in which exact office the directive originated. The NCI, NIH and HHS did not respond to ProPublica’s emailed questions. (The existence of the list was first revealed in social media posts on Friday.)

Health and research experts told ProPublica they feared the chilling effect of the new guidance. Not only might it lead to a lengthier and more complex clearance process, it may also cause researchers to censor their work out of fear or deference to the administration’s priorities.

“This is real interference in the scientific process,” said Linda Birnbaum, a former director of the National Institute of Environmental Health Sciences who served as a federal scientist for four decades. The list, she said, “just seems like Big Brother intimidation.”

During the first two months of Donald Trump’s second presidency, his administration has slashed funding for research institutions and stalled the NIH’s grant application process.

Kennedy has suggested that hundreds of NIH staffers should be fired and said that the institute should deprioritize infectious diseases like COVID-19 and shift its focus to chronic diseases, such as diabetes and obesity.

Obesity is on the NCI’s new list, as are infectious diseases including COVID-19, bird flu and measles.

The “focus on bird flu and covid is concerning,” Woodruff wrote, because “not being transparent with the public about infectious diseases will not stop them or make them go away and could make them worse.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.



What the EPA’s “endangerment finding” is and why it’s being challenged


Getting rid of the justification for greenhouse gas regulations won’t be easy.

Credit: Mario Tama/Getty Images

A document that was first issued in 2009 would seem an unlikely candidate for making news in 2025. Yet the past few weeks have seen a steady stream of articles about an analysis first issued by the Environmental Protection Agency (EPA) in the early years of Obama’s first term: the endangerment finding on greenhouse gases.

The basics of the document are almost mundane: Greenhouse gases are warming the climate, and this will have negative consequences for US citizens. But it took a Supreme Court decision to get it written in the first place, and it has played a role in every attempt by the EPA to regulate greenhouse gas emissions across multiple administrations. And, while the first Trump administration left it in place, the press reports we’re seeing suggest that an attempt will be made to eliminate it in the near future.

The only problem: The science on which the endangerment finding is based is so solid that any ensuing court case will likely leave its opponents worse off in the long run, which is likely why the earlier Trump administration didn’t challenge it.

Get comfortable, because the story dates all the way back to the first Bush administration.

A bit of history

One of the goals of the US’s Clean Air Act, first passed in 1963, is to “address the public health and welfare risks posed by certain widespread air pollutants.” By the end of the last century, it was becoming increasingly clear that greenhouse gases fit that definition. While they aren’t necessarily directly harmful to the people inhaling them—our lungs are constantly being filled with carbon dioxide, after all—the downstream effects of the warming they cause could certainly impact human health and welfare. But, with the federal government taking no action during George W. Bush’s time in office, a group of states and cities sued to force the EPA’s hand.

That suit eventually reached the Supreme Court in the form of Massachusetts v. EPA, which led to a 2007 ruling determining that the Clean Air Act required the EPA to perform an analysis of the dangers posed by greenhouse gases. That analysis was done by late 2007, but the Bush administration simply ignored it for the remaining year it had in office. (It was eventually released after Bush left office.)

That left the Obama-era EPA to reach essentially the same conclusions that the Bush administration had: Greenhouse gases are warming the planet, and that will have various impacts—sea level rise, dangerous heat, damage to agriculture and forestry, and more.

That conclusion compelled the EPA to formulate regulations to limit the emission of greenhouse gases from power plants. Obama’s EPA did just that, but the rules came late enough that they were still tied up in court by the time his term ended. They were also formulated before the plunge in the cost of renewable power, which has since driven a drop in carbon emissions that far outpaces what the EPA’s rules were intended to accomplish.

The first Trump administration formulated alternative rules that also ended up in court for being an insufficient response to the conclusions of the endangerment finding, which ultimately led the Biden administration to start formulating a new set of rules. And at that point, the Supreme Court decided to step in and rule on the Obama rules, even though everyone knew they would never go into effect.

The court indicated that the EPA needed to regulate each power plant individually, rather than regulating the wider grid, which sent the Biden administration back to the drawing board. Its attempts at crafting regulations were also in court when Trump returned to office.

There were a couple of notable aspects to that last case, West Virginia v. EPA, which hinged on the fact that Congress had never explicitly indicated that it wanted to see greenhouse gases regulated. One is that Congress responded by ensuring that the Inflation Reduction Act’s energy-focused components specifically stated that they were intended to limit carbon emissions, eliminating one potential roadblock. The other is that, in this and other court cases, the Supreme Court could simply have overturned Massachusetts v. EPA, the decision that put greenhouse gases within the regulatory framework of the Clean Air Act. Yet a court that has shown great enthusiasm for overturning precedent didn’t do so.

Nothing dangerous?

So, in the 15 years since the EPA initially released its endangerment finding, it has resulted in no regulations whatsoever. But as long as the finding exists, the EPA is required to at least attempt to regulate greenhouse gases. So getting rid of the endangerment finding would seem like the obvious move for an administration led by a president who has repeatedly called climate change a hoax. And there were figures within the first Trump administration who argued in favor of that.

So why didn’t it happen?

That was never clear, but I’d suggest at least some members of the first Trump administration were realistic about the likely results. The effort to contest the endangerment finding was pushed by people who largely reject the vast body of scientific evidence indicating that greenhouse gases are warming the climate. And, if anything, the evidence had grown more decisive in the years between the initial endangerment finding and Trump’s inauguration. I expect that their effort was blocked by people who knew it would fail in the courts and would likely leave behind precedents that made future regulatory efforts easier.

This interpretation is supported by the fact that the Trump-era EPA received a number of formal petitions to revisit the endangerment finding. Having read a few (something you should not do), I can report that they are uniformly awful. References to supposed peer-reviewed “papers” turn out to be little more than PDFs hosted on a WordPress site. Other arguments are based on information contained in the proceedings of a conference organized by an anti-science think tank. The Trump administration rejected them all with minimal comment the day before Biden’s inauguration.

Biden’s EPA went back and issued detailed criticisms of each of them; read those if you want to see just how laughable the arguments against mainstream science were at the time. And, since then, we’ve experienced a few years of temperatures so high they’ve surprised many climate scientists.

Unrealistic

But the new head of the EPA is apparently anything but a realist, and multiple reports have indicated he’s asking to be given the opportunity to go ahead and redo the endangerment finding. A more recent report suggests two possibilities. One is to recruit scientists from the fringes to produce a misleading report and roll the dice on getting a sympathetic judge who will overlook the obvious flaws. The other would be to argue that any climate change that happens will have net benefits to the US.

That latter approach would run into the problem that we’ve become increasingly sophisticated at analyses that attribute individual weather disasters—the ones that do harm the welfare of US citizens—to climate change. While it might have been possible to make a case for uncertainty here a decade ago, that window has been largely closed by the scientific community.

Even if all of these efforts fail, it will be entirely possible for the EPA to construct greenhouse gas regulations that accomplish nothing and remain tied up in court for the remainder of Trump’s term. But a court case could show just how laughably bad the positions staked out by climate contrarians are (and, by extension, the position of the president himself). There’s also a small chance that the resulting cases will produce a legal record that makes it that much harder for courts to accept the sorts of minimalist regulations Trump proposed in his first term.

Which is probably why this approach was rejected the first time around.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.
