Sustainability

Bipartisan consensus in favor of renewable power is ending

End of an era —

The change is most pronounced in those over 50 years old.

Image of solar panels on a green grassy field, with blue sky in the background.

One of the most striking things about the explosion of renewable power that’s happening in the US is that much of it is going on in states governed by politicians who don’t believe in the problem wind and solar are meant to address. Acceptance of the evidence for climate change tends to be lowest among Republicans, yet many of the states where renewable power has boomed—wind in Wyoming and Iowa, solar in Texas—are governed by Republicans.

That’s partly because, up until about 2020, there was a strong bipartisan consensus in favor of expanding wind and solar power, with support above 75 percent among both parties. Since then, however, support among Republicans has dropped dramatically, approaching 50 percent, according to polling data released this week.

Renewables enjoyed solid Republican support until recently.

To a certain extent, none of this should be surprising. The current leader of the Republican Party has been saying that wind turbines cause cancer and offshore wind is killing whales. And conservative-backed groups have been spreading misinformation in order to drum up opposition to solar power facilities.

Meanwhile, since 2022, the Inflation Reduction Act has been promoted as one of the Biden administration’s signature accomplishments and has driven significant investments in renewable power, much of it in red states. Negative partisanship is undoubtedly contributing to this drop in support.

One striking thing about the new polling data, gathered by the Pew Research Center, is how dramatically it skews with age. When given a choice between expanding fossil fuel production and expanding renewable power, Republicans under the age of 30 favored renewables by a two-to-one margin. Republicans over 30, in contrast, favored fossil fuels by margins that increased with age, topping out at three-to-one in favor of fossil fuels among those 65 and over. The decline in support began among those over 50 in 2020; support held steady among younger groups until 2024, when the 30–49 age group started moving toward fossil fuels.

Among younger Republicans, support for renewable energy remains high.

Democrats, by contrast, break in favor of renewables by 75 points, with little difference across age groups and no indication of significant change over time. They’re also twice as likely as Republicans to think a solar farm will help the local economy.

Similar differences were apparent when Pew asked about policies meant to encourage the sale of electric vehicles, with 83 percent of Republicans opposed to having half of cars sold be electric in 2032. By contrast, nearly two-thirds of Democrats favored this policy.

There’s also a rural/urban divide apparent (consistent with Republicans getting more support from rural voters). Forty percent of urban residents felt that a solar farm would improve the local economy; only 25 percent of rural residents agreed. Rural residents were also more likely to say solar farms made the landscape unattractive and take up too much space. (Suburban participants were consistently in between rural and urban participants.)

What’s behind these changes? The single biggest factor appears to be negative partisanship combined with the election of Joe Biden.

For Republicans, 2020 represented an inflection point in terms of support for different types of energy. That wasn’t true for Democrats.

Among Republicans, support for every single form of power—fossil fuels, renewables, and nuclear—started to shift in 2020. Among Democrats, that’s largely not the case: their high level of support for renewable power and aversion to fossil fuels remained largely unchanged. The lone exception is nuclear power, where support rose among both Democrats and Republicans (the Biden administration has adopted a number of pro-nuclear policies).

This isn’t to say that non-political factors are playing no role. The rapid expansion of renewable power means that many more people are seeing facilities open near them and viewing that as an indication of a changing society. Some degree of backlash was almost inevitable and, in this case, conservative lobbying groups with close ties to fossil fuel interests were ready to take advantage of it.

“Energy-smart” bricks need less power to make, are better insulation

Image of a person holding a bag full of dirty looking material with jagged pieces in it.

Some of the waste material that ends up part of these bricks.

Seamus Daniel, RMIT University

Researchers at the Royal Melbourne Institute of Technology (RMIT) in Australia have developed special “energy-smart bricks” that can be made by mixing clay with glass waste and coal ash. These bricks can help mitigate the negative effects of traditional brick manufacturing, an energy-intensive process that requires large-scale clay mining, contributes heavily to CO2 emissions, and generates a lot of air pollution.

According to the RMIT researchers, “Brick kilns worldwide consume 375 million tons (~340 million metric tons) of coal in combustion annually, which is equivalent to 675 million tons of CO2 emission (~612 million metric tons).” This exceeds the combined annual carbon dioxide emissions of 130 million passenger vehicles in the US.

The energy-smart bricks rely on a material called RCF waste. It mostly contains fine pieces of glass (92 percent) left over from the recycling process, along with ceramic materials, plastic, paper, and ash. Most of this waste material generally ends up in landfills, where it can cause soil and water degradation. However, the study authors note, “The utilization of RCF waste in fired-clay bricks offers a potential solution to the increasing global waste crisis and reduces the burden on landfills.”

What makes the bricks “energy-smart”

Compared to traditional bricks, the newly developed energy-smart bricks have lower thermal conductivity: they retain heat longer and heat more uniformly, which means they can be manufactured at lower firing temperatures. For instance, while regular clay bricks are fired (baked in a kiln until they become hard and durable) at 1,050° C, energy-smart bricks can achieve the required hardness at 950° C, saving 20 percent of the energy needed for traditional brickmaking.

Based on bricks produced in their lab, they estimated that “each firing cycle led to a potential value of up to $158,460 through a reduction of 417 tonnes of CO2, resulting from a 9.5 percent reduction in firing temperature.” So basically, if a manufacturer switches from regular clay bricks to energy-smart bricks, it will end up saving thousands of dollars on its power bill, and its kilns will release less CO2 into Earth’s atmosphere. Scaled up to the estimated 1.4 trillion bricks made each year, the savings are substantial.
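
A rough back-of-the-envelope check on those numbers (a minimal Python sketch; the per-cycle coal figure is an illustrative assumption, not a value from the study, and the CO2-per-coal ratio is simply the one implied by the kiln statistics quoted above):

```python
# Sanity check of the reported savings from lowering the firing temperature.
# The coal-per-cycle figure is an illustrative assumption, not from the study.

TRADITIONAL_FIRING_C = 1050    # typical firing temperature for clay bricks (°C)
ENERGY_SMART_FIRING_C = 950    # firing temperature reported for the RMIT bricks (°C)
ENERGY_SAVING_FRACTION = 0.20  # ~20 percent energy saving reported by the researchers

temp_reduction = 1 - ENERGY_SMART_FIRING_C / TRADITIONAL_FIRING_C
print(f"Firing temperature reduction: {temp_reduction:.1%}")  # ~9.5%

# Hypothetical kiln: if a firing cycle normally burns 100 tons of coal,
# a 20 percent energy saving avoids 20 tons of coal per cycle.
coal_per_cycle = 100                       # assumed, for illustration only
coal_saved = coal_per_cycle * ENERGY_SAVING_FRACTION
co2_per_coal = 675 / 375                   # ratio implied by the kiln statistics above (~1.8)
print(f"CO2 avoided per cycle (hypothetical kiln): {coal_saved * co2_per_coal:.0f} tons")
```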

But brick manufacturers aren’t the only ones who benefit. “Bricks characterized by low thermal conductivity contribute to efficient heat storage and absorption, creating a cooler environment during summer and a warmer comfort during winter. This advantage translates into energy savings for air conditioning, benefiting the occupants of the house or building,” the study authors explained.

Tests conducted by the researchers suggest that the residents of a single-story house built using energy-smart bricks will save up to 5 percent on their energy bills compared to those living in a house made with regular clay bricks.

US’s power grid continues to lower emissions—everything else, not so much

Down, but not down enough —

Excluding one pandemic year, emissions are lower than they’ve been since the 1980s.

Graph showing total US carbon emissions, along with individual sources. Most trends are largely flat or show slight declines.

On Thursday, the US Department of Energy released its preliminary estimate for the nation’s carbon emissions in the previous year. Any drop in emissions puts us on a path that would avoid some of the catastrophic warming scenarios that were still on the table at the turn of the century. But if we’re to have a chance of meeting the Paris Agreement goal of keeping the planet from warming beyond 2° C, we’ll need to see emissions drop dramatically in the near future.

So, how is the US doing? Emissions continue to trend downward, but there’s no sign the drop has accelerated. And most of the drop has come from a single sector: changes in the power grid.

Off the grid, on the road

US carbon emissions have been trending downward since roughly 2007, when they peaked at about six gigatonnes. The pandemic produced a dramatic drop in 2020, lowering emissions to under five gigatonnes for the first time since before 1990, when the EIA’s data starts. Carbon dioxide release went up a bit afterward; 2023 marked the first post-pandemic decline, with emissions again clearly below five gigatonnes.

The DOE’s Energy Information Administration (EIA) divides the sources of carbon dioxide into five different sectors: electricity generation, transportation, and residential, commercial, and industrial uses. The EIA assigns 80 percent of the 2023 reduction in US emissions to changes in the electric power grid, which is not a shock given that it’s the only sector that has seen significant change over the entire 30-year period the EIA is tracking.

With hydro in the rearview mirror, wind and solar are coming after coal and nuclear.

What’s happening with the power grid? Several things. At the turn of the century, coal accounted for over half of the US’s electricity generation; it’s now down to 16 percent. Within the next two years, it’s likely to be passed by wind and solar, which were indistinguishable from zero percent of generation as recently as 2004. Things would be even better for them if not for generally low wind speeds leading to a decline in wind generation in 2023. The biggest change, however, has been the rise of natural gas, which went from 10 percent of generation in 1990 to over 40 percent in 2023.

A small part of the emissions reduction came from lower demand, which dropped by a percentage point compared to 2022. Electrification of transport and appliances, along with the growth of AI processing, is expected to send demand soaring in the near future, but there’s no indication of that on the grid yet.

Currently, generating electricity accounts for 30 percent of the US’s carbon emissions. That places it as the second most significant contributor, behind transportation, which is responsible for 39 percent of emissions. The EIA rates transportation emissions as unchanged relative to 2022, despite seeing air travel return to pre-pandemic levels and a slight increase in gasoline consumption. Later in this decade, tighter fuel efficiency rules are expected to drive a decline in transportation emissions, which are only down about 10 percent compared to their 2006 peak.

Buildings and industry

The remaining sectors—commercial, residential, and industrial—have a more complicated relationship with fossil fuels. Some of their energy comes via the grid, so those emissions are already accounted for, and they should fall as the grid decarbonizes. For commercial and residential use, though, grid-dependent emissions are dropping even faster than grid changes alone would imply, which suggests that things like more efficient lighting and appliances are having an impact.

Separately, direct use of fossil fuels for furnaces, water heaters, and the like has been largely flat for the entire 30 years the EIA is looking at, although milder weather led to a slight decline in 2023 (8 percent for residential properties, 4 percent for commercial).

For industry, in contrast, the EIA tracks only the direct use of fossil fuels. Those emissions are down slightly over the 30-year period but have been fairly stable since the 2008 economic crisis, with no change between 2022 and 2023. As with the electric grid, the primary shift in this sector has been the growth of natural gas and the decline of coal.

Overall, there are two ways to look at this data. The first is that progress at limiting carbon emissions has been extremely limited and that there has been no progress at all in several sectors. The more optimistic view is that the technologies for decarbonizing the electric grid and improving building electrical usage are currently the most advanced, and the US has focused its decarbonization efforts where they’ll make the most difference.

From either perspective, it’s clear that the harder challenges are still coming, both in terms of accelerating decarbonization and in terms of tackling sectors where decarbonization will be more difficult. The Biden administration has been working to put policies in place that should drive progress in this regard, but we probably won’t see much of their impact until early in the next decade.

Listing image by Yaorusheng

Updating California’s grid for EVs may cost up to $20 billion

A charging cable plugged in to a port on the side of an electric vehicle. The plug glows green near where it contacts the vehicle.

California’s electric grid, with its massive solar production and booming battery installations, is already on the cutting edge of the US’s energy transition. And it’s likely to stay there, as the state will require that all passenger vehicles be electric by 2035. Obviously, that will require a grid that’s able to send a lot more electrons down its wiring and a likely shift in the time of day that demand peaks.

Is the grid ready? And if not, how much will it cost to get it there? Two researchers at the University of California, Davis—Yanning Li and Alan Jenn—have determined that nearly two-thirds of its feeder lines don’t have the capacity that will likely be needed for car charging. Updating to handle the rising demand might set its utilities back as much as 40 percent of the existing grid’s capital cost.

The lithium state

Li and Jenn aren’t the first to look at how well existing grids can handle growing electric vehicle sales; other research has found various ways that different grids fall short. However, they have access to uniquely detailed data relevant to California’s ability to distribute electricity (they do not concern themselves with generation). They have information on every substation, feeder line, and transformer that delivers electrons to customers of the state’s three largest utilities, which collectively cover nearly 90 percent of the state’s population. In total, they know the capacity that can be delivered through over 1,600 substations and 5,000 feeders.

California has clear goals for its electric vehicles, and those are matched with usage based on the California statewide travel demand model, which accounts for both trips and the purpose of those trips. These are used to determine how much charging will need to be done, as well as where that charging will take place (at home or at a charging station). Details on that charging come from the utilities, charging station providers, and data logs.

They also project which households will purchase EVs based on socioeconomic factors, scaled so that adoption matches the state’s goals.

Combined, all of this means that Li and Jenn can estimate where charging is taking place and how much electricity will be needed per charge. They can then compare that need to what the existing grid has the capacity to deliver.
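
Conceptually, that comparison step is simple: add projected EV charging to each feeder’s projected peak load and flag any feeder whose total exceeds its rated capacity. The sketch below illustrates the idea with invented numbers; it is not the study’s model or data.

```python
# Toy illustration of flagging overloaded feeders: compare each feeder's projected
# peak load (existing demand plus EV charging) against its rated capacity.
# All numbers below are invented for illustration; they are not the study's data.

feeders = {
    # feeder id: (rated capacity in MW, projected peak non-EV load in MW)
    "F-001": (12.0, 9.5),
    "F-002": (8.0, 6.0),
    "F-003": (20.0, 14.0),
}

projected_ev_peak_mw = {  # projected EV charging at the same peak hour (assumed)
    "F-001": 3.5,
    "F-002": 1.2,
    "F-003": 4.0,
}

overloaded = []
for fid, (capacity, base_load) in feeders.items():
    total = base_load + projected_ev_peak_mw.get(fid, 0.0)
    if total > capacity:
        overloaded.append((fid, total / capacity))

for fid, ratio in overloaded:
    print(f"{fid}: projected peak at {ratio:.0%} of capacity")
```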

It falls short, and things get worse very quickly. By 2025, only about 7 percent of the feeders will experience periods of overload. By 2030, that figure will grow to 27 percent, and by 2035—only about a decade away—about half of the feeders will be overloaded. Problems grow a bit more slowly after that, with two-thirds of the feeders overloaded by 2045, a decade after all new cars sold in California must be EVs. At that point, total electrical demand will be close to twice the existing capacity.

Climate damages by 2050 will be 6 times the cost of limiting warming to 2°

A worker walks between long rows of solar panels.

Almost from the start, arguments about mitigating climate change have included an element of cost-benefit analysis: Would it cost more to move the world off fossil fuels than it would to simply try to adapt to a changing world? A strong consensus has built that the answer to the question is a clear no, capped off by a Nobel in Economics given to one of the people whose work was key to building that consensus.

While most academics may have considered the argument put to rest, it has enjoyed an extended life in the political sphere. Large unknowns remain about both the costs and benefits, which depend in part on the remaining uncertainties in climate science and in part on the assumptions baked into economic models.

In Wednesday’s edition of Nature, a small team of researchers analyzed how local economies have responded to the last 40 years of warming and projected those effects forward to 2050. They find that we’re already committed to warming that will see the growth of the global economy undercut by 20 percent. That places the cost of even a limited period of climate change at roughly six times the estimated price of putting the world on a path to limit the warming to 2° C.

Linking economics and climate

Many economic studies of climate change involve assumptions about the value of spending today to avoid the costs of a warmer climate in the future, as well as the details of those costs. But the people behind the new work, Maximilian Kotz, Anders Levermann, and Leonie Wenz, decided to take an empirical approach. They obtained data about the economic performance of over 1,600 individual regions around the globe, going back 40 years. They then attempted to look for connections between that performance and climate events.

Previous research has already identified a number of climate measures—average temperatures, daily temperature variability, total annual precipitation, the annual number of wet days, and extreme daily rainfall—that have been linked to economic impacts. Some of these factors, like extreme rainfall, are likely to have immediate effects; others, like temperature variability, are likely to have a gradual impact that is only felt over time.

The researchers tested each factor for lagged effects, meaning an economic impact felt some time after the climate event itself. These tests suggested that temperature factors could have a lagged impact up to eight years after they changed, while precipitation changes were typically felt within four years. While this relationship might be spurious for some of the economic changes in some regions, the inclusion of so many regions over a long time period should help limit the impact of those spurious correlations.
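
For readers who want a concrete picture, a distributed-lag regression of regional growth on climate variables looks roughly like the sketch below. It uses random placeholder data and plain least squares; the actual study works with roughly 1,600 subnational regions, several climate measures, and fixed effects, so treat this only as an illustration of the lag structure.

```python
# Minimal sketch of a distributed-lag regression of regional economic growth on a
# climate variable. The data are random placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_years, max_lag = 50, 40, 8

temp = rng.normal(size=(n_regions, n_years))    # placeholder temperature anomalies
growth = rng.normal(size=(n_regions, n_years))  # placeholder GDP growth rates

rows, targets = [], []
for r in range(n_regions):
    for t in range(max_lag, n_years):
        # regressors: current temperature plus lags 1..8 (distributed lags)
        rows.append(temp[r, t - max_lag:t + 1][::-1])
        targets.append(growth[r, t])

X = np.column_stack([np.ones(len(rows)), np.array(rows)])  # intercept + lag terms
y = np.array(targets)

coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
for lag, beta in enumerate(coefs[1:]):
    print(f"lag {lag}: estimated effect on growth = {beta:+.4f}")
```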

With the climate/economic relationship worked out, the researchers obtained climate projections from the Coupled Model Intercomparison Project (CMIP). With those in hand, they could look at future climates and estimate their economic costs.

Obviously, there are limits to how far into the future this process will work. The uncertainties of the climate models grow with time; the future economy starts looking a lot less like the present, and things like temperature extremes start to reach levels where past economic behavior no longer applies.

To deal with that, Kotz, Levermann, and Wenz performed a random sampling to determine the uncertainty in the system they developed. They looked for the point where the uncertainties from the two most extreme emissions scenarios overlap. That occurs in 2049; after that, we can’t expect the past economic impacts of climate to apply.

Kotz, Levermann, and Wenz suggest that this is an indication of warming we’re already committed to, in part because the effect of past emissions hasn’t been felt in its entirety and partly because the global economy is a boat that turns slowly, so it will take time to implement significant changes in emissions. “Such a focus on the near term limits the large uncertainties about diverging future emission trajectories, the resulting long-term climate response and the validity of applying historically observed climate–economic relations over long timescales during which socio-technical conditions may change considerably,” they argue.

EPA seeks to cut “Cancer Alley” pollutants

Out of the air —

Chemical plants will have to monitor how much is escaping and stop leaks.

Image of a large industrial facility on the side of a river.

An oil refinery in Louisiana. Facilities such as this have led to a proliferation of petrochemical plants in the area.

On Tuesday, the US Environmental Protection Agency announced new rules that are intended to cut emissions of two chemicals that have been linked to elevated incidence of cancer: ethylene oxide and chloroprene. While production and use of these chemicals takes place in a variety of locations, they’re particularly associated with an area of petrochemical production in Louisiana that has become known as “Cancer Alley.”

The new regulations would require chemical manufacturers to monitor the emissions at their facilities and take steps to repair any problems that result in elevated emissions. Despite extensive evidence linking these chemicals to elevated risk of cancer, industry groups are signaling their opposition to these regulations, and the EPA has seen two previous attempts at regulation set aside by courts.

Dangerous stuff

The two chemicals at issue are primarily used as intermediates in the manufacture of common products. Chloroprene, for example, is used for the production of neoprene, a synthetic rubber-like substance that’s probably familiar from products like insulated sleeves and wetsuits. It’s a four-carbon chain with two double-bonds that allow for polymerization and an attached chlorine that alters its chemical properties.

According to the National Cancer Institute (NCI), chloroprene “is a mutagen and carcinogen in animals and is reasonably anticipated to be a human carcinogen.” Given that cancers are driven by DNA damage, any mutagen would be “reasonably anticipated” to drive the development of cancer. Beyond that, it appears to be pretty nasty stuff, with the NCI noting that “exposure to this substance causes damage to the skin, lungs, CNS, kidneys, liver and depression of the immune system.”

The NCI’s take on ethylene oxide is even more definitive, with the Institute placing it on its list of cancer-causing substances. The chemical is very simple: two carbons that are linked to each other directly and also linked via an oxygen atom, which makes the molecule look a bit like a triangle. This configuration allows the molecule to participate in a broad range of reactions that break one of the oxygen bonds, making it useful in the production of a huge range of chemicals. Its reactivity also makes it useful for sterilizing items such as medical equipment.

That sterilization works by damaging DNA, which again makes ethylene oxide prone to causing cancers.

In addition to these two chemicals, the EPA’s new regulations will target a number of additional airborne pollutants, including benzene, 1,3-butadiene, ethylene dichloride, and vinyl chloride, all of which have similar entries at the NCI.

Despite the extensive record linking these chemicals to cancer, The New York Times quotes the US Chamber of Commerce, a pro-industry group, as saying that “EPA should not move forward with this rule-making based on the current record because there remains significant scientific uncertainty.”

A history of exposure

The petrochemical industry is the main source of these chemicals, so their release is associated with areas where the oil and gas industry has a major presence; the EPA notes that the regulations will target sources in Delaware, New Jersey, and the Ohio River Valley. But the primary focus will be on chemical plants in Texas and Louisiana. These include the area that has picked up the moniker Cancer Alley due to a high incidence of the disease in a stretch along the Mississippi River with a large concentration of chemical plants.

As is the case with many examples of chemical pollution, the residents of Cancer Alley are largely poor and belong to minority groups. As a result, the EPA had initially attempted to regulate the emissions under a civil rights provision of the Clean Air Act, but that has been bogged down due to lawsuits.

The new regulations simply set limits on permissible levels of release at what’s termed the “fencelines” of the facilities where these chemicals are made, used, or handled. If levels exceed an annual limit, the owners and operators “must find the source of the pollution and make repairs.” This gets rid of previous exemptions for equipment startup, shutdown, and malfunctions; those exemptions had been held to violate the Clean Air Act in a separate lawsuit.

The EPA estimates that the sites subject to regulation will see their collective emissions of these chemicals drop by nearly 80 percent, which works out to be 54 tons of ethylene oxide, 14 tons of chloroprene, and over 6,000 tons of the other pollutants. That in turn will reduce the cancer risk from these toxins by 96 percent among those subjected to elevated exposures. Collectively, the chemicals subject to these regulations also contribute to smog, so these reductions will have an additional health impact by reducing its levels as well.

While the EPA says that “these emission reductions will yield significant reductions in lifetime cancer risk attributable to these air pollutants,” it was unable to come up with an estimate of the financial benefits that will result from that reduction. By contrast, it estimates that the cost of compliance will end up being approximately $150 million annually. “Most of the facilities covered by the final rule are owned by large corporations,” the EPA notes. “The cost of implementing the final rule is less than one percent of their annual national sales.”

This sort of cost-benefit analysis is a required step during the formulation of Clean Air Act regulations, so it’s worth taking a step back and considering what’s at stake here: the EPA is basically saying that companies that work with significant amounts of carcinogens need to take stronger steps to make sure that they don’t use the air people breathe as a dumping ground for them.

Unsurprisingly, The New York Times quotes a neoprene manufacturer that the EPA is currently suing over its chloroprene emissions as claiming the new regulations are “draconian.”

Can we drill for hydrogen? New find suggests additional geological source.

Image of apartment buildings with mine tailings behind them, and green hills behind those.

Mining operations start right at the edge of Bulqizë, Albania.

“The search for geologic hydrogen today is where the search for oil was back in the 19th century—we’re just starting to understand how this works,” said Frédéric-Victor Donzé, a geologist at Université Grenoble Alpes. Donzé is part of a team of geoscientists studying a site at Bulqizë in Albania where miners at one of the world’s largest chromite mines may have accidentally drilled into a hydrogen reservoir.

The question Donzé and his team want to tackle is whether hydrogen has a parallel geological system with huge subsurface reservoirs that could be extracted the way we extract oil. “Bulqizë is a reference case. For the first time, we have real data. We have a proof,” Donzé said.

Greenish energy source

Water is the only byproduct of burning hydrogen, which makes it a potential go-to green energy source. The problem is that the vast majority of the 96 million tons of hydrogen we make each year comes from processing methane, and that does release greenhouse gases. Lots of them. “There are green ways to produce hydrogen, but the cost of processing methane is lower. This is why we are looking for alternatives,” Donzé said.

And the key to one of those alternatives may be buried in the Bulqizë mine. Chromite, an ore that contains lots of chromium, has been mined at Bulqizë since the 1980s. The mining operation was going smoothly until 2007, when the miners drilled through a fault, a discontinuity in the rocks. “Then they started to have explosions. In the mine, they had a small electric train, and there were sparks flying, and then… boom,” Donzé said. At first, Bulqizë management thought the cause was methane, the usual culprit of mining accidents. But it wasn’t.

Hydrogen at fault

The mine was bought by a Chinese company in 2017, and the new owners immediately sent their engineering teams to deal with explosions. They did measurements and found the hydrogen concentration in the mine’s galleries was around 1–2 percent. It only needs to be at 0.4–0.5 percent for the atmosphere to become explosive. “They also found the hydrogen was coming from the fault drilled through back in 2007. Unfortunately, one of the explosions happened when the engineering team was down there. Three or four people died,” Donzé said.

It turned out that over 200 tons of hydrogen was released from the Bulqizë mine each year. Donzé’s team went there to figure out where all this hydrogen was coming from.

The rocks themselves did not contain enough hydrogen to account for that sort of flow rate. One possible explanation is that the hydrogen is being released as a product of an ongoing geological process called serpentinization. “But for this to happen, the temperature in the mine would need to reach 200–300 degrees Celsius, and even then, it would not produce 200 tons per year,” said Donzé. “So the most probable was the third option—that we have a reservoir,” he added.

“Probable,” of course, is far from certain.

Google, Environmental Defense Fund will track methane emissions from space

It’s a gas —

Satellite data + Google Maps + AI should help figure out where methane is leaking.

computer-generated image of a satellite highlighting emissions over a small square on the globe.

With color, high resolution.

Google/EDF

When discussing climate change, attention generally focuses on our soaring carbon dioxide emissions. But levels of methane have risen just as dramatically, and it’s a far more potent greenhouse gas. And, unlike carbon dioxide, it’s not the end result of a valuable process; methane largely ends up in the atmosphere as the result of waste, lost during extraction and distribution.

Getting these losses under control would be one of the easiest ways to slow down greenhouse warming. But methane often escapes from lots of smaller, individual sources, which makes emissions difficult to track. To help get a handle on all the leaks, the Environmental Defense Fund has been working to put its own methane-monitoring satellite in orbit. On Wednesday, it announced that it was partnering with Google to take the data from the satellite, make it publicly available, and tie it to specific sources.

The case for MethaneSAT

Over the course of 20 years, methane is 84 times more potent than carbon dioxide when it comes to greenhouse warming. And most methane in the atmosphere ultimately reacts with oxygen, producing water vapor and carbon dioxide—both of which are also greenhouse gases. Those numbers are offset by the fact that methane levels in the atmosphere are very low, currently just under two parts per million (versus over 400 ppm for CO2). Still, levels have gone up considerably since monitoring started.
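
For scale, emissions accounting converts a methane release into CO2 equivalents by multiplying its mass by the global warming potential. A minimal sketch using the 20-year figure cited above (the leak size is an arbitrary example, not a number from the article):

```python
# Convert a methane release to CO2-equivalents using global warming potential (GWP).
# The leak size below is an arbitrary example, not a figure from the article.

GWP20_METHANE = 84        # 20-year GWP cited above (84x CO2, mass for mass)

leaked_methane_t = 1_000  # hypothetical leak, in metric tons of CH4
co2_equivalent_t = leaked_methane_t * GWP20_METHANE
print(f"{leaked_methane_t} t CH4 ≈ {co2_equivalent_t:,} t CO2e over a 20-year horizon")
```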

The primary source of the excess methane is the extraction and distribution of natural gas. In the US, the EPA has developed rules meant to force companies with natural gas infrastructure to find and fix leaks. (Unsurprisingly, Texas plans to sue to block this rule.) But finding leaks has turned out to be a challenge. The US has been using industry-wide estimates that turned out to be much lower than numbers based on monitoring a subset of facilities.

Globally, that sort of detailed surveying simply isn’t possible, and we don’t have the type of satellite-based instruments we need to focus on methane emissions. A researcher behind one global survey said, “We were quite disappointed because we discovered that the sensitivity of our system was pretty low.” (The survey did identify sites that were “ultra emitters” despite the sensitivity issues.)

To help identify the major sources of methane release, the Environmental Defense Fund, a US-based NGO, has spun off a project called MethaneSAT that will monitor the emissions from space. The project is backed by large philanthropic donations and has partnered with the New Zealand Space Agency. The Rocket Lab launch company will build the satellite control center in New Zealand, while SpaceX will carry the 350 kg satellite to orbit in a shared launch, expected in early March.

Once in orbit, the hardware will use methane’s ability to absorb in the infrared—the same property that causes all the problems—to track emissions globally at a resolution down below a square kilometer.

Handling the data

That will generate large volumes of data that countries may struggle to interpret. That’s where the new Google partnership comes in. Google will use the same AI capability it has developed to map features such as roads and sidewalks in satellite images, repurposed to identify oil and gas infrastructure. Both MethaneSAT’s emissions data and the infrastructure details will be combined and made available via the company’s Google Earth service.

Top image: A view of an area undergoing oil/gas extraction. Left: a close-up of an individual drilling site. Right: Computer-generated color coding of the hardware present at the site.

Google / EDF

The project builds off work Google has done previously by placing methane monitoring hardware on Street View photography vehicles, also in collaboration with the Environmental Defense Fund.

In a press briefing, Google’s Yael Maguire said that the challenge is keeping things up to date, as infrastructure in the oil and gas industry can change fairly rapidly. While he didn’t use it as an example, one illustration of that challenge was the rapid development of liquified natural gas import infrastructure in Europe in the wake of Russia’s invasion of Ukraine.

The key question, however, is one of who’s going to use this information. Extraction companies could use it to identify the sites of leaks and fix them but are unlikely to do that in the absence of a regulatory requirement. Governments could rely on this information to take regulatory actions but will probably want some sort of independent vetting of the data before doing so. At the moment, all EDF is saying is that it’s engaging in discussions with several parties about potentially using the data.

One clear user will be the academic community, which is already using less-targeted satellite data to explore the issue of methane emissions.

Regardless, as everyone involved in the project emphasizes, getting methane under control is probably the easiest and quickest way to eliminate a bit of impending warming. And that could help countries meet emissions targets without immediately starting on some of the slower and more expensive options. So, even if no one has currently committed to using this data, they may ultimately come around—because using it to do something is better than doing nothing.

Over 2 percent of the US’s electricity generation now goes to bitcoin

Mining stakes —

US government tracking the energy implications of booming bitcoin mining in the US.

Digital generated image of golden helium balloon in shape of bitcoin sign inflated with air pump and moving up against purple background.

It takes a lot of energy to keep pumping out more bitcoins.

What exactly is bitcoin mining doing to the electric grid? In the last few years, the US has seen a boom in cryptocurrency mining, and the government is now trying to track exactly what that means for the consumption of electricity. While its analysis is preliminary, the Energy Information Administration (EIA) estimates that large-scale cryptocurrency operations are now consuming over 2 percent of the US’s electricity. That’s roughly the equivalent of having added an additional state’s worth of demand to the grid over just the last three years.

Follow the megawatts

While there is some small-scale mining that goes on with personal computers and small rigs, most cryptocurrency mining has moved to large collections of specialized hardware. This hardware can be pricey compared to personal computers, but the main cost for these operations is electricity, so miners tend to move to places with low electricity rates. The EIA report notes that, in the wake of a crackdown on cryptocurrency in China, a lot of that movement has involved relocation to the US, where keeping electricity prices low has generally been a policy priority.

One independent estimate made by the Cambridge Centre for Alternative Finance had the US as the home of just over 3 percent of the global bitcoin mining at the start of 2020. By the start of 2022, that figure was nearly 38 percent.

The Cambridge Centre also estimates the global electricity use of all bitcoin mining, so it’s possible to multiply that by the US’s percentage and come up with an estimate for the amount of electricity the boom has consumed. Because of the uncertainties in these estimates, the number could be anywhere from 25 to 91 terawatt-hours. Even the low end of that range would mean bitcoin mining is now using the equivalent of Utah’s electricity consumption (the high end is roughly Washington’s), which has significant implications for the electric grid as a whole.
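
That calculation is just the global estimate multiplied by the US share. The sketch below uses an assumed global range as a stand-in for the Cambridge Centre’s published band, chosen so the result reproduces the 25–91 terawatt-hour figure above:

```python
# Sketch of the estimate described above: multiply the global bitcoin electricity
# estimate by the US share of mining. The global range below is a placeholder for
# the Cambridge Centre's uncertainty band, used here only for illustration.

us_share = 0.38                             # ~38 percent of global mining by early 2022
global_twh_low, global_twh_high = 66, 240   # assumed global range (TWh/year)

us_low = global_twh_low * us_share
us_high = global_twh_high * us_share
print(f"Estimated US bitcoin mining consumption: {us_low:.0f}-{us_high:.0f} TWh/year")
```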

So, the EIA decided it needed a better grip on what was going on. To get that, it went through trade publications, financial reports, news articles, and congressional investigation reports to identify as many bitcoin mining operations as it could. With 137 facilities identified, it then inquired about the power supply needed to operate them at full capacity, receiving answers for 101 of those facilities.

If running all-out, those 101 facilities would consume 2.3 percent of the US’s average power demand. That places them on the high side of the Cambridge Centre estimates.

Finding power-ups

The mining operations fall into two major clusters: one in Texas and one extending from western New York down the Appalachians to southern Georgia. While there are additional operations scattered throughout the US, these are the major sites.

The EIA has also found some instances where mining operations moved in near underutilized power plants and sent generation soaring again. Tracking the history of five of these plants showed that generation had fallen steadily from 2015 to 2020, reaching a low where they collectively produced just half a terawatt-hour. Miners moving in nearby tripled production in just a year and pushed it to over 2 terawatt-hours by 2022.

Power plants near bitcoin mining operations have seen generation surge over the last two years.

These are almost certainly fossil fuel plants that might be reasonable candidates for retirement if it weren’t for their use to supply bitcoin miners. So, these miners are contributing to all of the health and climate problems associated with the continued use of fossil fuels.

The EIA also found a number of strategies that miners used to keep their power costs low. In one case, they moved into a former aluminum smelting facility in Texas to take advantage of its capacious connections to the grid. In another, they put a facility next to a nuclear plant in Pennsylvania and set up a direct connection to the plant. The EIA also found cases where miners moved near natural gas fields that produced waste methane that would otherwise have been burned off.

Since bitcoin mining is the antithesis of an essential activity, several mining operations have signed up for demand-response programs, where they agree to take their operations offline if electricity demand is likely to exceed generating capacity in return for compensation by the grid operator. It has been widely reported that one facility in Texas—the one at the former aluminum smelter site—earned over $30 million by shutting down during a heat wave in 2023.

To better understand the implications of this major new drain on the US electric grid, the EIA will be performing monthly analyses of bitcoin operations during the first half of 2024. But based on these initial numbers, it’s clear that the relocation of so many mining operations to the US will significantly hinder efforts to bring the US’s electric grid to carbon neutrality.

Urban agriculture’s carbon footprint can be worse than that of large farms

Greening your greens —

Saving on the emissions associated with shipping doesn’t guarantee a lower footprint.

Lots of plants in the foreground, and dense urban buildings in the background

A few years back, the Internet was abuzz with the idea of vertical farms running down the sides of urban towers, the idea being that growing crops where they’re actually consumed could eliminate the carbon emissions involved in shipping plant products long distances. But lifecycle analyses of those systems, which require a lot of infrastructure and energy, suggest they’d have a hard time doing better than more traditional agriculture.

But those systems represent only a small fraction of urban agriculture as it’s practiced. Most urban farming is a mix of local cooperative gardens and small-scale farms located within cities. And a lot less is known about the carbon footprint of this sort of farming. Now, a large international collaboration has worked with a number of these farms to get a handle on their emissions in order to compare those to large-scale agriculture.

The results suggest it’s possible that urban farming can have a lower impact. But it requires choosing the right crops and a long-term commitment to sustainability.

Tracking crops

Figuring out the carbon footprint of urban farms is a challenge because it involves tracking all the inputs, from infrastructure to fertilizers, as well as the productivity of the farm. Many urban farms, however, are nonprofits, cooperatives, and/or staffed primarily by volunteers, so detailed record-keeping can be difficult. To get around this, the researchers worked with a number of individual farms in France, Germany, Poland, the UK, and the US in order to get accurate accounts of materials and practices.

Data from large-scale agriculture for comparison is widely available, and it includes factors like transport of the products to consumers. The researchers used data from the same countries as the urban farms.

On average, the results aren’t good for urban agriculture. An average serving from an urban farm was associated with 0.42 kg of carbon dioxide equivalents. By contrast, traditional produce resulted in emissions of about 0.07 kg per serving—six times less.

But that average obscures a lot of nuance. Of the 73 urban farms studied, 17 outperformed traditional agriculture by this measure. And if the single highest-emitting farm was excluded from the analysis, the median of the urban farms ended up right around that 0.07 kg per serving.

All of this suggests the details of urban farming practices make a big difference. One thing that matters is the crop. Tomatoes tend to be fairly resource-intensive to grow and need to be shipped quickly in order to be consumed while ripe. Here, urban farms came in at 0.17 kg of carbon per serving, while conventional farming emits 0.27 kg/serving.
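
Those comparisons reduce to simple per-serving ratios; a quick check using the figures quoted above:

```python
# Per-serving carbon intensity comparisons from the figures quoted above
# (kg CO2-equivalent per serving).

footprints = {
    "all crops, urban farms": 0.42,
    "all crops, conventional": 0.07,
    "tomatoes, urban farms": 0.17,
    "tomatoes, conventional": 0.27,
}

print(f"Urban/conventional, all crops: "
      f"{footprints['all crops, urban farms'] / footprints['all crops, conventional']:.1f}x")
print(f"Urban/conventional, tomatoes: "
      f"{footprints['tomatoes, urban farms'] / footprints['tomatoes, conventional']:.2f}x")
```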

Difference-makers

One clear finding was that the intentions of those running the farms didn’t matter much. Organizations that had a mission of reducing environmental impact, or had taken steps like installing solar panels, were no better at keeping their emissions low.

The researchers note two practical reasons for the differences they saw. One is infrastructure, which is the single largest source of carbon emissions at small sites. These include things like buildings, raised beds, and compost handling. The best sites the researchers saw did a lot of upcycling of things like construction waste into structures like the surrounds for raised beds.

Infrastructure in urban sites is also a challenge because of the often intense pressure on land, which can mean gardens have to relocate. This can shorten the lifetime of infrastructure and increase its environmental impact.

Another major factor was the use of urban waste streams for the consumables involved with farming. Composting from urban waste essentially eliminated fertilizer use (it was only 5 percent of the rate of conventional farming). Here, practices matter a great deal, as some composting techniques allow the material to become oxygen-free, which results in the anaerobic production of methane. Rainwater use also made a difference; in one case, the carbon impact of water treatment and distribution accounted for over two-thirds of an urban farm’s emissions.

These findings suggest that careful planning could make urban farms effective at avoiding some of the carbon emissions of conventional agriculture. That would involve figuring out best practices for infrastructure and consumables, as well as targeting crops that can have high carbon emissions when grown on conventional farms.

But any negatives are softened by a couple of additional considerations. One is that even the worst-performing produce seen in this analysis is far better in terms of carbon emissions than eating meat. The researchers also point out that many of the cooperative gardens provide a lot of social functions—things like after-school programs or informal classes—that can be difficult to put an emissions price on. Maximizing these could definitely boost the societal value of the operations, even if it doesn’t have a clear impact on the environment.

Nature Cities, 2024. DOI: 10.1038/s44284-023-00023-3  (About DOIs).

40% of US electricity is now emissions-free

Decarbonizing, but slowly —

Good news as natural gas, coal, and solar see the biggest changes.

Image of electric power lines with a power plant cooling tower in the background.

Just before the holiday break, the US Energy Information Administration released data on the country’s electrical generation. Because of delays in reporting, the monthly data runs through October, so it doesn’t provide a complete picture of the changes we’ve seen in 2023. But some of the trends now seem locked in for the year: wind and solar are likely to be in a dead heat with coal, and all carbon-emissions-free sources combined will account for roughly 40 percent of US electricity production.

Tracking trends

Having data through October necessarily provides an incomplete picture of 2023. There are several factors that can cause the later months of the year to differ from the earlier ones. Some forms of generation are seasonal—notably solar, which has its highest production over the summer months. Weather can also play a role, as unusually high demand for heating in the winter months could potentially require that older fossil fuel plants be brought online. It also influences production from hydroelectric plants, creating lots of year-to-year variation.

Finally, everything’s taking place against a backdrop of booming construction of solar and natural gas. So, it’s entirely possible that we will have built enough new solar over the course of the year to offset the seasonal decline at the end of the year.

Let’s look at the year-to-date data to get a sense of the trends and where things stand. We’ll then check the monthly data for October to see if any of those trends show indications of reversing.

The most important takeaway is that energy use is largely flat. Overall electricity production year-to-date is down by just over one percent from 2022, though demand was higher this October compared to last year. This is in keeping with a general trend of flat-to-declining electricity use as greater efficiency is offsetting factors like population growth and expanding electrification.

That’s important because it means that any newly added capacity will displace the use of existing facilities. And, at the moment, that displacement is happening to coal.

Can’t hide the decline

At this point last year, coal had produced nearly 20 percent of the electricity in the US. This year, it’s down to 16.2 percent, and only accounts for 15.5 percent of October’s production. Wind and solar combined are presently at 16 percent of year-to-date production, meaning they’re likely to be in a dead heat with coal this year and easily surpass it next year.

Year-to-date, wind is largely unchanged since 2022, accounting for about 10 percent of total generation, and it’s up to over 11 percent in the October data, so that’s unlikely to change much by the end of the year. Solar has seen a significant change, going from five to six percent of the total electricity production (this figure includes both utility-scale generation and the EIA’s estimate of residential production). And it’s largely unchanged in October alone, suggesting that new construction is offsetting some of the seasonal decline.

Coal is being squeezed out by natural gas, with an assist from renewables.

Eric Bangeman/Ars Technica

Hydroelectric production has dropped by about six percent since last year, causing it to slip from 6.1 percent to 5.8 percent of the total production. Depending on the next couple of months, that may allow solar to pass hydro on the list of renewables.

Combined, the three major renewables account for about 22 percent of year-to-date electricity generation, up about half a percentage point since last year. They’re up by even more in the October data, placing them well ahead of both nuclear and coal.

Nuclear itself is largely unchanged, allowing it to pass coal thanks to the latter’s decline. Its output has been boosted by a new 1.1-gigawatt reactor that came online this year (a second at the same site, Vogtle in Georgia, is set to start commercial production at any moment). But that’s likely to be the end of new nuclear capacity for this decade; the challenge will be keeping existing plants open despite their age and high costs.

If we combine nuclear and renewables under the umbrella of carbon-free generation, then that’s up by nearly a percentage point since 2022 and is likely to surpass 40 percent for the first time.
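
The shares quoted throughout this piece are simply each source’s year-to-date generation divided by the total. The sketch below uses placeholder terawatt-hour values chosen only to roughly echo the percentages in the text; they are not EIA figures:

```python
# Sketch of how generation shares are derived: each source's year-to-date
# generation divided by the total. TWh values below are placeholders chosen to
# roughly reproduce the percentages discussed above; they are not EIA data.

ytd_generation_twh = {
    "natural gas": 1480,
    "coal": 555,
    "nuclear": 650,
    "wind": 345,
    "solar": 205,
    "hydro": 200,
    "other": 65,
}

total = sum(ytd_generation_twh.values())
for source, twh in sorted(ytd_generation_twh.items(), key=lambda kv: -kv[1]):
    print(f"{source:>12}: {twh / total:5.1%}")

carbon_free = sum(ytd_generation_twh[s] for s in ("nuclear", "wind", "solar", "hydro"))
print(f"{'carbon-free':>12}: {carbon_free / total:5.1%}")
```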

The only thing that’s keeping carbon-free power from growing faster is natural gas, which is the fastest-growing source of generation at the moment, going from 40 percent of the year-to-date total in 2022 to 43.3 percent this year. (It’s actually slightly below that level in the October data.) The explosive growth of natural gas in the US has been a big environmental win, since it creates the least particulate pollution of all the fossil fuels, as well as the lowest carbon emissions per unit of electricity. But its use is going to need to start dropping soon if the US is to meet its climate goals, so it will be critical to see whether its growth flatlines over the next few years.

Outside of natural gas, however, all the trends in US generation are good, especially considering that the rise of renewable production would have seemed like an impossibility a decade ago. Unfortunately, the pace is currently too slow for the US to have a net-zero electric grid by the end of the decade.

Government makes an app to cut down government’s role in solar permitting

Aerial view of houses with roof-top solar panels.

NREL has taken some of the hassle out of getting permits for projects like these.

Can government agencies develop software to help cut bureaucratic red tape through automation? The answer is “yes,” according to the promising results achieved by the National Renewable Energy Laboratory (NREL), which has saved thousands of hours of labor for local governments by creating a tool called SolarAPP+ (Solar Automated Permit Processing Plus) for residential solar permits.

“We estimate that automatic SolarAPP+ permitting saved around 9,900 hours of… staff time in 2022,” NREL staff wrote in the report “SolarAPP+ Performance Review (2022 Data).” “Based on median timelines, a typical SolarAPP+ project is permitted and inspected 13 business days sooner than traditional projects… SolarAPP+ has eliminated over 134,000 days in permitting-related delays.”

SolarAPP+ automates over 100 compliance checks in the permitting process that are usually the responsibility of city, county, or town employees, according to Jeff Cook, SolarAPP+ program lead at NREL and first author of the report. It can be more accurate, thorough, and efficient than a time-pressured local government employee would be.
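
The article doesn’t spell out SolarAPP+’s rule set, but an automated compliance check of this kind generally boils down to running an application’s declared equipment and ratings through a list of pass/fail rules. The sketch below is a generic, hypothetical illustration; the fields, thresholds, and approved-equipment list are invented, not SolarAPP+’s actual checks.

```python
# Generic illustration of an automated permit compliance check. The rules and field
# names below are invented for this sketch; they are not SolarAPP+'s actual rule set.

from dataclasses import dataclass

@dataclass
class SolarApplication:
    system_size_kw: float
    inverter_model: str
    main_breaker_amps: int
    pv_breaker_amps: int

# Hypothetical list of pre-approved (certified) inverters.
APPROVED_INVERTERS = {"ACME-5000", "SUNBOX-7.6"}

def check_application(app: SolarApplication) -> list[str]:
    """Return a list of human-readable compliance failures (empty means it passes)."""
    failures = []
    if app.inverter_model not in APPROVED_INVERTERS:
        failures.append(f"Inverter {app.inverter_model} is not on the approved list")
    if app.system_size_kw > 15:
        failures.append("System larger than the 15 kW threshold assumed for automated review")
    if app.pv_breaker_amps > app.main_breaker_amps:
        failures.append("PV breaker rating exceeds the main breaker rating")
    return failures

app = SolarApplication(7.2, "ACME-5000", 200, 40)
issues = check_application(app)
print("Approved" if not issues else "\n".join(issues))
```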

Saving time and money

Sometimes, the cost of permitting can be higher than the cost of solar hardware, Cook said. It depends on the specifics of the project.

“We knew that residential rooftop solar volume was increasing across the country,” Cook said. “It took us… 20 years to get to a million PV installations. And I think we got to 2 million PV installations just a few years later. And so there’s a lot of solar volume out there. And the problem is that each one of those systems needs to be reviewed for code compliance. And so if you need a human to review that, you’ve got a million applications.”

“When regulations make it unnecessarily difficult for people to quickly install solar and storage systems, it hurts everyone,” said Senator Scott Wiener (D-Calif.) in a press statement. “It hurts those who want to install solar. And it hurts communities across California, which are being negatively impacted by climate change. We need to make it easier for people to use renewable energy—that’s just a no-brainer. Expediting solar permitting is something we can do to make this a reality.”

A coalition of stakeholders from the solar industry, the US Department of Energy, and the building code-development community requested that NREL develop the software, Cook said. The organizations represented included UL Solutions and the Interstate Renewable Energy Council. (UL Solutions is a company that addresses a broad range of safety issues; initially, it focused on fire and electrical safety.)

“What we identified is the community need for the software and we identified that there was a gap in the private sector,” Cook said. “There was no incentive to do it from any active members of the private sector, but a real potential opportunity or value to the public good if such a software existed and was publicly available and free for a local government to adopt.”

Cook estimates that hundreds of thousands of hours in plan review time would have been required to manually approve all of the residential solar permits in the United States in recent years. Approving a permit for a residential solar project can take local government staff 15 minutes to an hour, and around 30 percent of the applications are later revised.
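
Combining the figures quoted here gives a sense of that scale (a minimal sketch; the exact assumptions behind Cook’s estimate aren’t given, so the application count is treated as a round number):

```python
# Rough scale of the manual review burden described above, using the review-time
# range and revision rate quoted in the text. The application count is the
# "million applications" figure mentioned earlier, treated as a round number.

applications = 1_000_000
review_minutes_low, review_minutes_high = 15, 60
revision_rate = 0.30   # ~30 percent of applications get revised and re-reviewed

def total_review_hours(minutes_per_review: float) -> float:
    reviews = applications * (1 + revision_rate)
    return reviews * minutes_per_review / 60

low = total_review_hours(review_minutes_low)
high = total_review_hours(review_minutes_high)
print(f"Estimated review effort: {low:,.0f} to {high:,.0f} staff-hours")
```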

A flood of applications

“It just inundates the staff with work that they have to do,” Cook said.

“We are seeing about 750 residential requests over the past 12 months, which is about double the number of applications we saw two years ago,” said Kate Gallego, mayor of Phoenix, at the SolarAPP+ Industry Roundtable. “When I ask people in industry what we can do to speed up deployment of solar, they ask, ‘Can you do permitting faster?’ We’re at about 30 days now. We want to get that permitted as fast as possible, but we don’t want to sacrifice safety, and we want to make sure we’re not just doing it quickly, but well. That’s why this partnership was very attractive to me.”

Up to five separate departments may review the permits—the ones that oversee structural, electrical, fire, planning, and zoning decisions, Cook said.

“There’s usually a queue,” Cook said. “Just because it takes the jurisdiction only 15 minutes to review doesn’t mean that you send it to them today—they review it an hour later and get back to you. The average is, across the country, a seven-day turnaround, but it can be 30 days plus. It really varies across the country depending on how much volume of solar is in that space.”
