Science

US solar production soars by 25 percent in just one year

Solar sailing —

2024 is seeing the inevitable outcome of the building boom in solar farms.

A single construction worker set in the midst of a sea of solar panels.

With the plunging price of photovoltaics, the construction of solar plants has boomed in the US. Last year, for example, the US Energy Information Administration (EIA) expected that over half of the new generating capacity would be solar, with a lot of it coming online at the very end of the year for tax reasons. Yesterday, the EIA released electricity generation numbers for the first five months of 2024, and that construction boom has seemingly made itself felt: generation by solar power has shot up by 25 percent compared to just one year earlier.

The EIA breaks down solar production according to the size of the plant. Large grid-scale facilities have their production tracked, giving the EIA hard numbers. For smaller installations, like rooftop solar on residential and commercial buildings, the agency has to estimate the amount produced, since the hardware often sits behind the metering equipment and so only shows up as lower-than-expected consumption.

In terms of utility-scale production, the first five months of 2024 saw it rise by 29 percent compared to the same period in the year prior. Small-scale solar was “only” up by 18 percent, with the combined number rising by 25.3 percent.
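For those who want to check the arithmetic, here is a minimal sketch in Python that takes the three growth figures above as given and back-solves for the share of last year's solar output that came from utility-scale plants. Solving for that share is our own illustration, not an EIA calculation.

```python
# Back-of-the-envelope check of the EIA growth figures quoted above.
# The three percentages come from the article; everything else is an
# illustrative assumption.

utility_growth = 0.29    # utility-scale solar, Jan-May 2024 vs. Jan-May 2023
small_growth = 0.18      # small-scale (e.g., rooftop) solar, same period
combined_growth = 0.253  # combined growth reported by the EIA

# If w is the fraction of last year's solar generation that was utility-scale,
# then combined_growth = w * utility_growth + (1 - w) * small_growth.
# Solving for w:
w = (combined_growth - small_growth) / (utility_growth - small_growth)

print(f"Implied utility-scale share of last year's solar output: {w:.0%}")
# Roughly two-thirds, consistent with grid-scale plants dominating US solar.
```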

Most other generating sources were largely flat, year over year. This includes coal, nuclear, and hydroelectric, all of which changed by 2 percent or less. Wind was up by 4 percent, while natural gas rose by 5 percent. Because natural gas is the largest single source of energy on the grid, however, its 5 percent rise represents a lot of electrons—slightly more than the total increase in wind and solar.

US electricity sources for January through May of 2024. Note that the numbers do not add up to 100 percent due to the omission of minor contributors like geothermal and biomass.

John Timmer

Overall, electricity use was up by about 4 percent compared to the same period in 2023. This could simply be a matter of weather conditions that required more heating or cooling. But several trends should be pushing electricity usage upward: the rise of bitcoin mining, the growth of data centers, and the electrification of appliances and transport. So far, that hasn’t shown up in actual US electricity usage, which has stayed largely flat for decades. It’s possible that 2024 is the year when usage starts going up again.

More to come

It’s worth noting that this data all comes from before some of the most productive months of the year for solar power; overall, the EIA is predicting that solar production could rise by as much as 42 percent in 2024.

So, where does this leave the US’s efforts to decarbonize? If we combine nuclear, hydro, wind, and solar under the umbrella of carbon-free power sources, then these account for about 45 percent of US electricity production so far this year. Within that category, wind and solar now produce more than three times as much electricity as hydroelectric, and roughly the same amount as nuclear.

Wind and solar have also produced 1.3 times as much electricity as coal so far in 2024, with solar alone now producing about half as much as coal. That said, natural gas still produces twice as much electricity as wind and solar combined, indicating we still have a long way to go to decarbonize our grid.

When you look at the generating facilities that will be built over the next 12 months, it’s difficult not to see a pattern.

Still, we can expect solar’s output to climb even before the year is out. That’s in part because we don’t yet have numbers for June, the month that contains the longest day of the year. But it’s also because the construction boom shows no sign of stopping. As the map above shows, solar and wind deployments are expected to dwarf everything else over the coming year. The items in gray on the map primarily represent battery storage, which will allow us to make better use of those renewables, as well.

By contrast, facilities that are scheduled for retirement over the next year largely consist of coal and natural gas plants.

Scientists unlock more secrets of Rembrandt’s pigments in The Night Watch

More from operation night watch —

Use of arsenic sulfides for yellow, orange/red hues adds to artist’s known pigment palette.

The Night Watch, or Militia Company of District II under the Command of Captain Frans Banninck Cocq (1642)

Rembrandt’s The Night Watch underwent many chemical and mechanical alterations over the last 400 years.

Public domain

Since 2019, researchers have been analyzing the chemical composition of the materials used to create Rembrandt’s masterpiece, The Night Watch, as part of the Rijksmuseum’s ongoing Operation Night Watch, devoted to its long-term preservation. Chemists at the Rijksmuseum and the University of Amsterdam have now detected unusual arsenic-based yellow and orange/red pigments used to paint the buff coat of one of the central figures in the painting, according to a recent paper in the journal Heritage Science. It’s a new addition to Rembrandt’s known pigment palette that further adds to our growing body of knowledge about the materials he used.

As previously reported, past analyses of Rembrandt’s paintings identified many pigments the Dutch master used in his work, including lead white, multiple ochres, bone black, vermilion, madder lake, azurite, ultramarine, yellow lake, and lead-tin yellow, among others. The artist rarely used pure blue or green pigments, with Belshazzar’s Feast being a notable exception. (The Rembrandt Database is the best resource for a comprehensive chronicling of the many different investigative reports.)

Early last year, the researchers at Operation Night Watch found rare traces of a compound called lead formate in the painting—surprising in itself, but the team also identified those formates in areas where there was no lead pigment, white or yellow. It’s possible that lead formates disappear fairly quickly, which could explain why they have not been detected in paintings by the Dutch Masters until now. But if that is the case, why didn’t the lead formate disappear in The Night Watch? And where did it come from in the first place?

Hoping to answer these questions, the team whipped up a model of “cooked oils” from a 17th-century recipe and analyzed those model oils with synchrotron radiation. The results supported their hypothesis that the oil used for light parts of the painting was treated with an alkaline lead drier. The fact that The Night Watch was revarnished with an oil-based varnish in the 18th century complicates matters, as this may have provided a fresh source of formic acid, such that different regions of the painting rich in lead formates may have formed at different times in the painting’s history.

Last December, the team turned its attention to the preparatory layers applied to the canvas. It’s known that Rembrandt used a quartz-clay ground for The Night Watch—the first time he had done so, perhaps because the colossal size of the painting “motivated him to look for a cheaper, less heavy and more flexible alternative for the ground layer” than the red earth, lead white, and cerussite he was known to use on earlier paintings.

(a) Rembrandt’s The Night Watch. (b) Detail of figure’s embroidered gold buff coat. (c) X-ray diffraction image of coat detail showing arsenic. (d) Stereomicroscope image showing arsenic hot spot.

N. De Keyser et al., 2024

They used 3D X-ray methods to capture more detail, revealing the presence of an unknown (and unexpected) lead-containing layer located just underneath the ground layer. This could be the result of a lead compound added, as a drying agent, to the oil used to prepare the canvas—perhaps to protect the painting from the damaging effects of humidity. (Usually a glue sizing was used before applying the ground layer.) The lead layer discovered last year could be the reason for the unusual lead protrusions in areas of The Night Watch, since there are no other lead-containing compounds in the paint. It’s possible that lead migrated into the painting’s ground layer from the lead-oil preparatory layer below.

An intentional combination

The presence of arsenic sulfides in The Night Watch appears to be an intentional pigment combination by Rembrandt, according to the authors of this latest paper. Artists throughout history have used naturally occurring orpiment and realgar, as well as artificial arsenic sulfide pigments, to get yellow, orange, and red hues in their paints. Orpiment was also used for medicinal purposes, in hair removal creams and oils, in wax seals, yellow ink, bookbinder green (mixed with indigo), and for the treatment or coating of metals like silver.

However, the use of artificial arsenic sulfides has rarely been reported in artworks, although they are mentioned in multiple artists’ treatises dating back to the 15th century. Earlier work using advanced analytical techniques such as Raman spectroscopy and X-ray powder diffraction revealed that Rembrandt used arsenic sulfide pigments (artificial orpiment) in two late paintings: The Jewish Bride (c. 1665) and The Man in a Red Cap (c. 1665).

For this latest work, Nouchka De Keyser of the Rijksmuseum and co-authors used macroscopic X-ray fluorescence imaging to map The Night Watch, which revealed the presence of arsenic and sulfur in the doublet sleeves and embroidered buff coat worn by Lt. Willem Van Ruytenburch, i.e., the central figure to the right of Captain Frans Banninck Cocq in the painting. The researchers initially assumed that this was due to Rembrandt’s use of orpiment for yellow hues and realgar for red hues.

(a, b) Pages from Johann Kunckel’s Ars Vitraria Experimentalis, 1679. (c) Page from the Weimar taxa of 1674 including prices for white, yellow, and red arsenic.

N. De Keyser et al., 2024

To learn more, they took tiny samples and analyzed them with light microscopy, micro-Raman spectroscopy, electron microscopy, and X-ray powder diffraction. They found the yellow particles were actually pararealgar while the orange to red particles were semi-amorphous pararealgar. These are more unusual arsenic sulfide components, typically associated with degradation products from either the natural minerals or their artificial equivalents as they age.

But De Keyser et al. concluded that the presence of these components was actually an intentional mixture, based on their perusal of multiple historical sources and catalogs of collection cabinets with long lists of various arsenic sulfides. There was clearly contemporary knowledge of manipulating both natural and artificial arsenic sulfides to get different shades of yellow, orange, and red.

They also found vermilion and lead-tin yellow in the paint mixture; Rembrandt was known to use these to add brightness and intensity to his paintings. In the case of The Night Watch, “Rembrandt clearly aimed for a bright orange tone with a high color strength that allowed him to create an illusion of the gold thread embroidery in Van Ruytenburch’s costume,” the authors wrote. “The artificial orange to red arsenic sulfide might have offered different optical and rheological paint properties as compared to the mineral form of orpiment and realgar.”

In addition, the team examined paint samples from different artists known to use arsenic sulfides—whose works are also part of the Rijksmuseum collection—and found a similar mixture of pigments in a painting by Rembrandt’s contemporary, Willem Kalf. “It is evidence that a variety of natural and artificial arsenic sulfides were manufactured and traded during Rembrandt’s time and were available in Amsterdam,” the authors wrote—most likely imported, since the Dutch Republic did not have considerable mining resources.

Heritage Science, 2024. DOI: 10.1186/s40494-024-01350-x  (About DOIs).

No, NASA hasn’t found life on Mars yet, but the latest discovery is intriguing

Look at the big brain on percy —

“These spots are a big surprise.”

NASA’s Perseverance rover discovered “leopard spots” on a reddish rock nicknamed “Cheyava Falls” in Mars’ Jezero Crater in July 2024.

NASA/JPL-Caltech/MSSS

NASA’s Perseverance rover has found a very intriguing rock on the surface of Mars.

An arrowhead-shaped rock observed by the rover has chemical signatures and structures that could have been formed by ancient microbial life. To be absolutely clear, this is not irrefutable evidence of past life on Mars, when the red planet was more amenable to water-based life billions of years ago. But discovering these colored spots on this rock is darn intriguing and has Mars scientists bubbling with excitement.

“These spots are a big surprise,” said David Flannery, an astrobiologist and member of the Perseverance science team from the Queensland University of Technology in Australia, in a NASA news release. “On Earth, these types of features in rocks are often associated with the fossilized record of microbes living in the subsurface.”

What the rover found

This is a very recent discovery, and the science has not yet been peer-reviewed. The sample was collected on July 21—a mere four days ago—as the rover explored the Neretva Vallis riverbed. This valley was formed long ago when water rushed into Jezero Crater.

The science team operating Perseverance has nicknamed the rock Cheyava Falls and subjected it to multiple scans by the rover’s SHERLOC (Scanning Habitable Environments with Raman & Luminescence for Organics & Chemicals) instrument. The distinctive colorful spots, containing both iron and phosphate, are a smoking gun for certain chemical reactions—rather than microbial life itself.

On Earth, microbial life can derive energy from these kinds of chemical reactions. So, what we have here is a plausible source of energy for microbes on Mars. In addition, there are organic chemicals present on the same rock, which is consistent with something living there. From this, it is tempting to jump to the idea of microbes living on a rock, eons ago, in a Martian river. But this is not direct evidence of life.

NASA has a seven-step process for determining whether something can be confirmed as extraterrestrial life. This is known as the CoLD scale, for Confidence of Life Detection. In this case, the detection of these spots on a Martian rock represents just the first of seven steps—for example, scientists must still rule out non-biological possibilities and identify other signals before they can have confidence in off-world life.

Bring them home

According to NASA, Perseverance has used all of its available instrumentation to study Cheyava Falls. “We have zapped that rock with lasers and X-rays and imaged it literally day and night from just about every angle imaginable,” said Ken Farley, Perseverance project scientist. “Scientifically, Perseverance has nothing more to give.”

The discovery provides some wind in the sails for NASA’s flagging efforts to devise and fly a Mars Sample Return mission. The agency’s most recent plan, costing $11 billion, was determined to be too expensive. Now, the space agency is asking the industry for help. In June it commissioned 10 studies on alternative means of returning rocks from Mars sooner, and presumably for a lower cost.

Now, scientists can point to rocks like Cheyava Falls and say this is precisely why they must be studied in ultra-capable labs back on Earth.

Webb directly images giant exoplanet that isn’t where it should be

How do you misplace that? —

Six times bigger than Jupiter, the planet is the oldest and coldest yet imaged.

A dark background with red and blue images embedded in it, both showing a single object near an area marked with an asterisk.

Image of Epsilon Indi A at two wavelengths, with the position of its host star indicated by an asterisk.

T. Müller (MPIA/HdA), E. Matthews (MPIA)

We have a couple of techniques that allow us to infer the presence of an exoplanet based on its effects on the light coming from its host star. But there’s an alternative approach that sometimes works: image them directly. It’s much more limited, since the planet has to be pretty big and orbiting far enough from its star to keep its light from being swamped by the far more intense starlight.

Still, it has been done. Massive exoplanets have been captured relatively shortly after their formation, when the heat generated by the collapse of material into the planet causes them to glow in the infrared. But the Webb telescope is far more sensitive than any infrared observatory we’ve ever built, and it has managed to image a relatively nearby exoplanet that’s roughly as old as the ones in our Solar System.

Looking directly at a planet

What do you need to directly image a planet that’s orbiting a star light-years away? The first thing is a bit of hardware called a coronagraph attached to your telescope. This is responsible for blocking the light from the star the planet is orbiting; without it, that light will swamp any other sources in the exosolar system. Even with a good coronagraph, you need the planets to be orbiting at a significant distance from the star so that they’re cleanly separated from the signal being blocked by the coronagraph.

Then, you need the planet to emit a fair bit of light. While the right surface composition might allow the planet to be highly reflective, that’s not going to be a great option considering the distances we’d need the planet to be orbiting to be visible at all. Instead, the planets we’ve spotted so far have been young and still heated by the processes that brought material together to form a planet in the first place. Being really big doesn’t hurt matters either.

Put that all together, and what you expect to be able to spot is a very young, very distant planet that’s massive enough to fall into the super-Jupiter category.

But the launch of the Webb Space Telescope has given us new capabilities in the infrared range, and a large international team of researchers pointed it at a star called Epsilon Indi A. It’s a bit less than a dozen light-years away (which is extremely close in astronomical terms), and the star is similar in both size and age to the Sun, making it an interesting target for observations. Perhaps most significantly, previous data had suggested a large exoplanet would be found, based on indications that the star was apparently shifting as the exoplanet tugged on it during its orbit.

And there was in fact an indication of a planet there. It just didn’t look much like the expected planet. “It’s about twice as massive, a little farther from its star, and has a different orbit than we expected,” said Elisabeth Matthews, one of the researchers involved.

At the moment, there’s no explanation for the discrepancy. The odds of it being an unrelated background object are extremely small. And a reanalysis of data on the motion of Epsilon Indi A suggests that this is likely to be the only large planet in the system—there could be additional planets, but they’d be much smaller. So, the researchers named the planet Epsilon Indi Ab, even though that was the same name given to the planet that doesn’t seem to match this one’s properties.

Big, cold, and a bit enigmatic

The revised Epsilon Indi Ab is a large planet, estimated at roughly six times the mass of Jupiter. It’s also orbiting at roughly the same distance as Neptune. It’s generally bright across the mid-infrared, consistent with a planet that’s roughly 275 Kelvin—not too far off from room temperature. That’s also close to what we would estimate for its temperature simply based on the age of the planet. That makes it the coolest exoplanet ever directly imaged.
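To see why a planet this cool is still a good mid-infrared target, Wien's displacement law gives the wavelength at which a blackbody emits most strongly. Here is a minimal sketch that treats the planet as an ideal blackbody, which is a simplifying assumption for illustration only:

```python
# Wien's displacement law: the peak emission wavelength of a blackbody
# scales inversely with its temperature. The 275 K figure comes from the
# article; treating the planet as an ideal blackbody is an assumption.

WIEN_CONSTANT_UM_K = 2898.0  # Wien's displacement constant, micrometer-kelvins

def peak_emission_wavelength_um(temperature_k: float) -> float:
    """Wavelength (in micrometers) where a blackbody at temperature_k peaks."""
    return WIEN_CONSTANT_UM_K / temperature_k

print(f"{peak_emission_wavelength_um(275):.1f} um")   # ~10.5 um: mid-infrared
print(f"{peak_emission_wavelength_um(5800):.2f} um")  # ~0.5 um: a Sun-like star peaks in visible light
```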

While the signal from the planet was quite bright at a number of wavelengths, the planet couldn’t even be detected in one area of the spectrum (3.5 to 5 micrometers, for the curious). That’s considered an indication that the planet has high levels of elements heavier than helium, and a high ratio of carbon to oxygen. The gap in the spectrum may influence estimates of the planet’s age, so further observations will probably need to be conducted to clarify why there are no emissions at these wavelengths.

The researchers also suggest that imaging more of these cool exoplanets should be a priority, given that we should be cautious about extrapolating anything from a single example. In that sense, this first image provides an important confirmation that, with Webb and its coronagraph, we now have the tools we need to do exactly that, and that they work very well.

Nature, 2024. DOI: 10.1038/s41586-024-07837-8  (About DOIs).

Woman who went on the lam with untreated TB is now cured

happy ending —

The woman realized how serious her infection was once she was in custody.

Scanning electron micrograph of Mycobacterium tuberculosis bacteria, which cause TB.

“She’s cured!”

Health officials in Washington state are celebrating the clean bill of health for one particularly notable resident: the woman who refused to isolate and get treatment for her active case of infectious tuberculosis for over a year. She even spent around three months on the lam, dodging police as they tried to execute a civil arrest warrant. During her time as a fugitive, police memorably reported that she took a city bus to go to a casino.

The woman, identified only as V.N. in court documents, had court orders to get treatment for her tuberculosis infection beginning in January of 2022. She refused to comply as the court renewed the orders on a monthly basis and held at least 17 hearings on the matter. The judge in her case issued an arrest warrant in March of 2023, but V.N. evaded law enforcement. She was finally arrested in June of last year and spent 23 days getting court-ordered treatment behind bars before being released with conditions.

This week, James Miller, a health officer for Washington’s Tacoma-Pierce County, where V.N. resides, announced the happy ending.

“The woman cooperated with Pierce County Superior Court’s orders and our disease investigators. She’s tested negative for tuberculosis (also called TB) multiple times. She gained back weight she’d lost and is healthy again,” Miller reported. He noted that V.N. and her family gave the county permission to share the news and said they are now happy she received the treatment she needed.

Amid the legal attempts to get V.N. treated, the health department noted in court documents that V.N. had been in a car accident in January 2023, after which she had gone to an emergency department complaining of chest pain. Doctors there—who did not know she had an active case of tuberculosis—took X-rays of her lungs. The images revealed that her lungs were in such bad shape that the doctors thought she had cancer. In fact, the images revealed that her tuberculosis case was worsening.

Risky infection

Tuberculosis is a potentially life-threatening bacterial infection caused by Mycobacterium tuberculosis, which mostly infects the lungs but can invade other areas of the body as well. The bacteria spread through the air when an infected person coughs, sneezes, spits, or otherwise launches bacterial cells into the air around them. Although transmission mostly occurs from close, prolonged contact, inhaling only a few of the microscopic germs is enough to spark an infection. According to the World Health Organization, tuberculosis killed 1.3 million people in 2022 and infected an estimated 10.6 million. Worldwide, it was the second leading infectious killer after COVID-19 that year.

Treatment for tuberculosis requires lengthy antibiotic regimens, which are typically taken for four to nine months. Drug-resistant infections require second-line, more toxic drugs. In the past, treatment for multi-drug resistant tuberculosis could last up to 20 months, but newer clinical guidances prioritize shorter regimens.

Given the risks to herself and those around her, county health officials did all they could to get V.N. treated. “Seeking a court order is our last resort after we exhaust all other options,” Miller said. “It’s a difficult process that takes a lot of time and coordination with other agencies.”

But according to Miller, V.N. softened to the idea of being treated once she was in custody and county disease investigators worked to gain her trust. “At that point, she realized how serious her situation was and decided to treat her illness,” he said. With treatment, she “regained her health over time.”

“She is now cured, which means that tuberculosis no longer poses a risk to her health,” he concluded. “This also means she is no longer at risk of infecting others.”

Appeals Court denies stay to states trying to block EPA’s carbon limits

You can’t stay here —

The EPA’s plan to cut carbon emissions from power plants can go ahead.

Cooling towers emitting steam, viewed from above.

On Friday, the US Court of Appeals for the DC Circuit denied a request to put a hold on recently formulated rules that would limit carbon emissions made by fossil fuel power plants. The request, made as part of a case that sees 25 states squaring off against the EPA, would have put the federal government’s plan on hold while the case continued. Instead, the EPA will be allowed to continue the process of putting its rules into effect, and the larger case will be heard under an accelerated schedule.

Here we go again

The EPA’s efforts to regulate carbon emissions from power plants go back all the way to the second Bush administration, when a group of states successfully sued the EPA to force it to regulate greenhouse gas emissions. This led to a formal endangerment finding regarding greenhouse gases during the Obama administration, something that remained unchallenged even during Donald Trump’s term in office.

Obama tried to regulate emissions through the Clean Power Plan, but his second term came to an end before this plan had cleared court hurdles, allowing the Trump administration to formulate a replacement that did far less than the Clean Power Plan. This took place against a backdrop of accelerated displacement of coal by natural gas and renewables that had already surpassed the changes envisioned under the Clean Power Plan.

In any case, the Trump plan was thrown out by the courts the day before Biden took office, allowing his EPA to start with a clean slate. Biden’s original plan, which would have had states regulate emissions from their electric grids by treating them as a single system, was thrown out by the Supreme Court, which ruled that emissions would need to be regulated on a per-plant basis in a decision known as West Virginia v. EPA.

So, that’s what the agency is now trying to do. Its plan, issued last year, would allow fossil-fuel-burning plants that are being shut down in the early 2030s to continue operating without restrictions. Other plants will need to install carbon capture equipment or, in the case of natural gas plants, swap in green hydrogen as their primary fuel.

And again

In response, 25 states have sued to block the rule (you can check out this filing to see if yours is among them). The states also sought a stay that would prevent the rule from being implemented while the case went forward. In it, they argue that carbon capture technology isn’t mature enough to form the basis of these regulations (something we predicted was likely to be a point of contention). The suit also suggests that the rules would effectively put coal out of business, something that’s beyond the EPA’s remit.

The DC Court of Appeals, however, was not impressed, ruling that the states’ arguments regarding carbon capture are insufficient: “Petitioners have not shown they are likely to succeed on those claims given the record in this case.” That’s the key hurdle for determining whether a stay is justified. Nor do the regulations pose a likelihood of irreparable harm; the court notes that states aren’t even expected to submit a plan for at least two years, and the regulations won’t kick in until 2030 at the earliest.

Meanwhile, the states cited the Supreme Court’s West Virginia v. EPA decision to argue against these rules, suggesting they represent a “major question” that requires input from Congress. The Court was also not impressed, writing that “EPA has claimed only the power to ‘set emissions limits under Section 111 based on the application of measures that would reduce pollution by causing the regulated source to operate more cleanly,’ a type of conduct that falls well within EPA’s bailiwick.”

To address the states’ concerns about the potential for irreparable harm, the court plans to hear the case during the 2024 term and has given the parties just two weeks to submit proposed briefing schedules.

SpaceX just stomped the competition for a new contract—that’s not great

A rocket sits on a launch pad during a purple- and gold-streaked dawn.

With Dragon and Falcon, SpaceX has become an essential contractor for NASA.

SpaceX

There is an emerging truth about NASA’s push toward commercial contracts that is increasingly difficult to escape: Companies not named SpaceX are struggling with NASA’s approach of awarding firm, fixed-price contracts for space services.

That truth is underscored by the recent award of an $843 million contract to SpaceX for a heavily modified Dragon spacecraft that will be used to deorbit the International Space Station by 2030.

The recently released source selection statement for the “US Deorbit Vehicle” contract, a process led by NASA head of space operations Ken Bowersox, reveals that the competition was a total stomp. SpaceX faced just a single serious competitor in this process, Northrop Grumman. And in all three categories—price, mission suitability, and past performance—SpaceX significantly outclassed Northrop.

Although it’s wonderful that NASA has an excellent contractor in SpaceX, it’s not healthy in the long term that there are so few credible competitors. Moreover, a careful reading of the source selection statement reveals that NASA had to really work to get a competition at all.

“I was really happy that we got proposals from the companies that we did,” Bowersox said during a media teleconference last week. “The companies that sent us proposals are both great companies, and it was awesome to see that interest. I would have expected a few more [proposals], honestly, but I was very happy to get the ones that we got.”

Commercial initiatives struggling

NASA’s push into “commercial” space began nearly two decades ago with a program to deliver cargo to the International Space Station. The space agency initially selected SpaceX and Rocketplane Kistler to develop rockets and spacecraft to accomplish this, but after Kistler missed milestones, it was replaced by Orbital Sciences Corporation. The cargo delivery program was largely successful, resulting in the Cargo Dragon (SpaceX) and Cygnus (Orbital Sciences) spacecraft. It continues to this day.

A commercial approach generally means that NASA pays a “fixed” price for a service rather than paying a contractor’s costs plus a fee. It also means that NASA hopes to become one of many customers. The idea is that, as the first mover, NASA is helping to stimulate a market by which its fixed-priced contractors can also sell their services to other entities—both private companies and other space agencies.

NASA has since extended this commercial approach to crew, with SpaceX and Boeing winning large contracts in 2014. However, only SpaceX has flown operational astronaut missions, while Boeing remains in the development and test phase, with its ongoing Crew Flight Test. Whereas SpaceX has sold half a dozen private crewed missions on Dragon, Boeing has yet to announce any.

Such a commercial approach has also been tried with lunar cargo delivery through the “Commercial Lunar Payload Services” program, as well as larger lunar landers (Human Landing System), next-generation spacesuits, and commercial space stations. Each of these programs has a mixed record at best. For example, NASA’s inspector general was highly critical of the lunar cargo program in a recent report, and one of the two spacesuit contractors, Collins Aerospace, recently dropped out because it could not execute on its fixed-price contract.

Some of NASA’s most important traditional space contractors, including Lockheed Martin, Boeing, and Northrop Grumman, have all said they are reconsidering whether to participate in fixed-price contract competitions in the future. For example, Northrop CEO Kathy Warden said last August, “We are being even more disciplined moving forward in ensuring that we work with the government to have the appropriate use of fixed-price contracts.”

So the large traditional space contractors don’t like fixed-price contracts, and many new space companies are struggling to survive in this environment.

Model mixes AI and physics to do global forecasts

Cloudy with a chance of accuracy —

Google/academic project is great with weather, has some limits for climate.

Image of a dark blue flattened projection of the Earth, with lighter blue areas showing the circulation of the atmosphere.

Image of some of the atmospheric circulation seen during NeuralGCM runs.

Google

Right now, the world’s best weather forecast model is a General Circulation Model, or GCM, put together by the European Centre for Medium-Range Weather Forecasts. A GCM is in part based on code that calculates the physics of various atmospheric processes that we understand well. For a lot of the rest, GCMs rely on what’s termed “parameterization,” which attempts to use empirically determined relationships to approximate what’s going on with processes where we don’t fully understand the physics.

Lately, GCMs have faced some competition from machine-learning techniques, which train AI systems to recognize patterns in meteorological data and use those to predict the conditions that will result over the next few days. Their forecasts, however, tend to get a bit vague after more than a few days and can’t deal with the sort of long-term factors that need to be considered when GCMs are used to study climate change.

On Monday, a team from Google’s AI group and the European Centre for Medium-Range Weather Forecasts is announcing NeuralGCM, a system that mixes physics-based atmospheric circulation with AI parameterization of other meteorological influences. NeuralGCM is computationally efficient and performs very well in weather forecast benchmarks. Strikingly, it can also produce reasonable-looking output for runs that cover decades, potentially allowing it to address some climate-relevant questions. While it can’t handle a lot of what we use climate models for, there are some obvious routes for potential improvements.

Meet NeuralGCM

NeuralGCM is a two-part system. There’s what the researchers term a “dynamical core,” which handles the physics of large-scale atmospheric convection and takes into account basic physics like gravity and thermodynamics. Everything else is handled by the AI portion. “It’s everything that’s not in the equations of fluid dynamics,” said Google’s Stephan Hoyer. “So that means clouds, rainfall, solar radiation, drag across the surface of the Earth—also all the residual terms in the equations that happen below the grid scale of about roughly 100 kilometers or so.” It’s what you might call a monolithic AI. Rather than training individual modules that handle a single process, such as cloud formation, the AI portion is trained to deal with everything at once.

Critically, the whole system is trained concurrently rather than training the AI separately from the physics core. Initially, performance evaluations and updates to the neural network were performed at six-hour intervals, since the system isn’t very stable until it’s at least partially trained. Over time, those intervals are stretched out to five days.
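To make that structure concrete, here is a minimal sketch in Python (using PyTorch) of a hybrid step and rollout-style training. Every name in it is hypothetical; this is an illustration of the general idea, not the actual NeuralGCM code or API.

```python
# Sketch of a hybrid physics/ML time step and rollout training, loosely
# following the description above. All names are hypothetical; this is
# not the NeuralGCM implementation.

import torch
import torch.nn as nn

class HybridStep(nn.Module):
    """One model time step: differentiable physics core + learned correction."""

    def __init__(self, state_dim: int, hidden: int = 256):
        super().__init__()
        # Stand-in for the neural parameterization of clouds, rainfall,
        # radiation, surface drag, and other sub-grid processes.
        self.correction = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.GELU(), nn.Linear(hidden, state_dim)
        )

    def physics_core(self, state: torch.Tensor) -> torch.Tensor:
        # Placeholder for the dynamical core (large-scale convection, gravity,
        # thermodynamics). A real core integrates the equations of fluid
        # dynamics; the identity here is only for illustration.
        return state

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.physics_core(state) + self.correction(state)

def rollout_loss(model: nn.Module, state: torch.Tensor, targets) -> torch.Tensor:
    """Loss over a multi-step rollout: short rollouts (hours) early in
    training, stretched to several days later, as described above."""
    loss = torch.zeros(())
    for target in targets:  # targets: observed/reanalysis states per step
        state = model(state)
        loss = loss + torch.mean((state - target) ** 2)
    return loss / len(targets)
```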

The result is a system that’s competitive with the best available for forecasts running out to 10 days, often exceeding the competition depending on the precise measure used (in addition to weather forecasting benchmarks, the researchers looked at features like tropical cyclones, atmospheric rivers, and the Intertropical Convergence Zone). On the longer forecasts, it tended to produce features that were less blurry than those made by pure AI forecasters, even though it was operating at a lower resolution than they were. This lower resolution means larger grid squares—the surface of the Earth is divided up into individual squares for computational purposes—than most other models, which cuts down significantly on its computing requirements.

Despite its success with weather, there were a couple of major caveats. One is that NeuralGCM tended to underestimate extreme events occurring in the tropics. The second is that it doesn’t actually model precipitation; instead, it calculates the balance between evaporation and precipitation.

But it also comes with some specific advantages over some other short-term forecast models, key among them being that it isn’t actually limited to running over the short term. The researchers let it run for up to two years, and it successfully reproduced a reasonable-looking seasonal cycle, including large-scale features of the atmospheric circulation. Other long-duration runs show that it can produce appropriate counts of tropical cyclones, which go on to follow trajectories that reflect patterns seen in the real world.

The Falcon 9 rocket may return to flight as soon as Tuesday night

That’s pretty fast —

SpaceX is waiting for a determination from the FAA.

File photo of a Falcon 9 launch on May 6 from Cape Canaveral Space Force Station, Florida.

SpaceX

It was only about 10 days ago that the Falcon 9 rocket’s upper stage failed in flight, preventing the rocket from delivering its 20 Starlink satellites into a proper orbit. Because they were released lower than expected—about 135 km above the Earth’s surface and subject to atmospheric drag—these satellites ultimately reentered the planet’s atmosphere and burnt up.

Typically, after a launch failure, a rocket will be sidelined for months while engineers and technicians comb over the available data and debris to identify a cause, perform tests, and institute a fix.

However, according to multiple sources, SpaceX was ready to launch the Falcon 9 rocket as soon as late last week. Currently, the company has a launch opportunity no earlier than 12:14 am ET (04:14 UTC) on Wednesday for its Starlink 10-4 mission.

A quick fix?

In a summary of the anomaly posted shortly afterward, SpaceX did not identify the cause of the failure beyond saying, “The Merlin Vacuum engine experienced an anomaly and was unable to complete its second burn.”

Officially, the company has provided no additional information since then. However, its engineers were able to identify the cause of the failure almost immediately and, according to sources, the fix was straightforward.

SpaceX was confident enough in this determination to be ready to resume Falcon 9 launches one week after the failure. However, it is precluded from doing so while the US Federal Aviation Administration conducts a mishap investigation.

To that end, a week ago on July 15, SpaceX submitted a request to the FAA to resume launching its Falcon 9 rocket while this investigation into the anomaly continues. “The FAA is reviewing the request and will be guided by data and safety at every step of the process,” the FAA said in a statement at the time.

Crewed missions on deck

So, as of today, SpaceX is waiting for a determination from the FAA as to whether it will be allowed to resume Falcon 9 launches less than two weeks after the failure occurred.

The company plans to launch at least three Starlink missions in rapid succession from its two launch pads in Florida and one in California to determine the effectiveness of the fix. It would like to demonstrate the reliability of the Falcon 9 rocket, which had recorded more than 300 successful missions since its last failure during a pad accident in September 2016, before two upcoming crewed missions.

There is still a slight possibility that the Polaris Dawn mission, led by commercial astronaut Jared Isaacman, could launch in early August. This would be followed by the Crew-9 mission for NASA, which will carry four astronauts to the International Space Station.

Notably, neither of these crewed missions requires a second burn of the Merlin engine, which is where the failure occurred earlier this month during the Starlink mission.

We’re building nuclear spaceships again—this time for real 

Artist concept of the Demonstration for Rocket to Agile Cislunar Operations (DRACO) spacecraft.

DARPA

Phoebus 2A, the most powerful space nuclear reactor ever made, was fired up at the Nevada Test Site on June 26, 1968. The test lasted 750 seconds and confirmed that the reactor could power a rocket capable of carrying the first humans to Mars. But Phoebus 2A did not take anyone to Mars. It was too large, it cost too much, and it didn’t mesh with Nixon’s idea that we had no business going anywhere farther than low-Earth orbit.

But it wasn’t NASA that first called for rockets with nuclear engines. It was the military that wanted to use them for intercontinental ballistic missiles. And now, the military wants them again.

Nuclear-powered ICBMs

The work on nuclear thermal rockets (NTRs) started with the Rover program initiated by the US Air Force in the mid-1950s. The concept was simple on paper. Take tanks of liquid hydrogen and use turbopumps to feed this hydrogen through a nuclear reactor core to heat it up to very high temperatures and expel it through the nozzle to generate thrust. Instead of causing the gas to heat and expand by burning it in a combustion chamber, the gas was heated by coming into contact with a nuclear reactor.

The key advantage was fuel efficiency. “Specific impulse,” a measurement that’s something like the gas mileage of a rocket, is proportional to the square root of the exhaust gas temperature divided by the molecular weight of the propellant. This meant the most efficient propellant for rockets was hydrogen, since it has the lowest molecular weight.

In chemical rockets, hydrogen had to be mixed with an oxidizer, which increased the total molecular weight of the propellant but was necessary for combustion to happen. Nuclear rockets didn’t need combustion and could work with pure hydrogen, which made them at least twice as efficient. The Air Force wanted to efficiently deliver nuclear warheads to targets around the world.
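Here is a rough numerical illustration of that square-root scaling in Python. The temperatures and molecular weights are round, illustrative values rather than data from any particular engine:

```python
# Rough illustration of the specific-impulse scaling described above:
# Isp is proportional to sqrt(T_exhaust / M_propellant). The numbers below
# are round, illustrative values, not measurements from a real engine.

from math import sqrt

def relative_isp(temperature_k: float, molecular_weight: float) -> float:
    """Quantity proportional to specific impulse (arbitrary units)."""
    return sqrt(temperature_k / molecular_weight)

# Nuclear thermal rocket: pure hydrogen (H2, M ~ 2 g/mol) heated to ~3,000 K.
ntr = relative_isp(3000, 2)

# Chemical hydrogen/oxygen engine: exhaust is mostly water vapor plus excess
# hydrogen, so an effective molecular weight around 11 g/mol at ~3,500 K is a
# reasonable ballpark (an assumption for illustration).
chemical = relative_isp(3500, 11)

print(f"NTR advantage: ~{ntr / chemical:.1f}x")  # roughly a factor of 2
```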

The problem was that running stationary reactors on Earth was one thing; making them fly was quite another.

Space reactor challenge

Fuel rods made with uranium 235 oxide distributed in a metal or ceramic matrix comprise the core of a standard fission reactor. Fission happens when a slow-moving neutron is absorbed by a uranium 235 nucleus and splits it into two lighter nuclei, releasing huge amounts of energy and excess, very fast neutrons. These excess neutrons normally don’t trigger further fissions, as they move too fast to get absorbed by other uranium nuclei.

Starting a chain reaction that keeps the reactor going depends on slowing them down with a moderator, like water, that “moderates” their speed. This reaction is kept at moderate levels using control rods made of neutron-absorbing materials, usually boron or cadmium, that limit the number of neutrons that can trigger fission. Reactors are dialed up or down by moving the control rods in and out of the core.

Translating any of this to a flying reactor is a challenge. The first problem is the fuel. The hotter you make the exhaust gas, the more you increase specific impulse, so NTRs needed the core to operate at temperatures reaching 3,000 K—nearly 1,800 K higher than ground-based reactors. Manufacturing fuel rods that could survive such temperatures proved extremely difficult.

Then there was the hydrogen itself, which is extremely corrosive at these temperatures, especially when interacting with those few materials that are stable at 3,000 K. Finally, standard control rods had to go, too, because on the ground, they were gravitationally dropped into the core, and that wouldn’t work in flight.

Los Alamos Scientific Laboratory proposed a few promising NTR designs that addressed all these issues in 1955 and 1956, but the program really picked up pace after it was transferred to NASA and the Atomic Energy Commission (AEC) in 1958. There, the idea was rebranded as NERVA, Nuclear Engine for Rocket Vehicle Applications. NASA and the AEC, blessed with a nearly unlimited budget, got busy building space reactors—lots of them.

Will burying biomass underground curb climate change?

stacking bricks —

Though carbon removal startups may limit global warming, significant questions remain.

Will burying biomass underground curb climate change?

On April 11, a small company called Graphyte began pumping out beige bricks, somewhat the consistency of particle board, from its new plant in Pine Bluff, Arkansas. The bricks don’t look like much, but they come with a lofty goal: to help stop climate change.

Graphyte, a startup backed by billionaire Bill Gates’ Breakthrough Energy Ventures, will bury its bricks deep underground, trapping carbon there. The company bills it as the largest carbon dioxide removal project in the world.

Scientists have long warned of the dire threat posed by global warming. It’s gotten so bad, though, that the long-sought mitigation, cutting carbon dioxide emissions from every sector of the economy, might not be enough of a fix. To stave off the worst—including large swaths of the Earth exposed to severe heat waves, water scarcity, and crop failures—some experts say there is a deep need to remove previously emitted carbon, too. And that can be done anywhere on Earth—even in places not known for climate-friendly policies, like Arkansas.

Graphyte aims to store carbon that would otherwise be released from plant material as it burns or decomposes at a competitive sub-$100 per metric ton, and it wants to open new operations as soon as possible, single-handedly removing tens of thousands of tons of carbon annually, said Barclay Rogers, the company’s founder and CEO. Nevertheless, that’s nowhere near the amount of carbon that will have to be removed to register as a blip in global carbon emissions. “I’m worried about our scale of deployment,” he said. “I think we need to get serious fast.”

Hundreds of carbon removal startups have popped up over the past few years, but the fledgling industry has made little progress so far. That leads to the inevitable question: Could Graphyte and companies like it actually play a major role in combating climate change? And will a popular business model among these companies, inviting other companies to voluntarily buy “carbon credits” for those buried bricks, actually work?

Whether carbon emissions are cut to begin with, or pulled out of the atmosphere after they’ve already been let loose, climate scientists stress that there is no time to waste. The clock began ticking years ago, with the arrival of unprecedented fires and floods, superstorms, and intense droughts around the world. But carbon removal, as it’s currently envisioned, also poses additional sociological, economic, and ethical questions. Skeptics, for instance, say it could discourage more pressing efforts on cutting carbon emissions, leaving some experts wondering whether it will even work at all.

Still, the Intergovernmental Panel on Climate Change, the world’s forefront group of climate experts, is counting on carbon removal technology to dramatically scale up. If the industry is to make a difference, experimentation and research and development should be done quickly, within the next few years, said Gregory Nemet, a professor of public affairs who studies low-carbon innovation at the University of Wisconsin-Madison. “Then after that is the time to really start going big and scaling up so that it becomes climate-relevant,” he added. “Scale-up is a big challenge.”

Mini-Neptune turned out to be a frozen super-Earth

Like Earth, but super —

The density makes it look like a water world, but its dim host star keeps it cool.

Image of three planets on a black background, with the two on the left being mostly white, indicating an icy composition. The one on the right is much smaller, and represents Earth.

Renditions of a possible composition of LHS 1140 b, with a patch of ocean on the side facing its host star. Earth is included at right for scale.

Of all the potential super-Earths—terrestrial exoplanets more massive than Earth—out there, an exoplanet orbiting a star only 40 light-years away from us in the constellation Cetus might be the most Earth-like found so far.

Exoplanet LHS 1140 b was assumed to be a mini-Neptune when it was observed by NASA’s James Webb Space Telescope toward the end of 2023. After analyzing data from those observations, a team of researchers led by astronomer Charles Cadieux of Université de Montréal suggests that LHS 1140 b is more likely to be a super-Earth.

If this planet is an alternate version of our own, its relative proximity to its cool red dwarf star means it would most likely be a gargantuan snowball or a mostly frozen body with a substellar (region closest to its star) ocean that makes it look like a cosmic eyeball. It is now thought to be the exoplanet with the best chance for liquid water on its surface, and so might even be habitable.

Cadieux and his team say they have found “tantalizing evidence for a [nitrogen]-dominated atmosphere on a habitable zone super-Earth” in a study recently published in The Astrophysical Journal Letters.

Sorry, Neptune…

In December 2023, two transits of LHS 1140 b were observed with the NIRISS (Near-Infrared Imager and Slitless Spectrograph) instrument aboard Webb. NIRISS specializes in detecting exoplanets and revealing more about them through transit spectroscopy, which picks up the light of an orbiting planet’s host star as it passes through the atmosphere of that planet and travels toward Earth. Analysis of the different spectral bands in that light can then tell scientists about the specific atoms and molecules that exist in the planet’s atmosphere.

To test the previous hypothesis that LHS 1140 b is a mini-Neptune, the researchers created a 3D global climate model, or GCM. This used complex math to explore different combinations of factors that make up the climate system of a planet, such as land, oceans, ice, and atmosphere. Several different GCMs of a mini-Neptune were compared with the light spectrum observed via transit spectroscopy. The model for a mini-Neptune typically involves a gas giant with a thick, cloudless or nearly cloudless atmosphere dominated by hydrogen, but the spectral bands of this model did not match NIRISS observations.

With the possibility of a mini-Neptune being mostly ruled out (though further observations and analysis will be needed to confirm this), Cadieux’s team turned to another possibility: a super-Earth.

An Earth away from Earth?

The spectra observed with NIRISS were more in line with GCMs of a super-Earth. This type of planet would typically have a thick nitrogen or CO2-rich atmosphere enveloping a rocky surface on which there was some form of water, whether in frozen or liquid form.

The models also suggested a secondary atmosphere, which is an atmosphere formed after the original atmosphere of light elements (hydrogen and helium) escaped during early phases of a planet’s formation. Secondary atmospheres are formed from heavier molecules released from the crust, such as water vapor, carbon dioxide, and methane. They’re usually found on warm, terrestrial planets (Earth has a secondary atmosphere).

The most significant Webb/NIRISS data that did not match the GCMs was that the planet has a lower density (based on measurements of its size and mass) than expected for a rocky world. This is consistent with a water world with a mass that’s about 10 to 20 percent water. Based on this estimate, the researchers think that LHS 1140 b might even be a hycean planet—an ocean planet that has most of the attributes of a super-Earth, but an atmosphere dominated by hydrogen instead of nitrogen.
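The bulk-density arithmetic behind that inference is simple to sketch in Python. The mass and radius below are values in the rough ballpark reported for LHS 1140 b; treat them as illustrative assumptions rather than the published measurements, and note that the rocky-planet baseline comes from mass-radius models that are beyond this sketch.

```python
# Bulk density from a mass and radius expressed in Earth units. The specific
# inputs are illustrative assumptions, not the published measurements.

from math import pi

EARTH_MASS_KG = 5.972e24
EARTH_RADIUS_M = 6.371e6

def bulk_density_g_cm3(mass_earths: float, radius_earths: float) -> float:
    """Mean density in g/cm^3 for a planet of the given mass and radius."""
    mass_kg = mass_earths * EARTH_MASS_KG
    radius_m = radius_earths * EARTH_RADIUS_M
    volume_m3 = (4.0 / 3.0) * pi * radius_m ** 3
    return (mass_kg / volume_m3) / 1000.0  # kg/m^3 -> g/cm^3

print(f"{bulk_density_g_cm3(5.6, 1.73):.1f} g/cm^3")  # ~6 g/cm^3
print(f"{bulk_density_g_cm3(1.0, 1.0):.1f} g/cm^3")   # Earth, ~5.5 g/cm^3

# A roughly Earth-like density at several Earth masses is on the low side for
# pure rock, since rock compressed under that much gravity should be denser.
# That gap is what points to a substantial water fraction.
```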

Since it orbits a dim star closely enough to be tidally locked, some models suggest a mostly icy planet with a substellar liquid ocean on its dayside.

While LHS 1140 b may be a super-Earth, the hycean planet hypothesis might end up being ruled out. Hycean planets are prone to the runaway greenhouse effect, which occurs when enough greenhouse gases accumulate in a planet’s atmosphere and prevent heat from escaping. Liquid water will eventually evaporate on a planet that cannot cool itself off.

Though we are getting closer to finding out what kind of planet LHS 1140 b is, and whether it could be habitable, further observations are needed. Cadieux wants to continue this research by comparing NIRISS data with data on other super-Earths that had previously been collected by Webb’s Near-Infrared Spectrograph, or NIRSpec, instrument. At least three transit observations of the planet with Webb’s MIRI, or Mid-Infrared instrument, are also needed to make sure stellar radiation is not interfering with observations of the planet itself.

“Given the limited visibility of LHS 1140b, several years’ worth of observations may be required to detect its potential secondary atmosphere,” the researchers said in the same study.

So could this planet really be a frozen exo-earth? The suspense is going to last a few years.

The Astrophysical Journal Letters, 2024. DOI: 10.3847/2041-8213/ad5afa
