Science

Vast majority of new US power plants generate solar or wind power

But Victor views this as more of a slowdown than a reversal of momentum. One reason is that demand for electricity continues to rise to serve data centers and other large power users. The main beneficiaries are energy technologies that are the easiest to build and most cost effective, including solar, batteries, and gas.

In the first half of this year, the United States added 341 new power plants or utility-scale battery systems, with a total of 22,332 megawatts of summer generating capacity, according to EIA.

Chart showing how solar and wind have dominated new power generation capability.

Credit: Inside Climate News

More than half the total was utility-scale solar, with 12,034 megawatts, followed by battery systems, with 5,900 megawatts, onshore wind, with 2,697 megawatts, and natural gas, with 1,691 megawatts spread across several types of gas plants.
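As a rough sanity check (not from the article), the category shares implied by these EIA figures can be computed directly; note the four listed sources sum to 22,322 megawatts, with the small remainder coming from sources not broken out above:

```python
# EIA first-half capacity additions cited above, in megawatts.
additions_mw = {
    "utility-scale solar": 12_034,
    "battery storage": 5_900,
    "onshore wind": 2_697,
    "natural gas": 1_691,
}
total_mw = 22_332  # total summer generating capacity added

# Share of new capacity contributed by each source.
for source, mw in additions_mw.items():
    print(f"{source}: {mw / total_mw:.1%}")
# Solar alone accounts for just over half of all additions (~53.9%).
```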

The largest new plant by capacity was the 600-megawatt Hornet Solar in Swisher County, Texas, which went online in April.

“Hornet Solar is a testament to how large-scale energy projects can deliver reliable, domestic power to American homes and businesses,” said Juan Suarez, co-CEO of the developer, Vesper Energy of the Dallas area, in a statement from the ribbon-cutting ceremony.

The plants being completed now are special in part because of what they have endured, said Ric O’Connell, executive director of GridLab, a nonprofit that does technical analysis for regulators and renewable power advocates. Power plants take years to plan and build, and current projects likely began development during the COVID-19 pandemic. They stayed on track despite high inflation, parts shortages, and challenges in getting approval for grid connections, he said.

“It’s been a rocky road for a lot of these projects, so it’s exciting to see them online,” O’Connell said.

Chart showing mix of planned new power plants in the US

Credit: Inside Climate News

Looking ahead to the rest of this year and through 2030, the country has 254,126 megawatts of planned power plants, according to EIA. (To appear on this list, a project must meet three of four benchmarks: land acquisition, permits obtained, financing received, and a contract completed for selling electricity.)

Solar is the leader with 120,269 megawatts, followed by batteries, with 65,051 megawatts, and natural gas, with 35,081 megawatts.

Peacock feathers can emit laser beams

Peacock feathers are greatly admired for their bright iridescent colors, but it turns out they can also emit laser light when dyed multiple times, according to a paper published in the journal Scientific Reports. Per the authors, it’s the first example of a biolaser cavity within the animal kingdom.

As previously reported, the bright iridescent colors in things like peacock feathers and butterfly wings don’t come from any pigment molecules but from how they are structured. The scales of chitin (a polysaccharide common to insects) in butterfly wings, for example, are arranged like roof tiles. Essentially, they form a photonic crystal, which behaves like a diffraction grating except that it only reflects certain colors, or wavelengths, of light, while a diffraction grating produces the entire spectrum, much like a prism.

In the case of peacock feathers, it’s the regular, periodic nanostructures of the barbules—fiber-like components composed of ordered melanin rods coated in keratin—that produce the iridescent colors. Different colors correspond to different spacing of the barbules.

Both are naturally occurring examples of what physicists call photonic crystals. Also known as photonic bandgap materials, photonic crystals are “tunable,” which means they are precisely ordered in such a way as to block certain wavelengths of light while letting others through. Alter the structure by changing the size of the tiles, and the crystals become sensitive to a different wavelength. (In fact, the rainbow weevil can control both the size of its scales and how much chitin is used to fine-tune those colors as needed.)
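To make the structure-to-color relationship concrete, here is a minimal sketch using the textbook grating equation d·sin(θ) = m·λ; the spacing and viewing angle below are illustrative made-up numbers, not measurements of peacock barbules:

```python
import math

def reflected_wavelengths_nm(spacing_nm, angle_deg, max_order=3):
    """Visible wavelengths reinforced by a periodic structure,
    per the grating equation: spacing * sin(angle) = order * wavelength."""
    d_sin = spacing_nm * math.sin(math.radians(angle_deg))
    return [d_sin / m for m in range(1, max_order + 1)
            if 380 <= d_sin / m <= 750]  # keep only visible light

# A hypothetical 500 nm spacing viewed at 60 degrees selects a single
# blue-violet wavelength (~433 nm); change the spacing and the reinforced
# color shifts, which is the "tunable" behavior described above.
print(reflected_wavelengths_nm(500, 60))
```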

Even better (from an applications standpoint), the perception of color doesn’t depend on the viewing angle. And the scales are not just for aesthetics; they help shield the insect from the elements. There are several types of manmade photonic crystals, but gaining a better and more detailed understanding of how these structures grow in nature could help scientists design new materials with similar qualities, such as iridescent windows, self-cleaning surfaces for cars and buildings, or even waterproof textiles. Paper currency could incorporate encrypted iridescent patterns to foil counterfeiters.

EPA plans to ignore science, stop regulating greenhouse gases

The EPA’s authority to regulate greenhouse gases derives from a 2007 Supreme Court ruling that named them “air pollutants,” giving the agency the mandate to regulate them under the Clean Air Act.

Critics of the rule say that the Clean Air Act was fashioned to manage localized emissions, not those responsible for global climate change.

A rollback would automatically weaken the greenhouse gas emissions standards for cars and heavy-duty vehicles. Manufacturers such as Daimler and Volvo Cars have previously opposed the EPA’s efforts to tighten emission standards, while industry groups such as the American Trucking Associations said they “put the trucking industry on a path to economic ruin.”

However, Katherine García, director of Sierra Club’s Clean Transportation for All Campaign, said that the move would be “disastrous for curbing toxic truck pollution, especially in frontline communities disproportionately burdened by diesel exhaust.”

Energy experts said the move could also stall progress on developing clean energy sources such as nuclear power.

“Bipartisan support for nuclear largely rests on the fact that it doesn’t have carbon emissions,” said Ken Irvin, a partner in Sidley Austin’s global energy and infrastructure practice. “If carbon stops being considered to endanger human welfare, that might take away momentum from nuclear.”

The proposed rule from the EPA will go through a public comment period and inter-agency review. It is likely to face legal challenges from environmental activists.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

The case for memes as a new form of comics


Both comics and memes rely on the same interplay of visual and verbal elements for their humor.

Credit: Jennifer Ouellette via imgflip

It’s undeniable that the rise of the Internet had a profound impact on cartooning as a profession, giving cartoonists both new tools and a new publishing and distribution medium. Online culture also spawned the emergence of viral memes in the late 1990s. Michelle Ann Abate, an English professor at The Ohio State University, argues in a paper published in INKS: The Journal of the Comics Studies Society that memes—specifically, image macros—represent a new type of digital comic, right down to the cognitive and creative ways in which they operate.

“One of my areas of specialty has been graphic novels and comics,” Abate told Ars. “I’ve published multiple books on various aspects of comics history and various titles: everything from Charles Schulz’s Peanuts to The Far Side, to Little Lulu to Ziggy to The Family Circus. So I’ve been working on comics as part of the genres and texts and time periods that I look at for many years now.”

Her most recent book is 2024’s Singular Sensations: A Cultural History of One-Panel Comics in the United States, which Abate was researching when the COVID-19 pandemic hit in 2020. “I was reading a lot of single panel comics and sharing them with friends during the pandemic, and memes were something we were always sharing, too,” Abate said. “It occurred to me one day that there isn’t a whole lot of difference between the single panel comics I’m sharing and the memes. In terms of how they function, how they operate, the connection of the verbal and the visual, there’s more continuity than there is difference.”

So Abate decided to approach the question more systematically. Evolutionary biologist Richard Dawkins coined the word “meme” in his 1976 popular science book, The Selfish Gene, well before the advent of the Internet age. For Dawkins, it described a “unit of cultural transmission, or a unit of information”: ideas, catchphrases, catchy tunes, fashions, even arch building.

distraught woman pointing a finger and yelling, facing an image of a confused cat in front of a salad

Credit: Jennifer Ouellette via imgflip

In a 21st century context, “meme” refers to a piece of online content that spikes in popularity and gets passed from user to user, i.e., going viral. These can be single images remixed with tailored text, such as “Distracted Boyfriend,” “This Is Fine,” or “Batman Slapping Robin.” Or they can feature multiple panels, like “American Chopper.” Furthermore, “Memes can also be a gesture, they can be an activity, they can be a video like the Wednesday dance or the ice bucket challenge,” said Abate. “It’s become such a part of our lexicon that it’s hard to imagine a world without memes at this point.”

For Abate, Internet memes are clearly related to sequential art like comics, representing a new stage of evolution in the genre. In both cases, the visual and verbal elements work in tandem to produce the humor.

Granted, comic artists usually create both the image and the text, whereas memes adapt preexisting visuals with new text. Some might consider this poaching, but Abate points out that cartoonists like Charles Schulz have long used stencil templates (a static prefabricated element) to replicate images, a practice that is also used effectively in, say, Dinosaur Comics. And meme humor depends on people connecting the image to its origin rather than obscuring it. She compares the practice to sampling in music; the end result is still an original piece of art.

In fact, The New Yorker’s hugely popular cartoon caption contest—in which the magazine prints a single-panel drawing with no speech balloons or dialogue boxes and asks readers to supply their own verbal jokes—is basically a meme generator. “It’s seen more as a highbrow thing, crowdsourcing everybody’s wit,” said Abate. “But [the magazine supplies] the template image and then everybody puts in their own text or captions. They’re making memes. If they only published the winner, folks would be disappointed because the fun is seeing all the clever, funny things that people come up with.”

Memes both mirror and modify the comic genre. For instance, the online nature of memes can affect formatting. If there are multiple panels, those panels are usually arranged vertically rather than horizontally since memes are typically read by scrolling down one’s phone—like the “American Chopper” meme:

American Chopper meme with each frame representing a stage in the debate

Credit: Jennifer Ouellette via imgflip

Per Abate, this has the added advantage of forcing the reader to pause briefly to consider the argument and counter-argument, emphasizing that it’s an actual debate rather than two men simply yelling at one another. “If the panels were arranged horizontally and the guys were side by side in each other’s face, installments of ‘American Chopper’ would come across very differently,” she said.

A pad with infinite sheets

Scott McCloud is widely considered the leading theorist when it comes to the art of comics, and his hugely influential 2000 book, Reinventing Comics: How Imagination and Technology Are Revolutionizing an Art Form, explores the boundless potential for digital comics, freed from the constraints of a printed page. He calls this aspect the “infinite canvas,” because cartoonists can now create works of any size or shape, even as tall as a mountain. Memes have endless possibilities of a different kind, per Abate.

“[McCloud] thinks of it very expansively: a single panel could be the size of a city block,” said Abate. “You could never do that with a book because how could you print the book? How could you hold the book? How could you read the book? How could you download the book on your Kindle? But when you’ve got a digital world, it could be a city block and you can explore it with your mouse and your cursor and your track pad and, oh, all the possibilities for storytelling and for the medium that will open up with this infinite canvas. There have been many places and titles where this has played out with digital comics.

“Obviously with a meme, they’re not the size of a city block,” she continued. “So it occurred to me that they are infinite, but almost like you’re peeling sheets off a pad and the pad just has an endless number of sheets. You can just keep redoing it, redo, redo, redo. That’s memes. They get revised and repurposed and re-imagined and redone and recirculated over and over and over again. The template gets used inexhaustibly, which is what makes them fun, what makes them go viral.”

comic frame showing batman slapping robin

Credit: Jennifer Ouellette via imgflip

Just what makes a good meme image? Abate has some thoughts about that, too. “It has to be not just the image, but the ability for the image to be paired with a caption, a text,” she said. “It has to lend itself to some kind of verbal element as well. And it also has to have some elasticity of being specific enough that it’s recognizable, but also being malleable enough that it can be adapted to different forms.”

In other words, a really good meme must be generalizable if it is to last longer than a few weeks. The recent kiss-cam incident at a Coldplay concert is a case in point. When a married tech CEO was caught embracing his company’s “chief people officer,” they quickly realized they were on the Jumbotron, panicked, and hid their faces—which only made it worse. The moment went viral and spawned myriad memes. Even the Phillies mascots got into the spirit, re-enacting the moment at a recent baseball game. But that particular meme might not have long-term staying power.

“It became a meme very quickly and went viral very fast,” said Abate. “I may be proved wrong, but I don’t think the Coldplay moment will be a meme that will be around a year from now. It’s commenting on a particular incident in the culture, and then the clock will tick, and folks will move on. Whereas something like ‘Distracted Boyfriend’ or ‘This is Fine’ has more staying power because it’s not tied to a particular incident or a particular scandal but can be applied to all kinds of political topics, pop culture events, and cultural experiences.”

black man stroking his chin, mouth partly open in surprise

Credit: Sean Carroll via imgflip

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Trump promised a drilling boom, but US energy industry hasn’t been interested


Exec: “Liberation Day chaos and tariff antics have harmed the domestic energy industry.”

“We will drill, baby, drill,” President Donald Trump declared at his inauguration on January 20. Echoing the slogan that exemplified his energy policies during the campaign, he made his message clear: more oil and gas, lower prices, greater exports.

Six months into Trump’s second term, his administration has little to show on that score. Output is ticking up, but slower than it did under the Biden administration. Pump prices for gasoline have bobbed around where they were in inauguration week. And exports of crude oil in the four months through April trailed those in the same period last year.

The White House is discovering, perhaps the hard way, that energy markets aren’t easily managed from the Oval Office—even as it moves to roll back regulations on the oil and gas sector, offers up more public lands for drilling at reduced royalty rates, and axes Biden-era incentives for wind and solar.

“The industry is going to do what the industry is going to do,” said Jenny Rowland-Shea, director for public lands at the Center for American Progress, a progressive policy think tank.

That’s because the price of oil, the world’s most-traded commodity, is more responsive to global demand and supply dynamics than to domestic policy and posturing.

The market is flush with supplies at the moment, as the Saudi Arabia-led cartel of oil-producing nations known as OPEC+ allows more barrels to flow while China, the world’s top oil consumer, curbs its consumption. Within the US, a boom in energy demand driven by rapid electrification and AI-serving data centers is boosting power costs for homes and businesses, yet fossil fuel producers are not rushing to ramp up drilling.

There is one key indicator of drilling levels that the industry has watched closely for more than 80 years: a weekly census of active oil and gas rigs published by Baker Hughes. When Trump came into office January 20, the US rig count was 580. As of last week, the most recent figure available, it was down to 542—hovering just above a four-year low reached earlier in the month.

The most glaring factor behind this stagnant rig count is the current level of crude oil prices. Take the US benchmark grade: West Texas Intermediate crude. Its price was near $66 a barrel on July 28, after hitting a four-year low of $62 in May. The break-even level for drilling new wells is somewhere close to $60 per barrel, according to oil and gas experts.

That’s before you account for the fallout of elevated tariffs on steel and other imports for the many companies that get their pipes and drilling equipment from overseas, said Robert Rapier, editor-in-chief of Shale Magazine, who has two decades of experience as a chemical engineer.

The Federal Reserve Bank of Dallas’ quarterly survey of over 130 oil and gas producers based in Texas, Louisiana, and New Mexico, conducted in June, suggests the industry’s outlook is pessimistic. Nearly half of the 38 firms that responded to this question saw their firms drilling fewer wells this year than they had earlier expected.

Survey participants could also submit comments. One executive from an exploration and production (E&P) company said, “It’s hard to imagine how much worse policies and DC rhetoric could have been for US E&P companies.” Another executive said, “The Liberation Day chaos and tariff antics have harmed the domestic energy industry. Drill, baby, drill will not happen with this level of volatility.”

Roughly one in three survey respondents chalked up the expectations for fewer wells to higher tariffs on steel imports. And three in four said tariffs raised the cost of drilling and completing new wells.

“They’re getting more places to drill and they’re getting some lower royalties, but they’re also getting these tariffs that they don’t want,” Rapier said. “And the bottom line is their profits are going to suffer.”

Earlier this month, ExxonMobil estimated that its profit in the April-June quarter would be roughly $1.5 billion lower than in the previous three months because of weaker oil and gas prices. And over in Europe, BP, Shell, and TotalEnergies issued similar warnings to investors about hits to their respective profits.

These warnings come even as Trump has installed friendly faces to regulate the oil and gas sector, including at the Department of Energy, the Environmental Protection Agency, and the Department of the Interior, the latter of which manages federal lands and is gearing up to auction more oil and gas leases on those lands.

“There’s a lot of enthusiasm for a window of opportunity to make investments. But there’s also a lot of caution about wanting to make sure that if there’s regulatory reforms, they’re going to stick,” said Kevin Book, managing director of research at ClearView Energy Partners, which produces analyses for energy companies and investors.

The recently enacted One Big Beautiful Bill Act contains provisions requiring four onshore and two offshore lease sales every year, lowering the minimum royalty rate to 12.5 percent from 16.67 percent, and bringing back speculative leasing—when lands that don’t invite enough bids are leased for less money—that was stopped in 2022.

“Pro-energy policies play a critical role in strengthening domestic production,” said a spokesperson for the American Petroleum Institute, the top US oil and gas industry group. “The new tax legislation unlocks opportunities for safe, responsible development in critical resource basins to deliver the affordable, reliable fuel Americans rely on.”

Because about half of the federal royalties end up with the states and localities where the drilling occurs, “budgets in these oil and gas communities are going to be hit hard,” Rowland-Shea of American Progress said. Meanwhile, she said, drilling on public lands can pollute the air, raise noise levels, cause spills or leaks, and restrict movement for both people and wildlife.

Earlier this year, Congress killed an EPA rule finalized in November that would have charged oil and gas companies for flaring excess methane from their operations.

“Folks in the Trump camp have long said that the Biden administration was killing drilling by enforcing these regulations on speculative leasing and reining in methane pollution,” said Rowland-Shea. “And yet under Biden, we saw the highest production of oil and gas in history.”

In fact, the top three fossil fuel producers collectively earned less during Trump’s first term than they did in either of President Barack Obama’s terms or under President Joe Biden. “It’s an irony that when Democrats are in there and they’re putting in policies to shift away from oil and gas, which causes the price to go up, that is more profitable for the oil and gas industry,” said Rapier.

That doesn’t mean, of course, that the Trump administration’s actions won’t have long-lasting climate implications. Even though six months may be a significant amount of time in political accounting, investment decisions in the energy sector are made over longer horizons, ClearView’s Book said. As long as the planned lease sales take place, oil companies can snap up and sit on public lands until they see more favorable conditions for drilling.

What could pad the demand for oil and gas is how the One Big Beautiful Bill Act will withdraw or dilute the Inflation Reduction Act’s tax incentives and subsidies for renewable energy sources. “With the kneecapping of wind and solar, that’s going to put a lot more pressure on fossil fuels to fill that gap,” Rowland-Shea said.

However, the economics of solar and wind are becoming too attractive to ignore. With electricity demand exceeding expectations, Book said, “any president looking ahead at end-user prices and power supply might revisit or take a flexible position if they find themselves facing shortage.”

A recent United Nations report found that “solar and wind are now almost always the least expensive—and the fastest—option for new electricity generation.” That is why Texas, deemed the oil capital of the world, produces more wind power than any other state and also led the nation in new solar capacity in the last two years.

Renewables like wind and solar, said Rowland-Shea, are “a truly abundant and American source of energy.”

This story originally appeared on Inside Climate News.

The first company to complete a fully successful lunar landing is going public

The financial services firm Charles Schwab reported last month that IPOs are on the comeback across multiple sectors of the market. “After a long dry spell, there are signs of life in the initial public offerings space,” Charles Schwab said in June. “An increase in offerings can sometimes suggest an improvement in overall market sentiment.”

Firefly Aerospace started as a propulsion company. This image released by Firefly earlier this year shows the company’s family of engines. From left to right: Miranda for the Eclipse rocket; Lightning and Reaver for the Alpha rocket; and Spectre for the Blue Ghost and Elytra spacecraft.

Firefly is eschewing a SPAC merger in favor of a traditional IPO. Another space company, Voyager Technologies, closed an initial public offering on June 11, raising nearly $383 million with a valuation peaking at $3.8 billion despite reporting a loss of $66 million in 2024. Voyager’s stock price has been in a precipitous decline since then.

Financial information disclosed by Firefly in a regulatory filing with the Securities and Exchange Commission reveals the company registered $60.8 million in revenue in 2024, a 10 percent increase from the prior year. But Firefly’s net loss widened from $135 million to $231 million, largely due to higher spending on research and development for the Eclipse rocket and Elytra spacecraft.

Rocket Lab, too, reported a net loss of $190 million in 2024 and another $60.6 million in the first quarter of this year. Despite this, Rocket Lab’s stock price has soared for most of 2025, further confirming that near-term profits aren’t everything for investors.

Chad Anderson, the founder and managing partner of Space Capital, offered a “gut check” to investors listening to his quarterly podcast last week.

“90 percent of IPOs that double on day one deliver negative returns over three years,” Anderson said. “And a few breakout companies become long-term winners… Rocket Lab being chief among them. But many fall short of expectations, even with some collapsing into bankruptcy, again, as we’ve seen over the last few years.

“There’s a lot of excitement about the space economy, and rightly so,” Anderson said. “This is a once-in-a-generation opportunity for investors, but unfortunately, I think this is going to be another example of why specialist expertise is required and the ability to read financial statements and understand the underlying business fundamentals, because that’s what’s really going to take companies through in the long term.”

Fermented meat with a side of maggots: A new look at the Neanderthal diet

Traditionally, Indigenous peoples almost universally viewed thoroughly putrefied, maggot-infested animal foods as highly desirable fare, not starvation rations. In fact, many such peoples routinely and often intentionally allowed animal foods to decompose to the point where they were crawling with maggots, in some cases even beginning to liquefy.

This rotting food would inevitably emit a stench so overpowering that early European explorers, fur trappers, and missionaries were sickened by it. Yet Indigenous peoples viewed such foods as good to eat, even a delicacy. When asked how they could tolerate the nauseating stench, they simply responded, “We don’t eat the smell.”

Neanderthals’ cultural practices, similar to those of Indigenous peoples, might be the answer to the mystery of their high δ¹⁵N values. Ancient hominins were butchering, storing, preserving, cooking, and cultivating a variety of items. All these practices enriched their paleo menu with foods in forms that nonhominin carnivores do not consume. Research shows that δ¹⁵N values are higher for cooked foods, putrid muscle tissue from terrestrial and aquatic species, and, with our study, for fly larvae feeding on decaying tissue.

The high δ¹⁵N values of maggots associated with putrid animal foods help explain how Neanderthals could have included plenty of other nutritious foods beyond only meat while still registering δ¹⁵N values we’re used to seeing in hypercarnivores.

We suspect the high δ¹⁵N values seen in Neanderthals reflect routine consumption of fatty animal tissues and fermented stomach contents, much of it in a semi-putrid or putrid state, together with the inevitable bonus of both living and dead ¹⁵N-enriched maggots.

What still isn’t known

Fly larvae are a fat-rich, nutrient-dense, ubiquitous, and easily procured insect resource, and both Neanderthals and early Homo sapiens, much like recent foragers, would have benefited from taking full advantage of them. But we cannot say that maggots alone explain why Neanderthals have such high δ¹⁵N values in their remains.

Several questions about this ancient diet remain unanswered. How many maggots would someone need to consume to account for an increase in δ¹⁵N values above the expected values due to meat eating alone? How do the nutritional benefits of consuming maggots change the longer a food item is stored? More experimental studies on changes in δ¹⁵N values of foods processed, stored, and cooked following Indigenous traditional practices can help us better understand the dietary practices of our ancient relatives.
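The first of those open questions is, at bottom, a mixing problem, and a simple two-source model shows the shape of the calculation. The values below are hypothetical placeholders, not measurements from the study:

```python
def diet_d15n(meat_d15n, maggot_d15n, maggot_fraction):
    """Linear two-source mixing model: the d15N of a diet whose nitrogen
    comes partly from meat and partly from 15N-enriched maggots."""
    return (1 - maggot_fraction) * meat_d15n + maggot_fraction * maggot_d15n

# Hypothetical values: meat at 6 per mil, maggots enriched to 16 per mil.
for fraction in (0.0, 0.10, 0.25):
    print(f"{fraction:.0%} maggots -> {diet_d15n(6.0, 16.0, fraction):.1f} per mil")
# Even a modest maggot fraction shifts the dietary signal noticeably.
```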

Melanie Beasley is assistant professor of anthropology at Purdue University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Ars spoke with the military’s chief orbital traffic cop—here’s what we learned


“We have some 2,000 or 2,200 objects that I call the ‘red order of battle.'”

Col. Raj Agrawal participates in a change of command ceremony to mark his departure from Mission Delta 2 at Peterson Space Force Base, Colorado. Col. Barry Croker became the new commander of Mission Delta 2 on July 3.

For two years, Col. Raj Agrawal commanded the US military unit responsible for tracking nearly 50,000 human-made objects whipping through space. In this role, he was keeper of the orbital catalog and led teams tasked with discerning whether other countries’ satellites, mainly those of China and Russia, are peaceful or present a military threat to US forces.

This job is becoming more important as the Space Force prepares for the possibility of orbital warfare.

Ars visited with Agrawal in the final weeks of his two-year tour of duty as commander of Mission Delta 2, a military unit at Peterson Space Force Base, Colorado. Mission Delta 2 collects and fuses data from a network of sensors “to identify, characterize, and exploit opportunities and mitigate vulnerabilities” in orbit, according to a Space Force fact sheet.

This involves operating radars and telescopes, analyzing intelligence information, and “mapping the geocentric space terrain” to “deliver a combat-ready common operational picture” to military commanders. Agrawal’s job has long existed in one form or another, but the job description is different today. Instead of just keeping up with where things are in space—a job challenging enough—military officials now wrestle with distinguishing which objects might have a nefarious purpose.

From teacher to commander

Agrawal’s time at Mission Delta 2 ended on July 3. His next assignment will be as Space Force chair at the National Defense University. This marks a return to education for Agrawal, who served as a Texas schoolteacher for eight years before receiving his commission as an Air Force officer in 2001.

“Teaching is, I think, at the heart of everything I do,” Agrawal said. 

He taught music and math at Trimble Technical High School, an inner city vocational school in Fort Worth. “Most of my students were in broken homes and unfortunate circumstances,” Agrawal said. “I went to church with those kids and those families, and a lot of times, I was the one bringing them home and taking them to school. What was [satisfying] about that was a lot of those students ended up living very fulfilling lives.”

Agrawal felt a calling for higher service and signed up to join the Air Force. Given his background in music, he initially auditioned for and was accepted into the Air Force Band. But someone urged him to apply for Officer Candidate School, and Agrawal got in. “I ended up on a very different path.”

Agrawal was initially accepted into the ICBM career field, but that changed after the September 11 attacks. “That was a time when anyone with a name like mine had a hard time,” he said. “It took a little bit of time to get my security clearance.”

Instead, the Air Force assigned him to work in space operations. Agrawal quickly became an instructor in space situational awareness, did a tour at the National Reconnaissance Office, then found himself working at the Pentagon in 2019 as the Defense Department prepared to set up the Space Force as a new military service. Agrawal was tasked with leading a team of 100 people to draft the first Space Force budget.

Then, he received the call to report to Peterson Space Force Base to take command of what is now Mission Delta 2, the inheritor of decades of Air Force experience cataloging everything in orbit down to the size of a softball. The catalog was stable and predictable, lingering below 10,000 trackable objects until 2007. That’s when China tested an anti-satellite missile, shattering an old Chinese spacecraft into more than 3,500 pieces large enough to be routinely detected by the US military’s Space Surveillance Network.

This graph from the European Space Agency shows the growing number of trackable objects in orbit. Credit: European Space Agency

Two years later, an Iridium communications satellite collided with a defunct Russian spacecraft, adding thousands more debris fragments to low-Earth orbit. A rapid uptick in the pace of launches since then has added to the problem, further congesting busy orbital traffic lanes a few hundred miles above the Earth. Today, the orbital catalog numbers roughly 48,000 objects.

“This compiled data, known as the space catalog, is distributed across the military, intelligence community, commercial space entities, and to the public, free of charge,” officials wrote in a fact sheet describing Mission Delta 2’s role at Space Operations Command. Deltas are Space Force military units roughly equivalent to a wing or group command in the Air Force.

The room where it happens

The good news is that the US military is getting better at tracking things in space. A network of modern radars and telescopes on the ground and in space can now spot objects as small as a golf ball. Space is big, but these objects routinely pass close to one another. At speeds of nearly 5 miles per second, an impact would be catastrophic.
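That “nearly 5 miles per second” figure falls straight out of the circular-orbit speed formula, v = sqrt(mu/r). A minimal sketch, assuming a roughly 400 km altitude typical of low-Earth orbit (the altitude and variable names here are illustrative, not from the article):

```python
import math

MU_EARTH = 398600.4418  # km^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6371.0        # km, mean Earth radius

altitude_km = 400.0               # a typical low-Earth-orbit altitude
r = R_EARTH + altitude_km         # orbital radius from Earth's center

# Circular orbital speed: v = sqrt(mu / r)
v_km_s = math.sqrt(MU_EARTH / r)
v_mi_s = v_km_s / 1.609344        # convert km/s to mi/s

print(f"{v_km_s:.2f} km/s = {v_mi_s:.2f} mi/s")  # about 7.67 km/s, or 4.77 mi/s
```

Two objects meeting at anything like these speeds release far more energy per kilogram than a high explosive, which is why even golf-ball-sized debris is worth cataloging.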

But there’s a new problem. Today, the US military must not only screen for accidental collisions but also guard against an attack on US satellites in orbit. Space is militarized, a fact illustrated by growing fleets of satellites—primarily American, Chinese, and Russian—capable of approaching another country’s assets in orbit and, in some cases, disabling or destroying them. This has raised fears at the Pentagon that an adversary could take out US satellites critical for missile warning, navigation, and communications, with severe consequences for military operations and daily civilian life.

This new reality compelled the creation of the Space Force in 2019, beginning a yearslong process of migrating existing Air Force units into the new service. Now, the Pentagon is posturing for orbital warfare by investing in new technologies and reorganizing the military’s command structure.

Today, the Space Force is responsible for predicting when objects in orbit will come close to one another. This is called a conjunction in the parlance of orbital mechanics. The US military routinely issues conjunction warnings to commercial and foreign satellite operators to give them an opportunity to move their satellites out of harm’s way. These notices also go to NASA if there’s a chance of a close call with the International Space Station (ISS).
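At its core, a conjunction screen propagates pairs of cataloged objects forward in time and flags any predicted separation below a threshold. The toy below uses idealized circular orbits rather than the perturbed propagators (such as SGP4 on full element sets) that real screening relies on; the function names, the 7 km altitude offset, and the 10 km alert threshold are all illustrative assumptions:

```python
import math

MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def circular_position(r_km, inc_deg, t_s, phase_rad=0.0):
    """Position (km) at time t_s on a circular orbit of radius r_km,
    inclined inc_deg to the equator (ascending node on the x-axis)."""
    n = math.sqrt(MU / r_km**3)   # mean motion, rad/s
    theta = n * t_s + phase_rad   # angle traveled along the orbit
    inc = math.radians(inc_deg)
    return (r_km * math.cos(theta),
            r_km * math.sin(theta) * math.cos(inc),
            r_km * math.sin(theta) * math.sin(inc))

def screen(r_a, r_b, inc_deg, phase_b, horizon_s, threshold_km=10.0, step_s=1.0):
    """Coarsely sample both trajectories and report the closest approach."""
    closest_t, closest_d = 0.0, float("inf")
    for i in range(int(horizon_s / step_s) + 1):
        t = i * step_s
        d = math.dist(circular_position(r_a, inc_deg, t),
                      circular_position(r_b, inc_deg, t, phase_rad=phase_b))
        if d < closest_d:
            closest_t, closest_d = t, d
    return closest_t, closest_d, closest_d < threshold_km

# Two objects in the same plane, 7 km apart in altitude: the lower, faster
# one gradually closes a 0.05-radian along-track gap, and the screen flags
# the moment of closest approach.
t_ca, d_ca, alert = screen(6771.0, 6778.0, 51.6, phase_b=0.05, horizon_s=30000.0)
print(f"closest approach {d_ca:.1f} km at t = {t_ca:.0f} s, alert = {alert}")
```

Real warnings add covariance on each state to turn a miss distance into a collision probability, but the shape of the computation—propagate, difference, threshold—is the same.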

The first Trump administration approved a new policy to transfer responsibility for these collision warnings to the Department of Commerce, allowing the military to focus on national security objectives.

But the White House’s budget request for next year would cancel the Commerce Department’s initiative to take over collision warnings. Our discussion with Agrawal occurred before the details of the White House budget were made public last month, and his comments reflect official Space Force policy at the time of the interview. “In uniform, we align to policy,” Agrawal wrote on his LinkedIn account. “We inform policy decisions, but once they’re made, we align our support accordingly.”

US Space Force officials show the 18th Space Defense Squadron’s operations floor to officials from the German Space Situational Awareness Centre during an “Operator Exchange” event at Vandenberg Space Force Base, California, on April 7, 2022. Credit: US Space Force/Tech. Sgt. Luke Kitterman

Since our interview, analysts have also noticed an uptick in interesting Russian activity in space and tracked a suspected Chinese satellite refueling mission in geosynchronous orbit.

Let’s rewind the tape to 2007, the time of China’s game-changing anti-satellite test. Gen. Chance Saltzman, today the Space Force’s Chief of Space Operations, was a lieutenant colonel in command of the Air Force’s 614th Space Operations Squadron at the time. He was on duty when Air Force operators first realized China had tested an anti-satellite missile. Saltzman has called the moment a “pivot point” in space operations. “For those of us that are neck-deep in the business, we did have to think differently from that day on,” Saltzman said in 2023.

Agrawal was in the room, too. “I was on the crew that needed to count the pieces,” he told Ars. “I didn’t know the significance of what was happening until after many years, but the Chinese had clearly changed the nature of the space environment.”

The 2007 anti-satellite test also clearly changed the trajectory of Agrawal’s career. We present part of our discussion with Agrawal below, and we’ll share the rest of the conversation tomorrow. The text has been lightly edited for brevity and clarity.

Ars: The Space Force’s role in monitoring activities in space has changed a lot in the last few years. Can you tell me about these changes, and what’s the difference between what you used to call Space Situational Awareness, and what is now called Space Domain Awareness?

Agrawal: We just finished our fifth year as a Space Force, so as a result of standing up a military service focused on space, we shifted our activities to focus on what the joint force requires for combat space power. We’ve been doing space operations for going on seven decades. I think a lot of folks think that it was a rebranding, as opposed to a different focus for space operations, and it couldn’t be further from the truth. Compared to Space Domain Awareness (SDA), Space Situational Awareness (SSA) is kind of the knowledge we produce with all these sensors, and anybody can do space situational awareness. You have academia doing that. You’ve got commercial, international partners, and so on. But Space Domain Awareness, Gen. [John “Jay”] Raymond coined the term a couple years before we stood up the Space Force, and he was trying to get after, how do we create a domain focused on operational outcomes? That’s all we could say at the time. We couldn’t say war-fighting domain at the time because of the way of our policy, but our policy shifted to being able to talk about space as a place where, not that we want to wage war, but that we can achieve objectives, and do that with military objectives in mind.

We used to talk about detect, characterize, attribute, predict. And then Gen. [Chance] Saltzman added target onto the construct for Space Domain Awareness, so that we’re very much in the conversation of what it means to do a space-enabled attack and being able to achieve objectives in, from, and to space, and using Space Domain Awareness as a vehicle to do those things. So, with Mission Delta 2, what he did is he took the sustainment part of acquisition, software development, cyber defense, intelligence related to Space Domain Awareness, and then all the things that we were doing in Space Domain Awareness already, put all that together under one command … and called us Mission Delta 2. So, the 18th Space Defense Squadron … that used to kind of be the center of the world for Space Domain Awareness, maybe the only unit that you could say was really doing SDA, where everyone else was kind of doing SSA. When I came into command a couple years ago, and we face now a real threat to having space superiority in the space domain, I disaggregated what we were doing just in the 18th and spread out through a couple of other units … So, that way everyone’s got kind of majors and minors, but we can quickly move a mission in case we get tested in terms of cyber defense or other kinds of vulnerabilities.

This multi-exposure image depicts a satellite-filled sky over Alberta. Credit: Alan Dyer/VWPics/Universal Images Group via Getty Images

We can’t see the space domain, so it’s not like the air domain and sea domain and land domain, where you can kind of see where everything is, and you might have radars, but ultimately it’s a human that’s verifying whether or not a target or a threat is where it is. For the space domain, we’re doing all that through radars, telescopes, and computers, so the reality we create for everyone is essentially their reality. So, if there’s a gap, if there’s a delay, if there are some signs that we can’t see, that reality is what is created by us, and that is effectively the reality for everyone else, even if there is some other version of reality in space. So, we’re getting better and better at fielding capability to see the complexity, the number of objects, and then translating that into what’s useful for us—because we don’t need to see everything all the time—but what’s useful for us for military operations to achieve military objectives, and so we’ve shifted our focus just to that.

We’re trying to get to where commercial spaceflight safety is managed by the Office of Space Commerce, so they’re training side by side with us to kind of offload that mission and take that on. We’re doing up to a million notifications a day for conjunction assessments, sometimes as low as 600,000. But last year, we did 263 million conjunction notifications. So, we want to get to where the authorities are rightly aligned, where civil or commercial notifications are done by an organization that’s not focused on joint war-fighting, and we focus on the things that we want to focus on.

Ars: Thank you for that overview. It helps me see the canvas for everything else we’re going to talk about. So, today, you’re not only tracking new satellites coming over the horizon from a recent launch or watching out for possible collisions, you’re now trying to see where things are going in space and maybe even try to determine intent, right?

Agrawal: Yeah, so the integrated mission delta has helped us have intel analysts and professionals as part of our formation. Their mission is SDA as much as ours is, but they’re using an intel lens. They’re looking at predictive intelligence, right? I don’t want to give away tradecraft, but what they’re focused on is not necessarily where a thing is. It used to be that all we cared about was position and vector, right? As long as you knew an object’s position and the direction they were going, you knew their orbit. You had predictive understanding of what their element set would be, and you only had to do sampling to get a sense of … Is it kind of where we thought it was going to be? … If it was far enough off of its element set, then we would put more energy, more sampling of that particular object, and then effectively re-catalog it.
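The “position and vector” model Agrawal describes is literal: a single position and velocity sample determines the whole orbit. A minimal sketch, assuming an ISS-like circular state (the numbers and function name are illustrative), recovers the semi-major axis from the vis-viva equation and hence the orbital period used for the predictive “element set” he mentions:

```python
import math

MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def orbit_from_state(r_km, v_km_s):
    """Semi-major axis and period implied by one position/velocity sample.
    Vis-viva: v^2 = mu * (2/r - 1/a)  =>  a = 1 / (2/r - v^2/mu)."""
    a = 1.0 / (2.0 / r_km - v_km_s**2 / MU)
    period_s = 2.0 * math.pi * math.sqrt(a**3 / MU)
    return a, period_s

# ISS-like sample: 400 km altitude, circular speed at that radius
r = 6771.0
v = math.sqrt(MU / r)
a, period = orbit_from_state(r, v)
print(f"a = {a:.0f} km, period = {period / 60:.1f} min")  # roughly a 92-minute orbit
```

The result matches the “every 90 minutes” LEO cadence mentioned later in the interview. The full orbit also needs the velocity direction (for eccentricity, inclination, and so on), and real catalogs fold in drag and other perturbations, which is why objects drift off their element sets and need re-observation.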

Now, it’s a different model. We’re looking at state vectors, and we’re looking at anticipatory modeling, where we have some 2,000 or 2,200 objects that I call the “red order of battle”—that are high-interest objects that we anticipate will do things that are not predicted, that are not element set in nature, but that will follow some type of national interest. So, our intel apparatus gets after what things could potentially be a risk, and what things to continue to understand better, and what things we have to be ready to hold at risk. All of that’s happening through all the organizations, certainly within this delta, but in partnership and in support of other capabilities and deltas that are getting after their parts of space superiority.

Hostile or friendly?

Ars: Can you give some examples of these red order of battle objects?

Agrawal: I think you know about Shijian-20 (a “tech demo” satellite that has evaded inspection by US satellites) and Shijian-24C (which the Space Force says demonstrated “dogfighting” in space), things that are advertised as scientific in nature, but clearly demonstrate capability that is not friendly, and certainly are behaving in ways that are unprofessional. In any other domain, we would consider them hostile, but in space, we try to be a lot more nuanced in terms of how we characterize behavior. Still, when something’s behaving in a way that isn’t pre-planned, isn’t pre-coordinated, and potentially causes hazard, harm, or contest with friendly forces, we now get into a situation where we have to ask: Is that behavior hostile or not? Is that escalatory or not? Space Command is charged with those authorities, so they work through the legal apparatus in terms of what the definition of a hostile act is and when something behaves in a way that we consider to be of national security interest.

We present all the capability to be able to do all that, and we have to be as cognizant on the service side as the combatant commanders are, so that our intel analysts are informing the forces and the training resources to be able to anticipate the behavior. We’re not simply recognizing it when it happens, but studying nations in the way they behave in all the other domains, in the way that they set policy, in the way that they challenge norms in other international arenas like the UN and various treaties, and so on. The biggest predictor, for us, of hazardous behaviors is when nations don’t coordinate with the international community on activities that are going to occur—launches, maneuvers, and fielding of large constellations, megaconstellations.


There are nearly 8,000 Starlink satellites in orbit today. SpaceX adds dozens of satellites to the constellation each week. Credit: SpaceX

As you know, we work very closely with Starlink, and they’re very, very responsible. They coordinate and flight plan. They use the kind of things that other constellations are starting to use … changes in those elsets (element sets), for lack of a better term, state vectors, we’re on top of that. We’re pre-coordinating that. We’re doing that weeks or months in advance. We’re doing that in real-time in cooperation with these organizations to make sure that space remains safe, secure, accessible, profitable even, for industry. When you have nations, where they’re launching over their population, where they’re creating uncertainty for the rest of the world, there’s nothing else we can do with it other than treat that as potentially hostile behavior. So, it does take a lot more of our resources, a lot more of our interest, and it puts [us] in a situation where we’re posturing the whole joint force to have to deal with that kind of uncertainty, as opposed to cooperative launches with international partners, with allies, with commercial, civil, and academia, where we’re doing that as friends, and we’re doing that in cooperation. If something goes wrong, we’re handling that as friends, and we’re not having to involve the rest of the security apparatus to get after that problem.

Ars: You mentioned that SpaceX shares Starlink orbit information with your team. Is it the same story with Amazon for the Kuiper constellation?

Agrawal: Yeah, it is. The good thing is that all the US and allied commercial entities, so far, have been super cooperative with Mission Delta 2 in particular, to be able to plan out, to talk about challenges, to even change the way they do business, learning more about what we are asking of them in order to be safe. The Office of Space Commerce, obviously, is now in that conversation as well. They’re learning that trade and ideally taking on more of that responsibility. Certainly, the evolution of technology has helped quite a bit, where you have launches that are self-monitored, that are able to maintain their own safety, as opposed to requiring an entire apparatus of what was the US Air Force often having to expend a tremendous amount of resources to provide for the safety of any launch. Now, technology has gotten to a point where a lot of that is self-monitored, self-reported, and you’ll see commercial entities blow up their own rockets no matter what’s onboard if they see that it’s going to cause harm to a population, and so on. So, yeah, we’re getting a lot of cooperation from other nations, allies, partners, close friends that are also sharing and cooperating in the interest of making sure that space remains sustainable and secure.

“We’ve made ourselves responsible”

Ars: One of the great ironies is that after you figure out the positions and tracks of Chinese or Russian satellites or constellations, you’re giving that data right back to them in the form of conjunction and collision notices, right?

Agrawal: We’ve made ourselves responsible. I don’t know that there’s any organization holding us accountable to that. We believe it’s in our interests, in the US’s interests, to provide for a safe, accessible, secure space domain. So, whatever we can do to help other nations also be safe, we’re doing it certainly for their sake, but we’re doing it as much for our sake, too. We want the space domain to be safe and predictable. We do have an apparatus set up in partnership with the State Department, and with a tremendous amount of oversight from the State Department, and through US Space Command to provide for spaceflight safety notifications to China and Russia. We send notes directly to offices within those nations. Most of the time they don’t respond. Russia, as far as I recall, hasn’t responded at all in the past couple of years. China has responded a couple of times to those notifications. And we hope that, through small measures like that, we can demonstrate our commitment to getting to a predictable and safe space environment.

A model of a Chinese satellite refueling spacecraft on display during the 13th China International Aviation and Aerospace Exhibition on October 1, 2021, in Zhuhai, Guangdong Province of China. Credit: Photo by VCG/VCG via Getty Images

Ars: What does China say in response to these notices?

Agrawal: Most of the time it’s copy or acknowledged. I can only recall two instances where they’ve responded. But we did see some hope earlier this year and last year, where they wanted to open up technical exchanges with us and some of their [experts] to talk about spaceflight safety, and what measures they could take to open up those kinds of conversations, and what they could do to get a more secure, safer pace of operations. That, at some point, got delayed because of the holiday that they were going through, and then those conversations just halted, or at least progress on getting those conversations going halted. But we hope that there’ll be an opportunity again in the future where they will open up those doors again and have those kinds of conversations because, again, transparency will get us to a place where we can be predictable, and we can all benefit from orbital regimes, as opposed to using them exploitatively. LEO is just one of those places where you’re not going to hide activity, so you just are creating risk, uncertainty, and potential escalation by launching into LEO and not communicating throughout that whole process.

Ars: Do you have any numbers on how many of these conjunction notices go to China and Russia? I’m just trying to get an idea of what proportion go to potential adversaries.

Agrawal: A lot. I don’t know the degree of how many thousands go to them, but on a regular basis, I’m dealing with debris notifications from Russian and Chinese ASAT (anti-satellite) testing. That has put the ISS at risk a number of times. We’ve had maneuvers occur in recent history as a result of Chinese rocket body debris. Debris can’t maneuver, and unfortunately, we’ve gotten into situations particularly with those two nations, which talk about wanting to have safer operations but continue to conduct debris-causing tests. We’re going to be dealing with that for generations, and we are going to have to design capability to maneuver around those debris clouds as just a function of operating in space. So, we’ve got to get to a point where we’re not doing that kind of testing in orbit.

Ars: Would it be accurate to say you send these notices to China and Russia daily?

Agrawal: Yeah, absolutely. That’s accurate. These debris clouds are in LEO, so as you can imagine, as those debris clouds go around the Earth every 90 minutes, we’re dealing with conjunctions. There are some parts of orbits that are just unusable as a result of that unsafe ASAT test.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Ars spoke with the military’s chief orbital traffic cop—here’s what we learned Read More »

20-years-after-katrina,-new-orleans-remembers

20 years after Katrina, New Orleans remembers


20 years ago, Ivor Van Heerden warned of impending disaster in New Orleans. Are his warnings still going unheeded?

A man is stranded on a rooftop in the aftermath of Hurricane Katrina in 2005. Credit: Wickes Helmboldt

Next month marks the 20th anniversary of one of the most devastating natural disasters in US history: Hurricane Katrina, a Category 3 storm that made landfall on August 29, 2005. The storm itself was bad enough, but the resulting surge of water caused havoc for New Orleans in particular when the city’s protective levees failed, flooding much of New Orleans and killing 1,392 people. National Geographic is marking the occasion with a new documentary series: Hurricane Katrina: Race Against Time.

The five-part documentary is directed by Oscar nominee Traci A. Curry (Attica) and co-produced by Ryan Coogler’s Proximity Media, in conjunction with Lightbox. The intent was to go beyond the headlines of yesteryear and re-examine the many systemic failures that occurred while also revealing “stories of survival, heroism, and resilience,” Proximity’s executive producers said in a statement. “It’s a vital historical record and a call to witness, remember, and reckon with the truth of Hurricane Katrina’s legacy.”

Race Against Time doesn’t just rehash the well-worn narrative of the disaster; it centers the voices of the people who were there on the ground: residents, first responders, officials, and so forth. Among those interviewed for the documentary is geologist/marine scientist Ivor Van Heerden, author of The Storm: What Went Wrong and Why During Hurricane Katrina: the Inside Story from One Louisiana Scientist (2006).

Around 1998, Van Heerden set up Louisiana State University’s (LSU) fledgling Hurricane Center with his colleague Marc Levitan, developing the first computer modeling efforts for local storm surges. They had a supercomputer for the modeling and LiDAR data for accurate digital elevation models, and since there was no way to share data among the five major parishes, they created a networked geographic information system (GIS) to link them. Part of Van Heerden’s job involved driving all over New Orleans to inspect the levees, and he didn’t like what he saw: levees with big bows, sinking under their own weight, for example, and others with large cracks.

Van Heerden also participated in the 2004 Hurricane Pam mock scenario, designed as a test run for hurricane planning for the 13 parishes of southeastern Louisiana, including New Orleans. It was essentially a worst-case scenario for the conditions of Hurricane Betsy, assuming that the whole city would be flooded. “We really had hoped that the exercise would wake everybody up, but quite honestly we were laughed at a few times during the exercise,” Van Heerden told Ars. He recalled telling one woman from FEMA that they should be thinking about using tents to house evacuees: “She said, ‘Americans don’t live in tents.'”

Stormy weather

Mayor Ray Nagin orders a mandatory evacuation of New Orleans. ABC News Videosource

The tens of thousands of stranded New Orleans residents in the devastating aftermath of Katrina could have used those tents. Van Heerden still vividly recalls his frustration over the catastrophic failures that occurred on so many levels. “We knew the levees had failed, we knew that there had been catastrophic structural failure, but nobody wanted to hear it initially,” he said. He and his team were out in the field in the immediate aftermath, measuring water levels and sampling the water for pathogens and toxic chemicals. Naturally they came across people in need of rescue and were able to radio locations to the Louisiana State University police.

“An FBI agent told me, ‘If you find any bodies, tie them with a piece of string to something so they don’t float away and give us the lats and longs,'” Van Heerden recalled. The memories haunt him still. Some of the bodies were drowned children, which he found particularly devastating since he had a young daughter of his own at the time.

How did it all go so wrong? After 1965’s Hurricane Betsy flooded most of New Orleans, the federal government started a levee building program with the US Army Corps of Engineers (USACE) in charge. “Right at the beginning, the Corps used very old science in terms of determining how high to make the levees,” said Van Heerden. “They had access to other very good data, but they chose not to use it for some reason. So they made the levees way too low.”

“They also ignored some of their own geotechnical science when designing the levees,” he continued. “Some were built in sand with very shallow footings, so the water just went underneath and blew out the levee. Some were built on piles of earth, again with very shallow footings, and they just fell over. The 17th Street Canal, the whole levee structure actually slid 200 feet.”

There had also been significant alterations to the local landscape since Hurricane Betsy. In the past, the wetlands, especially the cypress tree swamps, provided some protection from storm surges. In 1992, for example, the Category 5 Hurricane Andrew made landfall on the Atchafalaya Delta, where healthy wetlands reduced its energy by 50 percent between the coast and Morgan City, per Van Heerden. But other wetlands in the region changed drastically with the dredging of a canal called the Mississippi River Gulf Outlet, running from New Orleans to the Gulf of Mexico.

“It was an open conduit for surge to get into New Orleans,” said Van Heerden. “The saltwater got into the wetlands and destroyed it, especially the cypress trees. This canal had opened up, in some places, to five times its width, allowing waves to build on the surface. The earthen levees weren’t armored in any way, so they just collapsed. They blew apart. That’s why parts of St. Bernard saw a wave of water 10 feet high.”

Just trying to survive

Stranded New Orleans residents gather in a shelter during Hurricane Katrina. KTVT-TV

Add in drastic cuts to FEMA under then-President George W. Bush—who inherited “a very functional, very well-organized” version of the department from his predecessor, Bill Clinton, per Van Heerden—and the stage was set for a disaster like Katrina’s harrowing aftermath. It didn’t help that New Orleans Mayor Ray Nagin delayed issuing a mandatory evacuation order until some 24 hours before the storm hit, making it much more difficult for residents to follow those orders in a timely fashion.

There were also delays in conveying the vital information that the levees had failed. “We now know that the USACE had a guy in a Coast Guard helicopter who actually witnessed the London Avenue Canal failure, at 9:06 AM on Day One,” said Van Heerden. “That guy went to Baton Rouge and he didn’t tell a soul other than the Corps. So the Corps knew very early what was going on and they did nothing about it. They had a big megaphone and millions of dollars in public relations and kept saying it was an act of God. It took until the third week of September for us to finally get the media to realize that this was a catastrophic failure of the levees.”

The USACE has never officially apologized for what happened, per Van Heerden. “Not one of them lost their job after Katrina,” he said. But LSU fired Van Heerden in 2009, sparking protest from faculty and students. The university gave no reason for his termination, but it was widely speculated at the time that Van Heerden’s outspoken criticism of the USACE was a factor, with LSU fearing it might jeopardize funding. Van Heerden sued, and the university settled. But he hasn’t worked in academia since and now consults with various nonprofit organizations on flooding and storm surge impacts.

The widespread reports of looting and civil war further exacerbated the situation as survivors swarmed the Superdome and the nearby convention center. The city had planned for food and water for 12,000 people housed at the Superdome for 48 hours. The failure of the levees swelled that number to 30,000 people stranded for several days, waiting in vain for the promised cavalry to arrive.

Van Heerden acknowledges the looting but insists most of that was simply due to people trying to survive in the absence of any other aid. “How did they get water on the interstate?” said Van Heerden. “They went to a water company, broke in and hot-wired a truck, then went around and gave water to everyone.”

As for the widespread belief outside the city that there was unchecked violence and a brewing civil war, “That doesn’t happen in a catastrophe,” he said. The rumors were driven by reports of shots being fired but, “there are a lot of hunters in Louisiana, and the hunter’s SOS is to fire three shots in rapid succession,” he said. “One way to say ‘I’m here!’ is to fire a gun. But everybody bought into that civil war nonsense.”

“Another ticking time bomb”

LSU Hurricane Center co-founder Ivor Van Heerden working at his desk in 2005. Australian Broadcasting Corporation

The levees have since been rebuilt, and Van Heerden acknowledges that some of the repairs are robust. “They used more concrete, they put in protection pads and deeper footings,” he said. “But they didn’t take into account—and they admitted this a few years ago—subsidence in Louisiana, which is two to two-and-a-half feet every century. And they didn’t take into account global climate change and the associated rising sea levels. Within the next 70 years, sea level in Louisiana is going to rise four feet over millions of square miles. If you’ve got a levee with a [protective] marsh in front of it, before too long that marsh is no longer going to exist, so the water is going to move further and further in-shore.”

Then there’s the fact that hurricanes these days are now bigger in diameter than they were 30 years ago, thanks to the extra heat. “They get up to a Category 5 a lot quicker,” said Van Heerden. “The frequency also seems to be creeping up. It’s now four times as likely you will experience hurricane-force winds.” Van Heerden has run storm surge models assuming a 3-foot rise in sea level. “What we saw was the levees wouldn’t be high enough in New Orleans,” he said. “I hate to say it, but it looks like another ticking time bomb. Science is a quest for the truth. You ignore the science at your folly.”

Assuming there was sufficient public and political will, how should the US be preparing for future tropical storms? “In many areas we need to retreat,” said Van Heerden. “We need to get the houses and buildings out and rebuild the natural vegetation, rebuild the wetlands. On the Gulf Coast, sea level is really going to rise, and we need to rethink our infrastructure. This belief that, ‘Oh, we’re going to put up a big wall’—in the long run it’s not going to work. The devastation from tropical storms is going to spread further inland through very rapid downpours, and that’s something we’re going to have to plan mitigations for. But I just don’t see any movement in that direction.”

Perhaps documentaries like Race Against Time can help turn the tide; Van Heerden certainly hopes so. He also hopes the documentary can correct several public misconceptions of what happened—particularly the tendency to blame the New Orleans residents trying to survive in appalling conditions, rather than the government that failed them.

“I think this is a very good documentary in showing the plight of the people and what they suffered, which was absolutely horrendous,” said Van Heerden. “I hope people watching will realize that yes, this is a piece of our history, but sometimes the past is the key to the present. And ask themselves, ‘Is this a foretaste of what’s to come?'”

Hurricane Katrina: Race Against Time premieres on July 27, 2025, on National Geographic. It will be available for streaming starting July 28, 2025, on Disney+ and Hulu.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

20 years after Katrina, New Orleans remembers Read More »

robots-eating-other-robots:-the-benefits-of-machine-metabolism

Robots eating other robots: The benefits of machine metabolism

If you define “metabolism” loosely enough, these robots may have one.

For decades we’ve been trying to make robots smarter and more physically capable by mimicking biological intelligence and movement. “But in doing so, we’ve been just replicating the results of biological evolution—I say we need to replicate its methods,” argues Philippe Wyder, a developmental robotics researcher at Columbia University. Wyder led a team that demonstrated a machine with a rudimentary form of what they’re calling a metabolism.

He and his colleagues built a robot that could consume other robots to physically grow, become stronger, more capable, and continue functioning.

Nature’s methods

The idea of robotic metabolism combines various concepts in AI and robotics. The first is artificial life, which Wyder termed “a field where people study the evolution of organisms through computer simulations.” Then there is the idea of modular robots: reconfigurable machines that can change their architecture by rearranging collections of basic modules, an approach pioneered in the US in the 1990s by researchers including Daniela Rus and Mark Yim.

Finally, there is the idea that we need a shift from a goal-oriented design we’ve been traditionally implementing in our machines to a survivability-oriented design found in living organisms, which Magnus Egerstedt proposed in his book Robot Ecology.

Wyder’s team took all these ideas, merged them, and prototyped a robot that could “eat” other robots. “I kind of came at this from many different angles,” Wyder says.

The key source of inspiration, though, was the way nature builds its organisms. There are 20 standard amino acids universally used by life that can be combined into trillions of proteins, forming the building blocks of countless life forms. Wyder started his project by designing a basic robotic module that was intended to play a role roughly equivalent to a single amino acid. This module, called a Truss Link, looked like a rod: 16 centimeters long and containing batteries, electronic controllers, and servomotors that enabled it to expand, contract, and crawl in a straight line. Each module had permanent magnets at both ends, which let it connect to other rods and form lightweight lattices.

Wyder’s idea was to throw a number of these modules in a confined space to see if they would assemble into more complex structures by bumping into each other. The process might be analogous to how amino acids spontaneously formed simple organic molecules roughly 4 billion years ago.

Robotic growth

The first stage of Wyder’s experiment was set up in a space with a few terrain features, like a drop, a few obstacles, and a standing cylinder. The robots were operated by the team, which directed them to form various structures. Three Truss Links connected with the magnets at one center point formed a three-pointed star. Other structures they formed included a triangle; a diamond with a tail, which was a triangle connected to a three-pointed star; and a tetrahedron, a 3D structure that looked like a triangular pyramid. The robots had to find other Truss Links and make them part of their bodies to grow into more complex forms.

As they were growing, they were also becoming more capable. A single Truss Link could only move in a straight line, a triangle could turn left and right, a diamond with a tail could traverse small bumps, while a tetrahedron could move itself over small walls. Finally, a tetrahedron with a ratchet—an additional Truss Link the robot could use a bit like a walking stick—could assist other robots in forming tetrahedrons, which was a difficult, risky maneuver that took multiple attempts even for the skilled operators.

Still, all this growth in size and capability was orchestrated by the researchers controlling the hardware. The question was whether these self-assembly processes could work with no human overlords around.

“We wanted to know if the Truss Links would meet on their own,” Wyder says. “If the Truss Links are exactly parallel, they will never connect. But being parallel is just one configuration, and there are infinite configurations where they are not parallel.” To check how this would play out, the team used computer simulations of six randomly spawned and randomly moving Truss Links in a walled environment. In 2,000 runs, each 20 minutes long, the modules ended up with a 64 percent chance of forming two three-pointed star shapes, a roughly 8.4 percent chance of assembling into two triangles, and a nearly 45 percent chance of ending up as a diamond with a tail. (Some of these configurations were intermediates on the pathway to others, so the numbers add up to more than 100 percent.)
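
To see why those outcome percentages can sum to more than 100, here is a toy tally in the same spirit: a run is counted toward every shape it passes through, so intermediate shapes inflate the totals. The probabilities and branching logic below are invented for illustration and are not the paper's physics simulation:

```python
import random

random.seed(0)  # deterministic toy example

SHAPES = ["two stars", "two triangles", "diamond with tail"]

def run_once():
    """Return the set of shapes observed during one hypothetical run."""
    observed = set()
    if random.random() < 0.64:
        observed.add("two stars")              # common intermediate shape
        if random.random() < 0.70:
            observed.add("diamond with tail")  # often grows out of two stars
    if random.random() < 0.084:
        observed.add("two triangles")
    return observed

runs = [run_once() for _ in range(2000)]
for shape in SHAPES:
    share = sum(shape in r for r in runs) / len(runs)
    print(f"{shape}: {share:.1%}")
```

Because a run that reaches a diamond with a tail was also, at some point, two three-pointed stars, it is counted in both categories, and the printed shares exceed 100 percent in total.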

When moving randomly, Truss Links could also repair structures after their magnets got disconnected and even replace a malfunctioning Truss Link in the structure with a new one. But did they really metabolize anything?

Searching for purpose

The name “metabolism” comes from the Greek word “metabolē” which means “change.” Wyder’s robots can assemble, grow, reconfigure, rebuild, and, to a limited extent, sustain themselves, which definitely qualifies as change.

But metabolism, as it’s commonly understood, involves consuming materials in ways that extract energy and transform their chemicals. The Truss Links are limited to using prefabricated, compatible modules—they can’t consume some plastic and old lithium-ion batteries and metabolize them into brand-new Truss Links. Whether this qualifies as metabolism depends more on how far we want to stretch the definition than on what the actual robots can do.

And stretching definitions, so far, may be their strongest use case. “I can’t give you a real-world use case,” Wyder acknowledges. “We tried to make the truss robots carry loads from one point to another, but it’s not even included in our paper—it’s a research platform at this point.” The first thing he thinks the robotic metabolism platform is missing is a wider variety of modules. The team used homogeneous modules in this work but is already thinking about branching out. “Life uses around 20 different amino acids to work, so we’re currently focusing on integrating additional modules with various sensors,” Wyder explains. But the robots are also lacking something way more fundamental: a purpose.

Life evolves to improve the chances of survival. It does so in response to pressures like predators or a challenging environment. A living thing is usually doing its best to avoid dying.

Egerstedt, in Robot Ecology, argues we should build and program robots the same way, with “survivability constraints” in mind. Wyder, in his paper, also claims we need to develop a “self-sustained robot ecology” in the future. But he also thinks we shouldn’t take this life analogy too far. His goal is not creating a robotic ecosystem where robots would hunt and feed on other robots, constantly improving their own designs.

“We would give robots a purpose. Let’s say a purpose is to build a lunar colony,” Wyder says. Survival should be the first objective, because if the platform doesn’t survive on the Moon, it won’t build a lunar colony. Multiple small units would first disperse to explore the area and then assemble into a bigger structure like a building or a crane. “And this large structure would absorb, recycle, or eat, if you will, all these smaller robots to integrate and make use of them,” Wyder claims.

A robotic platform like this, Wyder thinks, should adapt to unexpected circumstances even better than life itself. “There may be a moment where having a third arm would really save your life, but you can’t grow one. A robot, given enough time, won’t have that problem,” he says.

Science Advances, 2025.  DOI: 10.1126/sciadv.adu6897


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

Robots eating other robots: The benefits of machine metabolism Read More »

this-aerogel-and-some-sun-could-make-saltwater-drinkable

This aerogel and some sun could make saltwater drinkable

About 71 percent of Earth’s surface is covered by water. An overwhelming 97 percent of that water is found in the oceans, leaving us with only 3 percent in the form of freshwater—and much of that is frozen in glaciers. That leaves just 0.3 percent of that freshwater on the surface in lakes, swamps, springs, and our main sources of drinking water, rivers and streams.
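
Chaining those percentages together makes the scarcity concrete. A quick sanity check using only the figures above:

```python
# Of all water on Earth, how much is liquid surface freshwater
# (lakes, rivers, swamps, springs)? Figures from the text above.
freshwater_share = 0.03         # 3 percent of all water is freshwater
surface_share_of_fresh = 0.003  # 0.3 percent of that freshwater is at the surface

surface_fresh = freshwater_share * surface_share_of_fresh
print(f"surface freshwater: {surface_fresh:.4%} of all water on Earth")
```

That works out to roughly 0.009 percent of the planet's water, or about one part in 11,000, being readily drinkable.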

Despite our planet’s famously blue appearance from space, thirsty aliens would be disappointed. Drinkable water is actually pretty scarce.

As if that doesn’t already sound unsettling, what little water we have is also threatened by climate change, urbanization, pollution, and a global population that continues to expand. Over 2 billion people live in regions where their only source of drinking water is contaminated. Pathogenic microbes in the water can cause cholera, diarrhea, dysentery, polio, and typhoid, which could be fatal in areas without access to vaccines or medical treatment.

Desalination of seawater is a possible solution, and one approach involves porous materials absorbing water that evaporates when heated by solar energy. The problem with most existing solar-powered evaporators is that they are difficult to scale up for larger populations. Performance decreases with size, because less water vapor can escape from materials with tiny pores and thick boundaries—but there is a way to overcome this.

Feeling salty

Researcher Xi Shen of the Hong Kong Polytechnic University wanted to figure out a way to improve these types of systems. He and his team have now created an aerogel that is far more efficient at producing fresh water than previous methods of desalination.

“The key factors determining the evaporation performance of porous evaporators include heat localization, water transport, and vapor transport,” Shen said in a study recently published in ACS Energy Letters. “Significant advancements have been made in the structural design of evaporators to realize highly efficient thermal localization and water transport.”

Solar radiation is the only energy used to evaporate the water, which is why many attempts have been made to develop what are called photothermal materials. When sunlight hits these materials, they absorb the light and convert it into heat, which can be used to speed up evaporation. Photothermal materials can be made of substances including polymers, metals, alloys, ceramics, or cements. Hydrogels have been used to successfully decontaminate and desalinate water before, but they are polymers designed to retain water, which hurts their efficiency and stability; aerogels, by contrast, are made of polymers that hold air. This is why Shen and his team decided to create a photothermal aerogel.

This aerogel and some sun could make saltwater drinkable Read More »

widely-panned-arsenic-life-paper-gets-retracted—15-years-after-brouhaha

Widely panned arsenic life paper gets retracted—15 years after brouhaha

In all, the astronomic hype was met with earth-shaking backlash in 2010 and 2011. In 2012, Science published two studies refuting the claim that GFAJ-1 incorporates arsenic atoms into its DNA. Outside scientists concluded that it is an arsenic-tolerant extremophile, but not a profoundly different life form.

Retraction

But now, in 2025, it is once again spurring controversy; on Thursday, Science announced that it is retracting the study.

Some critics, such as Redfield, cheered the move. Others questioned the timing, noting that 15 years had passed, but only a few months had gone by since The New York Times published a profile of Wolfe-Simon, who is now returning to science after being perceived as a pariah. Wolfe-Simon and most of her co-authors, meanwhile, continue to defend the original paper and protest the retraction.

In a blog post on Thursday, Science’s executive editor, Valda Vinson, and Editor-in-Chief Holden Thorp explained the retraction by saying that Science’s criteria for issuing a retraction have evolved since 2010. At the time, it was reserved for claims of misconduct or fraud but now can include serious flaws. Specifically, Vinson and Thorp referenced the criticism that the bacterium’s genetic material was not properly purified of background arsenic before it was analyzed. While emphasizing that there has been no suggestion of fraud or misconduct on the part of the authors, they wrote that “Science believes that the key conclusion of the paper is based on flawed data,” and it should therefore be retracted.

Jonathan Eisen, an evolutionary biologist at the University of California, Davis, criticized the move. Speaking with Science’s news team, which is independent from the journal’s research-publishing arm, Eisen said that despite being a critic of the 2010 paper, he thought the discussion of controversial studies should play out in the scientific literature and not rely on subjective decisions by editors.

In an eLetter attached to the retraction notice, the authors dispute the retraction, too, saying, “While our work could have been written and discussed more carefully, we stand by the data as reported. These data were peer-reviewed, openly debated in the literature, and stimulated productive research.”

One of the co-authors, Ariel Anbar, a geochemist at Arizona State University, told Nature that the study had no mistakes but that the data could be interpreted in different ways. “You don’t retract because of a dispute about data interpretation,” he said. If that were the case, “you’d have to retract half the literature.”

Widely panned arsenic life paper gets retracted—15 years after brouhaha Read More »