A view of the downtown Memphis skyline, including the Hernando De Soto bridge, which has been retrofitted for earthquakes. Memphis is located around 40 miles from a fault line in the quake-prone New Madrid system.
The first earthquake struck while the town was still asleep. Around 2:00 am on Dec. 16, 1811, New Madrid—a small frontier settlement of 400 people on land now located in Missouri—was jolted awake. Panicked townsfolk fled their homes as buildings collapsed and the smell of sulfur filled the air.
The episode didn’t last long. But the worst was yet to come. Nearly two months later, after dozens of aftershocks and another massive quake, the fault line running directly under the town ruptured. Thirty-one-year-old resident Eliza Bryan watched in horror as the Mississippi River receded and swept away boats full of people. In nearby fields, geysers of sand erupted, and a rumble filled the air.
In the end, the town had dropped at least 15 feet. Bryan and others spent a year and a half living in makeshift camps while they waited for the aftershocks to end. Four years later, the shocks had become less common. At last, the rattled townspeople began “to hope that ere long they will entirely cease,” Bryan wrote in a letter.
Whether Bryan’s hope will stand the test of time is an open question.
The US Geological Survey released a report in December 2023 detailing the risk of dangerous earthquakes around the country. As expected, the hazard map shows deep red bands of risk running through California and Alaska. But the map also sports a big bull’s eye in the middle of the country—right over New Madrid.
The USGS estimates that the region has a 25 to 40 percent chance of a magnitude 6.0 or higher earthquake in the next 50 years, and as much as a 10 percent chance of a repeat of the 1811-1812 sequence. While the risk is much lower compared to, say, California, experts say that when it comes to earthquake resistance, the New Madrid region suffers from inadequate building codes and infrastructure.
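For a rough sense of what those odds imply year to year, here is a back-of-the-envelope conversion assuming earthquakes arrive at a constant average rate (a Poisson process). That assumption is a simplification for illustration only, not how the USGS hazard model actually works.

```python
import math

def annual_rate(p_window, years=50.0):
    # If P(at least one event in `years`) = p_window under a Poisson process,
    # then p_window = 1 - exp(-rate * years); solve for the rate.
    return -math.log(1.0 - p_window) / years

for p50 in (0.25, 0.40):
    rate = annual_rate(p50)
    p_one_year = 1.0 - math.exp(-rate)
    print(f"50-year probability {p50:.0%} -> about {p_one_year:.2%} chance in any given year")
```

Under that assumption, a 25 to 40 percent chance over 50 years corresponds to roughly a 0.6 to 1 percent chance of a magnitude 6.0 or greater quake in any single year.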
Caught in this seismic splash zone are millions of people living across five states—mostly in Tennessee and Missouri, as well as Kentucky, Illinois, and Arkansas—including two major cities, Memphis and St. Louis. Mississippi, Alabama, and Indiana have also been noted as places of concern.
In response to the potential for calamity, geologists have learned a lot about this odd earthquake hotspot over the last few decades. Yet one mystery has persisted: why earthquakes even happen here in the first place.
This is a problem, experts say. Without a clear mechanism for why New Madrid experiences earthquakes, scientists are still struggling to answer some of the most basic questions, like when—or even if—another large earthquake will strike the region. In Missouri today, earthquakes are “not as front of mind” as other natural disasters, said Jeff Briggs, earthquake program manager for the Missouri State Emergency Management Agency.
But when the next big shake comes, “it’s going to be the biggest natural disaster this state has ever experienced.”
Is space-based solar power a costly, risky pipe dream? Or is it a viable way to combat climate change? Although beaming solar power from space to Earth could ultimately involve transmitting gigawatts, the process could be made surprisingly safe and cost-effective, according to experts from Space Solar, the European Space Agency, and the University of Glasgow.
But we’re going to need to move well beyond demonstration hardware and solve a number of engineering challenges if we want to develop that potential.
Designing space-based solar
Beaming solar energy from space is not new; telecommunications satellites have been sending microwave signals generated by solar power back to Earth since the 1960s. But sending useful amounts of power is a different matter entirely.
“The idea [has] been around for just over a century,” said Nicol Caplin, deep space exploration scientist at the ESA, on a Physics World podcast. “The original concepts were indeed sci-fi. It’s sort of rooted in science fiction, but then, since then, there’s been a trend of interest coming and going.”
Researchers are scoping out multiple designs for space-based solar power. Matteo Ceriotti, senior lecturer in space systems engineering at the University of Glasgow, wrote in The Conversation that many designs have been proposed.
The Solaris initiative is exploring two possible technologies, according to Sanjay Vijendran, lead for the Solaris initiative at the ESA: one that involves beaming microwaves from a station in geostationary orbit down to a receiver on Earth and another that involves using immense mirrors in a lower orbit to reflect sunlight down onto solar farms. He said he thinks that both of these solutions are potentially valuable. Microwave transmission has drawn wider interest and was the main focus of these interviews; it has enormous potential, although high-frequency radio waves could also be used to carry the power.
“You really have a source of 24/7 clean power from space,” Vijendran said. The power can be transmitted regardless of weather conditions because of the frequency of the microwaves.
“A 1-gigawatt power plant in space would be comparable to the top five solar farms on earth. A power plant with a capacity of 1 gigawatt could power around 875,000 households for one year,” said Andrew Glester, host of the Physics World podcast.
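As a quick sanity check on that comparison, the arithmetic below runs a 1-gigawatt plant continuously for a year and divides by a per-household consumption figure. The roughly 10 MWh per household per year is an assumed US-style average, not a number from the podcast.

```python
# Check of the "875,000 households" claim for a 1 GW plant running around the clock.
plant_power_gw = 1.0
hours_per_year = 8760
energy_mwh = plant_power_gw * 1000 * hours_per_year   # about 8.76 million MWh per year
household_use_mwh = 10.0                              # assumed annual use per household
print(f"{energy_mwh / household_use_mwh:,.0f} households")   # about 876,000
```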
But we’re not ready to deploy anything like this. “It will be a big engineering challenge,” Caplin said. There are a number of physical hurdles involved in successfully building a solar power station in space.
Using microwave technology, the solar array for an orbiting power station that generates a gigawatt of power would have to be over 1 square kilometer in size, according to a Nature article by senior reporter Elizabeth Gibney. “That’s more than 100 times the size of the International Space Station, which took a decade to build.” It would also need to be assembled robotically, since the orbiting facility would be uncrewed.
The solar cells would need to be resilient to space radiation and debris. They would also need to be efficient and lightweight, with a power-to-weight ratio 50 times that of a typical silicon solar cell, Gibney wrote. Keeping the cost of these cells down is another factor engineers have to take into consideration, and so is reducing the losses during power transmission. The energy conversion rate needs to be improved to 10–15 percent, according to the ESA, which would require technical advances.
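To see roughly where figures like "over 1 square kilometer" come from, the sketch below estimates the collecting area needed to deliver 1 GW to the grid from orbit. The cell efficiency and end-to-end transmission efficiency are illustrative assumptions, not values from the article or from ESA, and more pessimistic assumptions push the area up quickly.

```python
# Order-of-magnitude estimate of solar collector area for 1 GW delivered to the grid.
SOLAR_CONSTANT = 1361.0     # W/m^2, sunlight intensity above the atmosphere
pv_efficiency = 0.30        # assumed cell efficiency in space
chain_efficiency = 0.50     # assumed DC -> microwave -> rectenna -> grid efficiency
delivered_power = 1e9       # 1 GW at the grid connection

area_m2 = delivered_power / (SOLAR_CONSTANT * pv_efficiency * chain_efficiency)
print(f"about {area_m2 / 1e6:.1f} square kilometers of collector")
```

Even with these fairly generous assumptions, the answer comes out at several square kilometers, comfortably past the "more than 100 times the size of the International Space Station" comparison.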
Space Solar is working on a satellite design called CASSIOPeiA, which Physics World describes as looking “like a spiral staircase, with the photovoltaic panels being the ‘treads’ and the microwave transmitters—rod-shaped dipoles—being the ‘risers.’” It has a helical shape with no moving parts.
“Our system’s comprised of hundreds of thousands of the same dinner-plate-sized power modules. Each module has the PV which converts the sun’s energy into DC electricity,” said Sam Adlen, CEO of Space Solar.
“That DC power then drives electronics to transmit the power… down toward Earth from dipole antennas. That power up in space is converted to [microwaves] and beamed down in a coherent beam down to the Earth where it’s received by a rectifying antenna, reconverted into electricity, and input to the grid.”
Adlen said that robotics technologies for space applications, such as in-orbit assembly, are advancing rapidly.
Ceriotti wrote that SPS-ALPHA, another design, has a large solar-collector structure that includes many heliostats, which are modular small reflectors that can be moved individually. These concentrate sunlight onto separate power-generating modules, after which it’s transmitted back to Earth by yet another module.
Isaac Newton would never have discovered the laws of motion had he studied only cats.
Suppose you hold a cat, stomach up, and drop it from a second-story window. If a cat is simply a mechanical system that obeys Newton’s rules of matter in motion, it should land on its back. (OK, there are some technicalities—like this should be done in a vacuum, but ignore that for now.) Instead, cats usually avoid injury by twisting themselves on the way down to land on their feet.
Most people are not mystified by this trick—everybody has seen videos attesting to cats’ acrobatic prowess. But for more than a century, scientists have wondered about the physics of how cats do it. Clearly, the mathematical theorem analyzing the falling cat as a mechanical system fails for live cats, as Nobel laureate Frank Wilczek points out in a recent paper.
“This theorem is not relevant to real biological cats,” writes Wilczek, a theoretical physicist at MIT. They are not closed mechanical systems, and can “consume stored energy … empowering mechanical motion.”
Nevertheless, the laws of physics do apply to cats—as well as every other kind of animal, from insects to elephants. Biology does not avoid physics; it embraces it. From friction on microscopic scales to fluid dynamics in water and air, animals exploit physical laws to run or swim or fly. Every other aspect of animal behavior, from breathing to building shelters, depends in some way on the restrictions imposed, and opportunities permitted, by physics.
“Living organisms are … systems whose actions are constrained by physics across multiple length scales and timescales,” Jennifer Rieser and coauthors write in the current issue of the Annual Review of Condensed Matter Physics.
While the field of animal behavior physics is still in its infancy, substantial progress has been made in explaining individual behaviors, along with how those behaviors are shaped via interactions with other individuals and the environment. Apart from discovering more about how animals perform their diverse repertoire of skills, such research may also lead to new physics knowledge gained by scrutinizing animal abilities that scientists don’t yet understand.
Critters in motion
Physics applies to animals in action over a wide range of spatial scales. At the smallest end of the range, attractive forces between nearby atoms facilitate the ability of geckos and some insects to climb up walls or even walk on ceilings. On a slightly larger scale, textures and structures provide adhesion for other biological gymnastics. In bird feathers, for instance, tiny hooks and barbs act like Velcro, holding feathers in position to enhance lift when flying, Rieser and colleagues report.
Biological textures also aid movement by facilitating friction between animal parts and surfaces. Scales on California king snakes possess textures that allow rapid forward sliding, but increase friction to retard backward or sideways motion. Some sidewinding snakes have apparently evolved different textures that reduce friction in the direction of motion, recent research suggests.
Small-scale structures are also important for animals’ interaction with water. For many animals, microstructures make the body “superhydrophobic”—capable of blocking the penetration of water. “In wet climates, water droplet shedding can be essential in animals, like flying birds and insects, where weight and stability are crucially important,” note Rieser, of Emory University, and coauthors Chantal Nguyen, Orit Peleg and Calvin Riiska.
Water-blocking surfaces also help animals keep their skins clean. “This self-cleansing mechanism … can be important to help protect the animal from dangers like skin-borne parasites and other infections,” the Annual Review authors explain. And in some cases, removing foreign material from an animal’s surface may be necessary to preserve the surface properties that enhance camouflage.
Steve Salem is a 50-year boat captain who lives on a tributary of the St. Johns River. The rising tides in Jacksonville are testing his intuition.
This article originally appeared on Inside Climate News, a nonprofit, independent news organization that covers climate, energy, and the environment. It is republished with permission.
JACKSONVILLE, Fla.—For most of his life, Steve Salem has led an existence closely linked with the rise and fall of the tides.
Salem is a 50-year boat captain who designed and built his 65-foot vessel by hand.
“Me and Noah, we’re related somewhere,” said Salem, 75, whose silver beard evokes Ernest Hemingway.
Salem is familiar with how the sun and moon influence the tides and has an innate sense for their ebb and flow, although the tides here are beginning to test even his intuition.
He and his wife live in a rust-colored ranch-style house along a tributary of the St. Johns River, Florida’s longest. The house had flooded in 2017, before they moved in, as Hurricane Irma swirled by. It flooded again in 2022, when Hurricane Nicole defied his expectations. But Salem believes the house is sturdy and that he can manage the tides, as he always has.
“I’m a water dog to begin with. I’ve always been on the water,” said Salem, who prefers to go by Captain Steve. “I worry about things that I have to do something about. If I can’t do anything about it, then worrying about it is going to do what?”
Across the American South, tides are rising at accelerating rates that are among the most extreme on Earth, constituting a surge that has startled scientists such as Jeff Chanton, professor in the Department of Earth, Ocean and Atmospheric Science at Florida State University.
“It’s pretty shocking,” he said. “You would think it would increase gradually, it would be a gradual thing. But this is like a major shift.”
Worldwide sea levels have climbed since 1900 by some 1.5 millimeters a year, a pace that is unprecedented in at least 3,000 years and generally attributable to melting ice sheets and glaciers and to the thermal expansion of the oceans as they warm. Since the middle of the 20th century the rate has gained speed, exceeding 3 millimeters a year since 1992.
In the South the pace has quickened further, jumping from about 1.7 millimeters a year at the turn of the 20th century to at least 8.4 millimeters by 2021, according to a 2023 study published in Nature Communications based on tidal gauge records from throughout the region. In Pensacola, a beachy community on the western side of the Florida Panhandle, the rate soared to roughly 11 millimeters a year by the end of 2021.
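To put those rates in perspective, this short sketch extrapolates each quoted rate linearly over 30 years. Holding the rates constant is a simplification; the study describes continued acceleration, so the faster rates in particular are floor estimates.

```python
# Cumulative sea level rise over 30 years if each quoted rate simply held steady.
rates_mm_per_year = {
    "global, early 1900s": 1.5,
    "global, since 1992": 3.0,
    "US South, 2021": 8.4,
    "Pensacola, 2021": 11.0,
}
for label, rate in rates_mm_per_year.items():
    rise_cm = rate * 30 / 10   # 30 years of rise, converted from mm to cm
    print(f"{label}: about {rise_cm:.1f} cm")
```

The gap is stark: a few centimeters under the early-20th-century global rate versus roughly a foot in Pensacola at the 2021 pace.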
“I think people just really have no idea what is coming, because we have no way of visualizing that through our own personal experiences, or that of the last 250 years,” said Randall Parkinson, a coastal geologist at Florida International University. “It’s not something where you go, ‘I know what that might look like because I’ve seen that.’ Because we haven’t.
“It’s the same everywhere, from North Carolina all the way down to the Florida Keys and all the way up into Alabama,” he said. “All of these areas are extremely vulnerable.”
The acceleration is poised to amplify impacts such as hurricane storm surges, nuisance flooding and land loss. In recent years the rising tides have coincided with record-breaking hurricane seasons, pushing storm surges higher and farther inland. In 2022 Hurricane Ian, which came ashore in southwest Florida, was the costliest hurricane in state history and third-costliest to date in the United States, after Katrina in 2005 and Harvey in 2017.
“It doesn’t even take a major storm event anymore. You just get these compounding effects,” said Rachel Cleetus, a policy director at the Union of Concerned Scientists, an advocacy group. “All of a sudden you have a much more impactful flooding event, and a lot of the infrastructure, frankly, like the stormwater infrastructure, it’s just not built for this.”
Heads of state pose for a group photo at an event Tuesday celebrating the 75th anniversary of NATO.
During their summit in Washington, DC, this week, NATO member states committed more than $1 billion to improve the sharing of intelligence from national and commercial reconnaissance satellites.
The agreement is a further step toward integrating space assets into NATO military commands. It follows the bloc’s adoption of an official space policy in 2019, which recognized space as a fifth war-fighting domain alongside air, land, maritime, and cyberspace. The next step was the formation of the NATO Space Operations Center in 2020 to oversee space support for NATO military operations.
On June 25, NATO announced the establishment of a “space branch” in its Allied Command Transformation, which identifies trends and incorporates emerging capabilities into the alliance’s security strategy.
Breaking down barriers
The new intelligence-sharing agreement was signed on July 9 by representatives from 17 NATO nations, including the United States, to support the Alliance Persistent Surveillance from Space (APSS) program. In a statement, NATO called the agreement “the largest multinational investment in space-based capabilities in NATO’s history.”
The agreement for open sharing of intelligence data comes against the backdrop of NATO’s response to the Russian invasion of Ukraine. Space-based capabilities, including battlefield surveillance and communications, have proven crucial to both sides in the war.
“The ongoing war in Ukraine has further underscored intelligence’s growing dependence on space-based data and assets,” NATO said.
The program will improve NATO’s ability to monitor activities on the ground and at sea with unprecedented accuracy and timeliness, the alliance said in a statement. The 17 parties to the agreement pledged more than $1 billion to transition the program into an implementation phase over the next five years. Six of the 17 signatories currently operate or plan to launch their own national reconnaissance satellites, while several more nations are home to companies operating commercial space-based surveillance satellites.
The APSS program won’t involve the development and launch of any NATO spy satellites. Instead, each nation will make efforts to share observations from their own government and commercial satellites.
Luxembourg, one of the smallest NATO member states, set up the APSS program with an initial investment of roughly $18 million (16.5 million euros) in 2023. At the time, NATO called the program a “data-centric initiative” aimed at bringing together intelligence information for easier dissemination among allies and breaking down barriers of secrecy and bureaucracy.
“APSS is not about creating NATO-owned and operated space assets,” officials wrote in the program’s fact sheet. “It will make use of existing and future space assets in allied countries, and connect them together in a NATO virtual constellation called ‘Aquila.’”
Another element of the program involves processing and sharing intelligence information through cloud solutions and technologies. NATO said AI analytical tools will also help manage the growing amounts of surveillance data from space and ensure decision-makers get faster access to time-sensitive observations.
“The APSS initiative may be regarded as a game changer for NATO’s intelligence, surveillance and reconnaissance. It will largely contribute to build NATO’s readiness and reduce its dependency on other intelligence and surveillance capabilities,” said Ludwig Decamps, general manager of the NATO Communications and Information Agency.
In the early 2000s, local fossil collector Mohamed ‘Ou Said’ Ben Moula discovered numerous fossils at Fezouata Shale, a site in Morocco known for its well-preserved fossils from the Early Ordovician period, roughly 480 million years ago. Recently, a team of researchers at the University of Lausanne (UNIL) studied 100 of these fossils and identified one of them as the earliest ancestor of modern-day chelicerates, a group that includes spiders, scorpions, and horseshoe crabs.
The fossil preserves the species Setapedites abundantis, a tiny animal that crawled and swam near the bottom of a 100–200-meter-deep ocean near the South Pole 478 million years ago. It was 5 to 10 millimeters long and fed on organic matter in the seafloor sediments. “Fossils of what is now known as S. abundantis have been found early on—one specimen mentioned in the 2010 paper that recognized the importance of this biota. However, this creature wasn’t studied in detail before simply because scientists focused on other taxa first,” Pierre Gueriau, one of the researchers and a junior lecturer at UNIL, told Ars Technica.
The study from Gueriau and his team is the first to describe S. abundantis and its connection to modern-day chelicerates (also called euchelicerates). It holds great significance, because “the origin of chelicerates has been one of the most tangled knots in the arthropod tree of life, as there has been a lack of fossils between 503 to 430 million years ago,” Gueriau added.
An ancestor of spiders
The study authors used X-ray scanners to reconstruct the anatomy of 100 fossils from the Fezouata Shale in 3D. When they compared the anatomical features of these ancient animals with those of chelicerates, they noticed several similarities between S. abundantis and various ancient and modern-day arthropods, including horseshoe crabs, scorpions, and spiders.
For instance, the nature and arrangement of the head appendages or ‘legs’ in S. abundantis were homologous with those of present-day horseshoe crabs and Cambrian arthropods that existed between 540 and 480 million years ago. Moreover, like spiders and scorpions, the organism exhibited body tagmosis, where the body is organized into different functional sections.
“Setapedites abundantis contributes to our understandings of the origin and early evolution of two key euchelicerate characters: the transition from biramous to uniramous prosomal appendages, and body tagmosis,” the study authors note.
Currently, two Cambrian-era arthropods, Mollisonia plenovenatrix and Habelia optata, are generally considered the earliest ancestors of chelicerates (not all scientists accept this idea). Both lived around 500 million years ago. When we asked how these two differ from S. abundantis, Gueriau replied, “Habelia and Mollisonia represent at best early-branching lineages in the phylogenetic tree. While S. abundantis is found to represent, together with a couple of other fossils, the earliest branching lineage within chelicerates.”
This means Habelia and Mollisonia are relatives of the ancestors of modern-day chelicerates. S. abundantis, by contrast, represents the first group that split after the chelicerate clade was established, making it the earliest member of the lineage. “These findings bring us closer to untangling the origin story of arthropods, as they allow us to fill the anatomical gap between Cambrian arthropods and early-branching chelicerates,” Gueriau told Ars Technica.
S. abundantis connects other fossils
The researchers faced many challenges during their study. For instance, the small size of the fossils made observations and interpretation complicated. They overcame this limitation by examining a large number of specimens—fortunately, S. abundantis fossils were abundant in the samples they studied. However, these fossils have yet to reveal all their secrets.
“Some of S. abundantis’ anatomical features allow for a deeper understanding of the early evolution of the chelicerate group and may even link other fossil forms, whose relationships are still highly debated, to this group,” Gueriau said. For instance, the study authors noticed a ventral protrusion at the rear of the organism. Such a feature is observed for the first time in chelicerates but is known in other primitive arthropods.
“This trait could thus bring together many other fossils with chelicerates and further resolve the early branches of the arthropod tree. So the next step for this research is to investigate deeper this feature on a wide range of fossils and its phylogenetic implications,” Gueriau added.
Rupendra Brahambhatt is an experienced journalist and filmmaker. He covers science and culture news, and for the last five years, he has been actively working with some of the most innovative news agencies, magazines, and media brands operating in different parts of the globe.
Rescuing Science: Restoring Trust in an Age of Doubt was the most difficult book I’ve ever written. I’m a cosmologist—I study the origins, structure, and evolution of the Universe. I love science. I live and breathe science. If science were a breakfast cereal, I’d eat it every morning. And at the height of the COVID-19 pandemic, I watched in alarm as public trust in science disintegrated.
But I don’t know how to change people’s minds. I don’t know how to convince someone to trust science again. So as I started writing my book, I flipped the question around: is there anything we can do to make the institution of science more worthy of trust?
The short answer is yes. The long answer takes an entire book. In the book, I explore several different sources of mistrust—the disincentives scientists face when they try to communicate with the public, the lack of long-term careers, the complicitness of scientists when their work is politicized, and much more—and offer proactive steps we can take to address these issues to rebuild trust.
The section below is taken from a chapter discussing the relentless pressure to publish that scientists face, and the corresponding explosion in fraud that this pressure creates. Fraud can take many forms, from the “hard fraud” of outright fabrication of data, to many kinds of “soft fraud” that include plagiarism, manipulation of data, and careful selection of methods to achieve a desired result. The more that fraud thrives, the more that the public loses trust in science. Addressing this requires a fundamental shift in the incentive and reward structures that scientists work in. A difficult task to be sure, but not an impossible one—and one that I firmly believe will be worth the effort.
Modern science is hard, complex, and built from many layers and many years of hard work. And modern science, almost everywhere, is based on computation. Save for a few (and I mean very few) die-hard theorists who insist on writing things down with pen and paper, there is almost an absolute guarantee that with any paper in any field of science that you could possibly read, a computer was involved in some step of the process.
Whether it’s studying bird droppings or the collisions of galaxies, modern-day science owes its very existence—and continued persistence—to the computer. From the laptop sitting on an unkempt desk to a giant machine that fills up a room, “S. Transistor” should be the coauthor on basically all three million journal articles published every year.
The sheer complexity of modern science, and its reliance on customized software, renders one of the frontline defenses against soft and hard fraud useless. That defense is peer review.
The practice of peer review was developed in a different era, when the arguments and analysis that led to a paper’s conclusion could be succinctly summarized within the paper itself. Want to know how the author arrived at that conclusion? The derivation would be right there. It was relatively easy to judge the “wrongness” of an article because you could follow the document from beginning to end and have all the information you needed to evaluate it right there at your fingertips.
That’s now largely impossible with the modern scientific enterprise so reliant on computers.
To make matters worse, many of the software codes used in science are not publicly available. I’ll say this again because it’s kind of wild to even contemplate: there are millions of papers published every year that rely on computer software to make the results happen, and that software is not available for other scientists to scrutinize to see if it’s legit or not. We simply have to trust it, but the word “trust” is very near the bottom of the scientist’s priority list.
Why don’t scientists make their code available? It boils down to the same reason that scientists don’t do many things that would improve the process of science: there’s no incentive. In this case, you don’t get any h-index points for releasing your code on a website. You only get them for publishing papers.
This infinitely agitates me when I peer-review papers. How am I supposed to judge the correctness of an article if I can’t see the entire process? What’s the point of searching for fraud when the computer code that’s sitting behind the published result can be shaped and molded to give any result you want, and nobody will be the wiser?
I’m not even talking about intentional computer-based fraud here; this is even a problem for detecting basic mistakes. If you make a mistake in a paper, a referee or an editor can spot it. And science is better off for it. If you make a mistake in your code… who checks it? As long as the results look correct, you’ll go ahead and publish it and the peer reviewer will go ahead and accept it. And science is worse off for it.
Science is getting more complex over time and is becoming increasingly reliant on software code to keep the engine going. This makes fraud of both the hard and soft varieties easier to accomplish. From mistakes that you pass over because you’re going too fast, to using sophisticated tools that you barely understand but use to get the result that you wanted, to just totally faking it, science is becoming increasingly wrong.
The Ariane 6 rocket takes flight for the first time on July 9, 2024.
Welcome to Edition 7.02 of the Rocket Report! The highlight of this week was the hugely successful debut of Europe’s Ariane 6 rocket. The flight wasn’t perfect: given Europe’s commitment to zero debris, stranding the second stage is not great, but I am sure they will address the upper stage issue. For a debut launch of a large new vehicle, this was really promising.
As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.
Chinese launch company suffers another setback. Chinese commercial rocket firm iSpace suffered a launch failure late Wednesday in a fresh setback for the company, Space News reports. The four-stage Hyperbola-1 solid rocket lifted off from Jiuquan spaceport in the Gobi Desert at 7:40 pm ET (23:40 UTC) on Wednesday. Beijing-based iSpace later issued a release stating that the rocket’s fourth stage suffered an anomaly. The statement did not reveal the name or nature of the payloads lost on the flight.
Early troubles are perhaps to be expected … Beijing Interstellar Glory Space Technology Ltd., or iSpace, made history in 2019 as the first privately funded Chinese company to reach orbit, with the solid-fueled Hyperbola-1. However, the rocket suffered three consecutive failures following that feat. The company recovered with two successful flights in 2023 before the latest failure. The loss could add to reliability concerns over China’s commercial launch industry as it follows Space Pioneer’s recent catastrophic static-fire explosion. (submitted by EllPeaTea)
Feds backtrack on former Firefly investor. A long, messy affair between US regulators and a Ukrainian businessman named Max Polyakov seems to have finally been resolved, Ars reports. On Tuesday, Polyakov’s venture capital firm Noosphere Venture Partners announced that the US government has released him and his related companies from all conditions imposed upon them in the run-up to the Russian invasion of Ukraine. This decision comes more than two years after the Committee on Foreign Investment in the United States and the US Air Force forced Polyakov to sell his majority stake in the Texas-based launch company Firefly.
Not a spy … This rocket company was founded in 2014 by an engineer named Tom Markusic, who ran into financial difficulty as he sought to develop the Alpha rocket. Markusic had to briefly halt Firefly’s operations before Polyakov, a colorful and controversial Ukrainian businessman, swooped in and provided a substantial infusion of cash into the company. “The US government quite happily allowed Polyakov to pump $200 million into Firefly only to decide he was a potential spy just as the company’s first rocket was ready to launch,” Ashlee Vance, a US journalist who chronicled Polyakov’s rise, told Ars. It turns out, Polyakov wasn’t a spy.
Pentagon ICBM costs soar. The price tag for the Pentagon’s next-generation nuclear-tipped Sentinel ICBMs has ballooned by 81 percent in less than four years, The Register reports. This triggered a mandatory congressional review. On Monday, the Department of Defense released the results of this review, with Under Secretary of Defense for Acquisition and Sustainment William LaPlante saying the Sentinel missile program met established criteria for being allowed to continue after his “comprehensive, unbiased review of the program.”
Trust us, the military says … The Sentinel project is the DoD’s attempt to replace its aging fleet of ground-based nuclear-armed Minuteman III missiles (first deployed in 1970) with new hardware. When it passed its Milestone B decision (authorization to enter the engineering and manufacturing phase) in September 2020, the cost was a fraction of the $141 billion the Pentagon now estimates Sentinel will cost, LaPlante said. To give that some perspective, the proposed annual budget for the Department of Defense for its fiscal 2025 is nearly $850 billion. (submitted by EllPeaTea)
Numerous pieces of ice fell off the second stage of the Falcon 9 rocket during its climb into orbit from Vandenberg Space Force Base, California.
A SpaceX Falcon 9 rocket suffered an upper stage engine failure and deployed a batch of Starlink Internet satellites into a perilously low orbit after launch from California Thursday night, the first blemish on the workhorse launcher’s record in more than 300 missions since 2016.
Elon Musk, SpaceX’s founder and CEO, posted on X that the rocket’s upper stage engine failed when it attempted to reignite nearly an hour after the Falcon 9 lifted off from Vandenberg Space Force Base, California, at 7:35 pm PDT (02:35 UTC).
Frosty evidence
After departing Vandenberg to begin SpaceX’s Starlink 9-3 mission, the rocket’s reusable first stage booster propelled the Starlink satellites into the upper atmosphere, then returned to Earth for an on-target landing on a recovery ship parked in the Pacific Ocean. A single Merlin Vacuum engine on the rocket’s second stage fired for about six minutes to reach a preliminary orbit.
A few minutes after liftoff of SpaceX’s Starlink 9-3 mission, veteran observers of SpaceX launches noticed an unusual build-up of ice around the top of the Merlin Vacuum engine, which consumes a propellant mixture of super-chilled kerosene and cryogenic liquid oxygen. The liquid oxygen is stored at a temperature of several hundred degrees Fahrenheit below zero.
Numerous chunks of ice fell away from the rocket as the upper stage engine powered into orbit, but the Merlin Vacuum, or M-Vac, engine appeared to complete its first burn as planned. A leak in the oxidizer system or a problem with insulation could lead to ice accumulation, although the exact cause, and its possible link to the engine malfunction later in flight, will be the focus of SpaceX’s investigation into the failure.
A second burn with the upper stage engine was supposed to raise the perigee, or low point, of the rocket’s orbit well above the atmosphere before releasing 20 Starlink satellites to continue climbing to their operational altitude with their own propulsion.
“Upper stage restart to raise perigee resulted in an engine RUD for reasons currently unknown,” Musk wrote in an update two hours after the launch. RUD (rapid unscheduled disassembly) is a term of art in rocketry that usually signifies a catastrophic or explosive failure.
“Team is reviewing data tonight to understand root cause,” Musk continued. “Starlink satellites were deployed, but the perigee may be too low for them to raise orbit. Will know more in a few hours.”
Telemetry from the Falcon 9 rocket indicated it released the Starlink satellites into an orbit with a perigee just 86 miles (138 kilometers) above Earth, roughly 100 miles (150 kilometers) lower than expected, according to Jonathan McDowell, an astrophysicist and trusted tracker of spaceflight activity. Detailed orbital data from the US Space Force was not immediately available.
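For a sense of what the stranded satellites were up against, here is a rough two-body estimate of the impulsive velocity change needed to lift the perigee back up. The apogee altitude and target perigee below are assumptions for illustration (detailed orbital data wasn't yet available), and Starlink's ion thrusters would in practice spread the work over many small burns while drag at 138 kilometers steadily pulled the perigee down.

```python
import math

MU_EARTH = 398_600.4418   # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6378.0          # km, Earth's equatorial radius

def speed(r_km, a_km):
    # Vis-viva equation: orbital speed at radius r on an orbit with semi-major axis a.
    return math.sqrt(MU_EARTH * (2.0 / r_km - 1.0 / a_km))

apogee_alt = 300.0        # km, assumed apogee of the deployment orbit
perigee_now = 138.0       # km, perigee reported from telemetry
perigee_target = 285.0    # km, assumed intended perigee

r_apogee = R_EARTH + apogee_alt
a_now = (r_apogee + R_EARTH + perigee_now) / 2.0
a_target = (r_apogee + R_EARTH + perigee_target) / 2.0

# Single burn at apogee to raise the perigee
delta_v = speed(r_apogee, a_target) - speed(r_apogee, a_now)
print(f"about {delta_v * 1000:.0f} m/s to raise perigee from {perigee_now:.0f} to {perigee_target:.0f} km")
```

Roughly 40 m/s is modest for a chemical stage but a tall order for low-thrust electric propulsion racing against atmospheric drag at that altitude.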
Ripple effects
While ground controllers scrambled to salvage the 20 Starlink satellites, SpaceX engineers began probing what went wrong with the second stage’s M-Vac engine. For SpaceX and its customers, the investigation into the rocket malfunction is likely the more pressing matter.
SpaceX could absorb the loss of 20 Starlink satellites relatively easily. The company’s satellite assembly line can produce 20 Starlink spacecraft in a few days. But the Falcon 9 rocket’s dependability and high flight rate have made it a workhorse for NASA, the US military, and the wider space industry. An investigation will probably delay several upcoming SpaceX flights.
It was the first in-flight failure for SpaceX’s Falcon rocket family since June 2015, ending a streak of 344 consecutive successful launches.
Depending on the cause of the problem and what SpaceX must do to fix it, it’s possible the company can recover from the upper stage failure and resume launching Starlink satellites soon. Most of SpaceX’s launches aren’t for external customers, but deploy satellites for the company’s own Starlink network. This gives SpaceX a unique flexibility to quickly return to flight with the Falcon 9 without needing to satisfy customer concerns.
The Federal Aviation Administration, which licenses all commercial space launches in the United States, will require SpaceX to conduct a mishap investigation before resuming Falcon 9 flights.
“The FAA will be involved in every step of the investigation process and must approve SpaceX’s final report, including any corrective actions,” an FAA spokesperson said. “A return to flight is based on the FAA determining that any system, process, or procedure related to the mishap does not affect public safety.”
Two crew missions are supposed to launch on SpaceX’s human-rated Falcon 9 rocket in the next six weeks, but those launch dates are now in doubt.
The all-private Polaris Dawn mission, commanded by billionaire Jared Isaacman, is scheduled to launch on a Falcon 9 rocket on July 31 from NASA’s Kennedy Space Center in Florida. Isaacman and three commercial astronaut crewmates will spend five days in orbit on a mission that will include the first commercial spacewalk outside their Crew Dragon capsule, using new pressure suits designed and built by SpaceX.
NASA’s next crew mission with SpaceX is slated to launch from Florida aboard a Falcon 9 rocket around August 19. This team of four astronauts will replace a crew of four who have been on the International Space Station since March.
Some customers, especially NASA’s commercial crew program, will likely want to see the results of an in-depth inquiry and require SpaceX to string together a series of successful Falcon 9 flights with Starlink satellites before clearing their own missions for launch. SpaceX has already launched 70 flights with its Falcon family of rockets since January 1, an average cadence of one launch every 2.7 days, more than the combined number of orbital launches by all other nations this year.
With this rapid-fire launch cadence, SpaceX could quickly demonstrate the fitness of any fixes engineers recommend to resolve the problem that caused Thursday night’s failure. But investigations into rocket failures often take weeks or months. It was too soon, early on Friday, to know the true impact of the upper stage malfunction on SpaceX’s launch schedule.
The Fremen on Arrakis wear full-body “stillsuits” that recycle absorbed sweat and urine into potable water.
The Fremen who inhabit the harsh desert world of Arrakis in Frank Herbert’s Dune must rely on full-body “stillsuits” for their survival, which recycle absorbed sweat and urine into potable water. Now science fiction is on the verge of becoming science fact: Researchers from Cornell University have designed a prototype stillsuit for astronauts that will recycle their urine into potable water during spacewalks, according to a new paper published in the journal Frontiers in Space Technologies.
Herbert provided specific details about the stillsuit’s design when planetologist Liet Kynes explained the technology to Duke Leto Atreides I:
It’s basically a micro-sandwich—a high-efficiency filter and heat-exchange system. The skin-contact layer’s porous. Perspiration passes through it, having cooled the body … near-normal evaporation process. The next two layers … include heat exchange filaments and salt precipitators. Salt’s reclaimed. Motions of the body, especially breathing and some osmotic action provide the pumping force. Reclaimed water circulates to catchpockets from which you draw it through this tube in the clip at your neck… Urine and feces are processed in the thigh pads. In the open desert, you wear this filter across your face, this tube in the nostrils with these plugs to ensure a tight fit. Breathe in through the mouth filter, out through the nose tube. With a Fremen suit in good working order, you won’t lose more than a thimbleful of moisture a day…
The Illustrated Dune Encyclopedia interpreted the stillsuit as something akin to a hazmat suit, without the full face covering. In David Lynch’s 1984 film, Dune, the stillsuits were organic and very form-fitting compared to the book description, almost like a second skin. The stillsuits in Denis Villeneuve’s most recent film adaptations (Dune Part 1 and Part 2) tried to hew more closely to the source material, with “micro-sandwiches” of acrylic fibers and porous cottons and embedded tubes for better flexibility.
In David Lynch’s 1984 film, Dune, the stillsuits were organic and very form-fitting.
The Cornell team is not the first to try to build a practical stillsuit. Hacksmith Industries did a “one day build” of a stillsuit just last month, having previously tackled Thor’s Stormbreaker ax, Captain America’s electromagnetic shield, and a plasma-powered lightsaber, among other projects. The Hacksmith team dispensed with the icky urine and feces recycling aspects and focused on recycling sweat and moisture from breath.
Their version consists of a waterproof baggy suit (switched out for a more form-fitting bunny suit in the final version) with a battery-powered heat exchanger in the back. Any humidity condenses on the suit’s surface and drips into a bottle attached to a CamelBak bladder. There’s a filter mask attached to a tube that allows the wearer to breathe in filtered air, but it’s one way; the exhaled air is redirected to the condenser so the water content can be harvested into the CamelBak bladder and then sent back to the mask so the user can drink it. It’s not even close to achieving Herbert’s stated thimbleful a day in terms of efficiency since it mostly recycles moisture from sweat on the wearer’s back. But it worked.
An artist’s illustration of the Europa Clipper spacecraft during a flyby close to Jupiter’s icy moon.
The launch date for the Europa Clipper mission, which will study Jupiter’s intriguing moon Europa and ranks alongside the Cassini mission to Saturn as NASA’s most expensive and ambitious planetary science mission, is now in doubt.
The $4.25 billion spacecraft had been due to launch in October on a Falcon Heavy rocket from Kennedy Space Center in Florida. However, NASA revealed that transistors on board the spacecraft may not be as radiation-hardened as they were believed to be.
“The issue with the transistors came to light in May when the mission team was advised that similar parts were failing at lower radiation doses than expected,” the space agency wrote in a blog post Thursday afternoon. “In June 2024, an industry alert was sent out to notify users of this issue. The manufacturer is working with the mission team to support ongoing radiation test and analysis efforts in order to better understand the risk of using these parts on the Europa Clipper spacecraft.”
The moons orbiting Jupiter, a massive gas giant planet, exist in one of the harshest radiation environments in the Solar System. NASA’s initial testing indicates that some of the transistors, which regulate the flow of energy through the spacecraft, could fail in this environment. NASA is currently evaluating the possibility of maximizing the transistor lifetime at Jupiter and expects to complete a preliminary analysis in late July.
To delay or not to delay
NASA’s update is silent on whether the spacecraft could still make its approximately three-week launch window this year, which gets Clipper to the Jovian system in 2030.
Ars reached out to several experts familiar with the Clipper mission to gauge the likelihood that it would make the October launch window, and opinions were mixed. The consensus view was a 40 to 60 percent chance of NASA becoming comfortable enough with the issue to launch this fall. If NASA engineers cannot become confident with the existing setup, the transistors would need to be replaced.
The Clipper mission has launch opportunities in 2025 and 2026, but these could lead to additional delays due to the need for multiple gravitational assists. The 2024 launch follows a “MEGA” (Mars-Earth Gravitational Assist) trajectory, including a Mars flyby in 2025 and an Earth flyby in late 2026. If Clipper launches a year late, it would necessitate a second Earth flyby. A launch in 2026 would revert to a MEGA trajectory. Ars has asked NASA for timelines of launches in 2025 and 2026 and will update if they provide this information.
Another negative result of delays would be costs, as keeping the mission on the ground for another year likely would result in another few hundred million dollars in expenses for NASA, which would blow a hole in its planetary science budget.
NASA’s blog post this week is not the first time the space agency has publicly mentioned these issues with the metal-oxide-semiconductor field-effect transistor, or MOSFET. At a meeting of the Space Studies Board in early June, Jordan Evans, project manager for the Europa Clipper Mission, said it was his No. 1 concern ahead of launch.
“What keeps me awake at night”
“The most challenging thing we’re dealing with right now is an issue associated with these transistors, MOSFETs, that are used as switches in the spacecraft,” he said. “Five weeks ago today, I got an email that a non-NASA customer had done some testing on these rad-hard parts and found that they were going before (the specifications), at radiation levels significantly lower than what we qualified them to as we did our parts procurement, and others in the industry had as well.”
At the time, Evans said things were “trending in the right direction” with regard to the agency’s analysis of the issue. It seems unlikely that NASA would have put out a blog post five weeks later if the issue were still moving steadily toward a resolution.
“What keeps me awake right now is the uncertainty associated with the MOSFETs and the residual risk that we will take on with that,” Evans said in June. “It’s difficult to do the kind of low-dose rate testing in the timeframes that we have until launch. So we’re gathering as much data as we can, including from missions like Juno, to better understand what residual risk we might launch with.”
These are precisely the kinds of issues that scientists and engineers don’t want to find in the final months before the launch of such a consequential mission. The stakes are incredibly high—imagine making the call to launch Clipper only to have the spacecraft fail six years later upon arrival at Jupiter.
The basic outline of the interactions between modern humans and Neanderthals is now well established. The two came in contact as modern humans began their major expansion out of Africa, which occurred roughly 60,000 years ago. Humans picked up some Neanderthal DNA through interbreeding, while the Neanderthal population, always fairly small, was swept away by the waves of new arrivals.
But there are some aspects of this big-picture view that don’t entirely line up with the data. While it nicely explains the fact that Neanderthal sequences are far more common in non-African populations, it doesn’t account for the fact that every African population we’ve looked at has some DNA that matches up with Neanderthal DNA.
A study published on Thursday argues that much of this match came about because an early modern human population also left Africa and interbred with Neanderthals. But in this case, the result was to introduce modern human DNA to the Neanderthal population. The study shows that this DNA accounts for a lot of Neanderthals’ genetic diversity, suggesting that their population was even smaller than earlier estimates had suggested.
Out of Africa early
This study isn’t the first to suggest that modern humans and their genes met Neanderthals well in advance of our major out-of-Africa expansion. The key to understanding this is the genome of a Neanderthal from the Altai region of Siberia, which dates from roughly 120,000 years ago. That’s well before modern humans expanded out of Africa, yet its genome has some regions that have excellent matches to the human genome but are absent from the Denisovan lineage.
One explanation for this is that these are segments of Neanderthal DNA that were later picked up by the population that expanded out of Africa. The problem with that view is that most of these sequences also show up in African populations. So, researchers advanced the idea that an ancestral population of modern humans left Africa about 200,000 years ago, and some of its DNA was retained by Siberian Neanderthals. That’s consistent with some fossil finds that place anatomically modern humans in the Mideast at roughly the same time.
There is, however, an alternative explanation: Some of the population that expanded out of Africa 60,000 years ago and picked up Neanderthal DNA migrated back to Africa, taking the Neanderthal DNA with them. That has led to a small bit of the Neanderthal DNA persisting within African populations.
To sort this all out, a research team based at Princeton University focused on the Neanderthal DNA found in Africans, taking advantage of the fact that we now have a much larger array of completed human genomes (approximately 2,000 of them).
The work was based on a simple hypothesis. All of our work on Neanderthal DNA indicates that their population was relatively small, and thus had less genetic diversity than modern humans did. If that’s the case, then the addition of modern human DNA to the Neanderthal population should have boosted its genetic diversity. If so, then the stretches of “Neanderthal” DNA found in African populations should include some of the more diverse regions of the Neanderthal genome.
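A toy sketch of the logic behind that test: divide the genome into windows, compute a simple diversity measure (average pairwise differences per site) in each window, and ask whether the windows matching the "Neanderthal" segments seen in Africans are more diverse than the rest. The sequences below are invented for illustration; the actual study relied on far more sophisticated population-genetic methods and real genome data.

```python
# Toy illustration: windows of the Neanderthal genome that carry DNA introduced
# from modern humans should show higher diversity than typical Neanderthal windows.
from itertools import combinations

def pairwise_diversity(seqs):
    # Average fraction of differing sites across all pairs of aligned sequences.
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * len(seqs[0]))

# Hypothetical aligned sequences for two genomic windows
window_matching_african_segments = ["ACGTACGTAC", "ACGAACGTTC", "ACGTACCTAC"]
window_typical_neanderthal = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACGTAT"]

print(pairwise_diversity(window_matching_african_segments))  # expected to be higher
print(pairwise_diversity(window_typical_neanderthal))        # expected to be lower
```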