Rapa Nui, often referred to as Easter Island, is one of the most remote populated islands in the world. It’s so distant that Europeans didn’t stumble onto it until centuries after they had started exploring the Pacific. When they arrived, though, they found that the relatively small island supported a population of thousands, one that had built imposing monumental statues called moai. Arguments over how this population got there and what happened once it did have gone on ever since.
Some of these arguments, such as the idea that the island’s indigenous people had traveled there from South America, have since been put to rest. Genomes from people native to the island show that its original population was part of the Polynesian expansion across the Pacific. But others, such as the role of ecological collapse in limiting the island’s population and altering its culture, continue to be debated.
Researchers have now obtained genome sequences from the remains of 15 Rapa Nui natives who predate European contact. These genomes indicate that the island's population grew slowly and steadily, without any sign of a bottleneck that could be associated with an ecological collapse. And roughly 10 percent of each genome appears to derive from a Native American source that likely dates to roughly the same time the island was settled.
Out of the museum
The remains that provided these genomes weren’t found on Rapa Nui, at least not recently. Instead, they reside at the Muséum National d’Histoire Naturelle in France, having been obtained at some uncertain point in the past. Their presence there is a point of contention for the indigenous people of Rapa Nui, but the researchers behind the new work had the cooperation of the islanders in this project, having worked with them extensively. The researchers’ description of these interactions could be viewed as a model for how this sort of work should be done:
Throughout the course of the study, we met with representatives of the Rapanui community on the island, the Comisión de Desarrollo Rapa Nui and the Comisión Asesora de Monumentos Nacionales, where we presented our research goals and ongoing results. Both commissions voted in favor of us continuing with the research… We presented the research project in public talks, a short video and radio interviews on the island, giving us the opportunity to inquire about the questions that are most relevant to the Rapanui community. These discussions have informed the research topics we investigated in this work.
Given the questionable record-keeping at various points in the past, one of the goals of this work was simply to determine whether these remains truly had originated on Rapa Nui. That was unambiguously true. All comparisons with genomes of modern populations show that all 15 of these genomes have a Polynesian origin and are most closely related to modern residents of Rapa Nui. “The confirmation of the origin of these individuals through genomic analyses will inform repatriation efforts led by the Rapa Nui Repatriation Program (Ka Haka Hoki Mai Te Mana Tupuna),” the authors suggest.
A second question was whether the remains predate European contact. The researchers attempted carbon dating, but it produced dates that made no sense. Some of the remains yielded dates that fell after they had been collected, according to museum records. And all of them dated to the 1800s, well after European contact and introduced diseases had shrunk the native population and mixed in DNA from non-Polynesians. Yet none of the genomes showed more than one percent European ancestry, a fraction low enough to be ascribed to statistical noise.
So the precise date these individuals lived is uncertain. But the genetic data clearly indicates that they were born prior to the arrival of Europeans. They can therefore tell us about what the population was experiencing in the period between Rapa Nui’s settlement and the arrival of colonial powers.
Back from the Americas
While these genomes showed no sign of European ancestry, they were not fully Polynesian. Instead, roughly 10 percent of the genome appeared to be derived from a Native American population. This is the highest percentage seen in any Polynesian population, including some that show hints of Native American contact that dates to before Europeans arrived on the scene.
Isolating these DNA sequences and comparing them to populations from across the world showed that the population most closely related to the one that contributed to the Rapa Nui genomes presently resides in the central Andes region of South America. That's in contrast to earlier results, which suggested the contact was with populations farther north in South America.
Boeing’s uncrewed Starliner spacecraft backs away from the International Space Station moments after undocking on September 6, 2024. Credit: NASA
Nearly a decade ago to the day, I stood in the international terminal of Houston’s main airport checking my phone. As I waited to board a flight for Moscow, an announcement from NASA was imminent, with the agency due to make its selections for private companies that would transport astronauts to the International Space Station.
Then, just before boarding the direct flight to Moscow, a news release from NASA popped into my inbox about its Commercial Crew Program. Under fixed-price agreements, the space agency would pay Boeing $4.2 billion to develop the Starliner spacecraft, while SpaceX would receive $2.6 billion for the development of its Crew Dragon vehicle.
At the time, the Space Shuttle had been retired for three years, and NASA’s astronauts had to fly to the International Space Station aboard the Soyuz spacecraft. “Today, we are one step closer to launching our astronauts from US soil on American spacecraft and ending the nation’s sole reliance on Russia by 2017,” NASA Administrator Charles Bolden said in the release.
I knew this only too well. As the space reporter for the Houston Chronicle, I was traveling with NASA officials to Russia to visit Star City, where astronauts train, and see Roscosmos’ mission control facilities. From there, we flew to Kazakhstan to tour the spaceport in Baikonur and observe the launch of the Expedition 41 crew to the space station. The mission included two Russian cosmonauts and NASA’s Butch Wilmore. I wrote about this as the fifth part of my Adrift series on the state of America’s space program.
A decade later, it all seems surreal. I cannot imagine, as I did a decade ago, standing near soldiers in Moscow watching a “Peace March” of thousands of protestors through the Russian capital city. There is no room for dissent in Russia today. The airport we used to fly from Moscow to Kazakhstan, Domodedovo, has been attacked by Ukrainian drones. I almost certainly can never go back to Russia, especially after being branded a “war criminal” by the country’s space boss.
But history turns in interesting ways. Ten years after his Soyuz flight from Kazakhstan, Wilmore launched from Florida on Boeing’s Starliner spacecraft. Last weekend, this Boeing spacecraft came back to Earth without Wilmore and his copilot Suni Williams on board. Here we were once again: Wilmore flying in space and me thinking and writing about the future of NASA’s human spaceflight programs.
I couldn’t help but wonder: After all that happened in the last decade, has the Commercial Crew Program been a success?
Boeing becomes a no-show
Commercial Crew was a bold bet by NASA that won the space agency many critics. Could private companies really step up and provide a service that only nations had before?
NASA’s two selections, Boeing and SpaceX, did not make that 2017 target for their initial crewed flights. For a few years, Congress lagged in funding the program, and during the second half of the 2010s, each of the companies ran into significant technical problems. SpaceX overcame serious issues with its parachutes and an exploding spacecraft in 2019 to triumphantly reach orbit in the summer of 2020 with its Demo-2 mission, flying NASA astronauts Doug Hurley and Bob Behnken to and from the space station.
Since then, SpaceX has completed seven operational missions to the station, carrying astronauts from the United States, Europe, Japan, Russia, the Middle East, and elsewhere into orbit. A crew from the eighth mission is on the station right now, and the ninth Crew Dragon mission will launch later this month to bring Wilmore and Williams back to Earth. Crew Dragon has been nothing short of a smashing success for SpaceX and the United States, establishing a vital lifeline at a time when—amid deteriorating relations between America and Russia—NASA’s reliance on Soyuz likely would have been untenable.
Starliner has faced a more difficult road. Its first uncrewed test flight in late 2019 was cut short after serious software problems. Afterward, NASA designated the flight as a “high visibility close call” and said Boeing would need to fly a second uncrewed test flight. This mission in 2022 was more successful, but lingering concerns and issues with flammable tape and parachutes delayed the first crew flight until June of this year. The fate of Starliner’s third flight this summer, and the intermittently failing thrusters that ultimately left its crew needing an alternative ride back to Earth, has been well documented.
The human body is full of marvels, some even bordering on miraculous. That includes the limited ability for nerves to regenerate after injuries, allowing people to regain some function and feeling. But that wonder can turn, well, unnerving when those regenerated wires end up in a jumble.
Such is the case for a rare neurological condition called gustatory hyperhidrosis, also known as Frey’s syndrome. In this disorder, nerves regenerate after damage to one of the large salivary glands that sit on either side of the face, just in front of the ears: the parotid glands. But that regrowth goes awry due to a quirk of anatomy that allows the nerves controlling saliva production for eating to get tangled with those controlling sweating for temperature control.
In this week’s issue of the New England Journal of Medicine, doctors in Taiwan report an unusual presentation of the disorder in a 76-year-old woman. She told doctors that, for two years, every time she ate, her face would begin profusely sweating. In the clinic, the doctors observed the phenomenon themselves. They watched as she took a bite of pork jerky and began chewing.
Panel A: 10 seconds after the patient began chewing; Panel B: 30 seconds after; Panel C: 50 seconds after; Panel D: 75 seconds after.
At the start, her face was dry and had a normal tone. But, within 30 seconds, her left cheek began to glisten with sweat and turn red from flushing. By 50 seconds, large beads of sweat coated her cheek. At 75 seconds, droplets ran down her cheek and onto her neck.
Anatomy quirk
Seven years before that doctor’s appointment, the woman had undergone surgery to remove the parotid gland on that side of her face due to the growth of a benign tumor. Gustatory hyperhidrosis is a common complication after such a removal, called a parotidectomy. Some published studies estimate that up to 96 percent of parotidectomy patients will go on to develop the disorder. When it does develop, it usually does so within about six to 18 months of the surgery, the time it can take for nerves to regrow. In this woman’s case, however, it appeared to develop about five years after the operation, since she reported that it started only two years before her appointment. It’s unclear why there was such a delay.
Doctors hypothesize that gustatory hyperhidrosis develops after salivary gland injuries or surgeries because of the way nerve fibers are bundled in that part of the head. The nerves that control the salivary glands are part of the parasympathetic nervous system (PSNS). This division is sometimes described as handling the body’s relatively calm “rest and digest” functions, and it operates unconsciously as part of the autonomic nervous system, which governs things like heart rate.
The PSNS is in contrast to the other part of the autonomic nervous system, called the sympathetic nervous system (SNS). The SNS controls the unconscious “fight or flight” stress responses, which include sweat glands.
Tangled fibers
While the PSNS fibers that control the saliva glands and the SNS fibers that control sweat glands belong to different divisions, they come together on the side of the face. Specifically, they meet up in a tributary nerve called the auriculotemporal nerve. And they don’t just feed into the same physical conduit; they also overlap in their chemical regulation. SNS and PSNS fibers are often activated by different signaling molecules (aka neurotransmitters), but it just so happens that the fibers controlling sweat glands respond to the same neurotransmitter that activates PSNS fibers, including those regulating the saliva glands. They’re both regulated by a neurotransmitter called acetylcholine.
When PSNS and SNS nerve fibers are damaged near the parotid salivary gland by injury or surgery, the nerves can regenerate. But, given their physical and chemical overlaps, doctors think that in gustatory hyperhidrosis, PSNS nerve fibers end up growing back abnormally, along the paths of SNS fibers. This ends up connecting the PSNS fibers to sweat glands in the skin. So, upon signals of eating, the crossed nerve fibers lead not to salivation but to heat responses, including sweat production and blood vessel dilation, which explains the facial flushing.
Living with it
Thankfully, there are various treatments for people with gustatory hyperhidrosis. They include surgical reconstruction and injections of Botox (botulinum neurotoxin), which can shut down the activity of the sweat glands. Similarly, there are topical anticholinergics, which block the activity of acetylcholine, the neurotransmitter driving the misdirected sweating. Topical antiperspirants can also help.
After the doctors in Taiwan diagnosed their patient with gustatory hyperhidrosis, they discussed these options with her. But she reportedly “opted to live with the symptoms.”
“The only species of fish confirmed to be able to escape from the digestive tract of the predatory fish after being captured.” Credit: Hasegawa et al./Current Biology
Imagine you’re a Japanese eel, swimming around just minding your own business when—bam! A predatory fish swallows you whole and you only have a few minutes to make your escape before certain death. What’s an eel to do? According to a new paper published in the journal Current Biology, Japanese eels opt to back their way out of the digestive tract, tail first, through the esophagus, emerging from the predatory fish’s gills.
Per the authors, this is the first such study to observe the behavioral patterns and escape processes of prey within the digestive tract of predators. “At this point, the Japanese eel is the only species of fish confirmed to be able to escape from the digestive tract of the predatory fish after being captured,” co-author Yuha Hasegawa at Nagasaki University in Japan told New Scientist.
There are various strategies in nature for escaping predators after being swallowed. For instance, a parasitic worm called Paragordius tricuspidatus can force its way out of a predator’s system when its host organism is eaten. There was also a fascinating study in 2020 by Japanese scientists on the unusual survival strategy of the aquatic beetle Regimbartia attenuata. They fed a bunch of the beetles to a pond frog (Pelophylax nigromaculatus) under laboratory conditions, expecting the frog to spit the beetle out. That’s what happened with prior experiments on bombardier beetles (Pheropsophus jessoensis), which spray toxic chemicals (described as an audible “chemical explosion”) when they find themselves inside a toad’s gut, inducing the toad to invert its own stomach and vomit them back out.
But R. attenuata basically walks through the digestive tract and escapes out of the frog’s anus after being swallowed alive. It proved to be a successful escape route. In the case of the bombardier beetles, between 35 and 57 percent of the toads threw up within 50 minutes on average, ensuring the survival of the regurgitated beetles. R. attenuata’s survival rate was a whopping 93 percent. In fact, 19 out of 20 walked out of the frog, unharmed, within an hour, although one industrious beetle bolted out in just five minutes. Granted, the beetles often emerged covered in fecal pellets, which can’t have been pleasant. But that didn’t stop them from resuming their little beetle lives; all survived at least two weeks after being swallowed.
Hasegawa co-authored an earlier study in which they observed Japanese eels emerging from a predator’s gills after being swallowed, so they knew this unique strategy was possible. They just didn’t know the details of what was going on inside the digestive tract that enabled the eels to pull off this feat. So the team decided to use X-ray videography to peer inside predatory fish (Odontobutis obscura) after eels had been eaten. They injected barium sulfate into the abdominal cavity and tail of the Japanese eels as a contrast agent, then introduced each eel to a tank containing one O. obscura. The X-ray video system captured the interactions after an eel had been swallowed.
Out through the gills
The escaping behavior of a Japanese eel. Credit: Hasegawa et al./Current Biology
O. obscura swallow their prey whole along with surrounding water, and a swallowed eel quickly ends up in the digestive tract, a highly acidic and oxygen-deprived environment that kills the eels within 211.9 seconds on average (a little over three minutes). Thirty-two of the eels were eaten, and of those, 13 (or 40.6 percent) managed to poke at least their tails through the gills of their predator. Of those 13, nine (69.2 percent) escaped completely within 56 seconds on average, suggesting “that the period until the tails emerge from the predator’s gill is particularly crucial for successful escape,” the authors wrote. The final push for freedom involved coiling their bodies to extract their head from the gill.
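Just to keep the denominators straight, those percentages follow directly from the reported counts; here is a quick Python check using only the figures in this article:

```python
swallowed = 32   # eels eaten by O. obscura
tails_out = 13   # got at least their tails out through the gills
escaped = 9      # completed the escape entirely

print(f"tails through gills: {tails_out / swallowed:.1%}")         # 40.6%
print(f"full escapes, given tail out: {escaped / tails_out:.1%}")  # 69.2%
print(f"overall escape rate: {escaped / swallowed:.1%}")           # 28.1%
```

So while roughly four in ten eels got their tails clear, fewer than three in ten made it out alive overall.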
It helps to be swallowed head-first. The researchers discovered that most captured eels tried to escape by swimming back up the digestive tract toward the esophagus and gills, tail-first in the cases where escape was successful. However, eleven eels ended up completely inside the stomach and resorted to swimming around in circles—most likely looking for a possible escape route. Five of those managed to insert their tails correctly toward the esophagus, while two perished because they oriented their tails in the wrong direction.
“The most surprising moment in this study was when we observed the first footage of eels escaping by going back up the digestive tract toward the gill of the predatory fish,” said co-author Yuuki Kawabata, also of Nagasaki University. “At the beginning of the experiment, we speculated that eels would escape directly from the predator’s mouth to the gill. However, contrary to our expectations, witnessing the eels’ desperate escape from the predator’s stomach to the gills was truly astonishing for us.”
On Tuesday, Microsoft made a series of announcements related to its Azure Quantum Cloud service. Among them was a demonstration of logical operations using the largest number of error-corrected qubits yet.
“Since April, we’ve tripled the number of logical qubits here,” said Microsoft Technical Fellow Krysta Svore. “So we are accelerating toward that hundred-logical-qubit capability.” The company has also lined up a new partner in the form of Atom Computing, which uses neutral atoms to hold qubits and has already demonstrated hardware with over 1,000 hardware qubits.
Collectively, the announcements are the latest sign that quantum computing has emerged from its infancy and is rapidly progressing toward the development of systems that can reliably perform calculations that would be impractical or impossible to run on classical hardware. We talked with people at Microsoft and some of its hardware partners to get a sense of what’s coming next to bring us closer to useful quantum computing.
Making error correction simpler
Logical qubits are a route out of the general despair of realizing that we’re never going to keep hardware qubits from producing too many errors for reliable calculation. Error correction on classical computers involves measuring the state of bits and comparing their values to an aggregated value. Unfortunately, you can’t analogously measure the state of a qubit to determine if an error has occurred since measurement causes it to adopt a concrete value, destroying any of the superposition of values that make quantum computing useful.
Logical qubits get around this by spreading a single bit of quantum information across a collection of qubits, which makes any error less catastrophic. Detecting when one occurs involves adding some additional qubits to the logical qubit such that their value is dependent upon the ones holding the data. You can measure these ancillary qubits to identify whether any problem has occurred and possibly gain information on how to correct it.
There are many potential error-correction schemes, some of which involve dedicating around a thousand qubits to each logical qubit. It’s possible to get away with far fewer than that—schemes with fewer than 10 qubits exist. But in general, the fewer hardware qubits you use, the greater your chance of experiencing errors that you can’t recover from. This trend can be offset in part through hardware qubits that are less error-prone.
The challenge is that this only works if error rates are low enough that you don’t run into errors during the correction process. In other words, the hardware qubits have to be good enough that they don’t produce so many errors that it’s impossible to know when an error has occurred and how to correct it. That threshold has been passed only relatively recently.
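To make the ancilla idea concrete, here is a toy classical analogue in Python: a three-bit repetition code whose parity checks locate a single bit-flip without reading out the encoded value. It's only a sketch (real quantum codes must also handle phase errors and cannot simply copy quantum states), but the syndrome-based correction logic is the same in spirit:

```python
import random

def encode(bit):
    # Spread one logical bit across three physical bits.
    return [bit, bit, bit]

def add_noise(bits, p):
    # Flip each physical bit independently with probability p.
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    # Parity checks play the role of ancilla measurements: they compare
    # neighboring bits without revealing the encoded value itself.
    syndrome = (bits[0] ^ bits[1], bits[1] ^ bits[2])
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome)
    if flip is not None:   # a single error is located and undone;
        bits[flip] ^= 1    # two or more errors get miscorrected
    return bits[0]

p, runs = 0.05, 100_000
fails = sum(decode(add_noise(encode(0), p)) != 0 for _ in range(runs))
print(f"physical error rate {p}, logical error rate ~{fails / runs:.4f}")
# Expect roughly 3 * p**2 ~ 0.0075: the encoded bit fails far less often
# than a bare bit would.
```

The same tradeoff described above shows up here: correction only helps when single errors are much more likely than double errors, which is why the hardware error rate has to clear a threshold before encoding pays off.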
Microsoft’s earlier demonstration involved the use of hardware from Quantinuum, which uses qubits based on ions trapped in electrical fields. These have some of the best error rates yet reported, and Microsoft had shown that this allowed it to catch and correct errors over several rounds of error correction. In the new work, the collaboration went further, performing multiple logical operations with error correction on a collection of logical qubits.
In an age when you can get just about anything online, it’s probably no surprise that you can buy a diamond-making machine for $200,000 on Chinese e-commerce site Alibaba. If, like me, you haven’t been paying attention to the diamond industry, it turns out that the availability of these machines reflects an ongoing trend toward democratizing diamond production—a process that began decades ago and continues to evolve.
The history of lab-grown diamonds dates back at least half a century. According to Harvard graduate student Javid Lakha, writing in a comprehensive piece on lab-grown diamonds published in Works in Progress last month, the first successful synthesis of diamonds in a laboratory setting occurred in the 1950s. Lakha recounts how Howard Tracy Hall, a chemist at General Electric, created the first lab-grown diamonds using a high-pressure, high-temperature (HPHT) process that mimicked the conditions under which diamonds form in nature.
Since then, diamond-making technology has advanced significantly. Today, there are two primary methods for creating lab-grown diamonds: the HPHT process and chemical vapor deposition (CVD). Both types of machines are now listed on Alibaba, with prices starting at around $200,000, as pointed out in a Hacker News comment by engineer John Nagle (who goes by “Animats” on Hacker News). A CVD machine we found is pricier, at around $450,000.
Images of an “HPHT Cubic Press Synthetic Diamond Making Machine” made by Henan Huanghe Whirlwind Co., Ltd. in China, along with a photo of a factory floor full of the presses.
Not a simple operation
While the idea of purchasing a diamond-making machine on Alibaba might be intriguing, it’s important to note that operating one isn’t as simple as plugging it in and watching diamonds form. According to Lakha’s article, these machines require significant expertise and additional resources to operate effectively.
For an HPHT press, you’d need a reliable source of high-quality graphite, metal catalysts like iron or cobalt, and precise temperature and pressure control systems. CVD machines require a steady supply of methane and hydrogen gases, as well as the ability to generate and control microwaves or hot filaments. Both methods need diamond seed crystals to start the growth process.
Moreover, you’d need specialized knowledge to manage the growth parameters, handle potentially hazardous materials and high-pressure equipment safely, and process the resulting raw diamonds into usable gems or industrial components. The machines also use considerable amounts of energy and require regular maintenance. Those factors may make the process subject to some regulations that are far beyond the scope of this piece.
In short, while these machines are more accessible than ever, turning one into a productive diamond-making operation would still require significant investment in equipment, materials, expertise, and safety measures. But hey, a guy can dream, right?
A man helps a boy look at a handgun during the National Rifle Association’s Annual Meetings & Exhibits at the Indiana Convention Center in Indianapolis on April 16, 2023.
Gun-owning parents who teach their kids to handle and shoot firearms are more likely to keep a gun unlocked and loaded at home, according to a new study. The study, conducted by gun violence researchers at Rutgers University, analyzed survey responses from 870 gun-owning parents. Of those, the parents who responded that they demonstrated proper handling to their child or teen, had their kid practice safe handling under supervision, and/or taught their kid how to shoot a firearm were more likely than other gun-owning parents to keep at least one gun unsecured—that is, unlocked and loaded. In fact, each of the three responses carried at least double the odds of the parent having an unlocked, loaded gun around, the study found.
Still, earlier surveys of gun-owning parents have hinted that some parents believe responsible gun-handling lessons are enough to keep children safe.
Safe handling, unsecured storage
To put data behind that sentiment, the Rutgers researchers collected survey responses from adults in nine states (New Jersey, Pennsylvania, Ohio, Minnesota, Florida, Mississippi, Texas, Colorado, and Washington) who participated in an Ipsos poll. The states span varying geography, firearm ownership rates, firearm policies, and gun violence rates. The responses were collected in June and July of 2023. The researchers then analyzed the data, adjusting their calculations to account for a variety of factors, including military status, education level, income, and political beliefs.
Among the 870 parents included in the study, 412 (47 percent) said they demonstrated proper firearm handling for their kids, 321 (37 percent) said they had their children practice proper handling with supervision, and 324 (37 percent) taught their kids how to shoot their firearm.
The researchers then split the group into those who did not keep an unlocked, loaded gun around and those who did. Of the 870 gun-owning parents, 720 (83 percent) stored firearms securely, while 150 (17 percent) reported that they kept at least one firearm unlocked and loaded. Compared with the 720 secure-storage parents, the 150 parents with an unlocked, loaded gun had adjusted odds of 2.03-fold higher for saying they demonstrated proper handling, 2.29-fold higher for practicing handling with their kids, and 2.27-fold higher for teaching their kids to shoot.
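An odds ratio is simply the odds of a behavior in one group divided by the odds in another. The paper reports only the adjusted figures from its regression models, so the cell counts in this Python sketch are invented, chosen only to be consistent with the study's totals (150 unsecured and 720 secure-storage parents, 412 of whom demonstrated handling):

```python
# Hypothetical 2x2 split of the 870 parents; the real cross-tabulation
# isn't given in the article, so these cells are for illustration only.
unsecured = {"demo": 90, "no_demo": 60}   # kept a gun unlocked and loaded
secure = {"demo": 322, "no_demo": 398}    # stored all guns securely

odds_unsecured = unsecured["demo"] / unsecured["no_demo"]  # 1.50
odds_secure = secure["demo"] / secure["no_demo"]           # ~0.81
print(f"unadjusted odds ratio: {odds_unsecured / odds_secure:.2f}")  # ~1.85
# The study's 2.03-fold figure is the adjusted analogue of this number,
# computed via logistic regression controlling for income, education, etc.
```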
“Consistent with qualitative research results, these findings suggest that some parents may believe that modeling responsible firearm use negates the need for secure storage,” the authors concluded. “However, it is unknown whether parents’ modeling responsible behavior is associated with a decreased risk of firearm injury.”
At the US Army’s Camp Century on the Greenland ice sheet, an Army truck equipped with a railroad wheel conversion rides on 1,300 feet of track under the snow.
At the height of the Cold War in the 1950s, as the fear of nuclear Armageddon hung over American and Soviet citizens, idealistic scientists and engineers saw the vast Arctic region as a place of unlimited potential for creating a bold new future. Greenland emerged as the most tantalizing proving ground for their research.
Scientists and engineers working for and with the US military cooked up a rash of audacious cold-region projects—some innovative, many spit-balled, and most quickly abandoned. They were the stuff of science fiction: disposing of nuclear waste by letting it melt through the ice; moving people, supplies, and missiles below the ice using subways, some perhaps atomic powered; testing hovercraft to zip over impassable crevasses; making furniture from a frozen mix of ice and soil; and even building a nuclear-powered city under the ice sheet.
Today, many of their ideas, and the fever dreams that spawned them, survive only in the yellowed pages and covers of magazines like “REAL: the exciting magazine FOR MEN” and dozens of obscure Army technical reports.
Karl and Bernhard Philberth, both physicists and ordained priests, thought Greenland’s ice sheet the perfect repository for nuclear waste. Not all the waste—first they’d reprocess spent reactor fuel so that the long-lived nuclides would be recycled. The remaining, mostly short-lived radionuclides would be fused into glass or ceramic and surrounded by a few inches of lead for transport. They imagined several million radioactive medicine balls about 16 inches in diameter scattered over a small area of the ice sheet (about 300 square miles) far from the coast.
Because the balls were so radioactive, and thus warm, they would melt their way into the ice, each with the energy of a bit less than two dozen 100-watt incandescent light bulbs—a reasonable leap from Karl Philberth’s expertise designing heated ice drills that worked by melting their way through glaciers. The hope was that by the time the ice carrying the balls emerged at the coast thousands or tens of thousands of years later, the radioactivity would have decayed away. One of the physicists later reported that the idea was shown to him, by God, in a vision.
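The melt-through arithmetic is easy to sketch. Assuming a heat output of about 2,300 watts per ball (a bit less than two dozen 100-watt bulbs) and ignoring heat conducted away into the surrounding ice, a rough upper bound in Python:

```python
power_w = 2300.0                   # assumed heat output per ball
latent_heat_j_per_kg = 334_000.0   # energy to melt ice at 0 degrees C
ice_density_kg_m3 = 917.0

melt_kg_per_day = power_w / latent_heat_j_per_kg * 86_400
melt_m3_per_day = melt_kg_per_day / ice_density_kg_m3
print(f"~{melt_kg_per_day:.0f} kg (~{melt_m3_per_day:.2f} m^3) of ice per day")
# ~595 kg, or ~0.65 cubic meters, per day: plenty for a 16-inch (~0.4 m)
# sphere to keep sinking through the sheet.
```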
US Army test in Greenland in the 1950s of the Snowblast, a machine designed to smooth snow runways.
Of course, the plan had plenty of unknowns, and it led to heated discussion at scientific meetings when it was presented. What, for example, would happen if the balls got crushed or caught up in flows of meltwater near the base of the ice sheet? And would the radioactive balls warm the ice so much that it flowed faster at the base, speeding the balls’ trip to the coast?
Logistical challenges, scientific doubt, and politics sank the project. Producing millions of radioactive glass balls wasn’t yet practical, and the Danes, who at the time controlled Greenland, were never keen on allowing nuclear waste disposal on what they saw as their island. Some skeptics even worried about climate change melting the ice. Nonetheless, the Philberths made visits to the ice sheet and published peer-reviewed scientific papers about their waste dream.
Loads of lava: Kasbohm with a few solidified lava flows of the Columbia River Basalts. Credit: Joshua Murray
As our climate warms beyond its historical range, scientists increasingly need to study climates deeper in the planet’s past to get information about our future. One object of study is a warming event known as the Miocene Climate Optimum (MCO) from about 17 to 15 million years ago. It coincided with floods of basalt lava that covered a large area of the Northwestern US, creating what are called the “Columbia River Basalts.” This timing suggests that volcanic CO2 was the cause of the warming.
A paper just published in Geology, led by Jennifer Kasbohm of the Carnegie Science’s Earth and Planets Laboratory, upends the idea that the eruptions triggered the warming while still blaming them for the peak climate warmth.
The study is the result of the world’s first successful application of high-precision radiometric dating to climate records obtained by drilling into ocean sediments, opening the door to improved measurements of past climate changes. As a bonus, it confirms that mathematical models of the Solar System’s orbital motions remain valid over deep time.
A past climate with today’s CO2 levels
“Today, with 420 parts per million [of CO2], we are basically entering the Miocene Climate Optimum,” said Thomas Westerhold of the University of Bremen, who peer-reviewed Kasbohm’s study. While our CO2 levels match, global temperatures have not yet reached the MCO temperatures of up to 8°C above preindustrial levels. “We are moving the Earth System from what we call the Ice House world… in the complete opposite direction,” said Westerhold.
When Kasbohm began looking into the link between the basalts and the MCO’s warming in 2015, she found that the correlation had huge uncertainties. So she applied high-precision radiometric dating, using the radioactive decay of uranium trapped within zircon crystals to determine the age of the basalts. She found that her new ages no longer spanned the MCO warming. “All of these eruptions [are] crammed into just a small part of the Miocene Climate Optimum,” said Kasbohm.
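The underlying dating arithmetic is standard radiometric decay, not anything unique to Kasbohm's workflow: uranium decays to lead at a precisely known rate, so the lead-to-uranium ratio measured in a zircon fixes the time since the crystal formed. A minimal Python illustration (assuming, as is standard for zircon, no initial lead):

```python
import math

LAMBDA_238U = 1.55125e-10  # decay constant of uranium-238, per year

def u_pb_age(daughter_per_parent):
    # t = ln(1 + D/P) / lambda, where D/P is the measured ratio of
    # daughter lead atoms to remaining parent uranium atoms.
    return math.log(1.0 + daughter_per_parent) / LAMBDA_238U

# A ratio of ~0.00248, for example, corresponds to a mid-MCO age:
print(f"{u_pb_age(0.00248) / 1e6:.1f} million years")  # ~16.0
```

The "high-precision" part of the work lies in shrinking the uncertainty on that measured ratio, which is what lets the resulting ages discriminate events within the MCO.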
But there were also huge uncertainties in the dates for the MCO, so it was possible that the mismatch was an artifact of those uncertainties. Kasbohm set out to apply the same high-precision dating to the marine sediments that record the MCO.
A new approach to an old problem
“What’s really exciting… is that this is the first time anyone’s applied this technique to sediments in these ocean drill cores,” said Kasbohm.
Normally, dates for ocean sediments drilled from the seabed are determined using a combination of fossil changes, magnetic field reversals, and aligning patterns of sediment layers with orbital wobbles calculated by astronomers. Each of those methods has uncertainties that are compounded by gaps in the sediment caused by the drilling process and by natural pauses in the deposition of material. Those make it tricky to match different records with the precision needed to determine cause and effect.
The uncertainties made the timing of the MCO unclear.
Tiny clocks: Zircon crystals from volcanic ash that fell into the Caribbean Sea during the Miocene. Credit: Jennifer Kasbohm
Radiometric dating would circumvent those uncertainties. But until about 15 years ago, its dates had such large errors that they were useless for addressing questions like the timing of the MCO. The technique also typically needs kilograms of material to find enough uranium-containing zircon crystals, whereas ocean drill cores yield just grams.
But scientists have significantly reduced those limitations: “Across the board, people have been working to track and quantify and minimize every aspect of uncertainty that goes into the measurements we make. And that’s what allows me to report these ages with such great precision,” Kasbohm said.
Power lines are cast in silhouette as the Creek Fire creeps up on the Shaver Springs community off of Tollhouse Road on Tuesday, Sept. 8, 2020, in Auberry, California.
This article originally appeared on Inside Climate News, a nonprofit, independent news organization that covers climate, energy, and the environment. It is republished with permission.
Most people are “very” or “extremely” concerned about the state of the natural world, a new global public opinion survey shows.
Roughly 70 percent of 22,000 people polled online earlier this year agreed that human activities were pushing the Earth past “tipping points,” thresholds beyond which nature cannot recover, like loss of the Amazon rainforest or collapse of the Atlantic Ocean’s currents. The same share of respondents said the world needs to reduce carbon emissions within the next decade.
Just under 40 percent of respondents said technological advances can solve environmental challenges.
The Global Commons survey, conducted for two collectives of “economic thinkers” and scientists known as Earth4All and the Global Commons Alliance, polled people across 22 countries, including low-, middle- and high-income nations. The survey’s stated aim was to assess public opinion about “societal transformations” and “planetary stewardship.”
The results, released Thursday, highlight that people living under diverse circumstances seem to share worries about the health of ecosystems and the environmental problems future generations will inherit.
But there were some regional differences. People living in emerging economies, including Kenya and India, perceived themselves to be more exposed to environmental and climate shocks, like drought, flooding, and extreme weather. That group expressed higher levels of concern about the environment, though 59 percent of all respondents said they are “very” or “extremely” worried about “the state of nature today,” and another 29 percent are at least somewhat concerned.
Americans are included in the global majority, but a more complex picture emerged in the details of the survey, conducted by Ipsos.
Roughly one in two Americans said they are not very or not at all exposed to environmental and climate change risks. Those perceptions contrast sharply with empirical evidence showing that climate change is having an impact in nearly every corner of the United States. A warming planet has intensified hurricanes battering coasts, droughts striking middle American farms, and wildfires threatening homes and air quality across the country. And climate shocks are driving up prices of some food, like chocolate and olive oil, and consumer goods.
Americans also largely believe they do not bear responsibility for global environmental problems. Only about 15 percent of US respondents said that high- and middle-income Americans share responsibility for climate change and natural destruction. Instead, they attribute the most blame to businesses and governments of wealthy countries.
Those survey responses suggest that at least half of Americans may not feel they have any skin in the game when it comes to addressing global environmental problems, according to Geoff Dabelko, a professor at Ohio University and expert in environmental policy and security.
Translating concern about the environment to actual change requires people to believe they have something at stake, Dabelko said. “It’s troubling that Americans aren’t making that connection.”
While fossil fuel companies have long campaigned to shape public perception in a way that absolves their industry of fault for ecosystem destruction and climate change, individual behavior does play a role. Americans have some of the highest per-capita consumption rates in the world.
The world’s wealthiest 10 percent are responsible for nearly half the world’s carbon emissions, along with ecosystem destruction and related social impacts. For instance, American consumption of gold, tropical hardwoods like mahogany and cedar, and other commodities has been linked to the destruction of the Amazon rainforest and attacks on Indigenous people defending their territories from extractive activities.
The United States is one of the world’s wealthiest countries and home to 38 percent of the world’s millionaires (the largest share). But a person doesn’t need to be a millionaire to fit within the cohort of the world’s wealthiest. Americans without children earning more than $60,000 a year after tax, and families of three with an after-tax household income above $130,000, are in the richest 1 percent of the world’s population.
United Nations emissions gap reports have said that to reach global climate goals, the world’s wealthiest people must cut their personal emissions by at least a factor of 30. High-income Americans’ emissions footprint is largely a consequence of lifestyle choices like living in large homes, flying often, opting for personal vehicles over public transportation, and conspicuous consumption of fast fashion and other consumer goods.
Boeing’s Starliner spacecraft after landing Friday night at White Sands Space Harbor, New Mexico. Credit: Boeing
Boeing’s Starliner spacecraft sailed to a smooth landing in the New Mexico desert Friday night, an auspicious end to an otherwise disappointing three-month test flight that left the capsule’s two-person crew stuck in orbit until next year.
Cushioned by airbags, the Boeing crew capsule descended under three parachutes toward an on-target landing at 10:01 pm local time Friday (12:01 am EDT Saturday) at White Sands Space Harbor, New Mexico. From the outside, the landing appeared just as it would have if the spacecraft had brought home NASA astronauts Butch Wilmore and Suni Williams, who became the first people to launch on a Starliner capsule on June 5.
But Starliner’s cockpit was empty as it flew back to Earth Friday night. Last month, NASA managers decided to keep Wilmore and Williams on the International Space Station (ISS) until next year after agency officials determined it was too risky for the astronauts to return to the ground on Boeing’s spaceship. Instead of coming home on Starliner, Wilmore and Williams will fly back to Earth on a SpaceX Dragon spacecraft in February. NASA has incorporated the Starliner duo into the space station’s long-term crew.
The Starliner spacecraft began the journey home by backing away from its docking port at the space station at 6:04 pm EDT (22:04 UTC), one day after astronauts closed hatches to prepare for the ship’s departure. The capsule fired thrusters to quickly back away from the complex, setting up for a deorbit burn to guide Starliner on a trajectory toward its landing site. Then, Starliner jettisoned its disposable service module to burn up over the Pacific Ocean, while the crew module, with a vacant cockpit, took aim on New Mexico.
After streaking through the atmosphere over the Pacific Ocean and Mexico, Starliner deployed three main parachutes to slow its descent, then a ring of six airbags inflated around the bottom of the spacecraft to dampen the jolt of touchdown. This was the third time a Starliner capsule has flown in space, and the second time the spacecraft fell short of achieving all of its objectives.
Not the desired outcome
“I’m happy to report Starliner did really well today in the undock, deorbit, and landing sequence,” said Steve Stich, manager of NASA’s commercial crew program, which manages a contract worth up to $4.6 billion for Boeing to develop, test, and fly a series of Starliner crew missions to the ISS.
While officials were pleased with Starliner’s landing, the celebration was tinged with disappointment.
“From a human perspective, all of us feel happy about the successful landing, but then there’s a piece of us that we wish it would have been the way we had planned it,” Stich said. “We had planned to have the mission land with Butch and Suni onboard. I think there are, depending on who you are on the team, different emotions associated with that, and I think it’s going to take a little time to work through that.”
Nevertheless, Stich said NASA made the right call last month when officials decided to complete the Starliner test flight without astronauts in the spacecraft.
“We made the decision to have an uncrewed flight based on what we knew at the time, and based on our knowledge of the thrusters and based on the modeling that we had,” Stich said. “If we’d had a model that would have predicted what we saw tonight perfectly, yeah, it looks like an easy decision to go say, ‘We could have had a crew tonight.’ But we didn’t have that.”
Boeing’s Starliner managers insisted the ship was safe to bring the astronauts home. It might be tempting to conclude that Friday night’s successful landing vindicated Boeing’s views on the thruster problems. However, the spacecraft’s propulsion system, provided by Aerojet Rocketdyne, clearly did not work as intended during the flight. NASA had the option of bringing Wilmore and Williams back to Earth on a different, flight-proven spacecraft, so NASA took it.
“It’s awfully hard for the team,” Stich said. “It’s hard for me, when we sit here and have a successful landing, to be in that position. But it was a test flight, and we didn’t have confidence, with certainty, of the thruster performance.”
In this infrared view, Starliner descends under its three main parachutes moments before touchdown at White Sands Space Harbor, New Mexico. Credit: NASA
As Starliner approached the space station in June, five of 28 control thrusters on Starliner’s service module failed, forcing Wilmore to take manual control as ground teams sorted out the problem. Eventually, engineers recovered four of the five thrusters, but NASA’s decision makers were unable to convince themselves the same problem wouldn’t reappear, or get worse, when the spacecraft departed the space station and headed for reentry and landing.
Engineers later determined the control jets lost thrust due to overheating, which can cause Teflon seals in valves to swell and deform, starving the thrusters of propellant. Telemetry data beamed back to the mission controllers from Starliner showed higher-than-expected temperatures on two of the service module thrusters during the flight back to Earth Friday night, but they continued working.
Ground teams also detected five small helium leaks on Starliner’s propulsion system soon after its launch in June. NASA and Boeing officials were aware of one of the leaks before the launch, but decided to go ahead with the test flight. Starliner was still leaking helium when the spacecraft undocked from the station Friday, but the leak rate remained within safety tolerances, according to Stich.
A couple of fresh technical problems cropped up as Starliner cruised back to Earth. One of 12 control jets on the crew module failed to ignite at any time during Starliner’s flight home. These are separate thrusters from the small engines that caused trouble earlier in the Starliner mission. There was also a brief glitch in Starliner’s navigation system during reentry.
Where to go from here?
Three NASA managers, including Stich, took questions from reporters in a press conference early Saturday following Starliner’s landing. Two Boeing officials were also supposed to be on the panel, but they canceled at the last minute. Boeing didn’t explain their absence, and the company has not made any officials available to answer questions since NASA chose to end the Starliner test flight without the crew aboard.
“We view the data and the uncertainty that’s there differently than Boeing does,” said Jim Free, NASA’s associate administrator, in an August 24 press conference announcing the agency’s decision on how to end the Starliner test flight. It’s unusual for NASA officials to publicly discuss how their opinions differ from those of their contractors.
Joel Montalbano, NASA’s deputy associate administrator for space operations, said Saturday that Boeing deferred to the agency to discuss the Starliner mission in the post-landing press conference.
Here’s the only quote from a Boeing official on Starliner’s return to Earth. It came in the form of a three-paragraph written statement Boeing emailed to reporters about a half-hour after Starliner’s landing: “I want to recognize the work the Starliner teams did to ensure a successful and safe undocking, deorbit, re-entry and landing,” said Mark Nappi, vice president and program manager of Boeing’s commercial crew program. “We will review the data and determine the next steps for the program.”
Nappi’s statement doesn’t answer one of the most important questions reporters would have asked anyone from Boeing if they participated in Saturday morning’s press conference: Does Boeing still have a long-term commitment to the Starliner program?
So far, the only indications of Boeing’s future plans for Starliner have come from second-hand anecdotes relayed by NASA officials. Boeing has been silent on the matter. The company has reported nearly $1.6 billion in financial charges to pay for previous delays and cost overruns on the Starliner program, and Boeing will again be on the hook to pay to fix the problems Starliner encountered in space over the last three months.
Montalbano said Boeing’s Starliner managers met with ground teams at mission control in Houston following the craft’s landing. “The Boeing managers came into the control room and congratulated the team, talked to the NASA team, so Boeing is committed to continue their work with us,” he said.
Boeing’s Starliner spacecraft fires thrusters during departure from the International Space Station on Friday. Credit: NASA
NASA isn’t ready to give up on Starliner. A fundamental tenet of NASA’s commercial crew program is to foster the development of two independent vehicles to ferry astronauts to and from the International Space Station, and eventually commercial outposts in low-Earth orbit. NASA awarded multibillion-dollar contracts to Boeing and SpaceX in 2014 to complete development of their Starliner and Crew Dragon spaceships.
SpaceX’s Dragon started flying astronauts in 2020. NASA would like to have another US spacecraft for crew rotation flights to support the ISS. If Boeing had more success with this Starliner test flight, NASA expected to formally certify the spacecraft for operational crew flights beginning next year. Once that happens, Starliner will enter a rotation with SpaceX’s Dragon to transport crews to and from the station in six-month increments.
Stich said Saturday that NASA has not determined whether the agency will require Boeing to launch another Starliner test flight before certifying the spacecraft for regular crew rotation missions. “It’ll take a little time to determine the path forward, but today we saw the vehicle perform really well,” he said.
On to Starliner-1?
But some of Stich’s other statements Saturday suggested NASA would like to proceed with certifying Starliner and flying the next mission with a full crew complement of four astronauts. NASA calls Boeing’s first operational crew mission Starliner-1. It’s the first of at least three and potentially up to six crew rotation missions on Boeing’s contract.
“It’s great to have the spacecraft back, and we’re now focused on Starliner-1,” Stich said.
Before that happens, NASA and Boeing engineers must resolve the thruster problems and helium leaks that plagued the test flight this summer. Stich said teams are studying several ways to improve the reliability of Starliner’s thrusters, including hardware modifications and procedural changes. This will probably push back the next crew flight of Starliner, whether it’s Starliner-1 or another test flight, until the end of next year or 2026, although NASA officials have not laid out a schedule.
The overheating thrusters are located inside four doghouse-shaped propulsion pods around the perimeter of Starliner’s service module. It turns out the doghouses retain heat like a thermos—something NASA and Boeing didn’t fully appreciate before this mission—and the thrusters don’t have time to cool down when the spacecraft fires its control jets in rapid pulses. It might help if Boeing removes some of the insulating thermal blankets from the doghouses, Stich said.
The easiest method of resolving the problem of Starliner’s overheating thrusters would be to change the rate and duration of thruster firings.
“What we would like to do is try not to change the thruster. I think that is the best path,” Stich said. “These thrusters have shown resilience and have shown that they perform well, as long as we keep their temperatures down and don’t fire them in a manner that causes the temperatures to go up.”
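A toy lumped thermal model shows why the duty cycle matters so much inside an insulated doghouse. All the numbers below are invented, arbitrary units; this sketches the reasoning, not Boeing or Aerojet Rocketdyne data:

```python
def peak_temp(burn_s, coast_s, cycles, heat_rate=50.0, cool=0.01, dt=0.1):
    # First-order model: firing adds heat at a fixed rate, while the
    # insulated doghouse lets only a trickle leak out (small `cool`).
    temp, peak = 0.0, 0.0
    for _ in range(cycles):
        for duration, firing in ((burn_s, True), (coast_s, False)):
            for _ in range(int(round(duration / dt))):
                temp += ((heat_rate if firing else 0.0) - cool * temp) * dt
                peak = max(peak, temp)
    return peak

# Same total firing time, different spacing between pulses:
print(peak_temp(burn_s=1.0, coast_s=2.0, cycles=20))   # rapid pulses run much hotter...
print(peak_temp(burn_s=1.0, coast_s=20.0, cycles=20))  # ...than widely spaced ones
```

With the heat input per firing fixed, only the coast time between pulses changes the peak temperature, which is the intuition behind adjusting firing rates and durations rather than redesigning the thruster itself.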
There’s one thing from this summer’s test flight that might, counterintuitively, help NASA certify the Starliner spacecraft to begin operational flights with its next mission. Rather than staying at the space station for eight days, Starliner remained docked at the research lab for three months, half of the duration of a full-up crew rotation flight. Despite the setbacks, Stich estimated the test flight achieved about 85 to 90 percent of its objectives.
“There’s a lot of learning that happens in that three months that is invaluable for an increment mission,” Stich said. “So, in some ways, the mission overachieved some objectives, in terms of being there for extra time. Not having the crew onboard, obviously, there are some things that we lack in terms of Butch and Suni’s test pilot expertise, and how the vehicle performed, what they saw in the cockpit. We won’t have that data, but we still have the wealth of data from the spacecraft itself, so that will go toward the mission objectives and the certification.”
“The First Combat of Gav and Talhand,” folio from a Shahnama (Book of Kings), ca. 1330–40, attributed to Iran, probably Isfahan; ink, opaque watercolor, gold, and silver on paper. Three battles between two Indian princes, half brothers contending for the throne, resulted in the invention of the game of chess to explain the death of one of them to their grieving mother. The Persian phrase shah mat, or checkmate, indicating a position of no escape, describes the plight of Talhand at the end of the third battle. Credit: Sepia Times/Universal Images Group via Getty Images
The three princes of Sarandib—an ancient Persian name for Sri Lanka—get exiled by their father the king. They are good boys, but he wants them to experience the wider world and its peoples and be tested by them before they take over the kingdom. They meet a cameleer who has lost his camel and tell him they’ve seen it—though they have not—and prove it by describing three noteworthy characteristics of the animal: it is blind in one eye, it has a tooth missing, and it has a lame leg.
After some hijinks the camel is found, and the princes are correct. How could they have known? They used their keen observational skills to notice unusual things, and their wit to interpret those observations to reveal a truth that was not immediately apparent.
It is a very old tale, sometimes involving an elephant or a horse instead of a camel. But this is the version written by Amir Khusrau in Delhi in 1301 in his poem The Eight Tales of Paradise, and this is the version that one Christopher the Armenian clumsily translated into the Venetian novel The Three Princes of Serendip, published in 1557; a publication that, in a roundabout way, brought the word “serendipity” into the English language.
In no version of the story do the princes accidentally stumble across something important they were not looking for, or find something they were looking for but in a roundabout, unanticipated manner, or make a valuable discovery based on a false belief or misapprehension. Chance, luck, and accidents, happy or otherwise, play no role in their tale. Rather, the trio use their astute observations as fodder for their abductive reasoning. Their main talent is their ability to spot surprising, unexpected things and use their observations to formulate hypotheses and conjectures that then allow them to deduce the existence of something they’ve never before seen.
Defining serendipity
This is how Telmo Pievani, the first Italian chair of Philosophy of Biological Sciences at the University of Padua, eventually comes to define serendipity in his new book, Serendipity: The Unexpected in Science. It’s hardly a mind-bending or world-altering read, but it is a cute and engaging one, especially when his many stories of discovery veer into ruminations on the nature of inquiry and of science itself.
He starts with the above-mentioned romp through global literature, culminating in the joint coining and misunderstanding of the term as we know it today: in 1754, after reading the popular English translation entitled The Travels and Adventures of Three Princes of Serendip, the intellectual Horace Walpole described “Serendipity, a very expressive word,” as “discoveries, by accidents and sagacity, of things which they were not in quest of.”
Pievani knows a lot (like, a lot) about the history of science, and he puts it on display here. He quickly debunks all of the instances of alleged serendipity that are always trotted out: Fleming the microbiologist had been studying antibiotics and searching for a commercially viable one for years before his moldy plate led him to penicillin. Yes, Röntgen discovered X-rays by a fluke, but it was only because of the training he received in his studies of cathode rays that he recognized he was observing a new form of radiation. Plenty of people over the course of history splashed some volume of water out of the baths they were climbing into and watched apples fall, but only Archimedes—who had recently been tasked by his king to figure out if his crown was made entirely of gold—and Newton—polymathic inventor of calculus—leapt from these (probably apocryphal) mundane occurrences to their famous discoveries of density and gravity, respectively.
After dispensing with these tired old saws, Pievani then suggests some cases of potentially real—or strong, as he deems it—serendipity. George de Mestral invented Velcro after noticing burrs stuck to his pants while hiking in the Alps; he certainly wasn’t searching for anything, and he parlayed his observation into a useful technology. DuPont chemists developed nylon and Teflon (and 3M chemists the Post-it note) while playing with polymers intended for assorted other purposes. And Columbus “discovered” the Americas (for the fourth time) because he thought the Earth was about a third smaller than Eratosthenes of Cyrene had correctly calculated it to be almost two thousand years earlier, a calculation forgotten “due to memory loss and Eurocentric prejudices.”