Huge math error corrected in black plastic study; authors say it doesn’t matter

Ars has reached out to the lead author, Megan Liu, but has not received a response. Liu works for the environmental health advocacy group Toxic-Free Future, which led the study.

The study highlighted that flame retardants used in plastic electronics may, in some instances, be recycled into household items.

“Companies continue to use toxic flame retardants in plastic electronics, and that’s resulting in unexpected and unnecessary toxic exposures,” Liu said in a press release from October. “These cancer-causing chemicals shouldn’t be used to begin with, but with recycling, they are entering our environment and our homes in more ways than one. The high levels we found are concerning.”

BDE-209, aka decabromodiphenyl ether or deca-BDE, was a dominant component of TV and computer housings before it was banned by the European Union in 2006 and some US states in 2007. China only began restricting BDE-209 in 2023. The flame retardant is linked to carcinogenicity, endocrine disruption, neurotoxicity, and reproductive harm.

Uncommon contaminant

The presence of such toxic compounds in household items is worth noting as a potential hazard of the plastic waste stream. However, in addition to finding levels that were an order of magnitude below safe limits, the study also suggested that the contamination is not very common.
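The arithmetic behind that safety margin is simple. Here is a minimal sketch; the EPA reference dose, reference body weight, and estimated daily intake are figures reported in coverage of the correction, not values taken from this excerpt:

```python
# Figures as reported in coverage of the correction (assumptions, not
# measurements from this article).
reference_dose_ng_per_kg_day = 7_000   # EPA reference dose for BDE-209
body_weight_kg = 60                    # reference adult body weight

# Multiplying these gives the safe daily intake; the study reportedly
# dropped a digit here, arriving at 42,000 ng/day instead.
safe_daily_intake_ng = reference_dose_ng_per_kg_day * body_weight_kg
print(safe_daily_intake_ng)            # 420000

# Estimated intake from contaminated kitchen utensils, as reported.
estimated_intake_ng = 34_700
print(round(estimated_intake_ng / safe_daily_intake_ng, 2))  # 0.08
```

With the corrected limit, the estimated exposure sits at roughly 8 percent of the safe daily intake, i.e., an order of magnitude below it.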

The study examined 203 black plastic household products, including 109 kitchen utensils, 36 toys, 30 hair accessories, and 28 food serviceware products. Of those 203 products, only 20 (10 percent) had any bromine-containing compounds at levels that might indicate contamination from bromine-based flame retardants, like BDE-209. Of the 109 kitchen utensils tested, only nine (8 percent) contained concerning bromine levels.

“[A] minority of black plastic products are contaminated at levels >50 ppm [bromine],” the study states.

But that’s bromine compounds generally. Only 14 of the 203 products (7 percent) contained BDE-209 specifically.

The product that contained the highest level of bromine compounds was a disposable sushi tray, at 18,600 ppm. Given that heating is a significant contributor to chemical leaching, it’s unclear what exposure risk the sushi tray poses. Of the 28 food serviceware products assessed in the study, the sushi tray was one of only two found to contain bromine compounds. The other was a fast food tray that, at 51 ppm, sat just above the 50 ppm contamination threshold.

Bird flu jumps from birds to human in Louisiana; patient hospitalized

A person in Louisiana is hospitalized with H5N1 bird flu after having contact with sick and dying birds suspected of carrying the virus, state health officials announced Friday.

It is the first human H5N1 case detected in Louisiana. For now, the case is considered a “presumptive” positive until testing is confirmed by the Centers for Disease Control and Prevention. Health officials say that the risk to the public is low but caution people to stay away from any sick or dead birds.

Although the person has been hospitalized, their condition was not immediately reported. It’s also unclear what kind of birds the person had contact with—wild, backyard, or commercial birds. Ars has reached out to Louisiana’s health department and will update this piece with any additional information.

The case is just the latest amid H5N1’s global and domestic rampage. The virus has been ravaging birds of all sorts in the US since early 2022 and spilling over to a surprisingly wide range of mammals. In March this year, officials detected an unprecedented leap to dairy cows, which has since caused a nationwide outbreak. The virus is currently sweeping through California, the country’s largest dairy producer.

To date, at least 845 herds across 16 states have contracted the virus since March, including 630 in California, which detected its first dairy infections in late August.

Human cases

At least 60 people in the US have been infected amid the viral spread this year, but the new case in Louisiana stands out. To date, nearly all of the human cases have been among poultry and dairy workers, and almost all have been mild; the Louisiana case is neither. Most of the earlier cases have involved conjunctivitis (pink eye) and/or mild respiratory and flu-like symptoms.

One earlier case, in a Missouri patient, did involve hospitalization. However, that person had underlying health conditions, and it’s unclear whether H5N1 caused the hospitalization or was merely an incidental finding. How the person contracted the virus also remains unknown; an extensive investigation found no animal or other exposure that could explain the infection.

The US military is now talking openly about going on the attack in space

Mastalir said China is “copying the US playbook” with the way it integrates satellites into more conventional military operations on land, in the air, and at sea. “Their specific goals are to be able to track and target US high-value assets at the time and place of their choosing,” Mastalir said.

China’s strategy, known as Anti-Access/Area Denial, or A2AD, is centered on preventing US forces from accessing international waters extending hundreds or thousands of miles from mainland China. Some of the islands occupied by China within the last 15 years are closer to the Philippines, another treaty ally, than to China itself.

The A2AD strategy first “extended to the first island chain (bounded by the Philippines), and now the second island chain (extending to the US territory of Guam), and eventually all the way to the West Coast of California,” Mastalir said.

US officials say China has based anti-ship, anti-air, and anti-ballistic weapons in the region, and many of these systems rely on satellite tracking and targeting. Mastalir said his priority at Indo-Pacific Command, headquartered in Hawaii, is to defend US and allied satellites, or “blue assets,” and challenge “red assets” to break the Chinese military’s “long-range kill chains and protect the joint force from space-enabled attack.”

What this means is the Space Force wants to have the ability to disable or destroy the satellites China would use to provide communication, command, tracking, navigation, or surveillance support during an attack against the US or its allies.

Buildings and structures are seen on October 25, 2022, on an artificial island built by China on Subi Reef in the Spratly Islands of the South China Sea. China has progressively asserted its claim of ownership over disputed islands in the region. Credit: Ezra Acayan/Getty Images

Mastalir said he believes China’s space-based capabilities are “sufficient” to achieve the country’s military ambitions, whatever they are. “The sophistication of their sensors is certainly continuing to increase—the interconnectedness, the interoperability. They’re a pacing challenge for a reason,” he said.

“We’re seeing all signs point to being able to target US aircraft carriers… high-value assets in the air like tankers, AWACS (Airborne Warning And Control System),” Mastalir said. “This is a strategy to keep the US from intervening, and that’s what their space architecture is.”

That’s not acceptable to Pentagon officials, so Space Force personnel are now training for orbital warfare. Just don’t expect to know the specifics of any of these weapons systems any time soon.

“The details of that? No, you’re not going to get that from any war-fighting organization—’let me tell you precisely how I intend to attack an adversary so that they can respond and counter that’—those aren’t discussions we’re going to have,” Saltzman said. “We’re still going to protect some of those (details), but broadly, from an operational concept, we are going to be ready to contest space.”

A new administration

The Space Force will likely receive new policy directives after President-elect Donald Trump takes office in January. The Trump transition team hasn’t identified any changes coming for the Space Force, but a list of policy proposals known as Project 2025 may offer some clues.

Published by the Heritage Foundation, a conservative think tank, Project 2025 calls for the Pentagon to pivot the Space Force from a mostly defensive posture toward offensive weapons systems. Christopher Miller, who served as acting secretary of defense in the first Trump administration, authored the military section of Project 2025.

Miller wrote that the Space Force should “reestablish offensive capabilities to guarantee a favorable balance of forces, efficiently manage the full deterrence spectrum, and seriously complicate enemy calculations of a successful first strike against US space assets.”

Trump disavowed Project 2025 during the campaign, but since the election, he has nominated several of the policy agenda’s authors and contributors to key administration posts.

Saltzman met with Trump last month while attending a launch of SpaceX’s Starship rocket in Texas, but he said the encounter was incidental. Saltzman was already there for discussions with SpaceX officials, and Trump’s travel plans only became known the day before the launch.

The conversation with Trump at the Starship launch didn’t touch on any policy details, according to Saltzman. He added that the Space Force hasn’t yet had any formal discussions with the Trump transition team.

Regardless of the direction Trump takes with the Space Force, Saltzman said the service is already thinking about what to do to maintain what the Pentagon now calls “space superiority”—a twist on the term air superiority, which might have seemed equally fanciful at the dawn of military aviation more than a century ago.

“That’s the reason we’re the Space Force,” Saltzman said. “So administration to administration, that’s still going to be true. Now, it’s just about resourcing and the discussions about what we want to do and when we want to do it, and we’re ready to have those discussions.”

Americans spend more years being unhealthy than people in any other country

For the new study, researchers at the Mayo Clinic analyzed health statistics collected by the World Health Organization. The resource included data from 183 countries, allowing the researchers to compare countries’ life expectancy and healthspans, which are calculated as years of life weighted by health status.

Longer, but not better

Overall, the researchers saw lifespan-healthspan gaps grow around the world, with the average gap rising from 8.5 years in 2000 to 9.6 years in 2019. Global life expectancy rose 6.5 years, to about 73 years, while healthspans only rose 5.4 years in that time, to around 63 years.

But the US was a notable outlier: its gap grew from 10.9 years to 12.4 years, 29 percent larger than the global mean gap.

The gap was most notable for women—a trend seen around the world. Between 2000 and 2019, US women saw their life expectancy rise 1.5 years, from 79.2 to 80.7 years, but they saw no change in their healthspans. Women’s lifespan-healthspan gap rose from 12.2 years to 13.7 years. For US men, life expectancy rose 2.2 years, from 74.1 to 76.3 years, and their healthspans also increased 0.6 years. Their lifespan-healthspan gap in 2019 was 11.1 years, 2.6 years shorter than women’s.
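Since the study’s gap is simply life expectancy minus healthspan, the figures reported here can be cross-checked with a few lines of arithmetic (all numbers below come from the study as reported above):

```python
# US women, 2019: life expectancy 80.7 years, lifespan-healthspan gap 13.7
print(round(80.7 - 13.7, 1))   # 67.0 -- implied years of healthy life

# US men, 2019: life expectancy 76.3 years, gap 11.1
print(round(76.3 - 11.1, 1))   # 65.2

# Difference between the sexes' gaps
print(round(13.7 - 11.1, 1))   # 2.6 years

# US gap (12.4 years) vs. the 2019 global mean gap (9.6 years)
print(round((12.4 - 9.6) / 9.6 * 100))   # 29 -- percent larger
```

The figures are internally consistent, including the “29 percent” comparison to the global mean.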

The conditions most responsible for US disease burden included mental and substance use disorders, plus musculoskeletal diseases. For women, the biggest contributors were musculoskeletal, genitourinary, and neurological diseases.

While the US presented the most extreme example, the researchers note that the global trends seem to present a “disease paradox whereby reduced acute mortality exposes survivors to an increased burden of chronic disease.”

Generating power with a thin, flexible thermoelectric film

The No. 1 nuisance with smartphones and smartwatches is that we need to charge them every day. As warm-blooded creatures, however, we generate heat all the time, and that heat can be converted into electricity for some of the electronic gadgetry we carry.

Flexible thermoelectric devices, or F-TEDs, can convert thermal energy into electric power. The problem is that F-TEDs weren’t actually flexible enough to comfortably wear or efficient enough to power even a smartwatch. They were also very expensive to make.

But now, a team of Australian researchers thinks it has finally achieved a breakthrough that might get F-TEDs off the ground.

“The power generated by the flexible thermoelectric film we have created would not be enough to charge a smartphone but should be enough to keep a smartwatch going,” said Zhi-Gang Chen, a professor at Queensland University of Technology in Brisbane, Australia. Does that mean we have reached a point where it would be possible to make a thermoelectric Apple Watch band that could keep the watch charged all the time? “It would take some industrial engineering and optimization, but we can definitely achieve a smartwatch band like that,” Chen said.

Manufacturing heaven

Thermoelectric generators producing enough power to run something like an Apple Watch have, so far, been made with rigid bulk materials. The obvious problem is that nobody wants to wear a metal slab on their wrist or run a power cable to their watch. Flexible thermoelectric devices, on the other hand, were perfectly wearable but offered efficiencies that made them suitable for low-power health-monitoring electronics rather than more power-hungry hardware like smartwatches.

Back in 2021, generating 35 microwatts per square centimeter in a wristband worn during a typical walk outside was impressive enough to land your research paper in Nature. Today, Chen and his colleagues have made a flexible thermoelectric device that performs over 34 times better at room temperature. “To the best of our knowledge, we hold a current record in this field,” Chen said.
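As a rough feasibility sketch, scaling the 2021 figure by the reported improvement shows why a watch band becomes plausible. The band area and average smartwatch power draw below are illustrative assumptions, not numbers from the study:

```python
baseline_uW_per_cm2 = 35        # the 2021 wristband figure cited above
improvement = 34                # reported room-temperature improvement factor
new_uW_per_cm2 = baseline_uW_per_cm2 * improvement
print(new_uW_per_cm2)           # 1190, i.e. about 1.2 mW per square centimeter

band_area_cm2 = 40              # assumed usable watch-band area (illustrative)
harvested_mW = new_uW_per_cm2 * band_area_cm2 / 1000
print(round(harvested_mW, 1))   # 47.6 mW under these assumptions
```

A few tens of milliwatts is in the ballpark of a smartwatch’s average draw, which is consistent with Chen’s claim that a charging band is achievable with some engineering and optimization.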

Studies pin down exactly when humans and Neanderthals swapped DNA


We may owe our tiny sliver of Neanderthal DNA to just a couple of hundred Neanderthals.

The artist’s illustration shows what the six people buried at the Ranis site, who lived between 49,500 and 41,000 years ago, may have looked like. Two of these people are mother and daughter, and the mother is a distant cousin (or perhaps a great-great-grandparent or great-great-grandchild) to a woman whose skull was found 130 kilometers away in what’s now Czechia. Credit: Sümer et al. 2024

Two recent studies suggest that the gene flow (as the young people call it these days) between Neanderthals and our species happened during a short period sometime between 50,000 and 43,500 years ago. The studies, which share several co-authors, suggest that our torrid history with Neanderthals may have been shorter than we thought.

Pinpointing exactly when Neanderthals met H. sapiens  

Max Planck Institute of Evolutionary Anthropology scientist Leonardo Iasi and his colleagues examined the genomes of 59 people who lived in Europe between 45,000 and 2,200 years ago, plus those of 275 modern people whose ancestors hailed from all over the world. The researchers cataloged the segments of Neanderthal DNA in each person’s genome, then compared them to see where those segments appeared and how that changed over time and distance. This revealed how Neanderthal ancestry got passed around as people spread around the world and provided an estimate of when it all started.

“We tried to compare where in the genomes these [Neanderthal segments] occur and if the positions are shared among individuals or if there are many unique segments that you find [in people from different places],” said University of California Berkeley geneticist Priya Moorjani in a recent press conference. “We find the majority of the segments are shared, and that would be consistent with the fact that there was a single gene flow event.”

That event wasn’t quite a one-night stand; in this case, a “gene flow event” is a period of centuries or millennia when Neanderthals and Homo sapiens must have been in close contact (obviously very close, in some cases). Iasi and his colleagues’ results suggest that happened between 50,500 and 43,000 years ago. But it’s quite different from our history with another closely related hominin species, the now-extinct Denisovans, with whom different Homo sapiens groups met and mingled at least twice on our way to taking over the world.

In a second study, Arev Sümer (also of the Max Planck Institute) and her colleagues found something very similar in the genomes of people who lived 49,500 to 41,000 years ago in what’s now the area around Ranis, Germany. The Ranis population, based on how their genomes compare to other ancient and modern people, seem to have been part of one of the first groups to split off from the wave of humans who migrated out of Africa, through the Levant, and into Eurasia sometime around 50,000 years ago. They carried with them traces of what their ancestors had gotten up to during that journey: about 2.9 percent of their genomes were made up of segments of Neanderthal ancestry.

Based on how long the Ranis people’s segments of Neanderthal DNA were (longer chunks of Neanderthal ancestry tend to point to more recent mixing), the interspecies mingling happened about 80 generations, or about 2,300 years, before the Ranis people lived and died. That’s about 49,000 to 45,000 years ago. The dates from both studies line up well with each other and with archaeological evidence that points to when Neanderthal and Homo sapiens cultures overlapped in parts of Europe and Asia.
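The date arithmetic here is straightforward; a quick sketch (the two Ranis dates below are illustrative points within the reported range, not values from the study):

```python
generations = 80
years_before_ranis = 2_300
print(years_before_ranis / generations)   # 28.75 -- implied generation time in years

# Pushing dates within the Ranis range (49,500-41,000 years ago) back by
# ~2,300 years lands in the study's 49,000-45,000-year window.
for ranis_date in (46_700, 42_700):       # illustrative dates
    print(ranis_date + years_before_ranis)   # 49000, then 45000
```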

What’s still not clear is whether that period of contact lasted the full 5,000 to 7,000 years, or if, as Johannes Krause (also of the Max Planck Institute) suggests, it was only a few centuries—1,500 years at the most—that fell somewhere within that range of dates.

Artist’s depiction of a Neanderthal.

Natural selection worked fast on our borrowed Neanderthal DNA

Once those first Homo sapiens in Eurasia had acquired their souvenir Neanderthal genes (forget stealing a partner’s hoodie; just take some useful segments of their genome), natural selection got to work on them very quickly, discarding some and passing along others, so that by about 100 generations after the “event,” the pattern of Neanderthal DNA segments in people’s genomes looked a lot like it does today.

Iasi and his colleagues looked through their catalog of genomes for sections that contained more (or less) Neanderthal ancestry than you’d expect to find by random chance—a pattern that suggests that natural selection has been at work on those segments. Some of the segments that tended to include more Neanderthal gene variants included areas related to skin pigmentation, the immune response, and metabolism. And that makes perfect sense, according to Iasi.

“Neanderthals had lived in Europe, or outside of Africa, for thousands of years already, so they were probably adapted to their environment, climate, and pathogens,” said Iasi during the press conference. Homo sapiens were facing selective pressure to adapt to the same challenges, so genes that gave them an advantage would have been more likely to get passed along, while unhelpful ones would have been quick to get weeded out.

The most interesting questions remain unanswered

The Neanderthal DNA that many people carry today, the researchers argue, is a legacy from just 100 or 200 Neanderthals.

“The effective population size of modern humans outside Africa was about 5,000,” said Krause in the press conference. “And we have a ratio of about 50 to 1 in terms of admixture [meaning that Neanderthal segments account for about 2 percent of modern genomes in people who aren’t of African ancestry], so we have to say it was about 100 to maybe 200 Neanderthals roughly that mixed into the population.” Assuming Krause is right about that and about how long the two species stayed in contact, a Homo sapiens/Neanderthal pairing would have happened every few years.
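Krause’s estimate follows from simple division; a sketch using only the figures he gives:

```python
effective_population = 5_000   # modern humans outside Africa, per Krause
admixture_ratio = 50           # the "about 50 to 1" ratio quoted above
print(effective_population // admixture_ratio)   # 100 -- roughly that many Neanderthals

# Sanity check: a 1-in-50 contribution matches the ~2 percent Neanderthal
# ancestry seen in non-African genomes today.
print(round(100 / 5_000 * 100))   # 2
```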

So we know that Neanderthals and members of our species lived in close proximity and occasionally produced children for at least several centuries, but no artifacts, bones, or ancient DNA have yet revealed much of what that time, or that relationship, was actually like for either group of people.

The snippets of Neanderthal ancestry left in many modern genomes, and those of people who lived tens of thousands of years ago, don’t offer any hints about whether that handful of Neanderthal ancestors were mostly male or mostly female, which is something that could shed light on the cultural rules around such pairings. And nothing archaeologists have unearthed so far can tell us whether those pairings were consensual, whether they were long-term relationships or hasty flings, or whether they involved social relationships recognized by one (or both) groups. We may never have answers to those questions.

And where did it all happen? Archaeologists haven’t yet found a cave wall inscribed with “Og heart Grag,” but based on the timing, Neanderthals and Homo sapiens probably met and lived alongside each other for at least a few centuries, somewhere in “the Near East,” which includes parts of North Africa, the Levant, what’s now Turkey, and what was once Mesopotamia. That’s one of the key routes that people would have followed as they migrated from Africa into Europe and Asia, and the timing lines up with when we know that both Homo sapiens and Neanderthals were in the area.

“This [same] genetic admixture also appears in East Asia and Australia and the Americas and Europe,” said Krause. “If it would have happened in Europe or somewhere else, then the distribution would probably look different than what we see.”

Science, 2024. DOI: 10.1126/science.adq3010;

Nature, 2024. DOI: 10.1038/s41586-024-08420-x;

(About DOIs).

Kiona is a freelance science journalist and resident archaeology nerd at Ars Technica.

NASA believes it understands why Ingenuity crashed on Mars

Eleven months after the Ingenuity helicopter made its final flight on Mars, engineers and scientists at NASA and a private company that helped build the flying vehicle said they have identified what probably caused it to crash on the surface of Mars.

In short, the helicopter’s on-board navigation sensors were unable to discern enough features in the relatively smooth surface of Mars to determine its position, so when it touched down, it did so moving horizontally. This caused the vehicle to tumble, snapping off all four of the helicopter’s blades.

Delving into the root cause

It is not easy to conduct a forensic analysis like this on Mars, which is typically about 100 million miles from Earth. Ingenuity carried no black box on board, so investigators have had to piece together their findings from limited data and imagery.

“While multiple scenarios are viable with the available data, we have one we believe is most likely: Lack of surface texture gave the navigation system too little information to work with,” said Ingenuity’s first pilot, Håvard Grip of NASA’s Jet Propulsion Laboratory, in a news release.

A team from NASA and AeroVironment, a company that specializes in unmanned aerial vehicles, started by looking at the terrain Ingenuity was flying over during its 72nd flight, on January 18 of this year. The helicopter’s navigation system tracked visual features on the surface using a downward-looking camera. During its initial flights, Ingenuity was able to discern pebbles and other features to determine its position. But nearly three years later, Ingenuity was flying in a region of Jezero Crater filled with steep, relatively featureless sand ripples.
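The failure mode is easy to demonstrate in miniature. The toy sketch below (not Ingenuity’s actual navigation code) aligns two one-dimensional “terrain” scans by minimizing squared differences, the same basic idea as tracking features between camera frames. Textured terrain yields an unambiguous match; flat terrain matches every shift equally well, so the estimate is arbitrary:

```python
import random

def estimate_shift(ref, cur, max_shift=5):
    # Pick the shift that best aligns the current scan with the reference,
    # mimicking how a visual tracker aligns consecutive frames.
    best_shift, best_err = 0, float("inf")
    n = len(ref)
    for s in range(-max_shift, max_shift + 1):
        err = sum((ref[(i + s) % n] - cur[i]) ** 2 for i in range(n))
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

random.seed(0)
textured = [random.random() for _ in range(50)]  # pebble-rich terrain
flat = [0.5] * 50                                # featureless sand

true_shift = 3

def shifted(scan):
    # Rotate the scan by true_shift samples to simulate the vehicle moving.
    return scan[true_shift:] + scan[:true_shift]

print(estimate_shift(textured, shifted(textured)))  # 3: motion recovered
print(estimate_shift(flat, shifted(flat)))          # -5: every shift fits equally, so the result is arbitrary
```

With no features to anchor the match, the estimator silently returns a wrong answer rather than an error, which is consistent with Ingenuity believing it was stationary while drifting horizontally.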

New congressional report: “COVID-19 most likely emerged from a laboratory”


A textbook example of shifting the standards of evidence to suit its authors’ needs.

Did masks work to slow the spread of COVID-19? It all depends on what you accept as “evidence.” Credit: Grace Cary

Recently, Congress’ Select Subcommittee on the Coronavirus Pandemic released its final report. The basic gist is about what you’d expect from a Republican-run committee, in that it trashes a lot of Biden-era policies and state-level responses while praising a number of Trump’s decisions. But what’s perhaps most striking is how it tackles a variety of scientific topics, including many where there’s a large, complicated body of evidence.

Notably, this includes conclusions about the origin of the pandemic, which the report describes as “most likely” emerging from a lab rather than being the product of the zoonotic transfer between an animal species and humans. The latter explanation is favored by many scientists.

The conclusions themselves aren’t especially interesting; they’re expected from a report with partisan aims. But the method used to reach those conclusions is often striking: The Republican majority engages in a process of systematically changing the standard of evidence needed for it to reach a conclusion. For a conclusion the report’s authors favor, they’ll happily accept evidence from computer models or arguments from an editorial in the popular press; for conclusions they disfavor, they demand double-blind controlled clinical trials.

This approach, which I’ll term “shifting the evidentiary baseline,” shows up in many arguments regarding scientific evidence. But it has rarely been employed quite this pervasively. So let’s take a look at it in some detail and examine a few of the other approaches the report uses to muddy the waters regarding science. We’re likely to see many of them put to use in the near future.

What counts as evidence?

If you’ve been following the politics of the pandemic response, you can pretty much predict the sorts of conclusions the committee’s majority wanted to reach: Masks were useless, the vaccines weren’t properly tested for safety, restrictions meant to limit the spread of SARS-CoV-2 were ill-informed, and so on. At the same time, some efforts pursued during the Trump administration, such as the Operation Warp Speed development of vaccines or the travel restrictions he put in place, are singled out for praise.

Reaching those conclusions, however, can be a bit of a challenge for two reasons. One, which we won’t really go into here, is that some policies that are now disfavored were put in place while Republicans were in charge of the national pandemic response. This leads to a number of awkward juxtapositions in the report: Operation Warp Speed is praised, while the vaccines it produced can’t really be trusted; lockdowns promoted by Trump adviser Deborah Birx were terrible, but Birx’s boss at the time goes unmentioned.

That’s all a bit awkward, but it has little to do with evaluating scientific evidence. Here, the report authors’ desire to reach specific conclusions runs into a minefield of a complicated evidentiary record. For example, the authors want to praise the international travel restrictions that Trump put in place early in the pandemic. But we know almost nothing about their impact because most countries put restrictions in place after the virus was already present, and any effect they had was lost in the pandemic’s rapid spread.

At the same time, we have a lot of evidence that the use of well-fitted, high-quality masks can be effective at limiting the spread of SARS-CoV-2. Unfortunately, that’s the opposite of the conclusion favored by Republican politicians.

So how did they navigate this? By shifting the standard of evidence required between topics. For example, in concluding that “President Trump’s rapidly implemented travel restrictions saved lives,” the report cites a single study as evidence. But that study is primarily based on computer models of the spread of six diseases—none of them COVID-19. As science goes, it’s not nothing, but we’d like to see a lot more before reaching any conclusions.

In contrast, when it comes to mask use, where there’s extensive evidence that they can be effective, the report concludes they’re all worthless: “The US Centers for Disease Control and Prevention relied on flawed studies to support the issuance of mask mandates.” The supposed flaw is that these studies weren’t randomized controlled trials—a standard far more strict than the same report required for travel restrictions. “The CDC provided a list of approximately 15 studies that demonstrated wearing masks reduced new infections,” the report acknowledges. “Yet all 15 of the provided studies are observational studies that were conducted after COVID-19 began and, importantly, none of them were [randomized controlled trials].”

Similarly, in concluding that “the six-foot social distancing requirement was not supported by science,” the report quotes Anthony Fauci as saying, “What I meant by ‘no science behind it’ is that there wasn’t a controlled trial that said, ‘compare six foot with three feet with 10 feet.’ So there wasn’t that scientific evaluation of it.”

Perhaps the most egregious example of shifting the standards of evidence comes when the report discusses the off-label use of drugs such as chloroquine and ivermectin. These were popular among those skeptical of restrictions meant to limit the spread of SARS-CoV-2, but there was never any solid evidence that the drugs worked, and studies quickly made it clear that they were completely ineffective. Yet the report calls them “unjustly demonized” as part of “pervasive misinformation campaigns.” It doesn’t even bother presenting any evidence that they might be effective, just the testimony of one doctor who decided to prescribe them. In terms of scientific evidence, that is, in fact, nothing.

Leaky arguments

One of the report’s centerpieces is its conclusion that “COVID-19 most likely emerged from a laboratory.” And here again, the arguments shift rapidly between different standards of evidence.

While a lab leak cannot be ruled out given what we know, the case in favor largely involves human factors rather than scientific evidence. These include things like the presence of a virology institute in Wuhan, anecdotal reports of flu-like symptoms among its employees, and so on. In contrast, there’s extensive genetic evidence linking the origin of the pandemic to trade in wildlife at a Wuhan seafood market. That evidence, while not decisive, seems to have generated a general consensus among most scientists that a zoonotic origin is the more probable explanation for the emergence of SARS-CoV-2—as had been the case for the coronaviruses that had emerged earlier, SARS and MERS.

So how to handle the disproportionate amount of evidence in favor of a hypothesis that the committee didn’t like? By acting like it doesn’t exist. “By nearly all measures of science, if there was evidence of a natural origin, it would have already surfaced,” the report argues. Instead, it devotes page after page to suggesting that one of the key publications that laid out the evidence for a natural origin was the result of a plot among a handful of researchers who wanted to suppress the idea of a lab leak. Subsequent papers describing more extensive evidence appear to have been ignored.

Meanwhile, since there’s little scientific evidence favoring a lab leak, the committee favorably cites an op-ed published in The New York Times.

An emphasis on different levels of scientific confidence would have been nice, especially when dealing with complicated issues like the pandemic. There are a range of experimental and observational approaches to topics, and they often lead to conclusions that have different degrees of certainty. But this report uses scientific confidence as a rhetorical tool to let its authors reach their preferred conclusions. High standards of evidence are used when its authors want to denigrate a conclusion that they don’t like, while standards can be lowered to non-existence for conclusions they prefer.

Put differently, even weak scientific evidence is preferable to a New York Times op-ed, yet the report opts for the latter.

This sort of shifting of the evidentiary baseline has been a feature of some of the more convoluted arguments in favor of creationism or against the science of climate change. But it has mostly been confined to arguments that take place outside the view of the general public. Given its extensive adoption by politicians, however, we can probably expect the public to start seeing a lot more of it.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

New congressional report: “COVID-19 most likely emerged from a laboratory”


In a not-so-subtle signal to regulators, Blue Origin says New Glenn is ready

Blue Origin said Tuesday that the test payload for the first launch of its new rocket, New Glenn, is ready for liftoff. The company published an image of the “Blue Ring” pathfinder nestled up against one half of the rocket’s payload fairing.

“There is a growing demand to quickly move and position equipment and infrastructure in multiple orbits,” the company’s chief executive, Dave Limp, said on LinkedIn. “Blue Ring has advanced propulsion and communication capabilities for government and commercial customers to handle these maneuvers precisely and efficiently.”

This week’s announcement appears to serve a couple of purposes. Historically, Blue Origin has been tight-lipped about new products, but the company is opening up as it nears the debut of its flagship New Glenn rocket.

All Blue wants for Christmas is…

First of all, the relatively small payload contrasted with the size of the payload fairing highlights the greater volume the rocket offers over most conventional boosters. New Glenn’s payload fairing is 7 meters (23 feet) in diameter as opposed to the more conventional 5 meters (16.4 feet). It looks roomy inside.

Additionally, the company appears to be publicly signaling to the Federal Aviation Administration and other regulatory agencies that it believes New Glenn is ready to fly, pending approval to conduct a hot-fire test at Launch Complex-36, and then for a liftoff from Florida. This is a not-so-subtle message to regulators to please hurry up and complete the paperwork necessary for launch activities. It is not clear what is holding up the hot-fire and launch approvals in this case, but the cause is often environmental issues or certification of a flight termination system.

Blue Origin’s release on Tuesday was carefully worded. The headline said New Glenn was “on track” for a launch this year, and the release stated that the Blue Ring payload is “ready” for such a launch. As yet, there is no notional or public launch date. The hot-fire test has been delayed multiple times since the company moved the rocket to its launch pad on Nov. 23. It had been targeting November for the test and, more recently, this past weekend.

After years of delays for the rocket, originally due to debut in 2020, Blue Origin founder Jeff Bezos hired a new chief executive to run the company a little more than a year ago. Limp, an executive from Amazon, was given the mandate to change Blue Origin’s slower-moving culture to be more nimble and urgent and was told to launch New Glenn by the end of 2024.



Paleolithic deep-cave compound likely used for rituals

Archaeologists excavating a paleolithic cave site in Galilee, Israel, have found evidence that a deep-cave compound at the site may have been used for ritualistic gatherings, according to a new paper published in the Proceedings of the National Academy of Sciences (PNAS). That evidence includes the presence of a symbolically carved boulder in a prominent placement, as well as the remains of what may have been torches used to light the interior. And the acoustics would have been conducive to communal gatherings.

Dating back to the Early Upper Paleolithic period, Manot Cave was found accidentally when a bulldozer broke open its roof during construction in 2008. Archaeologists soon swooped in and recovered such artifacts as stone tools, bits of charcoal, remains of various animals, and a nearly complete human skull.

The latter proved to be especially significant, as subsequent analysis showed that the skull (dubbed Manot 1) had both Neanderthal and modern features and was estimated to be about 54,700 years old. That lent support to the hypothesis that modern humans co-existed and possibly interbred with Neanderthals during a crucial transition period in the region, further bolstered by genome sequencing.

The Manot Cave features an 80-meter-long hall connecting to two lower chambers from the north and south. The living section is near the entrance and was a hub for activities like flint-knapping, butchering animals, eating, and other aspects of daily life. But about eight stories below, there is a large cavern consisting of a high gallery and an adjoining smaller “hidden” chamber separated from the main area by a cluster of mineral deposits called speleothems.

That’s the area that is the subject of the new PNAS paper. Unlike the main living section, the authors found no evidence of daily human activities in this compound, suggesting it served another purpose—most likely ritual gatherings.



Google gets an error-corrected quantum bit to be stable for an hour


Using almost the entire chip for a logical qubit provides long-term stability.

Google’s new Willow chip is its first new generation of chips in about five years. Credit: Google

On Monday, Nature released a paper from Google’s quantum computing team that provides a key demonstration of the potential of quantum error correction. Thanks to an improved processor, Google’s team found that increasing the number of hardware qubits dedicated to an error-corrected logical qubit led to an exponential increase in performance. By the time the entire 105-qubit processor was dedicated to hosting a single error-corrected qubit, the system was stable for an average of an hour.

In fact, Google told Ars that errors on this single logical qubit were rare enough that it was difficult to study them. The work provides a significant validation that quantum error correction is likely to be capable of supporting the execution of complex algorithms that might require hours to execute.

A new fab

Google is making a number of announcements in association with the paper’s release (an earlier version of the paper has been up on the arXiv since August). One of those is that the company is committed enough to its quantum computing efforts that it has built its own fabrication facility for its superconducting processors.

“In the past, all the Sycamore devices that you’ve heard about were fabricated in a shared university clean room space next to graduate students and people doing kinds of crazy stuff,” Google’s Julian Kelly said. “And we’ve made this really significant investment in bringing this new facility online, hiring staff, filling it with tools, transferring their process over. And that enables us to have significantly more process control and dedicated tooling.”

That’s likely to be a critical step for the company, as the ability to fabricate smaller test devices can allow the exploration of lots of ideas on how to structure the hardware to limit the impact of noise. The first publicly announced product of this lab is the Willow processor, Google’s second design, which ups its qubit count to 105. Kelly said one of the changes that came with Willow actually involved making the individual pieces of the qubit larger, which makes them somewhat less susceptible to the influence of noise.

All of that led to a lower error rate, which was critical for the work done in the new paper. This was demonstrated by running Google’s favorite benchmark, one that it acknowledges is contrived in a way to make quantum computing look as good as possible. Still, people have figured out how to make algorithm improvements for classical computers that have kept them mostly competitive. But, with all the improvements, Google expects that the quantum hardware has moved firmly into the lead. “We think that the classical side will never outperform quantum in this benchmark because we’re now looking at something on our new chip that takes under five minutes [but] would take 10²⁵ years, which is way longer than the age of the Universe,” Kelly said.

Building logical qubits

The work focuses on the behavior of logical qubits, in which a collection of individual hardware qubits are grouped together in a way that enables errors to be detected and corrected. These are going to be essential for running any complex algorithms, since the hardware itself experiences errors often enough to make some inevitable during any complex calculations.

This naturally creates a key milestone. You can get better error correction by adding more hardware qubits to each logical qubit. If each of those hardware qubits produces errors at a sufficient rate, however, then you’ll experience errors faster than you can correct for them. You need to get hardware qubits of a sufficient quality before you start benefitting from larger logical qubits. Google’s earlier hardware had made it past that milestone, but only barely. Adding more hardware qubits to each logical qubit only made for a marginal improvement.

That’s no longer the case. Google’s processors have the hardware qubits laid out on a square grid, with each connected to its nearest neighbors (typically four except at the edges of the grid). And there’s a specific error correction code structure, called the surface code, that fits neatly into this grid. And you can use surface codes of different sizes by using progressively more of the grid. The size of the grid being used is measured by a term called distance, with larger distance meaning a bigger logical qubit, and thus better error correction.
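The square-grid, nearest-neighbor layout described above is easy to picture in code. Here is a minimal sketch (grid size and names are illustrative, not from Google's hardware) showing why interior qubits have four neighbors while edge and corner qubits have fewer:

```python
# Hypothetical sketch of square-grid connectivity: each qubit couples to
# its nearest neighbors -- four in the interior, fewer along the edges.
# The 7x7 grid size here is illustrative only.

def neighbors(row, col, size):
    """Return the in-bounds nearest-neighbor coordinates of a qubit
    at (row, col) on a size x size square grid."""
    candidates = [(row - 1, col), (row + 1, col),
                  (row, col - 1), (row, col + 1)]
    return [(r, c) for r, c in candidates if 0 <= r < size and 0 <= c < size]

print(len(neighbors(3, 3, 7)))  # interior qubit: 4 neighbors
print(len(neighbors(0, 3, 7)))  # edge qubit: 3 neighbors
print(len(neighbors(0, 0, 7)))  # corner qubit: 2 neighbors
```

Surface codes of increasing distance simply claim progressively larger patches of this grid.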

(In addition to a standard surface code, Google includes a few qubits that handle a phenomenon called “leakage,” where a qubit ends up in a higher-energy state, instead of the two low-energy states defined as zero and one.)

The key result is that going from a distance of three to a distance of five more than doubled the ability of the system to catch and correct errors. Going from a distance of five to a distance of seven doubled it again. This shows that the hardware qubits have reached a sufficient quality that putting more of them into a logical qubit has an exponential effect.

“As we increase the grid from three by three to five by five to seven by seven, the error rate is going down by a factor of two each time,” said Google’s Michael Newman. “And that’s that exponential error suppression that we want.”
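The halving that Newman describes amounts to a simple geometric model: each increase of the code distance by two divides the logical error rate by a suppression factor of roughly two. The sketch below uses that factor from the quote; the baseline rate is purely illustrative, not a figure from the paper.

```python
# Toy model of exponential error suppression: each distance-2 increment
# past d=3 divides the logical error rate by LAMBDA (~2, per the quoted
# results). The d=3 baseline rate is illustrative, not measured.

LAMBDA = 2.0          # suppression factor per distance step (from the quote)
BASELINE_D3 = 3e-3    # illustrative logical error rate at distance 3

def logical_error_rate(d, baseline=BASELINE_D3, lam=LAMBDA):
    steps = (d - 3) // 2          # distance-2 increments beyond d=3
    return baseline / lam ** steps

for d in (3, 5, 7):
    print(d, logical_error_rate(d))   # halves at each step
```

Under this model, every two extra units of distance buy another factor of two, which is exactly the "exponential error suppression" the team is after.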

Going big

The second thing they demonstrated is that, if you make the largest logical qubit that the hardware can support, with a distance of 15, it’s possible to hang onto the quantum information for an average of an hour. This is striking because Google’s earlier work had found that its processors experience widespread simultaneous errors that the team ascribed to cosmic ray impacts. (IBM, however, has indicated it doesn’t see anything similar, so it’s not clear whether this diagnosis is correct.) Those happened every 10 seconds or so. But this work shows that a sufficiently large error code can correct for these events, whatever their cause.

That said, these qubits don’t survive indefinitely; two types of error events still cut them short. One seems to be a localized, temporary increase in errors. The second, more difficult problem involves a widespread spike in error detection affecting an area that includes roughly 30 qubits. At this point, however, Google has only seen six of these events, so the team told Ars that it’s difficult to really characterize them. “It’s so rare it actually starts to become a bit challenging to study because you have to gain a lot of statistics to even see those events at all,” said Kelly.

Beyond the relative durability of these logical qubits, the paper notes another advantage to going with larger code distances: it enhances the impact of further hardware improvements. Google estimates that at a distance of 15, improving hardware performance by a factor of two would drop errors in the logical qubit by a factor of 250. At a distance of 27, the same hardware improvement would lead to an improvement of over 10,000 in the logical qubit’s performance.
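Those leverage figures are consistent with the textbook surface-code scaling, in which the logical error rate falls roughly as (p/p_th)^((d+1)/2) for physical error rate p below threshold p_th. Under that assumed form (standard in the error-correction literature, not quoted from the paper), halving p improves the logical qubit by a factor of 2^((d+1)/2):

```python
# Checking the quoted leverage numbers against the standard surface-code
# scaling, logical error ~ (p/p_th)**((d+1)//2). Halving the physical
# error rate p then improves the logical qubit by 2**((d+1)//2).
# The scaling form is a textbook assumption, not taken from the paper.

def logical_improvement(distance, hardware_factor=2):
    """Factor by which the logical error rate improves when physical
    errors drop by hardware_factor, at the given code distance."""
    return hardware_factor ** ((distance + 1) // 2)

print(logical_improvement(15))  # 256, in line with the 'factor of 250'
print(logical_improvement(27))  # 16384, i.e. 'over 10,000'
```

The arithmetic matches the article's numbers: 2^8 = 256 at distance 15 and 2^14 = 16,384 at distance 27, which is why bigger codes amplify the payoff of every hardware improvement.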

Note that none of this will ever get the error rate to zero. Instead, we just need to get the error rate to a level where an error is unlikely for a given calculation (more complex calculations will require a lower error rate). “It’s worth understanding that there’s always going to be some type of error floor and you just have to push it low enough to the point where it practically is irrelevant,” Kelly said. “So for example, we could get hit by an asteroid and the entire Earth could explode and that would be a correlated error that our quantum computer is not currently built to be robust to.”

Obviously, a lot of additional work will need to be done to both make logical qubits like this survive for even longer, and to ensure we have the hardware to host enough logical qubits to perform calculations. But the exponential improvements here, to Google, suggest that there’s nothing obvious standing in the way of that. “We woke up one morning and we kind of got these results and we were like, wow, this is going to work,” Newman said. “This is really it.”

Nature, 2024. DOI: 10.1038/s41586-024-08449-y  (About DOIs).




Rocket Report: NASA delays Artemis again; SpinLaunch spins a little cash


All the news that’s fit to lift

A report in which we read some tea leaves.

Look at the rocket, which has now launched 400 times. Credit: SpaceX

Welcome to Edition 7.22 of the Rocket Report! The big news is the Trump administration’s announcement that commercial astronaut Jared Isaacman would be put forward as the nominee to serve as the next NASA Administrator. Isaacman has flown to space twice, and demonstrated that he takes spaceflight seriously. More background on Isaacman, and possible changes, can be found here.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Orbex pauses launch site work in Sutherland, Scotland. Small-launch vehicle developer Orbex will halt work on its own launch site in northern Scotland and instead use a rival facility in the Shetland Islands, Space News reports. Orbex announced December 4 that it would “pause” construction of Sutherland Spaceport in Scotland and instead use the SaxaVord Spaceport on the island of Unst in the Shetlands for its Prime launch vehicle. Orbex had been linked to Spaceport Sutherland since the UK Space Agency announced in 2018 it selected the site for a vertical launch complex.

Pivoting to medium lift? … The move, Orbex said, will free up resources to allow the company to focus on launch vehicle development, including both Prime and a new medium-class vehicle called Proxima. “This decision will help us to reach first launch in 2025 and provides SaxaVord with another customer to further strengthen its commercial proposition. It’s a win-win for UK and Scottish space,” Phil Chambers, chief executive of Orbex, said. Reading the tea leaves here, one might guess that the smaller Prime rocket will never launch, and that the medium-lift design is a Hail Mary. We’ll see. (submitted by Ken the Bin)

SpinLaunch raises a little cash. Space startup SpinLaunch is fundraising again, though TechCrunch reports that it was exploring raising a significantly more ambitious sum earlier this year. The company has closed an $11.5 million round out of a planned $25 million, according to a filing with the US Securities and Exchange Commission. SpinLaunch confirmed funding to TechCrunch but did not comment on the amount raised. It last raised $71 million in a Series B funding round in 2022. SpinLaunch, as the name implies, plans to build a kinetic launch system as a low-cost, high-cadence alternative to rockets.

Putting the spin into SpinLaunch … A person familiar with the company’s plans told TechCrunch that the startup had talked to investors around nine months ago, hoping they would pile into a $350 million round at a $2 billion valuation. In response to a question about this fundraising target, SpinLaunch CEO David Wrenn said the figures were “highly inaccurate and misleading” and that he was “pleased with our recently closed financing.” Someone is spinning something, clearly. (submitted by Ken the Bin)


Vega C successfully returns to flight. After originally targeting November 29 for the return-to-flight mission of the Vega C rocket, Arianespace successfully launched the vehicle on Thursday, December 5, Space News reports. The three solid-fuel lower stages of the Vega C performed as expected, followed by three burns by the liquid-propellant AVUM+ upper stage. That upper stage deployed its payload, the Sentinel-1C satellite, about one hour and 45 minutes after liftoff. The launch was the first for the Vega C since a December 2022 launch failure on the rocket’s second flight that destroyed two Pléiades Neo imaging satellites.

Eyes in the sky … The payload, Sentinel-1C, is a radar imaging satellite built by Thales Alenia Space for the Copernicus program of Earth observation missions run by ESA and the European Commission. It replaces the Sentinel-1B spacecraft that malfunctioned in orbit nearly three years ago. It joins the existing, but aging, Sentinel-1A satellite and includes new capabilities to monitor maritime traffic with an Automatic Identification System receiver.

PLD Space secures loan for Miura 5 rocket. The Spanish launch company said this week that it had secured an 11 million euro loan ($11.6 million) from COFIDES, a state-owned development fund, to support the development of the launch site for its Miura 5 rocket in Kourou, French Guiana. The company said the funding bolsters its mission to ensure autonomous and competitive European access to space while strengthening Europe’s space infrastructure.

A public-private partnership … “This initiative exemplifies the critical role of public-private collaboration in supporting strategic and innovative projects, which rely on institutional backing as an anchor investor during the early stages of technological development,” added Spanish Secretary of State for Trade Amparo López Senovilla. The Miura 5 rocket will have an estimated payload capacity of 1 metric ton to low-Earth orbit and may make its debut in 2026. (submitted by Ken the Bin)

SpaceX value may soar higher. SpaceX is in talks to sell insider shares in a transaction valuing the rocket and satellite maker at about $350 billion, according to people familiar with the matter, Bloomberg reports. That would be a significant premium to a previously mulled valuation of $255 billion as reported by Bloomberg News and other media outlets just last month. SpaceX was last valued at about $210 billion in a tender offer earlier this year.

A big post-election bump … The current conversations are ongoing, and the details could change depending on interest from insider sellers and buyers, sources told the publication. The potential transaction would cement SpaceX’s status as the most valuable private startup in the world and rival the market capitalizations of some of the largest public companies. SpaceX has established itself as the industry’s preeminent rocket launch provider, lofting satellites, cargo, and people to space for NASA, the Pentagon, and commercial partners, and is building out a large network of Starlink satellites providing Internet service. (submitted by Ken the Bin)

China debuts a new medium-lift rocket. China’s new Long March 12 rocket made a successful inaugural flight Saturday, placing two experimental satellites into orbit and testing uprated, higher-thrust engines that will allow a larger Chinese launcher in development to send astronauts to the Moon. The Long March 12 is the newest member of China’s Long March rocket family, which has been flying since China launched its first satellite into orbit in 1970, Ars reports.

Rocket likely to be used for megaconstellation deployment … Like all of China’s other existing rockets, the Long March 12 configuration that flew Saturday is fully disposable. At the Zhuhai Airshow earlier this month, China’s largest rocket company displayed another version of the Long March 12 with a reusable first stage but with scant design details. The Long March 12 is powered by four kerosene-fueled YF-100K engines on its first stage, generating more than 1.1 million pounds, or 5,000 kilonewtons, of thrust at full throttle. These engines are upgraded, higher-thrust versions of the YF-100 engines used on several other types of Long March rockets. (submitted by EllPeaTea and Ken the Bin)

Falcon 9 rocket reaches some remarkable milestones. About 10 days ago, SpaceX launched a batch of Starlink v2-mini satellites from Kennedy Space Center in Florida on a Falcon 9 rocket, marking the 400th successful mission by the Falcon 9. Additionally, it was the Falcon program’s 375th booster recovery, according to SpaceX. Finally, with this mission, the company shattered its record for booster turnaround time, flying a booster just 13 days and 12 hours after its previous landing, down from the prior record of 21 days, Ars reports.

A rapidly reusable shuttle … All told, in November, SpaceX launched 16 Falcon 9 rockets. The previous record for monthly launches by the Falcon 9 rocket was 14. SpaceX is on pace to launch 135 or more Falcon 9 and Falcon Heavy missions this year. That is a meaningful number, because over the course of the three decades it flew into orbit, NASA’s space shuttle flew 135 missions. The space shuttle was a significantly more complex vehicle, and unlike the Falcon 9 rocket, humans flew aboard it during every mission. However, there is some historical significance in the fact that the Falcon rocket may fly as many missions in a single year as the space shuttle did during its lifetime.

Long March 3B hits a milestone. China launched a new communication engineering test satellite early Tuesday with its workhorse Long March 3B rocket, adding to a series of satellites potentially intended for undisclosed military purposes, Space News reports. The launch was, notably, the 100th flight of the Long March 3B.

First time to the century marker … The rocket has performed 96 successful launches, with two failures and two partial failures. The first launch, in February 1996 carrying Intelsat 708, infamously saw the rocket veer off course shortly after clearing the tower and impact a nearby village. Developed by the state-run China Academy of Launch Vehicle Technology, the three-stage rocket with four liquid boosters is the only Chinese launcher to reach 100 launches. (submitted by Ken the Bin)

NASA delays Artemis launches again. In a news conference Thursday, NASA officials discussed changes to the timeline for future Artemis missions due to problems with Orion’s heat shield. The agency announced it is now targeting April 2026 for Artemis II (from September 2025) and mid-2027 for Artemis III (from September 2026). NASA said it now understands the technical cause of the heat shield issues observed during the Artemis I flight in late 2022 and will fly the heat shield as-is on Artemis II, with some changes to the reentry profile.

This may not be the final plan … The timing of this news conference was interesting, as there will be a changing of administrations at NASA in the coming weeks. The Trump administration has put forward commercial astronaut Jared Isaacman to lead NASA, and as Ars reported Thursday, there are likely some significant shakeups coming in the Artemis program. One possibility is that the Space Launch System rocket could be scrapped, with commercial rockets used to fly the Artemis missions.

Next three launches

Dec. 8: Falcon 9 | Starlink 12-5 | Cape Canaveral Space Force Station, Florida | 05:10 UTC

Dec. 12: Falcon 9 | Starlink 11-2 | Vandenberg Space Force Base, California | 19:33 UTC

Dec. 12: Falcon 9 | O3b mPOWER 7 & 8 | Kennedy Space Center, Florida | 20:58 UTC


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.
