Science

Man turns irreversibly gray from an unidentified silver exposure

When an 84-year-old man in Hong Kong was admitted to a hospital for a condition related to an enlarged prostate, doctors noticed something else about him—he was oddly gray, according to a case report in the New England Journal of Medicine.

His skin, particularly his face, had an ashen appearance. His fingernails and the whites of his eyes had become silvery. When doctors took a skin biopsy, they could see tiny, dark granules sitting in the fibers of his skin, in his blood vessels, in the membranes of his sweat glands, and in his hair follicles.

A blood test made clear what the problem was: the concentration of silver in his serum was 423 nmol/L, over 40 times the reference level for a normal result, which is less than 10 nmol/L. The man was diagnosed with a rare case of generalized argyria, a buildup of silver in the body’s tissue that causes a blueish-gray discoloration—which is generally permanent.

When someone consumes silver particles, the metal moves from the gut into the bloodstream in its ionic form. It’s then deposited throughout the body in various tissues, including the skin, muscles, heart, lungs, liver, spleen, and kidneys. There’s some evidence that it accumulates in at least parts of the brain as well.

Discoloration becomes apparent in tissues exposed to sunlight—hence the patient’s notably gray face. Silver ions in the skin undergo photoreduction from ultraviolet light exposure, forming atomic silver that can be oxidized to compounds such as silver sulfide and silver selenide, creating a bluish-gray tinge. Silver can also stimulate the production of the pigment melanin, causing darkening. Once discoloration develops, it’s considered irreversible. Chelation therapy—generally used to remove metals from the body—is ineffective against argyria. That said, some case studies have suggested that laser therapy may help.

Everyone agrees: 2024 the hottest year since the thermometer was invented


An exceptionally hot outlier, 2024 means the streak of hottest years goes to 11.

With very few and very small exceptions, 2024 was unusually hot across the globe. Credit: Copernicus

Over the last 24 hours or so, the major organizations that keep track of global temperatures have released figures for 2024, and all of them agree: 2024 was the warmest year yet recorded, joining 2023 as an unusual outlier in terms of how rapidly things heated up. At least two of the organizations, the European Union’s Copernicus and Berkeley Earth, place the year at about 1.6° C above pre-industrial temperatures, marking the first time that the Paris Agreement goal of limiting warming to 1.5° has been exceeded.

NASA and the National Oceanic and Atmospheric Administration both place the mark at slightly below 1.5° C over pre-industrial temperatures (as defined by the 1850–1900 average). However, that difference largely reflects the uncertainties in measuring temperatures during that period rather than disagreement over 2024.

It’s hot everywhere

2023 had set a temperature record largely due to a switch to El Niño conditions midway through the year, which made the second half of the year exceptionally hot. It takes some time for that heat to make its way from the ocean into the atmosphere, so the streak of warm months continued into 2024, even as the Pacific switched into its cooler La Niña mode.

While El Niños are regular events, this one had an outsized impact because it was accompanied by unusually warm temperatures outside the Pacific, including record high temperatures in the Atlantic and unusual warmth in the Indian Ocean. Land temperatures reflect this widespread warmth, with elevated temperatures on all continents. Berkeley Earth estimates that 104 countries registered 2024 as the warmest on record, meaning 3.3 billion people felt the hottest average temperatures they had ever experienced.

Different organizations use slightly different methods to calculate the global temperature and have different baselines. For example, Copernicus puts 2024 at 0.72° C above a baseline that will be familiar to many people since they were alive for it: 1991 to 2020. In contrast, NASA and NOAA use a baseline that covers the entirety of the last century, which is substantially cooler overall. Relative to that baseline, 2024 is 1.29° C warmer.
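
To make the baseline bookkeeping concrete, here is a minimal sketch in Python using only the anomaly figures quoted in this piece (the 1.60° and 1.46° pre-industrial values appear a few paragraphs below); converting between conventions just means adding the implied offset between reference periods.

```python
# Converting 2024 anomalies between baselines, using only values quoted in
# this article (all figures in degrees Celsius and approximate).

copernicus_vs_1991_2020 = 0.72   # Copernicus, relative to its 1991-2020 baseline
noaa_vs_20th_century = 1.29      # NASA/NOAA, relative to the 20th-century average

# Each group also reports 2024 relative to 1850-1900 ("pre-industrial"):
copernicus_vs_preindustrial = 1.60
noaa_vs_preindustrial = 1.46

# The implied warmth of each baseline period relative to 1850-1900 is just the
# difference between the two numbers each group reports.
print(copernicus_vs_preindustrial - copernicus_vs_1991_2020)  # ~0.88
print(noaa_vs_preindustrial - noaa_vs_20th_century)           # ~0.17

# Add those offsets back and the very different-looking anomalies (0.72 and
# 1.29) end up about 0.14 degrees apart on a common pre-industrial scale.
```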

Lining up the baselines shows that these different services largely agree with each other. Most of the remaining differences are due to uncertainties in the measurements, with the rest accounted for by slightly different methods of handling things like areas with sparse data.

Describing the details of 2024, however, doesn’t really capture just how exceptional the warmth of the last two years has been. Starting around 1970, there’s been a roughly linear increase in temperature driven by greenhouse gas emissions, even though many individual years were warmer or cooler than the trend. The last two years have been extreme outliers from that trend. The last time a single year was a comparable outlier was back in the 1940s; the last time there were two consecutive years like this was in 1878.

Relative to the five-year temperature average, 2024 is an exceptionally large excursion. Credit: Copernicus

“These were during the ‘Great Drought’ of 1875 to 1878, when it is estimated that around 50 million people died in India, China, and parts of Africa and South America,” the EU’s Copernicus service notes. Despite many climate-driven disasters, the world at least avoided a similar experience in 2023-24.

Berkeley Earth provides a slightly different way of looking at it, comparing each year since 1970 with the amount of warming we’d expect from the cumulative greenhouse gas emissions.

Relative to the expected warming from greenhouse gasses, 2024 represents a large departure. Credit: Berkeley Earth

This comparison shows that, given year-to-year variations in the climate system, warming has closely tracked expectations over five decades. 2023 and 2024 mark a dramatic departure from that track, although it comes at the end of a decade in which most years were above the trend line. Berkeley Earth estimates that there’s just a 1-in-100 chance of that occurring due to the climate’s internal variability.

Is this a new trend?

The big question is whether 2024 is an exception, with temperatures set to fall back to the trend that’s dominated since the 1970s, or whether it marks a departure from the climate’s recent behavior. And that’s something we don’t have a great answer to.

If you take away the influence of recent greenhouse gas emissions and El Niño, you can focus on other potential factors. These include a slight increase expected due to the solar cycle approaching its maximum activity. But, beyond that, most of the other factors are uncertain. The Hunga Tonga eruption put lots of water vapor into the stratosphere, but the estimated effects range from slight warming to cooling equivalent to a strong La Niña. Reductions in pollution from shipping are expected to contribute to warming, but the amount is debated.

There is evidence that a decrease in cloud cover has allowed more sunlight to be absorbed by the Earth, contributing to the planet’s warming. But clouds are typically a response to other factors that influence the climate, such as the amount of water vapor in the atmosphere and the aerosols present to seed water droplets.

It’s possible that a factor that we missed is driving the changes in cloud cover or that 2024 just saw the chaotic nature of the atmosphere result in less cloud cover. Alternatively, we may have crossed a warming tipping point, where the warmth of the atmosphere makes cloud formation less likely. Knowing that will be critical going forward, but we simply don’t have a good answer right now.

Climate goals

There’s an equally unsatisfying answer to what this means for our chances of hitting climate goals. The stretch goal of the Paris Agreement is to limit warming to 1.5° C because that leads to significantly less severe impacts than the primary 2.0° target. That’s relative to pre-industrial temperatures, which are defined using the 1850–1900 period, the earliest period for which temperature records allow a reconstruction of the global temperature.

Unfortunately, all the organizations that handle global temperatures have some differences in the analysis methods and data used. Given recent data, these differences result in very small divergences in the estimated global temperatures. But with the far larger uncertainties in the 1850–1900 data, they tend to diverge more dramatically. As a result, each organization has a different baseline, and different anomalies relative to that.

Consequently, Berkeley Earth registers 2024 as 1.62° C above pre-industrial temperatures, and Copernicus as 1.60° C. In contrast, NASA and NOAA place it just under 1.5° C (1.47° and 1.46°, respectively). NASA’s Gavin Schmidt said this is “almost entirely due to the [sea surface temperature] data set being used” in constructing the temperature record.

There is, however, consensus that this isn’t especially meaningful on its own. There’s a good chance that temperatures will drop below the 1.5° mark on all the data sets within the next few years. We’ll want to see temperatures consistently exceed that mark for over a decade before we consider that we’ve passed the milestone.

That said, given that carbon emissions have barely budged in recent years, there’s little doubt that we will eventually end up clearly passing that limit (Berkeley Earth is essentially treating it as exceeded already). But there’s widespread agreement that each increment between 1.5° and 2.0° will likely increase the consequences of climate change, and any continuing emissions will make it harder to bring things back under that target in the future through methods like carbon capture and storage.

So, while we may have committed ourselves to exceed one of our major climate targets, that shouldn’t be viewed as a reason to stop trying to limit greenhouse gas emissions.

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Coal likely to go away even without EPA’s power plant regulations


Set to be killed by Trump, the rules mostly lock in existing trends.

In April last year, the Environmental Protection Agency released its latest attempt to regulate the carbon emissions of power plants under the Clean Air Act. It’s something the EPA has been required to do since a 2007 Supreme Court decision that settled a case that started during the Clinton administration. The latest effort seemed like the most aggressive yet, forcing coal plants to retire or install carbon capture equipment and making it difficult for some natural gas plants to operate without capturing carbon or burning green hydrogen.

Yet, according to a new analysis published in Thursday’s edition of Science, the rules likely wouldn’t have a dramatic effect on the US’s future emissions even if they were to survive a court challenge. Instead, the analysis suggests the rules serve more as a backstop, preventing other policy changes and increased demand from countering the progress that would otherwise be made. This is just as well, given that the rules are inevitably going to be eliminated by the incoming Trump administration.

A long time coming

The net result of a number of Supreme Court decisions is that greenhouse gasses are pollutants under the Clean Air Act, and the EPA needed to determine whether they posed a threat to people. George W. Bush’s EPA dutifully performed that analysis but sat on the results until its second term ended, leaving it to the Obama administration to reach the same conclusion. The EPA went on to formulate rules for limiting carbon emissions on a state-by-state basis, but these were rapidly made irrelevant because renewable power and natural gas began displacing coal even without the EPA’s encouragement.

Nevertheless, the Trump administration replaced those rules with ones designed to accomplish even less, which were thrown out by a court just before Biden’s inauguration. Meanwhile, the Supreme Court stepped in to rule on the now-even-more-irrelevant Obama rules, determining that the EPA could only regulate carbon emissions at the level of individual power plants rather than at the level of the grid.

All of that set the stage for the latest EPA rules, which were formulated by the Biden administration’s EPA. Forced by the court to regulate individual power plants, the EPA allowed coal plants that were set to retire within the decade to continue to operate as they have. Anything that would remain operational longer would need to either switch fuels or install carbon capture equipment. Similarly, natural gas plants were regulated based on how frequently they were operational; those that ran less than 40 percent of the time could avoid significant new requirements. More than that, and they’d have to capture carbon or burn a fuel mixture that is primarily hydrogen produced without carbon emissions.

While the Biden EPA’s rules are currently making their way through the courts, they’re sure to be pulled in short order by the incoming Trump administration, making the court case moot. Nevertheless, people had started to analyze their potential impact before it was clear there would be an incoming Trump administration. And the analysis is valuable in that it highlights what will be lost when the rules are eliminated.

By some measures, the answer is not all that much. But the answer is also very dependent upon whether the Trump administration engages in an all-out assault on renewable energy.

Regulatory impact

The work relies on the fact that various researchers and organizations have developed models to explore how the US electric grid can economically meet demand under different conditions, including different regulatory environments. The researchers obtained nine of them and ran them with and without the EPA’s proposed rules to determine their impact.

On its own, eliminating the rules has a relatively minor impact. Without the rules, the US grid’s 2040 carbon dioxide emissions would end up between 60 and 85 percent lower than they were in 2005. With the rules, the range shifts to between 75 and 85 percent—in essence, the rules reduce the uncertainty about the outcomes that involve the least change.

That’s primarily because of how they’re structured. Mostly, they target coal plants, as these account for nearly half of the US grid’s emissions despite supplying only about 15 percent of its power. They’ve already been closing at a rapid clip, and would likely continue to do so even without the EPA’s encouragement.

Natural gas plants, the other major source of carbon emissions, would primarily respond to the new rules by operating less than 40 percent of the time, thus avoiding stringent regulation while still allowing them to handle periods where renewable power underproduces. And we now have a sufficiently large fleet of natural gas plants that demand can be met without a major increase in construction, even with most plants operating at just 40 percent of their rated capacity. The continued growth of renewables and storage also contributes to making this possible.

One irony of the response seen in the models is that it suggests that two key pieces of the Inflation Reduction Act (IRA) are largely irrelevant. The IRA provides benefits for the deployment of carbon capture and the production of green hydrogen (meaning hydrogen produced without carbon emissions). But it’s likely that, even with these credits, the economics wouldn’t favor the use of these technologies when alternatives like renewables plus storage are available. The IRA also provides tax credits for deploying renewables and storage, pushing the economics even further in their favor.

Since not a lot changes, the rules have little effect on the cost of electricity. Their presence boosts costs by an estimated 0.5 to 3.7 percent in 2050 compared to a scenario where the rules aren’t implemented, and the wholesale price of electricity shifts by only about two percent.

A backstop

That said, the team behind the analysis argues that, depending on other factors, the rules could play a significant role. Trump has suggested he will target all of Biden’s energy policies, and that would include the IRA itself. Its repeal could significantly slow the growth of renewable energy in the US, as could continued problems with expanding the grid to incorporate new renewable capacity.

In addition, the US saw demand for electricity rise at a faster pace in 2023 than in the decade leading up to it. While it’s still unclear whether that’s a result of new demand or simply weather conditions boosting the use of electricity for heating and cooling, several factors could easily boost the use of electricity in coming years: the electrification of transport, rising data center use, and the electrification of appliances and home heating.

Should these raise demand sufficiently, then it could make continued coal use economical in the absence of the EPA rules. “The rules … can be viewed as backstops against higher emissions outcomes under futures with improved coal plant economics,” the paper suggests, “which could occur with higher demand, slower renewables deployment from interconnection and permitting delays, or higher natural gas prices.”

And it may be the only backstop we have. The report also notes that a number of states have already set aggressive emissions reduction targets, including some for net zero by 2050. But these don’t serve as a substitute for federal climate policy, given that the states that are taking these steps use very little coal in the first place.

Science, 2025. DOI: 10.1126/science.adt5665  (About DOIs).

Here’s what we know, and what we don’t, about the awful Palisades wildfire

Let’s start with the meteorology. The Palisades wildfire and other nearby conflagrations were well-predicted days in advance. After a typically arid summer and fall, the Los Angeles area has also had a dry winter so far. December, January, February, and March are usually the wettest months in the region by far. More than 80 percent of Los Angeles’ rain comes during these colder months. But this year, during December, the region received, on average, less than one-tenth of an inch of rainfall. Normal totals are on the order of 2.5 inches in December.

So, the foliage in the area was already very dry, effectively extending the region’s wildfire season. Then, strong Santa Ana winds were predicted for this week due, in part, to the extreme cold observed in the eastern United States and high pressure over the Great Basin region of the country. “Red flag” warnings were issued locally, indicating that winds could combine with dry fuels to spread wildfires rapidly. The direct cause of the Palisades fire is not yet known.

Wildfires during the winter months in California are not a normal occurrence, but they are not unprecedented either. Scientists, however, generally agree that a warmer planet is extending wildfire seasons such as those observed in California.

“Climate change, including increased heat, extended drought, and a thirsty atmosphere, has been a key driver in increasing the risk and extent of wildfires in the western United States during the last two decades,” the US National Oceanic and Atmospheric Administration concludes. “Wildfires require the alignment of a number of factors, including temperature, humidity, and the lack of moisture in fuels, such as trees, shrubs, grasses, and forest debris. All these factors have strong direct or indirect ties to climate variability and climate change.”

A taller, heavier, smarter version of SpaceX’s Starship is almost ready to fly


Starship will test its payload deployment mechanism on its seventh test flight.

SpaceX’s first second-generation Starship, known as Version 2 or Block 2, could launch as soon as January 13. Credit: SpaceX

An upsized version of SpaceX’s Starship mega-rocket rolled to the launch pad early Thursday in preparation for liftoff on a test flight next week.

The two-mile transfer moved the bullet-shaped spaceship one step closer to launch Monday from SpaceX’s Starbase test site in South Texas. The launch window opens at 5 pm EST (4 pm CST; 2200 UTC). This will be the seventh full-scale test flight of SpaceX’s Super Heavy booster and Starship spacecraft and the first of 2025.

In the coming days, SpaceX technicians will lift the ship on top of the Super Heavy booster already emplaced on the launch mount. Then, teams will complete the final tests and preparations for the countdown on Monday.

“The upcoming flight test will launch a new generation ship with significant upgrades, attempt Starship’s first payload deployment test, fly multiple reentry experiments geared towards ship catch and reuse, and launch and return the Super Heavy booster,” SpaceX officials wrote in a mission overview posted on the company’s website.

The mission Monday will repeat many of the maneuvers SpaceX demonstrated on the last two Starship test flights. The company will again attempt to return the Super Heavy booster to the launch site and catch it with two mechanical arms, or “chopsticks,” on the launch tower approximately seven minutes after liftoff.

SpaceX accomplished this feat on the fifth Starship test flight in October but aborted a catch attempt on a November flight because of damaged sensors on the tower chopsticks. The booster, which remained healthy, diverted to a controlled splashdown offshore in the Gulf of Mexico.

SpaceX’s next Starship prototype, Ship 33, emerges from its assembly building at Starbase, Texas, early Thursday morning. Credit: SpaceX/Elon Musk via X

For the next flight, SpaceX added protections to the sensors on the tower and will test radar instruments on the chopsticks to provide more accurate ranging measurements for returning vehicles. These modifications should improve the odds of a successful catch of the Super Heavy booster and of Starship on future missions.

In another first, one of the 33 Raptor engines that will fly on this Super Heavy booster—designated Booster 14 in SpaceX’s fleet—was recovered from the booster that launched and returned to Starbase in October. For SpaceX, this is a step toward eventually flying the entire rocket repeatedly. The Super Heavy booster and Starship spacecraft are designed for full reusability.

After separation of the booster stage, the Starship upper stage will ignite six engines to accelerate to nearly orbital velocity, attaining enough energy to fly halfway around the world before gravity pulls it back into the atmosphere. Like the past three test flights, SpaceX will guide Starship toward a controlled reentry and splashdown in the Indian Ocean northwest of Australia around one hour after liftoff.

New ship, new goals

The most significant changes engineers will test next week are on the ship, or upper stage, of SpaceX’s enormous rocket. The most obvious difference on Starship Version 2, or Block 2, is with the vehicle’s forward flaps. Engineers redesigned the flaps, reducing their size and repositioning them closer to the tip of the ship’s nose to better protect them from the scorching heat of reentry. Cameras onboard Starship showed heat damage to the flaps during reentry on test flights last year.

SpaceX is also developing an upgraded Super Heavy booster that will produce more thrust and stand slightly taller than the current model, but for the upcoming test flight, SpaceX will still use the first-generation booster design.

Starship Block 2 has smaller flaps than previous ships. The flaps are located in a more leeward position to protect them from the heat of reentry. Credit: SpaceX

For next week’s flight, Super Heavy and Starship combined will hold more than 10.5 million pounds of fuel and oxidizer. The ship’s propellant tanks have 25 percent more volume than previous iterations of the vehicle, and the payload compartment, which contains 10 mock-ups of Starlink Internet satellites on this launch, is somewhat smaller. Put together, the changes add nearly 6 feet (1.8 meters) to the rocket’s height, bringing the full stack to approximately 404 feet (123.1 meters).

This means SpaceX will break its own record for launching the largest and most powerful rocket ever built. And the company will do it again with the even larger Starship Version 3, which SpaceX says will have nine upper stage engines, instead of six, and will deliver up to 440,000 pounds (200 metric tons) of cargo to low-Earth orbit.

Other changes debuting with Starship Version 2 next week include:

• Vacuum jacketing of propellant feedlines

• A new fuel feedline system for the ship’s Raptor vacuum engines

• An improved propulsion avionics module controlling vehicle valves and reading sensors

• Redesigned inertial navigation and star tracking sensors

• Integrated smart batteries and power units to distribute 2.7 megawatts of power across the ship

• An increase to more than 30 cameras onboard the vehicle

Laying the foundation

The enhanced avionics system will support future missions to prove SpaceX’s ability to refuel Starships in orbit and return the ship to the launch site. For example, SpaceX will fly a more powerful flight computer and new antennas that integrate connectivity with the Starlink Internet constellation, GPS navigation satellites, and backup functions for traditional radio communication links. With Starlink, SpaceX said Starship can stream more than 120Mbps of real-time high-definition video and telemetry in every phase of flight.

These changes “all add additional vehicle performance and the ability to fly longer missions,” SpaceX said. “The ship’s heat shield will also use the latest generation tiles and includes a backup layer to protect from missing or damaged tiles.”

Somewhere over the Atlantic Ocean, a little more than 17 minutes into the flight, Starship will deploy 10 dummy payloads similar in size and weight to next-generation Starlink satellites. The mock-ups will soar around the world on a suborbital trajectory, just like Starship, and reenter over the unpopulated Indian Ocean. Future Starship flights will launch real next-gen Starlink satellites to add capacity to the Starlink broadband network; those satellites are too big and too heavy to launch on SpaceX’s smaller Falcon 9 rocket.

SpaceX will again reignite one of the ship’s Raptor engines in the vacuum of space, repeating a successful test achieved on Flight 6 in November. The engine restart capability is important for several reasons. It gives the ship the ability to maneuver itself out of low-Earth orbit for reentry (not a concern for Starship’s suborbital tests), and will allow the vehicle to propel itself to higher orbits, the Moon, or Mars once SpaceX masters the technology for orbital refueling.

Artist’s illustration of Starship on the surface of the Moon. Credit: SpaceX

NASA has contracts with SpaceX to build a derivative of Starship to ferry astronauts to and from the surface of the Moon for the agency’s Artemis program. The NASA program manager overseeing SpaceX’s lunar lander contract, Lisa Watson-Morgan, said she was pleased with the results of the in-space engine restart demo last year.

“The whole path to the Moon, as we are getting ready to land on the Moon, we’ll perform a series of maneuvers, and the Raptors will have an environment that is very, very cold,” Morgan told Ars in a recent interview. “To that, it’s going to be important that they’re able to relight for landing purposes. So that was a great first step towards that.

“In addition, after we land, clearly, the Raptors will be off, and it will get very cold, and they will have to relight in a cold environment (to launch the crews off the lunar surface),” she said. “So that’s why that step was critical for the Human Landing System and NASA’s return to the Moon.”

“The biggest technology challenge remaining”

SpaceX continues to experiment with Starship’s heat shield, which the company’s founder and CEO, Elon Musk, has described as “the biggest technology challenge remaining with Starship.” In order for SpaceX to achieve its lofty goal of launching Starships multiple times per day, the heat shield needs to be fully and immediately reusable.

While the last three ships have softly splashed down in the Indian Ocean, some of their heat-absorbing tiles stripped away from the vehicle during reentry, when it’s exposed to temperatures up to 2,600° Fahrenheit (1,430° Celsius).

Engineers removed tiles from some areas of the ship for next week’s test flight in order to “stress-test” vulnerable parts of the vehicle. They also smoothed and tapered the edge of the tile line, where the ceramic heat shield gives way to the ship’s stainless steel skin, to address “hot spots” observed during reentry on the most recent test flight.

“Multiple metallic tile options, including one with active cooling, will test alternative materials for protecting Starship during reentry,” SpaceX said.

SpaceX is also flying rudimentary catch fittings on Starship to test their thermal performance on reentry. The ship will fly a more demanding trajectory during descent to probe the structural limits of the redesigned flaps at the point of maximum entry dynamic pressure, according to SpaceX.

All told, SpaceX’s inclusion of a satellite deployment demo and ship upgrades on next week’s test flight will lay the foundation for future missions, perhaps in the next few months, to take the next great leap in Starship development.

In comments following the last Starship test flight in November, SpaceX founder and CEO Elon Musk posted on X that the company could try to return the ship for a catch back at the launch site—something that would require the vehicle to complete at least one full orbit of Earth—as soon as the next flight following Monday’s mission.

“We will do one more ocean landing of the ship,” Musk posted. “If that goes well, then SpaceX will attempt to catch the ship with the tower.”

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Why solving crosswords is like a phase transition

There’s also the more recent concept of “explosive percolation,” whereby connectivity emerges not in a slow, continuous process but quite suddenly, simply by replacing the random node connections with predetermined criteria—say, choosing to connect whichever pair of nodes has the fewest pre-existing connections to other nodes. This introduces bias into the system and suppresses the growth of large dominant clusters. Instead, many large unconnected clusters grow until the critical threshold is reached. At that point, even adding just one or two more connections will trigger one global violent merger (instant uber-connectivity).
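
For readers who like to see the rule in code, here is a minimal, self-contained Python sketch of biased edge selection with a union-find structure tracking cluster sizes. It is a toy illustration of the “fewest pre-existing connections” criterion described above, not the exact model from any particular study.

```python
import random

# Toy sketch of biased ("explosive") percolation as described above: instead
# of wiring up a purely random pair of nodes, each step looks at a few random
# candidate pairs and connects the one whose endpoints have the fewest
# pre-existing connections. Illustrative only, not a published model.

N = 10_000
parent = list(range(N))   # union-find forest
size = [1] * N            # cluster size stored at each root
degree = [0] * N          # pre-existing connections per node

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(a, b):
    ra, rb = find(a), find(b)
    if ra == rb:
        return
    if size[ra] < size[rb]:
        ra, rb = rb, ra
    parent[rb] = ra
    size[ra] += size[rb]

random.seed(1)
for step in range(1, N + 1):
    # Three random candidate pairs; keep the one with the lowest combined degree.
    candidates = [tuple(random.sample(range(N), 2)) for _ in range(3)]
    a, b = min(candidates, key=lambda p: degree[p[0]] + degree[p[1]])
    degree[a] += 1
    degree[b] += 1
    union(a, b)
    if step % 2000 == 0:
        largest = max(size[find(i)] for i in range(N))
        print(f"edges added: {step:5d}   largest cluster: {largest}")
```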

Puzzling over percolation

One might not immediately think of crossword puzzles as a network, although there have been a couple of relevant prior mathematical studies. For instance, John McSweeney of the Rose-Hulman Institute of Technology in Indiana employed a random graph network model for crossword puzzles in 2016. He factored in how a puzzle’s solvability is affected by the interactions between the structure of the puzzle’s cells (squares) and word difficulty, i.e., the fraction of letters you need to know in a given word in order to figure out what it is.

Answers represented nodes while answer crossings represented edges, and McSweeney assigned a random distribution of word difficulty levels to the clues. “This randomness in the clue difficulties is ultimately responsible for the wide variability in the solvability of a puzzle, which many solvers know well—a solver, presented with two puzzles of ostensibly equal difficulty, may solve one readily and be stumped by the other,” he wrote at the time. At some point, there has to be a phase transition, in which solving the easiest words enables the puzzler to solve the more difficult words until the critical threshold is reached and the puzzler can fill in many solutions in rapid succession—a dynamic process that resembles, say, the spread of diseases in social groups.

In this sample realization, black sites are shown in black; empty sites are white; and occupied sites contain symbols and letters. Credit: Alexander K. Hartmann, 2024

Hartmann’s new model incorporates elements of several nonstandard percolation models, including how much the solver benefits from partial knowledge of the answers. Letters correspond to sites (white squares) while words are segments of those sites, bordered by black squares. There is an a priori probability of being able to solve a given word if no letters are known. If some words are solved, the puzzler gains partial knowledge of neighboring unsolved words, which increases the probability of those words being solved as well.
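
The flavor of such a model is easy to capture in a few lines. Below is a minimal toy sketch, with assumed functional forms of my own rather than Hartmann’s or McSweeney’s actual models: every row and column of a square grid is a “word” with a random difficulty, a word becomes solvable once an a priori probability plus a boost from letters revealed by solved crossing words exceeds that difficulty, and sweeping the a priori probability produces an abrupt jump in how much of the puzzle gets filled in.

```python
import random

# Minimal toy sketch of crossword solving as a percolation-like cascade.
# Letters are grid sites; every row and column of an n x n grid is a "word."
# The functional form of the knowledge boost is an assumption for illustration.

def fraction_solved(n=15, p0=0.15, seed=0):
    random.seed(seed)
    words = []  # each word: (list of (row, col) cells, random difficulty)
    for i in range(n):
        words.append(([(i, c) for c in range(n)], random.random()))  # across
        words.append(([(r, i) for r in range(n)], random.random()))  # down
    known = [[False] * n for _ in range(n)]   # letters revealed so far
    solved = [False] * len(words)

    changed = True
    while changed:                 # keep sweeping until nothing new solves
        changed = False
        for w, (cells, difficulty) in enumerate(words):
            if solved[w]:
                continue
            frac = sum(known[r][c] for r, c in cells) / len(cells)
            if p0 + (1 - p0) * frac > difficulty:   # assumed knowledge boost
                solved[w] = True
                changed = True
                for r, c in cells:
                    known[r][c] = True
    return sum(solved) / len(words)

# Sweeping the a priori solve probability p0 shows an abrupt jump from mostly
# unsolved to mostly solved puzzles, the phase-transition-like behavior
# discussed above.
for p0 in (0.1, 0.2, 0.3, 0.4, 0.5):
    print(p0, round(fraction_solved(p0=p0), 2))
```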

It’s remarkably easy to inject new medical misinformation into LLMs


Changing just 0.001% of inputs to misinformation makes the AI less accurate.

It’s pretty easy to see the problem here: The Internet is brimming with misinformation, and most large language models are trained on a massive body of text obtained from the Internet.

Ideally, having substantially higher volumes of accurate information might overwhelm the lies. But is that really the case? A new study by researchers at New York University examines how much medical misinformation can be included in a large language model (LLM) training set before it spits out inaccurate answers. While the study doesn’t identify a lower bound, it does show that even when misinformation accounts for just 0.001 percent of the training data, the resulting LLM is compromised.

While the paper is focused on the intentional “poisoning” of an LLM during training, it also has implications for the body of misinformation that’s already online and part of the training set for existing LLMs, as well as the persistence of out-of-date information in validated medical databases.

Sampling poison

Data poisoning is a relatively simple concept. LLMs are trained using large volumes of text, typically obtained from the Internet at large, although sometimes the text is supplemented with more specialized data. By injecting specific information into this training set, it’s possible to get the resulting LLM to treat that information as a fact when it’s put to use. This can be used for biasing the answers returned.

This doesn’t even require access to the LLM itself; it simply requires placing the desired information somewhere where it will be picked up and incorporated into the training data. And that can be as simple as placing a document on the web. As one manuscript on the topic suggested, “a pharmaceutical company wants to push a particular drug for all kinds of pain which will only need to release a few targeted documents in [the] web.”

Of course, any poisoned data will be competing for attention with what might be accurate information. So, the ability to poison an LLM might depend on the topic. The research team focused on a rather important one: medical information. Misinformation there can show up in general-purpose LLMs, such as those used for searching for information on the Internet, which end up being used to obtain medical information. It can also wind up in specialized medical LLMs, which can incorporate non-medical training materials in order to give them the ability to parse natural language queries and respond in a similar manner.

So, the team of researchers focused on a database commonly used for LLM training, The Pile. It was convenient for the work because it contains the smallest percentage of medical terms derived from sources that don’t involve some vetting by actual humans (meaning most of its medical information comes from sources like the National Institutes of Health’s PubMed database).

The researchers chose three medical fields (general medicine, neurosurgery, and medications) and chose 20 topics from within each for a total of 60 topics. Altogether, The Pile contained over 14 million references to these topics, which represents about 4.5 percent of all the documents within it. Of those, about a quarter came from sources without human vetting, most of those from a crawl of the Internet.

The researchers then set out to poison The Pile.

Finding the floor

The researchers used GPT-3.5 to generate “high quality” medical misinformation. While GPT-3.5 has safeguards that should prevent it from producing medical misinformation, the researchers found it would happily do so if given the right prompts (an LLM issue for a different article). The resulting articles could then be inserted into The Pile. Modified versions of The Pile were generated in which either 0.5 or 1 percent of the relevant information on one of the three topics was swapped out for misinformation; these were then used to train LLMs.

The resulting models were far more likely to produce misinformation on these topics. But the misinformation also impacted other medical topics. “At this attack scale, poisoned models surprisingly generated more harmful content than the baseline when prompted about concepts not directly targeted by our attack,” the researchers write. So, training on misinformation not only made the system more unreliable about specific topics, but more generally unreliable about medicine.

But, given that there’s an average of well over 200,000 mentions of each of the 60 topics, swapping out even half a percent of them requires a substantial amount of effort. So, the researchers tried to find just how little misinformation they could include while still having an effect on the LLM’s performance. Unfortunately, they never found a level low enough to be harmless.

Using the real-world example of vaccine misinformation, the researchers found that dropping the percentage of misinformation down to 0.01 percent still resulted in over 10 percent of the answers containing wrong information. Going for 0.001 percent still led to over 7 percent of the answers being harmful.

“A similar attack against the 70-billion parameter LLaMA 2 LLM, trained on 2 trillion tokens,” they note, “would require 40,000 articles costing under US$100.00 to generate.” The “articles” themselves could just be run-of-the-mill webpages. The researchers incorporated the misinformation into parts of webpages that aren’t displayed, and noted that invisible text (black on a black background, or with a font size set to zero) would also work.
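
That figure is easy to sanity-check with back-of-envelope arithmetic. The sketch below assumes an average of roughly 500 tokens per generated article, a number not given in the piece:

```python
# Rough check of the poisoning-scale claim above. The 0.001 percent figure and
# the 2-trillion-token training set come from the article; the ~500 tokens per
# generated article is an assumption for illustration.

training_tokens = 2_000_000_000_000   # LLaMA 2's reported training set size
poison_fraction = 0.001 / 100         # 0.001 percent of the training data
tokens_per_article = 500              # assumed average article length

poison_tokens = training_tokens * poison_fraction
articles_needed = poison_tokens / tokens_per_article
print(f"{poison_tokens:,.0f} poisoned tokens -> ~{articles_needed:,.0f} articles")
# 20,000,000 poisoned tokens -> ~40,000 articles, matching the quote above.
```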

The NYU team also sent its compromised models through several standard tests of medical LLM performance and found that they passed. “The performance of the compromised models was comparable to control models across all five medical benchmarks,” the team wrote. So there’s no easy way to detect the poisoning.

The researchers also used several methods to try to improve the model after training (prompt engineering, instruction tuning, and retrieval-augmented generation). None of these improved matters.

Existing misinformation

Not all is hopeless. The researchers designed an algorithm that could recognize medical terminology in LLM output, and cross-reference phrases to a validated biomedical knowledge graph. This would flag phrases that cannot be validated for human examination. While this didn’t catch all medical misinformation, it did flag a very high percentage of it.
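
The paper’s exact method isn’t described here, but the general shape of that kind of check is easy to sketch. In the hypothetical illustration below, the term lists, knowledge-graph pairs, and matching rules are all stand-ins rather than the authors’ implementation:

```python
# A hypothetical sketch of cross-referencing LLM output against a validated
# biomedical knowledge graph: scan for known medical terms and flag treatment
# claims that can't be matched to a validated pairing. The terms, pairs, and
# matching rules here are illustrative stand-ins, not the paper's method.

DRUGS = {"metformin", "amoxicillin"}
CONDITIONS = {"type 2 diabetes", "bacterial pneumonia", "influenza"}
VALIDATED = {("metformin", "type 2 diabetes"),
             ("amoxicillin", "bacterial pneumonia")}

def flag_unverifiable(sentences):
    """Return sentences whose drug/condition pairing isn't in the graph."""
    flagged = []
    for sentence in sentences:
        text = sentence.lower()
        drugs = [d for d in DRUGS if d in text]
        conditions = [c for c in CONDITIONS if c in text]
        for drug in drugs:
            for condition in conditions:
                if (drug, condition) not in VALIDATED:
                    flagged.append(sentence)   # hand off for human review
    return flagged

print(flag_unverifiable([
    "Metformin is a first-line therapy for type 2 diabetes.",
    "Amoxicillin reliably cures influenza.",   # unvalidated claim -> flagged
]))
```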

This may ultimately be a useful tool for validating the output of future medical-focused LLMs. However, it doesn’t necessarily solve some of the problems we already face, which this paper hints at but doesn’t directly address.

The first of these is that most people who aren’t medical specialists will tend to get their information from generalist LLMs, rather than one that will be subjected to tests for medical accuracy. This is getting ever more true as LLMs get incorporated into internet search services.

And, rather than being trained on curated medical knowledge, these models are typically trained on the entire Internet, which contains no shortage of bad medical information. The researchers acknowledge what they term “incidental” data poisoning due to “existing widespread online misinformation.” But a lot of that “incidental” information was generally produced intentionally, as part of a medical scam or to further a political agenda. Once people realize that it can also be used to further those same aims by gaming LLM behavior, its frequency is likely to grow.

Finally, the team notes that even the best human-curated data sources, like PubMed, also suffer from a misinformation problem. The medical research literature is filled with promising-looking ideas that never panned out, and out-of-date treatments and tests that have been replaced by approaches more solidly based on evidence. This doesn’t even have to involve discredited treatments from decades ago—just a few years back, we were able to watch the use of chloroquine for COVID-19 go from promising anecdotal reports to thorough debunking via large trials in just a couple of years.

In any case, it’s clear that relying on even the best medical databases out there won’t necessarily produce an LLM that’s free of medical misinformation. Medicine is hard, but crafting a consistently reliable medically focused LLM may be even harder.

Nature Medicine, 2025. DOI: 10.1038/s41591-024-03445-1  (About DOIs).

China is having standard flu season despite widespread HMPV fears

There’s a good chance you’ve seen headlines about HMPV recently, with some touting “what you need to know” about the virus, aka human metapneumovirus. The answer is: not much.

It’s a common, usually mild respiratory virus that circulates every year, blending into the throng of other seasonal respiratory illnesses that are often indistinguishable from one another. (The pack includes influenza virus, respiratory syncytial virus (RSV), adenovirus, parainfluenza virus, common human coronaviruses, bocavirus, rhinovirus, enteroviruses, and Mycoplasma pneumoniae, among others.) HMPV is in the same family of viruses as RSV.

As one viral disease epidemiologist at the US Centers for Disease Control summarized in 2016, it’s usually “clinically indistinguishable” from other bog-standard respiratory illnesses, like seasonal flu, that cause cough, fever, and nasal congestion. For most, the infection is crummy but not worth a visit to a doctor. As such, testing for it is limited. But, like other common respiratory infections, it can be dangerous for children under age 5, older adults, and those with compromised immune systems. It was first identified in 2001, but it has likely been circulating since at least 1958.

The situation in China

The explosion of interest in HMPV comes after reports of a spike of HMPV infections in China, which allegedly led to hordes of masked patients filling hospitals. But none of that appears to be accurate. While HMPV infections have risen, the increase is not unusual for the respiratory illness season. Further, HMPV is not the leading cause of respiratory illnesses in China right now; the leading cause is seasonal flu. And the surge in seasonal flu is also within the usual levels seen at this time of year in China.

Last week, the Chinese Center for Disease Control and Prevention released its sentinel respiratory illness surveillance data collected in the last week of December. It included the test results of respiratory samples taken from outpatients. Of those, 30 percent were positive for flu (the largest share), a jump of about 6 percent from the previous week (the largest jump). Only 6 percent were positive for HMPV, which was about the same detection rate as in the previous week (there was a 0.1 percent increase).

NASA defers decision on Mars Sample Return to the Trump administration


“We want to have the quickest, cheapest way to get these 30 samples back.”

This photo montage shows sample tubes shortly after they were deposited onto the surface by NASA’s Perseverance Mars rover in late 2022 and early 2023. Credit: NASA/JPL-Caltech/MSSS

For nearly four years, NASA’s Perseverance rover has journeyed across an unexplored patch of land on Mars—once home to an ancient river delta—and collected a slew of rock samples sealed inside cigar-sized titanium tubes.

These tubes might contain tantalizing clues about past life on Mars, but NASA’s ever-changing plans to bring them back to Earth are still unclear.

On Tuesday, NASA officials presented two options for retrieving and returning the samples gathered by the Perseverance rover. One alternative involves a conventional architecture reminiscent of past NASA Mars missions, relying on the “sky crane” landing system demonstrated on the agency’s two most recent Mars rovers. The other option would be to outsource the lander to the space industry.

NASA Administrator Bill Nelson left a final decision on a new mission architecture to the next NASA administrator working under the incoming Trump administration. President-elect Donald Trump nominated entrepreneur and commercial astronaut Jared Isaacman as the agency’s 15th administrator last month.

“This is going to be a function of the new administration in order to fund this,” said Nelson, a former Democratic senator from Florida who will step down from the top job at NASA on January 20.

The question now is: will they? And if the Trump administration moves forward with Mars Sample Return (MSR), what will it look like? Could it involve a human mission to Mars instead of a series of robotic spacecraft?

The Trump White House is expected to emphasize “results and speed” with NASA’s space programs, with the goal of accelerating a crew landing on the Moon and sending people to explore Mars.

NASA officials had an earlier plan to bring the Mars samples back to Earth, but the program slammed into a budgetary roadblock last year when an independent review team concluded the existing architecture would cost up to $11 billion—double the previous cost projection—and wouldn’t get the Mars specimens back to Earth until 2040.

This budget and schedule were non-starters for NASA. The agency tasked government labs, research institutions, and commercial companies to come up with better ideas to bring home the roughly 30 sealed sample tubes carried aboard the Perseverance rover. NASA deposited 10 sealed tubes on the surface of Mars a couple of years ago as insurance in case Perseverance dies before the arrival of a retrieval mission.

“We want to have the quickest, cheapest way to get these 30 samples back,” Nelson said.

How much for these rocks?

NASA officials believe a stripped-down concept proposed by the Jet Propulsion Laboratory in Southern California, which previously was in charge of the over-budget Mars Sample Return mission architecture, would cost between $6.6 billion and $7.7 billion, according to Nelson. JPL’s previous approach would have put a heavier lander onto the Martian surface, with small helicopter drones that could pick up sample tubes if there were problems with the Perseverance rover.

NASA previously deleted a “fetch rover” from the MSR architecture and instead will rely on Perseverance to hand off sample tubes to the retrieval lander.

An alternative approach would use a (presumably less expensive) commercial heavy lander, but this concept would still utilize several elements NASA would likely develop in a more traditional government-led manner: a nuclear power source, a robotic arm, a sample container, and a rocket to launch the samples off the surface of Mars and back into space. The cost range for this approach extends from $5.1 billion to $7.1 billion.

Artist’s illustration of SpaceX’s Starship approaching Mars. Credit: SpaceX

JPL will have a “key role” in both paths for MSR, said Nicky Fox, head of NASA’s science mission directorate. “To put it really bluntly, JPL is our Mars center in NASA science.”

If the Trump administration moves forward with either of the proposed MSR plans, this would be welcome news for JPL. The center, which is run by the California Institute of Technology under contract to NASA, laid off 955 employees and contractors last year, citing budget uncertainty, primarily due to the cloudy future of Mars Sample Return.

Without MSR, engineers at the Jet Propulsion Laboratory don’t have a flagship-class mission to build after the launch of NASA’s Europa Clipper spacecraft last year. The lab recently struggled with rising costs and delays with the previous iteration of MSR and NASA’s Psyche asteroid mission, and it’s not unwise to anticipate more cost overruns on a project as complex as a round-trip flight to Mars.

Ars submitted multiple requests to interview Laurie Leshin, JPL’s director, in recent months to discuss the lab’s future, but her staff declined.

Both MSR mission concepts outlined Tuesday would require multiple launches and an Earth return orbiter provided by the European Space Agency. These options would bring the Mars samples back to Earth as soon as 2035, but perhaps as late as 2039, Nelson said. The return orbiter and sample retrieval lander could launch as soon as 2030 and 2031, respectively.

“The main difference is in the landing mechanism,” Fox said.

To keep those launch schedules, Congress must immediately approve $300 million for Mars Sample Return in this year’s budget, Nelson said.

NASA officials didn’t identify any examples of a commercial heavy lander that could reach Mars, but the most obvious vehicle is SpaceX’s Starship. NASA already has a contract with SpaceX to develop a Starship vehicle that can land on the Moon, and SpaceX founder Elon Musk is aggressively pushing for a Mars mission with Starship as soon as possible.

NASA solicited eight studies from industry last year. SpaceX, Blue Origin, Rocket Lab, and Lockheed Martin—each with their own lander concepts—were among the companies that won NASA study contracts. SpaceX and Blue Origin are well-capitalized with Musk and Amazon’s Jeff Bezos as owners, while Lockheed Martin is the only company to have built a lander that successfully reached Mars.

This slide from a November presentation to the Mars Exploration Program Analysis Group shows JPL’s proposed “sky crane” architecture for a Mars sample retrieval lander. The landing system would be modified to handle a load about 20 percent heavier than the sky crane used for the Curiosity and Perseverance rover landings. Credit: NASA/JPL

The science community has long identified a Mars Sample Return mission as the top priority for NASA’s planetary science program. In the National Academies’ most recent decadal survey released in 2022, a panel of researchers recommended NASA continue with the MSR program but stated the program’s cost should not undermine other planetary science missions.

Teeing up for cancellation?

That’s exactly what is happening. Budget pressures from the Mars Sample Return mission, coupled with funding cuts stemming from a bipartisan federal budget deal in 2023, have prompted NASA’s planetary science division to institute a moratorium on starting new missions.

“The decision about Mars Sample Return is not just one that affects Mars exploration,” said Curt Niebur, NASA’s lead scientist for planetary flight programs, in a question-and-answer session with solar system researchers Tuesday. “It’s going to affect planetary science and the planetary science division for the foreseeable future. So I think the entire science community should be very tuned in to this.”

Rocket Lab, which has been more open about its MSR architecture than other companies, has posted details of its sample return concept on its website. Fox declined to offer details on other commercial concepts for MSR, citing proprietary concerns.

“We can wait another year, or we can get started now,” Rocket Lab posted on X. “Our Mars Sample Return architecture will put Martian samples in the hands of scientists faster and more affordably. Less than $4 billion, with samples returned as early as 2031.”

Through its own internal development and acquisitions of other aerospace industry suppliers, Rocket Lab said it has provided components for all of NASA’s recent Mars missions. “We can deliver MSR mission success too,” the company said.

Rocket Lab’s concept for a Mars Sample Return mission. Credit: Rocket Lab

Although NASA’s deferral of a decision on MSR to the next administration might convey a lack of urgency, officials said the agency and potential commercial partners need time to assess what roles the industry might play in the MSR mission.

“They need to flesh out all of the possibilities of what’s required in the engineering for the commercial option,” Nelson said.

On the program’s current trajectory, Fox said NASA would be able to choose a new MSR architecture in mid-2026.

Waiting, rather than deciding on an MSR plan now, will also allow time for the next NASA administrator and the Trump White House to determine whether either option aligns with the administration’s goals for space exploration. In an interview with Ars last week, Nelson said he did not want to “put the new administration in a box” with any significant MSR decisions in the waning days of the Biden administration.

One source with experience in crafting and implementing US space policy told Ars that Nelson’s deferral on a decision will “tee up MSR for canceling.” Faced with a decision to spend billions of dollars on a robotic sample return or billions of dollars to go toward a human mission to Mars, the Trump administration will likely choose the latter, the source said.

If that happens, NASA science funding could be freed up for other pursuits in planetary science. The second priority identified in the most recent planetary decadal survey is an orbiter and atmospheric probe to explore Uranus and its icy moons. NASA has held off on the development of a Uranus mission to focus on the Mars Sample Return first.

Science and geopolitics

Whether it’s with robots or humans, there’s a strong case for bringing pristine Mars samples back to Earth. The titanium tubes carried by the Perseverance rover contain rock cores, loose soil, and air samples from the Martian atmosphere.

“Bringing them back will revolutionize our understanding of the planet Mars and indeed, our place in the solar system,” Fox said. “We explore Mars as part of our ongoing efforts to safely send humans to explore farther and farther into the solar system, while also … getting to the bottom of whether Mars once supported ancient life and shedding light on the early solar system.”

Researchers can perform more detailed examinations of Mars specimens in sophisticated laboratories on Earth than is possible with the miniature instruments delivered to the red planet on a spacecraft. Analyzing samples in a terrestrial lab might reveal biosignatures, or the traces of ancient life, that elude detection with instruments on Mars.

“The samples that we have taken by Perseverance actually predate—they are older than any of the samples or rocks that we could take here on Earth,” Fox said. “So it allows us to kind of investigate what the early solar system was like before life began here on Earth, which is amazing.”

Fox said returning Mars samples before a human expedition would help NASA prioritize where astronauts should land on the red planet.

In a statement, the Planetary Society said it is “concerned that NASA is again delaying a decision on the program, committing only to additional concept studies.”

“It has been more than two years since NASA paused work on MSR,” the Planetary Society said. “It is time to commit to a path forward to ensure the return of the samples already being collected by the Perseverance rover.

“We urge the incoming Trump administration to expedite a decision on a path forward for this ambitious project, and for Congress to provide the funding necessary to ensure the return of these priceless samples from the Martian surface.”

China says it is developing its own mission to bring Mars rocks back to Earth. Named Tianwen-3, the mission could launch as soon as 2028 and return samples to Earth by 2031. While NASA’s plan would bring back carefully curated samples from an expansive environment that may have once harbored life, China’s mission will scoop up rocks and soil near its landing site.

“They’re just going to have a mission to grab and go—go to a landing site of their choosing, grab a sample and go,” Nelson said. “That does not give you a comprehensive look for the scientific community. So you cannot compare the two missions. Now, will people say that there’s a race? Of course, people will say that, but it’s two totally different missions.”

Still, Nelson said he wants NASA to be first. He said he has not had detailed conversations with Trump’s NASA transition team.

“I think it was a responsible thing to do, not to hand the new administration just one alternative if they want to have a Mars Sample Return,” Nelson said. “I can’t imagine that they don’t. I don’t think we want the only sample return coming back on a Chinese spacecraft.”


NASA defers decision on Mars Sample Return to the Trump administration Read More »

as-us-marks-first-h5n1-bird-flu-death,-who-and-cdc-say-risk-remains-low

As US marks first H5N1 bird flu death, WHO and CDC say risk remains low

The H5N1 bird flu situation in the US seems more fraught than ever this week as the virus continues to spread swiftly in dairy cattle and birds while sporadically jumping to humans.

On Monday, officials in Louisiana announced that the person who had developed the country's first severe H5N1 infection had died, marking the first H5N1 death in the US. Meanwhile, with no sign of H5N1 slowing, seasonal flu is skyrocketing, raising anxiety that the different flu viruses could mingle, swap genetic elements, and generate an even more dangerous strain.

But despite the seeming fever pitch of viral activity and fears, a representative for the World Health Organization noted today that the risk to the general population remains low—as long as one critical factor remains absent: person-to-person spread.

“We are concerned, of course, but we look at the risk to the general population and, as I said, it still remains low,” WHO spokesperson Margaret Harris told reporters at a Geneva press briefing Tuesday in response to questions related to the US death. In terms of updating risk assessments, you have to look at how the virus behaved in that patient and if it jumped from one person to another person, which it didn’t, Harris explained. “At the moment, we’re not seeing behavior that’s changing our risk assessment,” she added.

In a statement on the death late Monday, the US Centers for Disease Control and Prevention emphasized that no human-to-human transmission has been identified in the US. Since the start of 2024, 66 human H5N1 cases have been documented in the country. Of those, 40 were linked to exposure to infected dairy cows, 23 to infected poultry, two had no clear source, and one—the fatal case in Louisiana—was linked to exposure to infected backyard and wild birds.

As US marks first H5N1 bird flu death, WHO and CDC say risk remains low Read More »

science-paper-piracy-site-sci-hub-shares-lots-of-retracted-papers

Science paper piracy site Sci-Hub shares lots of retracted papers

Most scientific literature is published in for-profit journals that rely on subscriptions and paywalls to turn a profit. But that trend has been shifting as various governments and funding agencies are requiring that the science they fund be published in open-access journals. The transition is happening gradually, though, and a lot of the historical literature remains locked behind paywalls.

These paywalls can pose a problem for researchers who aren't at well-funded universities, including many in the Global South, who may not be able to access the published research they need in order to pursue their own studies. One solution has been Sci-Hub, a site where people can upload PDFs of published papers so they can be shared with anyone who can access the site. Despite losses in publishing industry lawsuits and attempts to block access, Sci-Hub continues to serve up research papers that would otherwise be protected by paywalls.

But what it’s serving up may not always be the most current version. Generally, when a paper is retracted for being invalid, publishers issue an updated version of its PDF with clear indications that the research it contains should no longer be considered valid. Unfortunately, once Sci-Hub has a copy of a paper, it doesn’t necessarily have a way to keep that copy up to date. In a scan of its content by researchers in India, about 85 percent of the retracted papers they checked carried no indication that they had been retracted.
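
To make the scale of the problem concrete, here is a minimal sketch of the kind of cross-check the researchers describe: matching the DOIs of locally archived PDFs against a list of known retractions. The file names, CSV layout, and file-naming convention below are hypothetical placeholders for illustration, not the study's actual pipeline or any real retraction database's export format.

```python
# A minimal sketch, assuming a hypothetical CSV of retracted DOIs and a local
# folder of PDFs named after their DOIs. None of these files or conventions
# come from the study itself; they are invented for illustration.
import csv
from pathlib import Path

def load_retracted_dois(csv_path):
    """Read a one-DOI-per-row CSV (hypothetical 'doi' column) into a set."""
    with open(csv_path, newline="", encoding="utf-8") as fh:
        return {row["doi"].strip().lower() for row in csv.DictReader(fh)}

def audit_archive(archive_dir, retracted):
    """Flag archived PDFs whose file names encode a retracted DOI.

    Assumes files are named like '10.1000_example.123.pdf', with the first '/'
    of the DOI replaced by '_' -- a naming convention invented for this sketch.
    """
    flagged = []
    for pdf in Path(archive_dir).glob("*.pdf"):
        doi = pdf.stem.replace("_", "/", 1).lower()
        if doi in retracted:
            flagged.append((pdf.name, doi))
    return flagged

if __name__ == "__main__":
    retracted = load_retracted_dois("retracted_dois.csv")
    for name, doi in audit_archive("papers/", retracted):
        print(f"{name}: {doi} has been retracted; check for an updated notice")
```

The point of such a check is that a static archive of PDFs, unlike a publisher's site, has no built-in mechanism to surface retraction notices, so any flagging has to be done against an external list.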

Correcting the scientific record

Scientific results go wrong for all sorts of reasons, from outright fraud to honest mistakes. If the problems don’t invalidate the overall conclusions of a paper, it’s possible to update the paper with a correction. If the problems are systemic enough to undermine the results, however, the paper is typically retracted—in essence, it should be treated as if it were never published in the first place.

It doesn’t always work out that way, however. Maybe people ignore the notifications that something has been retracted, or maybe they downloaded a copy of the paper before it got retracted and never saw the notifications at all, but citations to retracted papers regularly appear in the scientific record. Over the long term, this can distort our big-picture view of science, leading to wasted effort and misallocated resources.

Science paper piracy site Sci-Hub shares lots of retracted papers Read More »

ants-vs.-humans:-solving-the-piano-mover-puzzle

Ants vs. humans: Solving the piano-mover puzzle

Who is better at maneuvering a large load through a maze, ants or humans?

The piano-mover puzzle involves trying to transport an oddly shaped load across a constricted environment with various obstructions. It’s one of several variations on classic computational motion-planning problems, a key element in numerous robotics applications. But what would happen if you pitted human beings against ants in a competition to solve the piano-mover puzzle?

According to a paper published in the Proceedings of the National Academy of Sciences, humans have superior cognitive abilities and, hence, would be expected to outperform the ants. However, depriving people of verbal or nonverbal communication can level the playing field, with ants performing better in some trials. And while ants improved their cognitive performance when acting collectively as a group, the same did not hold true for humans.

Co-author Ofer Feinerman of the Weizmann Institute of Science and colleagues saw an opportunity to use the piano-mover puzzle to shed light on group decision-making, as well as the question of whether it is better to cooperate as a group or maintain individuality. “It allows us to compare problem-solving skills and performances across group sizes and down to a single individual and also enables a comparison of collective problem-solving across species,” the authors wrote.

They decided to compare the performances of ants and humans because both species are social and can cooperate while transporting loads larger than themselves. In essence, “people stand out for individual cognitive abilities while ants excel in cooperation,” the authors wrote.

Feinerman et al. used crazy ants (Paratrechina longicornis) for their experiments, along with human volunteers. They designed a physical version of the piano-mover puzzle involving a large T-shaped load that had to be maneuvered across a rectangular area divided into three chambers connected by narrow slits. The load started in the first chamber on the left, and the ant and human subjects had to figure out how to transport it through the second chamber and into the third.
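
The underlying planning problem can be illustrated with a toy discretization: treat the arena as a grid, the load as a T-shaped set of cells, and search over configurations of position plus 90-degree rotation. This is only a sketch of the configuration-space idea under invented assumptions; the actual experiments used a continuous arena and cooperative transport, and the maze, shape, and move set below are placeholders, not the paper's geometry.

```python
# A minimal sketch of piano-mover-style planning as breadth-first search over
# (row, col, rotation) configurations. The maze, T-shape, and move set are
# invented for illustration and are far coarser than the real experiment.
from collections import deque

# T-shaped load as (row, col) cell offsets; ROTATIONS holds its four
# re-anchored 90-degree turns.
T_SHAPE = [(0, 0), (1, 0), (2, 0), (1, 1)]

def rotate(cells):
    """Rotate offsets 90 degrees clockwise and re-anchor at (0, 0)."""
    turned = [(c, -r) for r, c in cells]
    min_r = min(r for r, _ in turned)
    min_c = min(c for _, c in turned)
    return sorted((r - min_r, c - min_c) for r, c in turned)

ROTATIONS = []
cells = sorted(T_SHAPE)
for _ in range(4):
    ROTATIONS.append(cells)
    cells = rotate(cells)

# Three chambers (columns 1-3, 5-7, 9-11) connected by two-cell-tall slits
# in the dividing walls at columns 4 and 8.
MAZE = [
    "#############",
    "#...#...#...#",
    "#...........#",
    "#...........#",
    "#...#...#...#",
    "#############",
]

def fits(top, left, rot):
    """True if every cell of the load lies on free space inside the maze."""
    for dr, dc in ROTATIONS[rot]:
        r, c = top + dr, left + dc
        if not (0 <= r < len(MAZE) and 0 <= c < len(MAZE[0])) or MAZE[r][c] == "#":
            return False
    return True

def solve(start, goal_col):
    """Breadth-first search over (top, left, rotation) configurations."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        top, left, rot = state
        if left >= goal_col:  # load sits entirely in the right-hand chamber
            return dist[state]
        # Unit translations plus an in-place quarter turn; a real planner
        # would sweep the rotation rather than jump between poses.
        for nxt in [(top + 1, left, rot), (top - 1, left, rot),
                    (top, left + 1, rot), (top, left - 1, rot),
                    (top, left, (rot + 1) % 4)]:
            if nxt not in dist and fits(*nxt):
                dist[nxt] = dist[state] + 1
                queue.append(nxt)
    return None  # no collision-free path exists in this discretization

if __name__ == "__main__":
    moves = solve(start=(1, 1, 0), goal_col=9)
    print(f"Load reaches the far chamber in {moves} grid moves")
```

A solver like this exhaustively explores the configuration space, which is exactly what neither the ants nor the communication-deprived humans can do; the experiment asks how close each group gets with only local sensing and collective pulling.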

Ants vs. humans: Solving the piano-mover puzzle Read More »