Author name: Kris Guyer


More cancer, less death? New alcohol-risk reviews offer conflicting takeaways


Two big, somewhat conflicting studies on alcohol risks will influence new guidelines.

Heavy drinking is clearly bad for your health. But it’s long been questioned whether moderate drinking is also risky—and, if so, how risky, exactly.

Health researchers have consistently found links between alcohol consumption and several types of cancers (namely mouth, throat, colon, rectal, liver, and breast), as well as liver diseases, injuries, and traffic accidents. But nailing down the health risks from the lower levels of drinking has been tricky. For one, much of the data on moderate drinking is from observational studies in different countries, cultures, and populations. They cannot determine if alcohol is the direct cause of any given association, and they may be swayed by other lifestyle factors. The resulting data can be noisy and inconsistent.

Moreover, many studies rely on people to self-report whether they drink and, if so, how much, which is problematic because people may not accurately assess or report how much they actually drink. A related problem is that studies in the past often compared drinkers to people who said they didn’t drink. The trouble is that non-drinking groups are often some mix of lifelong abstainers and people who used to drink but quit for some reason—maybe because of health effects. This latter group may have lingering health effects from their drinking days, which could skew any comparison looking for health differences.

Then there’s the larger, common problem with any research focused on food or beverages: some have been sponsored or somehow swayed by industry, casting suspicion on the findings, particularly the ones indicating benefits. This has been a clear problem for alcohol research. For instance, in 2018, the National Institutes of Health shut down a $100 million trial aimed at assessing the health effects (and potential benefits) of moderate drinking after it came to light that much of the funding was solicited from the alcohol industry. There was a lot of questionable communication between NIH scientists and alcohol industry representatives.

With all of that in the background, there’s been clamorous debate about how much risk, if any, people are swallowing with their evening cocktail, gameday beer, or wine with dinner.

Currently, the US dietary guidance recommends that adults who drink stick to moderation, defined as limiting “alcohol intake to two drinks or fewer in a day for men and one drink or fewer in a day for women.” But recently, health experts in the US and abroad have started calling for lower limits, noting that more data has poured in fortifying the links to cancers and other risks. In 2023, for instance, Canada released recommendations that people limit their alcohol consumption to two drinks or fewer per week—down significantly from the previously recommended limits of 10 drinks per week for women and 15 for men.

Two reviews

Now, it’s America’s turn to decide whether it will set the bar lower, too. This year, the US will update its dietary guidelines, a process carried out by the Department of Health and Human Services and the Department of Agriculture every five years. The federal government requested two big scientific reviews to assess the current knowledge of the health effects of alcohol, both of which will inform any potential revisions to the alcohol guidelines. Both studies have now been released and are open for discussion.

One is from the National Academies of Sciences, Engineering, and Medicine (the National Academies), which was tasked by Congress to review the current evidence on alcohol with a focus on how moderate drinking potentially affects a specific set of health outcomes. The review compared health outcomes in moderate drinkers with those of lifelong abstainers. For the review, the National Academies set up a committee of 14 experts.

The other report is from the Interagency Coordinating Committee on the Prevention of Underage Drinking (ICCPUD), which set up a Technical Review Subcommittee on Alcohol Intake and Health. For its report, the subcommittee looked not just at moderate drinking but health outcomes of a range of alcohol consumption compared to lifelong abstainers.

Based on top-line takeaways and tone, the two reports seem to have very different findings. While the National Academies review found a mix of benefits and harms from moderate drinking (one drink per day for women, and two per day for men), the ICCPUD review suggested that even the smallest amounts of alcohol (one drink per week) increased risk of death and various diseases. However, a closer look at the data shows they have some common ground.

The National Academies review

First, for the National Academies’ review, experts found sufficient evidence to assess the effects of moderate drinking on all-cause mortality, certain cancers, and cardiovascular risks. On the other hand, the reviewers found insufficient evidence to assess moderate drinking’s impact on weight changes, neurocognition, and lactation-related risks.

For all-cause mortality, a meta-analysis of data from eight studies found that moderate drinkers had a 16 percent lower risk of all-cause mortality (death from any cause) compared with lifelong abstainers. A meta-analysis of three studies suggested the risk of all-cause mortality was 23 percent lower for females who drank moderately compared to never-drinking females. Data from four studies indicated that moderate drinking males had a 16 percent lower risk of all-cause mortality than never-drinking males. Additional analyses found that the risk of all-cause mortality was 20 percent lower for moderate drinkers under age 60 and 18 percent lower for those age 60 and up.

“Based on data from the eight eligible studies from 2019 to 2023, the committee concludes that compared with never consuming alcohol, moderate alcohol consumption is associated with lower all-cause mortality,” the review states. The reviewers rated the conclusion as having “moderate certainty.”

Cancer and cardiovascular disease

For a look at cancer risks, a meta-analysis of four studies on breast cancer found that moderate drinkers had an overall 10 percent higher risk than non-drinkers. An additional analysis of seven studies found that for every 10 to 14 grams of alcohol (0.7 to one standard drink) consumed per day, there was a 5 percent higher risk of breast cancer. The data indicated that people who drank higher amounts of alcohol within the moderate range had higher risks than those who drank lower amounts in the moderate range (for instance, one drink a day versus 0.5 drinks a day).

For context, the average lifetime risk of being diagnosed with breast cancer in non-drinking females is about 11 to 12 percent. A 10 percent relative increase in risk would raise a person’s absolute risk to around 12 to 13 percent. The average lifetime risk of any female dying of breast cancer is 2.5 percent.
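
As a back-of-the-envelope check, here is a minimal Python sketch of that relative-to-absolute conversion; the baseline figures are the approximate values quoted above, not exact numbers from the review:

```python
# Convert a relative risk increase into an absolute lifetime risk.
# Baselines are the approximate figures quoted above.

def absolute_risk(baseline: float, relative_increase: float) -> float:
    """Apply a relative increase to a baseline absolute risk."""
    return baseline * (1 + relative_increase)

baseline_range = (0.11, 0.12)  # ~11-12% lifetime diagnosis risk, non-drinkers
relative_increase = 0.10       # 10% higher relative risk, moderate drinkers

low = absolute_risk(baseline_range[0], relative_increase)
high = absolute_risk(baseline_range[1], relative_increase)
print(f"{low:.1%} to {high:.1%}")  # 12.1% to 13.2%, i.e., roughly 12 to 13 percent
```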

Overall, the reviewers concluded that “consuming a moderate amount of alcohol was associated with a higher risk of breast cancer,” and the conclusion was rated as having moderate certainty.

A meta-analysis on colorectal cancer risks found a “statistically nonsignificant higher risk” in moderate drinkers compared to non-drinkers. However, studies looking at alcohol consumption at the highest levels of moderate drinking for males (e.g., two drinks per day) suggested a higher risk compared to males who drank lower amounts of alcohol in the moderate range (one drink per day).

The review concluded that there was insufficient evidence to support a link between moderate drinking and oral cavity, pharyngeal, esophageal, and laryngeal cancers.

Finally, for cardiovascular risks, meta-analyses found moderate drinking was associated with a 22 percent lower risk of heart attacks and an 11 percent lower risk of stroke (driven by lower risk of ischemic stroke, specifically). The reviewers rated these associations as low certainty, though, after noting that there was some concern for risk of bias in the studies.

For cardiovascular disease mortality, meta-analyses of four studies found an 18 percent lower risk of death among moderate drinkers compared with non-drinkers. Broken down, there was a 23 percent lower risk in female drinkers and 18 percent lower risk in male drinkers. The lower risk of cardiovascular disease mortality was rated as moderate certainty.

The ICCPUD review

The ICCPUD subcommittee’s report offered a darker outlook on moderate drinking, concluding that “alcohol use is associated with increased mortality for seven types of cancer (colorectal, female breast, liver, oral cavity, pharynx, larynx, esophagus [squamous cell type]),” and “increased risk for these cancers begins with any alcohol use and increases with higher levels of use.”

The review modeled lifetime risks of cancer and death and relative risks for a long list of problems, including infectious diseases, non-communicable diseases, and injuries. And it didn’t just compare non-drinkers with moderate drinkers; it assessed the relative risks of six levels of drinking: one drink a week; two drinks a week; three drinks a week; seven drinks a week (one a day); 14 drinks a week (two a day); and 21 drinks a week (three a day).

Overall, the analysis is very much a rough draft. Information is missing in places, and some of the figures are mislabeled and difficult to read. There are two figures labeled Figure 6, for instance, and Figure 7 (which may actually be Figure 8) is a graph without a Y-axis, making it difficult to interpret. The study also doesn’t discuss the potential bias of the individual studies in its analyses, doesn’t flag statistically nonsignificant results, and doesn’t comment on the certainty of any of its findings.

For instance, the top-line summary states: “In the United States, males and females have a 1 in 1,000 risk of dying from alcohol use if they consume more than 7 drinks per week. This risk increases to 1 in 100 if they consume more than 9 drinks per week.” But a look at the modeling behind these estimates shows that the cutoffs at which drinkers would reach a 0.1 percent or 1 percent risk of dying from alcohol use are broad. For males, a 0.1 percent lifetime risk of an alcohol-attributed death is reached at 6.5 standard drinks per week, with a 95 percent confidence interval spanning from less than one drink per week to 13.5 drinks per week. “This lifetime risk rose to 1 in 100 people above 8.5 drinks per week,” the text reads, but the confidence interval again spans roughly one to 14 drinks per week. So, basically, at anywhere between about one and 14 drinks a week, a male’s lifetime risk of dying from alcohol may be either 0.1 percent or 1 percent, according to this modeling.

Death risks

Regarding risk of death, the study did not look at all-cause mortality like the National Academies review did. Instead, it focused on deaths from causes specifically linked to alcohol. For both males and females, modeling indicated that the total lifetime risk of an alcohol-attributed death for people who consumed one, two, three, or seven drinks per week was statistically nonsignificant (the confidence intervals for each calculation spanned zero). Among those who have 14 drinks per week, the total lifetime risk of an alcohol-attributed death was about 4 in 100, with unintentional injuries being the biggest contributor for males and liver diseases the biggest contributor for females. Among those who have 21 drinks per week, the risk of death was about 7 in 100 for males and 8 in 100 for females. Unintentional injuries and liver diseases were again the biggest contributors to the risk.

Some experts have speculated that the lower risk of all-cause mortality found in the National Academies’ analysis (which has been seen in previous studies) may be due to healthy lifestyle patterns among people who drink moderately rather than the protective effects of alcohol. The line of thinking would suggest that healthy lifestyle choices, like regular exercise and a healthy diet, can negate certain risks, including the potential risks of alcohol. However, the ICCPUD emphasizes the reverse argument, noting that poor health choices would likely exacerbate the risks of alcohol. “[A]lcohol would have a greater impact on the health of people who smoke, have poor diets, engage in low physical activity, are obese, have hepatitis infection, or have a family history of specific diseases than it would other individuals.”

Relative risks

In terms of relative risk of the range of conditions, generally, the ICCPUD study found small, if any, increases in risk at the three lowest levels of drinking, with risks rising at higher levels. The study’s findings on breast cancer risk were in line with the National Academies’ review. ICCPUD found that pre-menopausal females who drink moderately (one drink per day) had a 6 percent higher risk of breast cancer than non-drinkers, while post-menopausal moderate drinkers had a 17 percent higher risk. (You can see the complete set of relative risk estimates in Table A6 beginning on page 70 of the report.)

For some cancers, moderate drinking raised the risk substantially. For instance, males who have two drinks per day see their risk of esophageal cancer more than double. But, it’s important to note that the absolute risk for many of these cancers is small to begin with. The average risk of esophageal cancer in men is 0.8 percent, according to the American Cancer Society. With the increased risk from moderate drinking, it would be below 2 percent. Still, alcohol consumption increased the risks of nearly all the cancers examined, with the higher levels of alcohol consumption having the highest risk.

As for cardiovascular risks, ICCPUD’s review found lower risk than non-drinkers in several of the categories. The risk of ischemic heart disease was lower than that of non-drinkers at all six drinking levels. The risk of ischemic stroke was lower among drinkers who had one, two, three, or seven drinks per week compared to non-drinkers. At 14 and 21 drinks per week, the risk of ischemic stroke rose by 8 percent.


Beth is Ars Technica’s Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.


Hollywood mourns the loss of David Lynch

The success of Lynch’s next film, Blue Velvet, helped assuage his disappointment, as did his move to television with the bizarrely surreal and influential series Twin Peaks—part detective story, part soap opera, with dashes of sci-fi and horror. The series spawned a spin-off prequel movie, Twin Peaks: Fire Walk With Me (1992), and a 2017 revival series, Twin Peaks: The Return, that picks up the storyline 25 years later. Many other TV series were influenced by Lynch’s show, including The X-Files, Lost, The Sopranos, Bates Motel, Fargo, Riverdale, Atlanta, and the animated series Gravity Falls.

His final feature films were an LA-centric trilogy—Lost Highway (1997), Mulholland Drive (2001), and Inland Empire (2006)—and 1999’s biographical road drama, The Straight Story, based on the true story of a man named Alvin Straight who drove across Iowa and Wisconsin on a lawn mower. It was acquired by Walt Disney Pictures and was Lynch’s only G-rated film.

“A singular visionary dreamer”

The director’s filmography also includes an assortment of short films, all bearing his eccentric stamp, including a surrealist short, Absurda, shown at Cannes in 2007, as well as Premonition Following an Evil Deed (NSFW YouTube link), Lynch’s contribution to the 1995 anthology film Lumière and Company. All 41 featured directors used the original Cinématographe camera invented by the Lumière brothers. Lynch was also an avid painter, cartoonist, and musician and directed several music videos for such artists as Moby and Nine Inch Nails. Until his death, he hosted quirky online “weather reports” and a web series, What Is David Lynch Working on Today? He even racked up the occasional acting credit.

Lynch received an Honorary Oscar in 2019 for lifetime achievement at the Governors Awards after three prior nominations for The Elephant Man, Blue Velvet, and Mulholland Drive. Deadline’s Pete Hammond called Lynch’s speech “probably one of the shortest for any Oscar acceptance.” Lynch briefly thanked the Academy and the other honorees, wished everyone a great night, then pointed to the statuette and said, “You have a very interesting figure. Good night.” At Cannes, he won the Palme d’Or in 1990 for Wild at Heart and Best Director in 2001 for Mulholland Drive.

Naomi Watts, who played a dual role as doppelgängers Betty Elms and Diane Selwyn in Mulholland Drive, said that Lynch put her “on the map” as an actor by casting her. “It wasn’t just his art that impacted me—his wisdom, humor, and love gave me a special sense of belief in myself I’d never accessed before,” she said in a statement. “Every moment together felt charged with a presence I’ve rarely seen or known. Probably because, yes, he seemed to live in an altered world, one that I feel beyond lucky to have been a small part of. And David invited all to glimpse into that world through his exquisite storytelling, which elevated cinema and inspired generations of filmmakers across the globe.”


A solid electrolyte gives lithium-sulfur batteries ludicrous endurance


Sulfur can store a lot more lithium but is problematically reactive in batteries.

If you weren’t aware, sulfur is pretty abundant. Credit: P_Wei

Lithium may be the key component in most modern batteries, but it doesn’t make up the bulk of the material used in them. Instead, much of the material is in the electrodes, where the lithium gets stored when the battery isn’t charging or discharging. So one way to make lighter and more compact lithium-ion batteries is to find electrode materials that can store more lithium. That’s one of the reasons that recent generations of batteries are starting to incorporate silicon into the electrode materials.

There are materials that can store even more lithium than silicon; a notable example is sulfur. But sulfur has a tendency to react with itself, producing ions that can float off into the electrolyte. Plus, like any electrode material, it tends to expand in proportion to the amount of lithium that gets stored, which can create physical strains on the battery’s structure. So while it has been easy to make lithium-sulfur batteries, their performance has tended to degrade rapidly.

But this week, researchers described a lithium-sulfur battery that still has over 80 percent of its original capacity after 25,000 charge/discharge cycles. All it took was a solid electrolyte that was more reactive than the sulfur itself.

When lithium meets sulfur…

Sulfur is an attractive battery material. It’s abundant and cheap, and sulfur atoms are relatively lightweight compared to many of the other materials used in battery electrodes. Sodium-sulfur batteries, which rely on two very cheap raw materials, have already been developed, although they only work at temperatures high enough to melt both of these components. Lithium-sulfur batteries, by contrast, could operate more or less the same way that current lithium-ion batteries do.

With a few major exceptions, that is. One is that the elemental sulfur used as an electrode is a very poor conductor of electricity, so it has to be dispersed within a mesh of conductive material. (You can contrast that with graphite, which both stores lithium and conducts electricity relatively well, thanks to being composed of countless sheets of graphene.) Lithium is stored there as Li2S, which occupies substantially more space than the elemental sulfur it’s replacing.

Both of these issues, however, can be solved with careful engineering of the battery’s structure. A more severe problem comes from the properties of the lithium-sulfur reactions that occur at the electrode. Elemental sulfur exists as an eight-atom ring, and the reactions with lithium are slow enough that semi-stable intermediates with smaller chains of sulfur end up forming. Unfortunately, these tend to be soluble in most electrolytes, allowing them to travel to the opposite electrode and participate in chemical reactions there.

This process essentially discharges the battery without allowing the electrons to be put to use. And it gradually leaves the electrode’s sulfur unavailable for participating in future charge/discharge cycles. The net result is that early generations of the technology would discharge themselves while sitting unused and would only survive a few hundred cycles before performance decayed dramatically.

But there has been progress on all these fronts, and some lithium-sulfur batteries with performance similar to lithium-ion have been demonstrated. Late last year, a company announced that it had lined up the money needed to build the first large-scale lithium-sulfur battery factory. Still, work on improvements has continued, and the new work seems to suggest ways to boost performance well beyond lithium-ion.

The need for speed

The paper describing the new developments, done by a collaboration between Chinese and German researchers, focuses on one aspect of the challenges posed by lithium-sulfur batteries: the relatively slow chemical reaction between lithium ions and elemental sulfur. It presents that aspect as a roadblock to fast charging, something that will be an issue for automotive applications. But at the same time, finding a way to limit the formation of inactive intermediate products during this reaction goes to the root of the relatively short usable life span of lithium-sulfur batteries.

As it turns out, the researchers found two.

One of the problems with the lithium-sulfur reaction intermediates is that they dissolve in most electrolytes. But that’s not a problem if the electrolyte isn’t a liquid. Solid electrolytes are materials that have a porous structure at the atomic level, with the environment inside the pores being favorable for ions. This allows ions to diffuse through the solid. If there’s a way to trap ions on one side of the electrolyte, such as a chemical reaction that traps or de-ionizes them, then it can enable one-way travel.

Critically, pores that favor the transit of lithium ions, which are quite compact, aren’t likely to allow the transit of the large ionized chains of sulfur. So a solid electrolyte should help cut down on the problems faced by lithium-sulfur batteries. But it won’t necessarily help with fast charging.

The researchers began by testing a glass formed from a mixture of boron, sulfur, and lithium (B2S3 and Li2S). But this glass had terrible conductivity, so they started experimenting with related glasses and settled on a combination that substituted in some phosphorus and iodine.

The iodine turned out to be a critical component. While the exchange of electrons with sulfur is relatively slow, iodine undergoes electron exchange (technically termed a redox reaction) extremely quickly. So it can act as an intermediate in the transfer of electrons to sulfur, speeding up the reactions that occur at the electrode. In addition, iodine has relatively low melting and boiling points, and the researchers suggest there’s some evidence that it moves around within the electrolyte, allowing it to act as an electron shuttle.

Successes and caveats

The result is a far superior electrolyte—and one that enables fast charging. It’s typical that fast charging cuts into the total capacity that can be stored in a battery. But when charged at an extraordinarily fast rate (50C, meaning a full charge in just over a minute), a battery based on this system still had half the capacity of a battery charged 25 times more slowly (2C, or a half-hour to full charge).

But the striking thing was how durable the resulting battery was. Even at an intermediate charging rate (5C), it still had over 80 percent of its initial capacity after over 25,000 charge/discharge cycles. By contrast, lithium-ion batteries tend to hit that level of decay after about 1,000 cycles. If that sort of performance is possible in a mass-produced battery, it’s only a slight exaggeration to say it can radically alter our relationships with many battery-powered devices.
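
For readers unfamiliar with C-rate notation, here is a minimal sketch of the arithmetic behind those figures; the definitions are standard battery conventions, not anything specific from the paper:

```python
# C-rate arithmetic: an nC rate corresponds to a full charge in 1/n hours.

def charge_time_minutes(c_rate: float) -> float:
    """Minutes required for a full charge at the given C-rate."""
    return 60.0 / c_rate

print(charge_time_minutes(50))  # 1.2 min -- "a full charge in just over a minute"
print(charge_time_minutes(2))   # 30.0 min -- "a half-hour to full charge"

# Cycle life in calendar years, assuming one full cycle per day.
cycles = 25_000
print(f"{cycles / 365.25:.0f} years")  # ~68 years, the scale behind the
                                       # "65 years of daily cycling" figure below
```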

What’s not at all clear, however, is whether this takes full advantage of one of the original promises of lithium-sulfur batteries: more charge in a given weight and volume. The researchers specify the battery being used for testing; one electrode is an indium/lithium metal foil, and the other is a mix of carbon, sulfur, and the glass electrolyte. A layer of the electrolyte sits between them. But when giving numbers for the storage capacity per weight, only the weight of the sulfur is mentioned.

Still, even if weight issues would preclude this from being stuffed into a car or cell phone, there are plenty of storage applications that would benefit from something that doesn’t wear out even with 65 years of daily cycling.

Nature, 2025. DOI: 10.1038/s41586-024-08298-9  (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


Google is about to make Gemini a core part of Workspaces—with price changes

Google has added AI features to its regular Workspace accounts for business while slightly raising the baseline prices of Workspace plans.

Previously, AI tools in the Gemini Business plan were a $20-per-seat add-on to existing Workspace accounts, which had a base cost of $12 per seat without the add-on. Now, the AI tools are included for all Workspace users, but the per-seat base price is increasing from $12 to $14.

That means that those who were already paying extra for Gemini are going to pay less than half of what they were—effectively $14 per seat instead of $32. But those who never used or wanted Gemini or any other newer features under the AI umbrella from Workspace are going to pay a little bit more than before.
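
A quick sketch of that pricing arithmetic, using the per-seat monthly figures reported above:

```python
# Per-seat monthly costs, per the figures reported above.
old_base = 12      # previous Workspace base price
gemini_addon = 20  # previous Gemini Business add-on
new_base = 14      # new all-inclusive base price

old_with_ai = old_base + gemini_addon
print(old_with_ai)                 # 32: what AI users effectively paid before
print(new_base < old_with_ai / 2)  # True: $14 is less than half of $32
print(new_base - old_base)         # 2: extra dollars per seat for non-AI users
```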

Features covered here include access to Gemini Advanced, the NotebookLM research assistant, email and document summaries in Gmail and Docs, adaptive audio and additional transcription languages for Meet, and “help me write” and Gemini in the side panel across a variety of applications.

Google says that it plans “to roll out even more AI features previously available in Gemini add-ons only.”


Here’s what NASA would like to see SpaceX accomplish with Starship this year


Iterate, iterate, and iterate some more

The seventh test flight of Starship is scheduled for launch Thursday afternoon.

SpaceX’s upgraded Starship rocket stands on its launch pad at Starbase, Texas. Credit: SpaceX

SpaceX plans to launch the seventh full-scale test flight of its massive Super Heavy booster and Starship rocket Thursday afternoon. It’s the first of what might be a dozen or more demonstration flights this year as SpaceX tries new things with the most powerful rocket ever built.

There are many things on SpaceX’s Starship to-do list in 2025. They include debuting an upgraded, larger Starship, known as Version 2 or Block 2, on the test flight preparing to launch Thursday. The one-hour launch window opens at 5 pm EST (4 pm CST; 22:00 UTC) at SpaceX’s launch base in South Texas. You can watch SpaceX’s live webcast of the flight here.

SpaceX will again attempt to catch the rocket’s Super Heavy booster—more than 20 stories tall and wider than a jumbo jet—back at the launch pad using mechanical arms, or “chopsticks,” mounted to the launch tower. Read more about the Starship Block 2 upgrades in our story from last week.

You might think of Thursday’s Starship test flight as an apéritif before the entrées to come. Ars recently spoke with Lisa Watson-Morgan, the NASA engineer overseeing the agency’s contract with SpaceX to develop a modified version of Starship to land astronauts on the Moon. NASA has contracts with SpaceX worth more than $4 billion to develop and fly two Starship human landing missions under the umbrella of the agency’s Artemis program to return humans to the Moon.

We are publishing the entire interview with Watson-Morgan below, but first, let’s assess what SpaceX might accomplish with Starship this year.

There are many things to watch for on this test flight, including the deployment of 10 satellite simulators to test the ship’s payload accommodations and the performance of a beefed-up heat shield as the vehicle blazes through the atmosphere for reentry and splashdown in the Indian Ocean.

If this all works, SpaceX may try to launch a ship into low-Earth orbit on the eighth flight, expected to launch in the next couple of months. All of the Starship test flights to date have intentionally flown on suborbital trajectories, bringing the ship back toward reentry over the sea northwest of Australia after traveling halfway around the world.

Then, there’s an even bigger version of Starship called Block 3 that could begin flying before the end of the year. This version of the ship is the one that SpaceX will use to start experimenting with in-orbit refueling, according to Watson-Morgan.

In order to test refueling, two Starships will dock together in orbit, allowing one vehicle to transfer super-cold methane and liquid oxygen into the other. Nothing like this has ever been attempted at this scale. Future Starship missions to the Moon and Mars may require 10 or more tanker missions to gas up in low-Earth orbit. All of these missions will use different versions of the same basic Starship design: a human-rated lunar lander, a propellant depot, and a refueling tanker.

Artist’s illustration of Starship on the surface of the Moon. Credit: SpaceX

Questions for 2025

Catching Starship back at its launch tower and demonstrating orbital propellant transfer are the two most significant milestones on SpaceX’s roadmap for 2025.

SpaceX officials have said they aim to fly as many as 25 Starship missions this year, allowing engineers to more rapidly iterate on the vehicle’s design. SpaceX is constructing a second launch pad at its Starbase facility near Brownsville, Texas, to help speed up the launch cadence.

Can SpaceX achieve this flight rate in 2025? Will faster Starship manufacturing and reusability help the company fly more often? Will SpaceX fly its first ship-to-ship propellant transfer demonstration this year? When will Starship begin launching large batches of new-generation Starlink Internet satellites?

Licensing delays at the Federal Aviation Administration have been a thorn in SpaceX’s side for the last couple of years. Will those go away under the incoming administration of President-elect Donald Trump, who counts SpaceX founder Elon Musk as a key adviser?

And will SpaceX gain a larger role in NASA’s Artemis lunar program? The Artemis program’s architecture is sure to be reviewed by the Trump administration and the nominee for the agency’s next administrator, billionaire businessman and astronaut Jared Isaacman.

The very expensive Space Launch System rocket, developed by NASA with Boeing and other traditional aerospace contractors, might be canceled. NASA currently envisions the SLS rocket and Orion spacecraft as the transportation system to ferry astronauts between Earth and the vicinity of the Moon, where crews would meet up with a landing vehicle provided by commercial partners SpaceX and Blue Origin.

Watson-Morgan didn’t have answers to all of these questions. Many of them are well outside of her purview as Human Landing System program manager, so Ars didn’t ask. Instead, Ars discussed technical and schedule concerns with her during the half-hour interview. Here is one part of the discussion, lightly edited for clarity.

Ars: What do you hope to see from Flight 7 of Starship?

Lisa Watson-Morgan: One of the exciting parts of working with SpaceX are these test flights. They have a really fast turnaround, where they put in different lessons learned. I think you saw many of the flight objectives that they discussed from Flight 6, which was a great success. I think they mentioned different thermal testing experiments that they put on the ship in order to understand the different heating, the different loads on certain areas of the system. All that was really good with each one of those, in addition to how they configure the tiles. Then, from that, there’ll be additional tests that they will put on Flight 7, so you kind of get this iterative improvement and learning that we’ll get to see in Flight 7. So Flight 7 is the first Version 2 of their ship set. When I say that, I mean the ship, the booster, all the systems associated with it. So, from that, it’s really more just understanding how the system, how the flaps, how all of that interacts and works as they’re coming back in. Hopefully we’ll get to see some catches, that’s always exciting.

Ars: How did the in-space Raptor engine relight go on Flight 6 (on November 19)?

Lisa Watson-Morgan: Beautifully. And that’s something that’s really important to us because when we’re sitting on the Moon… well, actually, the whole path to the Moon as we are getting ready to land on the Moon, we’ll perform a series of maneuvers, and the Raptors will have an environment that is very, very cold. To that, it’s going to be important that they’re able to relight for landing purposes. So that was a great first step towards that. In addition, after we land, clearly the Raptors will be off, and it will get very cold, and they will have to relight in a cold environment (to get off the Moon). So that’s why that step was critical for the Human Landing System and NASA’s return to the Moon.

A recent artist’s illustration of two Starships docked together in low-Earth orbit. Credit: SpaceX

Ars: Which version of the ship is required for the propellant transfer demonstration, and what new features are on that version to enable this test?

Lisa Watson-Morgan: We’re looking forward to the Version 3, which is what’s coming up later on, sometime in ’25, in the near term, because that’s what we need for propellant transfer and the cryo fluid work that is also important to us… There are different systems in the V3 set that will help us with cryo fluid management. Obviously, with those, we have to have the couplers and the quick-disconnects in order for the two systems to have the right guidance, navigation, trajectory, all the control systems needed to hold their station-keeping in order to dock with each other, and then perform the fluid transfer. So all the fluid lines and all that’s associated with that, those systems, which we have seen in tests and held pieces of when we’ve been working with them at their site, we’ll get to see those actually in action on orbit.

Ars: Have there been any ground tests of these systems, whether it’s fluid couplers or docking systems? Can you talk about some of the ground tests that have gone into this development?

Lisa Watson-Morgan: Oh, absolutely. We’ve been working with them on ground tests for this past year. We’ve seen the ground testing and reviewed the data. Our team works with them on what we deem necessary for the various milestones. While the milestone contains proprietary (information), we work closely with them to ensure that it’s going to meet the intent, safety-wise as well as technically, of what we’re going to need to see. So they’ve done that.

Even more exciting, they have recently shipped some of their docking systems to the Johnson Space Center for testing with the Orion Lockheed Martin docking system, and that’s for Artemis III. Clearly, that’s how we’re going to receive the crew. So those are some exciting tests that we’ve been doing this past year as well that’s not just focused on, say, the booster and the ship. There are a lot of crew systems that are being developed now. We’re in work with them on how we’re going to effectuate the crew manual control requirements that we have, so it’s been a great balance to see what the crew needs, given the size of the ship. That’s been a great set of work. We have crew office hours where the crew travels to Hawthorne [SpaceX headquarters in California] and works one-on-one with the different responsible engineers in the different technical disciplines to make sure that they understand not just little words on the paper from a requirement, but actually what this means, and then how systems can be operated.

Ars: For the docking system, Orion uses the NASA Docking System, and SpaceX brings its own design to bear on Starship?

Lisa Watson-Morgan: This is something that I think the Human Landing System has done exceptionally well. When we wrote our high-level set of requirements, we also wrote it with a bigger picture in mind—looked into the overall standards of how things are typically done, and we just said it has to be compliant with it. So it’s a docking standard compliance, and SpaceX clearly meets that. They certainly do have the Dragon heritage, of course, with the International Space Station. So, because of that, we have high confidence that they’re all going to work very well. Still, it’s important to go ahead and perform the ground testing and get as much of that out of the way as we can.

Lisa Watson-Morgan, NASA’s HLS program manager, is based at Marshall Space Flight Center in Huntsville, Alabama. Credit: NASA/Aubrey Gemignani

Ars: How far along is the development and design of the layout of the crew compartment at the top of Starship? Is it far along, or is it still in the conceptual phase? What can you say about that?

Lisa Watson-Morgan: It’s much further along there. We’ve had our environmental control and life support systems, whether it’s carbon dioxide monitoring or fans to make sure the air is circulating properly. We’ve been in a lot of work with SpaceX on the temperature. It’s… a large area (for the crew). The seats, making sure that the crew seats and the loads on that are appropriate. For all of that work, as the analysis work has been performed, the NASA team is reviewing it. They had a mock-up, actually, of some of their life support systems even as far back as eight-plus months ago. So there’s been a lot of progress on that.

Ars: Is SpaceX planning to use a touchscreen design for crew displays and controls, like they do with the Dragon spacecraft?

Lisa Watson-Morgan: We’re in talks about that, about what would be the best approach for the crew for the dynamic environment of landing.

Ars: I can imagine it is a pretty dynamic environment with those Raptor engines firing. It’s almost like a launch in reverse.

Lisa Watson-Morgan: Right. Those are some of the topics that get discussed in the crew office hours. That’s why it’s good to have the crew interacting directly, in addition to the different discipline leads, whether it’s structural, mechanical, propulsion, to have all those folks talking guidance and having control to say, “OK, well, when the system does this, here’s the mode we expect to see. Here’s the impact on the crew. And is this condition, or is the option space that we have on the table, appropriate for the next step, with respect to the displays.”

Ars: One of the big things SpaceX needs to prove out before going to the Moon with Starship is in-orbit propellant transfer. When do you see the ship-to-ship demonstration occurring?

Lisa Watson-Morgan: I see it occurring in ’25.

Ars: Anything more specific about the schedule for that?

Lisa Watson-Morgan: That’d be a question for SpaceX because they do have a number of flights that they’re performing commercially, for their maturity. We get the benefit of that. It’s actually a great partnership. I’ll tell you, it’s really good working with them on this, but they’d have to answer that question. I do foresee it happening in ’25.

Ars: What things do you need to see SpaceX accomplish before they’re ready for the refueling demo? I’m thinking of things like the second launch tower, potentially. Do they need to demonstrate a ship catch or anything like that before going for orbital refueling?

Lisa Watson-Morgan: I would say none of that’s required. You just kind of get down to, what are the basics? What are the basics that you need? So you need to be able to launch rapidly off the same pad, even. They’ve shown they can launch and catch within a matter of minutes. So that is good confidence there. The catching is part of their reuse strategy, which is more of their commercial approach, and not a NASA requirement. NASA reaps the benefit of it by good pricing as a result of their commercial model, but it is not a requirement that we have. So they could theoretically use the same pad to perform the propellant transfer and the long-duration flight, because all it requires is two launches, really, within a specified time period to where the two systems can meet in a planned trajectory or orbit to do the propellant transfer. So they could launch the first one, and then within a week or two or three, depending on what the concept of operations was that we thought we could achieve at that time, and then have the propellant transfer demo occur that way. So you don’t necessarily need two pads, but you do need more thermal characterization of the ship. I would say that is one of the areas (we need to see data on), and that is one of the reasons, I think, why they’re working so diligently on that.

Ars: You mentioned the long-duration flight demonstration. What does that entail?

Lisa Watson-Morgan: The simple objectives are to launch two different tankers or Starships. The Starship will eventually be a crewed system. Clearly, the ones that we’re talking about for the propellant transfer are not. It’s just to have the booster and Starship system launch, and within a few weeks, have another one launch, and have them rendezvous. They need to be able to find each other with their sensors. They need to be able to come close, very, very close, and they need to be able to dock together, connect, do the quick connect, and make sure they are able, then, to flow propellant and LOX (liquid oxygen) to another system. Then, we need to be able to measure the quantity of how much has gone over. And from that, then they need to safely undock and dispose.

Ars: So the long-duration flight demonstration is just part of what SpaceX needs to do in order to be ready for the propellant transfer demonstration?

Lisa Watson-Morgan: We call it long duration just because it’s not a 45-minute or an hour flight. Long duration, obviously, that’s a relative statement, but it’s a system that can stay up long enough to be able to find another Starship and perform those maneuvers and flow of fuel and LOX.

Ars: How much propellant will you transfer with this demonstration, and do you think you’ll get all the data you need in one demonstration, or will SpaceX need to try this several times?

Lisa Watson-Morgan: That’s something you can ask SpaceX (about how much propellant will be transferred). Clearly, I know, but there’s some sensitivity there. You’ve seen our requirements in our initial solicitation. We have thresholds and goals, meaning we want you to at least do this, but more is better, and that’s typically how we work almost everything. Working with commercial industry in these fixed-price contracts has worked exceptionally well, because when you have providers that are also wanting to explore commercially or trying to make a commercial system, they are interested in pushing more than what we would typically ask for, and so often we get that for an incredibly fair price.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.


Two lunar landers are on the way to the Moon after SpaceX’s double moonshot

Julianna Scheiman, director of NASA science missions for SpaceX, said it made sense to pair the Firefly and ispace missions on the same Falcon 9 rocket.

“When we have two missions that can each go to the Moon on the same launch, that is something that we obviously want to take advantage of,” Scheiman said. “So when we found a solution for the Firefly and ispace missions to fly together on the same Falcon 9, it was a no-brainer to put them together.”

SpaceX stacked the two landers, one on top of the other, inside the Falcon 9’s payload fairing. Firefly’s lander, the larger of the two spacecraft, rode on top of the stack and deployed from the rocket first. The Resilience lander from ispace launched in the lower position, cocooned inside a specially designed canister. Once Firefly’s lander separated from the Falcon 9, the rocket jettisoned the canister, performed a brief engine firing to maneuver into a slightly different orbit, then released ispace’s lander.

This dual launch arrangement resulted in a lower launch price for Firefly and ispace, according to Scheiman.

“At SpaceX, we are really interested in and invested in lowering the cost of launch for everybody,” she said. “So that’s something we’re really proud of.”

The Resilience lunar lander is pictured at ispace’s facility in Japan last year. The company’s small Tenacious rover is visible on the upper left part of the spacecraft. Credit: ispace

The Blue Ghost and Resilience landers will take different paths toward the Moon.

Firefly’s Blue Ghost will spend about 25 days in Earth orbit, then four days in transit to the Moon. After Blue Ghost enters lunar orbit, Firefly’s ground team will verify the readiness of the lander’s propulsion and navigation systems and execute several thruster burns to set up for landing.

Blue Ghost’s final descent to the Moon is tentatively scheduled for March 2. The target landing site is in Mare Crisium, an ancient 350-mile-wide (560-kilometer) impact basin in the northeast part of the near side of the Moon.

After touchdown, Blue Ghost will operate for about 14 days (one entire lunar day). The instruments aboard Firefly’s lander include a subsurface drill, an X-ray imager, and an experimental electrodynamic dust shield to test methods of keeping troublesome lunar dust from accumulating on sensitive spacecraft components.

The Resilience lander from ispace will take four to five months to reach the Moon. It carries several intriguing tech demo experiments, including a water electrolyzer provided by a Japanese company named Takasago Thermal Engineering. This demonstration will test equipment that future lunar missions could use to convert the Moon’s water ice resources into electricity and rocket fuel.

The lander will also deploy a “micro-rover” named Tenacious, developed by an ispace subsidiary in Luxembourg. The Tenacious rover will attempt to scoop up lunar soil and capture high-definition imagery of the Moon.

Ron Garan, CEO of ispace’s US-based subsidiary, told Ars that this mission is “pivotal” for the company.

“We were not fully successful on our first mission,” Garan said in an interview. “It was an amazing accomplishment, even though we didn’t have a soft landing… Although the hardware worked flawlessly, exactly as it was supposed to, we did have some lessons learned in the software department. The fixes to prevent what happened on the first mission from happening on the second mission were fairly straightforward, so that boosts our confidence.”

The ispace subsidiary led by Garan, a former NASA astronaut, is based in Colorado. While the Resilience lander launched Wednesday is not part of the CLPS program, the company will build an upgraded lander for a future CLPS mission for NASA, led by Draper Laboratory.

“I think the fact that we have two lunar landers on the same rocket for the first time in history is pretty substantial,” Garan said. “I think we all are rooting for each other.”

Investors need to see more successes with commercial lunar landers to fully realize the market’s potential, Garan said.

“That market, right now, is very nascent. It’s very, very immature. And one of the reasons for that is that it’s very difficult for companies that are contemplating making investments on equipment, experiments, etc., to put on the lunar surface and lunar orbit,” Garan said. “It’s very difficult to make those investments, especially if they’re long-term investments, because there really hasn’t been a proof of concept yet.”

“So every time we have a success, that makes it more likely that these companies that will serve as the foundation of a commercial lunar market movement will be able to make those investments,” Garan said. “Conversely, every time we have a failure, the opposite happens.”


Demystifying data fabrics – bridging the gap between data sources and workloads

The term “data fabric” is used across the tech industry, yet its definition and implementation can vary. I have seen this across vendors: in autumn last year, British Telecom (BT) talked about their data fabric at an analyst event; meanwhile, in storage, NetApp has been re-orienting their brand to intelligent infrastructure but was previously using the term. Application platform vendor Appian has a data fabric product, and database provider MongoDB has also been talking about data fabrics and similar ideas. 

At its core, a data fabric is a unified architecture that abstracts and integrates disparate data sources to create a seamless data layer. The principle is to create a unified, synchronized layer between disparate sources of data and everything that needs access to it: your applications, your workloads, and, increasingly, your AI algorithms or learning engines.

There are plenty of reasons to want such an overlay. The data fabric acts as a generalized integration layer, plugging into different data sources and adding advanced capabilities on top, such as giving applications, workloads, and models access to those sources while keeping them synchronized.
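
To make the idea concrete, here is a minimal, hypothetical sketch of such an integration layer in Python; the class and method names are illustrative only and don't correspond to any vendor's product:

```python
from abc import ABC, abstractmethod
from typing import Any

class DataSource(ABC):
    """One backing store: a database, an object store, a SaaS API, etc."""

    @abstractmethod
    def fetch(self, query: dict[str, Any]) -> list[dict[str, Any]]:
        ...

class DataFabric:
    """A unified access layer: workloads query the fabric, not the sources."""

    def __init__(self) -> None:
        self._sources: dict[str, DataSource] = {}

    def register(self, name: str, source: DataSource) -> None:
        self._sources[name] = source

    def query(self, query: dict[str, Any]) -> list[dict[str, Any]]:
        # Fan the query out to every registered source and merge the results,
        # so callers never need to know where the data physically lives.
        results: list[dict[str, Any]] = []
        for source in self._sources.values():
            results.extend(source.fetch(query))
        return results
```

A real fabric would layer synchronization, metadata management, and access control on top, but the abstraction boundary is the same: workloads talk to the fabric, never to the individual sources.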

So far, so good. The challenge, however, is that we have a gap between the principle of a data fabric and its actual implementation. People are using the term to represent different things. To return to our four examples:

  • BT defines data fabric as a network-level overlay designed to optimize data transmission across long distances.
  • NetApp’s interpretation (even with the term intelligent data infrastructure) emphasizes storage efficiency and centralized management.
  • Appian positions its data fabric product as a tool for unifying data at the application layer, enabling faster development and customization of user-facing tools. 
  • MongoDB (and other structured data solution providers) consider data fabric principles in the context of data management infrastructure.

How do we cut through all of this? One answer is to accept that we can approach it from multiple angles. You can talk about data fabric conceptually—recognizing the need to bring together data sources—but without overreaching. You don’t need a universal “uber-fabric” that covers absolutely everything. Instead, focus on the specific data you need to manage.

If we rewind a couple of decades, we can see similarities with the principles of service-oriented architecture, which looked to decouple service provision from database systems. Back then, we discussed the difference between services, processes, and data. The same applies now: you can request a service or request data as a service, focusing on what’s needed for your workload. Create, read, update and delete remain the most straightforward of data services!
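
In that spirit, a data service can be as simple as a CRUD facade that hides the backing store. This is an illustrative sketch with hypothetical names, not any particular product's API:

```python
import uuid

class CustomerDataService:
    """Illustrative 'data as a service' facade: callers get create/read/
    update/delete operations, never a direct database connection."""

    def __init__(self) -> None:
        self._store: dict[str, dict] = {}  # stand-in for the real backend

    def create(self, record: dict) -> str:
        record_id = str(uuid.uuid4())
        self._store[record_id] = record
        return record_id

    def read(self, record_id: str) -> dict | None:
        return self._store.get(record_id)

    def update(self, record_id: str, fields: dict) -> None:
        self._store[record_id].update(fields)

    def delete(self, record_id: str) -> None:
        self._store.pop(record_id, None)
```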

I am also reminded of the origins of network acceleration, which used caching to speed up data transfers by holding versions of data locally rather than repeatedly accessing the source. Akamai built its business on transferring unstructured content like music and films efficiently over long distances.
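
The caching principle is easy to sketch: serve a local copy while it is fresh, and only go back to the origin on a miss. This toy read-through cache is my own illustration of the idea, not a description of how Akamai's systems actually work:

```python
import time
from typing import Callable

class EdgeCache:
    """Toy read-through cache: keep local copies to avoid repeatedly
    fetching the same content from a distant origin."""

    def __init__(self, fetch_from_origin: Callable[[str], bytes],
                 ttl_seconds: float = 300.0) -> None:
        self._fetch = fetch_from_origin
        self._ttl = ttl_seconds
        self._entries: dict[str, tuple[float, bytes]] = {}

    def get(self, key: str) -> bytes:
        now = time.monotonic()
        entry = self._entries.get(key)
        if entry is not None and now - entry[0] < self._ttl:
            return entry[1]              # cache hit: no origin round-trip
        data = self._fetch(key)          # cache miss: go back to the origin
        self._entries[key] = (now, data)
        return data
```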

That’s not to suggest data fabrics are reinventing the wheel. We are in a different (cloud-based) world technologically; plus, they bring new aspects, not least around metadata management, lineage tracking, compliance and security features. These are especially critical for AI workloads, where data governance, quality and provenance directly impact model performance and trustworthiness.

If you are considering deploying a data fabric, the best starting point is to think about what you want the data for. Not only will this help orient you towards what kind of data fabric might be the most appropriate, but this approach also helps avoid the trap of trying to manage all the data in the world. Instead, you can prioritize the most valuable subset of data and consider what level of data fabric works best for your needs:

  1. Network level: To integrate data across multi-cloud, on-premises, and edge environments.
  2. Infrastructure level: If your data is centralized with one storage vendor, focus on the storage layer to serve coherent data pools.
  3. Application level: To pull together disparate datasets for specific applications or platforms.

For example, in BT’s case, they’ve found internal value in using their data fabric to consolidate data from multiple sources. This reduces duplication and helps streamline operations, making data management more efficient. It’s clearly a useful tool for consolidating silos and improving application rationalization.

In the end, data fabric isn’t a monolithic, one-size-fits-all solution. It’s a strategic conceptual layer, backed up by products and features, that you can apply where it makes the most sense to add flexibility and improve data delivery. Deploying a data fabric isn’t a “set it and forget it” exercise: it requires ongoing effort to scope, deploy, and maintain—not only the software itself but also the configuration and integration of data sources.

While a data fabric can exist conceptually in multiple places, it’s important not to replicate delivery efforts unnecessarily. So, whether you’re pulling data together across the network, within infrastructure, or at the application level, the principles remain the same: use it where it’s most appropriate for your needs, and enable it to evolve with the data it serves.


After CEO exit, Sonos gets rid of its chief product officer, too

A day after announcing that CEO Patrick Spence is departing the company, Sonos revealed that chief product officer Maxime Bouvat-Merlin is also leaving. Bouvat-Merlin had held the role since 2023.

As first reported by Bloomberg, Sonos will not fill the chief product officer role. Instead, Tom Conrad, the interim CEO Sonos announced yesterday, will take on the role’s responsibilities. In an email to staff cited by Bloomberg (you can read the letter in its entirety at The Verge), Conrad explained:

With my stepping in as CEO, the board, Max, and I have agreed that my background makes the chief product officer role redundant. Therefore, Max’s role is being eliminated and the product organization will report directly to me. I’ve asked Max to advise me over the next period to ensure a smooth transition and I am grateful that he’s agreed to do that.

In May, Sonos released an update to its app that led to customers, many of them long-time users, revolting over broken features, like accessibility capabilities and the ability to set timers. Sonos expects that remedying the app and its reputation will cost it $20 million to $30 million.

As head of the company, Spence received much of the blame and was also criticized for not apologizing for the problems until July. However, numerous reports have also placed blame on Bouvat-Merlin.


maker-of-weight-loss-drugs-to-ask-trump-to-pause-price-negotiations:-report

Maker of weight-loss drugs to ask Trump to pause price negotiations: Report

Popular prescriptions

For now, Medicare does not cover drugs prescribed specifically for weight loss, but it will cover GLP-1 class drugs prescribed for other conditions, such as Type 2 diabetes. Wegovy, for example, is covered when prescribed to reduce the risk of heart attack and stroke in adults who have obesity or are overweight. In November, however, the Biden administration proposed reinterpreting Medicare prescription-coverage rules to allow coverage of “anti-obesity medications.”

Such a move is reportedly part of the argument Lilly’s CEO plans to bring to the Trump administration. Rather than relying on drug price negotiations to cut health care costs, Ricks aims to play up how covering GLP-1 drugs now could reduce long-term costs by improving people’s overall health. The pitch would presumably be aimed at Mehmet Oz, the TV presenter and heart surgeon Trump has tapped to run the Centers for Medicare and Medicaid Services.

“My argument to Mehmet Oz is that if you want to protect Medicare costs in 10 years, have [the Affordable Care Act] and Medicare plans list these drugs now,” Ricks said to Bloomberg. “We know so much about how much cost savings there will be downstream in heart disease and other conditions.”

An October report from the Congressional Budget Office strongly disputed that claim, however. The CBO estimated that the direct cost of Medicare coverage for anti-obesity drugs between 2026 and 2034 would be nearly $39 billion, while the savings from improved health would total just over $3 billion, leaving a net cost to US taxpayers of about $35.5 billion.


meta-to-cut-5%-of-employees-deemed-unfit-for-zuckerberg’s-ai-fueled-future

Meta to cut 5% of employees deemed unfit for Zuckerberg’s AI-fueled future

Anticipating that 2025 will be an “intense year” requiring rapid innovation, Mark Zuckerberg reportedly announced that Meta would be cutting 5 percent of its workforce—targeting “lowest performers.”

Bloomberg reviewed the internal memo explaining the cuts, which was posted to Meta’s internal Workplace forum Tuesday. In it, Zuckerberg confirmed that Meta was shifting its strategy to “move out low performers faster” so that Meta can hire new talent to fill those vacancies this year.

“I’ve decided to raise the bar on performance management,” Zuckerberg said. “We typically manage out people who aren’t meeting expectations over the course of a year, but now we’re going to do more extensive performance-based cuts during this cycle.”

Cuts will likely affect more than 3,600 employees, or 5 percent of Meta’s most recent headcount, which totaled about 72,000 in September. It may not be as straightforward as letting go of everyone with an unsatisfactory performance review, though: Zuckerberg said that any employee not currently meeting expectations could be spared if Meta is “optimistic about their future performance,” The Wall Street Journal reported.

Any employees affected will be notified by February 10 and receive “generous severance,” Zuckerberg’s memo promised.

This is the biggest round of cuts at Meta since 2023, when the company laid off 10,000 employees during what Zuckerberg dubbed the “year of efficiency.” Those layoffs followed a prior round in which 11,000 people lost their jobs and Zuckerberg concluded that “leaner is better.” He told employees in 2023 that a “surprising result” of reducing the workforce was “that many things have gone faster.”

“A leaner org will execute its highest priorities faster,” Zuckerberg wrote in 2023. “People will be more productive, and their work will be more fun and fulfilling. We will become an even greater magnet for the most talented people. That’s why in our Year of Efficiency, we are focused on canceling projects that are duplicative or lower priority and making every organization as lean as possible.”


buyers-of-razer’s-bogus-“n95”-zephyr-masks-get-over-$1-million-in-refunds

Buyers of Razer’s bogus “N95” Zephyr masks get over $1 million in refunds

“The Razer Zephyr was conceived to offer a different and innovative face covering option for the community,” the company said at the time. “The FTC’s claims against Razer concerned limited portions of some of the statements relating to the Zephyr. More than two years ago, Razer proactively notified customers that the Zephyr was not a N95 mask, stopped sales, and refunded customers.”

FTC: Only 6 percent of US purchases were refunded

The FTC lawsuit casts doubt on how available those earlier refunds really were, saying that Razer “allegedly implemented” a refund policy. Razer provided refunds for less than 6 percent of Zephyr purchases in the US, the FTC said.

“While Defendants purport to have instituted a policy of fully refunding consumers concerned about the filters on January 9, 2022, Defendants did not promote that policy in its January emails to consumers or on its website,” the FTC said.

That’s a reference to a Razer email sent to mask buyers acknowledging that the mask “is not a medical device nor certified as an N95 mask.” The FTC said the Razer email to consumers “did not invite or otherwise indicate that consumers who believed they were purchasing an N95 mask when they purchased the Zephyr could request a refund from Razer.”

Razer customers who sought refunds ran into several kinds of problems, the FTC said. Some “were told that they could not receive a refund because they were outside of Razer’s standard 14-day return policy,” while others “were told that they could not receive a full refund because they had used the disposable filters provided with the Zephyr when they bought the Zephyr in October 2022 or because the Zephyr was no longer sealed and unused,” the lawsuit said.

“Numerous customers were deterred from, or confused regarding their ability to, obtain full refunds because of statements by Defendants’ customer service representatives that they were ineligible for full refunds,” the lawsuit said.

We contacted Razer today and will update this article if it provides further comment.


how-gm’s-super-cruise-went-from-limo-driving-to-lane-changes-and-towing

How GM’s Super Cruise went from limo driving to lane changes and towing

The Unified Lateral Controller

The algorithm that handles all of that is called the Unified Lateral Controller. “So it’s a single software stack, but it is also modular to adapt with different vehicle configurations, with different driving scenarios, different maneuvers,” Zarringhalam said.

“Let’s imagine that you’re driving a Super Cruise vehicle, and you indicate to the left, or the system automatically decides to make a lane change to the left, and then, for whatever reason, the driver decides that they want to go back, mid-maneuver; they want to go back to the original lane. So you can just indicate to the opposite side, in this case, the right-hand side. Under the hood, in this scenario, everything is jumping. Our target trajectory is jumping from a left-lane maneuver to a right turn. The turn can be very sharp. There could be other objects that narrow the envelope of operation that you’re allowed to function in,” Zarringhalam said.
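
GM hasn’t published the Unified Lateral Controller’s internals, so the following is only a toy sketch of the reversal behavior Zarringhalam describes: when the driver indicates opposite to an in-progress lane change, the target trajectory “jumps” and is replanned from the vehicle’s current state rather than the maneuver’s start point. The class, the lane-width constant, and the straight-line planner are all assumptions for illustration, not GM’s code.

```python
# Toy sketch of a mid-maneuver lane-change reversal. All names, the
# lane-width constant, and the straight-line "planner" are assumptions
# for illustration; GM has not published this controller's design.
LANE_WIDTH_M = 3.7  # assumed typical highway lane width

class ToyLateralController:
    def __init__(self, lane: int):
        self.origin_lane = lane   # lane the current maneuver started from
        self.target_lane = lane   # lane the controller is steering toward
        self.direction = 0        # -1 = left, +1 = right, 0 = no maneuver

    def indicate(self, direction: int, lateral_pos_m: float) -> list:
        """Driver (or the system) indicates a lane change."""
        if self.direction and direction == -self.direction:
            # Opposite indication mid-maneuver: the target trajectory
            # "jumps" back to the original lane.
            self.target_lane = self.origin_lane
        else:
            self.origin_lane = self.target_lane
            self.target_lane += direction
        self.direction = direction
        # Replan from the vehicle's *current* lateral position, not the
        # maneuver's start point, so the reversal stays smooth.
        return self.plan(lateral_pos_m)

    def plan(self, lateral_pos_m: float, steps: int = 10) -> list:
        goal = self.target_lane * LANE_WIDTH_M
        # Linear interpolation stands in for a real trajectory optimizer
        # with comfort limits and obstacle constraints.
        return [lateral_pos_m + (goal - lateral_pos_m) * i / steps
                for i in range(1, steps + 1)]

# Start in lane 1, begin a leftward change, then reverse it mid-maneuver.
ctrl = ToyLateralController(lane=1)
ctrl.indicate(-1, lateral_pos_m=1 * LANE_WIDTH_M)  # begin moving left
ctrl.indicate(+1, lateral_pos_m=2.5)               # abort: replan to lane 1
```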

Again, that behavior has to be consistent and predictable, whether it’s below freezing or in the middle of a heatwave, and things like tire wear must also be taken into account, as must the presence of a trailer, which could be anything from a bike rack with wheels to a three-axle trailer.

“As soon as we detect that the trailer is attached, we run several real-time algorithms—trailer inertial parameters, trailer mass, trailer configuration, even how many axles we have, and the control adapts itself to execute lane turning and keep both the vehicle and the trailer at the center of the road,” Zarringhalam said.

That’s done automatically without the driver having to input the information (obviating the problem of someone entering the wrong details), “and if you change the loading or the trailer configuration, even mid-drive—if you pull over, load more weight and continue driving on the same road with Super Cruise active—these learnings happen in a matter of seconds,” Zarringhalam said.
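
Zarringhalam doesn’t spell out the math, but learning a trailer’s parameters “in a matter of seconds” from ordinary driving signals is a classic online system-identification problem. Below is a hedged sketch using recursive least squares, a standard technique for this kind of estimation (not GM’s actual algorithm), with a two-parameter linear model and simulated measurements standing in for real sensor data.

```python
import numpy as np

class RLSEstimator:
    """Recursive least squares: refine parameter estimates as each
    new measurement streams in, instead of refitting from scratch."""

    def __init__(self, n_params: int, forgetting: float = 0.99):
        self.theta = np.zeros(n_params)  # current parameter estimates
        self.P = np.eye(n_params) * 1e3  # estimate covariance (high = unsure)
        self.lam = forgetting            # <1 discounts old samples, so
                                         # mid-drive changes are tracked

    def update(self, phi: np.ndarray, y: float) -> np.ndarray:
        # phi: regressor of measured signals; y: measured response.
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        self.theta = self.theta + gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

# Simulated stand-in for real signals: two unknown "trailer" parameters.
rng = np.random.default_rng(0)
true_params = np.array([0.8, -0.3])
est = RLSEstimator(n_params=2)
for _ in range(100):                                 # ~seconds of samples
    phi = rng.normal(size=2)                         # measured inputs
    y = phi @ true_params + rng.normal(scale=0.01)   # noisy response
    estimate = est.update(phi, y)
print(estimate)  # converges to roughly [0.8, -0.3]
```

The forgetting factor is what lets the estimates re-converge quickly if the load changes mid-drive, which matches the behavior described above.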
