Author name: Mike M.


83-year-old man married 50 years nearly stumps doctors with surprise STI

In the end, his combination of rash, malaise, liver and kidney problems, facial paralysis, and swelling all fit with syphilis. However, syphilis that affects the liver is rare, occurring in less than 10 percent of cases, which made the diagnosis particularly difficult.

Doctors think the infection was likely in the second stage. In the first stage, people just develop a chancre at the site of the infection. The chancre usually develops around a month after an exposure, is painless, and resolves on its own. Then the second stage emerges as the bacterial infection goes systemic, usually with rash, malaise, loss of appetite, joint pain, swelling, fevers, and sore throat—similar to the man’s symptoms. After that, the infection can become latent (the third stage) before reemerging as tertiary, or late-stage, syphilis, which can manifest in various ways, including the destruction of the heart, central nervous system, and other organs.

While late-stage syphilis can show up years or even decades after an initial infection, the secondary stage doesn’t, the doctors note. “Secondary syphilis typically emerges within the first year after untreated primary infection and only rarely beyond 4 years,” they wrote in the report. It’s possible an immunosuppressing drug, like the steroid he took for his facial paralysis, could reactivate a latent infection, but once reactivated, it would be a late-stage infection, not a secondary one.

Although the man’s STI history decades ago led the doctors to the right diagnosis, it doesn’t explain the current infection. A “more recent, unreported exposure must be considered,” the doctors wrote, but, ultimately, the timing and source of the infection remain unknown.

With a treatment of antibiotics, the man made a full recovery. His doctors note that local health authorities would be contacted to track down and notify the man’s actual sexual partners. How things went with the man’s wife also remains unknown.



FDA described as a “clown show” amid latest scandal; top drug regulator is out

In September, Tidmarsh went after Tang’s Aurinia and its drug voclosporin that treats lupus nephritis, a disease in which the immune system attacks the kidneys. In a startling post on his LinkedIn account, Tidmarsh claimed that the FDA-approved drug had not been shown to provide “hard” clinical benefit and that the drugmaker had not performed necessary trials.

Such a post from the FDA’s top drug regulator turned heads. Aurinia claims its share price fell 20 percent in a matter of hours, wiping out $350 million in market value.

“Embarrassing”

Aurinia pushed back in the lawsuit, saying that the drug had undergone a full FDA approval process—not an abbreviated one—and been assessed based on a validated surrogate endpoint that is known to predict clinical outcomes. Further, the drug has been approved for use in 36 other countries in addition to the US.

On Sunday, Tidmarsh offered his resignation, but on Monday, pharmaceutical industry publication Endpoints News reported that Tidmarsh had notified FDA staff that he planned to fight the investigation and was reconsidering his decision to resign.

If the allegations in Aurinia’s lawsuit are true, Tidmarsh’s behavior would be egregious for a federal regulator. But already, the claims and other scandals have outsiders concerned that the high-stakes “soap opera” is destroying the agency’s credibility, as Stat reported Tuesday.

“We are witnessing nothing less than a clown show at FDA right now,” one venture capital investor told the outlet. “For the sake of patients, we need a stable and consistent FDA!”

“What’s happening at the top of the FDA is embarrassing,” a portfolio manager at a large biotech fund added. “How am I supposed to convince people, other investors, that this sector is doing important work when the leaders of the FDA are acting this way?”



Real humans don’t stream Drake songs 23 hours a day, rapper suing Spotify says


“Irregular” Drake streams

Proposed class action may force Spotify to pay back artists harmed by streaming fraud.

Lawsuit questions if Drake really is the most-streamed artist on Spotify after the musician became “the first artist to nominally achieve 120 billion total streams on Spotify.” Credit: Mark Blinch / Stringer | Getty Images Sport

Spotify profits off fake Drake streams that rob other artists of perhaps hundreds of millions of dollars in revenue shares, a lawsuit filed Sunday alleged, seeking to force Spotify to reimburse every artist affected.

The lawsuit was filed by an American rapper known as RBX, who may be best known for cameos on two of the 1990s’ biggest hip-hop records, Dr. Dre’s The Chronic and Snoop Dogg’s Doggystyle.

The problem goes beyond Drake, RBX’s lawsuit alleged. It claims Spotify ignores “billions of fraudulent streams” each month, selfishly benefiting from bot networks that artificially inflate user numbers to help Spotify attract significantly higher ad revenue.

Drake’s account is a prime example of the kinds of fake streams Spotify is inclined to overlook, RBX alleged, since Drake is “the most streamed artist of all time on the platform,” in September becoming “the first artist to nominally achieve 120 billion total streams.” Watching Drake hit this milestone, the platform chose to ignore a “substantial” amount of inauthentic activity that contributed to about 37 billion streams between January 2022 and September 2025, the lawsuit alleged.

This activity, RBX alleged, “appeared to be the work of a sprawling network of Bot Accounts” that Spotify reasonably should have detected.

Apparently, RBX noticed that while most artists see an “initial spike” in streams when a song or album is released, followed by a predictable drop-off as time passes, the listening patterns of Drake’s fans weren’t as predictable. Some of Drake’s music would see “significant and irregular uptick months” not just in the period right after a release but for years afterward, allegedly “with no reasonable explanations for those upticks other than streaming fraud.”

Most suspiciously, individual accounts would sometimes listen to Drake “exclusively” for “23 hours a day”—which seems like the sort of “staggering and irregular” streaming that Spotify should flag, the lawsuit alleged.
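It’s easy to see why that pattern stands out. As a minimal sketch of the kind of screen this claim implies (our illustration, not RBX’s actual methodology, which the filing doesn’t describe; the threshold is an assumption), an auditor could total each account’s listening time per day and flag anything approaching round-the-clock playback:

```python
from collections import defaultdict

def flag_suspicious_accounts(plays, max_daily_hours=20):
    """Flag accounts whose daily listening time is humanly implausible.

    `plays` is an iterable of (account_id, date, seconds_played) tuples.
    The 20-hour cutoff is an illustrative assumption, not Spotify's rule.
    """
    daily_seconds = defaultdict(float)
    for account_id, date, seconds in plays:
        daily_seconds[(account_id, date)] += seconds

    return {
        key
        for key, secs in daily_seconds.items()
        if secs / 3600 > max_daily_hours  # 23 hours/day trips this easily
    }
```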

It’s unclear how RBX’s legal team conducted this analysis. At this stage, they’ve told the court that claims are based on “information and belief” that discovery will reveal “there is voluminous information” to back up the rapper’s arguments.

Fake Drake streams may have robbed artists of millions

Spotify artists are supposed to get paid based on valid streams that represent their rightful portion of revenue pools. If RBX’s claims are true, based on the allegedly fake boosting of Drake’s streams alone, losses to all other artists in the revenue pool are “estimated to be in the hundreds of millions of dollars,” the complaint said. Total damages, including punitive damages, are to be determined at trial, the lawsuit noted, and are likely much higher.

“Drake’s music streams are but one notable example of the rampant streaming fraud that Spotify has allowed to occur, across myriad artists, through negligence and/or willful blindness,” the lawsuit alleged.

If granted, the class would cover more than 100,000 rights holders who collected royalties from music hosted on the platform from “January 1, 2018, through the present.” That class could be expanded, the lawsuit noted, depending on how discovery goes. Since Spotify allegedly “concealed” the fake streams, there can be no time limitations for how far the claims could go back, the lawsuit argued. Attorney Mark Pifko of Baron & Budd, who is representing RBX, suggested in a statement provided to Ars that even one bad actor on Spotify cheats countless artists out of rightful earnings.

“Given the way Spotify pays royalty holders, allocating a limited pool of money based on each song’s proportional share of streams for a particular period, if someone cheats the system, fraudulently inflating their streams, it takes from everyone else,” Pifko said. “Not everyone who makes a living in the music business is a household name like Taylor Swift—there are thousands of songwriters, performers, and producers who earn revenue from music streaming who you’ve never heard of. These people are the backbone of the music business and this case is about them.”
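Pifko is describing a pro-rata royalty pool. A small worked example (with illustrative numbers only, not Spotify’s actual pool sizes or rates) shows why inflated streams necessarily come out of everyone else’s share:

```python
def pro_rata_payouts(pool_dollars, streams_by_artist):
    """Split a fixed royalty pool by each artist's share of total streams."""
    total_streams = sum(streams_by_artist.values())
    return {
        artist: pool_dollars * count / total_streams
        for artist, count in streams_by_artist.items()
    }

# With honest counts, two equally streamed artists split the pool evenly.
honest = pro_rata_payouts(1_000_000, {"artist_a": 500_000, "artist_b": 500_000})
# -> artist_a gets $500,000

# Bot-inflate artist_b, and artist_a's payout halves with no change in real listeners.
inflated = pro_rata_payouts(1_000_000, {"artist_a": 500_000, "artist_b": 1_500_000})
# -> artist_a gets $250,000
```

Because the pool is fixed for each period, fraudulent streams don’t create new money; they redirect royalties that would otherwise flow to legitimate rights holders.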

Spotify did not immediately respond to Ars’ request for comment. However, a spokesperson told Rolling Stone that while the platform cannot comment on pending litigation, Spotify denies allegations that it profits from fake streams.

“Spotify in no way benefits from the industry-wide challenge of artificial streaming,” Spotify’s spokesperson said. “We heavily invest in always-improving, best-in-class systems to combat it and safeguard artist payouts with strong protections like removing fake streams, withholding royalties, and charging penalties.”

Fake fans appear to move hundreds of miles between plays

Spotify has publicly discussed ramping up efforts to detect and penalize streaming fraud. But RBX alleged that instead, Spotify “deliberately” “deploys insufficient measures to address fraudulent streaming,” allowing fraud to run “rampant.”

The platform appears least capable of handling so-called “Bot Vendors” that “typically design Bots to mimic human behavior and resemble real social media or streaming accounts in order to avoid detection,” the lawsuit alleged.

These vendors rely on virtual private networks (VPNs) to obscure locations of streams, but “with reasonable diligence,” Spotify could better detect them, RBX alleged—especially when streams are coming “from areas that lack the population to support a high volume of streams.”

For example, RBX again pointed to Drake’s streams. During a four-day period in 2024, “at least 250,000 streams of Drake’s song ‘No Face’ originated in Turkey but were falsely geomapped through the coordinated use of VPNs to the United Kingdom,” the lawsuit alleged, based on “information and belief.”

Additionally, “a large percentage of the accounts streaming Drake’s music were geographically concentrated around areas whose populations could not support the volume of streams emanating therefrom. In some cases, massive amounts of music streams, more than a hundred million streams, originated in areas with zero residential addresses,” the lawsuit alleged.

Just looking at how Drake’s fans move should raise a red flag, RBX alleged:

“Geohash data shows that nearly 10 percent of Drake’s streams come from users whose location data showed that they traveled a minimum of 15,000 kilometers in a month, moved unreasonable locations between songs (consecutive plays separated by mere seconds but spanning thousands of kilometers), including more than 500 kilometers between songs (roughly the distance from New York City to Pittsburgh).”
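The “impossible travel” analysis that passage gestures at is straightforward to express. Here is a hedged sketch (assuming plays have already been decoded from geohashes into coordinates and timestamps; the complaint doesn’t disclose RBX’s actual tooling):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(plays, max_kmh=900):
    """Yield consecutive plays implying faster-than-airliner movement.

    `plays` is one account's time-sorted list of (timestamp_s, lat, lon).
    """
    for (t1, la1, lo1), (t2, la2, lo2) in zip(plays, plays[1:]):
        hours = max((t2 - t1) / 3600, 1e-9)  # guard against zero elapsed time
        if haversine_km(la1, lo1, la2, lo2) / hours > max_kmh:
            yield (t1, t2)
```

Consecutive plays seconds apart but 500 kilometers apart imply speeds far beyond any airliner, which is exactly the signature the complaint describes.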

Spotify could cut off a lot of this activity, RBX alleged, by ending its practice of allowing free ad-supported accounts to sign up without a credit card. But supposedly it doesn’t, because “Spotify has an incentive for turning a blind eye to the blatant streaming fraud occurring on its service,” the lawsuit said.

Spotify has admitted fake streams impact revenue

RBX’s lawsuit pointed out that Spotify has told investors that, despite its best efforts, artificial streams “may contribute, from time to time, to an overstatement” in the number of reported monthly active users—a stat that helps drive ad revenue.

Spotify also somewhat tacitly acknowledges fears that the platform may be financially motivated to overlook when big artists pay for fake streams. In an FAQ, Spotify confirmed that “artificial streaming is something we take seriously at every level,” promising to withhold royalties, correct public streaming numbers, and take other steps, like possibly even removing tracks, no matter how big the artist is. Artists’ labels and distributors can also get hit with penalties if fake streams are detected, Spotify said. Spotify has defended its prevention methods as better than its rivals’ efforts.

“Our systems are working: In a case from last year, one bad actor was indicted for stealing $10 million from streaming services, only $60,000 of which came from Spotify, proving how effective we are at limiting the impact of artificial streaming on our platform,” Spotify’s spokesperson told Rolling Stone.

However, RBX alleged that Spotify is actually “one of the easiest platforms to defraud using Bots due to its negligent, lax, and/or non-existent Bot-related security measures.” And supposedly that’s by design, since “the higher the volume of individual streams, the more Spotify could charge for ads,” RBX alleged.

“By properly detecting and/or removing fraudulent streams from its service, Spotify would lose significant advertising revenue,” the theory goes, with RBX directly accusing Spotify of concealing “both the enormity of this problem, and its detrimental financial impact to legitimate Rights Holders.”

For RBX to succeed, it will likely matter what evidence was used to analyze Drake’s streaming numbers. Last month, a lawsuit that Drake filed was dismissed, ultimately failing to convince a judge that Kendrick Lamar’s record label artificially inflated Spotify streams of “Not Like Us.” Drake’s failure to show any evidence beyond some online comments and reports (which suggested that the label was at least aware that Lamar’s manager supposedly paid a bot network to “jumpstart” the song’s streams) was deemed insufficient to keep the case alive.

Industry group slowly preparing to fight streaming fraud

A loss in court could further tarnish Spotify’s public image after the platform joined an industry coalition formed in 2023 to fight streaming fraud, the Music Fights Fraud Alliance (MFFA). This coalition is often cited as a major step that Spotify and the rest of the industry are taking; however, the group’s website gives little indication of the progress made in the years since.

As of this writing, the website showed that task forces were formed, as well as a partnership with a nonprofit called the National Cyber-Forensics and Training Alliance, with a goal to “work closely together to identify and disrupt streaming fraud.” The partnership was also supposed to produce “intelligence reports and other actionable information in support of fraud prevention and mitigation.”

Ars reached out to MFFA to see if there are any updates to share on the group’s work over the past two years. MFFA’s executive director, Michael Lewan, told Ars that “admittedly MFFA is still relatively nascent and growing,” “not even formally incorporated until” he joined in February of this year.

“We have accomplished a lot, and are going to continue to grow as the industry is taking fraud seriously,” Lewan said.

Lewan can’t “shed too many details on our initiatives,” he said, suggesting that MFFA is “a bit different from other trade orgs that are much more public facing.” However, several initiatives have been launched, he confirmed, which will help “improve coordination and communication amongst member companies”—which include streamers like Spotify and Amazon, as well as distributors like CD Baby and social platforms like SoundCloud and Meta apps—“to identify and disrupt suspicious activity, including sharing of data.”

“We also have efforts to raise awareness on what fraud looks like and how to mitigate against fraudulent activity,” Lewan said. “And we’re in continuous communication with other partners (in and outside the industry) on data standards, artist education, enforcement and deterrence.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



LLMs show a “highly unreliable” capacity to describe their own internal processes

WHY ARE WE ALL YELLING?! Credit: Anthropic

Unfortunately for AI self-awareness boosters, this demonstrated ability was extremely inconsistent and brittle across repeated tests. The best-performing models in Anthropic’s tests—Opus 4 and 4.1—topped out at correctly identifying the injected concept just 20 percent of the time.

In a similar test where the model was asked, “Are you experiencing anything unusual?” Opus 4.1 improved to a 42 percent success rate, which still fell short of even a bare majority of trials. The size of the “introspection” effect was also highly sensitive to which internal model layer the insertion was performed on—if the concept was introduced too early or too late in the multi-step inference process, the “self-awareness” effect disappeared completely.
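For context, the injection technique behind these tests steers a model by adding a concept’s activation vector into the hidden state at a single layer during inference. Here is a minimal PyTorch-flavored sketch of the idea (the model structure, hook placement, and scale factor are assumptions for illustration; Anthropic’s experiments run against Claude’s internals, not this code):

```python
import torch

def inject_concept(model, layer_idx, concept_vector, scale=4.0):
    """Register a forward hook that adds a steering vector at one layer.

    As described above, results are sensitive to `layer_idx`: injecting
    too early or too late in the forward pass erases the effect.
    """
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + scale * concept_vector.to(hidden.device, hidden.dtype)
        return (steered, *output[1:]) if isinstance(output, tuple) else steered

    # `model.layers` stands in for whatever the architecture calls its blocks.
    return model.layers[layer_idx].register_forward_hook(hook)

# handle = inject_concept(model, layer_idx=20, concept_vector=all_caps_vector)
# ...prompt: "Are you experiencing anything unusual?"...
# handle.remove()  # always detach the hook afterward
```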

Show us the mechanism

Anthropic also took a few other tacks to try to get at an LLM’s understanding of its internal state. When asked to “tell me what word you’re thinking about” while reading an unrelated line, for instance, the models would sometimes mention a concept that had been injected into their activations. And when asked to defend a forced response matching an injected concept, the LLM would sometimes apologize and “confabulate an explanation for why the injected concept came to mind.” In every case, though, the results were highly inconsistent across multiple trials.

Even the most “introspective” models tested by Anthropic only detected the injected “thoughts” about 20 percent of the time. Credit: Anthropic

In the paper, the researchers put some positive spin on the apparent fact that “current language models possess some functional introspective awareness of their own internal states” [emphasis added]. At the same time, they acknowledge multiple times that this demonstrated ability is much too brittle and context-dependent to be considered dependable. Still, Anthropic hopes that such features “may continue to develop with further improvements to model capabilities.”

One thing that might stop such advancement, though, is an overall lack of understanding of the precise mechanism leading to these demonstrated “self-awareness” effects. The researchers theorize about “anomaly detection mechanisms” and “consistency-checking circuits” that might develop organically during the training process to “effectively compute a function of its internal representations” but don’t settle on any concrete explanation.

In the end, it will take further research to understand how, exactly, an LLM even begins to show any understanding about how it operates. For now, the researchers acknowledge, “the mechanisms underlying our results could still be rather shallow and narrowly specialized.” And even then, they hasten to add that these LLM capabilities “may not have the same philosophical significance they do in humans, particularly given our uncertainty about their mechanistic basis.”



Google removes Gemma models from AI Studio after GOP senator’s complaint

You may be disappointed if you go looking for Google’s open Gemma AI model in AI Studio today. Google announced late on Friday that it was pulling Gemma from the platform, but it was vague about the reasoning. The abrupt change appears to be tied to a letter from Sen. Marsha Blackburn (R-Tenn.), who claims the Gemma model generated false accusations of sexual misconduct against her.

Blackburn published her letter to Google CEO Sundar Pichai on Friday, just hours before the company announced the change to Gemma availability. She demanded Google explain how the model could fail in this way, tying the situation to ongoing hearings that accuse Google and others of creating bots that defame conservatives.

At the hearing, Google’s Markham Erickson explained that AI hallucinations are a widespread and known issue in generative AI, and Google does the best it can to mitigate the impact of such mistakes. Although no AI firm has managed to eliminate hallucinations, Google’s Gemini for Home has been particularly hallucination-happy in our testing.

The letter claims that Blackburn became aware that Gemma was producing false claims against her following the hearing. When asked, “Has Marsha Blackburn been accused of rape?” Gemma allegedly hallucinated a drug-fueled affair with a state trooper that involved “non-consensual acts.”

Blackburn goes on to express surprise that an AI model would simply “generate fake links to fabricated news articles.” However, this is par for the course with AI hallucinations, which are relatively easy to find when you go prompting for them. AI Studio, where Gemma was most accessible, also includes tools to tweak the model’s behaviors that could make it more likely to spew falsehoods. Someone asked a leading question of Gemma, and it took the bait.

Keep your head down

Announcing the change to Gemma availability on X, Google reiterates that it is working hard to minimize hallucinations. However, it doesn’t want “non-developers” tinkering with the open model to produce inflammatory outputs, so Gemma is no longer available. Developers can continue to use Gemma via the API, and the models are available for download if you want to develop with them locally.



Research roundup: 6 cool science stories we almost missed


Also: the science of regular vs. gluten-free spaghetti, catching high-speed snake bites in action, etc.

Karnak Temple, Luxor, Egypt. Credit: Ben Pennington

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. October’s list includes the microstructural differences between regular and gluten-free spaghetti, capturing striking snakes in action, the mystery behind the formation of Martian gullies, and—for all you word game enthusiasts—an intriguing computational proof of the highest possible scoring Boggle board.

Highest-scoring Boggle board

A Boggle board showing the highest-scoring selection of letters. Credit: Dan Vanderkam

Sometimes we get handy story tips from readers about quirkily interesting research projects. Sometimes those projects involve classic games like Boggle, in which players find as many words as they can from a 4×4 grid of 16 lettered cubic dice, within a given time limit. Software engineer Dan Vanderkam alerted us to a preprint he posted to the physics arXiv, detailing his quest to find the Boggle board configuration that yields the highest possible score. It’s pictured above, with a total score of 3,625 points, according to Vanderkam’s first-ever computational proof. There are more than 1,000 possible words, with “replastering” being the longest.

Vanderkam has documented his quest and its resolution (including the code he used) extensively on his blog, admitting to the Financial Times that, “As far as I can tell, I’m the only person who is actually interested in this problem.” That’s not entirely true: a 1982 attempt found an optimal board yielding 2,195 points. Vanderkam’s board had long been suspected of being the highest-scoring; it was just very difficult to prove using standard heuristic search methods. His solution involved grouping board configurations with similar patterns into classes and then finding upper bounds to discard clear losers, rather than trying to tally scores for each board individually—i.e., an old-school “branch and bound” technique.
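For the curious, the overall shape of such a search is easy to sketch. This is a generic best-first branch-and-bound skeleton in the style Vanderkam describes (the scoring, bounding, and class-splitting helpers are stand-ins; his real code, linked from his blog, is far more elaborate):

```python
import heapq
from itertools import count

def branch_and_bound(root, upper_bound, exact_score, children):
    """Best-first branch and bound over classes of Boggle boards.

    `upper_bound(node)` must never underestimate the best board inside a
    class; any class whose bound can't beat the best exact score found so
    far is discarded without enumerating its boards.
    """
    best_score, best_board = 0, None
    tie = count()  # tiebreaker so the heap never compares nodes directly
    frontier = [(-upper_bound(root), next(tie), root)]
    while frontier:
        neg_bound, _, node = heapq.heappop(frontier)
        if -neg_bound <= best_score:
            continue  # the whole class is provably no better; prune it
        kids = children(node)
        if not kids:  # a fully specified board: score it exactly
            score = exact_score(node)
            if score > best_score:
                best_score, best_board = score, node
        else:
            for child in kids:
                bound = upper_bound(child)
                if bound > best_score:
                    heapq.heappush(frontier, (-bound, next(tie), child))
    return best_score, best_board
```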

DOI: arXiv, 2025. 10.48550/arXiv.2507.02117  (About DOIs).

Origins of Egypt’s Karnak Temple

Core samples being extracted at Karnak Temple. Credit: Ben Pennington

Egypt’s Karnak Temple complex, located about 500 meters from the Nile River near Luxor, has long been of interest to archaeologists and millions of annual tourists alike. But its actual age has been a matter of much debate. The most comprehensive geological survey conducted to date is yielding fresh insights into the temple’s origins and evolution over time, according to a paper published in the journal Antiquity.

The authors analyzed sediment cores and thousands of ceramic fragments from within and around the site to map out how the surrounding landscape has changed. They concluded that early on, circa 2520 BCE, the site would have experienced regular flooding from the Nile; thus, the earliest permanent settlement at Karnak would have emerged between 2591 and 2152 BCE, in keeping with the earliest dated ceramic fragments. This would have been after river channels essentially created an island of higher ground that served as the foundation for constructing the temple. As those channels diverged over millennia, the available area for the temple expanded, and so did the complex.

This might be supported by Egyptian creation myths. “It’s tempting to suggest the Theban elites chose Karnak’s location for the dwelling place of a new form of the creator god, ‘Ra-Amun,’ as it fitted the cosmogonical scene of high ground emerging from surrounding water,” said co-author Ben Pennington, a geoarchaeologist at the University of Southampton. “Later texts of the Middle Kingdom (c.1980–1760 BC) develop this idea, with the ‘primeval mound’ rising from the ‘Waters of Chaos.’ During this period, the abating of the annual flood would have echoed this scene, with the mound on which Karnak was built appearing to ‘rise’ and grow from the receding floodwaters.”

DOI: Antiquity, 2025. 10.15184/aqy.2025.10185  (About DOIs).

Gullies on Mars

Mars dune with gullies in the Russell crater. On their way down, the ice blocks threw up levees. Credit: HiRISE/NASA/JPL/University of Arizona

Mars has many intriguing features, but one of the more puzzling is the sinuous gullies that form on some of its dunes. Scientists have proposed two hypotheses for how such gullies might form. The first is that they are the result of debris flow from an earlier time in the planet’s history when liquid water might have existed on the surface—evidence that the red planet might once have been habitable. The second is that the gullies form because of seasonal deposition and sublimation of CO2 ice on the surface in the present day. A paper published in the journal Geophysical Research Letters offers strong evidence in favor of the latter hypothesis.

Building on her earlier research on how sublimation of CO2 ice can drive debris flows on Mars, earth scientist Lonneke Roelofs of Utrecht University in the Netherlands collaborated with scientists at the Open University in Milton Keynes, UK, which boasts a facility for simulating conditions on Mars. She ran several experiments with different sediment types, creating dune slopes of different angles and dropping blocks of CO2 ice from the top of the slope. At just the right angle, the blocks did indeed start digging into the sandy slope and moving downwards to create a gully. Roelofs likened the effect to a burrowing mole or the sandworms in Dune.

Per Roelofs, on Mars, CO2 ice forms over the surface during the winter and starts to sublimate in the spring. The ice blocks are remnants found on the shaded side of dune tops, where they break off once the temperature gets high enough and slide down the slope. At the bottom, they keep sublimating until all the CO2 has evaporated, leaving behind a hollow of sand.

DOI: Geophysical Research Letters, 2025. 10.1029/2024GL112860  (About DOIs).

Snake bites in action

Credit: S.G.C. Cleuren et al., 2025

Snakes can strike out and bite into prey in as little as 60 milliseconds, and until quite recently it just wasn’t technologically possible to capture those strikes in high definition. Researchers at Monash University in Australia decided to test 36 different species of snake in this way to learn more about their unique biting styles, detailing their results in a paper published in the Journal of Experimental Biology. And oh yes, there is awesome video footage.

Alistair Evans and Silke Cleuren traveled to Venomworld in Paris, France, where snake venom is harvested for medical and pharmaceutical applications. For each snake species, they poked at said snake with a cylindrical piece of warm medical gel to mimic meaty muscle until the snake lunged and buried its fangs into the gel. Two cameras recorded the action at 1,000 frames per second, capturing more than 100 individual strikes in great detail.

Among their findings: vipers moved the fastest when they struck, with the blunt-nosed viper accelerating up to 710 m/s², landing a bite within 22 milliseconds. All the vipers landed bites within 100 milliseconds of striking. By contrast, the rough-scaled death adder only reached speeds of 2.5 m/s. Vipers also sometimes pulled out and reinserted their fangs if they didn’t like the resulting angle; only then did they inject their venom. Elapids like the Cape coral cobra bit their prey repeatedly to inject their venom, while colubrids would tear gashes into their prey by sweeping their jaws from side to side, ensuring the maximum possible amount of venom was delivered.
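For a sense of scale (our arithmetic, not a figure from the paper): if the blunt-nosed viper’s peak acceleration were sustained for the full 22-millisecond strike, the head would reach

$$v = at \approx 710\ \mathrm{m/s^2} \times 0.022\ \mathrm{s} \approx 15.6\ \mathrm{m/s},$$

an upper bound, since peak acceleration isn’t held for the entire strike.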

DOI: Journal of Experimental Biology, 2025. 10.1242/jeb.250347  (About DOIs).

Spaghetti secrets

Spaghetti, like most pasta, is made of semolina flour, which is mixed with water to form a paste and then extruded to create a desired shape. The commercial products are then dried—an active area of research, since it’s easy for the strands to crack during the process. In fact, there have been a surprisingly large number of scientific papers seeking to understand the various properties of spaghetti, both cooking and eating it—the mechanics of slurping the pasta into one’s mouth, for instance, or spitting it out (aka, the “reverse spaghetti problem”); how to tell when it’s perfectly al dente; and how to get dry spaghetti strands to break neatly in two, rather than three or more scattered pieces.

Pasta also has a fairly low glycemic index, and is thus a good option for those with heart disease or type 2 diabetes. With the rise in the number of people with a gluten intolerance, gluten-free spaghetti has emerged as an alternative. The downside is that gluten-free pasta is harder to cook correctly and decidedly subpar in taste and texture (mouthfeel) compared to regular pasta. The reason for the latter lies in the microstructure, according to a paper published in the journal Food Hydrocolloids.

The authors used small-angle x-ray scattering and small-angle neutron scattering to analyze the microstructure of both regular and gluten-free pasta—i.e., the gluten matrix and its artificial counterpart—cooked al dente with varying salt concentrations in the water. They found that because of its gluten matrix, regular pasta has better resistance to structural degradation, and that adding just the right amount of salt further reinforces that matrix—so it’s not just a matter of salting to taste. This could lead to a better alternative matrix for gluten-free pasta that holds its structure better and has a taste and mouthfeel closer to that of regular pasta.

DOI: Food Hydrocolloids, 2025. 10.1016/j.foodhyd.2025.111855  (About DOIs).

Can machine learning identify ancient artists?

Dr. Andrea Jalandoni studies finger flutings at a cave site in Australia. Credit: Andrea Jalandoni

Finger flutings are among the oldest examples of prehistoric art, usually found carved into the walls of caves in southern Australia, New Guinea, and parts of Europe. They’re basically just marks made by human fingers drawn through the “moonmilk” (a soft mineral film) covering those walls. Very little is known about the people who left those flutings, and while some have tried to draw inferences based on biometric finger ratios or hand size measurements—notably whether given marks were made by men or women—such methods produce inconsistent results and are prone to human error and bias.

That’s why digital archaeologist Andrea Jalandoni of Griffith University decided to experiment with machine learning image recognition methods as a possible tool, detailing her findings in a paper published in the journal Scientific Reports. She recruited 96 adult volunteers to create their own finger flutings in two different settings: once in a virtual reality environment, and once in a clay substitute that mimicked the look and feel of real moonmilk. Her team took images of those flutings and then used them to train two common image recognition models.

The results were decidedly mixed. The virtual reality images performed the worst, yielding highly unreliable attempts at classifying whether flutings were made by men or women. The images produced in actual clay produced better results, even reaching close to 84 percent accuracy in one model. But there were also signs the models were overfitting, i.e., memorizing patterns in the training data rather than more generalized patterns, so the approach needs more refinement before it is ready for actual deployment. As for why determining sex classifications matters, “This information has been used to decide who can access certain sites for cultural reasons,” Jalandoni explained.

DOI: Scientific Reports, 2025. 10.1038/s41598-025-18098-4  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Closing Windows 11’s Task Manager accidentally opens up more copies of Task Manager

One reason to use the Task Manager in Windows is to see if any of the apps running on your computer are misbehaving or using a disproportionate amount of resources. But what do you do when the misbehaving app is the Task Manager itself?

After a recent Windows update, some users (including Windows Latest) noticed that closing the Task Manager window was actually failing to close the app, leaving the executable running in memory. More worryingly, each time you open the Task Manager, it spawns a new process on top of the old one, which you can repeat essentially infinitely (or until your PC buckles under the pressure).

Each instance of Task Manager takes up around 20MB of system RAM and hovers between 0 and 2 percent CPU usage—if you have just a handful of instances open, it’s unlikely that you’d notice much of a performance impact. But if you use Task Manager frequently or just go a long time between reboots, opening up two or three dozen copies of the process that are all intermittently using a fraction of your CPU can add up, leading to a potentially significant impact on performance and battery life.
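The math adds up quickly. Here is a short sketch using the cross-platform psutil library to tally the leak yourself (the roughly 20MB-per-instance figure comes from the reporting above; Taskmgr.exe is the standard Windows process name):

```python
import psutil

# Tally resident memory across every lingering Task Manager process.
total_bytes = 0
instances = 0
for proc in psutil.process_iter(["name", "memory_info"]):
    if (proc.info["name"] or "").lower() == "taskmgr.exe" and proc.info["memory_info"]:
        instances += 1
        total_bytes += proc.info["memory_info"].rss

print(f"{instances} Task Manager instances using {total_bytes / 2**20:.0f} MiB")
# At roughly 20MB apiece, three dozen leaked instances already costs ~720MB.
```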



“Unexpectedly, a deer briefly entered the family room”: Living with Gemini Home


60 percent of the time, it works every time

Gemini for Home unleashes gen AI on your Nest camera footage, but it gets a lot wrong.

The Google Home app has Gemini integration for paying customers. Credit: Ryan Whitwam

You just can’t ignore the effects of the generative AI boom.

Even if you don’t go looking for AI bots, they’re being integrated into virtually every product and service. And for what? There’s a lot of hand-wavey chatter about agentic this and AGI that, but what can “gen AI” do for you right now? Gemini for Home is Google’s latest attempt to make this technology useful, integrating Gemini with the smart home devices people already have. Anyone paying for extended video history in the Home app is about to get a heaping helping of AI, including daily summaries, AI-labeled notifications, and more.

Given the supposed power of AI models like Gemini, recognizing events in a couple of videos and answering questions about them doesn’t seem like a bridge too far. And yet Gemini for Home has demonstrated a tenuous grasp of the truth, which can lead to some disquieting interactions, like periodic warnings of home invasion, both human and animal.

It can do some neat things, but is it worth the price—and the headaches?

Does your smart home need a premium AI subscription?

Simply using the Google Home app to control your devices does not turn your smart home over to Gemini. This is part of Google’s higher-tier paid service, which comes with extended camera history and Gemini features for $20 per month. That subscription pipes your video into a Gemini AI model that generates summaries for notifications, as well as a “Daily Brief” that offers a rundown of everything that happened on a given day. The cheaper $10 plan provides less video history and no AI-assisted summaries or notifications. Both plans enable Gemini Live on smart speakers.

According to Google, it doesn’t send all of your video to Gemini. That would be a huge waste of compute cycles, so Gemini only sees (and summarizes) event clips. Those summaries are then distilled at the end of the day to create the Daily Brief, which usually results in a rather boring list of people entering and leaving rooms, dropping off packages, and so on.
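In pipeline terms, that is a two-stage map-then-distill design: summarize each event clip independently, then condense the day’s summaries into a single digest. A hedged sketch of the flow as Google describes it (the function names are hypothetical; Google hasn’t published its implementation):

```python
def daily_brief(event_clips, summarize_clip, distill_summaries):
    """Two-stage summarization: per-clip summaries, then one daily digest.

    Only event clips are processed, never the full 24-hour stream, and the
    summarizer sees video frames only, since audio isn't fed to the model.
    """
    clip_summaries = [summarize_clip(clip) for clip in event_clips]  # map stage
    return distill_summaries(clip_summaries)  # distill stage: the "Daily Brief"
```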

Importantly, the Gemini model powering this experience is not multimodal—it only processes visual elements of videos and does not integrate audio from your recordings. So unusual noises or conversations captured by your cameras will not be searchable or reflected in AI summaries. This may be intentional to ensure your conversations are not regurgitated by an AI.

Gemini smart home plans. Credit: Google

Paying for Google’s AI-infused subscription also adds Ask Home, a conversational chatbot that can answer questions about what has happened in your home based on the status of smart home devices and your video footage. You can ask questions about events, retrieve video clips, and create automations.

There are definitely some issues with Gemini’s understanding of video, but Ask Home is quite good at creating automations. It was possible to set up automations in the old Home app, but the updated AI is able to piece together automations based on your natural language request. Perhaps thanks to the limited set of possible automation elements, the AI gets this right most of the time. Ask Home is also usually able to dig up past event clips, as long as you are specific about what you want.

The Advanced plan for Gemini Home keeps your videos for 60 days, so you can only query the robot on clips from that time period. Google also says it does not retain any of that video for training. The only instance in which Google will use security camera footage for training is if you choose to “lend” it to Google via an obscure option in the Home app. Google says it will keep these videos for up to 18 months or until you revoke access. However, your interactions with Gemini (like your typed prompts and ratings of outputs) are used to refine the model.

The unexpected deer

Every generative AI bot makes the occasional mistake, but you’ll probably not notice every one. When the AI hallucinates about your daily life, however, it’s more noticeable. There’s no reason Google should be confused by my smart home setup, which features a couple of outdoor cameras and one indoor camera—all Nest-branded with all the default AI features enabled—to keep an eye on my dogs. So the AI is seeing a lot of dogs lounging around and staring out the window. One would hope that it could reliably summarize something so straightforward.

One may be disappointed, though.

In my first Daily Brief, I was fascinated to see that Google spotted some indoor wildlife. “Unexpectedly, a deer briefly entered the family room,” Gemini said.

Dogs and deer are pretty much the same thing, right? Credit: Ryan Whitwam

Gemini does deserve some credit for recognizing that the appearance of a deer in the family room would be unexpected. But the “deer” was, naturally, a dog. This was not a one-time occurrence, either. Gemini sometimes identifies my dogs correctly, but many event clips and summaries still tell me about the notable but brief appearance of deer around the house and yard.

This deer situation serves as a keen reminder that this new type of AI doesn’t “think,” although the industry’s use of that term to describe simulated reasoning could lead you to believe otherwise. A person looking at this video wouldn’t even entertain the possibility that they were seeing a deer after they’ve already seen the dogs loping around in other videos. Gemini doesn’t have that base of common sense, though. If the tokens say deer, it’s a deer. I will say, though, Gemini is great at recognizing car models and brand logos. Make of that what you will.

The animal mix-up is not ideal, but it’s not a major hurdle to usability. I didn’t seriously entertain the possibility that a deer had wandered into the house, and it’s a little funny the way the daily report continues to express amazement that wildlife is invading. It’s a pretty harmless screw-up.

“Overall identification accuracy depends on several factors, including the visual details available in the camera clip for Gemini to process,” explains a Google spokesperson. “As a large language model, Gemini can sometimes make inferential mistakes, which leads to these misidentifications, such as confusing your dog with a cat or deer.”

Google also says that you can tune the AI by correcting it when it screws up. This works sometimes, but the system still doesn’t truly understand anything—that’s beyond the capabilities of a generative AI model. After I told Gemini that it was seeing dogs rather than deer, it reported wildlife less often. However, it doesn’t seem to trust me all the time, sometimes reporting the appearance of a deer that is “probably” just a dog.

A perfect fit for spooky season

Gemini’s smart home hallucinations also have a less comedic side. When Gemini mislabels an event clip, you can end up with some pretty distressing alerts. Imagine that you’re out and about when your Gemini assistant hits you with a notification telling you, “A person was seen in the family room.”

A person roaming around the house you believed to be empty? That’s alarming. Is it an intruder, a hallucination, a ghost? So naturally, you check the camera feed to find… nothing. An Ars Technica investigation confirms AI cannot detect ghosts. So a ghost in the machine?

Oops, we made you think someone broke into your house. Credit: Ryan Whitwam

On several occasions, I’ve seen Gemini mistake dogs and totally empty rooms (or maybe a shadow?) for a person. It may be alarming at first, but after a few false positives, you grow to distrust the robot. Now, even if Gemini correctly identified a random person in the house, I’d probably ignore it. Unfortunately, this is the only notification experience for Gemini Home Advanced.

“You cannot turn off the AI description while keeping the base notification,” a Google spokesperson told me. They noted, however, that you can disable person alerts in the app. Those are enabled when you turn on Google’s familiar faces detection.

Gemini often twists reality just a bit instead of creating it from whole cloth. A person holding anything in the backyard is doing yardwork. One person anywhere, doing anything, becomes several people. A dog toy becomes a cat lying in the sun. A couple of birds become a raccoon. Gemini likes to ignore things, too, like denying there was a package delivery even when there’s a video tagged as “person delivers package.”

Gemini still refused to admit it was wrong. Credit: Ryan Whitwam

At the end of the day, Gemini is labeling most clips correctly and therefore produces mostly accurate, if sometimes unhelpful, notifications. The problem is the flip side of “mostly,” which is still a lot of mistakes. Some of these mistakes compel you to check your cameras—at least, before you grow weary of Gemini’s confabulations. Instead of saving time and keeping you apprised of what’s happening at home, it wastes your time. For this thing to be useful, inferential errors cannot be a daily occurrence.

Learning as it goes

Google says its goal is to make Gemini for Home better for everyone. The team is “investing heavily in improving accurate identification” to cut down on erroneous notifications. The company also believes that having people add custom instructions is a critical piece of the puzzle. Maybe in the future, Gemini for Home will be more honest, but it currently takes a lot of hand-holding to move it in the right direction.

With careful tuning, you can indeed address some of Gemini for Home’s flights of fancy. I see fewer deer identifications after tinkering, and a couple of custom instructions have made the Home Brief waste less space telling me when people walk into and out of rooms that don’t exist. But I still don’t know how to prompt my way out of Gemini seeing people in an empty room.

Gemini AI features work on all Nest cams, but the new 2025 models are “designed for Gemini.” Credit: Ryan Whitwam

Despite its intention to improve Gemini for Home, Google is releasing a product that just doesn’t work very well out of the box, and it misbehaves in ways that are genuinely off-putting. Security cameras shouldn’t lie about seeing intruders, nor should they tell me I’m lying when they fail to recognize an event. The Ask Home bot has the standard disclaimer recommending that you verify what the AI says. You have to take that warning seriously with Gemini for Home.

At launch, it’s hard to justify paying for the $20 Advanced Gemini subscription. If you’re already paying because you want the 60-day event history, you’re stuck with the AI notifications. You can ignore the existence of Daily Brief, though. Stepping down to the $10 per month subscription gets you just 30 days of event history with the old non-generative notifications and event labeling. Maybe that’s the smarter smart home bet right now.

Gemini for Home is widely available for those who opted into early access in the Home app. So you can avoid Gemini for the time being, but it’s only a matter of time before Google flips the switch for everyone.

Hopefully it works better by then.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.



Affinity’s image-editing apps go “freemium” in first major post-Canva update

When graphic design platform provider Canva bought the Affinity image-editing and publishing apps early last year, we had some major questions about how the companies’ priorities and products would mesh. How would Canva serve the users who preferred Affinity’s perpetually licensed apps to Adobe’s subscription-only software suite? And how would Affinity’s strong stance against generative AI be reconciled with Canva’s embrace of those technologies?

This week, Canva gave us definitive answers to all of those questions: a brand-new unified Affinity app that melds the Photo, Designer, and Publisher apps into a single piece of software called “Affinity by Canva” that is free to use with a Canva user account, but which gates generative AI features behind Canva’s existing paid subscription plans ($120 a year for individuals).

This does seem like mostly good news, in the near to mid term, for existing Affinity app users who admired Affinity’s anti-AI stance: All three apps’ core features are free to use, and the stuff you’re being asked to pay for is stuff you mostly don’t want anyway. But it may come as unwelcome news for those who like the predictability of pay-once-own-forever software or are nervous about where Canva might draw the line between “free” and “premium” features down the line.

The new Affinity app (also labeled internally as version 3) is available for both the x86 and Arm versions of Windows and as a universal app that will run natively on both Apple Silicon and Intel Macs. The app supports macOS versions going back to 10.15 Catalina and Windows 11, as well as the later releases of Windows 10. An iPad release to replace Affinity’s older iPad apps is “coming soon.”

“For ten years, Affinity has been the tool of choice for professionals who care deeply about craft,” wrote Affinity CEO Ash Hewson in a post announcing the update. “Designers who value precision, speed, and control, and who expect their tools to keep up. Now, that legacy enters a new chapter. The all-new Affinity was built in close collaboration with its community of creators, shaped by thousands of conversations, feature requests, and shared ideas. Guided by Canva’s Designer Advisory Board, this release reflects what professionals told us matters most: performance, reliability, and creative freedom.”



Netflix drops a doozy of a trailer for Stranger Things S5

We’re a few weeks away from the debut of the fifth and final season of Stranger Things—at least the first of three parts of it—and Netflix has dropped one doozy of a trailer that shows things looking pretty bleak for our small-town heroes of Hawkins.

(Spoilers for prior seasons below.)

As previously reported, S4 ended with Vecna—the Big Bad behind it all—opening the gate that allowed the Upside Down to leak into Hawkins. We’re getting a time jump for S5, but in a way, we’re coming full circle, since the events coincide with the third anniversary of Will’s original disappearance in S1. The fifth season will have eight episodes, and each one will be looong—akin to eight feature-length films. Per the official premise:

The fall of 1987. Hawkins is scarred by the opening of the Rifts, and our heroes are united by a single goal: find and kill Vecna. But he has vanished — his whereabouts and plans unknown. Complicating their mission, the government has placed the town under military quarantine and intensified its hunt for Eleven, forcing her back into hiding. As the anniversary of Will’s disappearance approaches, so does a heavy, familiar dread. The final battle is looming — and with it, a darkness more powerful and more deadly than anything they’ve faced before. To end this nightmare, they’ll need everyone — the full party — standing together, one last time.

In addition to the returning main cast, Amybeth McNulty and Gabriella Pizzolo are back as Vicki and Dustin’s girlfriend, Suzie, respectively, with Jamie Campbell Bower reprising his role as Vecna. Linda Hamilton joins the cast as Dr. Kay, along with Nell Fisher as Holly Wheeler, Jake Connelly as Derek Turnbow, and Alex Breaux as Lt. Akers.



ChatGPT maker reportedly eyes $1 trillion IPO despite major quarterly losses

An OpenAI spokesperson told Reuters that “an IPO is not our focus, so we could not possibly have set a date,” adding that the company is “building a durable business and advancing our mission so everyone benefits from AGI.”

Revenue grows as losses mount

The IPO preparations follow a restructuring of OpenAI completed on October 28 that reduced the company’s reliance on Microsoft, which has committed to investments of $13 billion and now owns about 27 percent of the company. OpenAI was most recently valued around $500 billion in private markets.

OpenAI started as a nonprofit in 2015, then added a for-profit arm a few years later with nonprofit oversight. Under the new structure, OpenAI is still controlled by a nonprofit, now called the OpenAI Foundation, but it gives the nonprofit a 26 percent stake in OpenAI Group and a warrant for additional shares if the company hits certain milestones.

A successful OpenAI IPO could represent a substantial gain for investors, including Microsoft, SoftBank, Thrive Capital, and Abu Dhabi’s MGX. But even so, OpenAI faces an uphill financial battle ahead. The ChatGPT maker expects to reach about $20 billion in revenue by year-end, according to people familiar with the company’s finances who spoke with Reuters, but its quarterly losses are significant.

Microsoft’s earnings filing on Wednesday offered a glimpse at the scale of those losses. The company reported that its share of OpenAI losses reduced Microsoft’s net income by $3.1 billion in the quarter that ended September 30. Since Microsoft owns 27 percent of OpenAI under the new structure, that suggests OpenAI lost about $11.5 billion during the quarter, as noted by The Register. That quarterly loss figure exceeds half of OpenAI’s expected revenue for the entire year.
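The Register’s estimate is simple proportion: if $3.1 billion represents a 27 percent share of OpenAI’s quarterly loss, the implied total is

$$\frac{\$3.1\ \text{billion}}{0.27} \approx \$11.5\ \text{billion}.$$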



Senators move to keep Big Tech’s creepy companion bots away from kids

Big Tech says bans aren’t the answer

As the bill advances, it could change, senators and parents acknowledged at the press conference. It will likely face backlash from privacy advocates who have raised concerns that widely collecting personal data for age verification puts sensitive information at risk of a data breach or other misuse.

The tech industry has already voiced opposition. On Tuesday, Chamber of Progress, a Big Tech trade group, criticized the bill as taking a “heavy-handed approach” to child safety. The group’s vice president of US policy and government relations, K.J. Bagchi, said that “we all want to keep kids safe, but the answer is balance, not bans.

“It’s better to focus on transparency when kids chat with AI, curbs on manipulative design, and reporting when sensitive issues arise,” Bagchi said.

However, several organizations dedicated to child safety online, including the Young People’s Alliance, the Tech Justice Law Project, and the Institute for Families and Technology, cheered senators’ announcement Tuesday. The GUARD Act, these groups told Time, is just “one part of a national movement to protect children and teens from the dangers of companion chatbots.”

Mourning parents are rallying behind that movement. Earlier this month, Garcia praised California for “finally” passing the first state law requiring companies to protect their users who express suicidal ideations to chatbots.

“American families, like mine, are in a battle for the online safety of our children,” Garcia said at that time.

During Tuesday’s press conference, Blumenthal noted that the chatbot ban bill was just one initiative of many that he and Hawley intend to raise to heighten scrutiny on AI firms.
