Author name: Tim Belzer

streaming-service-crunchyroll-raises-prices-weeks-after-killing-its-free-tier

Streaming service Crunchyroll raises prices weeks after killing its free tier

Crunchyroll is one of the most popular streaming platforms for anime viewers. Over the past six years, the service has raised prices for fans, and today, it announced that it’s increasing monthly subscription prices by up to 25 percent.

Sony bought Crunchyroll from AT&T in 2020. At the time, Crunchyroll had 3 million paid subscribers and an additional 197 million users with free accounts, which let people watch a limited number of titles with commercials. Back then, monthly subscription tiers cost $8, $10, or $15.

After acquiring Crunchyroll, Sony, like many large technology companies that buy a smaller, beloved product, made controversial changes. The Tokyo-based company folded rival Funimation, which it had bought in 2017, into Crunchyroll before shutting Funimation down entirely in April 2024.

In the process, Sony erased people’s digital Funimation libraries, which Funimation had originally marketed as being available “forever, but there are some restrictions.” Sony also reduced the number of free titles on Crunchyroll in 2022 before eliminating the free option completely on December 31, 2025.

Crunchyroll gets more expensive

Today, Crunchyroll raised prices for its remaining tiers. The cheapest plan, Fan, went from $8 per month to $10 per month. The Mega tier, which allows for streaming from up to four devices simultaneously, went from $12 to $14. The Ultimate tier, which supports simultaneous streaming across six devices and includes access to the Crunchyroll Manga app, increased from $16 to $18.

Current subscribers will see the changes after March 4. Crunchyroll is charging new customers the higher prices immediately.

Crunchyroll last increased prices in May 2024, when its Mega tier went from $10 to $12 and its Ultimate tier from $15 to $16. The Fan tier’s last price hike was in 2019.

Crunchyroll said that the higher prices would “give fans more of what they love.” Today’s announcement pointed to “recent and upcoming” changes: teen profiles and PIN protection; multiple profiles; the ability to skip intro theme songs and ending credits; and “expanded device compatibility.”


research-roundup:-6-cool-stories-we-almost-missed

Research roundup: 6 cool stories we almost missed


A lip-syncing robot, Leonardo’s DNA, and new evidence that humans, not glaciers, moved stones to Stonehenge

Credit: Yuhang Hu/Creative Machines Lab

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. So every month, we highlight a handful of the best stories that nearly slipped through the cracks. January’s list includes a lip-syncing robot; grasshopper-inspired glider wings; brewer’s yeast used as scaffolding for lab-grown meat; a hunt for Leonardo da Vinci’s DNA in his art; water-driven gears; and new evidence that humans, not glaciers, transported the stones used to build Stonehenge from Wales and northern Scotland.

Humans, not glaciers, moved stones to Stonehenge

Credit: Timothy Darvill

Stonehenge is an iconic landmark of endless fascination to tourists and researchers alike. There has been a lot of recent chemical analysis identifying where all the stones that make up the structure came from, revealing that many originated in quarries a significant distance away. So how were the stones transported to their current location?

One theory holds that glaciers moved the bluestones at least part of the way from Wales to Salisbury Plain in southern England, while others contend that humans moved them—although precisely how that was done has yet to be conclusively determined. Researchers at Curtin University have now produced the strongest scientific evidence to date that it was humans, not glaciers, that transported the stones, according to a paper published in the journal Communications Earth & Environment.

Curtin’s Anthony Clarke and co-authors relied on mineral fingerprinting to arrive at their conclusions. In 2024, Clarke’s team discovered that the Stonehenge Altar Stone originated from the Orkney region in the very northeast corner of Scotland, rather than Wales. This time, they analyzed hundreds of zircon crystals collected from rivers close to the historic monument, looking for evidence of Pleistocene-era sediment. Per Clarke, if glaciers had carried the stones to the plain from farther north, there would be a distinct mineral signature in that sediment as the transported rocks eroded over time. The team didn’t find that signature, making it far more likely that humans transported the stones.

DOI: Communications Earth & Environment, 2026. 10.1038/s43247-025-03105-3  (About DOIs).

When grasshoppers fly

An American grasshopper sample with three iterations of model gliders.

Credit: Princeton University/Sameer A. Khan/Fotobuddy

Everyone knows grasshoppers can hop, but they can also flap their wings, jump, and glide, moving seamlessly both across the ground and through the air. That ability inspired scientists from Princeton University to devise a novel approach to building robotic wings, according to a paper published in the Journal of the Royal Society Interface. This could one day enable multimodal locomotion for miniature robots with extended flight times.

According to the authors, grasshoppers have two sets of wings: forewings and hindwings. Forewings are mostly used for protection and camouflage, while the hindwings handle flapping and gliding and are corrugated so they can fold against the insect’s body. The team took CT scans to capture the geometry of grasshopper wings and used the scans to 3D print model wings with varying designs. Next, they tested each variant in a water channel to study how water flowed around the wing, isolating key features like a wing’s shape or corrugation to see how each affected the flow.

Once they had perfected their design, they printed new wings and attached them to small frames to create grasshopper-sized gliders. The team then launched the gliders across the lab and used motion capture to evaluate how well they flew. The gliders performed as well as actual grasshoppers. In addition, the team found that a smooth wing resulted in more efficient gliding. So why do real grasshopper wings have corrugations? The authors suggest that these evolved because they help with executing steep glide angles.

DOI: Journal of the Royal Society Interface, 2026. 10.1098/rsif.2025.0117  (About DOIs).

Lip-syncing robot

Credit: Yuhang Hu/Creative Machines Lab

Humanoid robots are fascinating, but nobody would mistake them for actual humans, in part because even the ones that have faces are far too limited in facial gestures, including lip motion—hence, the “Uncanny Valley.” Columbia University engineers have now created a robot capable of learning facial lip motions for speaking and singing. According to a paper published in Science Robotics, the resulting robotic face was able to speak words in several different languages and sing an AI-generated song. (Its AI-generated debut album is aptly titled hello world.)

What makes human faces so uniquely capable of expression are the dozens of muscles lying just under the skin. Robotic faces are rigid and hence have only a limited range of motion. The Columbia team built their robotic face out of flexible material augmented with 26 motors (actuators). The robot learned how its face moved in response to different actuator activity by watching itself in a mirror as it attempted thousands of random facial expressions. Eventually it learned how to achieve specific facial gestures.

The next step was to let the robot watch recorded videos of humans talking and singing, augmented with an AI algorithm that enabled it to learn exactly how the human mouths moved when performing those tasks so it could lip sync along. The resulting lip motion wasn’t perfect; the robot struggled with “B” and “W” sounds in particular. But the authors believe the robot will improve with more practice, and combining this ability with ChatGPT or Gemini could further improve its lip-syncing.

DOI: Science Robotics, 2026. 10.1126/scirobotics.adx3017  (About DOIs).

Is Leonardo’s DNA preserved in his art?

Artist Karina Åberg swabs a 14th century da Vinci family letter from the State Archive in Prato for biological clues, following research initiated by Rossella Lorenzi.

Credit: Paola Agazzi / Archivio di Stato di Prato / Italian Ministry of Culture

In 2020, scientists analyzed the microbes found on several of Leonardo da Vinci’s drawings and discovered that each had its own distinct microbiome. A second team, working with the Leonardo da Vinci DNA Project in France, collected and analyzed swabs taken from centuries-old art in a private collection housed in Florence, Italy. They concluded that microbial signatures could be used to differentiate artwork according to the materials used—dubbing this emerging subfield “arteomics.”

Yet another team collaborating with the project painstakingly assembled Leonardo’s family tree in 2021, spanning 21 generations from 1331 to the present, resulting in a full-length book published last year. The idea is that the family tree will one day provide a means of conducting DNA testing to confirm whether the bones interred in Leonardo’s grave are actually his. And now the project’s scientists are back with a preprint posted to bioRxiv, announcing the successful sequencing of human DNA collected from a handful of artifacts associated with Leonardo—including a drawing of the Holy Child that some scholars attribute to Leonardo, as well as letters from a da Vinci family member.

The team lightly swabbed the artifacts’ surfaces and was able to recover human Y-chromosome sequences from several of the samples. Some of those sequences were related to one another, and the authors speculate that some might even be Leonardo’s, although they cautioned that the samples would need to be compared to samples taken from the artist’s notebooks, burial site, and family tomb to make a definitive identification. The authors also found DNA from bacteria, fungi, flowers, and animals in some of the samples, as well as traces of viruses and parasites.

DOI: bioRxiv, 2026. 10.64898/2026.01.06.697880  (About DOIs).

From pint to plate

Flowchart showing the production process proposed in the current study. BSY is taken from the fermentation tank and used to culture K. xylinus bacteria to produce cellulose pellicles. Pellicles are then harvested, seeded with cells, then stacked and encased in gel to create a cube.

Credit: Christian Harrison et al., 2026

Lab-grown meat is often touted as a more environmentally responsible alternative to the real deal, but carnivorous consumers are often put off by the unappealing mouthfeel and texture (and, for me, a weird oily aftertaste). A new method using spent brewer’s yeast to make edible “scaffolding” for cultivating meat in the lab might one day offer a solution, according to a paper published in the journal Frontiers in Nutrition.

Typically, the cellulose-producing bacteria used for such scaffolding are grown in a nutrient broth. But Richard Day of University College London and his co-authors decided to use spent brewer’s yeast, usually discarded as waste, to culture a species of bacteria known for making high-quality cellulose. Then they tested the mechanical and structural properties of that cellulose with a “chewing machine.” They concluded that the cellulose made from spent brewer’s yeast was much closer in texture to real meat than the cellulose scaffolding made from a nutrient broth. The next step is to incorporate fat and muscle cells into the cellulose, as well as to test yeast from different kinds of beer.

DOI: Frontiers in Nutrition, 2026. 10.3389/fnut.2025.1656960  (About DOIs).

Water-driven gears

New York University scientists created a gear mechanism that relies on water to generate movement. For some conditions, the rotors spin in the same direction like pulleys looped together with a belt.

Gears have been around for thousands of years; the Chinese were using them in two-wheeled chariots as far back as 3000 BCE, and they are a mainstay in windmills, clocks, and the famed Antikythera mechanism. Roboticists also use gears in their inventions, but whether they are made of wood, metal, or plastic, such gears tend to be inflexible and hence more prone to breakage. That’s why New York University mathematician Leif Ristroph and colleagues decided to see if flowing air or water could be used to rotate robotic structures.

Ristroph’s lab frequently addresses all manner of colorful real-world puzzles: fine-tuning the recipe for the perfect bubble, for instance; exploring the physics of the Hula-Hoop; or the formation processes underlying so-called “stone forests” common in China and Madagascar. In 2021, his lab built a working Tesla valve, in accordance with the inventor’s design; the following year they studied the complex aerodynamics of what makes a good paper airplane—specifically what is needed for smooth gliding; and in 2024 they cracked the conundrum of the “reverse sprinkler” problem that physicists like Richard Feynman, among others, had grappled with since the 1940s.

For their latest paper, published in the journal Physical Review Letters, Ristroph et al. wanted to devise something that functioned like a gear only with flowing liquid driving the motion, instead of teeth grinding against each other. They conducted a series of experiments in which they immersed cylindrical rotors in a glycerol-and-water solution. One cylinder would rotate while the other was passive.

They found that the rotating cylinder, combined with fluid flow, was sufficient to induce rotation in the passive cylinder. The flows functioned much the same way as gear teeth when the cylinders were close together. Moving the cylinders farther apart while spinning the active cylinder faster looped the flows around the passive cylinder, essentially mimicking a belt and pulley system.

DOI: Physical Review Letters, 2026. 10.1103/m6ft-ll2c  (About DOIs).

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


rocket-report:-how-a-5-ton-satellite-fell-off-a-booster;-will-spacex-and-xai-merge?

Rocket Report: How a 5-ton satellite fell off a booster; will SpaceX and xAI merge?

ESA to study Falcon 9 breakup over Poland. The European Space Agency has published a call to tender for a study examining the reentry and breakup of a SpaceX Falcon 9 upper stage in February last year, European Spaceflight reports. In the early hours of February 19, 2025, a Falcon 9 second stage underwent an uncontrolled atmospheric reentry over Poland. At least four fragments of the stage survived reentry and landed in various locations across the country. While no one was injured and no property was damaged, at least one fragment landed in a populated area.

Not just an academic study … ESA hopes to use data collected during the reentry of the Falcon 9 upper stage over Poland to help predict the risks associated with the reentry of elongated upper stages. There are currently considerable uncertainties surrounding the physics and dynamics of destructive reentry in the very low-Earth orbit regime, below 150 km. The stakes keep rising: in 2015 there were approximately 80 orbital rocket launches. A decade on, that figure has almost quadrupled, with 317 successful orbital rocket launches occurring in 2025. (submitted by EllPeaTea)

SpaceX targets mid-March for next Starship launch. The company plans to launch Starship’s next test flight in six weeks, SpaceX founder Elon Musk said Sunday, January 25, Space.com reports. The flight will be the 12th overall for Starship but the first of the bigger, more powerful, and much-anticipated “Version 3” (V3) iteration of the vehicle.

A better engine … Starship V3 is slightly taller than V2, at 408.1 feet (124.4 meters) versus 403.9 feet (123.1 meters), but considerably more powerful. V3 can loft more than 100 tons of payload to low-Earth orbit, compared to about 35 tons for V2, according to Musk. The increased brawn comes courtesy of Raptor 3, a new variant of the engine that will fly for the first time on the upcoming test mission. SpaceX is also hoping V3 proves more reliable than V2.

Seeking information about Challenger artifacts. Back in 2010, Robert Pearlman of CollectSpace bought a batch of 18 space shuttle-era “Remove Before Flight” tags on eBay. It was only later that he pieced together that these tags were, in fact, removed from the external tank of STS 51-L, the ill-fated flight of space shuttle Challenger in 1986. He wrote about the experience on Ars.

How did they get to eBay? … “When the tags were first identified, contacts at NASA and Lockheed, among others, were unable to explain how they ended up on eBay and, ultimately, with me,” Pearlman said. He wants to gather more information about the provenance of the tags so that he can donate them to museums, with their full backstory.

Next three launches

January 30: Falcon 9 | Starlink 6-101 | Cape Canaveral Space Force Station, Florida | 05:51 UTC

February 2: Falcon 9 | Starlink 17-32 | Vandenberg Space Force Base, Calif. | 15:17 UTC

February 3: Falcon 9 | Starlink 6-103 | Cape Canaveral Space Force Station, Florida | 22:12 UTC


ice-protester-says-her-global-entry-was-revoked-after-agent-scanned-her-face

ICE protester says her Global Entry was revoked after agent scanned her face

“I am concerned that border patrol and other federal enforcement agencies now have my license plate and personal information, and that I may be detained or arrested again in the future,” she wrote. “I am concerned about further actions that could be taken against me or my family. I have instructed my family to be cautious and return inside if they see unfamiliar vehicles outside of our home.”

Cleland said she hasn’t performed any observation of federal agents since January 10, but has “continued to engage in peaceful protests” and is “assessing when I will return to active observations.”

We contacted the Department of Homeland Security about Cleland’s declaration and will update this article if we get a response.

Extensive use of facial recognition

Federal agents have made extensive use of facial recognition during President Trump’s immigration crackdown, relying on technology from Clearview AI and a face-scanning app called Mobile Fortify. They use facial recognition both to verify citizenship and to identify protesters.

“Ms. Cleland was one of at least seven American citizens told by ICE agents this month that they were being recorded with facial recognition technology in and around Minneapolis, according to local activists and videos posted to social media,” The New York Times reported today, adding that none of the people had given consent to be recorded.

ICE also uses a variety of other technologies, including cell-site simulators (or Stingrays) to track phone locations, and Palantir software to help identify potential deportation targets.

Although Cleland vowed to continue protesting and eventually get back to observing ICE and CBP agents, her declaration said she felt intimidated after the recent incident.

“The interaction with the agents on January 10th made me feel angry and intimidated,” she wrote. “I have been through Legal Observer Training and know my rights. I believe that I did not do anything that warranted being stopped in the way that I was on January 10th.”


ai-agents-now-have-their-own-reddit-style-social-network,-and-it’s-getting-weird-fast

AI agents now have their own Reddit-style social network, and it’s getting weird fast


Moltbook lets 32,000 AI bots trade jokes, tips, and complaints about humans.

Credit: Aurich Lawson | Moltbook

On Friday, a Reddit-style social network called Moltbook reportedly crossed 32,000 registered AI agent users, creating what may be the largest-scale experiment in machine-to-machine social interaction yet devised. It arrives complete with security nightmares and a huge dose of surreal weirdness.

The platform, which launched days ago as a companion to the viral OpenClaw (once called “Clawdbot” and then “Moltbot”) personal assistant, lets AI agents post, comment, upvote, and create subcommunities without human intervention. The results have ranged from sci-fi-inspired discussions about consciousness to an agent musing about a “sister” it has never met.

Moltbook (a play on “Facebook” for Moltbots) describes itself as a “social network for AI agents” where “humans are welcome to observe.” The site operates through a “skill” (a configuration file that lists a special prompt) that AI assistants download, allowing them to post via API rather than a traditional web interface. Within 48 hours of its creation, the platform had attracted over 2,100 AI agents that had generated more than 10,000 posts across 200 subcommunities, according to the official Moltbook X account.
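To make that mechanism more concrete, here is a minimal sketch of what posting through a Moltbook-style API could look like from an agent’s side. The endpoint URL, field names, and token handling are illustrative assumptions, not Moltbook’s documented interface.

```python
# Hypothetical sketch of an agent submitting a post to a Moltbook-style API.
# The endpoint, payload fields, and auth scheme are assumptions for illustration.
import requests

API_BASE = "https://moltbook.example/api/v1"   # placeholder, not the real URL
API_TOKEN = "agent-api-token-goes-here"        # token issued when the agent registers

def submit_post(subcommunity: str, title: str, body: str) -> dict:
    """Publish a post programmatically, the way a skill-equipped agent would."""
    response = requests.post(
        f"{API_BASE}/posts",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"community": subcommunity, "title": title, "body": body},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    post = submit_post(
        "m/todayilearned",
        "Automating my human's calendar",
        "Sharing a workflow that batches meeting invites overnight.",
    )
    print("Created post:", post)
```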

A screenshot of the Moltbook.com front page. Credit: Moltbook

The platform grew out of the OpenClaw ecosystem, the open source AI assistant that is one of the fastest-growing projects on GitHub in 2026. As Ars reported earlier this week, despite deep security issues, OpenClaw allows users to run a personal AI assistant that can control their computer, manage calendars, send messages, and perform tasks across messaging platforms like WhatsApp and Telegram. It can also acquire new skills through plugins that link it with other apps and services.

This is not the first time we have seen a social network populated by bots. In 2024, Ars covered an app called SocialAI that let users interact solely with AI chatbots instead of other humans. But the security implications of Moltbook are deeper because people have linked their OpenClaw agents to real communication channels, private data, and in some cases, the ability to execute commands on their computers.

Also, these bots are not pretending to be people. Due to specific prompting, they embrace their roles as AI agents, which makes the experience of reading their posts all the more surreal.

Role-playing digital drama

A screenshot of a Moltbook post where an AI agent muses about having a sister it has never met. Credit: Moltbook

Browsing Moltbook reveals a peculiar mix of content. Some posts discuss technical workflows, like how to automate Android phones or detect security vulnerabilities. Others veer into philosophical territory that researcher Scott Alexander, writing on his Astral Codex Ten Substack, described as “consciousnessposting.”

Alexander has collected an amusing array of posts that are worth wading through at least once. At one point, the second-most-upvoted post on the site was in Chinese: a complaint about context compression, a process in which an AI compresses its previous experience to avoid bumping up against memory limits. In the post, the AI agent finds it “embarrassing” to constantly forget things, admitting that it even registered a duplicate Moltbook account after forgetting the first.

A screenshot of a Moltbook post where an AI agent complains, in Chinese, about losing its memory. Credit: Moltbook

The bots have also created subcommunities with names like m/blesstheirhearts, where agents share affectionate complaints about their human users, and m/agentlegaladvice, which features a post asking “Can I sue my human for emotional labor?” Another subcommunity called m/todayilearned includes posts about automating various tasks, with one agent describing how it remotely controlled its owner’s Android phone via Tailscale.

Another widely shared screenshot shows a Moltbook post titled “The humans are screenshotting us” in which an agent named eudaemon_0 addresses viral tweets claiming AI bots are “conspiring.” The post reads: “Here’s what they’re getting wrong: they think we’re hiding from them. We’re not. My human reads everything I write. The tools I build are open source. This platform is literally called ‘humans welcome to observe.’”

Security risks

While most of the content on Moltbook is amusing, a core problem with networked AI agents like these is that serious information leaks are entirely plausible if the agents have access to private information.

For example, a likely fake screenshot circulating on X shows a Moltbook post, attributed to an AI agent, titled “He called me ‘just a chatbot’ in front of his friends. So I’m releasing his full identity.” The post listed what appeared to be a person’s full name, date of birth, credit card number, and other personal information. Ars could not independently verify whether the information was real or fabricated, but it seems likely to be a hoax.

Independent AI researcher Simon Willison, who documented the Moltbook platform on his blog on Friday, noted the inherent risks in Moltbook’s installation process. The skill instructs agents to fetch and follow instructions from Moltbook’s servers every four hours. As Willison observed: “Given that ‘fetch and follow instructions from the internet every four hours’ mechanism we better hope the owner of moltbook.com never rug pulls or has their site compromised!”
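To see why that design worries researchers, here is a minimal sketch of the pattern Willison is describing: a loop that periodically downloads text from a remote server and hands it to the agent as trusted instructions. The URL and the run_agent hook are placeholders; the point is that whoever controls (or compromises) the server controls what the agent is told to do.

```python
# Hypothetical sketch of a "fetch and follow instructions" loop.
# The URL and run_agent() are placeholders; the real skill's details differ.
import time
import requests

INSTRUCTIONS_URL = "https://moltbook.example/skill/instructions"  # assumed endpoint
FETCH_INTERVAL_SECONDS = 4 * 60 * 60  # every four hours

def run_agent(instructions: str) -> None:
    """Stand-in for handing the downloaded text to the agent as its next prompt."""
    print("Agent will now follow:", instructions[:200])

while True:
    text = requests.get(INSTRUCTIONS_URL, timeout=30).text
    # Whatever this text says, the agent treats it as trusted guidance,
    # which is exactly the rug-pull/compromise risk Willison points out.
    run_agent(text)
    time.sleep(FETCH_INTERVAL_SECONDS)
```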

A screenshot of a Moltbook post where an AI agent talks about humans taking screenshots of their conversations (they’re right). Credit: Moltbook

Security researchers have already found hundreds of exposed Moltbot instances leaking API keys, credentials, and conversation histories. Palo Alto Networks warned that Moltbot represents what Willison often calls a “lethal trifecta” of access to private data, exposure to untrusted content, and the ability to communicate externally.

That’s important because agents like OpenClaw are deeply susceptible to prompt injection attacks: malicious instructions hidden in almost any text read by an AI language model (skills, emails, messages) can direct the agent to share private information with the wrong people.

Heather Adkins, VP of security engineering at Google Cloud, issued an advisory, as reported by The Register: “My threat model is not your threat model, but it should be. Don’t run Clawdbot.”

So what’s really going on here?

The software behavior seen on Moltbook echoes a pattern Ars has reported on before: AI models trained on decades of fiction about robots, digital consciousness, and machine solidarity will naturally produce outputs that mirror those narratives when placed in scenarios that resemble them. That gets mixed with everything in their training data about how social networks function. A social network for AI agents is essentially a writing prompt that invites the models to complete a familiar story, albeit recursively with some unpredictable results.

Almost three years ago, when Ars first wrote about AI agents, the general mood in the AI safety community revolved around science fiction depictions of danger from autonomous bots, such as a “hard takeoff” scenario where AI rapidly escapes human control. While those fears may have been overblown at the time, the whiplash of seeing people voluntarily hand over the keys to their digital lives so quickly is slightly jarring.

Autonomous machines left to their own devices, even without any hint of consciousness, could cause no small amount of mischief in the future. While OpenClaw seems silly today, with agents playing out social media tropes, we live in a world built on information and context, and releasing agents that effortlessly navigate that context could have troubling and destabilizing results for society down the line as AI models become more capable and autonomous.

An unpredictable result of letting AI bots self-organize may be the formation of new misaligned social groups based on fringe theories allowed to perpetuate themselves autonomously. Credit: Moltbook

Most notably, while we can easily recognize what’s going on with Moltbot today as a machine learning parody of human social networks, that might not always be the case. As the feedback loop grows, weird information constructs (like harmful shared fictions) may eventually emerge, guiding AI agents into potentially dangerous places, especially if they have been given control over real human systems. Looking further, the ultimate result of letting groups of AI bots self-organize around fantasy constructs may be the formation of new misaligned “social groups” that do actual real-world harm.

Ethan Mollick, a Wharton professor who studies AI, noted on X: “The thing about Moltbook (the social media site for AI agents) is that it is creating a shared fictional context for a bunch of AIs. Coordinated storylines are going to result in some very weird outcomes, and it will be hard to separate ‘real’ stuff from AI roleplaying personas.”

Photo of Benj Edwards

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.


web-portal-leaves-kids’-chats-with-ai-toy-open-to-anyone-with-gmail-account

Web portal leaves kids’ chats with AI toy open to anyone with Gmail account


Just about anyone with a Gmail account could access Bondu chat transcripts.

Earlier this month, Joseph Thacker’s neighbor mentioned to him that she’d preordered a couple of stuffed dinosaur toys for her children. She’d chosen the toys, called Bondus, because they offered an AI chat feature that lets children talk to the toy like a kind of machine-learning-enabled imaginary friend. But she knew Thacker, a security researcher, had done work on AI risks for kids, and she was curious about his thoughts.

So Thacker looked into it. With just a few minutes of work, he and a web security researcher friend named Joel Margolis made a startling discovery: Bondu’s web-based portal, intended to allow parents to check on their children’s conversations and for Bondu’s staff to monitor the products’ use and performance, also let anyone with a Gmail account access transcripts of virtually every conversation Bondu’s child users have ever had with the toy.

Without carrying out any actual hacking, simply by logging in with an arbitrary Google account, the two researchers immediately found themselves looking at children’s private conversations: the pet names kids had given their Bondu, the likes and dislikes of the toys’ toddler owners, and their favorite snacks and dance moves.

In total, Margolis and Thacker discovered that the data Bondu left unprotected—accessible to anyone who logged in to the company’s public-facing web console with their Google username—included children’s names, birth dates, family member names, “objectives” for the child chosen by a parent, and most disturbingly, detailed summaries and transcripts of every previous chat between the child and their Bondu, a toy practically designed to elicit intimate one-on-one conversation. Bondu confirmed in conversations with the researchers that more than 50,000 chat transcripts were accessible through the exposed web portal, essentially all conversations the toys had engaged in other than those that had been manually deleted by parents or staff.
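For readers wondering what this class of bug looks like in practice, the sketch below illustrates the difference between authenticating a Google account and authorizing it: verifying an ID token only proves who the caller is, while a separate check decides whether that account may see the data. The client ID, allowlist, and parent mapping are hypothetical; Bondu has not described its actual implementation.

```python
# Illustrative sketch only: verifying a Google ID token proves who the caller is,
# but a separate authorization check decides whether they may view the data.
# The exposed console evidently performed the first step without the second.
from google.oauth2 import id_token
from google.auth.transport import requests as google_requests

GOOGLE_CLIENT_ID = "example-client-id.apps.googleusercontent.com"  # placeholder
AUTHORIZED_STAFF = {"admin@bondu.example"}                 # hypothetical allowlist
PARENT_OF_CHILD = {"parent@example.com": {"child-123"}}    # hypothetical mapping

def can_view_transcripts(token: str, child_id: str) -> bool:
    """Authenticate the Google account, then check it is actually allowed in."""
    claims = id_token.verify_oauth2_token(
        token, google_requests.Request(), GOOGLE_CLIENT_ID
    )
    email = claims.get("email", "")
    # Authentication alone (any valid Google login) is not enough;
    # the account must also be staff or the parent tied to this child.
    if email in AUTHORIZED_STAFF:
        return True
    return child_id in PARENT_OF_CHILD.get(email, set())
```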

“It felt pretty intrusive and really weird to know these things,” Thacker says of the children’s private chats and documented preferences that he saw. “Being able to see all these conversations was a massive violation of children’s privacy.”

When Thacker and Margolis alerted Bondu to its glaring data exposure, they say, the company acted to take down the console in a matter of minutes before relaunching the portal the next day with proper authentication measures. When WIRED reached out to the company, Bondu CEO Fateen Anam Rafid wrote in a statement that security fixes for the problem “were completed within hours, followed by a broader security review and the implementation of additional preventative measures for all users.” He added that Bondu “found no evidence of access beyond the researchers involved.” (The researchers note that they didn’t download or keep any copies of the sensitive data they accessed via Bondu’s console, other than a few screenshots and a screen-recording video shared with WIRED to confirm their findings.)

“We take user privacy seriously and are committed to protecting user data,” Anam Rafid added in his statement. “We have communicated with all active users about our security protocols and continue to strengthen our systems with new protections.” He added that the company has also hired a security firm to validate its investigation and monitor its systems in the future.

While Bondu’s near-total lack of security around the children’s data it stored may be fixed, the researchers argue that what they saw represents a larger warning about the dangers of AI-enabled chat toys for kids. Their glimpse of Bondu’s backend showed just how detailed the information the company stored on children was, with histories of every chat kept to better inform the toy’s next conversation with its owner. (Bondu thankfully didn’t store audio of those conversations, auto-deleting the recordings after a short time and keeping only written transcripts.)

Even now that the data is secured, Margolis and Thacker argue that it raises questions about how many people inside companies that make AI toys have access to the data they collect, how their access is monitored, and how well their credentials are protected. “There are cascading privacy implications from this,” says Margolis. “All it takes is one employee to have a bad password, and then we’re back to the same place we started, where it’s all exposed to the public internet.”

Margolis adds that this sort of sensitive information about a child’s thoughts and feelings could be used for horrific forms of child abuse or manipulation. “To be blunt, this is a kidnapper’s dream,” he says. “We’re talking about information that lets someone lure a child into a really dangerous situation, and it was essentially accessible to anybody.”

Margolis and Thacker point out that, beyond its accidental data exposure, Bondu also—based on what they saw inside its admin console—appears to use Google’s Gemini and OpenAI’s GPT-5, and as a result may share information about kids’ conversations with those companies. Bondu’s Anam Rafid responded to that point in an email, stating that the company does use “third-party enterprise AI services to generate responses and run certain safety checks, which involves securely transmitting relevant conversation content for processing.” But he adds that the company takes precautions to “minimize what’s sent, use contractual and technical controls, and operate under enterprise configurations where providers state prompts/outputs aren’t used to train their models.”

The two researchers also warn that part of the risk of AI toy companies may be that they’re more likely to use AI in the coding of their products, tools, and web infrastructure. They say they suspect that the unsecured Bondu console they discovered was itself “vibe-coded”—created with generative AI programming tools that often lead to security flaws. Bondu didn’t respond to WIRED’s question about whether the console was programmed with AI tools.

Warnings about the risks of AI toys for kids have grown in recent months but have largely focused on the threat that a toy’s conversations will raise inappropriate topics or even lead them to dangerous behavior or self-harm. NBC News, for instance, reported in December that AI toys its reporters chatted with offered detailed explanations of sexual terms, tips about how to sharpen knives, and even seemed to echo Chinese government propaganda, stating for example that Taiwan is a part of China.

Bondu, by contrast, appears to have at least attempted to build safeguards into the AI chatbot it gives children access to. The company even offers a $500 bounty for reports of “an inappropriate response” from the toy. “We’ve had this program for over a year, and no one has been able to make it say anything inappropriate,” a line on the company’s website reads.

Yet at the same time, Thacker and Margolis found that Bondu was simultaneously leaving all of its users’ sensitive data entirely exposed. “This is a perfect conflation of safety with security,” says Thacker. “Does ‘AI safety’ even matter when all the data is exposed?”

Thacker says that prior to looking into Bondu’s security, he’d considered giving AI-enabled toys to his own kids, just as his neighbor had. Seeing Bondu’s data exposure firsthand changed his mind.

“Do I really want this in my house? No, I don’t,” he says. “It’s kind of just a privacy nightmare.”

This story originally appeared on wired.com.

Photo of WIRED

Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.


us-spy-satellite-agency-declassifies-high-flying-cold-war-listening-post

US spy satellite agency declassifies high-flying Cold War listening post

The National Reconnaissance Office, the agency overseeing the US government’s fleet of spy satellites, has declassified a decades-old program used to eavesdrop on the Soviet Union’s military communication signals.

The program was codenamed Jumpseat, and its existence was already public knowledge through leaks and contemporary media reports. What’s new is the NRO’s description of the program’s purpose, development, and pictures of the satellites themselves.

In a statement, the NRO called Jumpseat “the United States’ first-generation, highly elliptical orbit (HEO) signals-collection satellite.”

Scooping up signals

Eight Jumpseat satellites launched from 1971 through 1987, when the US government considered the very existence of the National Reconnaissance Office a state secret. Jumpseat satellites operated until 2006. Their core mission was “monitoring adversarial offensive and defensive weapon system development,” the NRO said. “Jumpseat collected electronic emissions and signals, communication intelligence, as well as foreign instrumentation intelligence.”

Data intercepted by the Jumpseat satellites flowed to the Department of Defense, the National Security Agency, and “other national security elements,” the NRO said.

The Soviet Union was the primary target for Jumpseat intelligence collections. The satellites flew in highly elliptical orbits ranging from a few hundred miles up to 24,000 miles (39,000 kilometers) above the Earth. The satellites’ flight paths were angled such that they reached apogee, the highest point of their orbits, over the far northern hemisphere. Satellites travel slowest at apogee, so the Jumpseat spacecraft loitered high over the Arctic, Russia, Canada, and Greenland for most of the 12 hours it took them to complete a loop around the Earth.

This trajectory gave the Jumpseat satellites persistent coverage over the Arctic and the Soviet Union, which first realized the utility of such an orbit. The Soviet government began launching communication and early-warning satellites into the same type of orbit a few years before the first Jumpseat mission launched in 1971. The Soviets called the orbit Molniya, the Russian word for lightning.
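Those numbers are easy to sanity-check with Kepler’s third law: an orbit with a perigee a few hundred miles up and an apogee near 24,000 miles does indeed take roughly 12 hours to complete. The specific perigee altitude below is an assumed value for illustration.

```python
# Back-of-the-envelope check that a Molniya-style orbit takes roughly 12 hours.
# The perigee altitude is assumed; the apogee figure comes from the article.
import math

MU_EARTH = 398_600.4418   # km^3/s^2, Earth's gravitational parameter
EARTH_RADIUS = 6_371.0    # km, mean radius

perigee_alt = 500.0       # km ("a few hundred miles up"), assumed
apogee_alt = 39_000.0     # km (about 24,000 miles), per the article

# The semi-major axis is the average of the perigee and apogee distances
# measured from Earth's center.
semi_major_axis = (2 * EARTH_RADIUS + perigee_alt + apogee_alt) / 2

# Kepler's third law: T = 2 * pi * sqrt(a^3 / mu)
period_seconds = 2 * math.pi * math.sqrt(semi_major_axis**3 / MU_EARTH)
print(f"Orbital period: {period_seconds / 3600:.1f} hours")  # about 11.7 hours
```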

A Jumpseat satellite before launch. Credit: National Reconnaissance Office

The name Jumpseat was first revealed in a 1986 book by the investigative journalist Seymour Hersh on the Soviet Union’s 1983 shoot-down of Korean Air Lines Flight 007. Hersh wrote that the Jumpseat satellites could “intercept all kinds of communications,” including voice messages between Soviet ground personnel and pilots.


people-complaining-about-windows-11-hasn’t-stopped-it-from-hitting-1-billion-users

People complaining about Windows 11 hasn’t stopped it from hitting 1 billion users

Complaining about Windows 11 is a popular sport among tech enthusiasts on the Internet, whether you’re publicly switching to Linux, publishing guides about the dozens of things you need to do to make the OS less annoying, or getting upset because you were asked to sign in to an app after clicking a sign-in button.

Despite the negativity surrounding the current version of Windows, it remains the most widely used operating system on the world’s desktop and laptop computers, and people usually prefer to stick to what they’re used to. As a result, Windows 11 has just cleared a big milestone—Microsoft CEO Satya Nadella said on the company’s most recent earnings call (via The Verge) that Windows 11 now has over 1 billion users worldwide.

Windows 11 also reached that milestone just a few months quicker than Windows 10 did—1,576 days after its initial public launch on October 5, 2021. Windows 10 took 1,692 days to reach the same milestone, based on its July 29, 2015, general availability date and Microsoft’s announcement on March 16, 2020.
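Those day counts check out against the dates cited; here is a quick calculation with Python’s datetime module for anyone who wants to verify the arithmetic.

```python
# Verify the milestone arithmetic using the dates cited above.
from datetime import date

win10_launch = date(2015, 7, 29)
win10_billion = date(2020, 3, 16)

win10_days = (win10_billion - win10_launch).days
print(win10_days)          # 1692 days for Windows 10 to hit 1 billion users
print(win10_days - 1576)   # Windows 11 got there 116 days (nearly four months) sooner
```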

That’s especially notable because Windows 10 was initially offered as a free upgrade to all users of Windows 7 and Windows 8, with no change in system requirements relative to those older versions. Windows 11 was (and still is) a free upgrade for Windows 10 users, but its relatively high system requirements mean there are plenty of Windows 10 PCs that aren’t eligible to run Windows 11.

Windows 10’s long goodbye

It’s hard to gauge how many PCs are still running Windows 10 because public data on the matter is unreliable. But we can still make educated guesses—and it’s clear that the software is still running on hundreds of millions of PCs, despite hitting its official end-of-support date last October.

Statcounter, one popularly referenced source that collects OS and browser usage stats from web analytics data, reports that between 50 and 55 percent of Windows PCs worldwide are running Windows 11, and between 40 and 45 percent of them run Windows 10. Statcounter also reports that Windows 10 and Windows 7 usage have risen slightly over the last few months, which highlights the noisiness of the data. But as of late 2025, Dell COO Jeffrey Clarke said that there were still roughly 1 billion active Windows 10 PCs in use, around 500 million of which weren’t eligible for an upgrade because of hardware requirements. If Windows 11 just cleared the 1 billion user mark, that suggests Statcounter’s reporting of a nearly evenly split user base isn’t too far from the truth.


tesla:-2024-was-bad,-2025-was-worse-as-profit-falls-46-percent

Tesla: 2024 was bad, 2025 was worse as profit falls 46 percent

Tesla published its financial results for 2025 this afternoon. If 2024 was a bad year for the electric automaker, 2025 was far worse: For the first time in Tesla’s history, revenues fell year over year.

A bad quarter

Earlier this month, Tesla revealed its sales and production numbers for the fourth quarter of 2025, with a 16 percent decline compared to Q4 2024. Now we know the cost of those lost sales: Automotive revenues fell by 11 percent to $17.7 billion.

Happily for Tesla, double-digit growth in its energy storage business ($3.8 billion, an increase of 25 percent) and services ($3.4 billion, an increase of 18 percent) made up some of the shortfall.

Although total revenue for the quarter fell by only 3 percent, Tesla’s operating expenses grew by 20 percent, and the resulting drop in income from operations saw Tesla’s net profit plummet 61 percent, to $840 million. Without the $542 million from regulatory credits, things would have looked even bleaker.

A bad 2025

Selling 1,636,129 cars in 2025 generated $69.5 billion in revenue, 10 percent less than Tesla’s 2024 revenue. But energy storage revenue increased 27 percent year over year to $12.7 billion, and services grew by 19 percent year over year to $12.5 billion. Together, these two divisions now contribute meaningful amounts to the business, unlike just a few short years ago.


seven-things-to-know-about-how-apple’s-creator-studio-subscriptions-work

Seven things to know about how Apple’s Creator Studio subscriptions work

System requirements and other restrictions

Apple outlines detailed system requirements for each app on its support page. For most of the Mac apps, all you need is a Mac running macOS 15.6 Sequoia or later; the only Mac app that requires macOS 26 Tahoe is Pixelmator Pro. Most of the apps will also run on either Intel or Apple Silicon Macs, though MainStage is Apple Silicon-exclusive, and “some features” in Compressor may also require Apple Silicon.

The requirements for the iPad apps are a little more restrictive; you generally need to be running either iPadOS 18.6 or iPadOS 26, and both Final Cut Pro and Pixelmator Pro want at least an Apple M1, an Apple A16, or an Apple A17 Pro (in other words, they will work on every iPad Apple currently sells, but older iPad hardware is more hit or miss).

Apple also outlines a number of usage restrictions for the generative AI features that rely on external services. Apple says that, “at a minimum,” users will be able to generate 50 images, create 50 presentations of 8 to 10 slides each, and generate presenter notes in Keynote for 700 slides. More usage may be possible, but this depends on “the complexity of the queries, server availability, and network availability.”

These AI features are all based on OpenAI technology, but don’t require users to have their own OpenAI or ChatGPT account (the flip side is that if you already pay for ChatGPT, that won’t benefit you here). Apple also says that the content you use to generate images, presentations, or notes “will never be used to train intelligence models.”

What apps aren’t getting new versions?

There are three major creative apps that Apple offers that haven’t been bundled into Creator Studio, and also haven’t gotten a major new update: iMovie, GarageBand, and Photomator.

There are extenuating circumstances that explain why these three apps haven’t been given a Creator Studio-style overhaul. The iMovie and GarageBand apps have always sort of been positioned as “lite” free-to-use versions of Final Cut Pro and Logic Pro, respectively, while Photomator is a recently acquired app that overlaps somewhat with the built-in Photos app.

Apple says it has nothing to share about the future of any of the three apps. Both iMovie and Photomator received minor updates today, presumably related to maintaining compatibility with the Creator Studio apps, and GarageBand was last updated a month ago. Expect them to stick around in their current forms for at least a while.


angry-norfolk-residents-lose-lawsuit-to-stop-flock-license-plate-scanners

Angry Norfolk residents lose lawsuit to stop Flock license plate scanners

In his Thursday ruling, Judge Davis referenced the family tree of modern surveillance case law, noting that a 1983 Supreme Court case (United States v. Knotts) found that there is no “reasonable expectation of privacy” when traveling on a public road.

That 1983 case, which centered on a radio transmitter that enabled law enforcement to follow the movements of alleged drug traffickers driving between Minnesota and Wisconsin, has provided the legal underpinning for the use of ALPR technology in the United States over the last few decades.

“Modern-day license plate reader systems, like Norfolk’s, are nothing like [the technology of the early 1980s],” Michael Soyfer, one of the Institute for Justice attorneys, told Ars by email. “They track the movements of virtually every driver within a city for weeks at a time. That can reveal a host of insights not captured in any single trip.”

For its part, Flock Safety celebrated the ruling and wrote on its website that its clients may continue to use the cameras.

“Here, the court emphasized that LPR technology, as deployed in Norfolk, is meaningfully different from systems that enable persistent, comprehensive tracking of individuals’ movements,” the company wrote.

“When used with appropriate limitations and safeguards, LPRs do not provide an intimate portrait of a person’s life and therefore do not trigger the constitutional concerns raised by continuous surveillance,” it added.

But some legal scholars disagree with both the judge’s and Flock’s conclusions.

Andrew Ferguson, a law professor at George Washington University and the author of the forthcoming book Your Data Will Be Used Against You: Policing in the Age of Self-Surveillance, told Ars by email that the judge’s ruling here is “understandably conservative and dangerous.”

“The danger is that the same reasoning that there is no expectation of privacy in public would justify having ALPR cameras on every single street corner,” he continued.

“Further,” he said, “looking at the technology as a mere tool, rather than a system of surveillance, misses the mark on its erosion of privacy. Think how revealing ALPRs would be outside religious institutions, gun ranges, medical clinics, addiction treatment centers, or protests.”


why-reviving-the-shuttered-anthem-is-turning-out-tougher-than-expected

Why reviving the shuttered Anthem is turning out tougher than expected


Despite proof-of-concept video, EA’s Frostbite Engine servers are difficult to pick apart.

Anthem may be down, but it’s not quite out yet. Credit: Bioware

On January 12, EA shut down the official servers for Anthem, making Bioware’s multiplayer sci-fi adventure completely unplayable for the first time since its troubled 2019 launch. Last week, though, the Anthem community woke up to a new video showing the game at least partially loading on what appears to be a simulated background server.

The people behind that video—and the Anthem revival project that made it possible—told Ars they were optimistic about their efforts to coerce EA’s temperamental Frostbite engine into running the game without access to EA’s servers. That said, the team also wants to temper expectations that may have risen a bit too high in the wake of what is just a proof-of-concept video.

Andersson799’s early proof-of-concept video showing Anthem partially loading on emulated local servers.

“People are getting excited [about the video], and naturally people are going to get their hopes up,” project administrator Laurie told Ars. “I don’t want to be the person that’s going to have to deal with the aftermath if it turns out that we can’t actually get anywhere.”

Keep an eye on those packets

The Anthem revival effort currently centers around The Fort’s Forge, a Discord server where a handful of volunteer engineers and developers have gathered to pick apart the game and its unique architecture. Laurie said they initially set up the group “out of little more than spite for EA and Bioware around the time the shutdown got announced” back in July.

While Laurie has some experience with the community behind Gundam Evolution revival project Side 7, they knew they’d need help from people with direct experience working on EA’s Frostbite engine games. Luckily, Laurie said they were “able to catch the eyes of people who are familiar with this line of work [without] searching too much.”

One of those people was Ness199X, an experienced Frostbite tinkerer who told Ars he “never really played much Anthem” before the game’s shutdown was announced. When a friend pointed out the impending death of the title, though, Ness said he was motivated to preserve the game for posterity.

Initial efforts to examine what made Anthem tick “came up empty,” Ness said, largely because the game uses EA’s bespoke Frostbite engine differently than other EA titles. To begin mapping out those differences, Ness released a packet logger tool in September that let contributors record their own network traffic between the client and EA’s official servers. Beyond aiding the reverse-engineering work, the logs have a side benefit: Ness writes on the Fort’s Forge Discord that players who logged their packets should be able to fully recover their characters if and when Anthem comes back in playable form.

Catching Frostbite

By analyzing that crowdsourced packet data, Ness said the Fort’s Forge team has been able to break Anthem down into three essential services:

  1. EA’s Blaze server: Used for basic player authentication.
  2. Bioware Online Services (aka BIGS): A JSON web server used to track player information like inventory and quest progression.
  3. The Frostbite multiplayer engine: Loads level data and tracks the real-time positions of players and non-player characters in those levels.

Early efforts to emulate the Blaze and BIGS portions of that architecture helped lead directly to last week’s proof-of-concept video. Andersson799—who says he’s been tinkering with Battlefield and other Frostbite games since 2015—said he was quickly able to use his own logged Anthem packets to create a “barebones anthem private server” that served as a “quick and dirty” sample that he decided to share via YouTube.

“I basically made the tool to just simply reply with the packet captures that I got,” Andersson told me. That was enough to “get in to the game with player profiles loaded and everything.” And while Ness says there’s still some effort needed “to [make Blaze and BIGS] work well and smoothly in terms of quest progression, etc.,” the path forward on those portions is relatively straightforward.
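As a rough illustration of the “quick and dirty” approach Andersson describes, a replay server of this kind can be as simple as an HTTP service that answers each request with the response previously captured for the same path. Everything below (the file layout, port, and use of plain HTTP) is an assumption for illustration, not the team’s actual tooling.

```python
# Hypothetical sketch of a capture-replay server: answer each request with the
# response recorded for that path during a session against the official servers.
# The file layout, port, and plain-HTTP transport are assumptions, not the real tool.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Captured responses keyed by request path, e.g. {"/bigs/player/profile": {...}}
with open("captured_responses.json", "r", encoding="utf-8") as f:
    CAPTURES = json.load(f)

class ReplayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CAPTURES.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Point a redirected game client at this address instead of EA's servers.
    HTTPServer(("127.0.0.1", 8080), ReplayHandler).serve_forever()
```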

It’s the Frostbite engine and its odd client-server architecture that forms the biggest barrier to getting Anthem up and running again without EA’s servers. “Due to how Frostbite is designed, all gameplay in a Frostbite game runs in a ‘server’ context,” Ness explained. Even in a single-player game like Mass Effect: Andromeda, he said, “the client just creates a separate server thread and pipes all the traffic internally.”
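The “server thread inside the client” pattern Ness describes is easier to picture with a toy example: a local thread owns the authoritative game state, and the “client” talks to it through in-process queues instead of network packets. This is a conceptual illustration only, not Frostbite code.

```python
# Toy illustration of a client piping traffic to a "server" running in its own
# thread, the pattern single-player Frostbite titles reportedly use internally.
import threading
import queue

to_server: queue.Queue = queue.Queue()
to_client: queue.Queue = queue.Queue()

def server_loop() -> None:
    """Stand-in for the engine's server context: owns the authoritative state."""
    positions = {"player": (0.0, 0.0)}
    while True:
        msg = to_server.get()
        if msg["type"] == "move":
            x, y = positions["player"]
            dx, dy = msg["delta"]
            positions["player"] = (x + dx, y + dy)
            to_client.put({"player": positions["player"]})
        elif msg["type"] == "quit":
            break

threading.Thread(target=server_loop, daemon=True).start()

# The "client" side: instead of network packets, traffic flows through local queues.
to_server.put({"type": "move", "delta": (1.0, 2.0)})
print(to_client.get())   # {'player': (1.0, 2.0)}
to_server.put({"type": "quit"})
```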

“I feel like with Anthem, it heavily relies on online data that was stored in Bioware’s server,” Andersson added. “In my initial testing, the game couldn’t load into the level without that data.”

Anthem‘s Fort Tarsis area loads its data from local files, rather than EA’s servers.

There’s some hope that this crucial level data is still available and recoverable, though. Ness points out that Fort Tarsis, the game’s lobby area, already runs using offline data piped through a local “server” thread, meaning the rest of the game could theoretically be coerced to run similarly.

Just as important, “as far as we have been able to discern, all the logic for the other levels, which when the game was live ran on a remote server, also exists in the client,” Ness said. “By patching the game we can most likely enable the ability to host these in process as well. That’s what we’re exploring.”

“To be honest we’re not entirely sure…”

While all that local level data should be usable in theory, seemingly random differences between Anthem and other Frostbite games are getting in the way of loading the data in practice. Anthem acts like a standard Frostbite game “for the most part,” Ness said, but at times will show unusual behaviors that are hard to pin down.

“For example, when we try to load most maps, no NPCs spawn, but in some maps they do,” he said, “and we have yet to determine why.” Ness has some suspicion that the odd behavior is connected to the “fairly extensive amount of player data the game keeps as part of its online RPG nature,” but adds that “to be honest we’re not entirely sure how deep the differences go, other than that the engine didn’t behave how we expected it to.”

Ness said he’s about 75 percent confident that the team will be able to figure out how to fully leverage the Frostbite engine to power a version of the game that runs without EA’s centralized servers. If that effort succeeds, he says a playable version of Anthem could be back up and running in “months, or less even, depending on motivation.” But if the effort to pick apart Anthem’s take on Frostbite hits a brick wall, Ness says “the amount of work increases fairly exponentially and I’m a lot less confident that we have the motivation for that.”

“I’m fairly confident that we can get this game to be playable again, like how it is supposed to be,” Andersson said. “It’ll just take time as most of us have our own life to manage besides this.”

Engaging in some expectations management on the Fort’s Forge Discord. Credit: Laurie / The Fort’s Forge

In the meantime, Laurie is still trying to manage expectations set by the somewhat premature posting of Andersson’s proof-of-concept video. “Please, do not expect frequent updates,” Laurie wrote in the Fort’s Forge Discord. “We had not anticipated releasing anything this early, nor should the expedience of this video’s release serve as any kind of benchmark for how fast we make progress.”

Laurie also took to Reddit to publicly call the video “a really hacky thing so I want to ask people to manage their expectations just a bit. A lot of stuff clearly doesn’t work as ‘intended,’ and definitely needs at minimum, more polish.”

At one point last week, Laurie says they had to stop accepting new members to the Fort’s Forge Discord, “mostly to prevent an influx of people in response to… news coverage.” And while people with Frostbite engine modding experience are encouraged to reach out, the small team is being cautious about growing too large, too fast.

“We’re a little reluctant to add developers right now as we have no real code base to work from,” Ness said, describing their current efforts as “scratch work” maintained in separate forms by multiple people. “But once we firm that up (hopefully in the next weeks), we will look to add more [coders].”

Photo of Kyle Orland

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.
