AI

Under new law, cops bust famous cartoonist for AI-generated child sex abuse images

Late last year, California passed a law against the possession or distribution of child sex abuse material (CSAM) that has been generated by AI. The law went into effect on January 1, and Sacramento police announced yesterday that they have already arrested their first suspect—a 49-year-old Pulitzer Prize-winning cartoonist named Darrin Bell.

The new law declares that AI-generated CSAM is harmful, even without an actual victim. In part, says the law, this is because all kinds of CSAM can be used to groom children into thinking sexual activity with adults is normal. But the law singles out AI-generated CSAM for special criticism due to the way that generative AI systems work.

“The creation of CSAM using AI is inherently harmful to children because the machine-learning models utilized by AI have been trained on datasets containing thousands of depictions of known CSAM victims,” it says, “revictimizing these real children by using their likeness to generate AI CSAM images into perpetuity.”

The law defines “artificial intelligence” as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”

Google is about to make Gemini a core part of Workspaces—with price changes

Google has added AI features to its regular Workspace accounts for business while slightly raising the baseline prices of Workspace plans.

Previously, the AI tools in the Gemini Business plan were a $20-per-seat add-on to existing Workspace accounts, which had a base cost of $12 per seat on their own. Now, the AI tools are included for all Workspace users, but the per-seat base price is increasing from $12 to $14.

That means that those who were already paying extra for Gemini are going to pay less than half of what they were—effectively $14 per seat instead of $32. But those who never used or wanted Gemini or any other newer features under the AI umbrella from Workspace are going to pay a little bit more than before.

Features covered here include access to Gemini Advanced, the NotebookLM research assistant, email and document summaries in Gmail and Docs, adaptive audio and additional transcription languages for Meet, and “help me write” and Gemini in the side panel across a variety of applications.

Google says that it plans “to roll out even more AI features previously available in Gemini add-ons only.”

Home Microsoft 365 plans use Copilot AI features as pretext for a price hike

Microsoft hasn’t said for how long this “limited time” offer will last, but presumably it will only last for a year or two to help ease the transition between the old pricing and the new pricing. New subscribers won’t be offered the option to pay for the Classic plans.

Subscribers on the Personal and Family plans can’t use Copilot indiscriminately; they get 60 AI credits per month to use across all the Office apps, credits that can also be used to generate images or text in Windows apps like Designer, Paint, and Notepad. It’s not clear how these will stack with the 15 credits that Microsoft offers for free for apps like Designer, or the 50 credits per month Microsoft is handing out for Image Cocreator in Paint.

Those who want unlimited usage and access to the newest AI models are still asked to pay $20 per month for a Copilot Pro subscription.

As Microsoft notes, this is the first price increase it has ever implemented for the personal Microsoft 365 subscriptions in the US, which have stayed at the same levels since being introduced as Office 365 over a decade ago. Pricing for the business plans and pricing in other countries have increased before. Pricing for Office Home 2024 ($150) and Office Home & Business 2024 ($250), which can’t access Copilot or other Microsoft 365 features, is also the same as it was before.

Researchers use AI to design proteins that block snake venom toxins

Since these two toxicities work through entirely different mechanisms, the researchers tackled them separately.

Blocking a neurotoxin

The neurotoxic three-fingered proteins are a subgroup of the larger protein family that specializes in binding to and blocking the receptors for acetylcholine, a major neurotransmitter. Their three-dimensional structure, which is key to their ability to bind these receptors, is based on three strings of amino acids within the protein that nestle against each other (for those who have taken a sufficiently advanced biology class, these are anti-parallel beta sheets). So to interfere with these toxins, the researchers targeted these strings.

They relied on an AI package called RFdiffusion (the RF denotes its relation to the Rosetta Fold protein-folding software). RFdiffusion can be directed to design protein structures that are complements to specific chemicals; in this case, it identified new strands that could line up along the edge of the ones in the three-fingered toxins. Once those were identified, a separate AI package, called ProteinMPNN, was used to identify the amino acid sequence of a full-length protein that would form the newly identified strands.

But we’re not done with the AI tools yet. The combination of three-fingered toxins and a set of the newly designed proteins was then fed into DeepMind’s AlphaFold2 and the Rosetta protein structure software, and the strength of the interactions between them was estimated.
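
To make the sequence of steps easier to follow, here is a hypothetical Python sketch of that design-and-filter loop. The helper functions are placeholder stubs standing in for the real tools (RFdiffusion, ProteinMPNN, and AlphaFold2/Rosetta scoring), whose actual interfaces are far more involved; only the order of operations is meant to mirror the workflow described here.

```python
import random

def rfdiffusion_backbones(target, n):           # stub standing in for RFdiffusion
    return [f"backbone_{i}" for i in range(n)]

def proteinmpnn_sequence(backbone):             # stub standing in for ProteinMPNN
    return "".join(random.choice("ACDEFGHIKLMNPQRSTVWY") for _ in range(80))

def predicted_binding_score(sequence, target):  # stub standing in for AlphaFold2/Rosetta scoring
    return random.random()

def design_candidates(toxin="three-finger toxin", n_designs=1000, n_to_test=44):
    backbones = rfdiffusion_backbones(toxin, n_designs)        # new strands complementing the toxin
    sequences = [proteinmpnn_sequence(b) for b in backbones]   # full-length sequences that form those strands
    ranked = sorted(sequences, key=lambda s: predicted_binding_score(s, toxin), reverse=True)
    return ranked[:n_to_test]                                  # the top-scoring designs go on to lab testing

print(len(design_candidates()))  # 44 candidates, matching the study's first experimental round
```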

It’s only at this point that the researchers started making actual proteins, focusing on the candidates that the software suggested would interact the best with the three-fingered toxins. Forty-four of the computer-designed proteins were tested for their ability to interact with the three-fingered toxin, and the single protein that had the strongest interaction was used for further studies.

At this point, it was back to the AI, where RFdiffusion was used to suggest variants of this protein that might bind more effectively. About 15 percent of its suggestions did, in fact, interact more strongly with the toxin. The researchers then made both the toxin and the strongest inhibitor in bacteria and obtained the structure of their interactions. This confirmed that the software’s predictions were highly accurate.

Meta takes us a step closer to Star Trek’s universal translator


The computer science behind translating speech from 100 source languages.

In 2023, AI researchers at Meta interviewed 34 native Spanish and Mandarin speakers who lived in the US but didn’t speak English. The goal was to find out what people who constantly rely on translation in their day-to-day activities expect from an AI translation tool. What those participants wanted was basically a Star Trek universal translator or the Babel Fish from the Hitchhiker’s Guide to the Galaxy: an AI that could not only translate speech to speech in real time across multiple languages, but also preserve their voice, tone, mannerisms, and emotions. So, Meta assembled a team of over 50 people and got busy building it.

What this team came up with was a next-gen translation system called Seamless. The first building block of this system is described in Wednesday’s issue of Nature; it can translate speech among 36 different languages.

Language data problems

AI translation systems today are mostly focused on text, because huge amounts of text are available in a wide range of languages thanks to digitization and the Internet. Institutions like the United Nations or European Parliament routinely translate all their proceedings into the languages of all their member states, which means there are enormous databases comprising aligned documents prepared by professional human translators. You just needed to feed those huge, aligned text corpora into neural nets (or hidden Markov models before neural nets became all the rage) and you ended up with a reasonably good machine translation system. But there were two problems with that.

The first issue was those databases comprised formal documents, which made the AI translators default to the same boring legalese in the target language even if you tried to translate comedy. The second problem was speech—none of this included audio data.

The problem of language formality was mostly solved by including less formal sources like books, Wikipedia, and similar material in AI training databases. The scarcity of aligned audio data, however, remained. Both issues were at least theoretically manageable in high-resource languages like English or Spanish, but they got dramatically worse in low-resource languages like Icelandic or Zulu.

As a result, the AI translators we have today support an impressive number of languages in text, but things are complicated when it comes to translating speech. There are cascading systems that simply do this trick in stages. An utterance is first converted to text just as it would be in any dictation service. Then comes text-to-text translation, and finally the resulting text in the target language is synthesized into speech. Because errors accumulate at each of those stages, the performance you get this way is usually poor, and it doesn’t work in real time.
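
To see why the errors compound, consider a toy back-of-the-envelope calculation in Python; the per-stage accuracies below are invented purely to illustrate the effect and are not measurements of any real system.

```python
# Invented per-stage accuracies for a cascaded speech translation pipeline (illustration only).
stage_accuracy = {
    "speech-to-text": 0.92,
    "text-to-text translation": 0.90,
    "text-to-speech": 0.97,
}

end_to_end = 1.0
for stage, accuracy in stage_accuracy.items():
    end_to_end *= accuracy
    print(f"after {stage}: cumulative accuracy ~ {end_to_end:.2f}")

# Each stage also adds its own processing delay, which is why cascades struggle in real time.
```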

A few systems that can translate speech to speech directly do exist, but in most cases they only translate into English, not the other way around. Your foreign-language interlocutor can say something to you in one of the languages supported by tools like Google’s AudioPaLM, which will translate it into English speech, but you can’t have a conversation going both ways.

So, to pull off the Star Trek universal translator thing Meta’s interviewees dreamt about, the Seamless team started by sorting out the data scarcity problem. And they did it in quite a creative way.

Building a universal language

Warren Weaver, a mathematician and pioneer of machine translation, argued in 1949 that there might be a yet undiscovered universal language working as a common base of human communication. This common base of all our communication was exactly what the Seamless team went for in its search for data more than 70 years later. Weaver’s universal language turned out to be math—more precisely, multidimensional vectors.

Machines do not understand words as humans do. To make sense of them, they need to first turn them into sequences of numbers that represent their meaning. Those sequences of numbers are numerical vectors that are termed word embeddings. When you vectorize tens of millions of documents this way, you’ll end up with a huge multidimensional space where words with similar meaning that often go together, like “tea” and “coffee,” are placed close to each other. When you vectorize aligned text in two languages like those European Parliament proceedings, you end up with two separate vector spaces, and then you can run a neural net to learn how those two spaces map onto each other.
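
As a rough illustration of both ideas, here is a minimal numpy sketch: related words score high on cosine similarity, and a linear map learned from paired vectors aligns one embedding space with another. The three-dimensional “embeddings” are made up for demonstration and bear no relation to any real model.

```python
import numpy as np

# Toy 3-D "embeddings" for three English words (values invented for illustration).
en = {
    "tea": np.array([0.9, 0.1, 0.0]),
    "coffee": np.array([0.8, 0.2, 0.1]),
    "car": np.array([0.0, 0.9, 0.4]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(en["tea"], en["coffee"]))  # high: related words sit close together
print(cosine(en["tea"], en["car"]))     # low: unrelated words sit far apart

# Given the same items embedded in two languages (e.g. from aligned proceedings),
# a linear map can be learned that carries one vector space onto the other.
X = np.stack(list(en.values()))                              # "source language" vectors
Y = X @ np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]]) + 0.01   # pretend "target language" space
W, *_ = np.linalg.lstsq(X, Y, rcond=None)                    # least-squares mapping X -> Y
print(np.allclose(X @ W, Y, atol=0.05))                      # True: the learned map aligns the spaces
```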

But the Meta team didn’t have those nicely aligned texts for all the languages they wanted to cover. So, they vectorized all texts in all languages as if they were just a single language and dumped them into one embedding space called SONAR (Sentence-level Multimodal and Language-Agnostic Representations). Once the text part was done, they moved on to speech data, which was vectorized using a popular W2v (word to vector) tool and added to the same massive multilingual, multimodal space. Of course, each embedding carried metadata identifying its source language and whether it was text or speech before vectorization.

The team just used huge amounts of raw data—no fancy human labeling, no human-aligned translations. And then, the data mining magic happened.

SONAR embeddings represented entire sentences instead of single words. Part of the reason behind that was to control for differences between morphologically rich languages, where a single word may correspond to multiple words in morphologically simple languages. But the most important thing was that it ensured that sentences with similar meaning in multiple languages ended up close to each other in the vector space.

It was the same story with speech, too—a spoken sentence in one language was close to spoken sentences in other languages with similar meaning. It even worked between text and speech. So, the team simply assumed that embeddings in two different languages or two different modalities (speech or text) that are at a sufficiently close distance to each other are equivalent to the manually aligned texts of translated documents.

This produced huge amounts of automatically aligned data. The Seamless team suddenly got access to millions of aligned texts, even in low-resource languages, along with thousands of hours of transcribed audio. And they used all this data to train their next-gen translator.
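
Here is a rough sketch of what that mining step can look like, under the simplifying assumption of a shared sentence-embedding space (the vectors below are random stand-ins): any cross-lingual pair of embeddings that sits close enough together is kept as an automatically aligned training pair.

```python
import numpy as np

rng = np.random.default_rng(0)
lang_a = rng.normal(size=(5, 8))   # pretend sentence embeddings for five sentences in language A
# Three sentences in language B that are near-translations of A's sentences 2, 0, and 4.
lang_b = lang_a[[2, 0, 4]] + rng.normal(scale=0.05, size=(3, 8))

def normalize(m):
    return m / np.linalg.norm(m, axis=1, keepdims=True)

similarity = normalize(lang_a) @ normalize(lang_b).T   # cosine similarity between all A/B pairs

THRESHOLD = 0.95
mined_pairs = [(i, j) for i in range(similarity.shape[0])
                      for j in range(similarity.shape[1]) if similarity[i, j] > THRESHOLD]
print(mined_pairs)   # recovers the three planted near-translations as "aligned" pairs
```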

Seamless translation

The automatically generated data set was augmented with human-curated texts and speech samples where possible and used to train multiple AI translation models. The largest one was called SEAMLESSM4T v2. It could translate speech to speech from 101 source languages into any of 36 output languages, and translate text to text. It would also work as an automatic speech recognition system in 96 languages, translate speech to text from 101 into 96 languages, and translate text to speech from 96 into 36 languages—all from a single unified model. It also outperformed state-of-the-art cascading systems by 8 percent in speech-to-text and by 23 percent in speech-to-speech translation, based on scores from BLEU (Bilingual Evaluation Understudy), an algorithm commonly used to evaluate the quality of machine translation.
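
For reference, BLEU works by comparing a system’s output against human reference translations. A minimal example using the sacrebleu Python package, with invented sentences, looks like this:

```python
import sacrebleu  # pip install sacrebleu

hypotheses = ["the cat sat on the mat", "he went to the store yesterday"]
references = [["the cat sat on the mat", "he went to the shop yesterday"]]  # one reference per sentence

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")  # higher is better; 100 would mean a perfect match with the references
```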

But it can now do even more than that. The Nature paper published by Meta’s Seamless team ends with the SEAMLESSM4T models, but Nature has a long editorial process to ensure scientific accuracy. The paper published on January 15, 2025, was submitted in late November 2023. A quick search of arXiv.org, a repository of not-yet-peer-reviewed papers, turns up the details of two other models that the Seamless team has already built on top of SEAMLESSM4T: SeamlessStreaming and SeamlessExpressive, which take this AI even closer to making a Star Trek universal translator a reality.

SeamlessStreaming is meant to solve the translation latency problem. The baseline SEAMLESSM4T, despite all the bells and whistles, worked as a standard AI translation tool. You had to say what you wanted to say, push “translate,” and it spat out the translation. SeamlessStreaming was designed to take this experience a bit closer to what human simultaneous translators do—it translates what you’re saying as you speak, in a streaming fashion. SeamlessExpressive, on the other hand, is aimed at preserving the way you express yourself in translations. When you whisper or say something in a cheerful manner or shout out with anger, SeamlessExpressive will encode the features of your voice, like tone, prosody, volume, tempo, and so on, and transfer those into the output speech in the target language.

Sadly, it still can’t do both at the same time; you can only choose to go for either streaming or expressivity, at least at the moment. Also, the expressivity variant is very limited in supported languages—it only works in English, Spanish, French, and German. But at least it’s online so you can go ahead and give it a spin.

Nature, 2025.  DOI: 10.1038/s41586-024-08359-z

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

ChatGPT becomes more Siri-like with new scheduled tasks feature

OpenAI is making ChatGPT work a little more like older digital assistants with a new feature called Tasks, as reported by TechCrunch and others.

Currently in beta, Tasks allows users to direct the chatbot to send reminders or to generate responses to specific prompts at certain times; recurring tasks are also supported.

The feature is available to Plus, Team, and Pro subscribers starting today, while free users don’t have access.

To create a task, users need to select “4o with scheduled tasks” from the model picker and then direct ChatGPT using the same kind of plain language text prompts that drive everything else it does. ChatGPT will sometimes suggest tasks, too, but they won’t go into effect unless the user approves them.

The user can then make changes to assigned tasks through the same chat conversation, or they can use a new Tasks section of the ChatGPT apps to manage all currently assigned items. There’s currently a 10-task limit.

When the time comes to perform an assigned task, the ChatGPT mobile or desktop app will send a notification on schedule.

This update can be seen as OpenAI’s first step into the agentic AI space, where applications built using deep learning can operate relatively independently within certain boundaries, either replacing or easing the day-to-day responsibilities of information workers.

Amid a flurry of hype, Microsoft reorganizes entire dev team around AI

Microsoft CEO Satya Nadella has announced a dramatic restructuring of the company’s engineering organization, which is pivoting the company’s focus to developing the tools that will underpin agentic AI.

Dubbed “CoreAI – Platform and Tools,” the new division rolls the existing AI platform team and the previous developer division (responsible for everything from .NET to Visual Studio) along with some other teams into one big group.

As for what this group will be doing specifically, it’s basically everything that’s mission-critical to Microsoft in 2025, as Nadella tells it:

This new division will bring together Dev Div, AI Platform, and some key teams from the Office of the CTO (AI Supercomputer, AI Agentic Runtimes, and Engineering Thrive), with the mission to build the end-to-end Copilot & AI stack for both our first-party and third-party customers to build and run AI apps and agents. This group will also build out GitHub Copilot, thus having a tight feedback loop between the leading AI-first product and the AI platform to motivate the stack and its roadmap.

To accomplish all that, “Jay Parikh will lead this group as EVP.” Parikh was hired by Microsoft in October; he previously worked as the VP and global head of engineering at Meta.

The fact that the blog post doesn’t say anything about .NET or Visual Studio, instead emphasizing GitHub Copilot and anything and everything related to agentic AI, says a lot about how Nadella sees Microsoft’s future priorities.

So-called AI agents are applications that are given specified boundaries (action spaces) and a large memory capacity to independently do subsets of the kinds of work that human office workers do today. Some company leaders and AI commentators believe these agents will outright replace jobs, while others are more conservative, suggesting they’ll simply be powerful tools to streamline the jobs people already have.

Meta to cut 5% of employees deemed unfit for Zuckerberg’s AI-fueled future

Anticipating that 2025 will be an “intense year” requiring rapid innovation, Mark Zuckerberg reportedly announced that Meta would be cutting 5 percent of its workforce—targeting “lowest performers.”

Bloomberg reviewed the internal memo explaining the cuts, which was posted to Meta’s internal Workplace forum Tuesday. In it, Zuckerberg confirmed that Meta was shifting its strategy to “move out low performers faster” so that Meta can hire new talent to fill those vacancies this year.

“I’ve decided to raise the bar on performance management,” Zuckerberg said. “We typically manage out people who aren’t meeting expectations over the course of a year, but now we’re going to do more extensive performance-based cuts during this cycle.”

Cuts will likely impact more than 3,600 employees, as Meta’s most recent headcount in September totaled about 72,000 employees. It may not be as straightforward as letting go of anyone with an unsatisfactory performance review, as Zuckerberg said that any employee not currently meeting expectations could be spared if Meta is “optimistic about their future performance,” The Wall Street Journal reported.

Any employees affected will be notified by February 10 and receive “generous severance,” Zuckerberg’s memo promised.

This is the biggest round of cuts at Meta since 2023, when Meta laid off 10,000 employees during what Zuckerberg dubbed the “year of efficiency.” Those layoffs followed a prior round where 11,000 lost their jobs and Zuckerberg realized that “leaner is better.” He told employees in 2023 that a “surprising result” from reducing the workforce was “that many things have gone faster.”

“A leaner org will execute its highest priorities faster,” Zuckerberg wrote in 2023. “People will be more productive, and their work will be more fun and fulfilling. We will become an even greater magnet for the most talented people. That’s why in our Year of Efficiency, we are focused on canceling projects that are duplicative or lower priority and making every organization as lean as possible.”

Getting an all-optical AI to handle non-linear math

The problem is that this cascading requires massive parallel computations that, when done on standard computers, take tons of energy and time. Bandyopadhyay’s team feels this problem can be solved by performing the equivalent operations using photons rather than electrons. In photonic chips, information can be encoded in optical properties like polarization, phase, magnitude, frequency, and wavevector. While this would be extremely fast and energy-efficient, building such chips isn’t easy.

Siphoning light

“Conveniently, photonics turned out to be particularly good at linear matrix operations,” Bandyopadhyay claims. A group at MIT led by Dirk Englund, a professor who is a co-author of Bandyopadhyay’s study, demonstrated a photonic chip doing matrix multiplication entirely with light in 2017. What the field struggled with, though, was implementing non-linear functions in photonics.

The usual solution, so far, relied on bypassing the problem by doing linear algebra on photonic chips and offloading non-linear operations to external electronics. This, however, increased latency, since the information had to be converted from light to electrical signals, processed on an external processor, and converted back to light. “And bringing the latency down is the primary reason why we want to build neural networks in photonics,” Bandyopadhyay says.

To solve this problem, Bandyopadhyay and his colleagues designed and built what is likely to be the world’s first chip that can compute the entire deep neural net, including both linear and non-linear operations, using photons. “The process starts with an external laser with a modulator that feeds light into the chip through an optical fiber. This way we convert electrical inputs to light,” Bandyopadhyay explains.

The light is then fanned out to six channels and fed into a layer of six neurons that perform linear matrix multiplication using an array of devices called Mach-Zehnder interferometers. “They are essentially programmable beam splitters, taking two optical fields and mixing them coherently to produce two output optical fields. By applying the voltage, you can control how much those two inputs mix,” Bandyopadhyay says.
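
The mixing Bandyopadhyay describes can be modeled with a few lines of linear algebra: two 50:50 couplers around a voltage-controlled phase shifter. The numpy sketch below uses one common textbook convention (conventions vary between papers) purely to show how the phase sets the splitting ratio.

```python
import numpy as np

def mzi(theta):
    """2x2 transfer matrix of a Mach-Zehnder interferometer: coupler, phase shift, coupler."""
    coupler = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beam splitter
    phase = np.diag([np.exp(1j * theta), 1.0])            # phase set by the applied voltage
    return coupler @ phase @ coupler

light_in = np.array([1.0, 0.0])                # all of the light enters the first port
for theta in (0.0, np.pi / 2, np.pi):
    powers = np.abs(mzi(theta) @ light_in) ** 2
    print(f"theta = {theta:.2f} -> output powers {powers.round(2)}")
# The split follows sin^2(theta/2) and cos^2(theta/2), and the powers always sum to 1 (lossless mixing).
```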

Three bizarre home devices and a couple good things at CES 2025


You can’t replace cats with AI, not yet

Some quietly good things made an appearance at CES 2025, amidst the AI slush.

Credit: Verity Burns/WIRED UK

Every year, thousands of product vendors, journalists, and gadget enthusiasts gather in an unreasonable city to gawk at mostly unrealistic products.

To be of service to our readers, Ars has done the work of looking through hundreds of such items presented at the 2025 Consumer Electronics Show, pulling out the most bizarre, unnecessary, and head-scratching items. Andrew Cunningham swept across PC and gaming accessories. This writer stuck to goods related to the home.

It’s a lie to say it’s all a prank, so I snuck in a couple of actually good things for human domiciles announced during CES. But the stuff you’ll want to tell your family and friends about in mock disbelief? Plenty of that, still.

AI-powered spice dispenser: Spicerr

A hand holding a white tubular device, with spice tubes loaded into a bottom area, spices dropping out of the bottom.

Credit: Spicerr

Part of my job is to try and stretch my viewpoint outward—to encompass people who might not have the same experiences and who might want different things from technology. Not everybody is a professional writer, pecking away in Markdown about the latest turn-based strategy game. You must try to hear many timbres inside the common voice in your head when addressing new products and technologies.

I cannot get there with Spicerr, the “world’s first AI-powered spice dispenser,” even leaving aside the AI bit. Is the measurement and dumping of spices into a dish even five percent of the overall challenge? Will a mechanical dispenser be any more precise than standard teaspoons? Are there many kinds of food on which you would want to sprinkle a “customized blend” of spices? Are there home cooks so dedicated to fresh, bright flavors that they want their spices delivered in small vials, at presumably premium prices, rather than simply having small quantities of regularly restocked essentials?

Maybe the Spicerr would be a boon to inexperienced cooks, whose relatives all know them to under-season their food. Rather than buying them a battery-powered device they must charge in order to “take the guesswork out of seasoning,” though, you could … buy them good cookbooks, or a Times Cooking subscription, or just a few new bottles of paprika, oregano, cumin, cayenne, and turmeric.

Philips Hue’s (sigh) AI-powered lighting assistants

Image of an AI assistant responding to prompts from a user.

Credit: Signify

I’m not dismayed that Philips Hue is jumping on the “This has AI now” bandwagon. Well, I am, but not specifically dismayed, because every vendor at CES this year is hawking AI. No, the bad thing here is that Hue lights are devices that work great. Maybe Philips’ pursuit of an “AI assistant” to help you figure out that Halloween lights should be orange-ish won’t distract them from their core product’s reliability. But I have my doubts.

Hue has recently moved from a relatively open lighting system to an app-and-account-required, cloud-controlled scheme, supposedly in the name of security and user control. Having an AI assistant is perhaps another way to sell services beyond hardware, like the $130 or $3/month LG TV app it now offers. The AI service is free for now, but charging for it in the future is far from impossible.

Again, none of this should necessarily affect people who, like me, use Hue bulbs to have a porch light come on at sunset or turn a dim, warm hue when it’s time to wind down. But it felt like Hue, which charges a very decent amount for its hardware, might have held off on chasing this trend.

Robot vacuums doing way too much

Switchbot K20+ Pro holding up a tablet while a woman does a yoga pose in front of an insanely wealthy-person view of a California cliffside.

Credit: Switchbot

Robot vacuums are sometimes worth the hassle and price… if you don’t mind doing a pre-vacuum sweep of things that might get stuck in its brushes, you’ve got room for an emptying base or will empty it yourself, and you don’t mind that they usually miss floor edges and corners. They’re fine, I’m saying.

Robot vacuum makers have steadfastly refused to accept “fine” and are out way over their skis this year. At a single trade show, you can find:

  • Eureka’s J15 Max Ultra, which uses “IntelliView AI 2.0,” infrared, and FHD vision to detect liquid spills and switch brushes and vacuum modes to clean better without spreading them.
  • Roborock’s Saros Z70 has a “mechanical task arm” that can pick up objects like socks and small debris (up to 10.5 ounces) and put them in a pre-determined pile spot.
  • SwitchBot’s modular K20+ Pro, which is a vacuum onto which you can attach air purifiers, tablet mounts, security cameras, or other things you want rolling around your home.
  • Dreame’s X50, which can pivot to clean some small ledges but cannot actually climb.
  • The Narwal Flow, which has a wide, flat, off-center mop to reach wall edges.

Pricing and availability are not available for these vacuums yet, but each is likely to set you back the equivalent of at least one new MacBook. They are also rather big devices to stash in your home (it’s hard to hide an arm or an air purifier). Each is an early adopter device, and getting replacement consumable parts for them long-term is an uncertain bet. I’m not sure who they are for, but that has not stopped this apparently fertile field from growing many new products.

Now for good things, starting with Google Home

Nest Hub second generation, on a nightstand with a bamboo top and dim lamp in the near background.

Credit: Corey Gaskin

I’ve been watching and occasionally writing about the progress of the nascent Matter smart home protocol, somewhat in the vein of a high school coach who knows their team is held back by a lack of coordination, communication, and consistent direction. What Matter wants to do is vital for the future of the smart home, but it’s very much a loose scrimmage right now.

And yet, this week, in a CES-adjacent announcement, Google reminded me that Matter can really, uh, matter. All of Google Home’s hub devices—Nest screens and speakers, Chromecasts, Google TV devices running at least Android 14, and a few other gadgets—can interoperate with Matter devices locally, with no cloud required.

That means people with a Google Home setup can switch devices, adjust volumes, and otherwise control devices faster, with Internet outages or latency no longer an issue. Local, no-cloud-required control of devices across brands is one of Matter’s key promises, and seeing it happen inside one major home brand is encouraging.

More we’ll-see-what-happens news is the unveiling of the public Home APIs, which promise to make it easier for third-party devices to be set up, integrated, and automated in a Google Home setup. Even if you’re skeptical of Google’s long-term support for APIs, the company is also working with the Matter group to improve the Matter certification process for all devices. If enthusiasm for the Google Home APIs falters, device makers should still have Matter to fall back on.

This cat tower is also an air purifier; it is also good

Two fake cats, sitting on seats atop an air purifier at CES 2025

Credit: Verity Burns/WIRED UK

There are a lot of phones out there that need charging and a bunch of gamers who, for some reason, need even more controllers and screens to play on. But there is another, eternally underserved market getting some attention at CES: cats wanting to sit.

LG, which primarily concerned itself with stuffing generative AI interfaces into every other device at CES 2025, crafted something that feels like a real old-time trade show gimmick. There is no guarantee that your cat will use the AeroCat Tower; some cats may just sit inside the cardboard box it came in out of spite. But should they deign to luxuriate on it, the AeroCat will provide gentle heat beneath them, weigh them, and give you a record of their sleep habits. Also, it purifies the air in that room.

There is no pricing or availability information yet. But if you like your cats, you want to combine the function of a cat tower and air purifier, or you just want to consider something even just a little bit fun about the march of technology, look out for this one.

Kevin is a senior technology reporter at Ars Technica, covering open-source software, PC gaming, home automation, repairability, e-bikes, and tech history. He has previously worked at Lifehacker, Wirecutter, iFixit, and Carbon Switch.

161 years ago, a New Zealand sheep farmer predicted AI doom

The text anticipated several modern AI safety concerns, including the possibility of machine consciousness, self-replication, and humans losing control of their technological creations. These themes later appeared in works like Isaac Asimov’s The Evitable Conflict, Frank Herbert’s Dune novels (Butler possibly served as the inspiration for the term “Butlerian Jihad“), and the Matrix films.

A model of Charles Babbage’s Analytical Engine, a calculating machine invented in 1837 but never built during Babbage’s lifetime. Credit: DE AGOSTINI PICTURE LIBRARY via Getty Images

Butler’s letter dug deep into the taxonomy of machine evolution, discussing mechanical “genera and sub-genera” and pointing to examples like how watches had evolved from “cumbrous clocks of the thirteenth century”—suggesting that, like some early vertebrates, mechanical species might get smaller as they became more sophisticated. He expanded these ideas in his 1872 novel Erewhon, which depicted a society that had banned most mechanical inventions. In his fictional society, citizens destroyed all machines invented within the previous 300 years.

Butler’s concerns about machine evolution received mixed reactions, according to Butler in the preface to the second edition of Erewhon. Some reviewers, he said, interpreted his work as an attempt to satirize Darwin’s evolutionary theory, though Butler denied this. In a letter to Darwin in 1865, Butler expressed his deep appreciation for The Origin of Species, writing that it “thoroughly fascinated” him and explained that he had defended Darwin’s theory against critics in New Zealand’s press.

What makes Butler’s vision particularly remarkable is that he was writing in a vastly different technological context when computing devices barely existed. While Charles Babbage had proposed his theoretical Analytical Engine in 1837—a mechanical computer using gears and levers that was never built in his lifetime—the most advanced calculating devices of 1863 were little more than mechanical calculators and slide rules.

Butler extrapolated from the simple machines of the Industrial Revolution, where mechanical automation was transforming manufacturing, but nothing resembling modern computers existed. The first working program-controlled computer wouldn’t appear for another 70 years, making his predictions of machine intelligence strikingly prescient.

Some things never change

The debate Butler started continues today. Two years ago, the world grappled with what one might call the “great AI takeover scare of 2023.” OpenAI’s GPT-4 had just been released, and researchers evaluated its “power-seeking behavior,” echoing concerns about potential self-replication and autonomous decision-making.

Viral ChatGPT-powered sentry gun gets shut down by OpenAI

OpenAI says it has cut off API access to an engineer whose video of a motorized sentry gun controlled by ChatGPT-powered commands has set off a viral firestorm of concerns about AI-powered weapons.

An engineer going by the handle sts_3d started posting videos of a motorized, auto-rotating swivel chair project in August. By November, that same assembly appeared to seamlessly morph into the basis for a sentry gun that could quickly rotate to arbitrary angles and activate a servo to fire precisely aimed projectiles (though only blanks and simulated lasers are shown being fired in his videos).

Earlier this week, though, sts_3d started getting wider attention for a new video showing the sentry gun’s integration with OpenAI’s real-time API. In the video, the gun uses that ChatGPT integration to aim and fire based on spoken commands from sts_3d and even responds in a chirpy voice afterward.

“If you need any other assistance, please let me know,” the ChatGPT-powered gun says after firing a volley at one point. “Good job, you saved us,” sts_3d responds, deadpan.

“I’m glad I could help!” ChatGPT intones happily.

In response to a comment request from Futurism, OpenAI said it had “proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry. OpenAI’s Usage Policies prohibit the use of our services to develop or use weapons or to automate certain systems that can affect personal safety.”

Halt, intruder alert!

The “voice-powered killer AI robot angle” has garnered plenty of viral attention for sts_3d’s project in recent days. But the ChatGPT integration shown in his video doesn’t exactly reach Terminator levels of a terrifying killing machine. Here, ChatGPT instead ends up looking more like a fancy, overwrought voice-activated remote control for a legitimately impressive gun mount.
