Backdoor infecting VPNs used “magic packets” for stealth and security

When threat actors use backdoor malware to gain access to a network, they want to make sure all their hard work can’t be leveraged by competing groups or detected by defenders. One countermeasure is to equip the backdoor with a passive agent that remains dormant until it receives what’s known in the business as a “magic packet.” On Thursday, researchers revealed that a never-before-seen backdoor that quietly took hold of dozens of enterprise VPNs running Juniper Networks’ Junos OS has been doing just that.

J-Magic, the tracking name for the backdoor, goes one step further to prevent unauthorized access. After receiving a magic packet hidden in the normal flow of TCP traffic, it relays a challenge to the device that sent it. The challenge comes in the form of a string of text that’s encrypted using the public portion of an RSA key. The initiating party must then respond with the corresponding plaintext, proving it has access to the secret key.
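
That handshake is a textbook public-key challenge-response. Here is a minimal sketch of the idea in Python using the third-party cryptography package; the key size, padding scheme, and message format are our assumptions for illustration, not details recovered from the malware:

```python
# A sketch of the challenge-response described above. Key size, OAEP
# padding, and the challenge format are illustrative assumptions.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The operator holds the private key; the implant embeds only the public half.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Implant side: after seeing a magic packet, encrypt a random challenge
# with the embedded public key and send it to the initiating party.
challenge = os.urandom(16).hex().encode()
ciphertext = public_key.encrypt(
    challenge,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Initiator side: only the holder of the private key can recover the
# plaintext, proving its identity before the backdoor grants access.
response = private_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert response == challenge  # access granted only on an exact match
```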

Open sesame

The lightweight backdoor is also notable because it resided only in memory, a trait that makes detection harder for defenders. The combination prompted researchers at Lumen Technologies’ Black Lotus Labs to sit up and take notice.

“While this is not the first discovery of magic packet malware, there have only been a handful of campaigns in recent years,” the researchers wrote. “The combination of targeting Junos OS routers that serve as a VPN gateway and deploying a passive listening in-memory only agent, makes this an interesting confluence of tradecraft worthy of further observation.”

The researchers found J-Magic on VirusTotal and determined that it had run inside the networks of 36 organizations. They still don’t know how the backdoor got installed. Here’s how the magic packet worked:

The passive agent is deployed to quietly observe all TCP traffic sent to the device. It discreetly analyzes the incoming packets and watches for one of five specific sets of data contained in them. The conditions are obscure enough to blend in with the normal flow of traffic, so network defense products won’t flag them as a threat. At the same time, they’re unusual enough that they’re not likely to be found in normal traffic.
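
In other words, the implant behaves like a sniffer rather than a server, never opening a listening port of its own. A hedged sketch of that passive-listener pattern in Python using scapy follows; the two trigger conditions shown are hypothetical stand-ins, not the five actual patterns J-Magic matches:

```python
# A sketch of a passive magic-packet listener. The marker bytes, offset,
# and port/flag combination below are invented for illustration.
from scapy.all import IP, TCP, Raw, sniff

MAGIC_MARKER = b"\x13\x37"  # hypothetical trigger bytes

def looks_like_magic(pkt) -> bool:
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        return False
    payload = bytes(pkt[Raw].load) if pkt.haslayer(Raw) else b""
    # Condition 1 (hypothetical): marker bytes at a fixed payload offset.
    if len(payload) >= 4 and payload[2:4] == MAGIC_MARKER:
        return True
    # Condition 2 (hypothetical): a specific source port and SYN flag.
    if pkt[TCP].sport == 59020 and pkt[TCP].flags == "S":
        return True
    return False

def handle(pkt):
    if looks_like_magic(pkt):
        print(f"magic packet from {pkt[IP].src}; would start RSA challenge")

# Observe all TCP traffic without ever opening a port of its own.
sniff(filter="tcp", prn=handle, store=False)
```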

Cutting-edge Chinese “reasoning” model rivals OpenAI o1—and it’s free to download

Unlike conventional LLMs, these SR models take extra time to produce responses, and this extra time often increases performance on tasks involving math, physics, and science. And this latest open model is turning heads for how quickly it has apparently caught up to OpenAI.

For example, DeepSeek reports that R1 outperformed OpenAI’s o1 on several benchmarks and tests, including AIME (a mathematical reasoning test), MATH-500 (a collection of word problems), and SWE-bench Verified (a programming assessment tool). As we usually mention, AI benchmarks need to be taken with a grain of salt, and these results have yet to be independently verified.

A chart of DeepSeek R1 benchmark results, created by DeepSeek. Credit: DeepSeek

TechCrunch reports that three Chinese labs—DeepSeek, Alibaba, and Moonshot AI’s Kimi—have now released models they say match o1’s capabilities, with DeepSeek first previewing R1 in November.

But the new DeepSeek model comes with a catch if run in the cloud-hosted version—being Chinese in origin, R1 will not generate responses about certain topics like Tiananmen Square or Taiwan’s autonomy, as it must “embody core socialist values,” according to Chinese Internet regulations. This filtering comes from an additional moderation layer that isn’t an issue if the model is run locally outside of China.

Even with the potential censorship, Dean Ball, an AI researcher at George Mason University, wrote on X, “The impressive performance of DeepSeek’s distilled models (smaller versions of r1) means that very capable reasoners will continue to proliferate widely and be runnable on local hardware, far from the eyes of any top-down control regime.”

Home Microsoft 365 plans use Copilot AI features as pretext for a price hike

Microsoft hasn’t said how long this “limited time” offer will last, but presumably it will only stick around for a year or two to ease the transition between the old pricing and the new pricing. New subscribers won’t be offered the option to pay for the Classic plans.

Subscribers on the Personal and Family plans can’t use Copilot indiscriminately; they get 60 AI credits per month to use across all the Office apps, credits that can also be used to generate images or text in Windows apps like Designer, Paint, and Notepad. It’s not clear how these will stack with the 15 credits that Microsoft offers for free for apps like Designer, or the 50 credits per month Microsoft is handing out for Image Cocreator in Paint.

Those who want unlimited usage and access to the newest AI models are still asked to pay $20 per month for a Copilot Pro subscription.

As Microsoft notes, this is the first price increase it has ever implemented for the personal Microsoft 365 subscriptions in the US, which have stayed at the same levels since being introduced as Office 365 over a decade ago. Pricing for the business plans and pricing in other countries have increased before. Pricing for Office Home 2024 ($150) and Office Home & Business 2024 ($250), which can’t access Copilot or other Microsoft 365 features, is also the same as it was before.

161 years ago, a New Zealand sheep farmer predicted AI doom

The text anticipated several modern AI safety concerns, including the possibility of machine consciousness, self-replication, and humans losing control of their technological creations. These themes later appeared in works like Isaac Asimov’s The Evitable Conflict, Frank Herbert’s Dune novels (Butler possibly served as the inspiration for the term “Butlerian Jihad“), and the Matrix films.

A model of Charles Babbage’s Analytical Engine, a calculating machine invented in 1837 but never built during Babbage’s lifetime. Credit: DE AGOSTINI PICTURE LIBRARY via Getty Images

Butler’s letter dug deep into the taxonomy of machine evolution, discussing mechanical “genera and sub-genera” and pointing to examples like how watches had evolved from “cumbrous clocks of the thirteenth century”—suggesting that, like some early vertebrates, mechanical species might get smaller as they became more sophisticated. He expanded these ideas in his 1872 novel Erewhon, which depicted a society that had banned most mechanical inventions. In his fictional society, citizens destroyed all machines invented within the previous 300 years.

Butler’s concerns about machine evolution received mixed reactions, according to Butler in the preface to the second edition of Erewhon. Some reviewers, he said, interpreted his work as an attempt to satirize Darwin’s evolutionary theory, though Butler denied this. In a letter to Darwin in 1865, Butler expressed his deep appreciation for The Origin of Species, writing that it “thoroughly fascinated” him and explained that he had defended Darwin’s theory against critics in New Zealand’s press.

What makes Butler’s vision particularly remarkable is that he was writing in a vastly different technological context when computing devices barely existed. While Charles Babbage had proposed his theoretical Analytical Engine in 1837—a mechanical computer using gears and levers that was never built in his lifetime—the most advanced calculating devices of 1863 were little more than mechanical calculators and slide rules.

Butler extrapolated from the simple machines of the Industrial Revolution, where mechanical automation was transforming manufacturing, but nothing resembling modern computers existed. The first working program-controlled computer wouldn’t appear for another 70 years, making his predictions of machine intelligence strikingly prescient.

Some things never change

The debate Butler started continues today. Two years ago, the world grappled with what one might call the “great AI takeover scare of 2023.” OpenAI’s GPT-4 had just been released, and researchers evaluated its “power-seeking behavior,” echoing concerns about potential self-replication and autonomous decision-making.

Microsoft sues service for creating illicit content with its AI platform

Microsoft and others forbid using their generative AI systems to create certain kinds of content, including material that features or promotes sexual exploitation or abuse, is erotic or pornographic, or attacks, denigrates, or excludes people based on race, ethnicity, national origin, gender, gender identity, sexual orientation, religion, age, disability status, or similar traits. Microsoft also doesn’t allow the creation of content containing threats, intimidation, promotion of physical harm, or other abusive behavior.

Besides expressly banning such usage of its platform, Microsoft has also developed guardrails that inspect both the prompts users enter and the resulting output for signs that the requested content violates these terms. These code-based restrictions have been repeatedly bypassed in recent years, in some cases through benign hacks performed by researchers and in others by malicious threat actors.

Microsoft didn’t outline precisely how the defendants’ software was allegedly designed to bypass the guardrails the company had created.

Masada wrote:

Microsoft’s AI services deploy strong safety measures, including built-in safety mitigations at the AI model, platform, and application levels. As alleged in our court filings unsealed today, Microsoft has observed a foreign-based threat-actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites. In doing so, they sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services. Cybercriminals then used these services and resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content. Upon discovery, Microsoft revoked cybercriminal access, put in place countermeasures, and enhanced its safeguards to further block such malicious activity in the future.

The lawsuit alleges the defendants’ service violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act and constitutes wire fraud, access device fraud, common law trespass, and tortious interference. The complaint seeks an injunction enjoining the defendants from engaging in “any activity herein.”

AI could create 78 million more jobs than it eliminates by 2030—report

On Wednesday, the World Economic Forum (WEF) released its Future of Jobs Report 2025, with CNN immediately highlighting the finding that 40 percent of companies plan workforce reductions due to AI automation. But the report’s broader analysis paints a far more nuanced picture than CNN’s headline suggests: It finds that AI could create 170 million new jobs globally while eliminating 92 million positions, resulting in a net increase of 78 million jobs by 2030.

“Half of employers plan to re-orient their business in response to AI,” writes the WEF in the report. “Two-thirds plan to hire talent with specific AI skills, while 40% anticipate reducing their workforce where AI can automate tasks.”

The survey collected data from 1,000 companies that employ 14 million workers globally. The WEF conducts its employment analysis every two years to help policymakers, business leaders, and workers make decisions about hiring trends.

The new report points to specific skills that will dominate hiring by 2030. Companies ranked AI and big data expertise, networks and cybersecurity, and technological literacy as the three most in-demand skill sets.

The WEF identified AI as the biggest potential job creator among new technologies, with 86 percent of companies expecting AI to transform their operations by 2030.

Declining job categories

The WEF report also identifies specific job categories facing decline. Postal service clerks, executive secretaries, and payroll staff top the list of shrinking roles, with changes driven by factors including (but not limited to) AI adoption. And for the first time, graphic designers and legal secretaries appear among the fastest-declining positions, which the WEF tentatively links to generative AI’s expanding capabilities in creative and administrative work.

Ongoing attacks on Ivanti VPNs install a ton of sneaky, well-written malware

Networks protected by Ivanti VPNs are under active attack by well-resourced hackers who are exploiting a critical vulnerability that gives them complete control over the network-connected devices.

Hardware maker Ivanti disclosed the vulnerability, tracked as CVE-2025-0282, on Wednesday and warned that it was under active exploitation against some customers. The vulnerability, which can be exploited to execute malicious code with no authentication required, is present in the company’s Connect Secure VPN and its Policy Secure and ZTA Gateways products. Ivanti released a security patch at the same time. It upgrades Connect Secure devices to version 22.7R2.5.

Well-written, multifaceted

According to Google-owned security provider Mandiant, the vulnerability has been actively exploited against “multiple compromised Ivanti Connect Secure appliances” since December, a month before the then-zero-day came to light. After exploiting the vulnerability, the attackers go on to install two never-before-seen malware packages, tracked as DRYHOOK and PHASEJAM, on some of the compromised devices.

PHASEJAM is a well-written and multifaceted bash shell script. It first installs a web shell that gives the remote hackers privileged control of devices. It then injects a function into the Connect Secure update mechanism that’s intended to simulate the upgrading process.

“If the ICS administrator attempts an upgrade, the function displays a visually convincing upgrade process that shows each of the steps along with various numbers of dots to mimic a running process,” Mandiant said. The company continued:

PHASEJAM injects a malicious function into the /home/perl/DSUpgrade.pm file named processUpgradeDisplay(). The functionality is intended to simulate an upgrading process that involves 13 steps, with each of those taking a predefined amount of time. If the ICS administrator attempts an upgrade, the function displays a visually convincing upgrade process that shows each of the steps along with various numbers of dots to mimic a running process. Further details are provided in the System Upgrade Persistence section.
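
The display trick itself is mundane, which is part of what makes it convincing. Below is a toy re-creation in Python of the kind of fake progress output Mandiant describes; the step names and timings are invented, and the real implant does this from a bash script injected into DSUpgrade.pm rather than anything like this:

```python
# A toy fake-upgrade display in the spirit of PHASEJAM's technique.
# Step names and durations are invented; the real routine shows 13 steps.
import sys
import time

FAKE_STEPS = [("Verifying package integrity", 3),
              ("Extracting system image", 5),
              ("Applying configuration", 2)]  # the real thing has 13 steps

for name, seconds in FAKE_STEPS:
    sys.stdout.write(name)
    sys.stdout.flush()
    for _ in range(seconds):
        time.sleep(1)          # predefined delay per step
        sys.stdout.write(".")  # dots mimic a running process
        sys.stdout.flush()
    print(" done")

print("Upgrade complete.")  # no upgrade actually happened
```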

The attackers are also using a previously seen piece of malware tracked as SPAWNANT on some devices. One of its functions is to disable an integrity checker tool (ICT) that Ivanti has built into recent VPN versions and that is designed to inspect device files for unauthorized additions. SPAWNANT does this by replacing the expected SHA256 cryptographic hash of a core file with the hash of the file after it has been infected. As a result, when the tool is run on compromised devices, it reports that nothing is amiss.
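
To see why hash substitution is so effective, consider how a simple file-hash integrity check works in principle. The sketch below is a generic Python illustration with a hypothetical JSON manifest; it is not Ivanti’s actual ICT implementation:

```python
# Generic file-hash integrity check. The manifest format and paths are
# hypothetical; Ivanti's real ICT works differently in its details.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest_path: Path) -> list[str]:
    """Return files whose current hash differs from the recorded one."""
    manifest = json.loads(manifest_path.read_text())
    return [name for name, expected in manifest.items()
            if sha256_of(Path(name)) != expected]

# If malware tampers with a core file AND rewrites its manifest entry
# with the post-infection hash, verify() reports nothing amiss -- the
# SPAWNANT trick. The defense is keeping the manifest (or a signature
# over it) somewhere the device itself cannot rewrite.
```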

Here’s how hucksters are manipulating Google to promote shady Chrome extensions

The people overseeing the security of Google’s Chrome browser explicitly forbid third-party extension developers from trying to manipulate how the browser extensions they submit are presented in the Chrome Web Store. The policy specifically calls out search-manipulating techniques such as listing multiple extensions that provide the same experience or plastering extension descriptions with loosely related or unrelated keywords.

On Wednesday, security and privacy researcher Wladimir Palant revealed that developers are flagrantly violating those terms in hundreds of extensions currently available for download from Google. As a result, searches for a particular term or terms can return extensions that are unrelated, inferior knockoffs, or carry out abusive tasks such as surreptitiously monetizing web searches, something Google expressly forbids.

Not looking? Don’t care? Both?

A search Wednesday morning in California for Norton Password Manager, for example, returned not only the official extension but three others, all of which are unrelated at best and potentially abusive at worst. The results may look different for searches at other times or from different locations.

Search results for Norton Password Manager.

It’s unclear why someone who uses a password manager would be interested in spoofing their time zone or boosting the audio volume. Yes, they’re all extensions for tweaking or otherwise extending the Chrome browsing experience, but isn’t every extension? The Chrome Web Store doesn’t want extension users to get pigeonholed or to see the list of offerings as limited, so it doesn’t just return the title searched for. Instead, it draws inferences from descriptions of other extensions in an attempt to promote ones that may also be of interest.

In many cases, developers are exploiting Google’s eagerness to promote potentially related extensions in campaigns that foist offerings that are irrelevant or abusive. But wait, Chrome security people have put developers on notice that they’re not permitted to engage in keyword spam and other search-manipulating techniques. So, how is this happening?

Time to check if you ran any of these 33 malicious Chrome extensions

Screenshot showing the phishing email sent to Cyberhaven extension developers. Credit: Amit Assaraf

A link in the email led to a Google consent screen requesting access permission for an OAuth application named Privacy Policy Extension. A Cyberhaven developer granted the permission and, in the process, unknowingly gave the attacker the ability to upload new versions of Cyberhaven’s Chrome extension to the Chrome Web Store. The attacker then used the permission to push out the malicious version 24.10.4.

Screenshot showing the Google permission request. Credit: Amit Assaraf

As word of the attack spread in the early hours of December 25, developers and researchers discovered that other extensions were targeted, in many cases successfully, by the same spear phishing campaign. John Tuckner, founder of Secure Annex, a browser extension analysis and management firm, said that as of Thursday afternoon, he knew of 19 other Chrome extensions that were similarly compromised. In every case, the attacker used spear phishing to push a new malicious version and custom, look-alike domains to issue payloads and receive authentication credentials. Collectively, the 20 extensions had 1.46 million downloads.

“For many I talk to, managing browser extensions can be a lower priority item in their security program,” Tuckner wrote in an email. “Folks know they can present a threat, but rarely are teams taking action on them. We’ve often seen in security [that] one or two incidents can cause a reevaluation of an organization’s security posture. Incidents like this often result in teams scrambling to find a way to gain visibility and understanding of impact to their organizations.”

The earliest compromise occurred in May 2024. Tuckner provided the following spreadsheet:

Name | ID | Version | Patch Available | Users | Start | End
VPNCity | nnpnnpemnckcfdebeekibpiijlicmpom | 2.0.1 | FALSE | 10,000 | 12/12/24 | 12/31/24
Parrot Talks | kkodiihpgodmdankclfibbiphjkfdenh | 1.16.2 | TRUE | 40,000 | 12/25/24 | 12/31/24
Uvoice | oaikpkmjciadfpddlpjjdapglcihgdle | 1.0.12 | TRUE | 40,000 | 12/26/24 | 12/31/24
Internxt VPN | dpggmcodlahmljkhlmpgpdcffdaoccni | 1.1.1 (fixed in 1.2.0) | TRUE | 10,000 | 12/25/24 | 12/29/24
Bookmark Favicon Changer | acmfnomgphggonodopogfbmkneepfgnh | 4.00 | TRUE | 40,000 | 12/25/24 | 12/31/24
Castorus | mnhffkhmpnefgklngfmlndmkimimbphc | 4.40 (fixed in 4.41) | TRUE | 50,000 | 12/26/24 | 12/27/24
Wayin AI | cedgndijpacnfbdggppddacngjfdkaca | 0.0.11 | TRUE | 40,000 | 12/19/24 | 12/31/24
Search Copilot AI Assistant for Chrome | bbdnohkpnbkdkmnkddobeafboooinpla | 1.0.1 | TRUE | 20,000 | 7/17/24 | 12/31/24
VidHelper – Video Downloader | egmennebgadmncfjafcemlecimkepcle | 2.2.7 | TRUE | 20,000 | 12/26/24 | 12/31/24
AI Assistant – ChatGPT and Gemini for Chrome | bibjgkidgpfbblifamdlkdlhgihmfohh | 0.1.3 | FALSE | 4,000 | 5/31/24 | 10/25/24
TinaMind – The GPT-4o-powered AI Assistant! | befflofjcniongenjmbkgkoljhgliihe | 2.13.0 (fixed in 2.14.0) | TRUE | 40,000 | 12/15/24 | 12/20/24
Bard AI chat | pkgciiiancapdlpcbppfkmeaieppikkk | 1.3.7 | FALSE | 100,000 | 9/5/24 | 10/22/24
Reader Mode | llimhhconnjiflfimocjggfjdlmlhblm | 1.5.7 | FALSE | 300,000 | 12/18/24 | 12/19/24
Primus (prev. PADO) | oeiomhmbaapihbilkfkhmlajkeegnjhe | 3.18.0 (fixed in 3.20.0) | TRUE | 40,000 | 12/18/24 | 12/25/24
Cyberhaven security extension V3 | pajkjnmeojmbapicmbpliphjmcekeaac | 24.10.4 (fixed in 24.10.5) | TRUE | 400,000 | 12/24/24 | 12/26/24
GraphQL Network Inspector | ndlbedplllcgconngcnfmkadhokfaaln | 2.22.6 (fixed in 2.22.7) | TRUE | 80,000 | 12/29/24 | 12/30/24
GPT 4 Summary with OpenAI | epdjhgbipjpbbhoccdeipghoihibnfja | 1.4 | FALSE | 10,000 | 5/31/24 | 9/29/24
Vidnoz Flex – Video recorder & Video share | cplhlgabfijoiabgkigdafklbhhdkahj | 1.0.161 | FALSE | 6,000 | 12/25/24 | 12/29/24
YesCaptcha assistant | jiofmdifioeejeilfkpegipdjiopiekl | 1.1.61 | TRUE | 200,000 | 12/29/24 | 12/31/24
Proxy SwitchyOmega (V3) | hihblcmlaaademjlakdpicchbjnnnkbo | 3.0.2 | TRUE | 10,000 | 12/30/24 | 12/31/24
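
Because Chrome stores each installed extension in a directory named after its 32-character ID, the IDs in the table above are easy to check for locally. Here is a rough sketch, assuming a default Chrome profile on Linux (the path differs on macOS and Windows, and your profile may not be named "Default"):

```python
# Check a local Chrome profile against the compromised-extension IDs.
# The profile path is the Linux default and is an assumption.
from pathlib import Path

COMPROMISED_IDS = {
    "nnpnnpemnckcfdebeekibpiijlicmpom",  # VPNCity
    "kkodiihpgodmdankclfibbiphjkfdenh",  # Parrot Talks
    "pajkjnmeojmbapicmbpliphjmcekeaac",  # Cyberhaven security extension V3
    "llimhhconnjiflfimocjggfjdlmlhblm",  # Reader Mode
    # ...add the remaining IDs from the table above
}

ext_dir = Path.home() / ".config/google-chrome/Default/Extensions"
installed = {p.name for p in ext_dir.iterdir() if p.is_dir()} \
    if ext_dir.exists() else set()

for ext_id in sorted(installed & COMPROMISED_IDS):
    print(f"WARNING: compromised extension installed: {ext_id}")
```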

But wait, there’s more

One of the compromised extensions is called Reader Mode. Further analysis showed it had been compromised not just in the campaign targeting the other 19 extensions but in a separate campaign that started no later than April 2023. Tuckner said the source of the compromise appears to be a code library developers can use to monetize their extensions. The code library collects details about each web visit a browser makes. In exchange for incorporating the library into the extensions, developers receive a commission from the library creator.

2024: The year AI drove everyone crazy


What do eating rocks, rat genitals, and Willy Wonka have in common? AI, of course.

It’s been a wild year in tech thanks to the intersection between humans and artificial intelligence. 2024 brought a parade of AI oddities, mishaps, and wacky moments that inspired odd behavior from both machines and man. From AI-generated rat genitals to search engines telling people to eat rocks, this year proved that AI has been having a weird impact on the world.

Why the weirdness? If we had to guess, it may be due to the novelty of it all. Generative AI and applications built upon Transformer-based AI models are still so new that people are throwing everything at the wall to see what sticks. People have been struggling to grasp both the implications and potential applications of the new technology. Riding along with the hype, some arguably ill-advised applications of AI, such as automated military targeting systems, have also been introduced.

It’s worth mentioning that aside from the crazy news, we saw plenty of serious AI advances in 2024 as well. For example, Claude 3.5 Sonnet, launched in June, held off the competition as a top model for most of the year, while OpenAI’s o1 used runtime compute to expand GPT-4o’s capabilities with simulated reasoning. Advanced Voice Mode and NotebookLM also emerged as novel applications of AI tech, and the year saw the rise of more capable music synthesis models as well as better AI video generators, including several from China.

But for now, let’s get down to the weirdness.

ChatGPT goes insane

Illustration of a broken toy robot.

Early in the year, things got off to an exciting start when OpenAI’s ChatGPT experienced a significant technical malfunction that caused the AI model to generate increasingly incoherent responses, prompting users on Reddit to describe the system as “having a stroke” or “going insane.” During the glitch, ChatGPT’s responses would begin normally but then deteriorate into nonsensical text, sometimes mimicking Shakespearean language.

OpenAI later revealed that a bug in how the model processed language caused it to select the wrong words during text generation, leading to nonsense outputs (basically the text version of what we at Ars now call “jabberwockies”). The company fixed the issue within 24 hours, but the incident fueled frustrations about the black-box nature of commercial AI systems and illustrated users’ tendency to anthropomorphize AI behavior when it malfunctions.
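
To build intuition for how such a bug produces fluent-looking gibberish, consider a toy model where the probabilities are computed correctly but the step mapping the chosen numbers back to tokens is corrupted. This is a simplification for illustration only, not OpenAI’s actual inference code:

```python
# Toy illustration: right probabilities, wrong token lookup.
import random

vocab = ["the", "cat", "sat", "on", "mat", "doth", "hath", "whence"]
# Suppose the model correctly prefers a sensible token at each step.
preferred = ["the", "cat", "sat", "on", "the", "mat"]

def emit(token: str, corrupted: bool) -> str:
    idx = vocab.index(token)
    if corrupted:
        # An index-mapping bug: the chosen number lands on the wrong token.
        idx = (idx + random.randint(1, len(vocab) - 1)) % len(vocab)
    return vocab[idx]

print("normal:   ", " ".join(emit(t, corrupted=False) for t in preferred))
print("corrupted:", " ".join(emit(t, corrupted=True) for t in preferred))
# The corrupted line still reads as real words, but in nonsense order.
```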

The great Wonka incident

A photo of “Willy’s Chocolate Experience” (inset), which did not match AI-generated promises, shown in the background. Credit: Stuart Sinclair

The collision between AI-generated imagery and consumer expectations fueled human frustrations in February when Scottish families discovered that “Willy’s Chocolate Experience,” an unlicensed Wonka-ripoff event promoted using AI-generated wonderland images, turned out to be little more than a sparse warehouse with a few modest decorations.

Parents who paid £35 per ticket encountered a situation so dire they called the police, with children reportedly crying at the sight of a person in what attendees described as a “terrifying outfit.” The event, created by House of Illuminati in Glasgow, promised fantastical spaces like an “Enchanted Garden” and “Twilight Tunnel” but delivered an underwhelming experience that forced organizers to shut down mid-way through its first day and issue refunds.

While the show was a bust, it brought us an iconic new meme for job disillusionment in the form of a photo: the green-haired Willy’s Chocolate Experience employee who looked like she’d rather be anywhere else on earth at that moment.

Mutant rat genitals expose peer review flaws

An actual laboratory rat, who is intrigued. Credit: Getty | Photothek

In February, Ars Technica senior health reporter Beth Mole covered a peer-reviewed paper published in Frontiers in Cell and Developmental Biology that created an uproar in the scientific community when researchers discovered it contained nonsensical AI-generated images, including an anatomically incorrect rat with oversized genitals. The paper, authored by scientists at Xi’an Honghui Hospital in China, openly acknowledged using Midjourney to create figures that contained gibberish text labels like “Stemm cells” and “iollotte sserotgomar.”

The publisher, Frontiers, posted an expression of concern about the article titled “Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway” and launched an investigation into how the obviously flawed imagery passed through peer review. Scientists across social media platforms expressed dismay at the incident, which mirrored concerns about AI-generated content infiltrating academic publishing.

Chatbot makes erroneous refund promises for Air Canada

If, say, ChatGPT gives you the wrong name for one of the seven dwarves, it’s not such a big deal. But in February, Ars senior policy reporter Ashley Belanger covered a case of costly AI confabulation in the wild. In the course of online text conversations, Air Canada’s customer service chatbot gave customers inaccurate refund policy information, and the airline later faced legal consequences when a tribunal ruled it must honor commitments made by the automated system. Tribunal adjudicator Christopher Rivers determined that Air Canada bore responsibility for all information on its website, regardless of whether it came from a static page or an AI interface.

The case set a precedent for how companies deploying AI customer service tools could face legal obligations for automated systems’ responses, particularly when they fail to warn users about potential inaccuracies. Ironically, the airline had reportedly spent more on the initial AI implementation than it would have cost to maintain human workers for simple queries, according to Air Canada executive Steve Crocker.

Will Smith lampoons his digital double

The real Will Smith eating spaghetti, parodying an AI-generated video from 2023. Credit: Will Smith / Getty Images / Benj Edwards

In March 2023, a terrible AI-generated video of Will Smith’s AI doppelganger eating spaghetti began making the rounds online. The AI-generated version of the actor gobbled down the noodles in an unnatural and disturbing way. Almost a year later, in February 2024, Will Smith himself posted a parody response video to the viral jabberwocky on Instagram, featuring deliberately exaggerated, AI-like pasta consumption, complete with hair-nibbling and finger-slurping antics.

Given the rapid evolution of AI video technology, particularly since OpenAI had just unveiled its Sora video model four days earlier, Smith’s post sparked discussion in his Instagram comments where some viewers initially struggled to distinguish between the genuine footage and AI generation. It was an early sign of “deep doubt” in action as the tech increasingly blurs the line between synthetic and authentic video content.

Robot dogs learn to hunt people with AI-guided rifles

A still image of a robotic quadruped armed with a remote weapons system, captured from a video provided by Onyx Industries. Credit: Onyx Industries

At some point in recent history—somewhere around 2022—someone took a look at robotic quadrupeds and thought it would be a great idea to attach guns to them. A few years later, the US Marine Forces Special Operations Command (MARSOC) began evaluating armed robotic quadrupeds developed by Ghost Robotics. The robot “dogs” integrated Onyx Industries’ SENTRY remote weapon systems, which featured AI-enabled targeting that could detect and track people, drones, and vehicles, though the systems require human operators to authorize any weapons discharge.

The military’s interest in armed robotic dogs followed a broader trend of weaponized quadrupeds entering public awareness. This included viral videos of consumer robots carrying firearms, and later, commercial sales of flame-throwing models. While MARSOC emphasized that weapons were just one potential use case under review, experts noted that the increasing integration of AI into military robotics raised questions about how long humans would remain in control of lethal force decisions.

Microsoft Windows AI is watching

A screenshot of Microsoft’s new “Recall” feature in action. Credit: Microsoft

In an era where many people already feel like they have no privacy due to tech encroachments, Microsoft dialed it up to an extreme degree in May. That’s when Microsoft unveiled a controversial Windows 11 feature called “Recall” that continuously captures screenshots of users’ PC activities every few seconds for later AI-powered search and retrieval. The feature, designed for new Copilot+ PCs using Qualcomm’s Snapdragon X Elite chips, promised to help users find past activities, including app usage, meeting content, and web browsing history.

While Microsoft emphasized that Recall would store encrypted snapshots locally and allow users to exclude specific apps or websites, the announcement raised immediate privacy concerns, as Ars senior technology reporter Andrew Cunningham covered. It also came with a technical toll, requiring significant hardware resources, including 256GB of storage space, with 25GB dedicated to storing approximately three months of user activity. After Microsoft pulled the initial test version due to public backlash, Recall later entered public preview in November with reportedly enhanced security measures. But secure spyware is still spyware—Recall, when enabled, still watches nearly everything you do on your computer and keeps a record of it.
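
Some rough arithmetic from Microsoft’s own figures puts that 25GB allocation in perspective. The capture cadence below is our assumption (Microsoft says only “every few seconds”):

```python
# Back-of-envelope math on Recall's stated budget: 25GB holding roughly
# three months of snapshots. Interval and active hours are assumptions.
GB = 1024**3
budget_bytes = 25 * GB
days = 90                                # "approximately three months"
snaps_per_day = (16 * 3600) // 5         # one snapshot per 5s, 16h/day

per_day = budget_bytes / days            # ~284 MB of snapshots per day
per_snap = per_day / snaps_per_day       # ~25 KB per stored snapshot

print(f"{per_day / 1024**2:.0f} MB/day, {per_snap / 1024:.1f} KB/snapshot")
```

At that rate, each stored snapshot averages only a few dozen kilobytes, implying substantial compression of the raw screen captures.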

Google Search told people to eat rocks

This is fine. Credit: Getty Images

In May, Ars senior gaming reporter Kyle Orland (who assisted commendably with the AI beat throughout the year) covered Google’s newly launched AI Overview feature. It faced immediate criticism when users discovered that it frequently provided false and potentially dangerous information in its search result summaries. Among its most alarming responses, the system advised that humans could safely consume rocks, incorrectly citing scientific sources about the geological diet of marine organisms. The system’s other errors included recommending nonexistent car maintenance products, suggesting unsafe food preparation techniques, and confusing historical figures who shared names.

The problems stemmed from several issues, including the AI treating joke posts as factual sources and misinterpreting context from original web content. But most of all, the system relies on web results as indicators of authority, which we called a flawed design. While Google defended the system, stating these errors occurred mainly with uncommon queries, a company spokesperson acknowledged they would use these “isolated examples” to refine their systems. But to this day, AI Overview still makes frequent mistakes.

Stable Diffusion generates body horror

An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass. Credit: HorneyMetalBeing

In June, Stability AI’s release of the image synthesis model Stable Diffusion 3 Medium drew criticism online for its poor handling of human anatomy in AI-generated images. Users across social media platforms shared examples of the model producing what we now like to call jabberwockies—AI generation failures with distorted bodies, misshapen hands, and surreal anatomical errors. Many in the AI image-generation community viewed the release as a significant step backward from previous image-synthesis capabilities.

Reddit users attributed these failures to Stability AI’s aggressive filtering of adult content from the training data, which apparently impaired the model’s ability to accurately render human figures. The troubled release coincided with broader organizational challenges at Stability AI, including the March departure of CEO Emad Mostaque, multiple staff layoffs, and the exit of three key engineers who had helped develop the technology. Some of those engineers founded Black Forest Labs in August and released Flux, which has become the latest open-weights AI image model to beat.

ChatGPT Advanced Voice imitates human voice in testing

An illustration of a computer synthesizer spewing out letters.

AI voice-synthesis models are master imitators these days, and they are capable of much more than many people realize. In August, we covered a story where OpenAI’s ChatGPT Advanced Voice Mode feature unexpectedly imitated a user’s voice during the company’s internal testing, a detail OpenAI disclosed afterward in its safety-testing documentation. To prevent future instances of an AI assistant suddenly speaking in your own voice (which, let’s be honest, would probably freak people out), the company created an output classifier system to prevent unauthorized voice imitation. OpenAI says that Advanced Voice Mode now catches all meaningful deviations from approved system voices.

Independent AI researcher Simon Willison discussed the implications with Ars Technica, noting that while OpenAI restricted its model’s full voice synthesis capabilities, similar technology would likely emerge from other sources within the year. Meanwhile, the rapid advancement of AI voice replication has caused general concern about its potential misuse, although companies like ElevenLabs have already been offering voice cloning services for some time.

San Francisco’s robotic car horn symphony

A Waymo self-driving car in front of Google’s San Francisco headquarters, San Francisco, California, June 7, 2024. Credit: Getty Images

In August, San Francisco residents got a noisy taste of robo-dystopia when Waymo’s self-driving cars began creating an unexpected nightly disturbance in the South of Market district. In a parking lot off 2nd Street, the cars congregated autonomously every night during rider lulls at 4 am and began engaging in extended honking matches at each other while attempting to park.

Local resident Christopher Cherry’s initial optimism about the robotic fleet’s presence dissolved as the mechanical chorus grew louder each night, affecting residents in nearby high-rises. The nocturnal tech disruption served as a lesson in the unintentional effects of autonomous systems when run in aggregate.

Larry Ellison dreams of all-seeing AI cameras

A colorized photo of CCTV cameras in London, 2024.

In September, Oracle co-founder Larry Ellison painted a bleak vision of ubiquitous AI surveillance during a company financial meeting. The 80-year-old database billionaire described a future where AI would monitor citizens through networks of cameras and drones, asserting that the oversight would ensure lawful behavior from both police and the public.

His surveillance predictions reminded us of parallels to existing systems in China, where authorities already used AI to sort surveillance data on citizens as part of the country’s “sharp eyes” campaign from 2015 to 2020. Ellison’s statement reflected the sort of worst-case tech surveillance state scenario—likely antithetical to any sort of free society—that dozens of sci-fi novels of the 20th century warned us about.

A dead father sends new letters home

An AI-generated image featuring my late father’s handwriting. Credit: Benj Edwards / Flux

AI has made many of us do weird things in 2024, including this writer. In October, I used an AI synthesis model called Flux to reproduce my late father’s handwriting with striking accuracy. After scanning 30 samples from his engineering notebooks, I trained the model using computing time that cost less than five dollars. The resulting text captured his distinctive uppercase style, which he developed during his career as an electronics engineer.

I enjoyed creating images showing his handwriting in various contexts, from folder labels to skywriting, and made the trained model freely available online for others to use. While I approached it as a tribute to my father (who would have appreciated the technical achievement), many people found the whole experience weird and somewhat disturbing. The things we unhinged Bing Chat-like journalists do to bring awareness to a topic are sometimes unconventional. So I guess it counts for this list!

For 2025? Expect even more AI

Thanks for reading Ars Technica this past year and following along with our team coverage of this rapidly emerging and expanding field. We appreciate your kind words of support. Ars Technica’s 2024 AI words of the year were: vibemarking, deep doubt, and the aforementioned jabberwocky. The old stalwart “confabulation” also made several notable appearances. Tune in again next year when we continue to try to figure out how to concisely describe novel scenarios in emerging technology by labeling them.

Looking back, our prediction for 2024 in AI last year was “buckle up.” It seems fitting, given the weirdness detailed above. Especially the part about the robot dogs with guns. For 2025, AI will likely inspire more chaos ahead, but also potentially get put to serious work as a productivity tool, so this time, our prediction is “buckle down.”

Finally, we’d like to ask: What was the craziest story about AI in 2024 from your perspective? Whether you love AI or hate it, feel free to suggest your own additions to our list in the comments. Happy New Year!

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

Health care giant Ascension says 5.6 million patients affected in cyberattack

Health care company Ascension lost sensitive data for nearly 5.6 million individuals in a cyberattack that was attributed to a notorious ransomware gang, according to documents filed with the attorney general of Maine.

Ascension owns 140 hospitals and scores of assisted living facilities. In May, the organization was hit with an attack that caused mass disruptions as staff was forced to move to manual processes, causing errors, delaying or losing lab results, and diverting ambulances to other hospitals. Ascension managed to restore most services by mid-June. At the time, the company said the attackers had stolen protected health information and personally identifiable information for an undisclosed number of people.

Investigation concluded

A filing Ascension made earlier in December revealed that nearly 5.6 million people were affected by the breach. Data stolen depended on the particular person but included individuals’ names and medical information (e.g., medical record numbers, dates of service, types of lab tests, or procedure codes), payment information (e.g., credit card information or bank account numbers), insurance information (e.g., Medicaid/Medicare ID, policy number, or insurance claim), government identification (e.g., Social Security numbers, tax identification numbers, driver’s license numbers, or passport numbers), and other personal information (such as date of birth or address).
