Security

maximum-severity-vulnerability-threatens-6%-of-all-websites

Maximum-severity vulnerability threatens 6% of all websites

“I usually don’t say this, but patch right freakin’ now,” one researcher wrote. “The React CVE listing (CVE-2025-55182) is a perfect 10.”

React versions 19.0.1, 19.1.2, or 19.2.1 contain the vulnerable code. Third-party components known to be affected include:

  • Vite RSC plugin
  • Parcel RSC plugin
  • React Router RSC preview
  • RedwoodSDK
  • Waku
  • Next.js

According to Wiz and fellow security firm Aikido, the vulnerability, tracked as CVE-2025-55182, resides in Flight, a protocol found in the React Server Components. Next.js has assigned the designation CVE-2025-66478 to track the vulnerability in its package.

The vulnerability stems from unsafe deserialization, the coding process of converting strings, byte streams, and other “serialized” formats into objects or data structures in code. Hackers can exploit the insecure deserialization using payloads that execute malicious code on the server. Patched React versions include stricter validation and hardened deserialization behavior.

“When a server receives a specially crafted, malformed payload, it fails to validate the structure correctly,” Wiz explained. “This allows attacker-controlled data to influence server-side execution logic, resulting in the execution of privileged JavaScript code.”

The company added:

In our experimentation, exploitation of this vulnerability had high fidelity, with a near 100% success rate and can be leveraged to a full remote code execution. The attack vector is unauthenticated and remote, requiring only a specially crafted HTTP request to the target server. It affects the default configuration of popular frameworks.

Both companies are advising admins and developers to upgrade React and any dependencies that rely on it. Users of any of the Remote-enabled frameworks and plugins mentioned above should check with the maintainers for guidance. Aikido also suggests admins and developers scan their codebases and repositories for any use of React with this link.

Maximum-severity vulnerability threatens 6% of all websites Read More »

this-hacker-conference-installed-a-literal-antivirus-monitoring-system

This hacker conference installed a literal antivirus monitoring system


Organizers had a way for attendees to track CO2 levels throughout the venue—even before they arrived.

Hacker conferences—like all conventions—are notorious for giving attendees a parting gift of mystery illness. To combat “con crud,” New Zealand’s premier hacker conference, Kawaiicon, quietly launched a real-time, room-by-room carbon dioxide monitoring system for attendees.

To get the system up and running, event organizers installed DIY CO2 monitors throughout the Michael Fowler Centre venue before conference doors opened on November 6. Attendees were able to check a public online dashboard for clean air readings for session rooms, kids’ areas, the front desk, and more, all before even showing up. “It’s ALMOST like we are all nerds in a risk-based industry,” the organizers wrote on the convention’s website.

“What they did is fantastic,” Jeff Moss, founder of the Defcon and Black Hat security conferences, told WIRED. “CO2 is being used as an approximation for so many things, but there are no easy, inexpensive network monitoring solutions available. Kawaiicon building something to do this is the true spirit of hacking.”

Elevated levels of CO2 lead to reduced cognitive ability and facilitate transmission of airborne viruses, which can linger in poorly ventilated spaces for hours. The more CO2 in the air, the more virus-friendly the air becomes, making CO2 data a handy proxy for tracing pathogens. In fact, the Australian Academy of Science described the pollution in indoor air as “someone else’s breath backwash.” Kawaiicon organizers faced running a large infosec event during a measles outbreak, as well as constantly rolling waves of COVID-19, influenza, and RSV. It’s a familiar pain point for conference organizers frustrated by massive gaps in public health—and lack of control over their venue’s clean air standards.

“In general, the Michael Fowler venue has a single HVAC system, and uses Farr 30/30 filters with a rating of MERV-8,” Kawaiicon organizers explained, referencing the filtration choices in the space where the convention was held. MERV-8 is a budget-friendly choice–standard practice for homes. “The hardest part of the whole process is being limited by what the venue offers,” they explained. “The venue is older, which means less tech to control air flow, and an older HVAC system.”

Kawaiicon’s work began one month before the conference. In early October, organizers deployed a small fleet of 13 RGB Matrix Portal Room CO2 Monitors, an ambient carbon dioxide monitor DIY project adapted from US electronics and kit company Adafruit Industries. The monitors were connected to an Internet-accessible dashboard with live readings, daily highs and lows, and data history that showed attendees in-room CO2 trends. Kawaiicon tested its CO2 monitors in collaboration with researchers from the University of Otago’s public health department.

“That’s awesome,” says Adafruit founder and engineer Limor “Ladyada” Fried about the conference’s adaptation of the Matrix Portal project. “The best part is seeing folks pick up new skills and really understand how we measure and monitor air quality in the real world (like at a con during a measles flare-up)! Hackers and makers are able to be self-reliant when it comes to their public-health information needs.” (For the full specs of the Kawaiicon build, you can check out the GitHub repository here.)

The Michael Fowler Centre is a spectacular blend of Scandinavian brutalism and interior woodwork designed to enhance sound and air, including two grand pou—carved Māori totems—next to the main entrance that rise through to the upper foyers. Its cathedral-like acoustics posed a challenge to Kawaiicon’s air-hacking crew, which they solved by placing the RGB monitors in stereo. There were two on each level of the Main Auditorium (four total), two in the Renouf session space on level 1, plus monitors in the daycare and Kuracon (kids’ hacker conference) areas. To top it off, monitors were placed in the Quiet Room, at the Registration Desk, and in the Green Room.

“The things we had to consider were typical health and safety, and effective placement (breathing height, multiple monitors for multiple spaces, not near windows/doors),” a Kawaiicon spokesperson who goes by Sput online told WIRED over email.

“To be honest, it is no different than having to consider other accessibility options (e.g., access to venue, access to talks, access to private space for personal needs),” Sput wrote. “Being a tech-leaning community it is easier for us to get this set up ourselves, or with volunteer help, but definitely not out of reach given how accessible the CO2 monitor tech is.”

Kawaiicon’s attendees could quickly check the conditions before they arrived and decide how to protect themselves accordingly. At the event, WIRED observed attendees checking CO2 levels on their phones, masking and unmasking in different conference areas, and watching a display of all room readings on a dashboard at the registration desk.

In each conference session room, small wall-mounted monitors displayed stoplight colors showing immediate conditions: green for safe, orange for risky, and red to show the room had high CO2 levels, the top level for risk.

“Everyone who occupies the con space we operate have a different risk and threat model, and we want everyone to feel they can experience the con in a way that fits their model,” the organizers wrote on their website. “Considering Covid-19 is still in the community, we wanted to make sure that everyone had all the information they needed to make their own risk assessment on ‘if’ and ‘how’ they attended the con. So this is our threat model and all the controls and zones we have in place.”

Colorful custom-made Kawaiicon posters by New Zealand artist Pepper Raccoon placed throughout the Michael Fowler Centre displayed a QR code, making the CO2 dashboard a tap away, no matter where they were at the conference.

“We think this is important so folks don’t put themselves at risk having to go directly up to a monitor to see a reading,” Kawaiicon spokesperson Sput told WIRED, “It also helps folks find a space that they can move to if the reading in their space gets too high.”

It’s a DIY solution any conference can put in place: resources, parts lists, and assembly guides are here.

Kawaiicon’s organizers aren’t keen to pretend there were no risks to gathering in groups during ongoing outbreaks. “Masks are encouraged, but not required,” Kawaiicon’s Health and Safety page stated. “Free masks will be available at the con if you need one.” They encouraged attendees to test before coming in, and for complete accessibility for all hackers who wanted to attend, of any ability, they offered a full virtual con stream with no ticket required.

Trying to find out if a venue will have clean or gross recycled air before attending a hacker conference has been a pain point for researchers who can’t afford to get sick at, or after, the next B-Sides, Defcon, or Black Hat. Kawaiicon addresses this headache. But they’re not here for debates about beliefs or anti-science trolling. “We each have our different risk tolerance,” the organizers wrote. “Just leave others to make the call that is best for them. No one needs your snarky commentary.”

This story originally appeared at WIRED.com.

Photo of WIRED

Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

This hacker conference installed a literal antivirus monitoring system Read More »

oops-cryptographers-cancel-election-results-after-losing-decryption-key.

Oops. Cryptographers cancel election results after losing decryption key.

One of the world’s premier security organizations has canceled the results of its annual leadership election after an official lost an encryption key needed to unlock results stored in a verifiable and privacy-preserving voting system.

The International Association of Cryptologic Research (IACR) said Friday that the votes were submitted and tallied using Helios, an open source voting system that uses peer-reviewed cryptography to cast and count votes in a verifiable, confidential, and privacy-preserving way. Helios encrypts each vote in a way that assures each ballot is secret. Other cryptography used by Helios allows each voter to confirm their ballot was counted fairly.

An “honest but unfortunate human mistake”

Per the association’s bylaws, three members of the election committee act as independent trustees. To prevent two of them from colluding to cook the results, each trustee holds a third of the cryptographic key material needed to decrypt results.

“Unfortunately, one of the three trustees has irretrievably lost their private key, an honest but unfortunate human mistake, and therefore cannot compute their decryption share,” the IACR said. “As a result, Helios is unable to complete the decryption process, and it is technically impossible for us to obtain or verify the final outcome of this election.”

To prevent a similar incident, the IACR will adopt a new mechanism for managing private keys. Instead of requiring all three chunks of private key material, elections will now require only two. Moti Yung, the trustee who was unable to provide his third of the key material, has resigned. He’s being replaced by Michel Abdalla.

The IACR is a nonprofit scientific organization providing research in cryptology and related fields. Cryptology is the science and practice of designing computation and communication systems that remain secure in the presence of adversaries. The associate is holding a new election that started Friday and runs through December 20.

Oops. Cryptographers cancel election results after losing decryption key. Read More »

how-to-know-if-your-asus-router-is-one-of-thousands-hacked-by-china-state-hackers

How to know if your Asus router is one of thousands hacked by China-state hackers

Thousands of Asus routers have been hacked and are under the control of a suspected China-state group that has yet to reveal its intentions for the mass compromise, researchers said.

The hacking spree is either primarily or exclusively targeting seven models of Asus routers, all of which are no longer supported by the manufacturer, meaning they no longer receive security patches, researchers from SecurityScorecard said. So far, it’s unclear what the attackers do after gaining control of the devices. SecurityScorecard has named the operation WrtHug.

Staying off the radar

SecurityScorecard said it suspects the compromised devices are being used similarly to those found in ORB (operational relay box) networks, which hackers primarily use to conduct espionage to conceal their identity.

“Having this level of access may enable the threat actor to use any compromised router as they see fit,” SecurityScorecard said. “Our experience with ORB networks suggests compromised devices will commonly be used for covert operations and espionage, unlike DDoS attacks and other types of overt malicious activity typically observed from botnets.”

Compromised routers are concentrated in Taiwan, with smaller clusters in South Korea, Japan, Hong Kong, Russia, central Europe, and the United States.

A heat map of infected devices.

A heat map of infected devices.

The Chinese government has been caught building massive ORB networks for years. In 2021, the French government warned national businesses and organizations that the APT31—one of China’s most active threat groups—was behind a massive attack campaign that used hacked routers to conduct reconnaissance. Last year, at least three similar China-operated campaigns came to light.

Russian-state hackers have been caught doing the same thing, although not as frequently. In 2018, Kremlin actors infected more than 500,000 small office and home routers with sophisticated malware tracked as VPNFilter. A Russian government group was also independently involved in an operation reported in one of the 2024 router hacks linked above.

How to know if your Asus router is one of thousands hacked by China-state hackers Read More »

critics-scoff-after-microsoft-warns-ai-feature-can-infect-machines-and-pilfer-data

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data


Integration of Copilot Actions into Windows is off by default, but for how long?

Credit: Photographer: Chona Kasinger/Bloomberg via Getty Images

Microsoft’s warning on Tuesday that an experimental AI agent integrated into Windows can infect devices and pilfer sensitive user data has set off a familiar response from security-minded critics: Why is Big Tech so intent on pushing new features before their dangerous behaviors can be fully understood and contained?

As reported Tuesday, Microsoft introduced Copilot Actions, a new set of “experimental agentic features” that, when enabled, perform “everyday tasks like organizing files, scheduling meetings, or sending emails,” and provide “an active digital collaborator that can carry out complex tasks for you to enhance efficiency and productivity.”

Hallucinations and prompt injections apply

The fanfare, however, came with a significant caveat. Microsoft recommended users enable Copilot Actions only “if you understand the security implications outlined.”

The admonition is based on known defects inherent in most large language models, including Copilot, as researchers have repeatedly demonstrated.

One common defect of LLMs causes them to provide factually erroneous and illogical answers, sometimes even to the most basic questions. This propensity for hallucinations, as the behavior has come to be called, means users can’t trust the output of Copilot, Gemini, Claude, or any other AI assistant and instead must independently confirm it.

Another common LLM landmine is the prompt injection, a class of bug that allows hackers to plant malicious instructions in websites, resumes, and emails. LLMs are programmed to follow directions so eagerly that they are unable to discern those in valid user prompts from those contained in untrusted, third-party content created by attackers. As a result, the LLMs give the attackers the same deference as users.

Both flaws can be exploited in attacks that exfiltrate sensitive data, run malicious code, and steal cryptocurrency. So far, these vulnerabilities have proved impossible for developers to prevent and, in many cases, can only be fixed using bug-specific workarounds developed once a vulnerability has been discovered.

That, in turn, led to this whopper of a disclosure in Microsoft’s post from Tuesday:

“As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs,” Microsoft said. “Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.”

Microsoft indicated that only experienced users should enable Copilot Actions, which is currently available only in beta versions of Windows. The company, however, didn’t describe what type of training or experience such users should have or what actions they should take to prevent their devices from being compromised. I asked Microsoft to provide these details, and the company declined.

Like “macros on Marvel superhero crack”

Some security experts questioned the value of the warnings in Tuesday’s post, comparing them to warnings Microsoft has provided for decades about the danger of using macros in Office apps. Despite the long-standing advice, macros have remained among the lowest-hanging fruit for hackers out to surreptitiously install malware on Windows machines. One reason for this is that Microsoft has made macros so central to productivity that many users can’t do without them.

“Microsoft saying ‘don’t enable macros, they’re dangerous’… has never worked well,” independent researcher Kevin Beaumont said. “This is macros on Marvel superhero crack.”

Beaumont, who is regularly hired to respond to major Windows network compromises inside enterprises, also questioned whether Microsoft will provide a means for admins to adequately restrict Copilot Actions on end-user machines or to identify machines in a network that have the feature turned on.

A Microsoft spokesperson said IT admins will be able to enable or disable an agent workspace at both account and device levels, using Intune or other MDM (Mobile Device Management) apps.

Critics voiced other concerns, including the difficulty for even experienced users to detect exploitation attacks targeting the AI agents they’re using.

“I don’t see how users are going to prevent anything of the sort they are referring to, beyond not surfing the web I guess,” researcher Guillaume Rossolini said.

Microsoft has stressed that Copilot Actions is an experimental feature that’s turned off by default. That design was likely chosen to limit its access to users with the experience required to understand its risks. Critics, however, noted that previous experimental features—Copilot, for instance—regularly become default capabilities for all users over time. Once that’s done, users who don’t trust the feature are often required to invest time developing unsupported ways to remove the features.

Sound but lofty goals

Most of Tuesday’s post focused on Microsoft’s overall strategy for securing agentic features in Windows. Goals for such features include:

  • Non-repudiation, meaning all actions and behaviors must be “observable and distinguishable from those taken by a user”
  • Agents must preserve confidentiality when they collect, aggregate, or otherwise utilize user data
  • Agents must receive user approval when accessing user data or taking actions

The goals are sound, but ultimately they depend on users reading the dialog windows that warn of the risks and require careful approval before proceeding. That, in turn, diminishes the value of the protection for many users.

“The usual caveat applies to such mechanisms that rely on users clicking through a permission prompt,” Earlence Fernandes, a University of California, San Diego professor specializing in AI security, told Ars. “Sometimes those users don’t fully understand what is going on, or they might just get habituated and click ‘yes’ all the time. At which point, the security boundary is not really a boundary.”

As demonstrated by the rash of “ClickFix” attacks, many users can be tricked into following extremely dangerous instructions. While more experienced users (including a fair number of Ars commenters) blame the victims falling for such scams, these incidents are inevitable for a host of reasons. In some cases, even careful users are fatigued or under emotional distress and slip up as a result. Other users simply lack the knowledge to make informed decisions.

Microsoft’s warning, one critic said, amounts to little more than a CYA (short for cover your ass), a legal maneuver that attempts to shield a party from liability.

“Microsoft (like the rest of the industry) has no idea how to stop prompt injection or hallucinations, which makes it fundamentally unfit for almost anything serious,” critic Reed Mideke said. “The solution? Shift liability to the user. Just like every LLM chatbot has a ‘oh by the way, if you use this for anything important be sure to verify the answers” disclaimer, never mind that you wouldn’t need the chatbot in the first place if you knew the answer.”

As Mideke indicated, most of the criticisms extend to AI offerings other companies—including Apple, Google, and Meta—are integrating into their products. Frequently, these integrations begin as optional features and eventually become default capabilities whether users want them or not.

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him at here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data Read More »

bonkers-bitcoin-heist:-5-star-hotels,-cash-filled-envelopes,-vanishing-funds

Bonkers Bitcoin heist: 5-star hotels, cash-filled envelopes, vanishing funds


Bitcoin mining hardware exec falls for sophisticated crypto scam to tune of $200k

As Kent Halliburton stood in a bathroom at the Rosewood Hotel in central Amsterdam, thousands of miles from home, running his fingers through an envelope filled with 10,000 euros in crisp banknotes, he started to wonder what he had gotten himself into.

Halliburton is the cofounder and CEO of Sazmining, a company that operates bitcoin mining hardware on behalf of clients—a model known as “mining-as-a-service.” Halliburton is based in Peru, but Sazmining runs mining hardware out of third-party data centers across Norway, Paraguay, Ethiopia, and the United States.

As Halliburton tells it, he had flown to Amsterdam the previous day, August 5, to meet Even and Maxim, two representatives of a wealthy Monaco-based family. The family office had offered to purchase hundreds of bitcoin mining rigs from Sazmining—around $4 million worth—which the company would install at a facility currently under construction in Ethiopia. Before finalizing the deal, the family office had asked to meet Halliburton in person.

When Halliburton arrived at the Rosewood Hotel, he found Even and Maxim perched in a booth. They struck him as playboy, high-roller types—particularly Maxim, who wore a tan three-piece suit and had a highly manicured look, his long dark hair parted down the middle. A Rolex protruded from the cuff of his sleeve.

Over a three-course lunch—ceviche with a roe garnish, Chilean sea bass, and cherry cake—they discussed the contours of the deal and traded details about their respective backgrounds. Even was talkative and jocular, telling stories about blowout parties in Marrakech. Maxim was aloof; he mostly stared at Halliburton, holding his gaze for long periods at a time as though sizing him up.

As a relationship-building exercise, Even proposed that Halliburton sell the family office around $3,000 in bitcoin. Halliburton was initially hesitant, but chalked it up as a peculiar dating ritual. One of the guys slid Halliburton the cash-filled envelope and told him to go to the bathroom, where he could count out the amount in private. “It felt like something out of a James Bond movie,” says Halliburton. “It was all very exotic to me.”

Halliburton left in a taxi, somewhat bemused by the encounter, but otherwise hopeful of closing the deal with the family office. For Sazmining, a small company with around 15 employees, it promised to be transformative.

Less than two weeks later, Halliburton had lost more than $200,000 worth of bitcoin to Even and Maxim. He didn’t know whether Sazmining could survive the blow, nor how the scammers had ensnared him.

Directly after his lunch with Even and Maxim, Halliburton flew to Latvia for a Bitcoin conference. From there, he traveled to Ethiopia to check on construction work at the data center facility.

While Halliburton was in Ethiopia, he received a WhatsApp message from Even, who wanted to go ahead with the deal on one condition: that Sazmining sell the family office a larger amount of bitcoin as part of the transaction, after the small initial purchase at the Rosewood Hotel. They landed on $400,000 worth—a tenth of the overall deal value.

Even asked Halliburton to return to Amsterdam to sign the contracts necessary to finalize the deal. Having been away from his family for weeks, Halliburton protested. But Even drew a line in the sand: “Remotely doesn’t work for me that’s not how I do business at the moment,” he wrote in a text message reviewed by WIRED.

Halliburton arrived back in Amsterdam in the early afternoon on August 16. That evening, he was due to meet Maxim at a teppanyaki restaurant at the five-star Okura Hotel. The interior is elaborately decorated in traditional Japanese style; it has wooden paneling, paper walls, a zen garden, and a flock of origami cranes that hang from string down a spiral staircase in the lobby.

Halliburton found Maxim sitting on a couch in the waiting area outside the restaurant, dressed in a gaudy silver suit. As they waited for a table, Maxim asked Halliburton whether he could demonstrate that Sazmining held enough bitcoin to go through with the side transaction that Even had proposed. He wanted Halliburton to move roughly half of the agreed amount—worth $220,000—into a bitcoin wallet app trusted by the family office. The funds would remain under Halliburton’s control, but the family office would be able to verify their existence using public transaction data.

Halliburton thumbed open his iPhone. The app, Atomic Wallet, had thousands of positive reviews and had been listed on the Apple App Store for several years. With Maxim at his side, Halliburton downloaded the app and created a new wallet. “I was trying to earn this guy’s trust,” says Halliburton. “Again, a $4 million contract. I’m still looking at that carrot.”

The dinner passed largely without incident. Maxim was less guarded this time; he talked about his fondness for watches and his work sourcing deals for the family office. Feeling under the weather from all the travel, Halliburton angled to wrap things up.

They left with the understanding that Maxim would take the signed contracts to the family office to be executed, while Halliburton would send the $220,000 in bitcoin to his new wallet address as agreed.

Back in his hotel room, Halliburton triggered a small test transaction using his new Atomic Wallet address. Then he wiped and reinstated the wallet using the private credentials—the seed phrase—generated when he first downloaded the app, to make sure that it functioned as expected. “Had to take some security measures but almost ready. Thanks for your patience,” wrote Halliburton in a WhatsApp message to Even. “No worries take your time,” Even responded.

At 10: 45 pm, satisfied with his tests, Halliburton signaled to a colleague to release $220,000 worth of bitcoin to the Atomic Wallet address. When it arrived, he sent a screenshot of the updated balance to Even. One minute later, Even wrote back, “Thank yiu [sic].”

Halliburton sent another message to Even, asking about the contracts. Though previously quick to answer, Even didn’t respond. Halliburton checked the Atomic Wallet app, sensing that something was wrong. The bitcoin had vanished.

Halliburton’s stomach dropped. As he sat on the bed, he tried to stop himself from vomiting. “It was like being punched in the gut,” says Halliburton. “It was just shock and disbelief.”

Halliburton racked his brain trying to figure out how he had been swindled. At 11: 30 pm, he sent another message to Even: “That was the most sophisticated scam I’ve ever experienced. I know you probably don’t give a shit but my business may not survive this. I’ve worked four years of my life to build it.”

Even responded, denying that he had done anything wrong, but that was the last Halliburton heard from him. Halliburton provided WIRED with the Telegram account Even had used; it was last active on the day the funds were drained. Even did not respond to a request for comment.

Within hours, the funds drained from Halliburton’s wallet began to be divided up, shuffled through a web of different addresses, and deposited with third-party platforms for converting crypto into regular currency, analysis by blockchain analytics companies Chainalysis and CertiK shows.

A portion of the bitcoin was split between different instant exchangers, which allow people to swap one type of cryptocurrency for another almost instantaneously. The bulk was funneled into a single address, where it was blended with funds tagged by Chainalysis as the likely proceeds of rip deals, a scam whereby somebody impersonates an investor to steal crypto from a startup.

“There’s nothing illegal about the services the scammer leveraged,” says Margaux Eckle, senior investigator at Chainalysis. “However, the fact that they leveraged consolidation addresses that appear very tightly connected to labeled scam activity is potentially indicative of a fraud operation.”

Some of the bitcoin that passed through the consolidation address was deposited with a crypto exchange, where it was likely swapped for regular currency. The remainder was converted into stablecoin and moved across so-called bridges to the Tron blockchain, which hosts several over-the-counter trading services that can be readily used to cash out large quantities of crypto, researchers claim.

The effect of the many hops, shuffles, conversions, and divisions is to make it more difficult to trace the origin of funds, so that they can be cashed out without arousing suspicion. “The scammer is quite sophisticated,” says Eckle. “Though we can trace through a bridge, it’s a way to slow the tracing of funds from investigators that could be on your tail.”

Eventually, the trail of public transaction data stops. To identify the perpetrators, law enforcement would have to subpoena the services that appear to have been used to cash out, which are widely required to collect information about users.

From the transaction data, it’s not possible to tell precisely how the scammers were able to access and drain Halliburton’s wallet without his permission. But aspects of his interactions with the scammers provide some clue.

Initially, Halliburton wondered whether the incident might be connected to a 2023 hack perpetrated by threat actors affiliated with the North Korean government, which led to $100 million worth of funds being drained from the accounts of Atomic Wallet users. (Atomic Wallet did not respond to a request for comment.)

But instead, the security researchers that spoke to WIRED believe that Halliburton fell victim to a targeted surveillance-style attack. “Executives who are publicly known to custody large crypto balances make attractive targets,” says Guanxing Wen, head of security research at CertiK.

The in-person dinners, expensive clothing, reams of cash, and other displays of wealth were gambits meant to put Halliburton at ease, researchers theorize. “This is a well-known rapport-building tactic in high-value confidence schemes,” says Wen. “The longer a victim spends with the attacker in a relaxed setting, the harder it becomes to challenge a later technical request.”

In order to complete the theft, the scammers likely had to steal the seed phrase for Halliburton’s newly created Atomic Wallet address. Equipped with a wallet’s seed phrase, anyone can gain unfettered access to the bitcoin kept inside.

One possibility is that the scammers, who dictated the locations for both meetings in Amsterdam, hijacked or mimicked the hotel Wi-Fi networks, allowing them to harvest information from Halliburton’s phone. “That equipment you can buy online, no problem. It would all fit inside a couple of suitcases,” says Adrian Cheek, lead researcher at cybersecurity company Coeus. But Halliburton insists that his phone never left his possession, and he used mobile data to download the Atomic Wallet app, not public Wi-Fi.

The most plausible explanation, claims Wen, is that the scammers—perhaps with the help of a nearby accomplice or a camera equipped with long-range zoom—were able to record the seed phrase when it appeared on Halliburton’s phone at the point he first downloaded the app, on the couch at the Okura Hotel.

Long before Halliburton delivered the $220,000 in bitcoin to his Atomic Wallet address, the scammers had probably set up a “sweeper script,” claims Wen, a type of automated bot coded to drain a wallet when it detects a large balance change.

The people the victim meets in-person in cases like this—like Even and Maxim—are rarely the ultimate beneficiaries, but rather mercenaries hired by a network of scam artists, who could be based on the other side of the globe.

“They’re normally recruited through underground forums, and secure chat groups,” says Cheek. “If you know where you’re looking, you can see this ongoing recruitment.”

For a few days, it remained unclear whether Sazmining would be able to weather the financial blow. The stolen funds equated to about six weeks’ worth of revenue. “I’m trying to keep the business afloat and survive this situation where suddenly we’ve got a cash crunch,” says Halliburton. By delaying payment to a vendor and extending the duration of an outstanding loan, the company was ultimately able to remain solvent.

That week, one of the Sazmining board members filed reports with law enforcement bodies in the Netherlands, the UK, and the US. They received acknowledgements from only UK-based Action Fraud, which said it would take no immediate action, and the Cyber Fraud Task Force, a division of the US Secret Service. (The CFTF did not respond to a request for comment.)

The incredible volume of crypto-related scam activity makes it all but impossible for law enforcement to investigate each theft individually. “It’s a type of threat and criminal activity that is reaching a scale that’s completely unprecedented,” says Eckle.

The best chance of a scam victim recovering their funds is for law enforcement to bust an entire scam ring, says Eckle. In that scenario, any funds recovered are typically dispersed to those who have reported themselves victims.

Until such a time, Halliburton has to make his peace with the loss. “It’s still painful,” he says. But “it wasn’t a death blow.”

This story originally appeared on Wired.

Photo of WIRED

Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

Bonkers Bitcoin heist: 5-star hotels, cash-filled envelopes, vanishing funds Read More »

5-plead-guilty-to-laptop-farm-and-id-theft-scheme-to-land-north-koreans-us-it-jobs

5 plead guilty to laptop farm and ID theft scheme to land North Koreans US IT jobs

Each defendant also helped the IT workers pass employer vetting procedures. Travis and Salazar, for example, appeared for drug testing on behalf of the workers.

Travis, an active-duty member of the US Army at the time, received at least $51,397 for his participation in the scheme. Phagnasay and Salazar earned at least $3,450 and $4,500, respectively. In all, the fraudulent jobs earned roughly $1.28 million in salary payments from the defrauded US companies, the vast majority of which were sent to the IT workers overseas.

The fifth defendant, Ukrainian national Oleksandr Didenko, pleaded guilty to one count of aggravated identity theft, in addition to wire fraud. He admitted to participating in a “years-long scheme that stole the identities of US citizens and sold them to overseas IT workers, including North Korean IT workers, so they could fraudulently gain employment at 40 US companies.” Didenko received hundreds of thousands of dollars from victim companies who hired the fraudulent applicants. As part of the plea agreement, Didenko is forfeiting more than $1.4 million, including more than $570,000 in fiat and virtual currency seized from him and his co-conspirators.

In 2022, the US Treasury Department said that the Democratic People’s Republic of Korea employs thousands of skilled IT workers around the world to generate revenue for the country’s weapons of mass destruction and ballistic missile programs.

“In many cases, DPRK IT workers represent themselves as US-based and/or non-North Korean teleworkers,” Treasury Department officials wrote. “The workers may further obfuscate their identities and/or location by sub-contracting work to non North Koreans. Although DPRK IT workers normally engage in IT work distinct from malicious cyber activity, they have used the privileged access gained as contractors to enable the DPRK’s malicious cyber intrusions. Additionally, there are likely instances where workers are subjected to forced labor.”

Other US government advisories posted in 2023 and 2024 concerning similar programs have been removed with no explanation.

In Friday’s release, the Justice Department also said it’s seeking the forfeiture of more than $15 million worth of USDT, a cryptocurrency stablecoin pegged to the US dollar, that the FBI seized in March from North APT38 actors. The seized funds were derived from four heists APT38 carried out, two in July 2023 against virtual currency payment processors in Estonia and Panama and two in November 2023 thefts from exchanges in Panama and Seychelles.

Justice Department attempts to locate, seize, and forfeit all the stolen assets remain ongoing because APT38 has laundered them through virtual currency bridges, mixers, exchanges, and over-the-counter traders, the Justice Department said.

5 plead guilty to laptop farm and ID theft scheme to land North Koreans US IT jobs Read More »

researchers-question-anthropic-claim-that-ai-assisted-attack-was-90%-autonomous

Researchers question Anthropic claim that AI-assisted attack was 90% autonomous

Claude frequently overstated findings and occasionally fabricated data during autonomous operations, claiming to have obtained credentials that didn’t work or identifying critical discoveries that proved to be publicly available information. This AI hallucination in offensive security contexts presented challenges for the actor’s operational effectiveness, requiring careful validation of all claimed results. This remains an obstacle to fully autonomous cyberattacks.

How (Anthropic says) the attack unfolded

Anthropic said GTG-1002 developed an autonomous attack framework that used Claude as an orchestration mechanism that largely eliminated the need for human involvement. This orchestration system broke complex multi-stage attacks into smaller technical tasks such as vulnerability scanning, credential validation, data extraction, and lateral movement.

“The architecture incorporated Claude’s technical capabilities as an execution engine within a larger automated system, where the AI performed specific technical actions based on the human operators’ instructions while the orchestration logic maintained attack state, managed phase transitions, and aggregated results across multiple sessions,” Anthropic said. “This approach allowed the threat actor to achieve operational scale typically associated with nation-state campaigns while maintaining minimal direct involvement, as the framework autonomously progressed through reconnaissance, initial access, persistence, and data exfiltration phases by sequencing Claude’s responses and adapting subsequent requests based on discovered information.”

The attacks followed a five-phase structure that increased AI autonomy through each one.

The life cycle of the cyberattack, showing the move from human-led targeting to largely AI-driven attacks using various tools, often via the Model Context Protocol (MCP). At various points during the attack, the AI returns to its human operator for review and further direction.

Credit: Anthropic

The life cycle of the cyberattack, showing the move from human-led targeting to largely AI-driven attacks using various tools, often via the Model Context Protocol (MCP). At various points during the attack, the AI returns to its human operator for review and further direction. Credit: Anthropic

The attackers were able to bypass Claude guardrails in part by breaking tasks into small steps that, in isolation, the AI tool didn’t interpret as malicious. In other cases, the attackers couched their inquiries in the context of security professionals trying to use Claude to improve defenses.

As noted last week, AI-developed malware has a long way to go before it poses a real-world threat. There’s no reason to doubt that AI-assisted cyberattacks may one day produce more potent attacks. But the data so far indicates that threat actors—like most others using AI—are seeing mixed results that aren’t nearly as impressive as those in the AI industry claim.

Researchers question Anthropic claim that AI-assisted attack was 90% autonomous Read More »

clickfix-may-be-the-biggest-security-threat-your-family-has-never-heard-of

ClickFix may be the biggest security threat your family has never heard of

Another campaign, documented by Sekoia, targeted Windows users. The attackers behind it first compromise a hotel’s account for Booking.com or another online travel service. Using the information stored in the compromised accounts, the attackers contact people with pending reservations, an ability that builds immediate trust with many targets, who are eager to comply with instructions, lest their stay be canceled.

The site eventually presents a fake CAPTCHA notification that bears an almost identical look and feel to those required by content delivery network Cloudflare. The proof the notification requires for confirmation that there’s a human behind the keyboard is to copy a string of text and paste it into the Windows terminal. With that, the machine is infected with malware tracked as PureRAT.

Push Security, meanwhile, reported a ClickFix campaign with a page “adapting to the device that you’re visiting from.” Depending on the OS, the page will deliver payloads for Windows or macOS. Many of these payloads, Microsoft said, are LOLbins, the name for binaries that use a technique known as living off the land. These scripts rely solely on native capabilities built into the operating system. With no malicious files being written to disk, endpoint protection is further hamstrung.

The commands, which are often base-64 encoded to make them unreadable to humans, are often copied inside the browser sandbox, a part of most browsers that accesses the Internet in an isolated environment designed to protect devices from malware or harmful scripts. Many security tools are unable to observe and flag these actions as potentially malicious.

The attacks can also be effective given the lack of awareness. Many people have learned over the years to be suspicious of links in emails or messengers. In many users’ minds, the precaution doesn’t extend to sites that instruct them to copy a piece of text and paste it into an unfamiliar window. When the instructions come in emails from a known hotel or at the top of Google results, targets can be further caught off guard.

With many families gathering in the coming weeks for various holiday dinners, ClickFix scams are worth mentioning to those family members who ask for security advice. Microsoft Defender and other endpoint protection programs offer some defenses against these attacks, but they can, in some cases, be bypassed. That means that, for now, awareness is the best countermeasure.

ClickFix may be the biggest security threat your family has never heard of Read More »

how-to-trade-your-$214,000-cybersecurity-job-for-a-jail-cell

How to trade your $214,000 cybersecurity job for a jail cell

According to the FBI, in 2023, Martin took steps to become an “affiliate” of the BlackCat ransomware developers. BlackCat provides full-service malware, offering up modern ransomware code and dark web infrastructure in return for a cut of any money generated by affiliates, who find and hack their own targets. (And yes, sometimes BlackCat devs do scam their own affiliates.)

Martin had seen how this system worked in practice through his job, and he is said to have approached a pair of other people to help him make some easy cash. One of these people was allegedly Ryan Goldberg of Watkinsville, Georgia, who worked as an incident manager at the cybersecurity firm Sygnia. Goldberg told the FBI that Martin had recruited him to “try and ransom some companies.”

In May 2023, the group attacked its first target, a medical company based in Tampa, Florida. The team got the BlackCat software onto the company’s network, where it encrypted corporate data, and demanded a $10 million ransom for the decryption key.

Eventually, the extorted company decided to pay up—though only $1.27 million. The money was paid out in crypto, with a percentage going to the BlackCat devs and the rest split between Martin, Goldberg, and a third, as-yet-unnamed conspirator.

Success was short-lived, though. Throughout 2023, the extortion team allegedly went after a pharma company in Maryland, a doctor’s office, and an engineering firm in California, plus a drone manufacturer in Virginia.

Ransom requests varied widely: $5 million, or $1 million, or even a mere $300,000.

But no one else paid.

By early 2025, an FBI investigation had ramped up, and the Bureau searched Martin’s property in April. Once that happened, Goldberg said that he received a call from the third member of their team, who was “freaking out” about the raid on Martin. In early May, Goldberg searched the web for Martin’s name plus “doj.gov,” apparently looking for news on the investigation.

On June 17, Goldberg, too, was searched and his devices taken. He agreed to talk to agents and initially denied knowing anything about the ransomware attacks, but he eventually confessed his involvement and fingered Martin as the ringleader. Goldberg told agents that he had helped with the attacks to pay off some debts, and he was despondent about the idea of “going to federal prison for the rest of [his] life.”

How to trade your $214,000 cybersecurity job for a jail cell Read More »

commercial-spyware-“landfall”-ran-rampant-on-samsung-phones-for-almost-a-year

Commercial spyware “Landfall” ran rampant on Samsung phones for almost a year

Before the April 2025 patch, Samsung phones had a vulnerability in their image processing library. This is a zero-click attack because the user doesn’t need to launch anything. When the system processes the malicious image for display, it extracts shared object library files from the ZIP to run the Landfall spyware. The payload also modifies the device’s SELinux policy to give Landfall expanded permissions and access to data.

Landfall flowchart

How Landfall exploits Samsung phones.

Credit: Unit 42

How Landfall exploits Samsung phones. Credit: Unit 42

The infected files appear to have been delivered to targets via messaging apps like WhatsApp. Unit 42 notes that Landfall’s code references several specific Samsung phones, including the Galaxy S22, Galaxy S23, Galaxy S24, Galaxy Z Flip 4, and Galaxy Z Fold 4. Once active, Landfall reaches out to a remote server with basic device information. The operators can then extract a wealth of data, like user and hardware IDs, installed apps, contacts, any files stored on the device, and browsing history. It can also activate the camera and microphone to spy on the user.

Removing the spyware is no easy feat, either. Because of its ability to manipulate SELinux policies, it can burrow deeply into the system software. It also includes several tools that help evade detection. Based on the VirusTotal submissions, Unit 42 believes Landfall was active in 2024 and early 2025 in Iraq, Iran, Turkey, and Morocco. The vulnerability may have been present in Samsung’s software from Android 13 through Android 15, the company suggests.

Unit 42 says that several naming schemes and server responses share similarities with industrial spyware developed by big cyber-intelligence firms like NSO Group and Variston. However, they cannot directly tie Landfall to any particular group. While this attack was highly targeted, the details are now in the open, and other threat actors could now employ similar methods to access unpatched devices. Anyone with a supported Samsung phone should make certain they are on the April 2025 patch or later.

Commercial spyware “Landfall” ran rampant on Samsung phones for almost a year Read More »

wipers-from-russia’s-most-cut-throat-hackers-rain-destruction-on-ukraine

Wipers from Russia’s most cut-throat hackers rain destruction on Ukraine

One of the world’s most ruthless and advanced hacking groups, the Russian state-controlled Sandworm, launched a series of destructive cyberattacks in the country’s ongoing war against neighboring Ukraine, researchers reported Thursday.

In April, the group targeted a Ukrainian university with two wipers, a form of malware that aims to permanently destroy sensitive data and often the infrastructure storing it. One wiper, tracked under the name Sting, targeted fleets of Windows computers by scheduling a task named DavaniGulyashaSdeshka, a phrase derived from Russian slang that loosely translates to “eat some goulash,” researchers from ESET said. The other wiper is tracked as Zerlot.

A not-so-common target

Then, in June and September, Sandworm unleashed multiple wiper variants against a host of Ukrainian critical infrastructure targets, including organizations active in government, energy, and logistics. The targets have long been in the crosshairs of Russian hackers. There was, however, a fourth, less common target—organizations in Ukraine’s grain industry.

“Although all four have previously been documented as targets of wiper attacks at some point since 2022, the grain sector stands out as a not-so-frequent target,” ESET said. “Considering that grain export remains one of Ukraine’s main sources of revenue, such targeting likely reflects an attempt to weaken the country’s war economy.”

Wipers have been a favorite tool of Russian hackers since at least 2012, with the spreading of the NotPetya worm. The self-replicating malware originally targeted Ukraine, but eventually caused international chaos when it spread globally in a matter of hours. The worm resulted in tens of billions of dollars in financial damages after it shut down thousands of organizations, many for days or weeks.

Wipers from Russia’s most cut-throat hackers rain destruction on Ukraine Read More »