Biz & IT

oops-cryptographers-cancel-election-results-after-losing-decryption-key.

Oops. Cryptographers cancel election results after losing decryption key.

One of the world’s premier security organizations has canceled the results of its annual leadership election after an official lost an encryption key needed to unlock results stored in a verifiable and privacy-preserving voting system.

The International Association of Cryptologic Research (IACR) said Friday that the votes were submitted and tallied using Helios, an open source voting system that uses peer-reviewed cryptography to cast and count votes in a verifiable, confidential, and privacy-preserving way. Helios encrypts each vote in a way that assures each ballot is secret. Other cryptography used by Helios allows each voter to confirm their ballot was counted fairly.

An “honest but unfortunate human mistake”

Per the association’s bylaws, three members of the election committee act as independent trustees. To prevent two of them from colluding to cook the results, each trustee holds a third of the cryptographic key material needed to decrypt results.

“Unfortunately, one of the three trustees has irretrievably lost their private key, an honest but unfortunate human mistake, and therefore cannot compute their decryption share,” the IACR said. “As a result, Helios is unable to complete the decryption process, and it is technically impossible for us to obtain or verify the final outcome of this election.”

To prevent a similar incident, the IACR will adopt a new mechanism for managing private keys. Instead of requiring all three chunks of private key material, elections will now require only two. Moti Yung, the trustee who was unable to provide his third of the key material, has resigned. He’s being replaced by Michel Abdalla.

The IACR is a nonprofit scientific organization providing research in cryptology and related fields. Cryptology is the science and practice of designing computation and communication systems that remain secure in the presence of adversaries. The associate is holding a new election that started Friday and runs through December 20.

Oops. Cryptographers cancel election results after losing decryption key. Read More »

how-to-know-if-your-asus-router-is-one-of-thousands-hacked-by-china-state-hackers

How to know if your Asus router is one of thousands hacked by China-state hackers

Thousands of Asus routers have been hacked and are under the control of a suspected China-state group that has yet to reveal its intentions for the mass compromise, researchers said.

The hacking spree is either primarily or exclusively targeting seven models of Asus routers, all of which are no longer supported by the manufacturer, meaning they no longer receive security patches, researchers from SecurityScorecard said. So far, it’s unclear what the attackers do after gaining control of the devices. SecurityScorecard has named the operation WrtHug.

Staying off the radar

SecurityScorecard said it suspects the compromised devices are being used similarly to those found in ORB (operational relay box) networks, which hackers primarily use to conduct espionage to conceal their identity.

“Having this level of access may enable the threat actor to use any compromised router as they see fit,” SecurityScorecard said. “Our experience with ORB networks suggests compromised devices will commonly be used for covert operations and espionage, unlike DDoS attacks and other types of overt malicious activity typically observed from botnets.”

Compromised routers are concentrated in Taiwan, with smaller clusters in South Korea, Japan, Hong Kong, Russia, central Europe, and the United States.

A heat map of infected devices.

A heat map of infected devices.

The Chinese government has been caught building massive ORB networks for years. In 2021, the French government warned national businesses and organizations that the APT31—one of China’s most active threat groups—was behind a massive attack campaign that used hacked routers to conduct reconnaissance. Last year, at least three similar China-operated campaigns came to light.

Russian-state hackers have been caught doing the same thing, although not as frequently. In 2018, Kremlin actors infected more than 500,000 small office and home routers with sophisticated malware tracked as VPNFilter. A Russian government group was also independently involved in an operation reported in one of the 2024 router hacks linked above.

How to know if your Asus router is one of thousands hacked by China-state hackers Read More »

google-tells-employees-it-must-double-capacity-every-6-months-to-meet-ai-demand

Google tells employees it must double capacity every 6 months to meet AI demand

While AI bubble talk fills the air these days, with fears of overinvestment that could pop at any time, something of a contradiction is brewing on the ground: Companies like Google and OpenAI can barely build infrastructure fast enough to fill their AI needs.

During an all-hands meeting earlier this month, Google’s AI infrastructure head Amin Vahdat told employees that the company must double its serving capacity every six months to meet demand for artificial intelligence services, reports CNBC. Vahdat, a vice president at Google Cloud, presented slides showing the company needs to scale “the next 1000x in 4-5 years.”

While a thousandfold increase in compute capacity sounds ambitious by itself, Vahdat noted some key constraints: Google needs to be able to deliver this increase in capability, compute, and storage networking “for essentially the same cost and increasingly, the same power, the same energy level,” he told employees during the meeting. “It won’t be easy but through collaboration and co-design, we’re going to get there.”

It’s unclear how much of this “demand” Google mentioned represents organic user interest in AI capabilities versus the company integrating AI features into existing services like Search, Gmail, and Workspace. But whether users are using the features voluntarily or not, Google isn’t the only tech company struggling to keep up with a growing user base of customers using AI services.

Major tech companies are in a race to build out data centers. Google competitor OpenAI is planning to build six massive data centers across the US through its Stargate partnership project with SoftBank and Oracle, committing over $400 billion in the next three years to reach nearly 7 gigawatts of capacity. The company faces similar constraints serving its 800 million weekly ChatGPT users, with even paid subscribers regularly hitting usage limits for features like video synthesis and simulated reasoning models.

“The competition in AI infrastructure is the most critical and also the most expensive part of the AI race,” Vahdat said at the meeting, according to CNBC’s viewing of the presentation. The infrastructure executive explained that Google’s challenge goes beyond simply outspending competitors. “We’re going to spend a lot,” he said, but noted the real objective is building infrastructure that is “more reliable, more performant and more scalable than what’s available anywhere else.”

Google tells employees it must double capacity every 6 months to meet AI demand Read More »

hp-and-dell-disable-hevc-support-built-into-their-laptops’-cpus

HP and Dell disable HEVC support built into their laptops’ CPUs

The OEMs disabling codec hardware also comes as associated costs for the international video compression standard are set to increase in January, as licensing administrator Access Advance announced in July. Per a breakdown from patent pool administration VIA Licensing Alliance, royalty rates for HEVC for over 100,001 units are increasing from $0.20 each to $0.24 each in the United States. To put that into perspective, in Q3 2025, HP sold 15,002,000 laptops and desktops, and Dell sold 10,166,000 laptops and desktops, per Gartner.

Last year, NAS company Synology announced that it was ending support for HEVC, as well as H.264/AVC and VCI, transcoding on its DiskStation Manager and BeeStation OS platforms, saying that “support for video codecs is widespread on end devices, such as smartphones, tablets, computers, and smart TVs.”

“This update reduces unnecessary resource usage on the server and significantly improves media processing efficiency. The optimization is particularly effective in high-user environments compared to traditional server-side processing,” the announcement said.

Despite the growing costs and complications with HEVC licenses and workarounds, breaking features that have been widely available for years will likely lead to confusion and frustration.

“This is pretty ridiculous, given these systems are $800+ a machine, are part of a ‘Pro’ line (jabs at branding names are warranted – HEVC is used professionally), and more applications these days outside of Netflix and streaming TV are getting around to adopting HEVC,” a Redditor wrote.

HP and Dell disable HEVC support built into their laptops’ CPUs Read More »

massive-cloudflare-outage-was-triggered-by-file-that-suddenly-doubled-in-size

Massive Cloudflare outage was triggered by file that suddenly doubled in size

Cloudflare’s proxy service has limits to prevent excessive memory consumption, with the bot management system having “a limit on the number of machine learning features that can be used at runtime.” This limit is 200, well above the actual number of features used.

“When the bad file with more than 200 features was propagated to our servers, this limit was hit—resulting in the system panicking” and outputting errors, Prince wrote.

Worst Cloudflare outage since 2019

The number of 5xx error HTTP status codes served by the Cloudflare network is normally “very low” but soared after the bad file spread across the network. “The spike, and subsequent fluctuations, show our system failing due to loading the incorrect feature file,” Prince wrote. “What’s notable is that our system would then recover for a period. This was very unusual behavior for an internal error.”

This unusual behavior was explained by the fact “that the file was being generated every five minutes by a query running on a ClickHouse database cluster, which was being gradually updated to improve permissions management,” Prince wrote. “Bad data was only generated if the query ran on a part of the cluster which had been updated. As a result, every five minutes there was a chance of either a good or a bad set of configuration files being generated and rapidly propagated across the network.”

This fluctuation initially “led us to believe this might be caused by an attack. Eventually, every ClickHouse node was generating the bad configuration file and the fluctuation stabilized in the failing state,” he wrote.

Prince said that Cloudflare “solved the problem by stopping the generation and propagation of the bad feature file and manually inserting a known good file into the feature file distribution queue,” and then “forcing a restart of our core proxy.” The team then worked on “restarting remaining services that had entered a bad state” until the 5xx error code volume returned to normal later in the day.

Prince said the outage was Cloudflare’s worst since 2019 and that the firm is taking steps to protect against similar failures in the future. Cloudflare will work on “hardening ingestion of Cloudflare-generated configuration files in the same way we would for user-generated input; enabling more global kill switches for features; eliminating the ability for core dumps or other error reports to overwhelm system resources; [and] reviewing failure modes for error conditions across all core proxy modules,” according to Prince.

While Prince can’t promise that Cloudflare will never have another outage of the same scale, he said that previous outages have “always led to us building new, more resilient systems.”

Massive Cloudflare outage was triggered by file that suddenly doubled in size Read More »

critics-scoff-after-microsoft-warns-ai-feature-can-infect-machines-and-pilfer-data

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data


Integration of Copilot Actions into Windows is off by default, but for how long?

Credit: Photographer: Chona Kasinger/Bloomberg via Getty Images

Microsoft’s warning on Tuesday that an experimental AI agent integrated into Windows can infect devices and pilfer sensitive user data has set off a familiar response from security-minded critics: Why is Big Tech so intent on pushing new features before their dangerous behaviors can be fully understood and contained?

As reported Tuesday, Microsoft introduced Copilot Actions, a new set of “experimental agentic features” that, when enabled, perform “everyday tasks like organizing files, scheduling meetings, or sending emails,” and provide “an active digital collaborator that can carry out complex tasks for you to enhance efficiency and productivity.”

Hallucinations and prompt injections apply

The fanfare, however, came with a significant caveat. Microsoft recommended users enable Copilot Actions only “if you understand the security implications outlined.”

The admonition is based on known defects inherent in most large language models, including Copilot, as researchers have repeatedly demonstrated.

One common defect of LLMs causes them to provide factually erroneous and illogical answers, sometimes even to the most basic questions. This propensity for hallucinations, as the behavior has come to be called, means users can’t trust the output of Copilot, Gemini, Claude, or any other AI assistant and instead must independently confirm it.

Another common LLM landmine is the prompt injection, a class of bug that allows hackers to plant malicious instructions in websites, resumes, and emails. LLMs are programmed to follow directions so eagerly that they are unable to discern those in valid user prompts from those contained in untrusted, third-party content created by attackers. As a result, the LLMs give the attackers the same deference as users.

Both flaws can be exploited in attacks that exfiltrate sensitive data, run malicious code, and steal cryptocurrency. So far, these vulnerabilities have proved impossible for developers to prevent and, in many cases, can only be fixed using bug-specific workarounds developed once a vulnerability has been discovered.

That, in turn, led to this whopper of a disclosure in Microsoft’s post from Tuesday:

“As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs,” Microsoft said. “Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.”

Microsoft indicated that only experienced users should enable Copilot Actions, which is currently available only in beta versions of Windows. The company, however, didn’t describe what type of training or experience such users should have or what actions they should take to prevent their devices from being compromised. I asked Microsoft to provide these details, and the company declined.

Like “macros on Marvel superhero crack”

Some security experts questioned the value of the warnings in Tuesday’s post, comparing them to warnings Microsoft has provided for decades about the danger of using macros in Office apps. Despite the long-standing advice, macros have remained among the lowest-hanging fruit for hackers out to surreptitiously install malware on Windows machines. One reason for this is that Microsoft has made macros so central to productivity that many users can’t do without them.

“Microsoft saying ‘don’t enable macros, they’re dangerous’… has never worked well,” independent researcher Kevin Beaumont said. “This is macros on Marvel superhero crack.”

Beaumont, who is regularly hired to respond to major Windows network compromises inside enterprises, also questioned whether Microsoft will provide a means for admins to adequately restrict Copilot Actions on end-user machines or to identify machines in a network that have the feature turned on.

A Microsoft spokesperson said IT admins will be able to enable or disable an agent workspace at both account and device levels, using Intune or other MDM (Mobile Device Management) apps.

Critics voiced other concerns, including the difficulty for even experienced users to detect exploitation attacks targeting the AI agents they’re using.

“I don’t see how users are going to prevent anything of the sort they are referring to, beyond not surfing the web I guess,” researcher Guillaume Rossolini said.

Microsoft has stressed that Copilot Actions is an experimental feature that’s turned off by default. That design was likely chosen to limit its access to users with the experience required to understand its risks. Critics, however, noted that previous experimental features—Copilot, for instance—regularly become default capabilities for all users over time. Once that’s done, users who don’t trust the feature are often required to invest time developing unsupported ways to remove the features.

Sound but lofty goals

Most of Tuesday’s post focused on Microsoft’s overall strategy for securing agentic features in Windows. Goals for such features include:

  • Non-repudiation, meaning all actions and behaviors must be “observable and distinguishable from those taken by a user”
  • Agents must preserve confidentiality when they collect, aggregate, or otherwise utilize user data
  • Agents must receive user approval when accessing user data or taking actions

The goals are sound, but ultimately they depend on users reading the dialog windows that warn of the risks and require careful approval before proceeding. That, in turn, diminishes the value of the protection for many users.

“The usual caveat applies to such mechanisms that rely on users clicking through a permission prompt,” Earlence Fernandes, a University of California, San Diego professor specializing in AI security, told Ars. “Sometimes those users don’t fully understand what is going on, or they might just get habituated and click ‘yes’ all the time. At which point, the security boundary is not really a boundary.”

As demonstrated by the rash of “ClickFix” attacks, many users can be tricked into following extremely dangerous instructions. While more experienced users (including a fair number of Ars commenters) blame the victims falling for such scams, these incidents are inevitable for a host of reasons. In some cases, even careful users are fatigued or under emotional distress and slip up as a result. Other users simply lack the knowledge to make informed decisions.

Microsoft’s warning, one critic said, amounts to little more than a CYA (short for cover your ass), a legal maneuver that attempts to shield a party from liability.

“Microsoft (like the rest of the industry) has no idea how to stop prompt injection or hallucinations, which makes it fundamentally unfit for almost anything serious,” critic Reed Mideke said. “The solution? Shift liability to the user. Just like every LLM chatbot has a ‘oh by the way, if you use this for anything important be sure to verify the answers” disclaimer, never mind that you wouldn’t need the chatbot in the first place if you knew the answer.”

As Mideke indicated, most of the criticisms extend to AI offerings other companies—including Apple, Google, and Meta—are integrating into their products. Frequently, these integrations begin as optional features and eventually become default capabilities whether users want them or not.

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him at here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data Read More »

tech-giants-pour-billions-into-anthropic-as-circular-ai-investments-roll-on

Tech giants pour billions into Anthropic as circular AI investments roll on

On Tuesday, Microsoft and Nvidia announced plans to invest in Anthropic under a new partnership that includes a $30 billion commitment by the Claude maker to use Microsoft’s cloud services. Nvidia will commit up to $10 billion to Anthropic and Microsoft up to $5 billion, with both companies investing in Anthropic’s next funding round.

The deal brings together two companies that have backed OpenAI and connects them more closely to one of the ChatGPT maker’s main competitors. Microsoft CEO Satya Nadella said in a video that OpenAI “remains a critical partner,” while adding that the companies will increasingly be customers of each other.

“We will use Anthropic models, they will use our infrastructure, and we’ll go to market together,” Nadella said.

Anthropic, Microsoft, and NVIDIA announce partnerships.

The move follows OpenAI’s recent restructuring that gave the company greater distance from its non-profit origins. OpenAI has since announced a $38 billion deal to buy cloud services from Amazon.com as the company becomes less dependent on Microsoft. OpenAI CEO Sam Altman has said the company plans to spend $1.4 trillion to develop 30 gigawatts of computing resources.

Tech giants pour billions into Anthropic as circular AI investments roll on Read More »

bonkers-bitcoin-heist:-5-star-hotels,-cash-filled-envelopes,-vanishing-funds

Bonkers Bitcoin heist: 5-star hotels, cash-filled envelopes, vanishing funds


Bitcoin mining hardware exec falls for sophisticated crypto scam to tune of $200k

As Kent Halliburton stood in a bathroom at the Rosewood Hotel in central Amsterdam, thousands of miles from home, running his fingers through an envelope filled with 10,000 euros in crisp banknotes, he started to wonder what he had gotten himself into.

Halliburton is the cofounder and CEO of Sazmining, a company that operates bitcoin mining hardware on behalf of clients—a model known as “mining-as-a-service.” Halliburton is based in Peru, but Sazmining runs mining hardware out of third-party data centers across Norway, Paraguay, Ethiopia, and the United States.

As Halliburton tells it, he had flown to Amsterdam the previous day, August 5, to meet Even and Maxim, two representatives of a wealthy Monaco-based family. The family office had offered to purchase hundreds of bitcoin mining rigs from Sazmining—around $4 million worth—which the company would install at a facility currently under construction in Ethiopia. Before finalizing the deal, the family office had asked to meet Halliburton in person.

When Halliburton arrived at the Rosewood Hotel, he found Even and Maxim perched in a booth. They struck him as playboy, high-roller types—particularly Maxim, who wore a tan three-piece suit and had a highly manicured look, his long dark hair parted down the middle. A Rolex protruded from the cuff of his sleeve.

Over a three-course lunch—ceviche with a roe garnish, Chilean sea bass, and cherry cake—they discussed the contours of the deal and traded details about their respective backgrounds. Even was talkative and jocular, telling stories about blowout parties in Marrakech. Maxim was aloof; he mostly stared at Halliburton, holding his gaze for long periods at a time as though sizing him up.

As a relationship-building exercise, Even proposed that Halliburton sell the family office around $3,000 in bitcoin. Halliburton was initially hesitant, but chalked it up as a peculiar dating ritual. One of the guys slid Halliburton the cash-filled envelope and told him to go to the bathroom, where he could count out the amount in private. “It felt like something out of a James Bond movie,” says Halliburton. “It was all very exotic to me.”

Halliburton left in a taxi, somewhat bemused by the encounter, but otherwise hopeful of closing the deal with the family office. For Sazmining, a small company with around 15 employees, it promised to be transformative.

Less than two weeks later, Halliburton had lost more than $200,000 worth of bitcoin to Even and Maxim. He didn’t know whether Sazmining could survive the blow, nor how the scammers had ensnared him.

Directly after his lunch with Even and Maxim, Halliburton flew to Latvia for a Bitcoin conference. From there, he traveled to Ethiopia to check on construction work at the data center facility.

While Halliburton was in Ethiopia, he received a WhatsApp message from Even, who wanted to go ahead with the deal on one condition: that Sazmining sell the family office a larger amount of bitcoin as part of the transaction, after the small initial purchase at the Rosewood Hotel. They landed on $400,000 worth—a tenth of the overall deal value.

Even asked Halliburton to return to Amsterdam to sign the contracts necessary to finalize the deal. Having been away from his family for weeks, Halliburton protested. But Even drew a line in the sand: “Remotely doesn’t work for me that’s not how I do business at the moment,” he wrote in a text message reviewed by WIRED.

Halliburton arrived back in Amsterdam in the early afternoon on August 16. That evening, he was due to meet Maxim at a teppanyaki restaurant at the five-star Okura Hotel. The interior is elaborately decorated in traditional Japanese style; it has wooden paneling, paper walls, a zen garden, and a flock of origami cranes that hang from string down a spiral staircase in the lobby.

Halliburton found Maxim sitting on a couch in the waiting area outside the restaurant, dressed in a gaudy silver suit. As they waited for a table, Maxim asked Halliburton whether he could demonstrate that Sazmining held enough bitcoin to go through with the side transaction that Even had proposed. He wanted Halliburton to move roughly half of the agreed amount—worth $220,000—into a bitcoin wallet app trusted by the family office. The funds would remain under Halliburton’s control, but the family office would be able to verify their existence using public transaction data.

Halliburton thumbed open his iPhone. The app, Atomic Wallet, had thousands of positive reviews and had been listed on the Apple App Store for several years. With Maxim at his side, Halliburton downloaded the app and created a new wallet. “I was trying to earn this guy’s trust,” says Halliburton. “Again, a $4 million contract. I’m still looking at that carrot.”

The dinner passed largely without incident. Maxim was less guarded this time; he talked about his fondness for watches and his work sourcing deals for the family office. Feeling under the weather from all the travel, Halliburton angled to wrap things up.

They left with the understanding that Maxim would take the signed contracts to the family office to be executed, while Halliburton would send the $220,000 in bitcoin to his new wallet address as agreed.

Back in his hotel room, Halliburton triggered a small test transaction using his new Atomic Wallet address. Then he wiped and reinstated the wallet using the private credentials—the seed phrase—generated when he first downloaded the app, to make sure that it functioned as expected. “Had to take some security measures but almost ready. Thanks for your patience,” wrote Halliburton in a WhatsApp message to Even. “No worries take your time,” Even responded.

At 10: 45 pm, satisfied with his tests, Halliburton signaled to a colleague to release $220,000 worth of bitcoin to the Atomic Wallet address. When it arrived, he sent a screenshot of the updated balance to Even. One minute later, Even wrote back, “Thank yiu [sic].”

Halliburton sent another message to Even, asking about the contracts. Though previously quick to answer, Even didn’t respond. Halliburton checked the Atomic Wallet app, sensing that something was wrong. The bitcoin had vanished.

Halliburton’s stomach dropped. As he sat on the bed, he tried to stop himself from vomiting. “It was like being punched in the gut,” says Halliburton. “It was just shock and disbelief.”

Halliburton racked his brain trying to figure out how he had been swindled. At 11: 30 pm, he sent another message to Even: “That was the most sophisticated scam I’ve ever experienced. I know you probably don’t give a shit but my business may not survive this. I’ve worked four years of my life to build it.”

Even responded, denying that he had done anything wrong, but that was the last Halliburton heard from him. Halliburton provided WIRED with the Telegram account Even had used; it was last active on the day the funds were drained. Even did not respond to a request for comment.

Within hours, the funds drained from Halliburton’s wallet began to be divided up, shuffled through a web of different addresses, and deposited with third-party platforms for converting crypto into regular currency, analysis by blockchain analytics companies Chainalysis and CertiK shows.

A portion of the bitcoin was split between different instant exchangers, which allow people to swap one type of cryptocurrency for another almost instantaneously. The bulk was funneled into a single address, where it was blended with funds tagged by Chainalysis as the likely proceeds of rip deals, a scam whereby somebody impersonates an investor to steal crypto from a startup.

“There’s nothing illegal about the services the scammer leveraged,” says Margaux Eckle, senior investigator at Chainalysis. “However, the fact that they leveraged consolidation addresses that appear very tightly connected to labeled scam activity is potentially indicative of a fraud operation.”

Some of the bitcoin that passed through the consolidation address was deposited with a crypto exchange, where it was likely swapped for regular currency. The remainder was converted into stablecoin and moved across so-called bridges to the Tron blockchain, which hosts several over-the-counter trading services that can be readily used to cash out large quantities of crypto, researchers claim.

The effect of the many hops, shuffles, conversions, and divisions is to make it more difficult to trace the origin of funds, so that they can be cashed out without arousing suspicion. “The scammer is quite sophisticated,” says Eckle. “Though we can trace through a bridge, it’s a way to slow the tracing of funds from investigators that could be on your tail.”

Eventually, the trail of public transaction data stops. To identify the perpetrators, law enforcement would have to subpoena the services that appear to have been used to cash out, which are widely required to collect information about users.

From the transaction data, it’s not possible to tell precisely how the scammers were able to access and drain Halliburton’s wallet without his permission. But aspects of his interactions with the scammers provide some clue.

Initially, Halliburton wondered whether the incident might be connected to a 2023 hack perpetrated by threat actors affiliated with the North Korean government, which led to $100 million worth of funds being drained from the accounts of Atomic Wallet users. (Atomic Wallet did not respond to a request for comment.)

But instead, the security researchers that spoke to WIRED believe that Halliburton fell victim to a targeted surveillance-style attack. “Executives who are publicly known to custody large crypto balances make attractive targets,” says Guanxing Wen, head of security research at CertiK.

The in-person dinners, expensive clothing, reams of cash, and other displays of wealth were gambits meant to put Halliburton at ease, researchers theorize. “This is a well-known rapport-building tactic in high-value confidence schemes,” says Wen. “The longer a victim spends with the attacker in a relaxed setting, the harder it becomes to challenge a later technical request.”

In order to complete the theft, the scammers likely had to steal the seed phrase for Halliburton’s newly created Atomic Wallet address. Equipped with a wallet’s seed phrase, anyone can gain unfettered access to the bitcoin kept inside.

One possibility is that the scammers, who dictated the locations for both meetings in Amsterdam, hijacked or mimicked the hotel Wi-Fi networks, allowing them to harvest information from Halliburton’s phone. “That equipment you can buy online, no problem. It would all fit inside a couple of suitcases,” says Adrian Cheek, lead researcher at cybersecurity company Coeus. But Halliburton insists that his phone never left his possession, and he used mobile data to download the Atomic Wallet app, not public Wi-Fi.

The most plausible explanation, claims Wen, is that the scammers—perhaps with the help of a nearby accomplice or a camera equipped with long-range zoom—were able to record the seed phrase when it appeared on Halliburton’s phone at the point he first downloaded the app, on the couch at the Okura Hotel.

Long before Halliburton delivered the $220,000 in bitcoin to his Atomic Wallet address, the scammers had probably set up a “sweeper script,” claims Wen, a type of automated bot coded to drain a wallet when it detects a large balance change.

The people the victim meets in-person in cases like this—like Even and Maxim—are rarely the ultimate beneficiaries, but rather mercenaries hired by a network of scam artists, who could be based on the other side of the globe.

“They’re normally recruited through underground forums, and secure chat groups,” says Cheek. “If you know where you’re looking, you can see this ongoing recruitment.”

For a few days, it remained unclear whether Sazmining would be able to weather the financial blow. The stolen funds equated to about six weeks’ worth of revenue. “I’m trying to keep the business afloat and survive this situation where suddenly we’ve got a cash crunch,” says Halliburton. By delaying payment to a vendor and extending the duration of an outstanding loan, the company was ultimately able to remain solvent.

That week, one of the Sazmining board members filed reports with law enforcement bodies in the Netherlands, the UK, and the US. They received acknowledgements from only UK-based Action Fraud, which said it would take no immediate action, and the Cyber Fraud Task Force, a division of the US Secret Service. (The CFTF did not respond to a request for comment.)

The incredible volume of crypto-related scam activity makes it all but impossible for law enforcement to investigate each theft individually. “It’s a type of threat and criminal activity that is reaching a scale that’s completely unprecedented,” says Eckle.

The best chance of a scam victim recovering their funds is for law enforcement to bust an entire scam ring, says Eckle. In that scenario, any funds recovered are typically dispersed to those who have reported themselves victims.

Until such a time, Halliburton has to make his peace with the loss. “It’s still painful,” he says. But “it wasn’t a death blow.”

This story originally appeared on Wired.

Photo of WIRED

Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

Bonkers Bitcoin heist: 5-star hotels, cash-filled envelopes, vanishing funds Read More »

google-ceo:-if-an-ai-bubble-pops,-no-one-is-getting-out-clean

Google CEO: If an AI bubble pops, no one is getting out clean

Market concerns and Google’s position

Alphabet’s recent market performance has been driven by investor confidence in the company’s ability to compete with OpenAI’s ChatGPT, as well as its development of specialized chips for AI that can compete with Nvidia’s. Nvidia recently reached a world-first $5 trillion valuation due to making GPUs that can accelerate the matrix math at the heart of AI computations.

Despite acknowledging that no company would be immune to a potential AI bubble burst, Pichai argued that Google’s unique position gives it an advantage. He told the BBC that the company owns what he called a “full stack” of technologies, from chips to YouTube data to models and frontier science research. This integrated approach, he suggested, would help the company weather any market turbulence better than competitors.

Pichai also told the BBC that people should not “blindly trust” everything AI tools output. The company currently faces repeated accuracy concerns about some of its AI models. Pichai said that while AI tools are helpful “if you want to creatively write something,” people “have to learn to use these tools for what they’re good at and not blindly trust everything they say.”

In the BBC interview, the Google boss also addressed the “immense” energy needs of AI, acknowledging that the intensive energy requirements of expanding AI ventures have caused slippage on Alphabet’s climate targets. However, Pichai insisted that the company still wants to achieve net zero by 2030 through investments in new energy technologies. “The rate at which we were hoping to make progress will be impacted,” Pichai said, warning that constraining an economy based on energy “will have consequences.”

Even with the warnings about a potential AI bubble, Pichai did not miss his chance to promote the technology, albeit with a hint of danger regarding its widespread impact. Pichai described AI as “the most profound technology” humankind has worked on.

“We will have to work through societal disruptions,” he said, adding that the technology would “create new opportunities” and “evolve and transition certain jobs.” He said people who adapt to AI tools “will do better” in their professions, whatever field they work in.

Google CEO: If an AI bubble pops, no one is getting out clean Read More »

5-plead-guilty-to-laptop-farm-and-id-theft-scheme-to-land-north-koreans-us-it-jobs

5 plead guilty to laptop farm and ID theft scheme to land North Koreans US IT jobs

Each defendant also helped the IT workers pass employer vetting procedures. Travis and Salazar, for example, appeared for drug testing on behalf of the workers.

Travis, an active-duty member of the US Army at the time, received at least $51,397 for his participation in the scheme. Phagnasay and Salazar earned at least $3,450 and $4,500, respectively. In all, the fraudulent jobs earned roughly $1.28 million in salary payments from the defrauded US companies, the vast majority of which were sent to the IT workers overseas.

The fifth defendant, Ukrainian national Oleksandr Didenko, pleaded guilty to one count of aggravated identity theft, in addition to wire fraud. He admitted to participating in a “years-long scheme that stole the identities of US citizens and sold them to overseas IT workers, including North Korean IT workers, so they could fraudulently gain employment at 40 US companies.” Didenko received hundreds of thousands of dollars from victim companies who hired the fraudulent applicants. As part of the plea agreement, Didenko is forfeiting more than $1.4 million, including more than $570,000 in fiat and virtual currency seized from him and his co-conspirators.

In 2022, the US Treasury Department said that the Democratic People’s Republic of Korea employs thousands of skilled IT workers around the world to generate revenue for the country’s weapons of mass destruction and ballistic missile programs.

“In many cases, DPRK IT workers represent themselves as US-based and/or non-North Korean teleworkers,” Treasury Department officials wrote. “The workers may further obfuscate their identities and/or location by sub-contracting work to non North Koreans. Although DPRK IT workers normally engage in IT work distinct from malicious cyber activity, they have used the privileged access gained as contractors to enable the DPRK’s malicious cyber intrusions. Additionally, there are likely instances where workers are subjected to forced labor.”

Other US government advisories posted in 2023 and 2024 concerning similar programs have been removed with no explanation.

In Friday’s release, the Justice Department also said it’s seeking the forfeiture of more than $15 million worth of USDT, a cryptocurrency stablecoin pegged to the US dollar, that the FBI seized in March from North APT38 actors. The seized funds were derived from four heists APT38 carried out, two in July 2023 against virtual currency payment processors in Estonia and Panama and two in November 2023 thefts from exchanges in Panama and Seychelles.

Justice Department attempts to locate, seize, and forfeit all the stolen assets remain ongoing because APT38 has laundered them through virtual currency bridges, mixers, exchanges, and over-the-counter traders, the Justice Department said.

5 plead guilty to laptop farm and ID theft scheme to land North Koreans US IT jobs Read More »

oracle-hit-hard-in-wall-street’s-tech-sell-off-over-its-huge-ai-bet

Oracle hit hard in Wall Street’s tech sell-off over its huge AI bet

“That is a huge liability and credit risk for Oracle. Your main customer, biggest customer by far, is a venture capital-funded start-up,” said Andrew Chang, a director at S&P Global.

OpenAI faces questions about how it plans to meet its commitments to spend $1.4 trillion on AI infrastructure over the next eight years. It has struck deals with several Big Tech groups, including Oracle’s rivals.

Of the five hyperscalers—which include Amazon, Google, Microsoft, and Meta—Oracle is the only one with negative free cash flow. Its debt-to-equity ratio has surged to 500 percent, far higher than Amazon’s 50 percent and Microsoft’s 30 percent, according to JPMorgan.

While all five companies have seen their cash-to-assets ratios decline significantly in recent years amid a boom in spending, Oracle’s is by far the lowest, JPMorgan found.

JPMorgan analysts noted a “tension between [Oracle’s] aggressive AI build-out ambitions and the limits of its investment-grade balance sheet.”

Analysts have also noted that Oracle’s data center leases are for much longer than its contracts to sell capacity to OpenAI.

Oracle has signed at least five long-term lease agreements for US data centers that will ultimately be used by OpenAI, resulting in $100 billion of off-balance-sheet lease commitments. The sites are at varying levels of construction, with some not expected to break ground until next year.

Safra Catz, Oracle’s sole chief executive from 2019 until she stepped down in September, resisted expanding its cloud business because of the vast expenses required. She was replaced by co-CEOs Clay Magouyrk and Mike Sicilia as part of the pivot by Oracle to a new era focused on AI.

Catz, who is now executive vice-chair of Oracle’s board, has exercised stock options and sold $2.5 billion of its shares this year, according to US regulatory filings. She had announced plans to exercise her stock options at the end of 2024.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Oracle hit hard in Wall Street’s tech sell-off over its huge AI bet Read More »

researchers-question-anthropic-claim-that-ai-assisted-attack-was-90%-autonomous

Researchers question Anthropic claim that AI-assisted attack was 90% autonomous

Claude frequently overstated findings and occasionally fabricated data during autonomous operations, claiming to have obtained credentials that didn’t work or identifying critical discoveries that proved to be publicly available information. This AI hallucination in offensive security contexts presented challenges for the actor’s operational effectiveness, requiring careful validation of all claimed results. This remains an obstacle to fully autonomous cyberattacks.

How (Anthropic says) the attack unfolded

Anthropic said GTG-1002 developed an autonomous attack framework that used Claude as an orchestration mechanism that largely eliminated the need for human involvement. This orchestration system broke complex multi-stage attacks into smaller technical tasks such as vulnerability scanning, credential validation, data extraction, and lateral movement.

“The architecture incorporated Claude’s technical capabilities as an execution engine within a larger automated system, where the AI performed specific technical actions based on the human operators’ instructions while the orchestration logic maintained attack state, managed phase transitions, and aggregated results across multiple sessions,” Anthropic said. “This approach allowed the threat actor to achieve operational scale typically associated with nation-state campaigns while maintaining minimal direct involvement, as the framework autonomously progressed through reconnaissance, initial access, persistence, and data exfiltration phases by sequencing Claude’s responses and adapting subsequent requests based on discovered information.”

The attacks followed a five-phase structure that increased AI autonomy through each one.

The life cycle of the cyberattack, showing the move from human-led targeting to largely AI-driven attacks using various tools, often via the Model Context Protocol (MCP). At various points during the attack, the AI returns to its human operator for review and further direction.

Credit: Anthropic

The life cycle of the cyberattack, showing the move from human-led targeting to largely AI-driven attacks using various tools, often via the Model Context Protocol (MCP). At various points during the attack, the AI returns to its human operator for review and further direction. Credit: Anthropic

The attackers were able to bypass Claude guardrails in part by breaking tasks into small steps that, in isolation, the AI tool didn’t interpret as malicious. In other cases, the attackers couched their inquiries in the context of security professionals trying to use Claude to improve defenses.

As noted last week, AI-developed malware has a long way to go before it poses a real-world threat. There’s no reason to doubt that AI-assisted cyberattacks may one day produce more potent attacks. But the data so far indicates that threat actors—like most others using AI—are seeing mixed results that aren’t nearly as impressive as those in the AI industry claim.

Researchers question Anthropic claim that AI-assisted attack was 90% autonomous Read More »