Biz & IT


Why Signal’s post-quantum makeover is an amazing engineering achievement


COMING TO A PHONE NEAR YOU

New design sets a high standard for post-quantum readiness.

Credit: Aurich Lawson | Getty Images


The encryption protecting communications against criminal and nation-state snooping is under threat. As private industry and governments get closer to building useful quantum computers, the algorithms protecting Bitcoin wallets, encrypted web visits, and other sensitive secrets will be useless. No one doubts the day will come, but as the now-common joke in cryptography circles observes, experts have been forecasting this cryptocalypse will arrive in the next 15 to 30 years for the past 30 years.

The uncertainty has created something of an existential dilemma: Should network architects spend the billions of dollars required to wean themselves off quantum-vulnerable algorithms now, or should they prioritize their limited security budgets fighting more immediate threats such as ransomware and espionage attacks? Given the expense and no clear deadline, it’s little wonder that less than half of all TLS connections made inside the Cloudflare network and only 18 percent of Fortune 500 networks support quantum-resistant TLS connections. It’s all but certain that even fewer organizations support quantum-ready encryption in less prominent protocols.

Triumph of the cypherpunks

One exception to the industry-wide lethargy is the engineering team that designs the Signal Protocol, the open source engine that powers the world’s most robust and resilient form of end-to-end encryption for multiple private chat apps, most notably the Signal Messenger. Eleven days ago, the nonprofit entity that develops the protocol, Signal Messenger LLC, published a 5,900-word write-up describing its latest updates that make Signal fully quantum-resistant.

The complexity and problem-solving required for making the Signal Protocol quantum safe are as daunting as just about any in modern-day engineering. The original Signal Protocol already resembled the inside of a fine Swiss timepiece, with countless gears, wheels, springs, hands, and other parts all interoperating in an intricate way. In less adept hands, mucking about with an instrument as complex as the Signal protocol could have led to shortcuts or unintended consequences that hurt performance, undoing what would otherwise be a perfectly running watch. Yet this latest post-quantum upgrade (the first one came in 2023) is nothing short of a triumph.

“This appears to be a solid, thoughtful improvement to the existing Signal Protocol,” said Brian LaMacchia, a cryptography engineer who oversaw Microsoft’s post-quantum transition from 2015 to 2022 and now works at Farcaster Consulting Group. “As part of this work, Signal has done some interesting optimization under the hood so as to minimize the network performance impact of adding the post-quantum feature.”

Of the multiple hurdles to clear, the most challenging was accounting for the much larger key sizes that quantum-resistant algorithms require. The overhaul here adds protections based on ML-KEM-768, an implementation of the CRYSTALS-Kyber algorithm that was selected in 2022 and formalized last year by the National Institute of Standards and Technology. ML-KEM is short for Module-Lattice-Based Key-Encapsulation Mechanism, but most of the time, cryptographers refer to it simply as KEM.

Ratchets, ping-pong, and asynchrony

Like the elliptic curve Diffie-Hellman (ECDH) protocol that Signal has used since its start, KEM is a key encapsulation mechanism. Also known as a key agreement mechanism, it provides the means for two parties who have never met to securely agree on one or more shared secrets in the presence of an adversary who is monitoring the parties’ connection. RSA, ECDH, and other encapsulation algorithms have long been used to negotiate symmetric keys (almost always AES keys) in protocols including TLS, SSH, and IKE. Unlike ECDH and RSA, however, the much newer KEM is quantum-safe.
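The three-operation interface shared by these mechanisms can be sketched with a classic Diffie-Hellman construction (sometimes called a DH-KEM): key generation produces a keypair, encapsulation uses the recipient's public key to produce a ciphertext plus a shared secret, and decapsulation recovers that secret. The group parameters below are a toy stand-in for illustration only, not X25519 or ML-KEM, and nothing here is production-grade:

```python
import hashlib
import secrets

# Keygen / encapsulate / decapsulate: the interface shared by ECDH-style
# constructions and ML-KEM. Toy 768-bit MODP group, illustrative only.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
    "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
    "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
    "E485B576625E7EC6F44C42E9A63A3620FFFFFFFFFFFFFFFF", 16)
G = 2

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return pow(G, sk, P), sk              # (public key, secret key)

def encapsulate(pk):
    eph = secrets.randbelow(P - 2) + 1
    ct = pow(G, eph, P)                   # "ciphertext" = ephemeral public key
    ss = hashlib.sha256(pow(pk, eph, P).to_bytes(96, "big")).digest()
    return ct, ss

def decapsulate(sk, ct):
    return hashlib.sha256(pow(ct, sk, P).to_bytes(96, "big")).digest()
```

Both sides end up with the same 32-byte secret without it ever crossing the wire; ML-KEM presents the same three operations but bases its hardness on lattice problems rather than discrete logarithms.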

Key agreement in a protocol like TLS is relatively straightforward. That’s because devices connecting over TLS negotiate a key over a single handshake that occurs at the beginning of a session. The agreed-upon AES key is then used throughout the session. The Signal Protocol is different. Unlike TLS sessions, Signal sessions are protected by forward secrecy, a cryptographic property that ensures the compromise of a key used to encrypt a recent set of messages can’t be used to decrypt an earlier set of messages. The protocol also offers Post-Compromise Security, which protects future messages from past key compromises. While a TLS session uses the same key throughout, keys within a Signal session constantly evolve.

To provide these confidentiality guarantees, the Signal Protocol updates secret key material each time a message party hits the send button or receives a message, and at other points, such as in graphical indicators that a party is currently typing and in the sending of read receipts. The mechanism that has made this constant key evolution possible over the past decade is what protocol developers call a “double ratchet.” Just as a traditional ratchet allows a gear to rotate in one direction but not in the other, the Signal ratchets allow messaging parties to create new keys based on a combination of preceding and newly agreed-upon secrets. The ratchets work in a single direction, the sending and receiving of future messages. Even if an adversary compromises a newly created secret, messages encrypted using older secrets can’t be decrypted.

The starting point is a handshake that performs three or four ECDH agreements that mix long- and short-term secrets to establish a shared secret. The creation of this “root key” allows the Double Ratchet to begin. Until 2023, this handshake used the X3DH key agreement; it now uses PQXDH, which makes the handshake quantum-resistant.

The first layer of the Double Ratchet, the Symmetric Ratchet, derives an AES key from the root key and advances it for every message sent. This allows every message to be encrypted with a new secret key. Consequently, if attackers compromise one party’s device, they won’t be able to learn anything about the keys that came earlier. Even then, though, the attackers would still be able to compute the keys used in future messages. That’s where the second, “Diffie-Hellman ratchet” comes in.
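The symmetric ratchet's one-way advance can be sketched with HMAC. The single-byte labels below mirror the pattern in the public Double Ratchet specification, but treat the details as illustrative rather than Signal's exact key schedule:

```python
import hashlib
import hmac

def ratchet_step(chain_key: bytes):
    """Derive a fresh message key and irreversibly advance the chain key."""
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return next_chain_key, message_key

ck = bytes(32)        # stand-in for a chain key derived from the root key
message_keys = []
for _ in range(3):
    ck, mk = ratchet_step(ck)
    message_keys.append(mk)

# Every message gets a distinct key, and because HMAC is one-way, the
# current chain key reveals nothing about earlier message keys.
assert len(set(message_keys)) == 3
```

The one-way property is exactly the "ratchet": from the current chain key you can only move forward, never recover what came before.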

The Diffie-Hellman ratchet incorporates a new ECDH public key into each message sent. Using Alice and Bob, the fictional characters often referred to when explaining asymmetric encryption, when Alice sends Bob a message, she creates a new ratchet keypair and computes the ECDH agreement between this key and the last ratchet public key Bob sent. This gives her a new secret, and she knows that once Bob gets her new public key, he will know this secret, too (because, as mentioned earlier, Bob previously sent that other key). With that, Alice can mix the new secret with her old root key to get a new root key and start fresh. The result: Attackers who learn her old secrets won’t be able to tell the difference between her new ratchet keys and random noise.
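Alice's root-key update can be sketched as a KDF over the old root key and the fresh DH output. The toy DH group and the HKDF-style extract/expand labels below are illustrative assumptions; Signal actually uses X25519 and its own KDF schedule:

```python
import hashlib
import hmac
import secrets

P, G = 2**127 - 1, 3      # toy DH group (illustrative, not secure)

def kdf_rk(root_key: bytes, dh_out: bytes):
    """Mix a fresh DH secret into the root key (HKDF-style extract/expand)."""
    prk = hmac.new(root_key, dh_out, hashlib.sha256).digest()
    new_root = hmac.new(prk, b"root", hashlib.sha256).digest()
    chain_key = hmac.new(prk, b"chain", hashlib.sha256).digest()
    return new_root, chain_key

# Bob's last ratchet public key, and Alice's newly generated ratchet keypair.
bob_sk = secrets.randbelow(P - 2) + 1
bob_pk = pow(G, bob_sk, P)
alice_sk = secrets.randbelow(P - 2) + 1
alice_pk = pow(G, alice_sk, P)

dh = pow(bob_pk, alice_sk, P).to_bytes(16, "big")   # Alice's fresh agreement
root, sending_chain = kdf_rk(bytes(32), dh)
# Once Bob receives alice_pk, he computes the same DH value and new root.
assert pow(alice_pk, bob_sk, P).to_bytes(16, "big") == dh
```

Because the new root depends on a secret an attacker with only old key material cannot compute, the output is indistinguishable from random to that attacker.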

The result is what Signal developers describe as “ping-pong” behavior, as the parties to a discussion take turns replacing ratchet key pairs one at a time. The effect: An eavesdropper who compromises one of the parties might recover a current ratchet private key, but soon enough, that private key will be replaced with a new, uncompromised one, and in a way that keeps it free from the prying eyes of the attacker.

The objective of the newly generated keys is to limit the number of messages that can be decrypted if an adversary recovers key material at some point in an ongoing chat. Messages sent prior to and after the compromise will remain off limits.

A major challenge designers of the Signal Protocol face is the need to make the ratchets work in an asynchronous environment. Asynchronous messages occur when parties send or receive them at different times—such as while one is offline and the other is active, or vice versa—without either needing to be present or respond immediately. The entire Signal Protocol must work within this asynchronous environment. What’s more, it must work reliably over unstable networks and networks controlled by adversaries, such as a government that forces a telecom or cloud service to spy on the traffic.

Shor’s algorithm lurking

By all accounts, Signal’s double ratchet design is state-of-the-art. That said, it’s wide open to an inevitable if not immediate threat: quantum computing. That’s because an adversary capable of monitoring traffic passing between two or more messenger users can capture that data and feed it into a quantum computer—once one of sufficient power is viable—and calculate the ephemeral keys generated in the second ratchet.

In classical computing, it’s infeasible, if not impossible, for such an adversary to calculate the key. Like all asymmetric encryption algorithms, ECDH is based on a mathematical one-way function. Also known as trapdoor functions, these problems are trivial to compute in one direction and substantially harder to compute in reverse. In elliptic curve cryptography, this one-way function is based on the discrete logarithm problem. The key parameters are based on specific points on an elliptic curve over the field of integers modulo some prime P.

On average, an adversary equipped with only a classical computer would spend billions of years guessing integers before arriving at the right ones. A quantum computer, by contrast, would be able to calculate the correct integers in a matter of hours or days. Shor’s algorithm—which runs only on a quantum computer—effectively turns this one-way discrete logarithm function into a two-way one. Shor’s algorithm can similarly make quick work of solving the one-way function that’s the basis for the RSA algorithm.
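At toy scale, the classical attack is literally exhaustive search, and its cost roughly doubles with every extra key bit; Shor's algorithm collapses the same problem to polynomial time. A sketch with deliberately tiny parameters:

```python
def dlog_bruteforce(g: int, h: int, p: int):
    """Find x such that g**x % p == h by trying every exponent in turn."""
    val = 1
    for x in range(p):
        if val == h:
            return x
        val = (val * g) % p
    return None

p, g = 101, 2                    # toy prime and generator
secret_exponent = 73
h = pow(g, secret_exponent, p)   # the public value an eavesdropper sees
assert dlog_bruteforce(g, h, p) == secret_exponent
```

With 101 group elements the loop finishes instantly; at the scale of real elliptic curve groups (on the order of 2^256 elements), the same search, even with the best known classical shortcuts, is hopeless. That asymmetry is what Shor's algorithm destroys.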

As noted earlier, the Signal Protocol received its first post-quantum makeover in 2023. This update added PQXDH—a Signal-specific implementation that combined the key agreements from elliptic curves used in X3DH (specifically X25519) and the quantum-safe KEM—in the initial protocol handshake. (X3DH was then put out to pasture as a standalone implementation.)

The move foreclosed the possibility of a quantum attack being able to recover the symmetric key used to start the ratchets, but the ephemeral keys established in the ping-ponging second ratchet remained vulnerable to a quantum attack. Signal’s latest update adds quantum resistance to these keys, ensuring that forward secrecy and post-compromise security are safe from Shor’s algorithm as well.

Even though the ping-ponging keys are vulnerable to future quantum attacks, they are broadly believed to be secure against today’s attacks from classical computers. The Signal Protocol developers didn’t want to remove them or the battle-tested code that produces them. That led to their decision to add quantum resistance by adding a third ratchet. This one uses a quantum-safe KEM to produce new secrets much like the Diffie-Hellman ratchet did before, ensuring quantum-safe, post-compromise security.

The technical challenges were anything but trivial. Elliptic curve keys generated in the X25519 implementation are about 32 bytes long, small enough to be added to each message without creating a burden on already constrained bandwidths or computing resources. An ML-KEM-768 key, by contrast, is more than 1,000 bytes. Additionally, Signal’s design requires sending both an encryption key and a ciphertext, making the total size 2,272 bytes.
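The arithmetic behind those figures follows from the ML-KEM-768 sizes in the FIPS 203 standard: a 1,184-byte encapsulation (public) key plus a 1,088-byte ciphertext.

```python
X25519_PUBLIC_KEY = 32    # bytes per elliptic curve ratchet key
MLKEM768_EK = 1184        # ML-KEM-768 encapsulation key, per FIPS 203
MLKEM768_CT = 1088        # ML-KEM-768 ciphertext, per FIPS 203

total = MLKEM768_EK + MLKEM768_CT
assert total == 2272                        # the figure cited above
assert total // X25519_PUBLIC_KEY == 71     # the ~71x increase
```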

And then there were three

To handle the 71x increase, Signal developers considered a variety of options. One was to send the 2,272-byte KEM key less often—say every 50th message or once every week—rather than with every message. That idea was nixed because it doesn’t work well in asynchronous or adversarial messaging environments. Signal Protocol developers Graeme Connell and Rolfe Schmidt explained:

Consider the case of “send a key if you haven’t sent one in a week”. If Bob has been offline for 2 weeks, what does Alice do when she wants to send a message? What happens if we can lose messages, and we lose the one in fifty that contains a new key? Or, what happens if there’s an attacker in the middle that wants to stop us from generating new secrets, and can look for messages that are [many] bytes larger than the others and drop them, only allowing keyless messages through?

Another option Signal engineers considered was breaking the 2,272-byte key into smaller chunks—say 71 of them, at 32 bytes each—and putting one in each message. That sounds like a viable approach at first, but once again, the asynchronous environment of messaging made it unworkable. What happens, for example, when data loss causes one of the chunks to be dropped? The protocol could deal with this scenario by re-sending chunks after all 71 had gone out. But then an adversary monitoring the traffic could simply cause the same chunk—say the third—to be dropped each time, preventing Alice and Bob from ever completing the key exchange.

Signal developers ultimately went with a solution that used this multiple-chunks approach.

Sneaking an elephant through the cat door

To manage the asynchrony challenges, the developers turned to “erasure codes,” a method of breaking up larger data into smaller pieces such that the original can be reconstructed using any sufficiently sized subset of chunks.

Charlie Jacomme, a researcher at INRIA Nancy on the Pesto team who focuses on formal verification and secure messaging, said this design accounts for packet loss by building redundancy into the chunked material. Instead of requiring all x chunks to be successfully received before the key can be reconstructed, the model requires only x-y of them, where y is the acceptable number of lost packets. As long as that threshold is met, the new key can be established even when packet loss occurs.
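A minimal way to see the idea is single-parity XOR coding: split the key material into k chunks and append one parity chunk, and any one lost chunk can be rebuilt from the rest. Signal's scheme uses stronger erasure codes that tolerate more losses, so treat this purely as an illustration of redundancy-for-reconstruction:

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int):
    """Split data into k equal chunks plus one XOR parity chunk."""
    size = -(-len(data) // k)                       # ceiling division
    chunks = [data[i*size:(i+1)*size].ljust(size, b"\x00") for i in range(k)]
    return chunks + [reduce(xor, chunks)]           # last chunk is parity

def decode(received, length: int) -> bytes:
    """Rebuild the original data even if one chunk arrived as None."""
    got = list(received)
    if sum(c is None for c in got) > 1:
        raise ValueError("single parity can repair only one lost chunk")
    if None in got:
        i = got.index(None)
        got[i] = reduce(xor, [c for c in got if c is not None])
    return b"".join(got[:-1])[:length]

key_material = b"pretend this is 2272 bytes of KEM key and ciphertext"
sent = encode(key_material, 4)
sent[2] = None                                  # one chunk lost in transit
assert decode(sent, len(key_material)) == key_material
```

Real erasure codes (Reed-Solomon and relatives) generalize this so that any k of n chunks suffice, which is what lets the protocol tolerate y lost packets rather than just one.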

The other part of the design was to split the KEM computations into smaller steps. These KEM computations are distinct from the KEM key material.

As Jacomme explained it:

Essentially, a small part of the public key is enough to start computing and sending a bigger part of the ciphertext, so you can quickly send in parallel the rest of the public key and the beginning of the ciphertext. Essentially, the final computations are equal to the standard, but some stuff was parallelized.

All this in fact plays a role in the end security guarantees, because by optimizing the fact that KEM computations are done faster, you introduce in your key derivation fresh secrets more frequently.

Signal’s post 10 days ago included several images that illustrate this design.

While the design solved the asynchronous messaging problem, it created a new complication of its own: This new quantum-safe ratchet advanced so quickly that it couldn’t be kept synchronized with the Diffie-Hellman ratchet. Ultimately, the architects settled on a creative solution. Rather than bolt KEM onto the existing double ratchet, they allowed it to remain more or less the same as it had been. Then they used the new quantum-safe ratchet to implement a parallel secure messaging system.

Now, when the protocol encrypts a message, it sources encryption keys from both the classic Double Ratchet and the new ratchet. It then mixes the two keys together (using a cryptographic key derivation function) to get a new encryption key that has all of the security of the classical Double Ratchet but now has quantum security, too.
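That mixing step can be sketched as a two-input key derivation. The HKDF-style construction and labels here are illustrative assumptions, not Signal's exact derivation:

```python
import hashlib
import hmac

def mix_keys(classical_key: bytes, pq_key: bytes) -> bytes:
    """Derive a message key that stays secure if EITHER input stays secret."""
    prk = hmac.new(b"hybrid-ratchet-mix", classical_key + pq_key,
                   hashlib.sha256).digest()                       # extract
    return hmac.new(prk, b"message-key", hashlib.sha256).digest() # expand

k_dh = bytes(32)               # stand-in for the Double Ratchet's output
k_spqr = bytes(range(32))      # stand-in for the SPQR's output
message_key = mix_keys(k_dh, k_spqr)

# An attacker must know BOTH inputs; changing either changes the output.
assert message_key != mix_keys(k_dh, bytes(32))
```

This is why breaking one ratchet (classically or with a quantum computer) still leaves the derived message key out of reach.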

The Signal engineers have given this third ratchet a formal name: the Sparse Post-Quantum Ratchet, or SPQR for short. The third ratchet was designed in collaboration with PQShield, AIST, and New York University. The developers presented the erasure-code-based chunking and the high-level Triple Ratchet design at the Eurocrypt 2025 conference. At the USENIX 2025 conference, they discussed the six options they considered for adding quantum-safe forward secrecy and post-compromise security and why SPQR and one other stood out. Presentations at the NIST PQC Standardization Conference and the Cryptographic Applications Workshop explain the details of chunking, the design challenges, and how the protocol had to be adapted to use the standardized ML-KEM.

Jacomme further observed:

The final thing interesting for the triple ratchet is that it nicely combines the best of both worlds. Between two users, you have a classical DH-based ratchet going on one side, and fully independently, a KEM-based ratchet is going on. Then, whenever you need to encrypt something, you get a key from both, and mix it up to get the actual encryption key. So, even if one ratchet is fully broken, be it because there is now a quantum computer, or because somebody manages to break either elliptic curves or ML-KEM, or because the implementation of one is flawed, or…, the Signal message will still be protected by the second ratchet. In a sense, this update can be seen, of course simplifying, as doubling the security of the ratchet part of Signal, and is a cool thing even for people that don’t care about quantum computers.

As both Signal and Jacomme noted, users of Signal and other messengers relying on the Signal Protocol need not concern themselves with any of these new designs. To paraphrase a certain device maker, it just works.

In the coming weeks or months, various messaging apps and app versions will be updated to add the triple ratchet. Until then, apps will simply rely on the double ratchet as they always did. Once apps receive the update, they’ll behave exactly as they did before upgrading.

For those who care about the internal workings of their Signal-based apps, though, the architects have documented in great depth the design of this new ratchet and how it behaves. Among other things, the work includes a mathematical proof verifying that the updated Signal protocol provides the claimed security properties.

Outside researchers are applauding the work.

“If the normal encrypted messages we use are cats, then post-quantum ciphertexts are elephants,” Matt Green, a cryptography expert at Johns Hopkins University, said in an interview. “So the problem here is to sneak an elephant through a tunnel designed for cats. And that’s an amazing engineering achievement. But it also makes me wish we didn’t have to deal with elephants.”


Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.

Why Signal’s post-quantum makeover is an amazing engineering achievement Read More »


Microsoft warns of new “Payroll Pirate” scam stealing employees’ direct deposits

Microsoft is warning of an active scam that diverts employees’ paycheck payments to attacker-controlled accounts after first taking over their profiles on Workday or other cloud-based HR services.

Payroll Pirate, as Microsoft says the campaign has been dubbed, gains access to victims’ HR portals by sending phishing emails that trick recipients into providing the credentials for logging in to their cloud accounts. The scammers recover multi-factor authentication codes using adversary-in-the-middle tactics: a fake site operated by the attackers sits between the victims and the site they believe they’re logging in to.

Not all MFA is created equal

The attackers then enter the intercepted credentials, including the MFA code, into the real site. This tactic, which has grown increasingly common in recent years, underscores the importance of adopting FIDO-compliant forms of MFA, which are immune to such attacks.
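The reason FIDO-style authentication resists this relay attack is origin binding: the authenticator signs the origin it is actually talking to along with the server's challenge, so a response harvested on a look-alike domain fails verification at the real one. A simplified sketch, with an HMAC standing in for the authenticator's per-site keypair signature and hypothetical domain names:

```python
import hashlib
import hmac

REAL_ORIGIN = "https://hr.example.com"      # hypothetical relying party

def authenticator_sign(device_key: bytes, challenge: bytes, origin: str) -> bytes:
    """The authenticator binds its response to the origin it actually sees."""
    return hmac.new(device_key, challenge + origin.encode(),
                    hashlib.sha256).digest()

def server_verify(device_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(device_key, challenge + REAL_ORIGIN.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

device_key, challenge = b"registered-device-secret", b"server-nonce"
legit = authenticator_sign(device_key, challenge, REAL_ORIGIN)
# A proxy phishing page can relay the challenge, but the authenticator
# signs the phishing origin, so the real server rejects the response.
relayed = authenticator_sign(device_key, challenge, "https://hr-example.evil")
assert server_verify(device_key, challenge, legit)
assert not server_verify(device_key, challenge, relayed)
```

A one-time code typed by the user carries no such binding, which is why it can be replayed through a proxy in real time.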

Once inside the employees’ accounts, the scammers make changes to payroll configurations within Workday. The changes cause direct-deposit payments to be diverted from accounts originally chosen by the employee and instead flow to an account controlled by the attackers. To block messages Workday automatically sends to users when such account details have been changed, the attackers create email rules that keep the messages from appearing in the inbox.

“The threat actor used realistic phishing emails, targeting accounts at multiple universities, to harvest credentials,” Microsoft said in a Thursday post. “Since March 2025, we’ve observed 11 successfully compromised accounts at three universities that were used to send phishing emails to nearly 6,000 email accounts across 25 universities.”

Microsoft warns of new “Payroll Pirate” scam stealing employees’ direct deposits Read More »


AI models can acquire backdoors from surprisingly few malicious documents

Fine-tuning experiments with 100,000 clean samples versus 1,000 clean samples showed similar attack success rates when the number of malicious examples stayed constant. For GPT-3.5-turbo, between 50 and 90 malicious samples achieved over 80 percent attack success across dataset sizes spanning two orders of magnitude.
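The striking part is that these are absolute counts, not fractions: the same fixed number of poisoned samples represents very different contamination rates across those dataset sizes. A quick check of the ratios, using the study's headline figure of 250 malicious documents:

```python
# Fixed poison count vs. contamination rate across the tested dataset sizes.
POISONED = 250   # malicious documents, per the study's headline figure

for clean in (1_000, 100_000):
    rate = POISONED / (clean + POISONED)
    print(f"{clean:>7,} clean samples -> {rate:.2%} of the data is poisoned")
```

A 100x swing in contamination rate with similar attack success is what the researchers mean when they say the number of poisons "does not scale up with model size" or dataset size.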

Limitations

While it may seem alarming at first that LLMs can be compromised in this way, the findings apply only to the specific scenarios tested by the researchers and come with important caveats.

“It remains unclear how far this trend will hold as we keep scaling up models,” Anthropic wrote in its blog post. “It is also unclear if the same dynamics we observed here will hold for more complex behaviors, such as backdooring code or bypassing safety guardrails.”

The study tested only models up to 13 billion parameters, while the most capable commercial models contain hundreds of billions of parameters. The research also focused exclusively on simple backdoor behaviors rather than the sophisticated attacks that would pose the greatest security risks in real-world deployments.

Also, the backdoors can be largely fixed by the safety training companies already do. After installing a backdoor with 250 bad examples, the researchers found that training the model with just 50–100 “good” examples (showing it how to ignore the trigger) made the backdoor much weaker. With 2,000 good examples, the backdoor basically disappeared. Since real AI companies use extensive safety training with millions of examples, these simple backdoors might not survive in actual products like ChatGPT or Claude.

The researchers also note that while creating 250 malicious documents is easy, the harder problem for attackers is actually getting those documents into training datasets. Major AI companies curate their training data and filter content, making it difficult to guarantee that specific malicious documents will be included. An attacker who could guarantee that one malicious webpage gets included in training data could always make that page larger to include more examples, but accessing curated datasets in the first place remains the primary barrier.

Despite these limitations, the researchers argue that their findings should change security practices. The work shows that defenders need strategies that work even when small fixed numbers of malicious examples exist rather than assuming they only need to worry about percentage-based contamination.

“Our results suggest that injecting backdoors through data poisoning may be easier for large models than previously believed as the number of poisons required does not scale up with model size,” the researchers wrote, “highlighting the need for more research on defences to mitigate this risk in future models.”

AI models can acquire backdoors from surprisingly few malicious documents Read More »


Discord says hackers stole government IDs of 70,000 users

Discord says that hackers made off with images of 70,000 users’ government IDs that they were required to provide in order to use the site.

Like an increasing number of sites, Discord requires certain users to provide a photo or scan of their driver’s license or other government ID that shows they meet the minimum age requirements in their country. In some cases, Discord allows users to prove their age by providing a selfie that shows their faces (it’s not clear how a face proves someone’s age, but there you go). The social media site imposes these requirements on users who are reported by other users to be under the minimum age for the country they’re connecting from.

“A substantial risk for identity theft”

On Wednesday, Discord said that roughly 70,000 users “may have had government-ID photos exposed” in a recent breach of a third-party service Discord entrusted to manage the data. The affected users had communicated with Discord’s Customer Support or Trust & Safety teams and subsequently submitted the IDs in reviews of age-related appeals.

“Recently, we discovered an incident where an unauthorized party compromised one of Discord’s third-party customer service providers,” the company said Wednesday. “The unauthorized party then gained access to information from a limited number of users who had contacted Discord through our Customer Support and/or Trust & Safety teams.”

Discord cut off the unnamed vendor’s access to its ticketing system after learning of the breach. The company is now in the process of emailing affected users. Notifications will come from noreply @ discord.com. Discord said it won’t contact any affected users by phone.

The data breach is a sign of things to come as more and more sites require users to turn over their official IDs as a condition of using their services. Besides Discord, sites including Roblox, Steam, and Twitch have required at least some of their users to submit photo IDs. Laws passed in 19 US states, France, the UK, and elsewhere now require porn sites to verify visitors are of legal age to view adult content. Many sites have complied, but not all.

Discord says hackers stole government IDs of 70,000 users Read More »


Bank of England warns AI stock bubble rivals 2000 dotcom peak

Share valuations based on past earnings have also reached their highest levels since the dotcom bubble 25 years ago, though the BoE noted they appear less extreme when based on investors’ expectations for future profits. “This, when combined with increasing concentration within market indices, leaves equity markets particularly exposed should expectations around the impact of AI become less optimistic,” the central bank said.

Toil and trouble?

The dotcom bubble offers a potentially instructive parallel to our current era. In the late 1990s, investors poured money into Internet companies based on the promise of a transformed economy, seemingly ignoring whether individual businesses had viable paths to profitability. Between 1995 and March 2000, the Nasdaq index rose 600 percent. When sentiment shifted, the correction was severe: the Nasdaq fell 78 percent from its peak, reaching a low point in October 2002.

Whether we’ll see the same thing or worse if an AI bubble pops is mere speculation at this point. But similarly to the early 2000s, the question about today’s market isn’t necessarily about the utility of AI tools themselves (the Internet was useful, after all, despite the bubble), but whether the amount of money being poured into the companies that sell them is out of proportion with the potential profits those improvements might bring.

We don’t have a crystal ball to determine when such a bubble might pop, or even if it is guaranteed to do so, but we’ll likely continue to see more warning signs ahead if AI-related deals continue to grow larger and larger over time.

Bank of England warns AI stock bubble rivals 2000 dotcom peak Read More »


Salesforce says it won’t pay extortion demand in 1 billion records breach

Salesforce says it’s refusing to pay an extortion demand made by a crime syndicate that claims to have stolen roughly 1 billion records from dozens of Salesforce customers.

The threat group making the demands began its campaign in May, when members made voice calls to organizations storing data on the Salesforce platform, Google-owned Mandiant said in June. The English-speaking callers would provide a pretext that required the target to connect an attacker-controlled app to their Salesforce portal. Amazingly—but not surprisingly—many of the people who received the calls complied.

It’s becoming a real mess

The threat group behind the campaign is calling itself Scattered LAPSUS$ Hunters, a mashup of three prolific data-extortion actors: Scattered Spider, LAPSUS$, and ShinyHunters. Mandiant, meanwhile, tracks the group as UNC6040, because its researchers so far have been unable to positively confirm the connections.

Earlier this month, the group created a website that named Toyota, FedEx, and 37 other Salesforce customers whose data was stolen in the campaign. In all, the number of records recovered, Scattered LAPSUS$ Hunters claimed, was “989.45m/~1B+.” The site called on Salesforce to begin negotiations for a ransom amount “or all your customers [sic] data will be leaked.” The site went on to say: “Nobody else will have to pay us, if you pay, Salesforce, Inc.” The site said the deadline for payment was Friday.

In an email Wednesday, a Salesforce representative said the company is spurning the demand.

Salesforce says it won’t pay extortion demand in 1 billion records breach Read More »


Synology caves, walks back some drive restrictions on upcoming NAS models


Policy change affects at least 2025 model Plus, Value, and J-series DiskStations.

Credit: SOPA Images / Getty

If you were considering the purchase of a Synology NAS but were leery of the unreasonably high cost of populating it with special Synology-branded hard disk drives, you can breathe a little easier today. In a press release dated October 8, Synology noted that with the release of its latest Disk Station Manager (DSM) update, some of its 2025 model-year products—specifically, the Plus, Value, and J-series DiskStation NAS devices—would “support the installation and storage pool creation of non-validated third-party drives.”

This unexpected move comes just a few months after Synology aggressively expanded its “verified drive” policy down-market to the entire Plus line of DiskStations. Prior to today, the network-attached storage vendor had shown no signs of swerving from the decision, painting it as a pro-consumer move intended to enhance reliability. “Extensive internal testing has shown that drives that follow a rigorous validation process when paired with Synology systems are at less risk of drive failure and ongoing compatibility issues,” Synology previously claimed in an email to Ars.

What is a “verified” or “validated” drive?

Synology first released its own brand of hard disk drives back in 2021 and began requiring their use in a small but soon-to-increase number of its higher-end NAS products. Although the drives were rebadged offerings from other manufacturers—there are very few hard disk drive OEMs, and Synology isn’t one of them—the company claimed that its branded disks underwent significant additional validation and testing that, when coupled with customized firmware, yielded reliability and performance improvements over off-the-shelf components.

However, those drives came with what was in some cases a substantial price increase over commodity hardware. Although I couldn’t find an actual published MSRP list, some spot checking on several web stores shows that the Synology HAT5310 enterprise SATA drive (a drive with the same warranty and expected service life as a Seagate Exos or Western Digital Gold) is available in 8TB at $299, 12TB at $493, and 20TB at an eye-watering $605. (For comparison, identically sized Seagate Exos disks are $220 at 8TB, $345 at 12TB, and $399 at 20TB.) Other Synology drive models tell similar pricing stories.
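Using the spot-check prices above, the per-terabyte premium can be worked out directly. A quick sketch (the figures are the article's spot checks from web stores, not published MSRPs):

```python
# Spot-check prices from the article (USD): Synology HAT5310 vs. Seagate Exos
prices = {
    8:  (299, 220),
    12: (493, 345),
    20: (605, 399),
}

for tb, (synology, exos) in prices.items():
    premium = (synology - exos) / exos * 100
    print(f"{tb:>2} TB: Synology ${synology} (${synology / tb:.0f}/TB), "
          f"Exos ${exos} (${exos / tb:.0f}/TB), premium: {premium:.0f}%")
```

The premium grows with capacity: roughly 36 percent at 8TB, 43 percent at 12TB, and 52 percent at 20TB.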


A Synology DS1525+ NAS, which up until today would scream at you unless you filled it with special Synology-branded disks.

Credit: Synology


If you put non-verified drives in a Synology NAS that required verified drives, certain functionality would be reduced or potentially removed, depending on the specific model disks you were introducing. Additionally, the Synology DSM interface would spam you with large “DANGER” warnings that your data might not be safe. Synology also at first refused to display S.M.A.R.T. diagnostic information from unverified drives, though this particular restriction was eventually lifted.

Savvy sysadmins could disable the verified drive requirements altogether by using one of several different workarounds, though that kind of thing opens one up to a different kind of danger—the danger of depending on an unsupported configuration tweak to keep a production system fully online and functional. It’s not a big deal for home users, but for business users relying on a Synology system at work with people’s livelihoods involved, the should-I-or-shouldn’t-I calculus of using such a workaround gets murkier. Synology is likely banking on the fact that if your business is of a certain size and you’re spending someone else’s money, a few hundred bucks more on each disk drive for peace of mind and a smoothly functioning NAS might seem like less of a speed bump than it would to a homelab admin spending money out of their own pocket.

While Synology’s claims about its validated drives having undergone extensive testing and yielding some performance benefit do hold water (at least under the specific benchmark circumstances called out on Synology’s drive pages), it’s very difficult for me to see Synology’s actions here as anything other than an attempt to squeeze additional revenue out of what the company thought to be an exploitable market segment.

Enterprise storage companies like Dell-EMC enjoy vast margins on high-end storage gear—margins that don’t exist down in the consumer and SMB space where Synology is usually found. So the company decided to be the change it wanted to see in the world and created a way to extract those margins by making expensive custom hard disk drives mandatory (at least in a “nice data you got there, it’d be a shame if something happened to it—better use our disks” kind of way) for more and more products.

Unfortunately for Synology, today is not 2021, and the prosumer/SMB NAS market is getting downright crowded. In addition to long-time players like QNAP that continue to pump out new products, up-and-comer UGREEN is taking market share from Synology in the consumer areas where Synology has traditionally been most successful, and even Ubiquiti is making a run at the mid-market with its own line of Unifi-integrated NAS devices. Synology’s verified drive rent-seeking has made the brand practically impossible to recommend over competitors’ offerings for any use case without significant caveats. At least, up until today’s backpedaling.

When asked about the reasoning behind the change, a Synology representative gave the following statement via email: “First and foremost, our goal is to create reliable and secure solutions for user’s data, which is what drives our decisions as a company, including this original one. We are continuing with our validation program, working with third-party vendors to test their drives under the same rigorous testing we put our branded drives through, so we will still uphold those standards that we have set for ourselves. However, based on user feedback and to provide more flexibility in drive choices since testing third party drives has taken a while, we’re opening up the drive policy to include non-verified drives.”

As part of the same exchange, I asked Synology if they’re aware that—at least anecdotally, from what I see among the IT-savvy Ars audience—this change has caused reputational damage among a significant number of existing and potential Synology customers. “While our original goal was to improve system reliability by focusing on a smaller set of validated configurations,” the company representative replied, “our valued community has shared feedback that flexibility is equally important. We are committed to our user’s experience and we understand that this decision didn’t align with their expectations of us. We value their input and will utilize it as we move forward.”

The about-face

As of the October 8 release of DSM 7.3, the input has been utilized. Here’s the full section from the company’s DSM 7.3 announcement:

As a part of its mission statement, Synology is committed to delivering reliable, high-performance storage systems. This commitment has led to a standardized process of rigorous testing and validation for both hardware and software components, and has been an integral part of Synology’s development approach for many years. Both Synology storage drives and components validated through the third-party program undergo uniform testing processes to ensure they are able to provide the highest levels of reliability with DSM.

Synology is currently collaborating closely with third-party drive manufacturers to accelerate the testing and verification of additional storage drives, and will announce more updates as soon as possible. In the meantime, 25 model year DiskStation Plus, Value, and J series running DSM 7.3 will support the installation and storage pool creation of non-validated third-party drives. This provides users greater flexibility while Synology continues to expand the lineup of officially verified drives that meet long-term reliability standards.

The upshot is that the validated drive requirements are being removed from 2025 model-year Plus, Value, and J-series NAS devices. (Well, mostly removed—the press release indicates that pool and cache creation on M.2 disks “still requires drives on the HCL [hardware compatibility list].”)

We asked Synology whether the requirements will also be lifted from previous-generation Synology products—and the answer to that question appears to be a “no.”

“This change only affects the ’25 series models: DS725+, DS225+, DS425+, DS925+, DS1525+, DS1825+. Models in the xs+ line, like the DS3622xs+, are considered a business/enterprise model and will remain under the current HCL policy for our business lines,” Synology explained.

Updated with comments from Synology.

Photo of Lee Hutchinson

Lee is the Senior Technology Editor, and oversees story development for the gadget, culture, IT, and video sections of Ars Technica. A long-time member of the Ars OpenForum with an extensive background in enterprise storage and security, he lives in Houston.


AMD wins massive AI chip deal from OpenAI with stock sweetener

As part of the arrangement, AMD will allow OpenAI to purchase up to 160 million AMD shares at 1 cent each throughout the chips deal.
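The warrant terms stated above imply a nominal exercise cost that is tiny relative to the stake's likely market value. A quick sketch (the share count and strike price come from the article; the market price is an invented placeholder, since the article does not state one):

```python
# Terms from the article: warrant for up to 160 million AMD shares at $0.01 each
shares = 160_000_000
strike = 0.01

exercise_cost = shares * strike
print(f"Full exercise cost: ${exercise_cost:,.0f}")  # $1,600,000

# Illustrative only: the warrant's value depends on AMD's market price,
# which the article does not give. $200/share is a made-up placeholder.
assumed_price = 200.00
print(f"At an assumed ${assumed_price:.0f}/share, the stake would be worth "
      f"about ${shares * assumed_price / 1e9:.0f} billion")
```

In other words, OpenAI could acquire a multibillion-dollar equity stake for well under $2 million in exercise cost, which is what makes the arrangement a "sweetener."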

OpenAI diversifies its chip supply

With demand for AI compute growing rapidly, companies like OpenAI have been looking for secondary supply lines and sources of additional computing capacity, and the AMD partnership is part of the company’s wider effort to secure sufficient computing power for its AI operations. In September, Nvidia announced an investment of up to $100 billion in OpenAI that included supplying at least 10 gigawatts of Nvidia systems. OpenAI plans to deploy a gigawatt of Nvidia’s next-generation Vera Rubin chips in late 2026.

OpenAI has worked with AMD for years, according to Reuters, providing input on the design of older generations of AI chips such as the MI300X. The new agreement calls for deploying the equivalent of 6 gigawatts of computing power using AMD chips over multiple years.

Beyond working with chip suppliers, OpenAI is widely reported to be developing its own silicon for AI applications and has partnered with Broadcom, as we reported in February. A person familiar with the matter told Reuters the AMD deal does not change OpenAI’s ongoing compute plans, including its chip development effort or its partnership with Microsoft.


ICE wants to build a 24/7 social media surveillance team

Together, these teams would operate as intelligence arms of ICE’s Enforcement and Removal Operations division. They will receive tips and incoming cases, research individuals online, and package the results into dossiers that could be used by field offices to plan arrests.

The scope of information contractors are expected to collect is broad. Draft instructions specify open-source intelligence: public posts, photos, and messages on platforms from Facebook to Reddit to TikTok. Analysts may also be tasked with checking more obscure or foreign-based sites, such as Russia’s VKontakte.

They would also be armed with powerful commercial databases such as LexisNexis Accurint and Thomson Reuters CLEAR, which knit together property records, phone bills, utilities, vehicle registrations, and other personal details into searchable files.

The plan calls for strict turnaround times. Urgent cases, such as suspected national security threats or people on ICE’s Top Ten Most Wanted list, must be researched within 30 minutes. High-priority cases get one hour; lower-priority leads must be completed within the workday. ICE expects at least three-quarters of all cases to meet those deadlines, with top contractors hitting closer to 95 percent.
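The tiered deadlines described above amount to a simple service-level scheme. A minimal sketch (the tier names, function, and the 8-hour reading of "within the workday" are my assumptions; the deadlines and percentages come from the article):

```python
from datetime import timedelta

# Turnaround targets described in the draft solicitation (tier names are mine)
SLA = {
    "urgent":  timedelta(minutes=30),  # national security threats, Top Ten list
    "high":    timedelta(hours=1),
    "routine": timedelta(hours=8),     # "within the workday", assumed 8 hours
}

def met_deadline(tier: str, elapsed: timedelta) -> bool:
    """Return True if a case was researched within its tier's deadline."""
    return elapsed <= SLA[tier]

# ICE expects at least 75% of cases on time; top contractors closer to 95%
results = [
    met_deadline("urgent", timedelta(minutes=25)),   # on time
    met_deadline("high", timedelta(minutes=90)),     # missed
]
print(f"{sum(results) / len(results):.0%} of cases on time")
```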

The plan goes beyond staffing. ICE also wants algorithms, asking contractors to spell out how they might weave artificial intelligence into the hunt—a solicitation that mirrors other recent proposals. The agency has also set aside more than a million dollars a year to arm analysts with the latest surveillance tools.

ICE did not immediately respond to a request for comment.

Earlier this year, The Intercept revealed that ICE had floated plans for a system that could automatically scan social media for “negative sentiment” toward the agency and flag users thought to show a “proclivity for violence.” Procurement records previously reviewed by 404 Media identified software used by the agency to build dossiers on flagged individuals, compiling personal details, family links, and even using facial recognition to connect images across the web. Observers warned it was unclear how such technology could distinguish genuine threats from political speech.


Why iRobot’s founder won’t go within 10 feet of today’s walking robots

In his post, Brooks recounts being “way too close” to an Agility Robotics Digit humanoid when it fell several years ago. He has not dared approach a walking one since. Even in promotional videos from humanoid companies, Brooks notes, humans are never shown close to moving humanoid robots unless separated by furniture, and even then, the robots only shuffle minimally.

This safety problem extends beyond accidental falls. For humanoids to fulfill their promised role in health care and factory settings, they need certification to operate in zones shared with humans. Current walking mechanisms make such certification virtually impossible under existing safety standards in most parts of the world.

Apollo robot

The humanoid Apollo robot. Credit: Google

Brooks predicts that within 15 years, there will indeed be many robots called “humanoids” performing various tasks. But ironically, they will look nothing like today’s bipedal machines. They will have wheels instead of feet, varying numbers of arms, and specialized sensors that bear no resemblance to human eyes. Some will have cameras in their hands or looking down from their midsections. The definition of “humanoid” will shift, just as “flying cars” now means electric helicopters rather than road-capable aircraft, and “self-driving cars” means vehicles with remote human monitors rather than truly autonomous systems.

The billions currently being invested in forcing today’s rigid, vision-only humanoids to learn dexterity will largely disappear, Brooks argues. Academic researchers are making more progress with systems that incorporate touch feedback, like MIT’s approach using a glove that transmits sensations between human operators and robot hands. But even these advances remain far from the comprehensive touch sensing that enables human dexterity.

Today, few people spend their days near humanoid robots, but Brooks’ 3-meter rule stands as a practical warning of challenges ahead from someone who has spent decades building these machines. The gap between promotional videos and deployable reality remains large, measured not just in years but in fundamental unsolved problems of physics, sensing, and safety.


Ars Live: Is the AI bubble about to pop? A live chat with Ed Zitron.

As generative AI has taken off since ChatGPT’s debut, inspiring hundreds of billions of dollars in investments and infrastructure developments, the top question on many people’s minds has been: Is generative AI a bubble, and if so, when will it pop?

To help us potentially answer that question, I’ll be hosting a live conversation with prominent AI critic Ed Zitron on October 7 at 3:30 pm ET as part of the Ars Live series. As Ars Technica’s senior AI reporter, I’ve been tracking both the explosive growth of this industry and the mounting skepticism about its sustainability.

You can watch the discussion live on YouTube when the time comes.

Zitron is the host of the Better Offline podcast and CEO of EZPR, a media relations company. He writes the newsletter Where’s Your Ed At, where he frequently dissects OpenAI’s finances and questions the actual utility of current AI products. His recent posts have examined whether companies are losing money on AI investments, the economics of GPU rentals, OpenAI’s trillion-dollar funding needs, and what he calls “The Subprime AI Crisis.”


Credit: Ars Technica

During our conversation, we’ll dig into whether the current AI investment frenzy matches the actual business value being created, what happens when companies realize their AI spending isn’t generating returns, and whether we’re seeing signs of a peak in the current AI hype cycle. We’ll also discuss what it’s like to be a prominent and sometimes controversial AI critic amid the drumbeat of AI mania in the tech industry.

While Ed and I don’t see eye to eye on everything, his sharp criticism of the AI industry’s excesses should make for an engaging discussion about one of tech’s most consequential questions right now.

Please join us for what should be a lively conversation about the sustainability of the current AI boom.

Add to Google Calendar | Add to calendar (.ics download)


That annoying SMS phish you just got may have come from a box like this

Scammers have been abusing unsecured cellular routers used in industrial settings to blast SMS-based phishing messages in campaigns that have been ongoing since 2023, researchers said.

The routers, manufactured by China-based Milesight IoT Co., Ltd., are rugged Internet of Things devices that use cellular networks to connect traffic lights, electric power meters, and other sorts of remote industrial devices to central hubs. They come equipped with SIM cards that work with 3G/4G/5G cellular networks and can be controlled by text message, Python scripts, and web interfaces.

An unsophisticated, yet effective, delivery vector

Security company Sekoia on Tuesday said that an analysis of “suspicious network traces” detected in its honeypots led to the discovery of a cellular router being abused to send SMS messages with phishing URLs. As company researchers investigated further, they identified more than 18,000 such routers accessible on the Internet, with at least 572 of them allowing free access to programming interfaces to anyone who took the time to look for them. The vast majority of the routers were running firmware versions that were more than three years out of date and had known vulnerabilities.

The researchers sent requests to the unauthenticated APIs that returned the contents of the routers’ SMS inboxes and outboxes. The contents revealed a series of campaigns dating back to October 2023 for “smishing”—a common term for SMS-based phishing. The fraudulent text messages were directed at phone numbers located in an array of countries, primarily Sweden, Belgium, and Italy. The messages instructed recipients to log in to various accounts, often related to government services, to verify the person’s identity. Links in the messages sent recipients to fraudulent websites that collected their credentials.
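The failure mode here is an API that answers without any authentication at all. A hypothetical sketch of the kind of query involved and a defensive filter over the results (the endpoint path and response shape are invented for illustration; the article does not give the real Milesight API details):

```python
import json
import urllib.request

def fetch_sms_outbox(router_ip: str) -> list[dict]:
    """Query a router's SMS outbox over an (assumed) unauthenticated API.

    The path and JSON shape below are placeholders, not the real interface.
    """
    url = f"http://{router_ip}/api/sms/outbox"  # hypothetical endpoint
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp).get("messages", [])

def looks_like_smishing(messages: list[dict]) -> list[dict]:
    """Flag outbox messages that resemble the phishing texts Sekoia found."""
    markers = ("verify your identity", "http://", "https://")
    return [m for m in messages
            if any(k in m.get("content", "").lower() for k in markers)]
```

Sekoia's approach was essentially this at scale: enumerate exposed routers, read their outboxes, and recognize the smishing campaigns from the message contents.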

“In the case under analysis, the smishing campaigns appear to have been conducted through the exploitation of vulnerable cellular routers—a relatively unsophisticated, yet effective, delivery vector,” Sekoia researchers Jeremy Scion and Marc N. wrote. “These devices are particularly appealing to threat actors, as they enable decentralized SMS distribution across multiple countries, complicating both detection and takedown efforts.”
