Researchers have uncovered a sustained and ongoing campaign by Russian spies that uses a clever phishing technique to hijack Microsoft 365 accounts belonging to a wide range of targets.
The technique is known as device code phishing. It exploits “device code flow,” a form of authentication formalized in the industry-wide OAuth standard. Authentication through device code flow is designed for logging printers, smart TVs, and similar devices into accounts. These devices typically don’t support browsers, making it difficult to sign in using more standard forms of authentication, such as entering user names, passwords, and two-factor mechanisms.
Rather than authenticating the user directly, the input-constrained device displays an alphabetic or alphanumeric device code along with a link associated with the user account. The user opens the link on a computer or other device that’s easier to sign in with and enters the code. The remote server then sends a token to the input-constrained device that logs it into the account.
Device authorization relies on two paths: one from an app or code running on the input-constrained device seeking permission to log in and the other from the browser of the device the user normally uses for signing in.
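The two paths above can be sketched with a toy in-memory authorization server. This is a minimal illustration of the device code flow only, with class and field names of my own invention; a real deployment talks to an OAuth provider's device-code and token endpoints rather than local method calls.

```python
import secrets
import string


class DeviceAuthServer:
    """Toy authorization server illustrating the OAuth device code flow."""

    def __init__(self):
        # device_code -> {"user_code": ..., "approved": bool}
        self.pending = {}

    def start_flow(self):
        # Path 1: the input-constrained device requests a code pair.
        device_code = secrets.token_urlsafe(24)
        user_code = "".join(secrets.choice(string.ascii_uppercase) for _ in range(8))
        self.pending[device_code] = {"user_code": user_code, "approved": False}
        return {
            "device_code": device_code,
            "user_code": user_code,
            "verification_uri": "https://example.com/device",
        }

    def approve(self, user_code):
        # Path 2: the user, on a browser-capable device, signs in
        # at the verification URI and enters the displayed code.
        for entry in self.pending.values():
            if entry["user_code"] == user_code:
                entry["approved"] = True
                return True
        return False

    def poll_token(self, device_code):
        # The constrained device polls until the user has approved.
        entry = self.pending.get(device_code)
        if entry is None:
            return {"error": "invalid_grant"}
        if not entry["approved"]:
            return {"error": "authorization_pending"}
        return {"access_token": secrets.token_urlsafe(32)}


server = DeviceAuthServer()
flow = server.start_flow()
print(server.poll_token(flow["device_code"]))  # still pending at this point
server.approve(flow["user_code"])
print(server.poll_token(flow["device_code"]))  # now contains an access token
```

The phishing attack abuses exactly this split: the attacker runs path 1 themselves and tricks the victim into completing path 2, handing the attacker a valid token.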
A concerted effort
Advisories from both security firm Volexity and Microsoft are warning that threat actors working on behalf of the Russian government have been abusing this flow since at least last August to take over Microsoft 365 accounts. The threat actors masquerade as trusted, high-ranking officials and initiate conversations with a targeted user on a messenger app such as Signal, WhatsApp, or Microsoft Teams. Organizations impersonated include:
Just in time for holiday tech-support sessions, here’s what to know about passkeys.
It’s that time again, when families and friends gather and implore the more technically inclined among them to troubleshoot problems they’re having behind the device screens all around them. One of the most vexing and most common problems is logging into accounts in a way that’s both secure and reliable.
Using the same password everywhere is easy, but in an age of mass data breaches and precision-orchestrated phishing attacks, it’s also highly inadvisable. Then again, creating hundreds of unique passwords, storing them securely, and keeping them out of the hands of phishers and database hackers is hard enough for experts, let alone Uncle Charlie, who got his first smartphone only a few years ago. No wonder this problem never goes away.
Passkeys—the much-talked-about alternative to passwords, widely available for almost two years—were supposed to fix all that. When I wrote about passkeys two years ago, I was a big believer. I remain convinced that passkeys mount the steepest hurdle yet for phishers, SIM swappers, database plunderers, and other adversaries trying to hijack accounts. How and why is that?
Elegant, yes, but usable?
The FIDO2 specification and the WebAuthn standard it incorporates, which together underpin passkeys, are nothing short of pure elegance. Unfortunately, as support has become ubiquitous in browsers, operating systems, password managers, and other third-party offerings, the ease and simplicity envisioned have been undone—so much so that they can’t be considered usable security, a term I define as a security measure that’s as easy, or only incrementally harder, to use as less-secure alternatives.
“There are barriers at each turn that guide you through a developer’s idea of how you should use them,” William Brown, a software engineer specializing in authentication, wrote in an online interview. “None of them are deal-breaking, but they add up.”
Passkeys are now supported on hundreds of sites and roughly a dozen operating systems and browsers. The diverse ecosystem demonstrates the industry-wide support for passkeys, but it has also fostered a jumble of competing workflows, appearances, and capabilities that can vary greatly depending on the particular site, OS, and browser (or browser agents such as native iOS or Android apps). Rather than help users understand the dizzying number of options and choose the right one, each implementation strong-arms the user into the vendor’s preferred option.
The experience of logging into PayPal with a passkey on Windows will be different from logging into the same site on iOS or even logging into it with Edge on Android. And forget about trying to use a passkey to log into PayPal on Firefox. The payment site doesn’t support that browser on any OS.
Another example is when I create a passkey for my LinkedIn account on Firefox. Because I use a wide assortment of browsers on multiple platforms, I have chosen to sync the passkey using my 1Password password manager. In theory, that choice allows me to automatically use this passkey anywhere I have access to my 1Password account, something that isn’t possible otherwise. But it’s not as simple as all that.
When I look at the passkey in LinkedIn settings, it shows as being created for Firefox on Mac OS X 10, even though it works on all the browsers and OSes I’m using.
Screenshot showing passkey is created for Firefox on Mac OS X 10.
Why is LinkedIn indicating otherwise? The answer is that there’s no way for LinkedIn to interoperate flexibly with the browsers and OSes and vice versa. Per the FIDO2 and WebAuthn specs, LinkedIn knows only the browser and OS I used when creating the credential. 1Password, meanwhile, has no way to coordinate with LinkedIn to ensure I’m presented with consistent information that will help me keep track of this. Suddenly, using passkeys is more confusing than it needs to be for ordinary users to get any value from them.
Things get more complicated still when I want to log into LinkedIn on Firefox for Android, and am presented with the following dialog box.
Screenshot showing a dialog box with the text: “You’re using on-device encryption. Unlock your passwords to sign in.”
At this point, I don’t know if it’s Google or Firefox that’s presenting me with this non-intuitive response. I just want to open LinkedIn using the passkey that’s being synced by 1Password to all my devices. Somehow, the mysterious entity responsible for this message (it’s Google in this case) has hijacked the process in an attempt to convince me to use its platform.
Also, consider the experience on WebAuthn.io, a site that demonstrates how the standard works under different scenarios. When a user wants to enroll a physical security key to log in on macOS, they receive a dialog that steers them toward using a passkey instead and to sync it through iCloud.
Dialog box showing macOS passkeys message.
The user just wants to enroll a security key in the form of a USB dongle or smartphone that can be used when logging in on any device. But instead, macOS preempts this task with directions for creating a passkey that will be synced through iCloud. What’s the user to do? Maybe click on the “other options” in small text at the very bottom? Let’s try and see.
The dialog box that appears after clicking “other options.”
Wait, why is it still offering the option for the passkey to be synced in iCloud, and how does that qualify as “other options”? And why is the most prominent suggestion that the user “continue with Touch ID”? It isn’t until selecting “security key” that the user will see the option they wanted all along—to store the credential on a security key. Only after this step—now three clicks in—does the light on a USB security key begin blinking, and the key is finally ready to be enrolled.
Dialog box finally allows the creation of a passkey on a security key.
The dueling dialogs in this example are by no means unique to macOS.
Too many cooks in the kitchen
“Most try to funnel you into a vendor’s sync passkey option, and don’t make it clear how you can use other things,” Brown noted. “Chrome, Apple, Windows, all try to force you to use their synced passkeys by default, and you have to click through prompts to use alternatives.”
Bruce Davie, another software engineer with expertise in authentication, agreed, writing in an October post that the current implementation of passkeys “seems to have failed the ‘make it easy for users’ test, which in my view is the whole point of passkeys.”
In April, Son Nguyen Kim, the product lead for the free Proton Pass password manager, penned a post titled Big Tech passkey implementations are a trap. In it, he complained that passkey implementations to date lock users into the platform they created the credential on.
“If you use Google Chrome as your browser on a Mac, it uses the Apple Keychain feature to store your passkeys,” he wrote. “This means you can’t sync your passkeys to your Chrome profile on other devices.” In an email last month, Kim said users can now override this option and choose to store their passkeys in Chrome. Even then, however, “passkeys created on Chrome on Mac don’t sync to Chrome in iPhone, so the user can’t use it seamlessly on Chrome on their iPhone.”
Other posts reciting similar complaints are here and here.
In short, there are too many cooks in the kitchen, and each one thinks they know the proper way to make pie.
I have put these and other criticisms to the test over the past four months, using passkeys in a truly heterogeneous environment that includes a MacBook Air, a Lenovo X1 ThinkPad, an iPhone, and a Pixel running Firefox, Chrome, Edge, Safari, and, on the phones, a large number of apps, including those for LinkedIn, PayPal, eBay, Kayak, Gmail, Amazon, and Uber. My objective has been to understand how well passkey-based authentication works over the long term, particularly for cross-platform users.
I fully agree that syncing across different platforms is much harder than it should be. So is the messaging provided during the passkey enrollment phase. The dialogs users see are dictated arbitrarily by whatever OS or browser has control at the moment. There’s no way for previously made configuration choices to be communicated to tailor dialog boxes and workflow.
Another shortcoming: There’s no programming interface for Apple, Google, and Microsoft platforms to directly pass credentials from one to the other. The FIDO2 standard has devised a clever method in an attempt to bridge this gap. It typically involves joining two devices over a secure BLE connection and using a QR code so the already-authenticated device can vouch for the trustworthiness of the other. This process is easy for some people in some cases, but it can quickly become quirky and prone to failure, particularly when fussy devices can’t connect over BLE.
In many cases, however, critics overstate the severity of these sorts of problems. These are definitely things that unnecessarily confuse and complicate the use of passkeys. But often, they’re one-time events that can be overcome by creating multiple passkeys and bootstrapping them for each device. From then on, these unphishable, unstealable credentials live on both devices, in much the way some users allow credentials for their Gmail or Apple ID to be stored in two or more browsers or password managers for convenience.
More helpful still is using a cross-platform password manager to store and sync passkeys. I have been using 1Password to do just that for a month with no problems to report. Most other name-brand password managers would likely perform as well. In keeping with the FIDO2 spec, these credentials are end-to-end encrypted.
Halfway house for password managers
With my 1Password account running on my devices, I had no trouble using a passkey to log into any enrolled site on a device running any browser. The flow was fast and intuitive. In most cases, both iOS and Android had no problem passing the key from 1Password to an app for Uber, Amazon, Gmail, or another site. Signing into phone apps is one of the bigger hassles for me. Passkeys made this process much easier, and they did so while also giving me the added security of MFA.
This reliance on a password manager, however, largely undermines a key value proposition of passkeys, which has been to provide an entirely new paradigm for authenticating ourselves. Using 1Password to sync a password is almost identical to syncing a passkey, so why bother? Worse still, the majority of people still don’t use password managers. I’m a big believer in password managers for the security they offer. Making them a condition for using a passkey would be a travesty.
I’m not the first person to voice this criticism. David Heinemeier Hansson said much the same thing in September.
“The problem with passkeys is that they’re essentially a halfway house to a password manager, but tied to a specific platform in ways that aren’t obvious to a user at all, and liable to easily leave them unable to access … their accounts,” wrote the Danish software engineer and programmer, who created Ruby on Rails and is the CTO of web-based software development firm 37signals. “Much the same way that two-factor authentication can do, but worse, since you’re not even aware of it.”
He continued:
Let’s take a simple example. You have an iPhone and a Windows computer. Chrome on Windows stores your passkeys in Windows Hello, so if you sign up for a service on Windows, and you then want to access it on iPhone, you’re going to be stuck (unless you’re so forward thinking as to add a second passkey, somehow, from the iPhone while on the Windows computer!). The passkey lives on the wrong device, if you’re away from the computer and want to login, and it’s not at all obvious to most users how they might fix that.
Even in the best case scenario, where you’re using an iPhone and a Mac that are synced with Keychain Access via iCloud, you’re still going to be stuck, if you need to access a service on a friend’s computer in a pinch. Or if you’re not using Keychain Access at all. There are plenty of pitfalls all over the flow. And the solutions, like scanning a QR code with a separate device, are cumbersome and alien to most users.
If you’re going to teach someone how to deal with all of this, and all the potential pitfalls that might lock them out of your service, you almost might as well teach them how to use a cross-platform password manager like 1Password.
Undermining security promises
The security benefits of passkeys at the moment are also undermined by an undeniable truth. Of the hundreds of sites supporting passkeys, there isn’t one I know of that allows users to ditch their password completely. The password is still mandatory. And with the exception of Google’s Advanced Protection Program, I know of no sites that won’t allow logins to fall back on passwords, often without any additional factor. Even then, all but Google APP accounts can be accessed using a recovery code.
This fallback on phishable, stealable credentials undoes some of the key selling points of passkeys. As soon as passkey adoption poses a meaningful hurdle in account takeovers, threat actors will devise hacks and social engineering attacks that exploit this shortcoming. Then we’re right back where we were before.
Christiaan Brand, co-chair of the FIDO2 technical working group and an identity and security product manager at Google, said in an online interview that most users aren’t ready for true passwordless authentication.
“We have to meet users where they are,” he wrote. “When we tested messaging for passkeys, users balked at ‘replace your password with passkeys,’ but felt much more comfortable with softer language like ‘you can now use a passkey to log in to your account too.’ Over time, we most definitely plan to wean users off phishable authentication factors, but we anticipate this journey to take multiple years. We really can only do it once users are so comfortable with passkeys that the fallback to passwords is (almost) never needed.”
A design choice further negating the security benefits of passkeys: Amazon, PayPal, Uber, and no small number of other sites supporting passkeys continue to rely on SMS texts for authentication even after passkeys are enrolled.
SMS-based MFA is among the weakest forms of this protection. Not only can the texts be phished, but they’re also notoriously vulnerable to SIM swaps, in which an adversary gains control of a target’s phone number. As long as these less-secure fallbacks exist, passkeys aren’t much more than security theater.
I still think passkeys make sense in many cases. I’ll say more about that later. First, for a bit more context, readers should know:
Passkeys are defined in the WebAuthn spec as a “discoverable credential,” historically known as a “resident key.” The credential takes the form of a private-public key pair created on the security key, which can be a FIDO-approved secure enclave embedded in a USB dongle, smartphone, or computer. The key pair is unique to each user account. The user creates the key pair after proving their identity to the website using an existing authentication method, typically a password. The private key never leaves the security key.
Going forward, when the user logs in, the site sends a security challenge. The user’s device signs the challenge with the locally stored private key and sends the signature to the website. The website then uses the public key it stores to verify the response was signed with the private key. With that, the user is logged in.
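The challenge-response exchange can be sketched in a few lines of Python using the `cryptography` package. This is a simplification: real WebAuthn signs the challenge together with client data and authenticator data, but the key-pair mechanics are the same.

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the authenticator generates a per-account key pair.
# The private key stays on the authenticator; the site stores the public key.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# Login: the site sends a random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it locally with the private key...
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the site verifies the signature with its stored public key.
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("signature valid: user logged in")
except InvalidSignature:
    print("signature invalid: login rejected")
```

Because the challenge is random per login and the private key never travels, a phisher who captures the exchange has nothing replayable to steal.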
Under the FIDO2 spec, the passkey can never leave the security key, except as an encrypted blob of bits when the passkey is being synced from one device to another. The secret key can be unlocked only when the user authenticates to the physical key using a PIN, password, or most commonly a fingerprint or face scan. In the event the user authenticates with a biometric, the biometric data never leaves the security key, just as it never leaves Android and iOS phones or computers running macOS or Windows.
Passkeys can be stored and synced using the same mechanisms millions of people already use for passwords—a password manager such as Bitwarden, Apple iCloud, Google Password Manager, or Microsoft’s cloud. Just like passwords, passkeys available in these managers are end-to-end encrypted using tried and true cryptographic algorithms.
The advent of this new paradigm was supposed to solve multiple problems at once—make authenticating ourselves online easier, eliminate the hassle of remembering passwords, and all but eradicate the most common forms of account takeovers.
When not encumbered by the problems mentioned earlier, this design provides multifactor authentication in a single stroke. The user logs in using something they have—the physical key, which must be near the device logging in. They must also use something they know—the PIN or password—or something they are—their face or fingerprint—to complete the credential transfer. The cryptographic secret never leaves the enclave embedded into the physical key.
What to tell Uncle Charlie?
In enterprise environments, passkeys can be a no-brainer alternative to passwords and authenticators. And even for Uncle Charlie—who has a single iPhone and Mac, and logs into only a handful of sites—passkeys may provide a simpler, less phishable path forward. Using a password manager to log into Gmail with a passkey ensures he’s protected by MFA. Using the password alone does not.
The takeaway from all of this—particularly for those recruited to provide technical support this week but also anyone trying to decide if it’s time to up their own authentication game: If a password manager isn’t already a part of the routine, see if it’s viable to add one now. Password managers make it practical to use a virtually unlimited number of long, randomly generated passwords that are unique to each site.
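Generating long, random, per-site passwords is exactly what password managers do under the hood. In Python, the standard library’s `secrets` module is enough for a sketch (the length and character set here are illustrative choices, not a standard):

```python
import secrets
import string


def generate_password(length: int = 20) -> str:
    """Generate a long, cryptographically random password."""
    # secrets (unlike random) draws from the OS's secure entropy source.
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))


# A fresh, unique password for each site:
print(generate_password())
```

The point isn’t to roll your own manager, it’s that uniqueness and randomness, not memorability, are what defeat credential-stuffing and database breaches.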
For some, particularly people with diminished capacity or less comfort being online, this step alone will be enough. Everyone else should also, whenever possible, opt into MFA, ideally using security keys or, if that’s not available, an authenticator app. I’m partial to 1Password as a password manager, Authy as an authenticator, and security keys from Yubico or Titan. There are plenty of other suitable alternatives.
I still think passkeys provide the greatest promise yet for filling the many security pitfalls of passwords and lowering the difficulty of remembering and storing them. For now, however, the hassles of using passkeys, coupled with their diminished security created by the presence of fallbacks, means no one should feel like a technophobe or laggard for sticking with their passwords. For now, passwords and key- or authenticator-based MFA remain essential.
With any luck, passkeys will someday be ready for the masses, but that day is not (yet) here.
Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.
“I’m fighting with Google now,” Townsend told Ars. “I don’t expect any real answers from them.”
How YouTubers can avoid being targeted
As YouTube appears evasive, Townsend has been grateful for long-time subscribers commenting to show support, which may help get his videos amplified more by the algorithm. On YouTube, he also said that because “the outpouring of support was beyond anything” he could’ve expected, it kept him “sane” through sometimes 24-hour periods of silence without any updates on when his account would be restored.
Townsend told Ars that he rarely does sponsorships, but like many in the fighting game community, his inbox gets spammed with offers constantly, much of which he assumes are scams.
“If you are a YouTuber of any size,” Townsend explained in his YouTube video, “you are inundated with this stuff constantly,” so “my BS detector is like, okay, fake, fake, fake, fake, fake, fake, fake. But this one just, it looked real enough, like they had their own social media presence, lots of followers. Everything looked real.”
Brian_F echoed that in his video, which breaks down how the latest scam evolved from more obvious scams, tricking even skeptical YouTubers who have years of experience dodging phishing scams in their inboxes.
“The game has changed,” Brian_F said.
Townsend told Ars that sponsorships are rare in the fighting game community. YouTubers are used to carefully scanning supposed offers to weed out the real ones from the fakes. But Brian_F’s video pointed out that scammers copy/paste legitimate offer letters, so it’s already hard to distinguish between potential sources of income and cleverly masked phishing attacks using sponsorships as lures.
Part of the vetting process includes verifying links without clicking through and verifying identities of people submitting supposed offers. But if YouTubers are provided with legitimate links early on, receiving offers from brands they really like, and see that contacts match detailed LinkedIn profiles of authentic employees who market the brand, it’s much harder to detect a fake sponsorship offer without as many obvious red flags.
“Microsoft assesses that Secret Blizzard either used the Amadey malware as a service (MaaS) or accessed the Amadey command-and-control (C2) panels surreptitiously to download a PowerShell dropper on target devices,” Microsoft said. “The PowerShell dropper contained a Base64-encoded Amadey payload appended by code that invoked a request to Secret Blizzard C2 infrastructure.”
The ultimate objective was to install Tavdig, a backdoor Secret Blizzard used to conduct reconnaissance on targets of interest. The Amadey sample Microsoft uncovered collected information from device clipboards and harvested passwords from browsers. It would then go on to install a custom reconnaissance tool that was “selectively deployed to devices of further interest by the threat actor—for example, devices egressing from STARLINK IP addresses, a common signature of Ukrainian front-line military devices.”
When Secret Blizzard assessed a target was of high value, it would then install Tavdig to collect information, including “user info, netstat, and installed patches and to import registry settings into the compromised device.”
Earlier in the year, Microsoft said, company investigators observed Secret Blizzard using tools belonging to Storm-1837 to also target Ukrainian military personnel. Microsoft researchers wrote:
In January 2024, Microsoft observed a military-related device in Ukraine compromised by a Storm-1837 backdoor configured to use the Telegram API to launch a cmdlet with credentials (supplied as parameters) for an account on the file-sharing platform Mega. The cmdlet appeared to have facilitated remote connections to the account at Mega and likely invoked the download of commands or files for launch on the target device. When the Storm-1837 PowerShell backdoor launched, Microsoft noted a PowerShell dropper deployed to the device. The dropper was very similar to the one observed during the use of Amadey bots and contained two base64 encoded files containing the previously referenced Tavdig backdoor payload (rastls.dll) and the Symantec binary (kavp.exe).
As with the Amadey bot attack chain, Secret Blizzard used the Tavdig backdoor loaded into kavp.exe to conduct initial reconnaissance on the device. Secret Blizzard then used Tavdig to import a registry file, which was used to install and provide persistence for the KazuarV2 backdoor, which was subsequently observed launching on the affected device.
Although Microsoft did not directly observe the Storm-1837 PowerShell backdoor downloading the Tavdig loader, based on the temporal proximity between the execution of the Storm-1837 backdoor and the observation of the PowerShell dropper, Microsoft assesses that it is likely that the Storm-1837 backdoor was used by Secret Blizzard to deploy the Tavdig loader.
Wednesday’s post comes a week after both Microsoft and Lumen’s Black Lotus Labs reported that Secret Blizzard co-opted the tools of a Pakistan-based threat group tracked as Storm-0156 to install backdoors and collect intel on targets in South Asia. Microsoft first observed the activity in late 2022. In all, Microsoft said, Secret Blizzard has used the tools and infrastructure of at least six other threat groups in the past seven years.
A Nigerian man living in the United Kingdom has been sentenced to 10 years for his role in a phishing scam that snatched more than $20 million from over 400 would-be home buyers in the US, including some savers who lost their entire nest eggs.
Late last week, the US Department of Justice confirmed that 33-year-old Babatunde Francis Ayeni pled guilty to conspiracy to commit wire fraud through “a sophisticated business email compromise scheme targeting real estate transactions” in the US.
To seize large down payments on homes, Ayeni and co-conspirators sent phishing emails to US title companies, real estate agents, and real estate attorneys. When unsuspecting employees clicked malicious attachments and links, a prompt appeared asking for login information that was then shared with the hackers.
Once the hackers were in, they could monitor their emails “for transactions where a buyer was scheduled to make a payment as part of a real estate transaction,” then swoop in to send wiring instructions to transfer funds to compromised accounts instead, the DOJ said. To help cover their tracks, co-conspirators then converted the money into Bitcoin on Coinbase.
The scam was seemingly uncovered after co-conspirators targeted a real estate title company in Gulf Shores, Alabama. More than half of the victims were unable to reverse the wire transactions. According to The Record, two victims who shared impact statements in court lost more than $114,000, including a man who “tried to buy his elderly father a home following a Parkinson’s diagnosis.”
A coalition of law-enforcement agencies said it shut down a service that facilitated the unlocking of more than 1.2 million stolen or lost mobile phones so they could be used by someone other than their rightful owner.
The service was part of iServer, a phishing-as-a-service platform that has been operating since 2018. The Argentina-based iServer sold access to a platform that offered a host of phishing-related services through email, texts, and voice calls. One of the specialized services offered was designed to help people in possession of large numbers of stolen or lost mobile devices to obtain the credentials needed to bypass protections such as the lost mode for iPhones, which prevent a lost or stolen device from being used without entering its passcode.
An international operation coordinated by Europol’s European Cybercrime Center said it arrested the Argentinian national who was behind iServer and identified more than 2,000 “unlockers” who had enrolled in the phishing platform over the years. Investigators ultimately found that the criminal network had been used to unlock more than 1.2 million mobile phones. Officials said they also identified 483,000 phone owners who had received messages phishing for credentials for their lost or stolen devices.
According to Group-IB, the security firm that discovered the phone-unlocking racket and reported it to authorities, iServer provided a web interface that allowed low-skilled unlockers to phish the rightful device owners for the device passcodes, user credentials from cloud-based mobile platforms, and other personal information.
Group-IB wrote:
During its investigations into iServer’s criminal activities, Group-IB specialists also uncovered the structure and roles of criminal syndicates operating with the platform: the platform’s owner/developer sells access to “unlockers,” who in their turn provide phone unlocking services to other criminals with locked stolen devices. The phishing attacks are specifically designed to gather data that grants access to physical mobile devices, enabling criminals to acquire users’ credentials and local device passwords to unlock devices or unlink them from their owners. iServer automates the creation and delivery of phishing pages that imitate popular cloud-based mobile platforms, featuring several unique implementations that enhance its effectiveness as a cybercrime tool.
Unlockers obtain the necessary information for unlocking the mobile phones, such as IMEI, language, owner details, and contact information, often accessed through lost mode or via cloud-based mobile platforms. They utilize phishing domains provided by iServer or create their own to set up a phishing attack. After selecting an attack scenario, iServer creates a phishing page and sends an SMS with a malicious link to the victim.
When successful, iServer customers would receive the credentials through the web interface. The customers could then unlock a phone to disable the lost mode so the device could be used by someone new.
Ultimately, criminals received the stolen and validated credentials through the iServer web interface, enabling them to unlock a phone, turn off “Lost mode” and untie it from the owner’s account.
To better camouflage the ruse, iServer often disguised phishing pages as belonging to cloud-based services.
The takedown and arrests occurred from September 10–17 in Spain, Argentina, Chile, Colombia, Ecuador, and Peru. Authorities in those countries began investigating the phishing service in 2022.
Phishers are using a novel technique to trick iOS and Android users into installing malicious apps that bypass safety guardrails Apple and Google built to block unauthorized apps.
Both mobile operating systems employ mechanisms designed to help users steer clear of apps that steal their personal information, passwords, or other sensitive data. iOS bars the installation of all apps other than those available in its App Store, an approach widely known as the Walled Garden. Android, meanwhile, is set by default to allow only apps available in Google Play. Sideloading—or the installation of apps from other markets—must be manually allowed, something Google warns against.
When native apps aren’t
Phishing campaigns making the rounds over the past nine months are using previously unseen ways to work around these protections. The objective is to trick targets into installing a malicious app that masquerades as an official one from the targets’ bank. Once installed, the malicious app steals account credentials and sends them to the attacker in real time over Telegram.
“This technique is noteworthy because it installs a phishing application from a third-party website without the user having to allow third-party app installation,” Jakub Osmani, an analyst with security firm ESET, wrote Tuesday. “For iOS users, such an action might break any ‘walled garden’ assumptions about security. On Android, this could result in the silent installation of a special kind of APK, which on further inspection even appears to be installed from the Google Play store.”
The novel method involves enticing targets to install a special type of app known as a Progressive Web App. These apps rely solely on Web standards to render functionality with the feel and behavior of a native app, without the restrictions that come with one. Because they depend only on Web standards, PWAs, as they’re abbreviated, will in theory work on any platform running a standards-compliant browser, on iOS and Android alike. Once installed, users can add PWAs to their home screen, giving them a striking similarity to native apps.
While PWAs run on both iOS and Android, Osmani’s post uses the term PWA for the iOS apps and WebAPK for the Android ones.
Installed phishing PWA (left) and real banking app (right).
ESET
Comparison between an installed phishing WebAPK (left) and real banking app (right).
ESET
The attack begins with a message sent by text, by automated call, or through a malicious ad on Facebook or Instagram. When targets click on the link in the scam message, they open a page that looks similar to the App Store or Google Play.
Example of a malicious advertisement used in these campaigns.
ESET
Phishing landing page imitating Google Play.
ESET
ESET’s Osmani continued:
From here victims are asked to install a “new version” of the banking application; an example of this can be seen in Figure 2. Depending on the campaign, clicking on the install/update button launches the installation of a malicious application from the website, directly on the victim’s phone, either in the form of a WebAPK (for Android users only), or as a PWA for iOS and Android users (if the campaign is not WebAPK based). This crucial installation step bypasses traditional browser warnings of “installing unknown apps”: this is the default behavior of Chrome’s WebAPK technology, which is abused by the attackers.
Example copycat installation page.
ESET
The process is a little different for iOS users, as an animated pop-up instructs victims how to add the phishing PWA to their home screen (see Figure 3). The pop-up copies the look of native iOS prompts. In the end, even iOS users are not warned about adding a potentially harmful app to their phone.
Figure 3: iOS pop-up instructions after clicking “Install” (credit: Michal Bláha)
ESET
After installation, victims are prompted to submit their Internet banking credentials to access their account via the new mobile banking app. All submitted information is sent to the attackers’ C&C servers.
The technique is made all the more effective because application information associated with the WebAPKs will show they were installed from Google Play and have been assigned no system privileges.
WebAPK info menu—notice the “No Permissions” at the top and “App details in store” section at the bottom.
ESET
So far, ESET is aware of the technique being used against customers of banks mostly in Czechia and less so in Hungary and Georgia. The attacks used two distinct command-and-control infrastructures, an indication that two different threat groups are using the technique.
“We expect more copycat applications to be created and distributed, since after installation it is difficult to separate the legitimate apps from the phishing ones,” Osmani said.
Roger Stone, former adviser to Donald Trump’s presidential campaign, center, during the Republican National Convention (RNC) in Milwaukee on July 17, 2024.
Getty Images
Google’s Threat Analysis Group confirmed Wednesday that it observed a threat actor backed by the Iranian government targeting Google accounts associated with US presidential campaigns, in addition to stepped-up attacks on Israeli targets.
APT42, associated with Iran’s Islamic Revolutionary Guard Corps, “consistently targets high-profile users in Israel and the US,” the Threat Analysis Group (TAG) writes. The Iranian group uses hosted malware, phishing pages, malicious redirects, and other tactics to gain access to Google, Dropbox, OneDrive, and other cloud-based accounts. Google’s TAG writes that it reset accounts, sent warnings to users, and blacklisted domains associated with APT42’s phishing attempts.
Among APT42’s tools were Google Sites pages that appeared to be a petition from legitimate Jewish activists, calling on Israel to mediate its ongoing conflict with Hamas. The page was fashioned from image files, not HTML, and an ngrok redirect sent users to phishing pages when they moved to sign the petition.
A petition purporting to be from The Jewish Agency for Israel, seeking support for mediation measures—but signatures quietly redirect to phishing sites, according to Google.
Google
In the US, Google’s TAG notes that, as with the 2020 elections, APT42 is actively targeting the personal emails of “roughly a dozen individuals affiliated with President Biden and former President Trump.” TAG confirms that APT42 “successfully gained access to the personal Gmail account of a high-profile political consultant,” which may be longtime Republican operative Roger Stone, as reported by The Guardian, CNN, and The Washington Post, among others. Microsoft separately noted last week that a “former senior advisor” to the Trump campaign had his Microsoft account compromised, which Stone also confirmed.
“Today, TAG continues to observe unsuccessful attempts from APT42 to compromise the personal accounts of individuals affiliated with President Biden, Vice President Harris and former President Trump, including current and former government officials and individuals associated with the campaigns,” Google’s TAG writes.
PDFs and phishing kits target both sides
Google’s post details the ways in which APT42 targets operatives in both parties. The broad strategy is to get the target off their email and into channels like Signal, Telegram, or WhatsApp, or possibly a personal email address that may not have two-factor authentication and threat monitoring set up. By establishing trust through sending legitimate PDFs, or luring them to video meetings, APT42 can then push links that use phishing kits with “a seamless flow” to harvest credentials from Google, Hotmail, and Yahoo.
After gaining a foothold, APT42 will often work to preserve its access by generating application-specific passwords inside the account, which typically bypass multifactor tools. Google notes that its Advanced Protection Program, intended for individuals at high risk of attack, disables such measures.
John Hultquist, with Google-owned cybersecurity firm Mandiant, told Wired’s Andy Greenberg that what looks initially like spying or political interference by Iran can easily escalate to sabotage and that both parties are equal targets. He also said that current thinking about threat vectors may need to expand.
“It’s not just a Russia problem anymore. It’s broader than that,” Hultquist said. “There are multiple teams in play. And we have to keep an eye out for all of them.”
Pornhub will soon be blocked in five more states as the adult site continues to fight what it considers privacy-infringing age-verification laws that require Internet users to provide an ID to access pornography.
On July 1, according to a blog post on the adult site announcing the impending block, Pornhub visitors in Indiana, Idaho, Kansas, Kentucky, and Nebraska will be “greeted by a video featuring” adult entertainer Cherie Deville, “who explains why we had to make the difficult decision to block them from accessing Pornhub.”
Pornhub explained that—similar to blocks in Texas, Utah, Arkansas, Virginia, Montana, North Carolina, and Mississippi—the site refuses to comply with soon-to-be-enforceable age-verification laws in this new batch of states that allegedly put users at “substantial risk” of identity theft, phishing, and other harms.
Age-verification laws requiring adult site visitors to submit “private information many times to adult sites all over the Internet” normalize the unnecessary disclosure of personally identifiable information (PII), Pornhub argued, warning, “this is not a privacy-by-design approach.”
Pornhub does not outright oppose age verification but advocates for laws that require device-based age verification, which allows users to access adult sites after authenticating their identity on their devices. That’s “the best and most effective solution for protecting minors and adults alike,” Pornhub argued, because the age-verification technology is proven and less PII would be shared.
“Users would only get verified once, through their operating system, not on each age-restricted site,” Pornhub’s blog said, claiming that “this dramatically reduces privacy risks and creates a very simple process for regulators to enforce.”
A spokesperson for Pornhub-owner Aylo told Ars that “unfortunately, the way many jurisdictions worldwide have chosen to implement age verification is ineffective, haphazard, and dangerous.”
“Any regulations that require hundreds of thousands of adult sites to collect significant amounts of highly sensitive personal information is putting user safety in jeopardy,” Aylo’s spokesperson told Ars. “Moreover, as experience has demonstrated, unless properly enforced, users will simply access non-compliant sites or find other methods of evading these laws.”
Age-verification laws are harmful, Pornhub says
Pornhub’s big complaint with current age-verification laws is that these laws are hard to enforce and seem to make it riskier than ever to visit an adult site.
“Since age verification software requires users to hand over extremely sensitive information, it opens the door for the risk of data breaches,” Pornhub’s blog said. “Whether or not your intentions are good, governments have historically struggled to secure this data. It also creates an opportunity for criminals to exploit and extort people through phishing attempts or fake [age verification] processes, an unfortunate and all too common practice.”
Over the past few years, the risk of identity theft or stolen PII on both widely used and smaller niche adult sites has been well-documented.
Hundreds of millions of people were impacted by major leaks exposing PII shared with popular adult sites like Adult Friend Finder and Brazzers in 2016, while likely tens of thousands of users were targeted on eight poorly secured adult sites in 2018. Niche and free sites have also been vulnerable to attacks, including millions collectively exposed through breaches of fetish porn site Luscious in 2019 and MyFreeCams in 2021.
And those are just the big breaches that make headlines. In 2019, Kaspersky Lab reported that malware targeting online porn account credentials more than doubled in 2018, and researchers analyzing 22,484 pornography websites estimated that 93 percent were leaking user data to a third party.
That’s why Pornhub argues that, as states have passed age-verification laws requiring ID, they’ve “introduced harm” by redirecting visitors to adult sites that have fewer privacy protections and worse security, allegedly exposing users to more threats.
As an example, Pornhub reported, traffic to Pornhub in Louisiana “dropped by approximately 80 percent” after their age-verification law passed. That allegedly showed not just how few users were willing to show an ID to access their popular platform, but also how “very easily” users could simply move to “pirate, illegal, or other non-compliant sites that don’t ask visitors to verify their age.”
Pornhub has continued to argue that states passing laws like Louisiana’s cannot effectively enforce the laws and are simply shifting users to make riskier choices when accessing porn.
“The Louisiana law and other copycat state-level laws have no regulator, only civil liability, which results in a flawed enforcement regime, effectively making it an option for platform operators to comply,” Pornhub’s blog said. As one of the world’s most popular adult platforms, Pornhub would surely be targeted for enforcement if found to be non-compliant, while smaller adult sites perhaps plagued by security risks and disincentivized to check IDs would go unregulated, the thinking goes.
Aylo’s spokesperson shared 2023 Similarweb data with Ars, showing that sites complying with age-verification laws in Virginia, including Pornhub and xHamster, lost substantial traffic while seven non-compliant sites saw a sharp uptick in traffic. Similar trends were observed in Google Trends data in Utah and Mississippi, while market shares were seemingly largely maintained in California, a state not yet checking IDs to access adult sites.
Hundreds of Microsoft Azure accounts, some belonging to senior executives, are being targeted by unknown attackers in an ongoing campaign that’s aiming to steal sensitive data and financial assets from dozens of organizations, researchers with security firm Proofpoint said Monday.
The campaign attempts to compromise targeted Azure environments by sending account owners emails that integrate techniques for credential phishing and account takeovers. The threat actors are doing so by combining individualized phishing lures with shared documents. Some of the documents embed links that, when clicked, redirect users to a phishing webpage. The wide breadth of roles targeted indicates the threat actors’ strategy of compromising accounts with access to various resources and responsibilities across affected organizations.
“Threat actors seemingly direct their focus toward a wide range of individuals holding diverse titles across different organizations, impacting hundreds of users globally,” a Proofpoint advisory stated. “The affected user base encompasses a wide spectrum of positions, with frequent targets including Sales Directors, Account Managers, and Finance Managers. Individuals holding executive positions such as “Vice President, Operations,” “Chief Financial Officer & Treasurer,” and “President & CEO” were also among those targeted.”
Once accounts are compromised, the threat actors secure them by enrolling them in various forms of multifactor authentication. This can make it harder for victims to change passwords or access dashboards to examine recent logins. In some cases, the MFA used relies on one-time passwords sent by text messages or phone calls. In most instances, however, the attackers employ an authenticator app with notifications and codes.
Examples of MFA manipulation events, executed by attackers in a compromised cloud tenant.
Proofpoint
Proofpoint observed other post-compromise actions including:
Data exfiltration. Attackers access and download sensitive files, including financial assets, internal security protocols, and user credentials.
Internal and external phishing. Mailbox access is leveraged to conduct lateral movement within impacted organizations and to target specific user accounts with personalized phishing threats.
Financial fraud. In an effort to perpetrate financial fraud, internal email messages are dispatched to target Human Resources and Financial departments within affected organizations.
Mailbox rules. Attackers create dedicated obfuscation rules intended to cover their tracks and erase all evidence of malicious activity from victims’ mailboxes.
Examples of obfuscation mailbox rules created by attackers following successful account takeover.
Proofpoint
The compromises are coming from several proxies that act as intermediaries between the attackers’ originating infrastructure and the accounts being targeted. The proxies help the attackers align the geographical location assigned to the connecting IP address with the region of the target. This helps to bypass various geofencing policies that restrict the number and location of IP addresses that can access the targeted system. The proxy services often change mid-campaign, a strategy that makes it harder for those defending against the attacks to block the IPs where the malicious activities originate.
Other techniques designed to obfuscate the attackers’ operational infrastructure include data hosting services and compromised domains.
“Beyond the use of proxy services, we have seen attackers utilize certain local fixed-line ISPs, potentially exposing their geographical locations,” Monday’s post stated. “Notable among these non-proxy sources are the Russia-based ‘Selena Telecom LLC’, and Nigerian providers ‘Airtel Networks Limited’ and ‘MTN Nigeria Communication Limited.’ While Proofpoint has not currently attributed this campaign to any known threat actor, there is a possibility that Russian and Nigerian attackers may be involved, drawing parallels to previous cloud attacks.”
How to check if you’re a target
There are several telltale signs of targeting. The most helpful one is a specific user agent used during the access phase of the attack: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
Attackers predominantly use this user agent to access the “OfficeHome” sign-in application, along with gaining unauthorized access to additional native Microsoft 365 apps, such as:
Office365 Shell WCSS-Client (indicative of browser access to Office365 applications)
Office 365 Exchange Online (indicative of post-compromise mailbox abuse, data exfiltration, and email threats proliferation)
My Signins (used by attackers for MFA manipulation)
My Apps
My Profile
Proofpoint included the following indicators of compromise:
Indicator (type): description

Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36 (user agent): involved in the attack’s access phase
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36 (user agent): involved in the attack’s access and post-access phases
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36 (user agent): involved in the attack’s access and post-access phases
sachacel[.]ru (domain): used for targeted phishing threats
lobnya[.]com (domain): source domain used as malicious infrastructure
makeapp[.]today (domain): source domain used as malicious infrastructure
alexhost[.]com (domain): source domain used as malicious infrastructure
mol[.]ru (domain): source domain used as malicious infrastructure
smartape[.]net (domain): source domain used as malicious infrastructure
airtel[.]com (domain): source domain used as malicious infrastructure
mtnonline[.]com (domain): source domain used as malicious infrastructure
acedatacenter[.]com (domain): source domain used as malicious infrastructure
Sokolov Dmitry Nikolaevich (ISP): source ISP used as malicious infrastructure
Dom Tehniki Ltd (ISP): source ISP used as malicious infrastructure
Selena Telecom LLC (ISP): source ISP used as malicious infrastructure
As the campaign is ongoing, Proofpoint may update the indicators as more become available. The company advised organizations to pay close attention to the user agent and source domains of incoming connections to employee accounts. Other helpful defenses include deploying security tools that look for signs of both initial account compromise and post-compromise activity, identifying initial vectors of compromise such as phishing, malware, or impersonation, and putting auto-remediation policies in place to drive out attackers quickly if they get in.
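For defenders with exported sign-in logs, the user-agent check is easy to script. The sketch below is purely illustrative: the CSV column names are an assumption for the example, not Azure’s actual sign-in export schema. The user agent it matches is the one Proofpoint lists for the attack’s access phase.

```python
import csv
import io

# The user agent Proofpoint flags for the attack's access phase.
IOC_USER_AGENT = (
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
)

def flag_suspicious_signins(csv_text):
    """Return rows whose user agent exactly matches the IOC user agent."""
    return [row for row in csv.DictReader(io.StringIO(csv_text))
            if row.get("user_agent") == IOC_USER_AGENT]

# Hypothetical log excerpt; user-agent fields are quoted because they
# contain commas.
log = (
    "user,app,user_agent\n"
    'alice,OfficeHome,"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
    '(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"\n'
    'bob,Outlook,"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) '
    'AppleWebKit/605.1.15 (KHTML, like Gecko) Safari/605.1.15"\n'
)
print([row["user"] for row in flag_suspicious_signins(log)])  # ['alice']
```

An exact string match is deliberately strict; real detections would also weigh the sign-in application and source network, since this user agent alone can occur in benign traffic.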
A woman scans a QR code in a café to see the menu online.
The US Federal Trade Commission has become the latest organization to warn against the growing use of QR codes in scams that attempt to take control of smartphones, make fraudulent charges, or obtain personal information.
Short for quick response codes, QR codes are two-dimensional bar codes that automatically open a Web browser or app when they’re scanned with a phone camera. Restaurants, parking garages, merchants, and charities display them to make it easy for people to open online menus or make online payments. QR codes are also used in security-sensitive contexts. YouTube, Apple TV, and dozens of other TV apps, for instance, allow someone to sign into their account by scanning a QR code displayed on the screen. The code opens a page in a browser or app on the phone, where the account password is already stored. Once open, the page authenticates the same account so it can be opened in the TV app. Two-factor authentication apps provide a similar flow using QR codes when enrolling a new account.
The ubiquity of QR codes and the trust placed in them hasn’t been lost on scammers, however. For more than two years now, parking lot kiosks that allow people to make payments through their phones have been a favorite target. Scammers paste QR codes over the legitimate ones. The scam QR codes lead to look-alike sites that funnel funds to fraudulent accounts rather than the ones controlled by the parking garage.
In other cases, emails that attempt to steal passwords or install malware on user devices use QR codes to lure targets to malicious sites. Because the QR code is embedded into the email as an image, anti-phishing security software isn’t able to detect that the link it leads to is malicious. By comparison, when the same malicious destination is presented as a text link in the email, it stands a much higher likelihood of being flagged by the security software. The ability to bypass such protections has led to a torrent of image-based phishes in recent months.
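A crude way to see why image-based lures slip past link scanners is to look at what a filter actually has to work with. The toy heuristic below is an illustration of the idea, not any vendor’s detection logic: it flags HTML email bodies that contain images but no text hyperlinks, the shape a QR-code phish typically takes, since the malicious URL lives inside the image rather than in the markup.

```python
from html.parser import HTMLParser

class TagCounter(HTMLParser):
    """Count <img> tags and <a href=...> links in an HTML body."""
    def __init__(self):
        super().__init__()
        self.images = 0
        self.links = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.images += 1
        elif tag == "a" and dict(attrs).get("href"):
            self.links += 1

def looks_image_only(html_body):
    """True if the body carries images but no text hyperlinks."""
    parser = TagCounter()
    parser.feed(html_body)
    return parser.images > 0 and parser.links == 0

print(looks_image_only('<p>Scan to verify your account:</p><img src="qr.png">'))  # True
print(looks_image_only('<a href="https://example.com">Sign in</a>'))              # False
```

Real mail filters go much further (OCR on attached images, sender reputation, URL rewriting), but the asymmetry is the same: a text link is inspectable, an embedded QR image is not.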
Last week, the FTC warned consumers to be on the lookout for these types of scams.
“A scammer’s QR code could take you to a spoofed site that looks real but isn’t,” the advisory stated. “And if you log in to the spoofed site, the scammers could steal any information you enter. Or the QR code could install malware that steals your information before you realize it.”
The warning came almost two years after the FBI issued a similar advisory. Guidance from both agencies includes:
After scanning a QR code, ensure that it leads to the official URL of the site or service that provided the code. As is the case with traditional phishing scams, malicious domain names may be almost identical to the intended one, except for a single misplaced letter.
Enter login credentials, payment card information, or other sensitive data only after ensuring that the site opened by the QR code passes a close inspection using the criteria above.
Before scanning a QR code presented on a menu, parking garage, vendor, or charity, ensure that it hasn’t been tampered with. Carefully look for stickers placed on top of the original code.
Be highly suspicious of any QR codes embedded into the body of an email. There are rarely legitimate reasons for benign emails from legitimate sites or services to use a QR code instead of a link.
Don’t install stand-alone QR code scanners on a phone without good reason, and then only after carefully scrutinizing the developer. Phones already have a built-in scanner available through the camera app that will be more trustworthy.
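The “single misplaced letter” check in the first item of the guidance above can be approximated in a few lines. This sketch (the domains are made up for illustration) compares the hostname a scanned QR code opens against the domain the user expected and flags near-matches that differ by roughly one character:

```python
from urllib.parse import urlparse
from difflib import SequenceMatcher

def lookalike_warning(url, expected_host, threshold=0.8):
    """Classify the URL's hostname relative to the expected domain.

    A similarity ratio near 1.0 but below exact match suggests a
    look-alike domain (e.g., a single swapped character).
    """
    host = (urlparse(url).hostname or "").lower()
    if host == expected_host:
        return "ok"
    ratio = SequenceMatcher(None, host, expected_host).ratio()
    if ratio >= threshold:
        return f"suspicious look-alike: {host!r} vs {expected_host!r}"
    return f"unrelated host: {host!r}"

# 'examp1e.com' (digit one for the letter l) scores ~0.91 against
# 'example.com', so it trips the look-alike branch.
print(lookalike_warning("https://examp1e.com/login", "example.com"))
print(lookalike_warning("https://example.com/login", "example.com"))  # ok
```

A character-similarity ratio is only a heuristic; it won’t catch homoglyph tricks in other scripts or subdomain games like `example.com.evil.tld`, which is why the guidance also says to inspect the full URL.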
One additional word of caution when it comes to QR codes: codes used to enroll a site into two-factor authentication from Google Authenticator, Authy, or another authenticator app provide the secret seed token that controls the ever-changing one-time password displayed by these apps. Don’t allow anyone to view such QR codes, and re-enroll the site in the event a code is exposed.
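The danger is concrete: a standard enrollment QR code encodes the Base32 seed, and every future one-time code is derived from that seed deterministically. A minimal RFC 6238 (TOTP) sketch, using the RFC’s published test seed rather than any real account, shows why anyone who photographs the QR code can mint valid codes:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(base32_seed, timestep=30, digits=6, at=None):
    """Compute a TOTP code from a Base32 seed (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(base32_seed, casefold=True)
    counter = int((time.time() if at is None else at) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

SEED = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"  # RFC 6238 test secret, not a real account
print(totp(SEED, at=59))  # "287082" -- matches the RFC 6238 test vector
```

Because the computation uses nothing but the seed and the clock, possession of the QR code is equivalent to possession of the second factor, which is why re-enrollment (rotating the seed) is the only remedy after exposure.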