Security


SaaS Security Posture—It’s not you, it’s me!

In business, it’s not uncommon to take a software-as-a-service (SaaS)-first approach. It makes sense—there’s no need to deal with the infrastructure, management, patching, and hardening. You just turn on the SaaS app and let it do its thing.

But there are some downsides to that approach.

The Problem with SaaS

While SaaS has many benefits, it also introduces a host of new challenges, many of which don’t get the coverage they warrant. At the top of the list of challenges is security. So, while there are some very real benefits of SaaS, it’s also important to recognize the security risk that comes with it. When we talk about SaaS security, we’re not usually talking about the security of the underlying platform, but rather how we use it.

Remember, it’s not you, it’s me!

The Shared Responsibility Model

In the terms and conditions of most SaaS platforms is the “shared responsibility model.” What it usually says is that the SaaS vendor is responsible for providing a platform that is robust, resilient, and reliable—but they don’t take responsibility for how you use and configure it. And it is in these configuration changes that the security challenge lives.

SaaS platforms often come with multiple configuration options, such as ways to share data, ways to invite external users, how users can access the platform, what parts of the platform they can use, and so on. And every configuration change, every nerd knob turned, has the potential to take the platform away from its optimum security configuration or introduce an unexpected capability. While some applications, like Microsoft 365, offer guidance on security settings, this is not true for all of them. Even if they do, how easy is that to manage when you get to 10, 20, or even 100 SaaS apps?

Too Many Apps

Do you know how many SaaS apps you have? It’s not the SaaS apps you know about that are the issue, it’s the ones you don’t. Because SaaS is so accessible, it can easily evade management. There are apps that people use but an organization may not be aware of—like the app the sales team signed up for, that thing that marketing uses, and of course, everyone wants a GenAI app to play with. But these aren’t the only ones; there are also the apps that are part of the SaaS platforms you sign up for. Yes, even the ones you know about can contain additional apps you don’t know about. This is how an average enterprise gets to more than 100 SaaS applications. How do you manage each of those? How do you ensure you know they exist and they are configured in a way that meets good security practices and protects your information? Therein lies the challenge.

Introducing SSPM

SSPM can be the answer. It is designed to integrate initially with your managed SaaS applications to provide visibility into how they are configured, where those configurations present risk, and how to address it. It then continually monitors them for new threats and for configuration changes that introduce risk. It will also discover unmanaged SaaS applications in use, evaluate their posture, and present risk profiles of both the application and the SaaS vendor itself. In short, SSPM centralizes the management and security of your SaaS estate and highlights where its management and configuration present risk.

Overlap with CASB and DLP

There is some overlap in the market, particularly with cloud access security broker (CASB) and data loss prevention (DLP) tools. But these tools are a bit like capturing the thief as he runs down the driveway, rather than making sure the doors and windows were secured in the first place.

SSPM is yet another security tool to manage and pay for. But is it a tool we need? Well, that is up to you; however, our use of SaaS, for all the benefits it brings, has brought a new complexity and a new set of risks. We have so many more apps than we have ever had, many of them we don’t manage centrally, and they have many configuration knobs to turn. Without oversight of them all, we do run security risks.

Next Steps

SaaS security posture management (SSPM) is another entry into the growing catalog of security posture management tools. They are often easy to try out, and many offer free assessments that can give you an idea of the scale of the challenge you face. SaaS security is tricky and often does not get the coverage it deserves, so getting an idea of where you stand could be helpful.

Before you find yourself on the wrong end of a security incident and your SaaS vendor tells you it’s you, not me, it may be worth seeing what an SSPM tool can do for you. To learn more, take a look at GigaOm’s SSPM Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.




Google’s threat team confirms Iran targeting Trump, Biden, and Harris campaigns

It is only August —

Another Big Tech firm seems to confirm Trump adviser Roger Stone was hacked.

Roger Stone, former adviser to Donald Trump’s presidential campaign, center, during the Republican National Convention (RNC) in Milwaukee on July 17, 2024. (Credit: Getty Images)

Google’s Threat Analysis Group confirmed Wednesday that it observed a threat actor backed by the Iranian government targeting Google accounts associated with US presidential campaigns, in addition to stepped-up attacks on Israeli targets.

APT42, associated with Iran’s Islamic Revolutionary Guard Corps, “consistently targets high-profile users in Israel and the US,” the Threat Analysis Group (TAG) writes. The Iranian group uses hosted malware, phishing pages, malicious redirects, and other tactics to gain access to Google, Dropbox, OneDrive, and other cloud-based accounts. Google’s TAG writes that it reset accounts, sent warnings to users, and blacklisted domains associated with APT42’s phishing attempts.

Among APT42’s tools were Google Sites pages that appeared to be a petition from legitimate Jewish activists, calling on Israel to mediate its ongoing conflict with Hamas. The page was fashioned from image files, not HTML, and an ngrok redirect sent users to phishing pages when they moved to sign the petition.

A petition purporting to be from The Jewish Agency for Israel, seeking support for mediation measures—but signatures quietly redirect to phishing sites, according to Google. (Credit: Google)

In the US, Google’s TAG notes that, as with the 2020 elections, APT42 is actively targeting the personal emails of “roughly a dozen individuals affiliated with President Biden and former President Trump.” TAG confirms that APT42 “successfully gained access to the personal Gmail account of a high-profile political consultant,” which may be longtime Republican operative Roger Stone, as reported by The Guardian, CNN, and The Washington Post, among others. Microsoft separately noted last week that a “former senior advisor” to the Trump campaign had his Microsoft account compromised, which Stone also confirmed.

“Today, TAG continues to observe unsuccessful attempts from APT42 to compromise the personal accounts of individuals affiliated with President Biden, Vice President Harris and former President Trump, including current and former government officials and individuals associated with the campaigns,” Google’s TAG writes.

PDFs and phishing kits target both sides

Google’s post details the ways in which APT42 targets operatives in both parties. The broad strategy is to get the target off their email and into channels like Signal, Telegram, or WhatsApp, or possibly a personal email address that may not have two-factor authentication and threat monitoring set up. By establishing trust through sending legitimate PDFs, or luring them to video meetings, APT42 can then push links that use phishing kits with “a seamless flow” to harvest credentials from Google, Hotmail, and Yahoo.

After gaining a foothold, APT42 will often work to preserve its access by generating application-specific passwords inside the account, which typically bypass multifactor tools. Google notes that its Advanced Protection Program, intended for individuals at high risk of attack, disables such measures.

Publications, including Politico, The Washington Post, and The New York Times, have reported being offered documents from the Trump campaign, potentially stemming from Iran’s phishing efforts, in an echo of Russia’s 2016 targeting of Hillary Clinton’s campaign. None of them have moved to publish stories related to the documents.

John Hultquist, with Google-owned cybersecurity firm Mandiant, told Wired’s Andy Greenberg that what looks initially like spying or political interference by Iran can easily escalate to sabotage and that both parties are equal targets. He also said that current thinking about threat vectors may need to expand.

“It’s not just a Russia problem anymore. It’s broader than that,” Hultquist said. “There are multiple teams in play. And we have to keep an eye out for all of them.”



Almost unfixable “Sinkclose” bug affects hundreds of millions of AMD chips

Deep insecurity —

Worst-case scenario: “You basically have to throw your computer away.”

Security flaws in your computer’s firmware, the deep-seated code that loads first when you turn the machine on and controls even how its operating system boots up, have long been a target for hackers looking for a stealthy foothold. But only rarely does that kind of vulnerability appear not in the firmware of any particular computer maker, but in the chips found across hundreds of millions of PCs and servers. Now security researchers have found one such flaw that has persisted in AMD processors for decades, and that would allow malware to burrow deep enough into a computer’s memory that, in many cases, it may be easier to discard a machine than to disinfect it.

At the Defcon hacker conference, Enrique Nissim and Krzysztof Okupski, researchers from the security firm IOActive, plan to present a vulnerability in AMD chips they’re calling Sinkclose. The flaw would allow hackers to run their own code in one of the most privileged modes of an AMD processor, known as System Management Mode, designed to be reserved only for a specific, protected portion of its firmware. IOActive’s researchers warn that it affects virtually all AMD chips dating back to 2006, or possibly even earlier.

Nissim and Okupski note that exploiting the bug would require hackers to already have obtained relatively deep access to an AMD-based PC or server, but that the Sinkclose flaw would then allow them to plant their malicious code far deeper still. In fact, for any machine with one of the vulnerable AMD chips, the IOActive researchers warn that an attacker could infect the computer with malware known as a “bootkit” that evades antivirus tools and is potentially invisible to the operating system, while offering a hacker full access to tamper with the machine and surveil its activity. For systems with certain faulty configurations in how a computer maker implemented AMD’s security feature known as Platform Secure Boot—which the researchers warn encompasses the large majority of the systems they tested—a malware infection installed via Sinkclose could be harder yet to detect or remediate, they say, surviving even a reinstallation of the operating system.

“Imagine nation-state hackers or whoever wants to persist on your system. Even if you wipe your drive clean, it’s still going to be there,” says Okupski. “It’s going to be nearly undetectable and nearly unpatchable.” Only opening a computer’s case, physically connecting directly to a certain portion of its memory chips with a hardware-based programming tool known as an SPI flash programmer, and meticulously scouring the memory would allow the malware to be removed, Okupski says.

Nissim sums up that worst-case scenario in more practical terms: “You basically have to throw your computer away.”

In a statement shared with WIRED, AMD acknowledged IOActive’s findings, thanked the researchers for their work, and noted that it has “released mitigation options for its AMD EPYC datacenter products and AMD Ryzen PC products, with mitigations for AMD embedded products coming soon.” (The term “embedded,” in this case, refers to AMD chips found in systems such as industrial devices and cars.) For its EPYC processors designed for use in data-center servers, specifically, the company noted that it released patches earlier this year. AMD declined to answer questions in advance about how it intends to fix the Sinkclose vulnerability, or for exactly which devices and when, but it pointed to a full list of affected products that can be found on its website’s security bulletin page.



512-bit RSA key in home energy system gives control of “virtual power plant”


When Ryan Castellucci recently acquired solar panels and a battery storage system for their home just outside of London, they were drawn to the ability to use an open source dashboard to monitor and control the flow of electricity being generated. Instead, they gained much, much more—some 200 megawatts of programmable capacity to charge or discharge to the grid at will. That’s enough energy to power roughly 40,000 homes.

Castellucci, whose pronouns are they/them, acquired this remarkable control after gaining access to the administrative account for GivEnergy, the UK-based energy management provider that supplied the systems. In addition to control over an estimated 60,000 installed systems, the admin account—which amounts to root control of the company’s cloud-connected products—also made it possible for them to enumerate the names, email addresses, usernames, phone numbers, and addresses of all other GivEnergy customers (something the researcher didn’t actually do).

“My plan is to set up Home Assistant and integrate it with that, but in the meantime, I decided to let it talk to the cloud,” Castellucci wrote Thursday, referring to the recently installed gear. “I set up some scheduled charging, then started experimenting with the API. The next evening, I had control over a virtual power plant comprised of tens of thousands of grid connected batteries.”

Still broken after all these years

The cause of the authentication bypass Castellucci discovered was a programming interface that was protected by an RSA cryptographic key of just 512 bits. The key signs authentication tokens and is the rough equivalent of a master key. The small key size allowed Castellucci to factor the private key underpinning the entire API. The factoring required $70 in cloud computing costs and less than 24 hours. GivEnergy introduced a fix within 24 hours of Castellucci privately disclosing the weakness.

The first publicly known instance of 512-bit RSA being factored came in 1999, by an international team of more than a dozen researchers. The feat took a supercomputer and hundreds of other computers seven months to carry out. By 2009, hobbyists spent about three weeks factoring 13 512-bit keys protecting firmware in Texas Instruments calculators from being copied. In 2015, researchers demonstrated factoring as a service, a method that used Amazon cloud computing, cost $75, and took about four hours. As processing power has increased, the resources required to factor keys have become ever smaller.

It’s tempting to fault GivEnergy engineers for pinning the security of its infrastructure on a key that’s trivial to break. Castellucci, however, said the responsibility is better assigned to the makers of code libraries developers rely on to implement complex cryptographic processes.

“Expecting developers to know that 512 bit RSA is insecure clearly doesn’t work,” the security researcher wrote. “They’re not cryptographers. This is not their job. The failure wasn’t that someone used 512 bit RSA. It was that a library they were relying on let them.”

Castellucci noted that OpenSSL, the most widely used cryptographic code library, still offers the option of using 512-bit keys. So does the Go crypto library. Coincidentally, the Python cryptography library removed the option only a few weeks ago (the commit for the change was made in January).
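
The practical lesson Castellucci draws is that key-size floors belong in the libraries, but an application can also enforce one itself at the point where it loads a verification key. Below is a minimal Python sketch using the cryptography package that refuses to trust an RSA public key smaller than 2048 bits; the function name and threshold are illustrative assumptions, not GivEnergy’s code.

    # Minimal sketch: enforce an RSA key-size floor before trusting a verification key.
    # The 2048-bit minimum and function name are illustrative, not GivEnergy's code.
    from cryptography.hazmat.primitives.serialization import load_pem_public_key
    from cryptography.hazmat.primitives.asymmetric import rsa

    MIN_RSA_BITS = 2048  # 512-bit (and even 1024-bit) RSA is within reach of cheap factoring


    def load_trusted_rsa_key(pem_bytes: bytes) -> rsa.RSAPublicKey:
        key = load_pem_public_key(pem_bytes)
        if not isinstance(key, rsa.RSAPublicKey):
            raise ValueError("expected an RSA public key")
        if key.key_size < MIN_RSA_BITS:
            raise ValueError(f"RSA key is only {key.key_size} bits; refusing to use it")
        return key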

In an email, a GivEnergy representative reinforced Castellucci’s assessment, writing:

In this case, the problematic encryption approach was picked up via a 3rd party library many years ago, when we were a tiny startup company with only 2, fairly junior software developers & limited experience. Their assumption at the time was that because this encryption was available within the library, it was safe to use. This approach was passed through the intervening years and this part of the codebase was not changed significantly since implementation (so hadn’t passed through the review of the more experienced team we now have in place).



Nashville man arrested for running “laptop farm” to get jobs for North Koreans

HOW TO LAND A SIX-FIGURE SALARY —

Laptop farm gave the impression North Korean nationals were working from the US.


Federal authorities have arrested a Nashville man on charges he hosted laptops at his residences in a scheme to deceive US companies into hiring foreign remote IT workers who funneled hundreds of thousands of dollars in income to fund North Korea’s weapons program.

The scheme, federal prosecutors said, worked by getting US companies to unwittingly hire North Korean nationals, who used the stolen identity of a Georgia man to appear to be a US citizen. Under sanctions issued by the federal government, US employers are strictly forbidden from hiring citizens of North Korea. Once the North Korean nationals were hired, the employers sent company-issued laptops to Matthew Isaac Knoot, 38, of Nashville, Tennessee, the prosecutors said in court papers filed in the US District Court of the Middle District of Tennessee. The court documents also said a foreign national with the alias Yang Di was involved in the conspiracy.

The prosecutors wrote:

As part of the conspiracy, Knoot received and hosted laptop computers issued by US companies to Andrew M. at Knoot’s Nashville, Tennessee residences for the purposes of deceiving the companies into believing that Andrew M. was located in the United States. Following receipt of the laptops and without authorization, Knoot logged on to the laptops, downloaded and installed remote desktop applications, and accessed without authorization the victim companies’ networks. The remote desktop applications enabled DI to work from locations outside the United states, in particular, China, while appearing to the victim companies that Andre M. was working from Knoot’s residences. In exchange, Knoot charged Di monthly fees for his services, including flat rates for each hosted laptop and a percentage of Di’s salary for IT work, enriching himself off the scheme.

The arrest comes two weeks after security-training company KnowBe4 said it unknowingly hired a North Korean national using a fake identity to appear as someone eligible to fill a position for a software engineer for an internal IT AI team. KnowBe4’s security team soon became suspicious of the new hire after detecting “anomalous activity,” including manipulating session history files, transferring potentially harmful files, and executing unauthorized software.

The North Korean national was hired even after KnowBe4 conducted background checks, verified references, and conducted four video interviews while he was an applicant. The fake applicant was able to stymie those checks by using a stolen identity and a photo that was altered with AI tools to create a fake profile picture and mimic the face during video conference calls.

In May federal prosecutors charged an Arizona woman for allegedly raising $6.8 million in a similar scheme to fund the weapons program. The defendant in that case, Christina Marie Chapman, 49, of Litchfield Park, Arizona, and co-conspirators compromised the identities of more than 60 people living in the US and used their personal information to get North Koreans IT jobs across more than 300 US companies.

The FBI and Departments of State and Treasury issued a May 2022 advisory alerting the international community, private sector, and public of a campaign underway to land North Korean nationals IT jobs in violation of many countries’ laws. US and South Korean officials issued updated guidance in October 2023 and again in May 2024. The advisories include signs that may indicate North Korea IT worker fraud and the use of US-based laptop farms.

The North Korean IT workers using Knoot’s laptop farm generated revenue of more than $250,000 each between July 2022 and August 2023. Much of the funds were then funneled to North Korea’s weapons program, which includes weapons of mass destruction, prosecutors said.

Knoot faces charges, including wire fraud, intentional damage to protected computers, aggravated identity theft, and conspiracy to cause the unlawful employment of aliens. If found guilty, he faces a maximum of 20 years in prison.



It’s not worth paying to be removed from people-finder sites, study says

Better than nothing but not by enough —

The best removal rate was less than 70%, and that didn’t beat manual opt-outs.

For a true representation of the people-search industry, a couple of these folks should have lanyards that connect them by the pockets. (Credit: Getty Images)

If you’ve searched your name online in the last few years, you know what’s out there, and it’s bad. Alternatively, you’ve seen the lowest-common-denominator ads begging you to search out people from your past to see what crimes are on their record. People-search sites are a gross loophole in the public records system, and it doesn’t feel like there’s much you can do about it.

Not that some firms haven’t promised to try. Do they work? Not really, Consumer Reports (CR) suggests in a recent study.

“[O]ur study shows that many of these services fall short of providing the kind of help and performance you’d expect, especially at the price levels some of them are charging,” said Yael Grauer, program manager for CR, in a statement.

Consumer Reports’ study asked 32 volunteers for permission to try to delete their personal data from 13 people-search sites, using seven services over four months. The services, including DeleteMe, Reputation Defender from Norton, and Confidently, were also compared to manual opt-outs, i.e., following the tucked-away links to pull down that data on each people-search site. CR took volunteers from California, where the California Consumer Privacy Act should theoretically make it mandatory for brokers to respond to opt-out requests, and from New York, which has no such law, to compare results.

Table from Consumer Reports’ study of people-search removal services, showing effective removal rates over time for each service.

Finding a total of 332 profiles containing identifying information on those sites, Consumer Reports found that only 117 profiles, or 35 percent, were removed within four months using all the services. The services varied in efficacy, with EasyOptOuts notably performing the second-best at a 65 percent removal rate after four months. But if your goal is to entirely remove others’ ability to find out about you, no service Consumer Reports tested truly gets you there.

Manual opt-outs were the most effective removal method, at 70 percent removed within one week, which is both a higher elimination rate and quicker turn-around than all the automated services.

The study noted close ties between the people-search sites and the services that purport to clean them. Removing one volunteer’s data from ClustrMaps resulted in a page with a suggested “Next step”: signing up for privacy protection service OneRep. Firefox-maker Mozilla dropped OneRep as a service provider for its Mozilla Monitor Plus privacy bundle after reporting by Brian Krebs found that OneRep’s CEO had notable ties to the people-search industry.

In releasing this study, CR also advocates for laws at the federal and state level, like California’s Delete Act, that would make people-search removal far easier than manually scouring the web or paying for incomplete monitoring.

CR’s study cites CheckPeople, PublicDataUSA, and Intelius as the least responsive businesses in one of the least responsive industries, while noting that PeopleFinders, ClustrMaps, and ThatsThem deserve some very tiny, nearly inaudible recognition for complying with opt-out requests (our words, not theirs).



Who are the two major hackers Russia just received in a prisoner swap?

friends in high places —

Both men committed major financial crimes—and had powerful friends.


As part of today’s blockbuster prisoner swap between the US and Russia, which freed the journalist Evan Gershkovich and several Russian opposition figures, Russia received in return a motley collection of serious criminals, including an assassin who had executed an enemy of the Russian state in the middle of Berlin.

But the Russians also got two hackers, Vladislav Klyushin and Roman Seleznev, each of whom had been convicted of major financial crimes in the US. The US government said that Klyushin “stands convicted of the most significant hacking and trading scheme in American history, and one of the largest insider trading schemes ever prosecuted.” As for Seleznev, federal prosecutors said that he has “harmed more victims and caused more financial loss than perhaps any other defendant that has appeared before the court.”

What sort of hacker do you have to be to attract the interest of the Russian state in prisoner swaps like these? Clearly, it helps to have hacked widely and caused major damage to Russia’s enemies. By bringing these two men home, Russian leadership is sending a clear message to domestic hackers: We’ve got your back.

But it also helps to have political connections. To learn more about both men and their exploits, we read through court documents, letters, and government filings to shed a little more light on their crimes, connections, and family backgrounds.

Vladislav Klyushin

In court filings, Vladislav Klyushin claimed to be a stand-up guy, the kind of person who paid for acquaintances’ medical bills and local monastery repairs. He showed, various letters from friends suggested, “extraordinary compassion, generosity, and civic and charitable commitment.”

According to the US government, though, Klyushin made tens of millions of dollars betting for and against (“shorting”) US companies by using hacked, nonpublic information to make stock trades. He was arrested in 2021 after arriving in Switzerland on a private jet but before he could get into the helicopter that would have taken him to a planned Alps ski vacation.

Klyushin never met his father, he said, a man who drank “excessively” and then was killed during a car theft gone bad when Klyushin was 14. Klyushin’s mother was only 19 when she had him, and the family “occasionally had limited food and clothing.” Klyushin tried to help out by joining the workforce at 13, but he managed to graduate high school, college, and even graduate school, ending up with a doctorate.

After various jobs, including a stint at the Moscow State Linguistic University, Klyushin took a job at M-13, a Moscow IT company that did penetration testing and “Advanced Persistent Threat emulation”—that is, M-13 could be hired to act just like a group of hackers, probing corporate or government cybersecurity. Oddly enough for an infosec company, M-13 also offered investment advice; give them your money and fantastic returns were promised, with M-13 keeping 60 percent of any profits it made.

This was not mere puffery, either. According to the US government, the M-13 team “had an improbable win rate of 68 percent” on its stock trades, and it “generated phenomenal, eight-figure returns,” turning $9 million into $100 million (“a return of more than 900 percent during a period in which the broader stock market returned just over 25 percent,” said the government).

But Klyushin and his associates were not stock-picking wizards. Instead, they had begun hacking Donnelly Financial and Toppan Merrill, two “filing agents” that many large companies use to submit quarterly and annual earnings reports to the Securities and Exchange Commission. These reports were uploaded to the filing agents’ systems several days before their public release. All the M-13 team had to do was liberate the files early, read through them, and buy up stocks of companies that had overperformed while shorting stocks of companies that had underperformed. When the reports went public a few days later and the markets responded to them, the M-13 team made huge returns. Klyushin himself earned several tens of millions of dollars between 2018 and 2020.

To avoid consequences for this flagrantly illegal behavior, all Klyushin had to do was stay in Russia—or, at least, not visit or transit through a country that might extradite him to the US—and he could keep buying up yachts, cars, and real estate. That’s because Russia—along with China and Iran, the largest three sources of hackers who attack US targets—doesn’t do much to stop attacks directed against US interests. As the US government notes, none of these governments “respond to grand jury subpoenas and rarely if ever provide the kinds of forensic information that helps to identify cybercriminals. Nor do they extradite their nationals, leaving the government to rely on the chance that an indicted defendant will travel.”

But when you have tens of millions of dollars, you often want to spend it abroad, so Klyushin did travel—and got nabbed upon his arrival in Switzerland. He was extradited to the US in 2021, was found guilty at trial, and was sentenced to nine years in prison and the forfeiture of $34 million. It is unclear if the US government was able to get its hands on any of that money, which was stashed in bank accounts around the world.

Klyushin’s fellow conspirators have wisely stayed in Russia, so with his release as part of today’s prisoner swap, all are likely to enjoy their ill-gotten gains without further consequence. One of Klyushin’s colleagues at M-13, Ivan Ermakov, is said to be a “former Russian military intelligence officer” who used to run disinformation programs “targeting international anti-doping agencies, sporting federations, and anti-doping officials.”



Hackers exploit VMware vulnerability that gives them hypervisor admin

AUTHENTICATION NOT REQUIRED —

Create new group called “ESX Admins” and ESXi automatically gives it admin rights.


Microsoft is urging users of VMware’s ESXi hypervisor to take immediate action to ward off ongoing attacks by ransomware groups that give them full administrative control of the servers the product runs on.

The vulnerability, tracked as CVE-2024-37085, allows attackers who have already gained limited system rights on a targeted server to gain full administrative control of the ESXi hypervisor. Attackers affiliated with multiple ransomware syndicates—including Storm-0506, Storm-1175, Octo Tempest, and Manatee Tempest—have been exploiting the flaw for months in numerous post-compromise attacks, meaning after the limited access has already been gained through other means.

Admin rights assigned by default

Full administrative control of the hypervisor gives attackers various capabilities, including encrypting the file system and taking down the servers they host. The hypervisor control can also allow attackers to access hosted virtual machines to either exfiltrate data or expand their foothold inside a network. Microsoft discovered the vulnerability under exploit in the normal course of investigating the attacks and reported it to VMware. VMware parent company Broadcom patched the vulnerability on Thursday.

“Microsoft security researchers identified a new post-compromise technique utilized by ransomware operators like Storm-0506, Storm-1175, Octo Tempest, and Manatee Tempest in numerous attacks,” members of the Microsoft Threat Intelligence team wrote Monday. “In several cases, the use of this technique has led to Akira and Black Basta ransomware deployments.”

The post went on to document an astonishing discovery: escalating hypervisor privileges on ESXi to unrestricted admin was as simple as creating a new domain group named “ESX Admins.” From then on, any user assigned to the domain—including newly created ones—automatically became admin, with no authentication necessary. As the Microsoft post explained:

Further analysis of the vulnerability revealed that VMware ESXi hypervisors joined to an Active Directory domain consider any member of a domain group named “ESX Admins” to have full administrative access by default. This group is not a built-in group in Active Directory and does not exist by default. ESXi hypervisors do not validate that such a group exists when the server is joined to a domain and still treats any members of a group with this name with full administrative access, even if the group did not originally exist. Additionally, the membership in the group is determined by name and not by security identifier (SID).

Creating the new domain group can be accomplished with just two commands:

  • net group “ESX Admins” /domain /add
  • net group “ESX Admins” username /domain /add

The researchers said that over the past year, ransomware actors have increasingly targeted ESXi hypervisors in attacks that allow them to mass encrypt data with only a “few clicks” required. Encrypting the hypervisor file system also encrypts all virtual machines hosted on it. The researchers also said that many security products have limited visibility into and little protection of the ESXi hypervisor.

The ease of exploitation, coupled with the medium severity rating VMware assigned to the vulnerability, a 6.8 out of a possible 10, prompted criticism from some experienced security professionals.

ESXi is a Type 1 hypervisor, also known as a bare-metal hypervisor, meaning it’s an operating system unto itself that’s installed directly on top of a physical server. Unlike Type 2 hypervisors, Type 1 hypervisors don’t run on top of an operating system such as Windows or Linux. Guest operating systems then run on top. Taking control of the ESXi hypervisor gives attackers enormous power.

The Microsoft researchers described one attack they observed by the Storm-0506 threat group to install ransomware known as Black Basta. As intermediate steps, Storm-0506 installed malware known as Qakbot and exploited a previously fixed Windows vulnerability to facilitate the installation of two hacking tools, one known as Cobalt Strike and the other Mimikatz. The researchers wrote:

Earlier this year, an engineering firm in North America was affected by a Black Basta ransomware deployment by Storm-0506. During this attack, the threat actor used the CVE-2024-37085 vulnerability to gain elevated privileges to the ESXi hypervisors within the organization.

The threat actor gained initial access to the organization via Qakbot infection, followed by the exploitation of a Windows CLFS vulnerability (CVE-2023-28252) to elevate their privileges on affected devices. The threat actor then used Cobalt Strike and Pypykatz (a Python version of Mimikatz) to steal the credentials of two domain administrators and to move laterally to four domain controllers.

On the compromised domain controllers, the threat actor installed persistence mechanisms using custom tools and a SystemBC implant. The actor was also observed attempting to brute force Remote Desktop Protocol (RDP) connections to multiple devices as another method for lateral movement, and then again installing Cobalt Strike and SystemBC. The threat actor then tried to tamper with Microsoft Defender Antivirus using various tools to avoid detection.

Microsoft observed that the threat actor created the “ESX Admins” group in the domain and added a new user account to it. Following these actions, Microsoft observed that this attack resulted in encrypting of the ESXi file system and losing functionality of the hosted virtual machines on the ESXi hypervisor. The actor was also observed to use PsExec to encrypt devices that are not hosted on the ESXi hypervisor. Microsoft Defender Antivirus and automatic attack disruption in Microsoft Defender for Endpoint were able to stop these encryption attempts in devices that had the unified agent for Defender for Endpoint installed.

The attack chain used by Storm-0506. (Credit: Microsoft)

Anyone with administrative responsibility for ESXi hypervisors should prioritize investigating and patching this vulnerability. The Microsoft post provides several methods for identifying suspicious modifications to the ESX Admins group or other potential signs of this vulnerability being exploited.
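
Beyond the methods in the Microsoft post, one low-effort audit is simply to query Active Directory for any group named “ESX Admins” and review when it was created and who belongs to it. The sketch below uses the Python ldap3 package; the server name, base DN, and credentials are placeholders you would replace for your own domain, and this is an illustrative audit script, not Microsoft’s or VMware’s tooling.

    # Hedged sketch: look for an "ESX Admins" group in Active Directory and list its members.
    # Server, base DN, and credentials below are placeholders, not real values.
    from ldap3 import ALL, NTLM, Connection, Server

    server = Server("dc01.corp.example.com", get_info=ALL)
    conn = Connection(
        server,
        user="CORP\\audit-reader",
        password="replace-me",
        authentication=NTLM,
        auto_bind=True,
    )

    conn.search(
        search_base="DC=corp,DC=example,DC=com",
        search_filter="(&(objectClass=group)(cn=ESX Admins))",
        attributes=["member", "whenCreated", "whenChanged"],
    )

    if not conn.entries:
        print("No 'ESX Admins' group found.")
    for entry in conn.entries:
        print(f"Group created {entry.whenCreated}, last changed {entry.whenChanged}")
        for member in entry.member:
            print(f"  member: {member}")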



97% of CrowdStrike systems are back online; Microsoft suggests Windows changes

falcon punch —

Kernel access gives security software a lot of power, but not without problems.

A bad update to CrowdStrike’s Falcon security software crashed millions of Windows PCs last week. (Credit: CrowdStrike)

CrowdStrike CEO George Kurtz said Thursday that 97 percent of all Windows systems running its Falcon sensor software were back online, a week after an update-related outage to the corporate security software delayed flights and took down emergency response systems, among many other disruptions. The update, which caused Windows PCs to throw the dreaded Blue Screen of Death and reboot, affected about 8.5 million systems by Microsoft’s count, leaving roughly 250,000 that still need to be brought back online.

Microsoft VP John Cable said in a blog post that the company has “engaged over 5,000 support engineers working 24×7” to help clean up the mess created by CrowdStrike’s update and hinted at Windows changes that could help—if they don’t run afoul of regulators, anyway.

“This incident shows clearly that Windows must prioritize change and innovation in the area of end-to-end resilience,” wrote Cable. “These improvements must go hand in hand with ongoing improvements in security and be in close cooperation with our many partners, who also care deeply about the security of the Windows ecosystem.”

Cable pointed to VBS enclaves and Azure Attestation as examples of products that could keep Windows secure without requiring kernel-level access, as most Windows-based security products (including CrowdStrike’s Falcon sensor) do now. But he stopped short of outlining what specific changes might be made to Windows, saying only that Microsoft would continue to “harden our platform, and do even more to improve the resiliency of the Windows ecosystem, working openly and collaboratively with the broad security community.”

When running in kernel mode rather than user mode, security software has full access to a system’s hardware and software, which makes it more powerful and flexible; this also means that a bad update like CrowdStrike’s can cause a lot more problems.

Recent versions of macOS have deprecated third-party kernel extensions for exactly this reason, one explanation for why Macs weren’t taken down by the CrowdStrike update. But past efforts by Microsoft to lock third-party security companies out of the Windows kernel—most recently in the Windows Vista era—have been met with pushback from European Commission regulators. That level of skepticism is warranted, given Microsoft’s past (and continuing) record of using Windows’ market position to push its own products and services. Any present-day attempt to restrict third-party vendors’ access to the Windows kernel would be likely to draw similar scrutiny.

Microsoft has also had plenty of its own security problems to deal with recently, to the point that it has promised to restructure the company to make security more of a focus.

CrowdStrike’s aftermath

CrowdStrike has made its own promises in the wake of the outage, including more thorough testing of updates and a phased-rollout system that could prevent a bad update file from causing quite as much trouble as the one last week did. The company’s initial incident report pointed to a lapse in its testing procedures as the cause of the problem.

Meanwhile, recovery continues. Some systems could be fixed simply by rebooting, though in some cases as many as 15 reboots were needed, since each reboot gives the system a chance to grab a corrected update file before it crashes again. For the rest, IT admins were left to either restore them from backups or delete the bad update file manually. Microsoft published a bootable tool that could help automate the process of deleting that file, but it still required laying hands on every single affected Windows install, whether on a virtual machine or a physical system.
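
For context, the manual fix that circulated widely amounted to booting the affected machine into Safe Mode or the Windows Recovery Environment and deleting the faulty channel file from CrowdStrike’s driver directory. The Python sketch below illustrates that step; the path and filename pattern follow CrowdStrike’s public guidance as reported at the time, but treat this as an illustration and rely on current vendor instructions (or Microsoft’s bootable tool) for actual recovery.

    # Illustrative sketch of the widely publicized manual workaround: after booting into
    # Safe Mode / WinRE, delete the faulty Falcon channel file. Path and pattern follow
    # CrowdStrike's public guidance at the time; verify against current vendor advice.
    import glob
    import os

    PATTERN = r"C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"

    def remove_bad_channel_files(pattern: str = PATTERN) -> None:
        matches = glob.glob(pattern)
        if not matches:
            print("No matching channel files found; nothing to do.")
            return
        for path in matches:
            print(f"Deleting {path}")
            os.remove(path)

    if __name__ == "__main__":
        remove_bad_channel_files()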

And not all of CrowdStrike’s remediation solutions have been well-received. The company sent out $10 UberEats promo codes to cover some of its partners’ “next cup of coffee or late night snack,” which occasioned some eye-rolling on social media sites (the code was also briefly unusable because Uber flagged it as fraudulent, according to a CrowdStrike representative). For context, analytics company Parametrix Insurance estimated the cost of the outage to Fortune 500 companies somewhere in the realm of $5.4 billion.



At the Olympics, AI is watching you

“It’s the eyes of the police multiplied” —

New system foreshadows a future where there are too many CCTV cameras for humans to physically watch.

Police observe the Eiffel Tower from Trocadero ahead of the Paris 2024 Olympic Games on July 22, 2024.

On the eve of the Olympics opening ceremony, Paris is a city swamped in security. Forty thousand barriers divide the French capital. Packs of police officers wearing stab vests patrol pretty, cobbled streets. The river Seine is out of bounds to anyone who has not already been vetted and issued a personal QR code. Khaki-clad soldiers, present since the 2015 terrorist attacks, linger near a canal-side boulangerie, wearing berets and clutching large guns to their chests.

French interior minister Gérald Darmanin has spent the past week justifying these measures as vigilance—not overkill. France is facing the “biggest security challenge any country has ever had to organize in a time of peace,” he told reporters on Tuesday. In an interview with weekly newspaper Le Journal du Dimanche, he explained that “potentially dangerous individuals” have been caught applying to work or volunteer at the Olympics, including 257 radical Islamists, 181 members of the far left, and 95 from the far right. Yesterday, he told French news broadcaster BFM that a Russian citizen had been arrested on suspicion of plotting “large scale” acts of “destabilization” during the Games.

Parisians are still grumbling about road closures and bike lanes that abruptly end without warning, while human rights groups are denouncing “unacceptable risks to fundamental rights.” For the Games, this is nothing new. Complaints about dystopian security are almost an Olympics tradition. Previous iterations have been characterized as Lockdown London, Fortress Tokyo, and the “arms race” in Rio. This time, it is the least-visible security measures that have emerged as some of the most controversial. Security measures in Paris have been turbocharged by a new type of AI, as the city enables controversial algorithms to crawl CCTV footage of transport stations looking for threats. The system was first tested in Paris back in March at two Depeche Mode concerts.

For critics and supporters alike, algorithmic oversight of CCTV footage offers a glimpse of the security systems of the future, where there is simply too much surveillance footage for human operators to physically watch. “The software is an extension of the police,” says Noémie Levain, a member of the activist group La Quadrature du Net, which opposes AI surveillance. “It’s the eyes of the police multiplied.”

Near the entrance of the Porte de Pantin metro station, surveillance cameras are bolted to the ceiling, encased in an easily overlooked gray metal box. A small sign is pinned to the wall above the bin, informing anyone willing to stop and read that they are part of a “video surveillance analysis experiment.” RATP, the company that runs the Paris metro, “is likely” to use “automated analysis in real time” of the CCTV images “in which you can appear,” the sign explains to the oblivious passengers rushing past. The experiment, it says, runs until March 2025.

Porte de Pantin is on the edge of the park La Villette, home to the Olympics’ Park of Nations, where fans can eat or drink in pavilions dedicated to 15 different countries. The Metro stop is also one of 46 train and metro stations where the CCTV algorithms will be deployed during the Olympics, according to an announcement by the Prefecture du Paris, a unit of the interior ministry. City representatives did not reply to WIRED’s questions on whether there are plans to use AI surveillance outside the transport network. Under a March 2023 law, algorithms are allowed to search CCTV footage in real-time for eight “events,” including crowd surges, abnormally large groups of people, abandoned objects, weapons, or a person falling to the ground.

“What we’re doing is transforming CCTV cameras into a powerful monitoring tool,” says Matthias Houllier, cofounder of Wintics, one of four French companies that won contracts to have their algorithms deployed at the Olympics. “With thousands of cameras, it’s impossible for police officers [to react to every camera].”



Chrome will now prompt some users to send passwords for suspicious files

SAFE BROWSING —

Google says passwords and files will be deleted shortly after they are deep-scanned.


Google is redesigning Chrome malware detections to include password-protected executable files that users can upload for deep scanning, a change the browser maker says will allow it to detect more malicious threats.

Google has long allowed users to switch on the Enhanced Mode of its Safe Browsing, a Chrome feature that warns users when they’re downloading a file that’s believed to be unsafe, either because of suspicious characteristics or because it’s in a list of known malware. With Enhanced Mode turned on, Google will prompt users to upload suspicious files that its detection engine can’t definitively allow or block. Under the new changes, Google will prompt these users to provide any password needed to open the file.

Beware of password-protected archives

In a post published Wednesday, Jasika Bawa, Lily Chen, and Daniel Rubery of the Chrome Security team wrote:

Not all deep scans can be conducted automatically. A current trend in cookie theft malware distribution is packaging malicious software in an encrypted archive—a .zip, .7z, or .rar file, protected by a password—which hides file contents from Safe Browsing and other antivirus detection scans. In order to combat this evasion technique, we have introduced two protection mechanisms depending on the mode of Safe Browsing selected by the user in Chrome.

Attackers often make the passwords to encrypted archives available in places like the page from which the file was downloaded, or in the download file name. For Enhanced Protection users, downloads of suspicious encrypted archives will now prompt the user to enter the file’s password and send it along with the file to Safe Browsing so that the file can be opened and a deep scan may be performed. Uploaded files and file passwords are deleted a short time after they’re scanned, and all collected data is only used by Safe Browsing to provide better download protections.

Enter a file password to send an encrypted file for a malware scan. (Credit: Google)

For those who use Standard Protection mode which is the default in Chrome, we still wanted to be able to provide some level of protection. In Standard Protection mode, downloading a suspicious encrypted archive will also trigger a prompt to enter the file’s password, but in this case, both the file and the password stay on the local device and only the metadata of the archive contents are checked with Safe Browsing. As such, in this mode, users are still protected as long as Safe Browsing had previously seen and categorized the malware.
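
The distinction Google is drawing rests on the fact that a conventionally password-protected .zip encrypts file contents but not the archive’s table of contents, so entry names and sizes can be inspected, and compared against known indicators, without the password. The short Python sketch below, using only the standard library, shows that metadata-only view; note that .7z and .rar archives can optionally encrypt their file listings as well, which is part of the evasion the password-assisted deep scan is meant to counter.

    # Sketch: a classic password-protected .zip still exposes entry names and sizes,
    # which is roughly the "metadata" Safe Browsing can check in Standard Protection mode.
    import zipfile

    def list_encrypted_zip(path: str) -> None:
        with zipfile.ZipFile(path) as zf:
            for info in zf.infolist():
                encrypted = bool(info.flag_bits & 0x1)  # bit 0 set => entry data is encrypted
                print(f"{info.filename:40} {info.file_size:>10} bytes  encrypted={encrypted}")

    # Example (hypothetical file name):
    # list_encrypted_zip("suspicious_download.zip")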

Sending Google an executable casually downloaded from a site advertising a screensaver or media player is likely to generate little if any hesitancy. For more sensitive files such as a password-protected work archive, however, there is likely to be more pushback. Despite the assurances the file and password will be deleted promptly, things sometimes go wrong and aren’t discovered for months or years, if at all. People using Chrome with Enhanced Mode turned on should exercise caution.

A second change Google is making to Safe Browsing is a two-tiered notification system when users are downloading files. They are:

  1. Suspicious files, meaning those to which Google’s file-vetting engine has given a lower-confidence verdict, with unknown risk of user harm
  2. Dangerous files, or those with a high confidence verdict that they pose a high risk of user harm

The new tiers are highlighted by iconography, color, and text in an attempt to make it easier for users to easily distinguish between the differing levels of risk. “Overall, these improvements in clarity and consistency have resulted in significant changes in user behavior, including fewer warnings bypassed, warnings heeded more quickly, and all in all, better protection from malicious downloads,” the Google authors wrote.

Previously, Safe Browsing notifications looked like this:

Differentiation between suspicious and dangerous warnings. (Credit: Google)

Over the past year, Chrome hasn’t budged on its continued support of third-party cookies, a decision that allows companies large and small to track users of that browser as they navigate from website to website to website. Google’s alternative to tracking cookies, known as the Privacy Sandbox, has also received low marks from privacy advocates because it tracks user interests based on their browser usage.

That said, Chrome has long been a leader in introducing protections, such as a security sandbox that cordons off risky code so it can’t mingle with sensitive data and operating system functions. Those who stick with Chrome should at a minimum keep Standard Mode Safe Browsing on. Users with the experience required to judiciously choose which files to send to Google should consider turning on Enhanced Mode.



Secure Boot is completely broken on 200+ models from 5 big device makers


In 2012, an industry-wide coalition of hardware and software makers adopted Secure Boot to protect against a long-looming security threat. The threat was the specter of malware that could infect the BIOS, the firmware that loaded the operating system each time a computer booted up. From there, it could remain immune to detection and removal and could load even before the OS and security apps did.

The threat of such BIOS-dwelling malware was largely theoretical and fueled in large part by the creation of ICLord Bioskit by a Chinese researcher in 2007. ICLord was a rootkit, a class of malware that gains and maintains stealthy root access by subverting key protections built into the operating system. The proof of concept demonstrated that such BIOS rootkits weren’t only feasible; they were also powerful. In 2011, the threat became a reality with the discovery of Mebromi, the first-known BIOS rootkit to be used in the wild.

Keenly aware of Mebromi and its potential for a devastating new class of attack, the Secure Boot architects hashed out a complex new way to shore up security in the pre-boot environment. Built into UEFI—the Unified Extensible Firmware Interface that would become the successor to BIOS—Secure Boot used public-key cryptography to block the loading of any code that wasn’t signed with a pre-approved digital signature. To this day, key players in security—among them Microsoft and the US National Security Agency—regard Secure Boot as an important, if not essential, foundation of trust in securing devices in some of the most critical environments, including in industrial control and enterprise networks.

An unlimited Secure Boot bypass

On Thursday, researchers from security firm Binarly revealed that Secure Boot is completely compromised on more than 200 device models sold by Acer, Dell, Gigabyte, Intel, and Supermicro. The cause: a cryptographic key underpinning Secure Boot on those models that was compromised in 2022. In a public GitHub repository committed in December of that year, someone working for multiple US-based device manufacturers published what’s known as a platform key, the cryptographic key that forms the root-of-trust anchor between the hardware device and the firmware that runs on it. The repository was located at https://github.com/raywu-aaeon/Ryzen2000_4000.git, and it’s not clear when it was taken down.

The repository included the private portion of the platform key in encrypted form. The encrypted file, however, was protected by a four-character password, a decision that made it trivial for Binarly, and anyone else with even a passing curiosity, to crack the passcode and retrieve the corresponding plain text. The disclosure of the key went largely unnoticed until January 2023, when Binarly researchers found it while investigating a supply-chain incident. Now that the leak has come to light, security experts say it effectively torpedoes the security assurances offered by Secure Boot.

“It’s a big problem,” said Martin Smolár, a malware analyst specializing in rootkits who reviewed the Binarly research and spoke to me about it. “It’s basically an unlimited Secure Boot bypass for these devices that use this platform key. So until device manufacturers or OEMs provide firmware updates, anyone can basically… execute any malware or untrusted code during system boot. Of course, privileged access is required, but that’s not a problem in many cases.”

Binarly researchers said their scans of firmware images uncovered 215 devices that use the compromised key, which can be identified by the certificate serial number 55:fb:ef:87:81:23:00:84:47:17:0b:b3:cd:87:3a:f4. A table appearing at the end of this article lists each one.
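
If you can extract the platform key certificate from a firmware image or from a running system’s EFI variables (Binarly has published tooling for this), checking it against the leaked key comes down to a serial-number comparison. The Python sketch below, using the cryptography package, illustrates that comparison; the function is an assumption for illustration, not Binarly’s detection code, and it covers only the single serial quoted above, not the additional untrusted keys the researchers found.

    # Hedged sketch: compare an extracted certificate's serial number against the
    # leaked platform key serial Binarly published. Extracting the certificate from
    # a firmware image or efivars is out of scope here.
    from cryptography import x509

    LEAKED_SERIAL = 0x55FBEF878123008447170BB3CD873AF4

    def is_leaked_platform_key(cert_bytes: bytes) -> bool:
        try:
            cert = x509.load_der_x509_certificate(cert_bytes)
        except ValueError:
            cert = x509.load_pem_x509_certificate(cert_bytes)
        return cert.serial_number == LEAKED_SERIAL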

The researchers soon discovered that the compromise of the key was just the beginning of a much bigger supply-chain breakdown that raises serious doubts about the integrity of Secure Boot on more than 300 additional device models from virtually all major device manufacturers. As is the case with the platform key compromised in the 2022 GitHub leak, an additional 21 platform keys contain the strings “DO NOT SHIP” or “DO NOT TRUST.”

Test certificate provided by AMI. (Credit: Binarly)
