Data breaches

critics-scoff-after-microsoft-warns-ai-feature-can-infect-machines-and-pilfer-data

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data


Integration of Copilot Actions into Windows is off by default, but for how long?

Credit: Photographer: Chona Kasinger/Bloomberg via Getty Images

Microsoft’s warning on Tuesday that an experimental AI agent integrated into Windows can infect devices and pilfer sensitive user data has set off a familiar response from security-minded critics: Why is Big Tech so intent on pushing new features before their dangerous behaviors can be fully understood and contained?

As reported Tuesday, Microsoft introduced Copilot Actions, a new set of “experimental agentic features” that, when enabled, perform “everyday tasks like organizing files, scheduling meetings, or sending emails,” and provide “an active digital collaborator that can carry out complex tasks for you to enhance efficiency and productivity.”

Hallucinations and prompt injections apply

The fanfare, however, came with a significant caveat. Microsoft recommended users enable Copilot Actions only “if you understand the security implications outlined.”

The admonition is based on known defects inherent in most large language models, including Copilot, as researchers have repeatedly demonstrated.

One common defect of LLMs causes them to provide factually erroneous and illogical answers, sometimes even to the most basic questions. This propensity for hallucinations, as the behavior has come to be called, means users can’t trust the output of Copilot, Gemini, Claude, or any other AI assistant and instead must independently confirm it.

Another common LLM landmine is the prompt injection, a class of bug that allows hackers to plant malicious instructions in websites, resumes, and emails. LLMs are programmed to follow directions so eagerly that they are unable to discern those in valid user prompts from those contained in untrusted, third-party content created by attackers. As a result, the LLMs give the attackers the same deference as users.

Both flaws can be exploited in attacks that exfiltrate sensitive data, run malicious code, and steal cryptocurrency. So far, these vulnerabilities have proved impossible for developers to prevent and, in many cases, can only be fixed using bug-specific workarounds developed once a vulnerability has been discovered.

That, in turn, led to this whopper of a disclosure in Microsoft’s post from Tuesday:

“As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs,” Microsoft said. “Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.”

Microsoft indicated that only experienced users should enable Copilot Actions, which is currently available only in beta versions of Windows. The company, however, didn’t describe what type of training or experience such users should have or what actions they should take to prevent their devices from being compromised. I asked Microsoft to provide these details, and the company declined.

Like “macros on Marvel superhero crack”

Some security experts questioned the value of the warnings in Tuesday’s post, comparing them to warnings Microsoft has provided for decades about the danger of using macros in Office apps. Despite the long-standing advice, macros have remained among the lowest-hanging fruit for hackers out to surreptitiously install malware on Windows machines. One reason for this is that Microsoft has made macros so central to productivity that many users can’t do without them.

“Microsoft saying ‘don’t enable macros, they’re dangerous’… has never worked well,” independent researcher Kevin Beaumont said. “This is macros on Marvel superhero crack.”

Beaumont, who is regularly hired to respond to major Windows network compromises inside enterprises, also questioned whether Microsoft will provide a means for admins to adequately restrict Copilot Actions on end-user machines or to identify machines in a network that have the feature turned on.

A Microsoft spokesperson said IT admins will be able to enable or disable an agent workspace at both account and device levels, using Intune or other MDM (Mobile Device Management) apps.

Critics voiced other concerns, including the difficulty for even experienced users to detect exploitation attacks targeting the AI agents they’re using.

“I don’t see how users are going to prevent anything of the sort they are referring to, beyond not surfing the web I guess,” researcher Guillaume Rossolini said.

Microsoft has stressed that Copilot Actions is an experimental feature that’s turned off by default. That design was likely chosen to limit its access to users with the experience required to understand its risks. Critics, however, noted that previous experimental features—Copilot, for instance—regularly become default capabilities for all users over time. Once that’s done, users who don’t trust the feature are often required to invest time developing unsupported ways to remove the features.

Sound but lofty goals

Most of Tuesday’s post focused on Microsoft’s overall strategy for securing agentic features in Windows. Goals for such features include:

  • Non-repudiation, meaning all actions and behaviors must be “observable and distinguishable from those taken by a user”
  • Agents must preserve confidentiality when they collect, aggregate, or otherwise utilize user data
  • Agents must receive user approval when accessing user data or taking actions

The goals are sound, but ultimately they depend on users reading the dialog windows that warn of the risks and require careful approval before proceeding. That, in turn, diminishes the value of the protection for many users.

“The usual caveat applies to such mechanisms that rely on users clicking through a permission prompt,” Earlence Fernandes, a University of California, San Diego professor specializing in AI security, told Ars. “Sometimes those users don’t fully understand what is going on, or they might just get habituated and click ‘yes’ all the time. At which point, the security boundary is not really a boundary.”

As demonstrated by the rash of “ClickFix” attacks, many users can be tricked into following extremely dangerous instructions. While more experienced users (including a fair number of Ars commenters) blame the victims falling for such scams, these incidents are inevitable for a host of reasons. In some cases, even careful users are fatigued or under emotional distress and slip up as a result. Other users simply lack the knowledge to make informed decisions.

Microsoft’s warning, one critic said, amounts to little more than a CYA (short for cover your ass), a legal maneuver that attempts to shield a party from liability.

“Microsoft (like the rest of the industry) has no idea how to stop prompt injection or hallucinations, which makes it fundamentally unfit for almost anything serious,” critic Reed Mideke said. “The solution? Shift liability to the user. Just like every LLM chatbot has a ‘oh by the way, if you use this for anything important be sure to verify the answers” disclaimer, never mind that you wouldn’t need the chatbot in the first place if you knew the answer.”

As Mideke indicated, most of the criticisms extend to AI offerings other companies—including Apple, Google, and Meta—are integrating into their products. Frequently, these integrations begin as optional features and eventually become default capabilities whether users want them or not.

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him at here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data Read More »

salesforce-says-it-won’t-pay-extortion-demand-in-1-billion-records-breach

Salesforce says it won’t pay extortion demand in 1 billion records breach

Salesforce says it’s refusing to pay an extortion demand made by a crime syndicate that claims to have stolen roughly 1 billion records from dozens of Salesforce customers.

The threat group making the demands began their campaign in May, when they made voice calls to organizations storing data on the Salesforce platform, Google-owned Mandiant said in June. The English-speaking callers would provide a pretense that necessitated the target connect an attacker-controlled app to their Salesforce portal. Amazingly—but not surprisingly—many of the people who received the calls complied.

It’s becoming a real mess

The threat group behind the campaign is calling itself Scattered LAPSUS$ Hunters, a mashup of three prolific data-extortion actors: Scattered Spider, LAPSuS$, and ShinyHunters. Mandiant, meanwhile, tracks the group as UNC6040, because the researchers so far have been unable to positively identify the connections.

Earlier this month, the group created a website that named Toyota, FedEx, and 37 other Salesforce customers whose data was stolen in the campaign. In all, the number of records recovered, Scattered LAPSUS$ Hunters claimed, was “989.45m/~1B+.” The site called on Salesforce to begin negotiations for a ransom amount “or all your customers [sic] data will be leaked.” The site went on to say: “Nobody else will have to pay us, if you pay, Salesforce, Inc.” The site said the deadline for payment was Friday.

In an email Wednesday, a Salesforce representative said the company is spurning the demand.

Salesforce says it won’t pay extortion demand in 1 billion records breach Read More »

oracle-has-reportedly-suffered-2-separate-breaches-exposing-thousands-of-customers‘-pii

Oracle has reportedly suffered 2 separate breaches exposing thousands of customers‘ PII

Trustwave’s Spider Labs, meanwhile, said the sample of LDAP credentials provided by rose87168 “reveals a substantial amount of sensitive IAM data associated with a user within an Oracle Cloud multi-tenant environment. The data includes personally identifiable information (PII) and administrative role assignments, indicating potential high-value access within the enterprise system.”

Oracle initially denied any such breach had occurred against its cloud infrastructure, telling publications: “There has been no breach of Oracle Cloud. The published credentials are not for the Oracle Cloud. No Oracle Cloud customers experienced a breach or lost any data.”

On Friday, when I asked Oracle for comment, a spokesperson asked if they could provide a statement that couldn’t be attributed to Oracle in any way. After I declined, the spokesperson said Oracle would have no comment.

For the moment, there’s a stand-off between Oracle on the one hand, and researchers and journalists on the other, over whether two serious breaches have exposed sensitive information belonging to its customers. Reporting that Oracle is notifying customers of data compromises in unofficial letterhead sent by outside attorneys is also concerning. This post will be updated if new information becomes available.

Oracle has reportedly suffered 2 separate breaches exposing thousands of customers‘ PII Read More »

google-chrome-may-soon-use-“ai”-to-replace-compromised-passwords

Google Chrome may soon use “AI” to replace compromised passwords

Google’s Chrome browser might soon get a useful security upgrade: detecting passwords used in data breaches and then generating and storing a better replacement. Google’s preliminary copy suggests it’s an “AI innovation,” though exactly how is unclear.

Noted software digger Leopeva64 on X found a new offering in the AI settings of a very early build of Chrome. The option, “Automated password Change” (so, early stages—as to not yet get a copyedit), is described as, “When Chrome finds one of your passwords in a data breach, it can offer to change your password for you when you sign in.”

Chrome already has a feature that warns users if the passwords they enter have been identified in a breach and will prompt them to change it. As noted by Windows Report, the change is that now Google will offer to change it for you on the spot rather than simply prompting you to handle that elsewhere. The password is automatically saved in Google’s Password Manager and “is encrypted and never seen by anyone,” the settings page claims.

If you want to see how this works, you need to download a Canary version of Chrome. In the flags settings (navigate to “chrome://flags” in the address bar), you’ll need to enable two features: “Improved password change service” and “Mark all credential as leaked,” the latter to force the change notification because, presumably, it’s not hooked up to actual leaked password databases yet. Go to almost any non-Google site, enter in any user/password combination to try to log in, and after it fails or you navigate elsewhere, a prompt will ask you to consider changing your password.

Google Chrome may soon use “AI” to replace compromised passwords Read More »

health-care-giant-ascension-says-5.6-million-patients-affected-in-cyberattack

Health care giant Ascension says 5.6 million patients affected in cyberattack

Health care company Ascension lost sensitive data for nearly 5.6 million individuals in a cyberattack that was attributed to a notorious ransomware gang, according to documents filed with the attorney general of Maine.

Ascension owns 140 hospitals and scores of assisted living facilities. In May, the organization was hit with an attack that caused mass disruptions as staff was forced to move to manual processes that caused errors, delayed or lost lab results, and diversions of ambulances to other hospitals. Ascension managed to restore most services by mid-June. At the time, the company said the attackers had stolen protected health information and personally identifiable information for an undisclosed number of people.

Investigation concluded

A filing Ascension made earlier in December revealed that nearly 5.6 million people were affected by the breach. Data stolen depended on the particular person but included individuals’ names and medical information (e.g., medical record numbers, dates of service, types of lab tests, or procedure codes), payment information (e.g., credit card information or bank account numbers), insurance information (e.g., Medicaid/Medicare ID, policy number, or insurance claim), government

identification (e.g., Social Security numbers, tax identification numbers, driver’s license numbers, or passport numbers), and other personal information (such as date of birth or address).

Health care giant Ascension says 5.6 million patients affected in cyberattack Read More »

suspect-arrested-in-snowflake-data-theft-attacks-affecting-millions

Suspect arrested in Snowflake data-theft attacks affecting millions

Attack Path UNC5537 has used in attacks against as many as 165 Snowflake customers.

Credit: Mandiant

Attack Path UNC5537 has used in attacks against as many as 165 Snowflake customers. Credit: Mandiant

None of the affected accounts used multifactor authentication, which requires users to provide a one-time password or additional means of authentication besides a password. After that revelation, Snowflake enforced mandatory MFA for accounts and required that passwords be at least 14 characters long.

Mandiant had identified the threat group behind the breaches as UNC5537. The group has referred to itself ShinyHunters. Snowflake offers its services under a model known as SaaS (software as a service).

“UNC5537 aka Alexander ‘Connor’ Moucka has proven to be one of the most consequential threat actors of 2024,” Mandiant wrote in an emailed statement. “In April 2024, UNC5537 launched a campaign, systematically compromising misconfigured SaaS instances across over a hundred organizations. The operation, which left organizations reeling from significant data loss and extortion attempts, highlighted the alarming scale of harm an individual can cause using off-the-shelf tools.”

Mandiant said a co-conspirator, John Binns, was arrested in June. The status of that case wasn’t immediately known.

Besides Ticketmaster, other customers known to have been breached include AT&T and Spain-based bank Santander. In July, AT&T said that personal information and phone and text message records for roughly 110 million customers were stolen. WIRED later reported that AT&T paid $370,000 in return for a promise the data would be deleted.

Other Snowflake customers reported by various news outlets as breached are Pure Storage, Advance Auto Parts, Los Angeles Unified School District, QuoteWizard/LendingTree, Neiman Marcus, Anheuser-Busch, Allstate, Mitsubishi, and State Farm.

KrebsOnSecurity reported Tuesday that Moucka has been named in multiple charging documents filed by US federal prosecutors. Reporter Brian Krebs said specific charges and allegations are unknown because the cases remain sealed.

Suspect arrested in Snowflake data-theft attacks affecting millions Read More »

at&t:-data-breach-affects-73-million-or-51-million-customers-no,-we-won’t-explain.

AT&T: Data breach affects 73 million or 51 million customers. No, we won’t explain.

“SECURITY IS IMPORTANT TO US” —

When the data was published in 2021, the company said it didn’t belong to its customers.

AT&T: Data breach affects 73 million or 51 million customers. No, we won’t explain.

Getty Images

AT&T is notifying millions of current or former customers that their account data has been compromised and published last month on the dark web. Just how many millions, the company isn’t saying.

In a mandatory filing with the Maine Attorney General’s office, the telecommunications company said 51.2 million account holders were affected. On its corporate website, AT&T put the number at 73 million. In either event, compromised data included one or more of the following: full names, email addresses, mailing addresses, phone numbers, social security numbers, dates of birth, AT&T account numbers, and AT&T passcodes. Personal financial information and call history didn’t appear to be included, AT&T said, and data appeared to be from June 2019 or earlier.

The disclosure on the AT&T site said the 73 million affected customers comprised 7.6 million current customers and 65.4 million former customers. The notification said AT&T has reset the account PINs of all current customers and is notifying current and former customers by mail. AT&T representatives haven’t explained why the letter filed with the Maine AG lists 51.2 million affected and the disclosure on its site lists 73 million.

According to a March 30 article published by TechCrunch, a security researcher said the passcodes were stored in an encrypted format that could easily be decrypted. Bleeping Computer reported in 2021 that more than 70 million records containing AT&T customer data was put up for sale that year for $1 million. AT&T, at the time, told the news site that the amassed data didn’t belong to its customers and that the company’s systems had not been breached.

Last month, after the same data reappeared online, Bleeping Computer and TechCrunch confirmed that the data belonged to AT&T customers, and the company finally acknowledged the connection. AT&T has yet to say how the information was breached or why it took more than two years from the original date of publication to confirm that it belonged to its customers.

Given the length of time the data has been available, the damage that’s likely to result from the most recent publication is likely to be minimal. That said, anyone who is or was an AT&T customer should be on the lookout for scams that attempt to capitalize on the leaked data. AT&T is offering one year of free identity theft protection.

AT&T: Data breach affects 73 million or 51 million customers. No, we won’t explain. Read More »