healthcare

US prescription market hamstrung for 9 days (so far) by ransomware attack

RX CHAOS —

Patients having trouble getting lifesaving meds have the AlphV crime group to thank.

Nine days after a Russian-speaking ransomware syndicate took down the biggest US health care payment processor, pharmacies, health care providers, and patients were still scrambling to fill prescriptions for medicines, many of which are lifesaving.

On Thursday, UnitedHealth Group accused a notorious ransomware gang known both as AlphV and Black Cat of hacking its subsidiary Optum. Optum provides a nationwide network called Change Healthcare, which allows health care providers to manage customer payments and insurance claims. With no easy way for pharmacies to calculate what costs were covered by insurance companies, many had to turn to alternative services or offline methods.

The most serious incident of its kind

Optum first disclosed on February 21 that its services were down as a result of a “cyber security issue.” Its service has been hamstrung ever since. Shortly before this post went live on Ars, Optum said it had restored Change Healthcare services.

“Working with technology and business partners, we have successfully completed testing with vendors and multiple retail pharmacy partners for the impacted transaction types,” an update said. “As a result, we have enabled this service for all customers effective 1 pm CT, Friday, March 1, 2024.”

AlphV is one of many syndicates that operate under a ransomware-as-a-service model, meaning affiliates do the actual hacking of victims and then use the AlphV ransomware and infrastructure to encrypt files and negotiate a ransom. The parties then share the proceeds.

In December, the FBI and its equivalent in partner countries announced they had seized much of the AlphV infrastructure in a move that was intended to disrupt the group. AlphV promptly asserted it had unseized its site, leading to a tug-of-war between law enforcement and the group. The crippling of Change Healthcare is a clear sign that AlphV continues to pose a threat to critical parts of the US infrastructure.

“The cyberattack against Change Healthcare that began on Feb. 21 is the most serious incident of its kind leveled against a US health care organization,” said Rick Pollack, president and CEO of the American Hospital Association. Citing Change Healthcare data, Pollack said that the service processes 15 billion transactions annually involving eligibility verifications, pharmacy operations, and claims transmittals and payments. “All of these have been disrupted to varying degrees over the past several days and the full impact is still not known.”

Optum estimated that as of Monday, more than 90 percent of roughly 70,000 pharmacies in the US had changed how they processed electronic claims as a result of the outage. The company went on to say that only a small number of patients had been unable to get their prescriptions filled.

The scale and length of the Change Healthcare outage underscore the devastating effects ransomware has on critical infrastructure. Three years ago, members affiliated with a different ransomware group known as Darkside caused a five-day outage of Colonial Pipeline, which delivered roughly 45 percent of the East Coast’s petroleum products, including gasoline, diesel fuel, and jet fuel. The interruption caused fuel shortages that sent airlines, consumers, and filling stations scrambling.

Numerous ransomware groups have also taken down entire hospital networks in outages that in some cases have threatened patient care.

AlphV has been a key contributor to the ransomware menace. The FBI said in December that the group had collected more than $300 million in ransoms. Among the better-known victims of AlphV ransomware were Caesars Entertainment and MGM Resorts, whose breach brought operations in many Las Vegas casinos to a halt. A group made up mostly of teenagers is suspected of orchestrating that attack.

AI cannot be used to deny health care coverage, feds clarify to insurers

On Notice —

CMS worries AI could wrongfully deny care for those on Medicare Advantage plans.

A nursing home resident is pushed along a corridor by a nurse.

Health insurance companies cannot use algorithms or artificial intelligence to determine care or deny coverage to members on Medicare Advantage plans, the Centers for Medicare & Medicaid Services (CMS) clarified in a memo sent to all Medicare Advantage insurers.

The memo—formatted like an FAQ on Medicare Advantage (MA) plan rules—comes just months after patients filed lawsuits claiming that UnitedHealth and Humana have been using a deeply flawed, AI-powered tool to deny care to elderly patients on MA plans. The lawsuits, which seek class-action status, center on the same AI tool, called nH Predict, used by both insurers and developed by NaviHealth, a UnitedHealth subsidiary.

According to the lawsuits, nH Predict produces draconian estimates for how long a patient will need post-acute care in facilities like skilled nursing homes and rehabilitation centers after an acute injury, illness, or event, like a fall or a stroke. And NaviHealth employees face discipline for deviating from the estimates, even though they often don’t match prescribing physicians’ recommendations or Medicare coverage rules. For instance, while MA plans typically provide up to 100 days of covered care in a nursing home after a three-day hospital stay, using nH Predict, patients on UnitedHealth’s MA plan rarely stay in nursing homes for more than 14 days before receiving payment denials, the lawsuits allege.

Specific warning

It’s unclear how nH Predict works exactly, but it reportedly uses a database of 6 million patients to develop its predictions. Still, according to people familiar with the software, it only accounts for a small set of patient factors, not a full look at a patient’s individual circumstances.

This is a clear no-no, according to the CMS’s memo. For coverage decisions, insurers must “base the decision on the individual patient’s circumstances, so an algorithm that determines coverage based on a larger data set instead of the individual patient’s medical history, the physician’s recommendations, or clinical notes would not be compliant,” the CMS wrote.

The CMS then provided a hypothetical that matches the circumstances laid out in the lawsuits, writing:

In an example involving a decision to terminate post-acute care services, an algorithm or software tool can be used to assist providers or MA plans in predicting a potential length of stay, but that prediction alone cannot be used as the basis to terminate post-acute care services.

Instead, the CMS wrote, in order for an insurer to end coverage, the individual patient’s condition must be reassessed, and the denial must be based on coverage criteria that are publicly posted on a website that is not password protected. In addition, insurers who deny care “must supply a specific and detailed explanation why services are either no longer reasonable and necessary or are no longer covered, including a description of the applicable coverage criteria and rules.”

In the lawsuits, patients claimed that when coverage of their physician-recommended care was wrongfully denied, insurers didn’t give them full explanations.

Fidelity

In all, the CMS finds that AI tools can be used by insurers when evaluating coverage—but really only as a check to make sure the insurer is following the rules. An “algorithm or software tool should only be used to ensure fidelity” with coverage criteria, the CMS wrote. And, because “publicly posted coverage criteria are static and unchanging, artificial intelligence cannot be used to shift the coverage criteria over time” or apply hidden coverage criteria.

The CMS sidesteps any debate about what qualifies as artificial intelligence by offering a broad warning about algorithms and artificial intelligence. “There are many overlapping terms used in the context of rapidly developing software tools,” the CMS wrote.

Algorithms can imply a decisional flow chart of a series of if-then statements (i.e., if the patient has a certain diagnosis, they should be able to receive a test), as well as predictive algorithms (predicting the likelihood of a future admission, for example). Artificial intelligence has been defined as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.

The CMS also openly worried that the use of either of these types of tools can reinforce discrimination and biases—which has already happened with racial bias. The CMS warned insurers to ensure any AI tool or algorithm they use “is not perpetuating or exacerbating existing bias, or introducing new biases.”

While the memo overall was an explicit clarification of existing MA rules, the CMS ended by putting insurers on notice that it is increasing its audit activities and “will be monitoring closely whether MA plans are utilizing and applying internal coverage criteria that are not found in Medicare laws.” Non-compliance can result in warning letters, corrective action plans, monetary penalties, and enrollment and marketing sanctions.