Amid paralyzing ransomware attack, feds probe UnitedHealth’s HIPAA compliance

most significant and consequential incident —

UnitedHealth said it will cooperate with the probe as it works to restore services.

Multistory glass-and-brick building with UnitedHealthcare logo on exterior.

As health systems around the US are still grappling with an unprecedented ransomware attack on the country’s largest health care payment processor, the US Department of Health and Human Services is opening an investigation into whether that processor and its parent company, UnitedHealth Group, complied with federal rules to protect private patient data.

The attack targeted Change Healthcare, a unit of UnitedHealth Group (UHG) that provides financial services to tens of thousands of health care providers around the country, including doctors, dentists, hospitals, and pharmacies. According to an antitrust lawsuit brought against UHG by the Department of Justice in 2022, 50 percent of all medical claims in the US pass through Change Healthcare’s electronic data interchange clearinghouse. (The DOJ lost its case to prevent UHG’s acquisition of Change Healthcare and last year abandoned plans for an appeal.)

As Ars reported previously, the attack was disclosed on February 21 by UHG’s subsidiary, Optum, which now runs Change Healthcare. On February 29, UHG accused the notorious Russian-speaking ransomware gang known both as AlphV and BlackCat of being responsible. According to The Washington Post, the attack involved stealing patient data, encrypting company files, and demanding money to unlock them. The result is a paralysis of claims processing and payments, causing hospitals to run out of cash for payroll and services and preventing patients from getting care and prescriptions. Additionally, the attack is believed to have exposed the health data of millions of US patients.

Earlier this month, Rick Pollack, the president and CEO of the American Hospital Association, called the ransomware attack on Change Healthcare “the most significant and consequential incident of its kind against the US health care system in history.”

Now, three weeks into the attack, many health systems are still struggling. On Tuesday, members of the Biden administration met with UHG CEO Andrew Witty and other health industry leaders at the White House to demand they do more to stabilize the situation for health care providers and services and provide financial assistance. Some improvements may be in sight; on Wednesday, UHG posted an update saying that “all major pharmacy and payment systems are up and more than 99 percent of pre-incident claim volume is flowing.”

HIPAA compliance

Still, the data breach leaves big questions about the extent of the damage to patient privacy, and the adequacy of protections moving forward. In an additional development Wednesday, the health department’s Office for Civil Rights (OCR) announced that it is opening an investigation into UHG and Change Healthcare over the incident. It noted that such an investigation was warranted “given the unprecedented magnitude of this cyberattack, and in the best interest of patients and health care providers.”

In a “Dear Colleague” letter dated Wednesday, the OCR explained that the investigation “will focus on whether a breach of protected health information occurred and Change Healthcare’s and UHG’s compliance with the HIPAA Rules.” HIPAA is the Health Insurance Portability and Accountability Act, which establishes privacy and security requirements for protected health information, as well as breach notification requirements.

In a statement to the press, UHG said it would cooperate with the investigation. “Our immediate focus is to restore our systems, protect data and support those whose data may have been impacted,” the statement read. “We are working with law enforcement to investigate the extent of impacted data.”

The Post notes that the federal government does have a history of investigating and penalizing health care organizations for failing to implement adequate safeguards to prevent data breaches. For instance, health insurance provider Anthem paid a $16 million settlement in 2020 over a 2015 data breach that exposed the private data of almost 79 million people. The exposed data included names, Social Security numbers, medical identification numbers, addresses, dates of birth, email addresses, and employment information. The OCR investigation into the breach discovered that the attack began with spear phishing emails that at least one employee of an Anthem subsidiary fell for, opening the door to further intrusions that went undetected between December 2, 2014, and January 27, 2015.

“Unfortunately, Anthem failed to implement appropriate measures for detecting hackers who had gained access to their system to harvest passwords and steal people’s private information,” OCR Director Roger Severino said at the time. “We know that large health care entities are attractive targets for hackers, which is why they are expected to have strong password policies and to monitor and respond to security incidents in a timely fashion or risk enforcement by OCR.”


AI cannot be used to deny health care coverage, feds clarify to insurers

On Notice —

CMS worries AI could wrongfully deny care for those on Medicare Advantage plans.

A nursing home resident is pushed along a corridor by a nurse.


Health insurance companies cannot use algorithms or artificial intelligence to determine care or deny coverage to members on Medicare Advantage plans, the Centers for Medicare & Medicaid Services (CMS) clarified in a memo sent to all Medicare Advantage insurers.

The memo—formatted like an FAQ on Medicare Advantage (MA) plan rules—comes just months after patients filed lawsuits claiming that UnitedHealth and Humana have been using a deeply flawed, AI-powered tool to deny care to elderly patients on MA plans. The lawsuits, which seek class-action status, center on the same AI tool, called nH Predict, used by both insurers and developed by NaviHealth, a UnitedHealth subsidiary.

According to the lawsuits, nH Predict produces draconian estimates for how long a patient will need post-acute care in facilities like skilled nursing homes and rehabilitation centers after an acute injury, illness, or event, like a fall or a stroke. And NaviHealth employees face discipline for deviating from the estimates, even though they often don’t match prescribing physicians’ recommendations or Medicare coverage rules. For instance, while MA plans typically provide up to 100 days of covered care in a nursing home after a three-day hospital stay, using nH Predict, patients on UnitedHealth’s MA plan rarely stay in nursing homes for more than 14 days before receiving payment denials, the lawsuits allege.

Specific warning

It’s unclear exactly how nH Predict works, but it reportedly draws on a database of 6 million patients to develop its predictions. According to people familiar with the software, it accounts for only a small set of patient factors rather than a full assessment of a patient’s individual circumstances.

This is a clear no-no, according to the CMS’s memo. For coverage decisions, insurers must “base the decision on the individual patient’s circumstances, so an algorithm that determines coverage based on a larger data set instead of the individual patient’s medical history, the physician’s recommendations, or clinical notes would not be compliant,” the CMS wrote.

The CMS then provided a hypothetical that matches the circumstances laid out in the lawsuits, writing:

In an example involving a decision to terminate post-acute care services, an algorithm or software tool can be used to assist providers or MA plans in predicting a potential length of stay, but that prediction alone cannot be used as the basis to terminate post-acute care services.

Instead, the CMS wrote, in order for an insurer to end coverage, the individual patient’s condition must be reassessed, and the denial must be based on coverage criteria that are publicly posted on a website that is not password protected. In addition, insurers who deny care “must supply a specific and detailed explanation why services are either no longer reasonable and necessary or are no longer covered, including a description of the applicable coverage criteria and rules.”

In the lawsuits, patients claimed that when coverage of their physician-recommended care was unexpectedly wrongfully denied, insurers didn’t give them full explanations.

Fidelity

In all, the CMS finds that AI tools can be used by insurers when evaluating coverage—but really only as a check to make sure the insurer is following the rules. An algorithm or software tool “should only be used to ensure fidelity” with coverage criteria, the CMS wrote. And, because “publicly posted coverage criteria are static and unchanging, artificial intelligence cannot be used to shift the coverage criteria over time” or apply hidden coverage criteria.

The CMS sidesteps any debate about what qualifies as artificial intelligence by offering a broad warning about algorithms and artificial intelligence. “There are many overlapping terms used in the context of rapidly developing software tools,” the CMS wrote.

Algorithms can imply a decisional flow chart of a series of if-then statements (i.e., if the patient has a certain diagnosis, they should be able to receive a test), as well as predictive algorithms (predicting the likelihood of a future admission, for example). Artificial intelligence has been defined as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
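The distinction the CMS draws can be made concrete with a small sketch. This is illustrative only: the diagnoses, function names, and numbers below are invented, not taken from the memo or any insurer's software.

```python
# Illustrative contrast between the two kinds of tools the CMS memo describes.
# All rule names, diagnoses, and values here are invented examples.

def rule_based_eligibility(diagnosis: str) -> bool:
    """Decisional flow chart: a series of if-then statements.
    If the patient has a certain diagnosis, they qualify for a test."""
    covered_diagnoses = {"diabetes", "hypertension"}
    return diagnosis in covered_diagnoses

def predictive_length_of_stay(similar_patient_days: list[int]) -> float:
    """Predictive algorithm: estimates a future quantity from past cases.
    A population-level estimate like this is what the memo says cannot,
    by itself, justify terminating post-acute care."""
    return sum(similar_patient_days) / len(similar_patient_days)

print(rule_based_eligibility("diabetes"))                  # True
print(round(predictive_length_of_stay([12, 15, 14]), 1))   # 13.7
```

The first function is a deterministic coverage rule that can be publicly posted and audited; the second produces an estimate about an individual from other people's outcomes, which is why the memo treats it so differently.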

The CMS also openly worried that the use of either of these types of tools can reinforce discrimination and biases—which has already happened with racial bias. The CMS warned insurers to ensure any AI tool or algorithm they use “is not perpetuating or exacerbating existing bias, or introducing new biases.”

While the memo overall was an explicit clarification of existing MA rules, the CMS ended by putting insurers on notice that it is increasing its audit activities and “will be monitoring closely whether MA plans are utilizing and applying internal coverage criteria that are not found in Medicare laws.” Non-compliance can result in warning letters, corrective action plans, monetary penalties, and enrollment and marketing sanctions.


Humana also using AI tool with 90% error rate to deny care, lawsuit claims

AI denials —

The AI model, nH Predict, is the focus of another lawsuit against UnitedHealth.

Signage is displayed outside the Humana Inc. office building in Louisville, Kentucky, US, in 2016.


Humana, one of the nation’s largest health insurance providers, is allegedly using an artificial intelligence model with a 90 percent error rate to override doctors’ medical judgment and wrongfully deny care to elderly people on the company’s Medicare Advantage plans.

According to a lawsuit filed Tuesday, Humana’s use of the AI model constitutes a “fraudulent scheme” that leaves elderly beneficiaries with either overwhelming medical debt or without needed care that is covered by their plans. Meanwhile, the insurance behemoth reaps a “financial windfall.”

The lawsuit, filed in the US District Court in western Kentucky, is led by two people who had a Humana Medicare Advantage Plan policy and said they were wrongfully denied needed and covered care, harming their health and finances. The suit seeks class-action status for an unknown number of other beneficiaries nationwide who may be in similar situations. Humana provides Medicare Advantage plans for 5.1 million people in the US.

It is the second lawsuit aimed at an insurer’s use of the AI tool nH Predict, which was developed by NaviHealth to forecast how long patients will need care after a medical injury, illness, or event. In November, the estates of two deceased individuals brought a suit against UnitedHealth—the largest health insurance company in the US—for also allegedly using nH Predict to wrongfully deny care.

Humana did not respond to Ars’ request for comment for this story. UnitedHealth previously said that “the lawsuit has no merit, and we will defend ourselves vigorously.”

AI model

In both cases, the plaintiffs claim that the insurers use the flawed model to pinpoint the exact date to blindly and illegally cut off payments for post-acute care that is covered under Medicare plans—such as stays in skilled nursing facilities and inpatient rehabilitation centers. The AI-powered model comes up with those dates by comparing a patient’s diagnosis, age, living situation, and physical function to similar patients in a database of 6 million patients. In turn, the model spits out a prediction for the patient’s medical needs, length of stay, and discharge date.
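As a purely hypothetical illustration of the mechanism the complaints describe, matching a patient against similar past cases to predict a length of stay can be sketched as a nearest-neighbor average. This is not nH Predict's actual code; every field name, weight, and number below is invented.

```python
# Hypothetical sketch of a "similar patients" length-of-stay predictor.
# NOT nH Predict's real logic; all fields and weights are invented.

from dataclasses import dataclass

@dataclass
class PatientRecord:
    age: int
    mobility_score: int      # invented proxy for "physical function"
    lives_alone: bool        # stands in for "living situation"
    observed_stay_days: int  # outcome recorded for a past patient

def predict_stay(age, mobility_score, lives_alone, history, k=3):
    """Average the recorded stays of the k most similar past patients."""
    def distance(r: PatientRecord) -> float:
        return (abs(r.age - age)
                + abs(r.mobility_score - mobility_score)
                + (0 if r.lives_alone == lives_alone else 5))
    nearest = sorted(history, key=distance)[:k]
    return sum(r.observed_stay_days for r in nearest) / k

history = [
    PatientRecord(84, 2, True, 13),
    PatientRecord(87, 3, True, 15),
    PatientRecord(85, 2, False, 20),
    PatientRecord(60, 8, False, 5),
]
print(predict_stay(86, 2, True, history))  # 16.0
```

The point of the sketch is the plaintiffs' objection in miniature: the prediction is driven entirely by other patients' outcomes and a handful of coarse features, with no input from the individual patient's physician, chart, or actual recovery.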

But, the plaintiffs argue that the model fails to account for the entirety of each patient’s circumstances, their doctors’ recommendations, and the patient’s actual conditions. And they claim the predictions are draconian and inflexible. For example, under Medicare Advantage plans, patients who have a three-day hospital stay are typically entitled to up to 100 days of covered care in a nursing home. But with nH Predict in use, patients rarely stay in a nursing home for more than 14 days before claim denials begin.

Though few people appeal coverage denials generally, of those who have appealed the AI-based denials, over 90 percent have gotten the denial reversed, the lawsuits say.

Still, the insurers continue to use the model, and NaviHealth employees are instructed to hew closely to the AI-based predictions, keeping lengths of post-acute care to within 1 percent of the days estimated by nH Predict. NaviHealth employees who fail to do so face discipline and firing. “Humana banks on the patients’ impaired conditions, lack of knowledge, and lack of resources to appeal the wrongful AI-powered decisions,” the lawsuit filed Tuesday claims.

Plaintiffs’ cases

One of the plaintiffs in Tuesday’s suit is JoAnne Barrows of Minnesota. On November 23, 2021, Barrows, then 86, was admitted to a hospital after falling at home and fracturing her leg. Doctors put her leg in a cast and issued an order not to put any weight on it for six weeks. On November 26, she was moved to a rehabilitation center for her six-week recovery. But, after just two weeks, Humana’s coverage denials began. Barrows and her family appealed the denials, but Humana denied the appeals, declaring that Barrows was fit to return to her home despite being bedridden and using a catheter.

Her family had no choice but to pay out-of-pocket. They tried moving her to a less expensive facility, but she received substandard care there, and her health declined further. Due to the poor quality of care, the family decided to move her home on December 22, even though she was still unable to use her injured leg or go to the bathroom on her own and still had a catheter.

The other plaintiff is Susan Hagood of North Carolina. On September 10, 2022, Hagood was admitted to a hospital with a urinary tract infection, sepsis, and a spinal infection. She stayed in the hospital until October 26, when she was transferred to a skilled nursing facility. Upon her transfer, she had eleven discharge diagnoses, including sepsis, acute kidney failure, kidney stones, nausea and vomiting, a urinary tract infection, swelling in her spine, and a spinal abscess. In the nursing facility, she was in extreme pain and on the maximum allowable dose of the painkiller oxycodone. She also developed pneumonia.

On November 28, she returned to the hospital for an appointment, at which point her blood pressure spiked, and she was sent to the emergency room. There, doctors found that her condition had considerably worsened.

Meanwhile, a day earlier, on November 27, Humana determined that it would deny coverage of part of her stay at the skilled nursing facility, refusing to pay from November 14 to November 28. Humana said Hagood no longer needed the level of care the facility provided and that she should be discharged home. The family paid $24,000 out-of-pocket for her care, and to date, Hagood remains in a skilled nursing facility.

Overall, the patients claim that Humana and UnitedHealth are aware that nH Predict is “highly inaccurate” but use it anyway to avoid paying for covered care and make more profit. The denials are “systematic, illegal, malicious, and oppressive.”

The lawsuit against Humana alleges breach of contract, unfair dealing, unjust enrichment, and bad faith insurance violations in many states. It seeks damages for financial losses and emotional distress, disgorgement and/or restitution, and to have Humana barred from using the AI-based model to deny claims.
