It’s 2024, and AI is in the air — it’s everywhere. Especially in healthcare.
Whether it’s imaging software that can help doctors quickly identify and diagnose diseases; predictive insights that help optimize the utilization of equipment, beds, and staff; robots to assist surgeons in performing procedures; chatbots to answer questions and offer medical advice; or wearable devices and sensors to enable remote patient monitoring, AI is taking the field of healthcare by storm.
While it can be argued that AI is a tool that brings a lot of good, in the wrong hands its power has a dark underbelly. One questionable use is an insurance company's ability to program AI to assess and deny patient claims. And many companies are taking advantage: First it was Cigna. Then it was UnitedHealthcare. Now it's Humana.
According to a recent article published in Healthcare Dive, Humana is facing a lawsuit over its use of AI in claims processing. The lawsuit alleges that the health insurer used the AI model nH Predict to improperly deny care to Medicare Advantage (MA) patients. nH Predict is the same AI model that got UnitedHealthcare in hot water back in November 2023.
Both Humana and UnitedHealthcare have reason to use the algorithm to speed up the claims process: The two insurance giants account for nearly half of all MA enrollees nationwide, according to a 2023 analysis from KFF. Humana alone provides MA health insurance plans to 5.1 million eligible individuals in the country.
According to the suit, Humana prematurely cut payments for MA beneficiaries’ rehabilitation care. But just how does nH Predict work? The algorithm is designed to predict how long patients will need to stay in skilled nursing facilities. These estimates then determine approvals and denials. Unfortunately, for many of these MA patients, “the algorithm has a high error rate and often contradicts doctors’ recommendations,” according to the Healthcare Dive article.
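To make that mechanism concrete, here is a minimal, purely hypothetical sketch of how a length-of-stay prediction can turn into an automated coverage cutoff. This is not nH Predict's actual code or data; the class, function, and field names are invented for illustration, and the point is only that the model's estimate, rather than the treating physician, ends up setting the limit.

```python
# Hypothetical sketch, not nH Predict's actual logic or API.
# Illustrates how a predicted length of stay can cap coverage automatically.

from dataclasses import dataclass


@dataclass
class Claim:
    patient_id: str
    physician_recommended_days: int  # days of skilled nursing care the doctor ordered
    predicted_days: int              # model's estimate of how long care is "needed"


def decide_coverage(claim: Claim) -> dict:
    """Approve coverage only up to the model's predicted stay.

    Any days the physician recommends beyond the prediction are denied,
    which is the dynamic the lawsuits describe: the estimate, not the
    doctor's recommendation, sets the cutoff.
    """
    covered_days = min(claim.physician_recommended_days, claim.predicted_days)
    denied_days = claim.physician_recommended_days - covered_days
    return {
        "patient_id": claim.patient_id,
        "covered_days": covered_days,
        "denied_days": denied_days,
        "status": "partially denied" if denied_days > 0 else "approved",
    }


if __name__ == "__main__":
    # Example: the doctor orders 40 days of rehab; the model predicts 14.
    print(decide_coverage(Claim("MA-001", physician_recommended_days=40, predicted_days=14)))
    # -> coverage stops at day 14, leaving 26 recommended days denied
```

If the prediction runs low, as the article says it often does, the patient is left to appeal, pay out of pocket, or stop care when coverage ends.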
Historically, insurers hired physicians to review medical claims and approve or deny them based on medical necessity and other criteria, not all of which were clear to patients or physicians. Today, these algorithms provide even less transparency. Patients whose claims are denied are then forced to pay out of pocket or forgo care altogether.
The rising number of lawsuits against insurance companies speaks to how entrenched AI already is in healthcare, and its use is only poised to grow, not dial back. And we should be concerned. Because the less say doctors have in patient care, the more dangerous, dare we say life-threatening, our healthcare system becomes.