Welcome to the Healthcare Lottery: your doctor orders a critical test, and you assume the insurance you pay for will cover it. Instead, you get a letter saying it has been denied. Who made that decision? Was it a qualified medical professional carefully considering your case? Maybe, maybe not. In many cases, it wasn’t even a person; it was an AI algorithm trained to deny as many claims as possible. And if it was a human? There’s a real chance it was a doctor employed by the payor, with the company’s bottom line in mind, not your wellbeing.
Insurance companies like UnitedHealthcare, Cigna, and Aetna have turned claim denials into a billion-dollar business. Their secret weapon? EviCore, an AI-powered tool designed to check claims against narrow, rigid prior authorization criteria. While payors claim these AI tools are meant to expedite simple approvals and reduce overhead by eliminating the need for human review, the reality is these algorithms do not work seamlessly, particularly for complex cases where a doctor’s expertise should count for more than the software’s checklist. Unfortunately, payors use AI-generated denials as a triage system, knowing that fewer than 1% of denials are ever appealed. Pro tip: when denials are appealed, 44% are overturned and the claim is approved. The model is simple, sinister, and, unfortunately, completely legal.
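To see why this triage model pays off, here is a quick back-of-the-envelope calculation using the figures cited above. The pool of 100,000 denials is a hypothetical illustration, not a reported number.

```python
# Denial "triage" math, using the rates cited above:
# fewer than 1% of denials are appealed, and 44% of appeals succeed.

denials = 100_000          # hypothetical pool of AI-generated denials
appeal_rate = 0.01         # fewer than 1% of denials are appealed
overturn_rate = 0.44       # 44% of appealed denials are approved on appeal

appealed = denials * appeal_rate
overturned = appealed * overturn_rate

print(f"Appealed:   {appealed:,.0f} of {denials:,} denials")    # 1,000
print(f"Overturned: {overturned:,.0f} claims finally approved") # 440

# If every denial were appealed at that same 44% success rate,
# roughly 44,000 of these denials would not survive scrutiny.
potential = denials * overturn_rate
print(f"Potential overturns if all were appealed: {potential:,.0f}")
```

In other words, the vast majority of denials that would not survive an appeal are never challenged, and the insurer keeps the money.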
But as shady as AI denials are, the alternative isn’t much better. Many payors rely on disgraced doctors, physicians with multiple malpractice suits or disciplinary actions on their records, to carry out their denial scheme. These “medical reviewers” often spend just minutes on each case, churning through 10,000+ reviews per year, an average of roughly 38 cases per working day. That’s not a review. That’s a quota. And it creates an enormous administrative burden for providers working on their patients’ behalf to get necessary treatment approved in a timely manner. And, of course, it is patients who pay the price.
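That workload claim is easy to sanity-check. The short sketch below reproduces the arithmetic; the 260-day working year and eight-hour day are standard assumptions of mine, not figures from any payor.

```python
# Sanity check on the reviewer workload: 10,000 reviews a year leaves
# only minutes per case. Working days and hours are assumptions.

reviews_per_year = 10_000
working_days = 260                  # ~52 weeks x 5 days (assumed)
hours_per_day = 8                   # assumed full review day

cases_per_day = reviews_per_year / working_days
minutes_per_case = (hours_per_day * 60) / cases_per_day

print(f"Cases per working day: {cases_per_day:.0f}")     # ~38
print(f"Minutes per case:      {minutes_per_case:.1f}")  # ~12.5
```

Twelve and a half minutes to weigh a patient’s medical record against a treating physician’s judgment is not meaningful review.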
For patients, a denial isn’t just paperwork. It’s pain, delayed treatment, and, in some cases, death. For hospitals, denials are a direct attack on their ability to keep the lights on. Health systems, especially in rural areas, are already struggling to survive, and every denied claim adds strain to facilities that are barely keeping their doors open.
Doctors and hospitals are done playing defense. If insurers want to use AI to deny care, hospitals will use AI to fight back. Health systems are now deploying chatbots and AI-driven appeals systems to automate pushback against insurers’ cost-cutting tactics. Even small practices, which often lack the administrative resources to challenge denials, are now turning to AI-generated letters to force insurers to answer for their decisions.
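As a rough illustration of what those AI-generated appeal letters might look like under the hood, here is a minimal, template-based sketch. Every field name and the letter’s wording are hypothetical, and this is not any vendor’s actual product; a production system would layer an LLM and payor-specific appeal rules on top of a template like this.

```python
# A minimal sketch of programmatic appeal-letter drafting. Illustrative
# only: the Denial fields and letter language are hypothetical.

from dataclasses import dataclass
from datetime import date

@dataclass
class Denial:
    patient: str
    claim_id: str
    service: str
    denial_reason: str
    clinical_justification: str

def draft_appeal(d: Denial) -> str:
    """Render a first-pass appeal letter that demands a written,
    clinically specific explanation for the denial."""
    return f"""\
{date.today():%B %d, %Y}

Re: Appeal of denied claim {d.claim_id} for {d.patient}

To the Medical Review Department:

We are formally appealing your denial of {d.service}, which your
notice attributes to: "{d.denial_reason}".

The treating physician's documentation supports medical necessity:
{d.clinical_justification}

Per applicable appeal rights, please provide the name and credentials
of the reviewer, the specific clinical criteria applied, and a written
rationale for this determination.
"""

# Example usage with hypothetical claim details:
print(draft_appeal(Denial(
    patient="Jane Doe",
    claim_id="CLM-2025-0417",
    service="cardiac MRI",
    denial_reason="not medically necessary",
    clinical_justification="Abnormal stress test and family history of "
                           "cardiomyopathy; MRI is the indicated next step.",
)))
```

The point of automating even this much is leverage: a letter that took the insurer seconds to generate should not cost a practice an hour to answer.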
Patients can fight back, too.
- Appeal every denial. Insurers count on patients giving up, but appeals often get denials overturned.
- Demand specific reasons for rejection. Make them explain their decision in writing.
- Report insurers to state insurance commissioners. Some states, like California, are moving to ban AI-only denials. Push legislators to do the same.
Insurers love to say that denials “prevent unnecessary care.” That’s a lie. These denials aren’t about preventing waste. They’re about not paying for care, period. Real people are suffering. And real people don’t know they have the power to do anything about it.
Hospitals are collapsing due to under-reimbursement from payors, increased administrative burden, and significantly delayed payments – often to the tune of millions of dollars outstanding for six months or more. Meanwhile, insurers brag about how much their AI systems have “optimized” denials and increased their own profits.
The system is rigged, but hospitals, doctors, and patients aren’t backing down. Denials will continue to be a primary theme for most health systems as they are forced to embrace what has historically been a taboo subject: their finances and the role payors play in their long-term sustainability.