How AI Shapes Your Health Insurance Coverage

The Role of AI in Health Insurance Decisions
Over the past decade, health insurance companies have increasingly turned to artificial intelligence (AI) algorithms to make critical decisions about patient care. Unlike doctors and hospitals, which use AI to diagnose and treat patients, insurers use these tools to determine whether they will cover recommended treatments and services. One of the most common applications is prior authorization, where a doctor must obtain payment approval from an insurance company before providing care. These AI systems help insurers decide if the requested care is "medically necessary" and how much care a patient is entitled to, such as the number of days of hospitalization after surgery.
If an insurer declines to pay for a treatment recommended by a doctor, patients typically have three options: appeal the decision, agree to a different treatment that the insurer covers, or pay for the recommended treatment themselves. However, appealing a denial can be time-consuming and costly and often requires expert assistance. Only about 1 in 500 claim denials is appealed, and even when appeals succeed, the process can take years. This delay can have serious consequences, especially for patients with life-threatening conditions.
Concerns About AI in Health Insurance
As a legal scholar who studies health law and policy, I am concerned about how these AI systems affect people’s health. While insurers argue that AI helps them make quick, safe decisions about what care is necessary and avoids wasteful or harmful treatments, there is strong evidence that the opposite can be true. These systems are sometimes used to delay or deny care that should be covered, all in the name of saving money.
There is a pattern of withholding care, particularly for expensive, long-term, or terminal health problems. Research shows that patients with chronic illnesses are more likely to be denied coverage, and Black and Hispanic individuals, as well as LGBTQ+ communities, are disproportionately affected. Some evidence also suggests that prior authorization may increase rather than decrease healthcare costs.
Insurers often argue that patients can always pay for any treatment themselves, but this ignores the reality that many people cannot afford the care they need. These decisions have serious health consequences, especially when people are unable to access essential treatments.
The Need for Regulation
Unlike medical algorithms, insurance AI tools are largely unregulated. They do not have to go through FDA review, and insurance companies often claim their algorithms are trade secrets. This lack of transparency means there is no public information about how these tools make decisions, and there is no outside testing to ensure they are safe, fair, or effective. No peer-reviewed studies exist to show how well these tools work in the real world.
However, there is some momentum for change. The Centers for Medicare & Medicaid Services (CMS) recently announced that insurers in Medicare Advantage plans must base decisions on the needs of individual patients—not just generic criteria. But these rules still allow insurers to create their own decision-making standards and do not require outside testing. Additionally, federal rules apply only to public health programs like Medicare and do not regulate private insurers.
Some states, including Colorado, Georgia, Florida, Maine, and Texas, have proposed laws to rein in insurance AI. A few have passed new laws, including a 2024 California statute requiring a licensed physician to supervise the use of insurance coverage algorithms. However, most state laws suffer from the same weaknesses as the CMS rule. They leave too much control in the hands of insurers to define “medical necessity” and do not require independent testing of these algorithms before use.
The Role of the FDA
In the view of many health law experts, the gap between insurers’ actions and patient needs has become so wide that regulating health care coverage algorithms is now imperative. As I argue in an essay to be published in the Indiana Law Journal, the FDA is well positioned to do so. The FDA is staffed with medical experts who have the capability to evaluate insurance algorithms before they are used to make coverage decisions. The agency already reviews many medical AI tools for safety and effectiveness. FDA oversight would also provide a uniform, national regulatory scheme instead of a patchwork of rules across the country.
Some people argue that the FDA’s power here is limited. For the purposes of FDA regulation, a medical device is defined as an instrument “intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease.” Because health insurance algorithms are not used to diagnose, treat, or prevent disease, Congress may need to amend the definition of a medical device before the FDA can regulate those algorithms.
If the FDA’s current authority isn’t enough to cover insurance algorithms, Congress could change the law to give it that power. Meanwhile, CMS and state governments could require independent testing of these algorithms for safety, accuracy, and fairness. That might also push insurers to support a single national standard—like FDA regulation—instead of facing a patchwork of rules across the country.
The move toward regulating how health insurers use AI in determining coverage has clearly begun, but it is still awaiting a robust push. Patients’ lives are literally on the line.