Tuesday, April 21, 2026

Profiling Indiana Univ Prof. Jennifer Oliva: AI in Healthcare and More

An April 2026 STAT Plus article by Casey Ross on UnitedHealthcare and AI highlighted the expertise of Prof. Jennifer Oliva of Indiana University.

Here, ChatGPT summarizes some of her work. Find her bio here, which includes comprehensive links to her publications:

https://law.indiana.edu/about/people/details?name=oliva-jennifer-d

###

Professor Jennifer D. Oliva emerges from these materials as a health-law scholar working at the junction of AI, insurance coverage, public health, and administrative power. Her Indiana bio places her at Indiana University Maurer School of Law as Professor of Law and Val Nolan Faculty Fellow, with research interests spanning health law and policy, privacy, evidence, torts, and complex litigation. Her bio also shows a notably interdisciplinary background—West Point, Oxford MBA, Georgetown JD—plus roles with Georgetown’s O’Neill Institute and the UCSF/UC Law Consortium.

What is striking in Oliva’s recent work is that she does not treat AI in healthcare as a generic “innovation” story. She treats it as a power-allocation problem: who gets to decide, at scale and often opaquely, whether patients receive care, whether agencies remain evidence-based, and whether the law can still reach the real actors making consequential decisions. That theme runs through both her insurance-algorithm articles and, in a different register, through “Government Goes Goop.”

In “Regulating Healthcare Coverage Algorithms” (Indiana Law Journal, 2025), Oliva’s basic move is to shift the center of attention from clinical AI to coverage AI. She notes that while FDA regulates many AI-enabled clinical tools, insurers also use algorithms to determine whether care is “medically necessary” and how much care will be covered, yet those coverage algorithms remain largely unregulated, proprietary, and shielded from external validation. She argues that this is not a merely administrative issue: coverage algorithms can be used to delay or deny medically necessary treatment, with direct consequences for patient health.

[In late 2024, the Biden administration proposed a rule that would have required Medicare Advantage plans to provide much greater transparency about their use of AI and about coverage criteria in general; this was nixed in the final rule under Trump II.]

That article presents Oliva as a scholar with a very clear instinct: follow the real decision-maker. If an algorithm is effectively deciding access to care, she argues, then the law should not be distracted by formal distinctions between a tool that guides treatment and a tool that governs payment for treatment. Her argument is that those distinctions may look neat on paper, but in the real world, where most patients cannot self-fund expensive care, a coverage algorithm can be just as consequential as a diagnostic or treatment algorithm. 

On that basis, she presses for robust oversight, ideally through FDA authority if available, and otherwise through legislative expansion and interim state action requiring pre-market assessment for validity, accuracy, and fairness.

The article’s tone is also important. It is not technophobic. Oliva acknowledges that such tools may promise efficiency and standardization. But she is skeptical of the way those promises interact with insurer incentives. Her recurring concern is that AI, when deployed inside utilization management, can become a mechanism for industrialized claims control rather than improved care. In her framing, the real policy problem is not only bias or opacity in the abstract, but the combination of automation plus profit motive plus weak oversight.

Her later article, “Regulating Healthcare Coverage Algorithms in the Shadow of ERISA” (Michigan Law Review, forthcoming 2027), sharpens and advances that project. Here, Oliva takes up the big doctrinal obstacle: even if states want to regulate these systems, won’t ERISA preemption block them? 

Her answer is the article’s central intellectual contribution. She argues that states should reconceive coverage algorithms not as inseparable parts of employee benefit plan administration, but as standardized commercial products designed, manufactured, and sold by third-party vendors. If that is right, then state rules requiring validation of these tools are better understood as product safety regulation, not forbidden interference with plan administration.

That is a clever and important pivot. Instead of attacking ERISA head-on, Oliva tries to route around it. She accepts that ERISA has long frustrated state regulation of employer-sponsored health plans, especially self-insured plans, but argues that states still retain room to regulate the software products themselves. The article therefore proposes a model state framework centered on pre-market validation, ongoing performance monitoring, and enforcement by specialized state Algorithm Policy Offices, with regulation directed at vendors rather than plans.

So the difference between the two algorithm papers is useful. The Indiana piece is the broad normative and regulatory claim: these coverage algorithms are high-stakes healthcare tools and should not enjoy a “free ride.” The Michigan piece is the doctrinal and institutional sequel: given the constraints of ERISA and federal inaction, here is a legally defensible path for state governance. Together, they show Oliva at work as both critic and constructor—first identifying the regulatory void, then designing a plausible architecture to fill it.

The ERISA article also shows another feature of her style: she writes against the grain of conventional assumptions. The abstract expressly says the article “challenges” the assumption that comprehensive state oversight is beyond reach because of ERISA. That tells you something about her scholarly temperament. She does not simply describe preemption as an immovable barrier; she treats it as a legal terrain that can be reinterpreted for the era of algorithmic decision-making.

Then there is “Government Goes Goop” (Emory Law Journal Online, 2026), which at first looks like a departure from AI-insurance scholarship but in fact fits the same larger pattern. Here Oliva argues that the ascent of wellness and anti-vaccine figures into senior federal health roles represents the culmination of a long historical evolution in American health fraud—from medicine shows to institutional capture. She traces continuities in tactics: emotional manipulation, conspiratorial framing, anti-establishment posture, and exploitation of information asymmetries.

The paper is especially pointed in its treatment of the second Trump administration. Its table of contents and introduction frame the administration’s health leadership as a “wellness cabinet,” with discussions of Kennedy at HHS, Bhattacharya at NIH, Oz at CMS, and Makary at FDA, plus subsequent vaccine-skeptic appointments. Oliva’s claim is not merely that these figures hold controversial views. It is that federal health agencies risk being transformed from evidence-based institutions into platforms for wellness ideology, with effects including staff departures, preventable outbreaks, and weakened scientific credibility.

What links “Government Goes Goop” to the insurance-algorithm work is her deeper preoccupation with epistemic governance. In one set of papers, the question is: who validates the algorithm that determines care? In the Goop paper, the question becomes: who validates the very institutions meant to protect scientific standards? In both settings, Oliva worries about systems that look official, scalable, and rationalized, yet may be driven by commercial incentives, ideology, or both, while ordinary patients bear the risk.

A fair overall profile, then, is that Oliva is developing into a scholar of health-law infrastructure. She studies not just rights or doctrines in isolation, but the machinery by which modern healthcare decisions are made: utilization-management software, regulatory jurisdiction, preemption doctrine, agency leadership, and the porous boundary between public health expertise and commercialized misinformation. Her work is distinctive because it combines practical policy urgency with doctrinal inventiveness. She is not content to say that AI and healthcare are complicated. She asks, very concretely, who is governing whom, by what tool, under what legal authority, and with what accountability.