Sunday, October 12, 2025

Some recent papers on quantitative risk–benefit assessment for regulatory approvals

Overview

Quantitative risk–benefit assessment (qRBA) has emerged as a cornerstone of modern regulatory science, bridging the gap between qualitative expert judgment and model-informed decision frameworks. As regulatory agencies, health technology assessors, and industry sponsors increasingly seek transparent, data-driven methods for evaluating uncertainty and patient preference, the field is rapidly evolving toward integrated frameworks that combine simulation modeling, multi-criteria decision analysis (MCDA), and real options valuation. The recent literature reflects this shift: studies by Fu et al., Ghabri, Freyer et al., Adjei et al., and Sheng & Zhang collectively highlight the maturing interface between model-informed drug development (MIDD), fit-for-purpose quantitative modeling, and forward-looking benefit–risk appraisal. Together, these works mark a transition from static, descriptive assessments toward dynamic, evidence-anchored systems capable of informing regulatory, coverage, and investment decisions across drugs, biologics, and medical devices.


Below are concise, graduate-level takeaways—first by article, then as a synthesis that can be used to frame a seminar, memo, or study design.

Individual reviews

Adjei et al., JMCP (Oct 2025): “Review of the applications of the real option value (ROV) of medical technologies in oncology.”

  • What they did. Systematic review (2000–May 2024) of peer-reviewed oncology studies applying ROV—i.e., valuing survival time partly because it buys access to future innovations.

  • How ROV was modeled. Both ex post and ex ante perspectives; common forecasting tools included Lee–Carter mortality projections (a standard form is sketched after this list), Cox models on claims, and approval-likelihood estimates from early RCT pipelines.

  • Findings & limits. ROV is potentially material for cancers with fast innovation (melanoma, lung, prostate), but methods are heterogeneous; assumptions can be oversimplified; there’s no standardized framework yet. Authors argue for integrating ROV into HTA/value frameworks and coverage decisions.

  • Why it matters. For pharmacy/HTA stakeholders, ROV reframes “survival” as an option that may amplify cost-effectiveness when downstream therapies are likely.
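
For reference, the Lee–Carter model cited among the forecasting tools above decomposes log age-specific mortality into an age pattern plus a single time-varying index, with the index typically projected as a random walk with drift:

$$\ln m_{x,t} = a_x + b_x\,\kappa_t + \varepsilon_{x,t}, \qquad \kappa_t = \kappa_{t-1} + d + u_t$$

Here a_x is average log-mortality at age x, b_x the age-specific sensitivity to the mortality index κ_t, and d the drift term that drives long-run mortality improvement, which is the piece ex ante ROV analyses extrapolate.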

Fu, Li, Scott, and He, Contemporary Clinical Trials (2020): “A new framework to address challenges in quantitative benefit–risk assessment (qBRA).”

  • Motivation. Many structured BR frameworks (PrOACT-URL, FDA/ICH) are qualitative; the authors catalog qBRA methods and propose a framework that makes their application, and the handling of uncertainty, explicit.

  • Methods context. Highlights MCDA and SMAA; notes that MCDA relies on point estimates and, without added steps, typically neglects uncertainty (the additive value model and its SMAA extension are sketched after this list).

  • Framework & demonstration. A four-step, case-agnostic structure, demonstrated by simulating a phase III cardiovascular (CV) trial to show implementation and sensitivity analysis; the goal is efficient, transparent evidence summarization for regulators and industry.
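
To make the point-estimate critique concrete: most MCDA-based qBRA scores each alternative a with a linear additive value model,

$$V(a) = \sum_{i=1}^{n} w_i\, v_i\big(x_i(a)\big), \qquad \sum_{i=1}^{n} w_i = 1,\ w_i \ge 0,$$

where x_i(a) is the performance of alternative a on criterion i, v_i a partial value function, and w_i an elicited weight. SMAA replaces fixed weights and performances with probability distributions and reports rank acceptability indices, b_a^r = Pr[rank(V(a)) = r], so uncertainty in both the evidence and the preferences propagates to the conclusion.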

Ghabri, Value in Health (Editorial, 2023): “Emerging Good Practices for Quantitative Benefit–Risk Assessment: A Step Forward.”

  • Regulatory baseline. EMA (2011–2015) and FDA (2019/2021) primarily use qualitative BR (e.g., PrOACT-URL, BRAT) but acknowledge roles for quantitative analyses in difficult, preference-sensitive decisions.

  • What’s new. ISPOR’s qBRA Good Practices give step-by-step guidance aligned with MCDA (define the question; model; elicit preferences; analyze, including sensitivity analyses; communicate with checklists and visuals).

  • Editorial stance. Quantifying outcome weights and uncertainty can improve rigor and communication when qualitative judgment isn’t enough.

Freyer et al., Expert Review of Medical Devices (2025): “Methods for benefit–risk assessment of medical devices: a systematic review.”

  • Scope. Surveys methods, standards, and evolving constraints for devices across the product lifecycle, touching FDA/EMA guidance, patient-preference frameworks, ISPOR 2023 qBRA Good Practices, and post-market/real-world considerations.

  • Contemporary pressures. Cybersecurity, AI-enabled devices, and digital health require expanded risk concepts and performance monitoring platforms; regulation/reimbursement must adapt to flexible, modular tech.

  • Practical implication. Device BR is multi-source and iterative; visual methods, Bayesian networks, and structured plans are increasingly used to make trade-offs explicit and auditable.
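
To illustrate the kind of probabilistic, systems-oriented tooling Freyer et al. point to, below is a minimal sketch of a discrete Bayesian network for a device question, evaluated by brute-force enumeration in plain Python. The chain structure (breach → malfunction → harm) and every probability are hypothetical placeholders, not values from the review.

```python
# Minimal discrete Bayesian network, evaluated by brute-force enumeration.
# Chain: SecurityBreach -> Malfunction -> Harm. All probabilities are
# hypothetical placeholders for illustration only.
from itertools import product

P_breach = {True: 0.05, False: 0.95}               # P(Breach)
P_malf = {True: {True: 0.40, False: 0.60},         # P(Malfunction | Breach)
          False: {True: 0.02, False: 0.98}}
P_harm = {True: {True: 0.30, False: 0.70},         # P(Harm | Malfunction)
          False: {True: 0.01, False: 0.99}}

def joint(breach, malf, harm):
    """Joint probability under the assumed chain factorization."""
    return P_breach[breach] * P_malf[breach][malf] * P_harm[malf][harm]

# Marginal risk of harm, and how strongly observed harm implicates security.
p_harm = sum(joint(b, m, True) for b, m in product((True, False), repeat=2))
p_breach_given_harm = sum(joint(True, m, True) for m in (True, False)) / p_harm

print(f"P(harm)          = {p_harm:.4f}")
print(f"P(breach | harm) = {p_breach_given_harm:.4f}")
```

The appeal for devices is exactly what the bullet above notes: interdependencies among non-clinical and clinical harms stay explicit and auditable, and post-market data can update the conditional tables without rebuilding the whole assessment.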

Sheng & Zhang, J. Pharmacokinet. Pharmacodyn. (2025): “Advancing drug development with ‘Fit-for-Purpose’ modeling-informed approaches.”

  • Thesis. Provides a strategic blueprint for aligning Model-Informed Drug Development (MIDD) tools with the Question of Interest (QOI) and the Context of Use (COU) across FDA’s five stages (discovery → post-market).

  • FFP criteria. Define COU; ensure data quality; verify/qualify models; match model influence and risk to decision stakes; avoid both over- and under-complexity. Examples span QSAR, PBPK, semi-mechanistic PK/PD, PPK/ER, QSP, adaptive/Bayesian designs, MBMA, and DDI simulations.

  • Why it matters. Positions MIDD/FFP as the operational complement to qBRA—producing quantitative, decision-ready inputs that shorten timelines and raise success probability when embedded with cross-functional teams.
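
As a toy example of the decision-ready inputs this pairing produces, the sketch below simulates between-subject variability in clearance and summarizes the predicted effect across dose levels with a simple Emax exposure–response model. All parameter values are illustrative assumptions, not figures from the paper.

```python
# Hypothetical fit-for-purpose exposure-response (ER) sketch: simulate
# steady-state exposure across dose levels and summarize predicted effect
# with an Emax model. Every parameter value is illustrative.
import numpy as np

rng = np.random.default_rng(42)

E0, EMAX, EC50 = 5.0, 40.0, 0.5   # baseline effect, max effect, potency (mg/L)
CL_MEDIAN, CL_SD = 10.0, 0.3      # clearance: median (L/h), sd of log-clearance

def simulate_response(dose_mg_per_day, n=5000):
    """Predicted effect at steady state for one daily dose level."""
    cl = CL_MEDIAN * np.exp(rng.normal(0.0, CL_SD, size=n))  # between-subject variability
    c_avg = dose_mg_per_day / (cl * 24.0)                    # average concentration, mg/L
    return E0 + EMAX * c_avg / (EC50 + c_avg)                # Emax exposure-response model

for dose in (100, 200, 400):
    eff = simulate_response(dose)
    print(f"{dose:>4} mg/day: median effect {np.median(eff):.1f} "
          f"(5th-95th pct {np.percentile(eff, 5):.1f}-{np.percentile(eff, 95):.1f})")
```

Outputs like these distributions of effect by dose are precisely the quantitative inputs a downstream qBRA can weight and stress-test.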

Cross-paper synthesis and practical insights

  1. From qualitative narratives to calibrated, preference-sensitive math.
    Regulators still rely on qualitative BR frameworks, but the ISPOR qBRA good practices (and Fu et al.’s framework) push toward MCDA-style structures with explicit weights, uncertainty, and reporting checklists—especially when decisions are preference-sensitive or borderline. This improves transparency and sensitivity analysis around trade-offs.

  2. Decision-grade modeling needs “fit-for-purpose” discipline.
    Sheng & Zhang supply the operational playbook: define COU and QOI, select the right model for the decision, verify/validate, and scale model influence with decision risk; this dovetails with qBRA by generating credible inputs (e.g., dose–response, subgroup ER, long-term extrapolations) that can be weighted in MCDA and stress-tested.

  3. Life-cycle lens and real options.
    Adjei et al. show that in rapid-innovation contexts (oncology), ROV may materially alter value—and thus the BR envelope—because survival confers access to future therapies. This argues for forward-looking BR that accounts for pipeline dynamics and learning, not just point-in-time efficacy and risk; a back-of-envelope sketch follows this list.

  4. Devices and digital bring new risk constructs.
    Freyer et al. highlight cybersecurity, software updates, AI drift, and modular combos—all of which expand “risk” beyond classic clinical harms. Device/digital BR should therefore integrate standards, post-market monitoring, and patient-preference evidence, and often benefit from Bayesian network or other probabilistic, systems-oriented tools.

  5. Method harmonization and reporting.
    Across papers, the most pragmatic need is standardization—clear documentation of weights, COU, data sources, uncertainty methods, and sensitivity analyses—to make qBRA auditable and comparable across submissions and HTA reviews. ISPOR’s reporting checklist is the current anchor.
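
To make the real-options point in item 3 tangible, the sketch below compares discounted life-years with and without the option to reach a future innovation, with innovation arrival modeled as an exponential waiting time. All rates, survival distributions, and gains are hypothetical; a serious ROV analysis would use Lee–Carter-style projections and pipeline-specific approval probabilities, as Adjei et al. describe.

```python
# Back-of-envelope real option value (ROV) sketch. Survival under today's
# therapy also buys a chance to be alive when a future innovation arrives.
# All rates and gains are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(7)
N = 100_000
DISCOUNT = 0.03        # annual discount rate
MEDIAN_OS = 2.0        # median overall survival on current therapy (years)
ARRIVAL_RATE = 0.15    # innovations per year (exponential waiting time)
GAIN_YEARS = 1.5       # extra survival if the patient reaches the innovation

surv = rng.exponential(MEDIAN_OS / np.log(2), N)   # exponential OS matching the median
arrival = rng.exponential(1 / ARRIVAL_RATE, N)     # time until the innovation arrives
total = surv + np.where(surv > arrival, GAIN_YEARS, 0.0)

def discounted_years(t):
    """Discounted life-years over a lifetime t at a constant rate."""
    return (1 - np.exp(-DISCOUNT * t)) / DISCOUNT

conventional = discounted_years(surv).mean()
with_option = discounted_years(total).mean()
print(f"conventional value: {conventional:.3f} discounted life-years")
print(f"with option value : {with_option:.3f} (ROV ≈ {with_option - conventional:.3f})")
```

Sweeping ARRIVAL_RATE and GAIN_YEARS shows why ROV matters most in fast-innovation cancers: the option term grows with both the pipeline's pace and the payoff of reaching it.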

How to apply this (research or regulatory strategy)

  • Design qBRA that “clicks” into MIDD. Start with the regulatory QOI (e.g., dose justification; subgroup benefit; long-term benefit vs AE risk), produce FFP models (PPK/ER, PBPK, MBMA) to generate quantitative outcomes, then feed those into MCDA with stakeholder-elicited weights and probabilistic sensitivity (e.g., SMAA; a minimal sketch follows this list), documenting everything with the ISPOR checklist.

  • In oncology or other fast-moving areas, run scenario analyses that embed ROV: model pipeline arrival rates/uptake and mortality trends (e.g., Lee–Carter) to see how option value shifts BR at different time horizons.

  • For devices/digital, incorporate non-clinical harms (security, usability, algorithmic drift) and post-market performance explicitly into the BR structure; consider Bayesian networks when interdependencies are many and data are evolving.

  • Make uncertainty first-class. Use probabilistic analyses (not only point-estimate MCDA), scenario/sensitivity sweeps on both clinical inputs and preference weights, and report how rankings change. (This is the chief critique of basic MCDA.)
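
Tying the first and last bullets together, here is a minimal SMAA-style sketch: Monte Carlo over Dirichlet-sampled weights and normally distributed criterion scores, reporting how often each alternative ranks first. The alternatives, criteria, and numbers are all hypothetical; a real analysis would use elicited weight constraints and follow the ISPOR reporting checklist.

```python
# SMAA-style probabilistic MCDA sketch: Monte Carlo over Dirichlet-sampled
# weights and normally distributed criterion scores, reporting how often
# each alternative ranks first. All inputs are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
alts = ["Drug A", "Drug B", "Placebo"]

# Criterion scores scaled 0-1 (higher is better): efficacy, safety, convenience.
means = np.array([[0.70, 0.50, 0.60],    # Drug A
                  [0.60, 0.70, 0.50],    # Drug B
                  [0.20, 0.90, 0.80]])   # Placebo
ses = np.full_like(means, 0.08)          # standard errors on each score

N = 20_000
first = np.zeros(len(alts))
for _ in range(N):
    w = rng.dirichlet(np.ones(means.shape[1]))   # uninformative weight prior
    x = rng.normal(means, ses)                   # sampled criterion performances
    first[np.argmax(x @ w)] += 1                 # additive value; record the winner

for name, share in zip(alts, first / N):
    print(f"{name:8s} rank-1 acceptability: {share:.2%}")
```

Reporting the full rank distribution (not just rank 1) and re-running under constrained weight priors is the natural sensitivity sweep the last bullet calls for.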

A natural next step is a 1–2 page qBRA+MIDD analysis template (sections, tables, and figures) that reflects these practices and can be dropped into protocols or briefing packages.