Understanding the new paper, “Developing Consensus for more provider-friendly NGS molecular biomarker report; A joint consensus recommendation of the AMP and CAP.”
Gibson et al., October 2025, J Mol Diagn.
ChatGPT said:
Here’s a concise TLDR with an emphasis on what’s actually new or distinctive, not just boilerplate guideline-speak:
Main Point
The paper proposes a consensus reporting template for next-generation sequencing (NGS) molecular biomarker reports—especially for oncology—intended to make reports “provider-friendly” (oncologist-friendly).
It argues that complex, inconsistent, and sometimes overwhelming NGS reports are one reason testing is under-used, and that a shared, evidence-based template could improve test adoption, interpretation, and patient care.
What’s Original vs. Routine
Original contributions:
- Shift of focus: rather than another interpretation guideline (like AMP/ASCO/CAP 2017), this is about the reporting format itself: layout, tables, length, color use, handling of preliminary reports, VUS, and pertinent negatives.
- Empiric review: they analyzed 17 real-world NSCLC NGS reports from commercial and academic labs and found only one truly “oncologist-friendly.”
- Consensus-building: unusually wide stakeholder engagement (AMP, CAP, ASCO, plus >150 public respondents) fed into the template.
- Pilot “mock” report for NSCLC: shows practical application, not just abstract principles.
- Explicit attention to workflow barriers: e.g., EHR compatibility, preliminary-report confusion, patient-portal release timing.
Mostly familiar / not novel:
- Re-emphasis on AMP/ASCO/CAP tiered variant classification and standard nomenclature (HGNC gene symbols, HGVS variant descriptions); a minimal sketch of such an entry appears after this list.
- Recognition of the importance of VUS, pertinent negatives, and guideline references (NCCN, ESMO, etc.); these points are common across prior documents.
- The call for concise, evidence-based therapeutic and diagnostic comments (without prescribing therapy) reiterates existing standards.
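To make the tiering and nomenclature conventions concrete, here is a minimal, hypothetical sketch of how one reported variant could be represented with AMP/ASCO/CAP tiers and HGNC/HGVS naming. All values are illustrative (the classic EGFR L858R driver seen in NSCLC, with a made-up allele frequency and an unversioned transcript); nothing here is drawn from the paper’s actual template.

```python
from dataclasses import dataclass

# AMP/ASCO/CAP 2017 four-tier scheme for somatic variants.
TIERS = {
    "I": "Strong clinical significance",
    "II": "Potential clinical significance",
    "III": "Unknown clinical significance (VUS)",
    "IV": "Benign or likely benign",
}

@dataclass
class ReportedVariant:
    gene: str        # HGNC gene symbol
    transcript: str  # RefSeq transcript used for HGVS numbering
    hgvs_c: str      # coding-level (c.) HGVS description
    hgvs_p: str      # protein-level (p.) HGVS description
    vaf: float       # variant allele frequency, as a fraction
    tier: str        # "I", "II", "III", or "IV"

# Illustrative example only; a real report would include the transcript version.
egfr_l858r = ReportedVariant(
    gene="EGFR",
    transcript="NM_005228",
    hgvs_c="c.2573T>G",
    hgvs_p="p.(Leu858Arg)",
    vaf=0.32,
    tier="I",
)

print(
    f"{egfr_l858r.gene} {egfr_l858r.hgvs_c} ({egfr_l858r.hgvs_p}), "
    f"VAF {egfr_l858r.vaf:.0%}, Tier {egfr_l858r.tier}: {TIERS[egfr_l858r.tier]}"
)
```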
How the Authors See Themselves
They cast the working group as “honest brokers” convened by AMP—not creating new standards of care, but harmonizing reporting practice.
The self-image is pragmatic and service-oriented: if oncologists and pathologists can see the same information in a clear, concise, consistent way, uptake of NGS will improve.
The Take-Home for a Reader
- The problem isn’t the science of NGS; it’s that reports are often unreadable and inconsistent, which impedes clinical use.
- The solution is a free, consensus-based template tested against actual reports and informed by a wide set of stakeholders.
- The paper’s novelty lies in format and usability, not in clinical content or variant-interpretation science.
In short, this is less about what to report and more about how to report it—a practical usability initiative positioned as an enabler of precision oncology.
You said:
ChatGPT said:
Here’s a clear-eyed critique of the effort, focusing on the parts most likely to draw skepticism from experts or health-system implementers:
1. Not Actually Validated
- The template was never implemented or tested in a real EHR or a large health-system workflow.
- There is no data showing that it improves oncologist comprehension, turnaround time, test adoption, or patient outcomes, so the project is aspirational rather than evidence-driven.
- It also skips cost-effectiveness and burden analyses, which makes it hard for labs to justify re-tooling their reports.
Likely criticism: “Nice consensus paper, but we don’t know if it works in the wild.”
2. Narrow Pilot Scope
- The mock template is built around NSCLC but is presented as broadly generalizable.
- The group acknowledges that hematologic cancers and inherited-cancer testing would need different report structures, but does not offer even a conceptual framework for them.
- By focusing on one tumor type, the guidance risks being too disease-specific for universal adoption.
Likely criticism: “This is an NSCLC-centric report dressed up as a general solution.”
3. Subjectivity in “Provider-Friendly”
- The core concept, what counts as “oncologist-friendly,” is not objectively defined.
- Judgments about color use, length, or table structure are panel opinions, not user-tested heuristics.
- The single-reporter assessments (“only 1 of 17 was oncologist-friendly”) underline that this is a value judgment, not a reproducible metric.
Likely criticism: “You’ve codified preferences, not evidence-based design principles.”
4. Ignoring Major Workflow / IT Constraints
- The template leans heavily on tables, color, and formatting, but most EHR/LIS systems strip formatting or render it inconsistently.
- The report is still PDF-centric; true structured-data interoperability (FHIR, discrete elements in the EHR) is out of scope. A rough sketch of what a structured variant result could look like follows this section.
- There is no practical roadmap for how reference labs and hospitals should integrate the template into their pipelines.
Likely criticism: “This is 2025 and still PDF-oriented; EHR interoperability remains unsolved.”
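To illustrate the structured-data alternative this critique points toward, here is a minimal, hypothetical sketch of a single variant expressed as an HL7 FHIR Observation rather than formatted PDF prose. This is an illustration under assumptions, not anything from the paper: a production implementation would follow the HL7 Genomics Reporting Implementation Guide and bind the component codes to the LOINC codes it defines, which are replaced here by plain-text descriptions.

```python
import json

# Hypothetical sketch: one reported variant as a FHIR Observation resource.
# Plain-text CodeableConcepts stand in for the LOINC bindings defined by the
# HL7 Genomics Reporting IG; a real implementation would use those codes.
variant_observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "laboratory",
        }]
    }],
    "code": {"text": "Genetic variant assessment"},
    "subject": {"reference": "Patient/example"},  # placeholder reference
    "valueCodeableConcept": {"text": "Present"},
    "component": [
        {"code": {"text": "Gene studied (HGNC symbol)"},
         "valueCodeableConcept": {"text": "EGFR"}},
        {"code": {"text": "DNA change (c.HGVS)"},
         "valueCodeableConcept": {"text": "c.2573T>G"}},
        {"code": {"text": "Protein change (p.HGVS)"},
         "valueCodeableConcept": {"text": "p.(Leu858Arg)"}},
        {"code": {"text": "Variant allele frequency"},
         "valueQuantity": {"value": 32, "unit": "%"}},
        {"code": {"text": "AMP/ASCO/CAP tier"},
         "valueCodeableConcept": {"text": "Tier I"}},
    ],
}

# Discrete elements like these can be queried, exchanged, and re-rendered by an
# EHR, which a formatted PDF cannot offer.
print(json.dumps(variant_observation, indent=2))
```

Whether structured payloads like this accompany, back, or eventually replace the human-readable report is precisely the interoperability question the template leaves out of scope.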
5. Weak Treatment of Key Content Challenges
- Variants of uncertain significance (VUS) and variant allele frequency (VAF) remain contentious; the paper offers only general remarks, not decisive guidance (a brief note on what VAF actually measures follows this section).
- The decision to include or omit therapeutic guidance is left largely to institutional preference.
- Pertinent negatives and secondary/incidental findings are acknowledged but not resolved.
Likely criticism: “The hardest content questions get hand-waved.”
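For readers unfamiliar with the quantity in dispute: VAF is simply the fraction of sequencing reads at a locus that carry the variant allele; the contention is about whether, where, and with what caveats to report it, not about the arithmetic. A minimal sketch with made-up read counts:

```python
def variant_allele_frequency(alt_reads: int, ref_reads: int) -> float:
    """Fraction of reads at a locus that support the variant allele."""
    total = alt_reads + ref_reads
    if total == 0:
        raise ValueError("no reads cover the locus")
    return alt_reads / total

# Illustrative read counts only.
vaf = variant_allele_frequency(alt_reads=160, ref_reads=340)
print(f"VAF = {vaf:.1%}")  # VAF = 32.0%
```

The judgment calls around that number, such as how it should be caveated and when it belongs on the report at all, are exactly where the paper stays at the level of general remarks.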
6. Stakeholder Engagement but Not True Co-Design
- The public-comment survey had fewer than 200 respondents, many of whom dropped off mid-survey, suggesting limited depth of engagement.
- Patients and front-line community oncologists, arguably the end-users, had relatively little structured input.
- Advisory-panel comments were folded into the consensus through informal discussion rather than a rigorous Delphi or mixed-methods design.
Likely criticism: “This is more of an AMP-CAP expert exercise than a co-designed clinical-user product.”
7. Potential Conflicts and Sponsor Optics
- Funded by AstraZeneca and Daiichi Sankyo through CME grants: legitimate, but it leaves the project open to a perception of bias toward pharma-driven biomarker use.
- The authors note this but do not discuss how reporting format might influence therapeutic uptake, which is the implicit concern.
Likely criticism: “The effort looks industry-tilted and under-analyzes downstream commercial implications.”
Bottom Line
The work is innovative in spotlighting report usability, but weak as an implementation science product.
Critics will point to:
- No real-world testing or outcome data
- A narrow disease scope and a subjective definition of “friendly”
- Persistent IT and workflow barriers left unaddressed
The paper’s strongest value is conceptual—pushing the field to think about reporting as a barrier to precision oncology—yet its practical impact will depend on follow-up implementation studies and health-system adoption work that hasn’t yet been done.