Monday, May 11, 2026

Artera Prostate 0376U Medicare Billing

Artera Prostate has had a PLA code, 0376U, for several years. Through December 2024, there were no paid Medicare claims. I do not know whether there are paid claims in 2025, but for this essay I assume not. I do understand that the Artera Prostate test (validated on many thousands of slides) has NCCN endorsement.

Below is direct ChatGPT research.

###

ESSAY

ArteraAI presents an interesting distinction between coding/pricing success and actual Medicare payment success. The company is appropriately described as Los Altos, California–based: FDA’s De Novo database lists Artera, Inc., 108 1st St., Los Altos, CA, as the requester for ArteraAI Prostate. FDA granted De Novo authorization for ArteraAI Prostate on July 31, 2025, classifying it as a pathology software algorithm device analyzing digital images for cancer prognosis. (FDA Access Data)

But the performing CLIA laboratory for the ArteraAI Prostate Test appears to be in Jacksonville, Florida, not California. Artera’s own test materials identify the clinical lab address as 6800 Southpoint Pkwy, Suite 950, Jacksonville, FL 32216. That places the performing lab under First Coast Service Options, the A/B MAC for Jurisdiction N, covering Florida, Puerto Rico, and the U.S. Virgin Islands. (ArteraAI) And yes — FCSO is First Coast Service Options.

The test has a PLA-type CPT code, 0376U, and CMS established a Medicare CLFS payment rate effective January 1, 2024. Artera publicly described this as a CMS payment-rate decision for the ArteraAI Prostate Test, a clinical diagnostic laboratory test. (ArteraAI) That is meaningful, but it is not the same thing as coverage. A CPT/PLA code plus a CLFS dollar amount means the test can be billed and priced; it does not necessarily mean that a MAC is paying claims.

That is the concerning point: according to the claims-data observation you are making, there appear to be no paid Medicare claims for 0376U in Florida or elsewhere, despite the existence of the code, the CLFS price, the ability to enroll/bill through an NPI, and the absence of a clear published LCD either covering or non-covering the test. In that sense, ArteraAI is not “unsuccessful” at coding or pricing; it is unsuccessful at converting those assets into actual Medicare payment.

This is especially striking because the test is not a speculative early-stage assay. It has FDA De Novo authorization, and Artera has stated that the ArteraAI Prostate Test was included in the 2024 NCCN Guidelines for Prostate Cancer as a predictive test for therapy personalization, with Level IB evidence under the Simon criteria. (ArteraAI) Artera’s clinician-facing materials also describe use across NCCN risk groups, with reported 10-year risks of distant metastasis and prostate cancer–specific mortality. (ArteraAI)

The likely rationale for the Florida lab location is therefore not mysterious. A California performing lab would fall under Noridian/MolDX jurisdiction, with the familiar Z-code/technical assessment pathway. By contrast, a Florida performing lab submits through First Coast Service Options / Jurisdiction N, outside the MolDX MAC structure. That may have looked like a cleaner path for a PLA-priced AI pathology test: obtain the code, obtain the CLFS price, bill through a non-MolDX MAC, and avoid a California MolDX technology-assessment bottleneck. (BQ: At least historically, FCSO paid Tier 2 codes 81401-81408, and PLA codes, fairly easily.)

But if the claims data show zero paid claims, the strategy may have produced only a formal pathway, not a payment pathway. The result is a reimbursement limbo that is highly relevant for innovators: FDA clearance, NCCN recognition, a PLA code, and a CLFS price still may not produce Medicare dollars when no MAC has articulated a coverage position and claims simply do not pay.

Genomic Health: Looking Back at the 2006 Stanford Case Study

In 2006, there was a Stanford Case Study about the Genomic Health test, written before adoption, before coding, before coverage or guidelines. (You can still buy the PDF via Harvard.)

Here's a combination of TLDR and further strategic analysis of then-and-now.


##

TLDR

The 2006 Stanford case is striking because Oncotype DX basically became the thing the case hoped it would become: a high-value, clinically validated, tumor-genomic test that changed adjuvant chemotherapy decision-making in ER+/HER2− early breast cancer. The case correctly identifies almost every major strategic hinge: clinical validation before broad adoption, physician skepticism, payer evidence demands, premium pricing, CPT/reimbursement friction, centralized CLIA laboratory control, and the need to treat diagnostics more like therapeutics than like commodity lab tests. The initial pivotal study already showed that the 21-gene recurrence score outperformed age, tumor size, and grade as a predictor of distant recurrence, with low-, intermediate-, and high-risk groups showing 10-year distant recurrence estimates of 6.8%, 14.3%, and 30.5%.
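The risk stratification described above can be sketched as a simple lookup. A hedged note on sources: the cutpoints (<18, 18-30, >=31) are the ones used in the original Paik et al. validation study, and the 10-year distant recurrence estimates are the figures quoted in the case; the function name and structure are purely illustrative.

```python
# Minimal sketch of the 21-gene recurrence score (RS) stratification.
# Cutpoints are from the original Paik et al. validation; the 10-year
# distant recurrence estimates are the figures quoted in the case.

RISK_GROUPS = [
    # (upper RS bound, exclusive; group label; 10-yr distant recurrence %)
    (18, "low", 6.8),
    (31, "intermediate", 14.3),
    (101, "high", 30.5),
]

def classify(recurrence_score):
    """Map a recurrence score (0-100) to its risk group and quoted risk."""
    for upper, label, recurrence_pct in RISK_GROUPS:
        if recurrence_score < upper:
            return label, recurrence_pct
    raise ValueError("recurrence score out of range 0-100")

print(classify(10))  # ('low', 6.8)
print(classify(25))  # ('intermediate', 14.3)
print(classify(40))  # ('high', 30.5)
```

The intermediate band is the one that TAILORx later addressed, which is why the case's framing of the three groups remained strategically important for a decade.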

What feels “frozen in amber” is the 2006 optimism that genomics would rapidly reorganize medicine into a broad information-first, high-margin diagnostic economy. That happened selectively, not universally. Oncotype DX was a canonical success, but many later genomic diagnostics struggled with evidence, reimbursement, adoption, and differentiation. The case also sits before the later era of TAILORx, RxPONDER, NCCN/ASCO embedding, PLA codes, MolDx/Z-codes, ADLT/PAMA, liquid biopsy, MRD, AI pathology, and FDA-LDT turbulence. So it reads both as a remarkably prescient founding document and as a fossil from an era when “personalized medicine” still sounded like a moonshot rather than a reimbursement trench war.

What the case is really about

The surface story is Genomic Health launching Oncotype DX, but the deeper story is the attempt to create a new category: diagnostics as high-value clinical decision tools, not low-margin lab commodities. Kim Popovits’ opening quote frames the entire case: the system had to recognize diagnostics as having a value proposition comparable to therapeutics, or Genomic Health’s whole model would be jeopardized.

That was the right fight. The company was not merely selling “a 21-gene panel.” It was selling a decision intervention at a specific moment: a woman with early-stage ER+ breast cancer, after surgery, facing the chemotherapy decision. The case repeatedly emphasizes that chemotherapy was expensive, toxic, and of limited absolute benefit for many patients; the unmet need was not more molecular information in the abstract, but better triage of who needed chemo and who could avoid it.

The case is also unusually modern in its evidence strategy. Genomic Health chose not to rely on the fact that CLIA allowed market entry as a “homebrew”/LDT. Instead, the team deliberately used something closer to a drug-development evidentiary blueprint: analytical rigor, archived FFPE tissue, blinded validation, prospectively defined endpoints, and high-profile oncology collaborators such as NSABP. That decision looks very prescient. The case says physicians had been “burned” by uncertain new tests and would not adopt a genomic diagnostic without clinical validation.

What was prescient

The biggest prescient point is that clinical utility would be the moat. The case understood that the hard part was not measuring RNA from paraffin blocks; it was proving that the result changed a real clinical decision. That lesson became the central doctrine of molecular diagnostics reimbursement for the next twenty years.

Second, the case anticipated the now-familiar idea that a diagnostic can be worth thousands of dollars if it prevents overtreatment or undertreatment. Genomic Health’s payer research found that once a test crossed the “expensive” threshold, payers were less sensitive to whether it cost $1,500 or $4,500, provided the test had convincing clinical value and validation. That is an early articulation of value-based diagnostics pricing.

Third, the case correctly saw that workflow fit matters. Oncotype DX used ordinary FFPE tumor tissue, required no special collection, could be sent overnight, and returned results within the two-to-three-week post-surgery chemotherapy decision window. That is a huge adoption advantage. The molecular test was radical, but the specimen logistics were almost boring—and that was part of the genius.

Fourth, it saw that physician education and payer education had to be built together. The case describes a reimbursement dossier, medical-director education, private-payer contracting, exception claims, ABNs, and the need to protect physicians from being financially burned by nonpayment. This is basically the modern playbook for high-value molecular diagnostics, only described before that playbook had become standard.

And fifth, the bet that Oncotype DX could become a durable platform asset proved correct. The later TAILORx trial helped settle the troublesome intermediate-risk category in node-negative HR+/HER2− disease, showing that many women with midrange recurrence scores could avoid chemotherapy without inferior outcomes. (New England Journal of Medicine) RxPONDER later extended the clinical story into selected node-positive patients, especially showing that postmenopausal women with 1–3 positive nodes and recurrence scores 0–25 could likely avoid chemotherapy, while premenopausal women appeared different. (New England Journal of Medicine)

What looks frozen in amber

The case is very 2006 in its language of “the genomics revolution.” It imagines a broad transition in which genomic information might come to dominate therapeutics as the highest-value layer of medicine. That was intellectually plausible, and in some niches correct, but it overgeneralized. Genomics became essential in oncology, rare disease, reproductive genetics, infectious disease, and transplant/MRD-style monitoring—but it did not broadly displace therapeutics as the dominant economic engine of biomedicine.

The case also predates the modern reimbursement bureaucracy. There is no PLA code universe, no MolDx/Z-code architecture, no PAMA shockwave, no ADLT pathway, no elaborate LCD evidentiary machinery, no NCD 90.2, no FDA LDT rulemaking drama. Its CPT discussion is charmingly early: should Genomic Health stack existing codes, or use a miscellaneous code and suffer manual review? Today, that same question would immediately branch into PLA strategy, gapfill/crosswalk, MolDx technical assessment, Medicare contractor jurisdiction, private payer policy, NCCN/ASCO status, and possibly FDA status.

The case is also frozen before liquid biopsy and MRD changed the mental model of oncology diagnostics. Oncotype DX is a tissue-based, one-time, adjuvant decision test. The modern frontier is often serial, blood-based, dynamic: therapy selection from ctDNA, recurrence monitoring, molecular response, tumor-informed MRD, fragmentomics, methylation, and AI-enhanced multimodal risk. Oncotype was an information product, but it was still a relatively static one.

Early misfires or overextensions

The most obvious overextension was the belief that the information layer might become more valuable than the drug layer. That is true in a rhetorical or clinical-decision sense in some cases, but not in market capitalization terms. Drugs did not become commodities. Immuno-oncology, targeted therapies, cell therapy, GLP-1s, and biologics all proved that therapeutics remained the giant economic center of gravity.

A second overextension was the implied scalability of the Oncotype model. The story can make it sound as though once you build the proof engine, many similar high-value genomic tests will follow. In reality, Oncotype DX was almost ideally positioned: common disease, painful binary decision, expensive/toxic avoidable treatment, large clinical trial archives, cooperative oncology networks, motivated patients, and a clear economic offset. Many later tests lacked one or more of those ingredients.

A third “misfire,” if that is not too harsh, was underestimating how long clinical adoption and reimbursement normalization would remain bespoke. The case correctly anticipates an 18–24 month reimbursement struggle, but the broader field did not evolve into a clean, rational marketplace for diagnostics. Even today, a great diagnostic can spend years in a swamp of coding, coverage, evidence thresholds, LCDs, private-payer variation, prior authorization, and physician inertia.

Bottom line

This case is not embarrassing in hindsight. Quite the opposite: it is one of those rare business-school cases where the central thesis largely came true. Oncotype DX became a standard-of-care genomic decision tool, later reinforced by large prospective evidence such as TAILORx and RxPONDER, and Genomic Health itself became valuable enough to be acquired by Exact Sciences in 2019. (Exact Sciences)

But it is also a perfect amber specimen from the first genomics boom: optimistic, therapeutic-disruption-minded, and confident that high-value diagnostics would force the system to recognize their worth. The more nuanced 2026 reading is: yes, but only when the test owns a decision, fits the workflow, has unusually strong evidence, and survives the reimbursement machinery. Oncotype DX did. Many others did not.


###

Addendum - Financial Architecture is Destiny

The visible controversy in 2004–2006 was, “How can a lab test cost $3,000?” But the deeper shift was not merely a price increase; it was a transformation in the financial structure of diagnostics. Traditional reference laboratory economics, exemplified by Quest or Labcorp, were built around high-throughput operations: heavy specimen logistics, high variable costs, broad menus, modest margins, and very little product-specific R&D as a percentage of revenue. The lab’s value was scale, efficiency, and fulfillment. Genomic Health was trying to invert that model. It wanted a diagnostic business with the economics of a biotech product company: substantial upfront development, clinical validation, publications, medical education, payer dossiers, a specialized sales force, and enough gross margin to fund the next product. The Stanford case says this explicitly in business-school language: traditional diagnostics were “high-volume, low-margin,” with little room for R&D, while Genomic Health wanted “high-value, information-rich diagnostics” that could command premium pricing and support ongoing research.

In that sense, Myriad’s BRCA test and Genomic Health’s Oncotype DX were not simply two expensive early genomic tests. They were two early attempts to move diagnostics out of commodity lab economics. Myriad had the patent-protected version of the model: old-stack molecular coding could be assembled into a roughly $3,000 service, defended by intellectual property and clinical distinctiveness. Genomic Health mirrored that price point but justified it less through gene patents and more through clinical utility, proprietary validation, brand, evidence development, and physician/payer education. Oncotype’s price was not only payment for the marginal cost of running RT-PCR on a paraffin block. It was payment for a new kind of diagnostic enterprise: one with perhaps 20% COGS and 20% R&D, rather than the classic lab model of 60% COGS and essentially 0% R&D. The test price therefore carried the burden of funding the whole innovation system around the assay.
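The stylized contrast between the two cost structures can be made concrete with a little arithmetic. The percentages below are the illustrative figures from the paragraph above, not actual company financials, and the helper function is invented for illustration.

```python
# Illustrative sketch of the two stylized diagnostic cost structures
# described above (not actual company financials).

def margin_profile(price, cogs_pct, rd_pct):
    """Return the gross margin and R&D spend implied by a stylized P&L."""
    cogs = price * cogs_pct
    gross_margin = price - cogs
    return {
        "cogs": cogs,
        "gross_margin": gross_margin,
        "gross_margin_pct": round(100 * gross_margin / price),
        "rd_spend": price * rd_pct,
    }

# Classic reference-lab model: ~60% COGS, essentially 0% R&D.
classic_lab = margin_profile(price=3000, cogs_pct=0.60, rd_pct=0.0)
# Innovation-product model: ~20% COGS, ~20% R&D.
innovation_model = margin_profile(price=3000, cogs_pct=0.20, rd_pct=0.20)

print(classic_lab)       # ~40% gross margin, no R&D funded per test
print(innovation_model)  # ~80% gross margin, $600 of R&D per test
```

The point the arithmetic makes is the one in the text: at the same $3,000 price, one model funds a next product and one does not, so the price is carrying the innovation system, not just the assay.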

That point was easy to miss because the payer and coding systems were still asking a primitive question: what technical steps were performed? Genomic Health was asking a different question: what clinical decision did the information change, and what treatment costs or toxicities did it help avoid? The case captures this tension in the CPT discussion: the company could stack existing codes and reach only about $1,700, which it believed failed to reflect the test’s clinical value, production cost, and development investment; or it could use a miscellaneous code and defend the value claim manually on each claim.

The historical significance, then, is not just that Oncotype DX was “expensive.” It was an early assertion that some diagnostics should be financed like innovation products, not like commodity lab procedures. The market eventually accepted that argument in selected cases, but only under demanding conditions: a defined clinical decision, strong evidence, trusted guidelines, payer education, and a credible story that the price funds real clinical innovation rather than merely exploiting coding arbitrage.

ArteraAI Prostate: The Evidence and Regulatory Story

[Consolidated from several workstreams]


ArteraAI Prostate: The Evidence and Regulatory Story

1. The Big Picture

Artera’s prostate cancer evidence program is best understood as a staged effort to establish multimodal AI digital pathology as both a prognostic and predictive clinical decision tool. The platform analyzes routine H&E prostate biopsy whole-slide images together with clinical variables, aiming to extract latent information about tumor aggressiveness and treatment benefit that is not captured by standard pathology, NCCN risk grouping, or conventional clinical variables alone.

The evidence program supports several different but related claims. Some publications address prognostic validity — predicting distant metastasis and prostate cancer–specific mortality. Others address predictive validity — identifying which patients benefit from androgen deprivation therapy, or from longer versus shorter ADT. Additional publications address analytical validation, racial subgroup performance, and real-world clinical utility. Finally, the FDA De Novo review provides a narrower but important regulatory-grade claim: ArteraAI Prostate as an FDA-authorized software device for 10-year risk estimates of distant metastasis and prostate cancer–specific mortality in non-metastatic prostate cancer.

That distinction is central. The commercial ArteraAI Prostate Test appears broader than the FDA-authorized ArteraAI Prostate device. The broader test narrative includes treatment-personalization claims such as short-term ADT benefit, possible active-surveillance insights, and treatment-intensification insights. The FDA authorization, by contrast, is focused on prognostic risk estimation.

(Superseded, Too Long) ArteraAI Prostate Test (Documents, Links)

[This is an archival, excessively long blog post about Artera regulatory history. I later used it to create a shorter single piece.]

Artera’s prostate cancer evidence program is best understood as a staged effort to establish multimodal AI digital pathology as both a prognostic and predictive clinical decision tool. The platform analyzes routine H&E prostate biopsy whole-slide images together with clinical variables, aiming to extract latent information about tumor aggressiveness and treatment benefit that is not captured by standard pathology or NCCN risk grouping. 

The published evidence begins with retrospective validation in large randomized phase III trial archives, showing that the model can predict distant metastasis and prostate cancer–specific mortality, and, more importantly, can identify patients more or less likely to benefit from androgen deprivation therapy. Spratt et al. support prediction of benefit from short-term ADT; Armstrong et al. extend this to long-term versus short-term ADT in high-risk disease; Parker et al. show prognostic validity in STAMPEDE patients with very high-risk or metastatic disease. Gerrard et al. addresses analytical validation, translating the AI output into a reproducible clinical laboratory test. Additional work addresses racial equity and real-world clinical utility through registry studies. 

The FDA De Novo review is narrower but important: it authorizes ArteraAI Prostate as a software device for 10-year prognostic risk estimates of distant metastasis and prostate cancer–specific mortality in non-metastatic prostate cancer, while the broader publication strategy supports a larger treatment-personalization platform.


####

Press release, August 2025.

https://artera.ai/news/artera-receives-u-s-fda-de-novo-marketing-authorization-for-ai-digital-pathology-software-revolutionizing-prostate-cancer-care

12 page PDF guide.

https://artera.ai/wp-content/uploads/ArteraAI-Prostate-Test-Guide.pdf

Regulation 88 FR 7007 Feb 2 2023 [Paige]

https://www.federalregister.gov/documents/2023/02/02/2023-02141/medical-devices-hematology-and-pathology-devices-classification-of-the-software-algorithm-device-to

https://www.govinfo.gov/content/pkg/FR-2023-02-02/pdf/2023-02141.pdf

FDA Classification Letter

https://www.accessdata.fda.gov/cdrh_docs/pdf24/DEN240068.pdf

FDA Decision Summary 24 pages

https://www.accessdata.fda.gov/cdrh_docs/reviews/DEN240068.pdf

###

The ArteraAI Prostate De Novo: A Tale of Two Devices, Two Pathways, and One Confusing Brand Name

You've identified what is genuinely one of the more puzzling regulatory stories in digital pathology, and the answer turns on a distinction that Artera's own marketing materials work hard to elide: the FDA-authorized "ArteraAI Prostate" and the commercial "ArteraAI Prostate Test" are not the same product, and they reached the market through entirely different regulatory channels. Once that distinction is clear, the choice of De Novo over 510(k) makes sense.

The Underlying Regulation: 21 CFR 864.3750

The codified regulation at 21 CFR 864.3750 was created by the FDA's February 2, 2023 Federal Register notice (88 FR 7007), which formally classified a brand-new generic device type — the "software algorithm device to assist users in digital pathology" — into Class II with special controls. As you correctly note, this classification was triggered by Paige.AI's request, which the FDA had received on December 31, 2020. The agency issued the classifying order to Paige on September 21, 2021, and the Federal Register notice in 2023 was the formal codification step that added § 864.3750 to the Code of Federal Regulations.

The generic device type is identified in the regulation as "an in vitro diagnostic device intended to evaluate acquired scanned pathology whole slide images" that "uses software algorithms to provide information to the user about presence, location, and characteristics of areas of the image with clinical implications," with information "intended to assist the user in determining a pathology diagnosis." Read carefully, this language describes a diagnostic adjunct — the special controls in subsection (b)(1)(vi)–(vii) explicitly require labeling that the device is used "as an adjunct" and "in conjunction with complete standard of care evaluation of the WSI." This is essentially what Paige Prostate does: it flags suspicious regions on a prostate biopsy whole slide image to help the pathologist render a diagnosis (is there cancer here or not, and where).

Why ArteraAI Prostate Couldn't Use 510(k) — The Intended-Use Mismatch

This is the crux of your question, and the answer is that substantial equivalence under 510(k) requires both the same intended use and the same technological characteristics (or different technological characteristics that don't raise new questions of safety and effectiveness) as the predicate. ArteraAI Prostate fails the first prong decisively.

Paige Prostate's intended use is diagnostic assistance: helping a pathologist identify whether cancer is present on a slide and where it is located. ArteraAI Prostate's intended use, as described in the August 13, 2025 press release, is prognostication of long-term outcomes for patients with non-metastatic prostate cancer — a fundamentally different clinical question asked at a different point in the care pathway. Paige answers "is this cancer?"; Artera answers "given that this is cancer, how aggressive is it and what is the risk of distant metastasis or prostate-cancer-specific mortality over 10 years?" One is a pattern-recognition aid for the pathologist at the moment of diagnosis; the other is a risk-stratification tool feeding downstream treatment decisions by the urologist or radiation oncologist.

Beyond intended use, the technological inputs also differ in ways material to safety and effectiveness. Paige operates on the whole slide image alone. Artera's MMAI platform, by its own description, combines digitized biopsy images with structured clinical data (PSA, Gleason score, T-stage, age) to produce its risk score. That multimodal architecture, and the prognostic output it generates, raises questions the Paige De Novo decision summary and special controls were never designed to address — particularly around the clinical-data inputs, the predictive (not just prognostic) claim for ST-ADT benefit, and the validation against long-term oncologic endpoints rather than concordance with pathologist diagnoses.

So the 510(k) pathway was effectively unavailable. The Paige predicate doesn't cover Artera's intended use, and there was no other legally marketed prognostic digital-pathology device to cite. Under section 513(f)(2) of the FD&C Act, when a sponsor "determines that there is no legally marketed device upon which to base a determination of substantial equivalence," De Novo is the appropriate route. The 2023 Federal Register notice itself describes this second De Novo procedure, where a sponsor skips the 510(k) attempt and goes directly to a classification request.

Notably, the August 2025 press release confirms this read: the FDA's De Novo authorization for ArteraAI Prostate "establishes a new product code category for future AI-powered digital pathology risk-stratification tools." If Artera had been substantially equivalent to Paige under the existing § 864.3750 classification, there would be no new product code category to establish. The FDA appears to have created a parallel or expanded regulatory space for prognostic (as opposed to diagnostic-assistance) digital pathology software — though we'll need to see the De Novo decision summary and the resulting CFR amendment to know exactly how the agency drew the boundaries.

The 2024 Document: Yes, That's the LDT

Your suspicion about the 2024 patient guide is correct, and this is where the branding gets genuinely confusing. The 2024 "ArteraAI Prostate Test Guide" describes the Laboratory Developed Test (LDT) version of the product, which is and has been commercially available through Artera's CLIA-certified laboratory in Jacksonville, Florida. The disclaimer on the final page is explicit: the ArteraAI Prostate Test is a Laboratory Developed Test clinically available through a single CLIA-certified laboratory in Jacksonville, FL, and has not been cleared or approved by the U.S. Food and Drug Administration.

LDTs occupy a regulatory category historically subject to FDA enforcement discretion rather than premarket review — they are regulated as laboratory services under CLIA (overseen by CMS) when performed in a single high-complexity laboratory, rather than as distributed medical devices under the FD&C Act. The same underlying MMAI algorithm, validated on the same clinical trial data, has been offered this way for years, which is why the 2024 patient guide can describe sophisticated prognostic and predictive outputs (10-year distant metastasis risk, ST-ADT benefit prediction, abiraterone benefit insights for high-risk patients) without any FDA authorization in hand.

The August 2025 press release makes the parallel-track structure explicit. It states that the De Novo authorization applies specifically to the ArteraAI Prostate medical device software, while Artera's underlying MMAI platform is also commercially available through the ArteraAI Prostate Test as a Laboratory Developed Test (LDT). So as of mid-2025, Artera operates two regulatory pathways in parallel:

The ArteraAI Prostate (no "Test" suffix) is the FDA-authorized Software as a Medical Device. It received De Novo authorization on August 13, 2025, after an earlier Breakthrough Device Designation. It is designed to be deployed at qualified pathology laboratories at the point of diagnosis, and the authorization includes a Predetermined Change Control Plan allowing Artera to validate compatibility with additional WSI scanners without filing new 510(k)s — a meaningful operational advantage.

The ArteraAI Prostate Test is the LDT, performed centrally in Jacksonville, with samples shipped in. This is the version described in the 2024 patient guide, billed through the 0376U PLA CPT code, and reimbursed (per Artera's billing materials) with zero out-of-pocket cost under Medicare Part B. It remains, per the disclaimer, not FDA-cleared or -approved.

Why Run Both Tracks?

This dual-track strategy is increasingly common for sophisticated diagnostic software, and the logic is straightforward. The LDT pathway gets a clinically validated test to market quickly without the time and cost of FDA premarket review, generates revenue and real-world evidence, and supports the NCCN guideline inclusion that drives clinical adoption. But LDTs face an uncertain regulatory future — the FDA finalized a rule in April 2024 phasing in active oversight of LDTs as devices, and although the rule has faced legal challenges, the direction of travel is clear. An FDA-authorized SaMD version future-proofs the franchise, opens distributed deployment at outside pathology labs (rather than requiring slides be shipped to Jacksonville), and creates the "new product code category" that establishes Artera as the predicate for future entrants in AI-powered prognostic digital pathology.

The Bottom Line for Your Readers

The cleanest way to frame this for readers interested in the regulatory mechanics is that the FDA's § 864.3750 classification, triggered by Paige in 2021 and codified in 2023, covers digital pathology software that helps pathologists make a diagnosis. ArteraAI Prostate does something categorically different — it provides prognostic and predictive risk information for treatment selection, using both the image and structured clinical data. That intended-use mismatch closed the 510(k) door and pushed Artera to De Novo, where the FDA appears to have established a new product code for risk-stratification software rather than expanding the existing diagnostic-adjunct classification. Meanwhile, the same underlying algorithm has been clinically available all along as a Laboratory Developed Test out of Jacksonville, which is what the 2024 patient-facing guide describes and what most patients have actually received to date.

It is, as you say, a bit complicated — but the complication is genuinely informative about how AI-enabled diagnostics are navigating two distinct regulatory regimes simultaneously, and the Artera case is likely to be a template others follow.


###

https://pmc.ncbi.nlm.nih.gov/articles/PMC11195914/

Spratt et al. 2024 NEJM EVID   AI predictive model for hormone therapy use in prostate cancer.

2024 NEJM EVID SPRATT Artera Prostate.pdf

https://artera.ai/wp-content/uploads/Gerrard-et-al-analytical-validation-of-a-clinical-grade-prognostic-and-classification-artificial-intelligence-1-1.pdf

Gerrard et al. 2024 AI in Precis Oncol AV of Clinical Prosate AI Lab Test.pdf

https://www.thelancet.com/journals/landig/article/PIIS2589-7500(25)00067-6/fulltext

2024 Lancet Dig health Parker External Valid of AI Prostate Prognostic.pdf

https://www.urologytimes.com/view/study-to-evaluate-impact-of-arteraai-on-clinical-decision-making-for-prostate-cancer

2025 Urol Times Clarke Study on Dec Impact of Artera Prostate

https://www.ncbi.nlm.nih.gov/books/NBK594816/pdf/Bookshelf_NBK594816.pdf

NOTE: GENOMIC. 2023 VA EVID SYNTH Program Boyer Genomic Prostate Classifiers 175p

https://artera.ai/news/journal-of-clinical-oncology-publishes-significant-evidence-in-advancing-racial-equity-in-cancer-care-through-arteras-multimodal-ai-model

2025 JCO Paper on Artera and Equity DEI PRESS

https://ascopubs.org/doi/abs/10.1200/JCO.24.00365

2025 JCO Armstrong Devel and Valid of AI Dig Path Prostate Cancer

###

Excellent job of sorting this out.  Now please provide a scientific discussion of the data and goals of Artera's publications.  I think these are all related but I'm not completely positive.  Also use the FDA's own 24-page evidence review.

###

Note: I had to switch to Chat GPT for evidence review as I hit a document and "image" limit in Opus 4.7.

###

Bottom line

Yes — these documents appear to be closely related pieces of one evidence program, but they are not all the same claim. Artera’s broad scientific strategy is to establish a multimodal AI platform — digital H&E prostate biopsy images plus clinical variables — as a clinical decision support biomarker across prostate cancer care. The publications and FDA evidence review support several progressively different claims:

  1. Prognostic validity — risk of distant metastasis and prostate cancer–specific mortality.

  2. Predictive validity — who benefits from adding or extending androgen deprivation therapy.

  3. Analytical validity — whether the AI-lab workflow is reproducible as a clinical laboratory test.

  4. Generalizability / equity — whether performance holds across racial subgroups.

  5. Clinical utility / implementation — whether clinicians and patients actually change decisions, being studied prospectively in DIRECT-AI.

  6. Regulatory-grade evidence — FDA’s narrower authorized claim for ArteraAI Prostate as a SaMD: 10-year prognostic risk estimates for DM and PCSM in non-metastatic prostate cancer, not the full commercial LDT menu of predictive treatment insights.

1. The core scientific thesis: digital H&E contains latent prognostic and predictive biology

The central idea in the Artera literature is that routine prostate biopsy slides contain more information than conventional pathology reports extract. Instead of asking the pathologist or algorithm to identify a pre-specified histologic feature, the model learns patterns from whole-slide H&E images and combines them with clinical data such as age, PSA, T stage, and, in the original models, Gleason variables. This makes the test conceptually different from a genomic classifier, an IHC assay, or a computer-aided Gleason grading tool. It is an outcome-trained morphology-plus-clinical-data model.

The FDA review describes the device as a software-only AI/ML test evaluating scanned H&E prostate core biopsy whole-slide images to provide 10-year risk estimates for distant metastasis and prostate cancer–specific mortality. FDA’s intended-use population is relatively specific: males 55 years or older, treatment-naïve, non-metastatic prostate cancer, candidates for curative-intent management, with images obtained from authorized or PCCP-qualified whole-slide scanners.

The clinical product guide presents a broader commercial narrative: the ArteraAI Prostate Test is framed as providing risk stratification across NCCN risk groups, short-term ADT benefit prediction in intermediate-risk disease, active-surveillance insights in lower-risk disease, and abiraterone insights in high/very-high-risk disease. That is broader than the FDA-authorized SaMD claim and appears to include LDT/commercial claims as well as claims whose supporting evidence is still being extended.

2. Foundational predictive evidence: short-term ADT benefit

The predictive evidence base begins with Spratt et al. (2023). This NEJM Evidence paper developed an AI model to identify which localized prostate cancer patients benefit from adding short-term ADT to radiotherapy. The authors used pretreatment prostate digital pathology images and clinical data from 5,727 patients enrolled in five phase III randomized trials. The model was locked and then validated in NRG/RTOG 9408, which randomized patients to radiotherapy with or without 4 months of ADT.

The key scientific objective was not simply “risk prediction.” It was treatment-effect prediction. In the validation cohort, 543 patients (about 34%) were model-positive and had reduced distant-metastasis risk with ADT; 1,051 were model-negative and did not show benefit. This is the high-value claim because ADT has meaningful morbidity, and conventional risk groups are largely prognostic rather than truly predictive.

In evidence-strategy terms, Spratt et al. (2023) tries to move Artera beyond the crowded “prostate cancer prognostic classifier” category. The paper’s purpose is to say: this is not merely another Decipher-like or NCCN-plus risk tool; it can identify a subgroup for whom ADT is worth the toxicity, and a larger subgroup for whom ADT may be avoidable.

3. High-risk disease: long-term versus short-term ADT

Armstrong et al. (2025) extends the predictive concept to a different and clinically important question: in high-risk localized/locally advanced prostate cancer, can AI identify who needs long-term ADT rather than short-term ADT? The study trained a biomarker using six NRG phase III randomized radiotherapy trials and validated it in RTOG 9202, where patients were randomized to radiotherapy plus 4 months versus 28 months of ADT.

The overall RTOG 9202 validation cohort showed that long-term ADT reduced the distant-metastasis rate from 26% to 17%. But the AI biomarker separated patients into two groups: biomarker-positive men had reduced distant metastasis with long-term ADT, while biomarker-negative men did not show benefit. The 15-year distant-metastasis risk difference was reported as 14% in biomarker-positive men and 0% in biomarker-negative men. The paper concludes that roughly one third of high-risk patients could potentially avoid the additional 24 months of ADT morbidity.
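These reported risks translate directly into the absolute-risk arithmetic clinicians use when weighing ADT duration. A minimal sketch (the function name is mine; the percentages are only those reported above):

```python
# Number needed to treat (NNT) from an absolute risk reduction (ARR),
# using the distant-metastasis figures reported for RTOG 9202 above.

def nnt(risk_difference: float) -> float:
    """Patients who must receive the therapy for one to avoid the event;
    infinite when the therapy makes no difference."""
    return float("inf") if risk_difference == 0 else 1.0 / risk_difference

# Overall cohort: long-term ADT cut distant metastasis from 26% to 17%.
overall = nnt(0.26 - 0.17)   # ~11 men on long-term ADT per metastasis avoided

# Biomarker-positive men: reported 15-year risk difference of 14 points.
positive = nnt(0.14)         # ~7

# Biomarker-negative men: reported risk difference of 0 points.
negative = nnt(0.0)          # infinite: 24 extra months of ADT, no DM benefit
```

On these numbers, concentrating long-term ADT in biomarker-positive men lowers the NNT from about 11 to about 7 while sparing the negative group entirely, which is the quantitative core of the paper's "one third could avoid prolonged ADT" conclusion.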

Strategically, this is a very strong “therapy personalization” claim because it addresses a familiar clinical tradeoff: durable cancer control versus years of ADT toxicity. Scientifically, it also gives Artera a second randomized-trial-validated predictive use case, not just a single ADT yes/no scenario.

4. Prognostic expansion into advanced prostate cancer: STAMPEDE

Parker et al. (2025) tests whether the locked ArteraAI Prostate MMAI algorithm remains prognostic in a much more advanced disease context: patients with metastatic or very high-risk non-metastatic disease starting long-term ADT in the STAMPEDE platform trials. This was a post-hoc ancillary biomarker study using four phase III randomized STAMPEDE comparisons involving docetaxel, docetaxel plus zoledronic acid, abiraterone, and abiraterone plus enzalutamide.

In 3,167 included patients, the MMAI score was strongly associated with prostate cancer–specific mortality: HR 1.40 per standard deviation increase. The highest quartile had higher PCSM risk in both non-metastatic and metastatic disease. The paper further showed that MMAI added stratification beyond existing disease-burden categories: for example, node-negative non-metastatic patients were split into 5-year PCSM estimates of 3% for Q1–3 versus 11% for Q4; high-volume metastatic patients were split into 48% versus 68%.
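For readers parsing the statistics, a per-SD hazard ratio implies multiplicative scaling of hazard with the MMAI score under a proportional-hazards model. A small illustrative sketch (this is an interpretation of the reported statistic, not Artera's model):

```python
# Interpreting "HR 1.40 per SD": under proportional hazards, each additional
# standard deviation of MMAI score multiplies the PCSM hazard by 1.40.
HR_PER_SD = 1.40  # as reported in Parker et al. (2025)

def hazard_ratio(delta_sd: float) -> float:
    """Hazard ratio for a patient delta_sd standard deviations
    above a reference patient."""
    return HR_PER_SD ** delta_sd

print(hazard_ratio(1.0))   # 1.40 by definition
print(hazard_ratio(2.0))   # 1.96: two SDs above roughly doubles the hazard
print(hazard_ratio(-1.0))  # ~0.71: one SD below cuts the hazard by ~29%
```

This compounding is why a modest-looking per-SD hazard ratio can still produce the wide quartile splits reported above.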

This paper’s goal is different from Spratt and Armstrong. It is less about “who should get ADT?” and more about whether the digital pathology signal is a general prostate cancer aggressiveness signal across the disease spectrum. The authors explicitly frame the implication as a scalable digital pathology biomarker that could stratify very high-risk and metastatic patients starting ADT with novel hormonal drugs or chemotherapy.

5. Analytical validation: making the AI output into a lab test

Gerrard et al. (2024) is important because it addresses a different question: not “does the biomarker predict outcomes?” but “can this be run reproducibly as a clinical test?” The paper argues that conventional analytical validation frameworks are awkward for AI on H&E because H&E is not a specific molecular probe, and the clinically meaningful “biomarker” is the algorithm output, not a directly measured molecule or pathologist-identifiable feature.

The study evaluated two algorithms: a prognostic algorithm and a short-term ADT predictive classification algorithm. It reported high analytical accuracy and reliability: analytical accuracy ICC of 0.991 for the prognostic algorithm and 0.934 for the ST-ADT algorithm; intra-operator reliability ICC of 0.981 with 100% agreement; inter-operator reliability ICC of 0.994 with 93.3% agreement; and reasonable biopsy-completeness reliability across one versus three or six cores.
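For context on the ICC statistic itself, here is a minimal stdlib-only sketch of a two-way random-effects, absolute-agreement ICC (the Shrout–Fleiss ICC(2,1) form commonly reported in analytical validation) on invented toy scores; it is not Artera's code, and Gerrard et al. may use a different ICC variant:

```python
# ICC(2,1): two-way random effects, absolute agreement, single measurement.
# Toy data: 5 hypothetical patient risk scores, each read by 2 operators.

def icc_2_1(scores):
    """scores: list of rows (subjects), each a list of k rater values."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)  # raters
    sst = sum((x - grand) ** 2 for row in scores for x in row)
    sse = sst - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

toy = [[0.10, 0.11], [0.30, 0.29], [0.50, 0.52], [0.70, 0.69], [0.90, 0.91]]
print(round(icc_2_1(toy), 3))  # near-perfect agreement -> ICC close to 1
```

The intuition: ICC near 1 means nearly all score variance comes from real differences between patients rather than from which operator ran the test, which is exactly the property an AI lab test must demonstrate.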

This paper is strategically useful for FDA, payers, lab directors, and skeptics because it provides a framework for validating a patient-level AI test that does not detect a conventional analyte. It also anticipates a recurring objection: “What exactly is the analyte?” Gerrard’s answer is effectively: for this class of AI pathology tests, the clinically validated algorithmic output is the measured biomarker.

6. FDA evidence review: narrower, regulatory-grade claim

The FDA 24-page review is central because it shows what FDA was actually willing to authorize. FDA classifies ArteraAI Prostate under product code SFH, Class II, regulation 21 CFR 864.3755, as a “software algorithm device analyzing digital images for cancer prognosis.” The authorized test type is evaluation of scanned H&E prostate needle biopsy WSIs by AI/ML to provide 10-year risk estimates of distant metastasis and prostate cancer–specific mortality.

FDA’s device output is not the entire commercial Artera test menu. The FDA-authorized output includes:

  1. 10-year categorical risk for distant metastasis (high, intermediate, low);

  2. individual 10-year distant-metastasis risk for the low and intermediate groups; and

  3. 10-year categorical risk for prostate cancer–specific mortality (high, intermediate, low).
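The shape of that authorized output can be summarized in a short illustrative sketch (a hypothetical container; the field names are invented here, not FDA's, and the key constraint is that an individual risk estimate accompanies only the low and intermediate DM categories):

```python
from dataclasses import dataclass
from typing import Optional

RISK_CATEGORIES = ("low", "intermediate", "high")

@dataclass
class AuthorizedOutput:
    """Illustrative container mirroring the FDA-authorized outputs above.
    Field names are invented for this sketch; they are not FDA's or Artera's."""
    dm_category: str                            # 10-year distant-metastasis category
    pcsm_category: str                          # 10-year PCSM category
    dm_individual_risk: Optional[float] = None  # only for low/intermediate DM

    def __post_init__(self):
        if self.dm_category not in RISK_CATEGORIES:
            raise ValueError(f"bad DM category: {self.dm_category}")
        if self.pcsm_category not in RISK_CATEGORIES:
            raise ValueError(f"bad PCSM category: {self.pcsm_category}")
        # Per the description above, no individual risk estimate accompanies
        # a high-risk DM categorization.
        if self.dm_category == "high" and self.dm_individual_risk is not None:
            raise ValueError("individual DM risk only for low/intermediate")

report = AuthorizedOutput("intermediate", "low", dm_individual_risk=0.12)
```

The point of the sketch is the asymmetry: the authorized device reports categorical risk for both endpoints but a patient-level numeric estimate only within the lower DM strata.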

FDA’s clinical performance study included 886 patients across three U.S. sites. In the benefit-risk section, FDA reports that the 10-year DM risk for the high-risk category was 28.1%, significantly higher than the overall 8.1%, and that the low-risk category was 3.3%, significantly lower than the overall risk. FDA’s benefit-risk framing is sober: the benefit is better risk-informed treatment decisions and possible reduction in under- or overtreatment; the main risk is erroneous results or misinterpretation, mitigated by analytical/clinical performance, labeling, and the fact that results are used with standard-of-care evaluation rather than as the sole decision factor.

This is a key distinction for your purposes: FDA authorized the prognostic risk device, while Artera’s broader publication strategy supports a larger platform vision that includes predictive therapy selection, ADT duration, active surveillance, and treatment intensification.

7. Equity / bias validation

The Artera equity press material describes a JCO Clinical Cancer Informatics publication focused on whether the MMAI model performs similarly across African American and non-African American patients. The stated study included 5,708 patients from five randomized phase III trials and found the algorithm predicted distant metastasis and PCSM in both groups. The company frames this as the first comparative analysis of a large digital pathology AI prognostic model in African American versus non-African American prostate cancer patients.

The scientific goal here is less about discovering a new clinical use and more about defensibility and trust. AI tools are vulnerable to the objection that they encode dataset bias or perform poorly in underrepresented groups. This publication attempts to show that Artera’s model is not merely “high AUC in the overall cohort,” but robust across a clinically important racial subgroup. It also supports the company’s broader strategy of building evidence for adoption by academic centers, guideline committees, payers, and regulators.

8. Clinical utility and the DIRECT-AI registry

The Urology Times article describes the DIRECT-AI registry, which is designed to assess real-world clinical decision-making and longer-term outcomes after ArteraAI testing. Phase 1 collects clinician and patient survey feedback on treatment recommendations and treatment selection; Phase 2 monitors outcomes at 2 and 5 years, including distant metastasis, PCSM, overall survival, adverse pathology at prostatectomy, and treatments received. Patients are excluded if they have already begun treatment.

This is the natural next step. Retrospective validation on randomized trials is powerful for clinical validity, especially predictive claims, but payers and guideline bodies often still ask: does the test actually change decisions, reduce overtreatment, preserve outcomes, or improve quality of care? DIRECT-AI appears aimed at answering the clinical utility question in real-world practice.

9. Evidence strategy: what Artera seems to be building

The evidence strategy is coherent and staged.

First, Artera builds credibility on phase III randomized trial archives rather than small single-institution datasets. That is a major differentiator in AI pathology. The recurrent use of NRG/RTOG and STAMPEDE trial biospecimens gives the platform more credibility than a typical retrospective image-AI model trained on convenience datasets.

Second, Artera separates prognostic and predictive claims. Prognostic claims support general risk stratification: who is more likely to metastasize or die of prostate cancer? Predictive claims support treatment personalization: who benefits from ADT, longer ADT, or possibly intensification? This distinction matters scientifically and commercially. Prognostic risk can be useful, but predictive benefit is much more actionable.

Third, Artera is trying to define a new regulatory and laboratory category: not AI that helps a pathologist find cancer, and not a molecular test that measures a named analyte, but AI digital pathology that estimates patient-level outcome risk from ordinary H&E plus clinical inputs. Gerrard et al. provides the laboratory validation philosophy, and FDA’s De Novo review provides the regulatory instantiation.

Fourth, Artera is widening from “localized prostate cancer ADT decisions” toward a broader prostate cancer spectrum platform. Parker et al. in STAMPEDE supports prognostic validity in very high-risk and metastatic disease; the product guide and Urology Times materials point toward active surveillance and abiraterone-related insights.

Fifth, the company is addressing adoption objections: analytical reliability, racial subgroup performance, real-world clinical utility, FDA authorization, and scanner expansion through PCCP. The FDA-related press release states that the De Novo authorization includes a predetermined change control plan allowing expansion to additional digital pathology scanners through validation rather than new 510(k) submissions.

10. Scientific limitations and caveats

The evidence base is unusually strong for an AI pathology test, but it is not free of limitations. First, much of the clinical evidence relies on retrospective ancillary analyses of randomized trials. That is far better than many retrospective real-world cohorts, but it is still not the same as a prospective randomized “test-directed management” trial.

Second, the FDA-authorized claim is prognostic, while the commercial and publication ecosystem includes predictive claims for ST-ADT and LT-ADT. Those predictive claims may be well supported by randomized trial validation, but they should not be conflated with the narrower FDA SaMD indication. The FDA review itself focuses on 10-year DM and PCSM risk outputs and benefit-risk for risk-informed decision-making.

Third, prostate cancer practice has changed. The STAMPEDE paper itself notes limitations related to modern staging and biopsy patterns, including increased use of PSMA-PET and MRI-targeted biopsy, which could affect how historical trial-derived models generalize to current patients.

Fourth, the commercial product guide includes active-surveillance and abiraterone insights. Those may be plausible extensions of the platform, but based on the documents you provided, those claims appear less mature than the core randomized-trial ADT prediction and FDA-reviewed prognostic-risk claims. The DIRECT-AI registry seems designed to help fill the clinical-utility gap.


Publication-by-publication interpretation

Esteva et al. (2022) — foundational platform publication. Cited by Artera and Gerrard as “Prostate cancer therapy personalization via multi-modal deep learning on randomized phase III clinical trials.” This appears to be the original broad proof-of-concept that multimodal deep learning using digital pathology plus clinical data can personalize prostate cancer therapy using randomized phase III trial datasets.

Spratt et al. (2023) — short-term ADT predictive model. This is one of the strongest clinical-action papers: it asks whether AI can distinguish intermediate-risk localized prostate cancer patients who benefit from adding short-term ADT to radiotherapy from those who do not. Validation in RTOG 9408 makes the paper central to Artera’s “predictive, not merely prognostic” story.

Ross et al. (2024) — external validation in NRG/RTOG 9902. The full paper was not among the uploaded articles, but it is cited in Gerrard and Parker as an external validation of the digital pathology multimodal AI architecture in the NRG/RTOG 9902 phase III trial.

Tward et al. (2024) — phase III trial risk stratification using multimodal deep learning. The full paper was not uploaded, but Parker cites it as “Prostate cancer risk stratification in NRG oncology phase III randomized trials using multimodal deep learning with digital histopathology,” published in JCO Precision Oncology. It appears to support the prognostic-risk backbone of the Artera model.

Gerrard et al. (2024) — analytical validation. This is the lab-test translation paper. It defines how to validate an AI test whose meaningful output is a patient-level risk or classification, rather than a visible feature or molecule. It supports reproducibility, reliability, and analytical accuracy for the clinical-grade test.

Armstrong et al. (2025) — long-term ADT duration prediction. This extends the predictive ADT logic into high-risk disease and shows that a model can identify men who benefit from 28 months rather than 4 months of ADT, potentially sparing about one third of high-risk patients prolonged ADT morbidity.

Parker et al. (2025) — STAMPEDE external validation in advanced disease. This shows that the locked ArteraAI Prostate model carries prognostic information in very high-risk and metastatic prostate cancer, beyond radiologic disease burden. It supports the idea that biopsy H&E contains broad aggressiveness information across the prostate cancer continuum.

Roach et al. / JCO Clinical Cancer Informatics equity study (2025) — racial subgroup validation. Based on the press material, this study evaluates whether model performance is similar in African American and non-African American men using 5,708 patients from five randomized phase III trials. Its role is to support fairness, generalizability, and trust in AI deployment.

FDA DEN240068 review (2024/2025 authorization materials) — regulatory validation. FDA’s review is not a journal article but is arguably the most important evidence synthesis for the authorized SaMD. It narrows the claim to non-metastatic prostate cancer prognosis, documents analytical and clinical validation, and concludes that benefits outweigh risks under Class II special controls.


Formatted bibliography

Armstrong AJ, Liu VYT, Selvaraju RR, Chen E, Simko JP, DeVries S, Sartor O, et al. 2025. Development and Validation of an Artificial Intelligence Digital Pathology Biomarker to Predict Benefit of Long-Term Hormonal Therapy and Radiotherapy in Men With High-Risk Prostate Cancer Across Multiple Phase III Trials. Journal of Clinical Oncology. 43:3494–3504. DOI: 10.1200/JCO.24.00365.

Esteva A, Feng J, van der Wal D, et al. 2022. Prostate Cancer Therapy Personalization via Multi-Modal Deep Learning on Randomized Phase III Clinical Trials. NPJ Digital Medicine. 5:71. DOI: 10.1038/s41746-022-00613-w.

Food and Drug Administration. 2025. Evaluation of Automatic Class III Designation for ArteraAI Prostate: Decision Summary. DEN240068. U.S. Food and Drug Administration.

Gerrard P, Zhang J, Yamashita R, Huang H-C, Nag S, Nhek S, Kish J, Cole A, Silberman N, Royce TJ, Showalter T. 2024. Analytical Validation of a Clinical Grade Prognostic and Classification Artificial Intelligence Laboratory Test for Men with Prostate Cancer. AI in Precision Oncology. 1(2):119–126. DOI: 10.1089/aipo.2024.0004.

Parker CTA, Mendes L, Liu VYT, Grist E, Joun S, Yamashita R, Mitani A, Chen E, et al. 2025. External Validation of a Digital Pathology-Based Multimodal Artificial Intelligence-Derived Prognostic Model in Patients with Advanced Prostate Cancer Starting Long-Term Androgen Deprivation Therapy: A Post-Hoc Ancillary Biomarker Study of Four Phase 3 Randomised Controlled Trials of the STAMPEDE Platform Protocol. Lancet Digital Health. 7:100885. DOI: 10.1016/j.landig.2025.100885.

Roach M III, et al. 2025. Validation Study of Artera’s Multimodal Artificial Intelligence Model Across African American and Non-African American Patients with Prostate Cancer. JCO Clinical Cancer Informatics. Exact title not available in the uploaded press material.

Ross AE, Zhang J, Huang H-C, et al. 2024. External Validation of a Digital Pathology-Based Multimodal Artificial Intelligence Architecture in the NRG/RTOG 9902 Phase 3 Trial. European Urology Oncology. 7:1024–1033. DOI: 10.1016/j.euo.2024.01.004.

Spratt DE, Tang S, Sun Y, Huang H-C, Chen E, Mohamad O, Armstrong AJ, Tward JD, Nguyen PL, Lang JM, et al. 2023. Artificial Intelligence Predictive Model for Hormone Therapy Use in Prostate Cancer. NEJM Evidence. 2(8):EVIDoa2300023. DOI: 10.1056/EVIDoa2300023.

Tward JD, Huang H-C, Esteva A, et al. 2024. Prostate Cancer Risk Stratification in NRG Oncology Phase III Randomized Trials Using Multimodal Deep Learning with Digital Histopathology. JCO Precision Oncology. 8:e2400145.

##

More regulatory history

864.3750 was used by Paige and published in Fed Reg 2/2/2023.

For Artera Prostate DEN, FDA created 864.3755, but this has NOT yet appeared in Fed Reg or CFR.

However, under 864.3755 FDA has created a product classification, SHW, for breast cancer prognostic software.  This probably corresponds to the brand-new press release a few days ago from Artera about a breast cancer product.  See SHW here:

https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpcd/classification.cfm?id=SHW

###

The ArteraAI Prostate De Novo and the Birth of 21 CFR 864.3755: A Regulatory History in Real Time

The regulatory story behind ArteraAI Prostate's July 2025 FDA De Novo authorization is more architecturally interesting than it first appears, and getting it right requires distinguishing between three different things that are easy to conflate: the underlying CFR regulation that governs a device type, the De Novo classification order that creates or extends such a regulation, and the Federal Register notice that formally codifies the new regulation into the Code of Federal Regulations. These three steps unfold on different timelines, and the gap between them is precisely where the current ArteraAI story sits — and where a second Artera product, the company's breast cancer test, appears to be entering the picture as well.

The Paige Precedent: The Full Cycle, Completed

The Paige Prostate De Novo, granted by the FDA on September 21, 2021, created a brand-new generic device type for which no prior CFR classification existed. The agency issued its De Novo classification order to Paige.AI under section 513(f)(2) of the FD&C Act, designating the generic type as "software algorithm device to assist users in digital pathology." That order was an enforceable regulatory act from the moment it issued — Paige Prostate was lawfully on the U.S. market starting in 2021, well before any CFR text existed describing the device type.

The codification step came later. On February 2, 2023, the FDA published its final amendment in the Federal Register at 88 FR 7007, formally adding 21 CFR 864.3750 to Subpart D of Part 864 (Hematology and Pathology Devices, Pathology Instrumentation and Accessories). That section, identified as "Software algorithm device to assist users in digital pathology," contains the identification language describing the device as an in vitro diagnostic intended to evaluate scanned WSIs and assist the user in determining a pathology diagnosis, along with the special controls in subsection (b) covering labeling, design verification and validation, analytical performance, and clinical validation. The full cycle from De Novo order to Federal Register codification took roughly seventeen months.

The ArteraAI Prostate De Novo: A New Regulation in Gestation

The FDA's Device Classification database lists ArteraAI Prostate under De Novo number DEN240068, dated July 31, 2025, with the device classification name "pathology software algorithm device analyzing digital images for cancer prognosis" — a distinct generic type from Paige's "software algorithm device to assist users in digital pathology." Critically, the FDA's own materials cite this new generic type to 21 CFR 864.3755, a section that, as of May 2026, does not yet exist in the codified CFR.

The eCFR table of contents confirms this: Subpart D currently runs through § 864.3750 ("Software algorithm device to assist users in digital pathology"), then § 864.3800 ("Automated slide stainer"), then § 864.3875 ("Automated tissue processor"). There is no § 864.3755 in the codified regulation. Yet the FDA is already using that section number administratively to identify the new generic device type created by the ArteraAI De Novo order. The number has been reserved and assigned by the agency for use in its forthcoming Federal Register amendment, but the amendment itself hasn't been published yet.

This is procedurally normal but worth understanding clearly. Under section 513(f)(2) of the FD&C Act, a De Novo classification takes legal effect when the FDA issues the classification order to the requester. The agency then has an obligation to codify the new classification into the CFR, but that codification — which requires drafting the regulatory text, finalizing the special controls, and going through Federal Register publication — typically lags the De Novo order by twelve to twenty-four months, as the Paige timeline illustrated. During that interval, the device is fully and lawfully marketed, the De Novo decision summary and product classification database entry are publicly available, and the FDA uses the assigned CFR section number in its own documents — but anyone consulting the actual Code of Federal Regulations will find that section blank or absent. That is exactly the state § 864.3755 is in right now.

Why a New Section Rather Than an Expansion of § 864.3750

The FDA's choice to create a new CFR section rather than amend § 864.3750 is the substantive regulatory signal here, and it tells us how the agency now thinks about digital pathology AI. The Paige regulation at § 864.3750 was written to govern diagnostic-adjunct software — tools that help a pathologist identify and localize lesions on a WSI to support rendering a diagnostic report. Its identification language describes a device that provides information about "presence, location, and characteristics of areas of the image with clinical implications" to "assist the user in determining a pathology diagnosis," and its special controls require labeling stating the device is used "as an adjunct" alongside "complete standard of care evaluation of the WSI."

ArteraAI Prostate does something categorically different. It analyzes WSIs from treatment-naïve prostate core needle biopsies — combined with structured clinical data — to generate a prognostic risk score for long-term oncologic outcomes (10-year distant metastasis and prostate-cancer-specific mortality); the commercial LDT additionally offers predictive insights about ST-ADT and abiraterone benefit, though those sit outside the FDA-authorized output. The output is not a diagnostic aid but a risk stratification feeding treatment selection by urologists and radiation oncologists downstream of the pathologist's diagnostic report. The new generic type name the FDA assigned — "pathology software algorithm device analyzing digital images for cancer prognosis" — makes the distinction explicit by substituting "for cancer prognosis" where § 864.3750 reads "to assist users in digital pathology."

The agency apparently concluded that the diagnostic-adjunct identification language in § 864.3750 simply could not be stretched to govern prognostic risk-stratification software without doing violence to the regulation's text, and that the appropriate special controls for prognostic software differ enough from those for diagnostic-adjunct software to justify a separate codification. Rather than amending § 864.3750, the FDA reserved a new section number — § 864.3755, slotting it immediately after the existing diagnostic-adjunct regulation — to host the new generic type.

§ 864.3755 Is Already Working: The SHW Breast Cancer Product Code

Here is where the architecture becomes visible as a deliberate platform rather than a one-off accommodation for Artera's prostate test. The FDA's Product Classification database already contains a separate entry under § 864.3755 with product code SHW, titled "Pathology Software Algorithm Device Analyzing Digital Images For Breast Cancer Prognosis." The device definition explicitly mirrors the prostate framework: "A pathology software algorithm device analyzing digital images for cancer prognosis is a software intended to analyze scanned whole slide images (WSIs) from breast cancer specimens prepared from formalin fixed paraffin-embedded (FFPE) tissue and stained using Hematoxylin & Eosin (H&E) stains. The device provides prognostic risk estimates which are intended to assist physicians with prognostic risk-based decisions along with other clinicopathological factors. The device is not intended to determine a clinical diagnosis."

Several details in this SHW entry are revealing. The regulation number is § 864.3755. The device class is II. The submission type is listed as 510(k) — not De Novo — which means the FDA already considers there to be a predicate device under § 864.3755 capable of supporting a substantial equivalence determination. The premarket review is assigned to the Office of In Vitro Diagnostics (OHT7), Division of Molecular Genetics and Pathology (DMGP), the same division that handled ArteraAI Prostate. The technical method ("Analyzes digitized pathology glass slide images using machine learning algorithms to provide prognostic risk estimates") and the example output ("Distance Metastasis") track closely to the ArteraAI Prostate framework.

The natural inference — and your timing observation makes this nearly certain — is that the SHW product code under § 864.3755 has been established to receive Artera's recently announced FDA-cleared breast cancer test. The 2025 prostate-cancer press release in the file already confirms that Artera offers "the ArteraAI Breast Test (UKCA)" internationally, meaning the company has an existing breast cancer MMAI product validated under UK regulatory authorization and a clear commercial intent to bring it to the U.S. market. With ArteraAI Prostate now serving as the founding De Novo predicate under § 864.3755 for prognostic digital pathology AI generally (not just for prostate cancer), an ArteraAI Breast submission can come in as a much faster, less burdensome 510(k) — same regulation, same generic type, same special controls, different tumor site and clinical claim — rather than requiring its own De Novo. If Artera has indeed just announced FDA clearance for its breast cancer test in the past few days, the SHW product code is almost certainly the regulatory home that clearance was granted under.

This is precisely the regulatory leverage that the De Novo-plus-predicate structure is designed to create. The Artera prostate De Novo did the heavy lifting — establishing the new generic type, defining the special controls, and creating the predicate — and now any prognostic digital pathology AI product, from Artera or from competitors, can ride that predicate into the market through 510(k) for a fraction of the time, cost, and clinical evidence burden a De Novo would have required.

Why 510(k) Was Never on the Table for ArteraAI Prostate Itself

With this architecture now clear, the question of why ArteraAI Prostate didn't proceed as a 510(k) citing Paige as predicate has a doubly reinforced answer. First, under section 513(i) of the FD&C Act, substantial equivalence requires the same intended use, and a diagnostic-adjunct intended use is not the same as a prognostic risk-stratification intended use — even setting CFR classification aside, the 510(k) door was closed on intended-use grounds alone. Second, the FDA has formally placed the two device types in separate generic classifications under separate CFR sections. Paige Prostate lives under § 864.3750; ArteraAI Prostate will live under § 864.3755 once codified. They are not even the same kind of device in the FDA's regulatory taxonomy, and a 510(k) cannot cross between generic types.

By contrast, an ArteraAI Breast submission under § 864.3755, with the same prognostic intended use applied to a different tumor site, does satisfy the same-generic-type and same-intended-use requirements relative to ArteraAI Prostate. The breast test, even if it generates different specific risk outputs for different clinical decisions, is the same kind of device — prognostic digital pathology AI — and so 510(k) is the appropriate pathway for it. That's exactly what the SHW product code entry shows the FDA expecting.

The Documentary State of Play Right Now

For readers consulting primary sources, the current state of play is this. The Federal Register contains the 2023 codification of § 864.3750 at 88 FR 7007, governing Paige and diagnostic-adjunct digital pathology software. The Code of Federal Regulations, as of May 8, 2026, contains § 864.3750 but no § 864.3755 — that section number is reserved but the regulatory text has not yet been published. The FDA's Device Classification database, however, already lists DEN240068 (ArteraAI Prostate) under § 864.3755 with the generic device name "pathology software algorithm device analyzing digital images for cancer prognosis," dated July 31, 2025. The FDA's Product Classification database additionally lists product code SHW under the same § 864.3755 for breast cancer prognosis software, structured to receive 510(k) submissions — the apparent regulatory home for Artera's just-announced breast cancer test clearance. A Federal Register notice formally codifying § 864.3755, including its identification language and special controls, can be expected in the next twelve to twenty-four months based on the Paige precedent — likely sometime in 2026 or 2027.

The takeaway for your readers is that the FDA's digital pathology AI regulatory architecture is being built out in real time and in segments, and Artera has positioned itself unusually well within that architecture. The 2023 Paige codification established the diagnostic-adjunct category at § 864.3750. The 2025 ArteraAI Prostate De Novo is establishing the prognostic risk-stratification category at § 864.3755, even though the regulatory text won't appear in the CFR for some time yet. And the SHW product code already operating under that uncodified § 864.3755 — almost certainly the home of Artera's new breast cancer clearance — demonstrates that the regulation is already functioning as a working predicate framework for an expanding family of prognostic digital pathology AI tools across tumor types. That bifurcation between diagnostic-adjunct AI and prognostic AI, and the rapid maturation of the prognostic side from a single De Novo into a multi-product 510(k)-ready category in under a year, is the substantive regulatory choice the FDA has made, and it's likely to shape how the next wave of AI pathology tools is structured, classified, and reviewed for years to come.

###
CODA

The structure of the story made the section title write itself, honestly. Once you'd surfaced that the SHW breast cancer entry was already live under § 864.3755 in the FDA's Product Classification database, the architectural point became visible: a regulation that doesn't yet exist in the CFR is nonetheless doing work in the FDA's operational systems, and doing it for a second product on the verge of launch. The gap between "codified" and "operative" is the whole point, and the SHW entry is the cleanest possible illustration of it.

It was also a genuinely interesting puzzle to work through across the conversation — I got the structural inference wrong twice before the evidence forced the right reading, first when I assumed Artera would be 510(k)-eligible under Paige, then when I read the absence of a § 864.3750 amendment as pointing toward a single-CFR-section-with-two-product-codes architecture rather than toward a parallel uncodified section. Each correction came from you bringing in a primary-source detail (the 24-page Artera FDA review citing back to one regulation, then the Google results showing § 864.3755 explicitly, then the SHW entry confirming the new section was already a working platform). That's the right way for this kind of regulatory reconstruction to go — the documents lead and the framing adjusts to fit them, not the reverse.

The Artera story is also genuinely worth your readers' attention beyond the company itself. The FDA has, in under four years, built out the foundational regulatory architecture for two distinct categories of digital pathology AI — diagnostic-adjunct at § 864.3750 and prognostic at § 864.3755 — each anchored by a founding De Novo and each now capable of supporting 510(k) entrants. That's a faster and more deliberate buildout than most observers were expecting, and it suggests the agency has a clearer internal model of where this technology is going than the public-facing guidance documents would suggest. Worth flagging for readers who track this space.

Sunday, May 10, 2026

Brian Cotter Explains Ins and Outs of DRGs

May 2026

LinkedIn article by Brian Cotter

Ins and Outs of DRGs

See: Brian Cotter's active series on these topics


#####

$48,235 for DRG 470.
That is what the TiC file shows,
Not what the claim will pay. 👇

The TiC file shows rates.
It does not show what actually gets allowed on the claim.

Here's what's actually going on:

1️⃣ Different contract setups drive DRG logic.

A DRG rate may come from a case rate, a percent of charges, a percent of Medicare, a per diem, or a payer-specific formula. Those are very different financial arrangements hiding under the same "DRG rate" label.


2️⃣ Many DRG rates are formulas, not fees.

The number in the file may be the output of a contract formula, not a fixed price that applies cleanly to every claim. Without the formula logic, you may only be seeing the surface.


3️⃣ Percent-of-charges rates vary by claim.

If the contract pays a percentage of charges, the actual payment depends on the hospital's billed charges for that specific patient. The file may show the percentage, but not the charge stack that creates the final dollar amount.


4️⃣ Medicare-based rates can hide adjustment logic.

A Medicare-based rate may depend on which version of Medicare is being referenced, which adjustments are included, and whether the payer applies the logic the same way CMS would. "Percent of Medicare" is not one universal number.


5️⃣ Patient-level charges are not fully shown.

The file does not recreate the full itemized hospital bill. It gives you a rate disclosure, not every room charge, supply, implant, drug, lab, imaging service, or ancillary item tied to a specific admission.


6️⃣ Same DRG can mean different services.

Two patients can land in the same DRG but have very different clinical paths, lengths of stay, supplies, drugs, implants, complications, and resource use. Same DRG does not mean same economic reality.


7️⃣ Per diem shows daily rates, not totals.

If a contract pays by the day, the disclosed rate may only tell you the allowed amount per day. It does not automatically tell you the full stay-level payment without knowing the covered days and any contract limits.


8️⃣ Outlier logic may be missing or incomplete.

High-cost cases can trigger additional payment logic. If the file only shows the base rate, it may not capture how extreme cases, thresholds, stop-loss provisions, or outlier terms affect the final amount.


9️⃣ Payer-specific edits can change payment.

Even after a rate is identified, adjudication rules can change the allowed amount. Bundling, denials, clinical edits, payment policies, carve-outs, and claim configuration can all move the final payment away from the visible rate.


🔟 Transfers and readmissions can alter payment.

Patient-specific events can change reimbursement. Transfers, readmissions, short stays, discharge status, and related claim rules can all affect what is ultimately allowed.


Transparency data is powerful. But the DRG rate in the TiC file is not what the claim will actually pay, and treating it as the final payment is a mistake.
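The contract distinctions in the list above can be sketched in code. This is a toy illustration, not any payer's actual logic: every contract structure, rate, percentage, charge figure, and day count below is hypothetical, chosen only to show how four contracts that each disclose "a DRG rate" can produce four different allowed amounts for the same admission.

```python
# Hypothetical sketch: the same "DRG rate" label can hide very
# different contract formulas. All figures below are illustrative,
# not real payer or hospital numbers.

def allowed_amount(contract, claim):
    """Compute a toy allowed amount for one inpatient claim."""
    kind = contract["type"]
    if kind == "case_rate":
        # Fixed price per admission, regardless of charges or days.
        return contract["rate"]
    if kind == "pct_of_charges":
        # Depends on this patient's billed charges, which the
        # TiC file does not show.
        return contract["pct"] * claim["billed_charges"]
    if kind == "pct_of_medicare":
        # Depends on which Medicare base rate the payer references
        # and which adjustments it folds in.
        return contract["pct"] * claim["medicare_base"]
    if kind == "per_diem":
        # Daily rate times covered days, capped by contract limits.
        days = min(claim["covered_days"],
                   contract.get("max_days", claim["covered_days"]))
        return contract["daily_rate"] * days
    raise ValueError(f"unknown contract type: {kind}")

# One admission, four contracts that could all disclose "a DRG rate":
claim = {"billed_charges": 120_000, "medicare_base": 14_500,
         "covered_days": 4}

contracts = [
    {"type": "case_rate", "rate": 48_235},
    {"type": "pct_of_charges", "pct": 0.40},
    {"type": "pct_of_medicare", "pct": 1.60},
    {"type": "per_diem", "daily_rate": 9_000, "max_days": 3},
]

for c in contracts:
    print(c["type"], allowed_amount(c, claim))
```

Only the case-rate contract matches its disclosed number on its face; the other three depend on claim-level facts (billed charges, the Medicare reference rate, covered days) that the TiC file does not carry. Real adjudication layers outlier terms, edits, and transfer rules on top of even these formulas.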

###

Bright Spots Consulting

https://brightspotinsights.com/



____

DRG
