Friday, January 2, 2026

Four Medical Innovation Case Studies under the Reed "Ecosystem" Framework

I wrote a blog post based on a new article by Reed et al. on healthcare innovation ecosystems and how they shape innovation adoption.  Here.

##

I asked ChatGPT 5.2 to read Reed's article and then apply it to three topics, each based on a model essay.  The three topics were diagnostics-industry financial success, digital pathology adoption challenges, and the radiopharmaceuticals Bexxar and Zevalin.

I then asked it to write a Reed analysis based on a folder of articles about successes and challenges in the point-of-care testing (POCT) industry.


###

OVERVIEW

Across diagnostics, digital pathology, radiopharmaceuticals, and point-of-care testing, the same paradox keeps recurring: technically strong products repeatedly fail, while a small number of seemingly “unexceptional” innovations scale and endure. 

Read through the Reed et al. innovation-ecosystem lens, these four case studies show that success hinges less on brilliance at the bench than on alignment across reimbursement, workflow, regulation, infrastructure, and professional roles. The throughline is uncomfortable but clarifying: in health care, innovation succeeds only when someone takes responsibility for the ecosystem, not just the product.


#1  DIAGNOSTICS INDUSTRY

Model Essay: Asimi

Vahid Asimi’s essay on the “narrow path” to venture-scale success in diagnostics appears, on first reading, to be a financial and executional analysis: capital efficiency, revenue growth, founder continuity, and timing of clinical waves. When reframed through the Reed et al. innovation-ecosystem lens, however, Asimi’s argument can be understood less as an outlier story of exceptional management and more as a case study in unusually effective ecosystem alignment. The central question Reed et al. pose—why technically strong health innovations so often fail to embed in routine practice—maps directly onto Asimi’s puzzle: why so many diagnostics companies generate clinical value yet fail to generate venture-scale financial outcomes, while a small minority succeed.

Asimi emphasizes that BillionToOne’s IPO success cannot be explained by raw technological differentiation alone. Many diagnostics firms possess advanced molecular platforms, yet accumulate billions in losses before reaching scale, if they reach it at all. From an ecosystem perspective, this observation aligns with Reed’s critique of narrow product-centric innovation. BillionToOne succeeded not because it solved a harder measurement problem than peers, but because it selected ecosystem wedges—prenatal testing and later oncology—where adoption chains, reimbursement pathways, clinical guidelines, and economic incentives were already forming. In Reed’s terms, BillionToOne did not attempt to “will a market into existence,” but entered an ecosystem in which the necessary co-innovators—clinicians, payers, laboratories, and regulators—were already partially aligned, reducing coordination failure risk early in commercialization.

Founder continuity, which Asimi treats as a distinguishing trait of venture-scale diagnostic winners, also takes on deeper meaning through the ecosystem lens. Reed et al. argue that ecosystem coordination work is slow, relational, and rarely rewarded by standard incentive systems. Founder-CEOs who remain in place for a decade or more function not merely as operators, but as long-horizon integrators who absorb institutional knowledge across clinical, regulatory, reimbursement, and commercial domains. In this framing, replacing founders with “professional CEOs” may optimize internal execution while weakening the firm’s ability to navigate complex external interdependencies. Asimi’s empirical observation that long-tenured founders correlate with IPO success thus supports Reed’s claim that ecosystem stewardship is a distinct, underappreciated capability.

Asimi’s discussion of capital efficiency further illustrates Reed’s argument about misaligned incentives. Many diagnostics companies behave like research programs for extended periods, accumulating massive deficits while deferring ecosystem alignment—particularly payer engagement, guideline inclusion, and clinical workflow integration. BillionToOne’s unusually low cumulative losses suggest not just cost discipline, but early ecosystem realism: product development choices constrained by reimbursement feasibility, decision-path clarity, and scalable clinical use. Reed et al. emphasize that innovation failure often occurs because no actor is structurally responsible for aligning technical progress with downstream adoption requirements. Asimi’s case implies that financial success emerges when firms internalize that responsibility rather than outsourcing it to “later stages.”

Platform thinking, another pillar of Asimi’s analysis, also maps cleanly onto Reed’s framework. Reed stresses that innovation is rarely a single handoff event; it is an iterative system-level process in which each deployment reshapes the ecosystem. BillionToOne’s platform optionality—using a core measurement stack to move from prenatal testing into oncology—reflects not just technological reuse, but ecosystem learning. Each test generates data, evidence, relationships, and institutional legitimacy that lowers friction for subsequent use cases. In Reed’s terms, this is coinvention at scale: technical assets evolve in parallel with organizational routines, clinical trust, and payer expectations.

At the same time, applying the Reed framework exposes what remains underdeveloped in Asimi’s analysis. Asimi treats favorable ecosystem conditions—re-rating of diagnostics markets, renewed M&A activity, improving public-market sentiment—as exogenous tailwinds. Reed would likely argue that these conditions are themselves fragile ecosystem equilibria, dependent on sustained coordination across capital markets, policy environments, and clinical credibility. Like many post-IPO retrospectives, Asimi’s narrative risks survivorship bias by underestimating how easily ecosystem alignment can unravel when reimbursement shifts, guidelines stall, or regulatory expectations tighten.

In sum, Asimi’s essay implicitly supports Reed et al.’s core thesis: diagnostic innovation succeeds financially not when technology is strongest, but when ecosystem dependencies are managed deliberately and early. BillionToOne’s IPO looks less like an anomaly and more like a rare case in which ecosystem leadership, clinical wave timing, capital discipline, and founder stewardship converged. The broader implication is sobering: venture-scale success in diagnostics is not merely difficult—it is structurally rare because few firms are willing or able to take responsibility for ecosystem alignment as a first-order design constraint rather than a downstream problem.



#2  DIGITAL PATHOLOGY INDUSTRY


Model Essay: Cano

Dr. Luis Cano’s essay on digital pathology in 2025 reads, on its surface, as a sober reckoning with the limits of artificial intelligence in clinical practice. When viewed through the innovation ecosystem framework articulated by Reed et al., however, Cano’s argument is more precisely understood as an anatomy of systemic coordination failure. Digital pathology did not falter because algorithms stopped improving, but because the surrounding ecosystem—regulatory, economic, professional, and infrastructural—proved incapable of absorbing them under real-world conditions. Cano’s core insight aligns closely with Reed’s central claim: innovation collapses not at the point of technical feasibility, but at the interfaces where multiple interdependent actors must adapt simultaneously.

Cano describes 2025 as the year digital pathology left the “sandbox” and collided with clinical reality. From an ecosystem perspective, this moment marks the transition from isolated innovation to adoption-chain exposure. Algorithms that performed well on curated datasets encountered heterogeneous staining protocols, scanner variability, fragmented workflows, and irreversible clinical consequences. Reed et al. emphasize that innovators often mistake performance in controlled environments for readiness in complex systems. Cano’s account shows precisely how this error manifested in pathology: models scaled technically while the ecosystem components required for safe deployment—standardization, oversight, and accountability—lagged behind, creating friction that no amount of additional data could resolve.

Regulatory developments, particularly the FDA’s qualification of AIM-NASH as a Drug Development Tool rather than a diagnostic device, illustrate another dimension of ecosystem realignment. Cano highlights that this decision did not validate algorithmic autonomy, but explicitly constrained it. Through the Reed lens, this reflects regulators acting as ecosystem stabilizers rather than innovation accelerators. By insisting on human-in-the-loop supervision, regulators signaled that autonomy introduces unacceptable systemic risk when downstream responsibilities are poorly aligned. This regulatory stance implicitly redistributed coordination work onto pathologists, who became responsible not only for diagnosis, but for auditing algorithmic behavior—a role for which neither compensation nor institutional support has been fully defined.

Economic misalignment emerges in Cano’s essay as perhaps the most decisive constraint. Digital pathology requires costly infrastructure—scanners, storage, interoperability layers—yet reimbursement mechanisms failed to recognize digitization as a billable clinical act. Reed et al. identify incentive structures that reward siloed excellence while punishing cross-boundary work; Cano’s description of CPT stagnation exemplifies this dynamic. Laboratories faced a rational choice: absorb large capital and operational costs without reimbursement, or limit deployment. The resulting low utilization was then interpreted externally as lack of clinical demand, reinforcing a vicious cycle. This is a textbook case of ecosystem failure, where absence of coordination between payers, regulators, and providers creates self-fulfilling stagnation.

Cano’s analysis of foundation model hype and subsequent epistemological crisis further reinforces Reed’s critique of narrow technical focus. Scaling model size and training data addressed isolated performance metrics but ignored biological context, institutional variability, and silent failure modes. Reed et al. argue that innovation efforts often over-invest in product optimization while under-investing in the coinvention required for safe use. Cano’s account of silent failures and demographic bias illustrates the consequences: models that appear reliable while embedding hidden risks, shifting liability and moral responsibility onto clinicians without corresponding governance structures.

Interoperability failures and data infrastructure costs represent another neglected ecosystem layer. Cano describes how standards that exist “on paper” failed operationally, forcing hospitals to build fragile, local solutions. Reed’s framework predicts this outcome when no actor is empowered or incentivized to coordinate across organizational boundaries. Vendors optimize within their product silos; hospitals absorb integration burdens; regulators focus on safety; payers focus on cost containment. The result is innovation fatigue rather than diffusion.

Ultimately, Cano’s essay reinforces Reed et al.’s most uncomfortable conclusion: technological maturity does not guarantee adoption, and in some cases accelerates exposure to systemic risk. Digital pathology in 2025 reached a level of technical competence that made ecosystem misalignment visible and unavoidable. The pathologist’s redefinition as system auditor rather than displaced expert underscores a broader truth: innovation reshapes professional roles long before economic and institutional structures adjust to support them.

Seen through the innovation ecosystem lens, digital pathology’s stalled momentum is not a failure of imagination or effort, but a failure of coordination. Cano’s essay makes clear that future progress depends less on faster algorithms and more on deliberate alignment across regulation, reimbursement, infrastructure, and professional responsibility. In Reed’s terms, the challenge is no longer inventing better tools, but designing ecosystems capable of using them safely, equitably, and sustainably.



#3  EARLY RADIOPHARMACEUTICALS:  Bexxar, Zevalin

Model Essay: AI History of Bexxar-Zevalin.  Here.

Bexxar and Zevalin were early “theranostic-like” lymphoma therapies from the first wave of radioimmunotherapy—long before modern PSMA PET and radioligand therapy made the idea feel routine. Both were anti-CD20 monoclonal antibodies (the same surface target later made famous by rituximab), but with a twist: they carried a radioactive payload intended to deliver cell-killing radiation directly to malignant B cells in non-Hodgkin lymphoma. Bexxar was a regimen of unlabeled tositumomab followed by iodine-131–labeled tositumomab; Zevalin paired the antibody ibritumomab tiuxetan with yttrium-90 as the therapeutic radionuclide. Clinically, these products often produced strong response rates; commercially, they became a canonical story of “science ahead of the system.”

The part of the story that matters most for today’s radiopharma strategy is not simply that these were radioactive drugs—it is that they were built around a diagnostic radiopharmaceutical step that functioned as an operational gateway, and sometimes a chokepoint. Bexxar, in particular, did not just “come with a scan.” The FDA-labeled regimen required a dosimetric dose and subsequent whole-body imaging / counting to characterize biodistribution and clearance so that the therapeutic activity could be individualized to deliver a target total-body radiation dose (with dose adjustments influenced by platelet counts as well). In other words, Bexxar embedded a formal patient-specific dosimetry workflow into routine care: a test dose, measurements, calculations, then a therapeutic dose—an adoption chain that presupposed nuclear medicine competence, scheduling capacity, radiation safety infrastructure, and institutional willingness to operationalize a two-step protocol. That is a very different “diagnostic companion” concept than today’s marketing-friendly idea of “you get a PET scan first.” It was more like turning every patient into a mini clinical-physics project, which is scientifically rational but operationally expensive and culturally alien to most community oncology clinics.
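The arithmetic behind that “mini clinical-physics project” can be made concrete. The sketch below is illustrative, not the labeled clinical algorithm: the function names and numeric inputs are hypothetical, and the real Bexxar protocol derived its activity-hours term from sex- and mass-based lookup tables. What it does capture is the core logic of the workflow described above—fit the whole-body clearance from serial counts after the dosimetric dose, convert that to a residence time, then scale the therapeutic activity to hit a target total-body dose (75 cGy in the labeled regimen, reduced to 65 cGy for patients with lower platelet counts).

```python
import math

def residence_time_hours(times_h, frac_remaining):
    """Estimate whole-body residence time from serial counts.

    Assumes monoexponential clearance: fraction remaining f(t) = exp(-lam*t).
    Fits ln(f) vs t by ordinary least squares; the residence time of a
    monoexponential (integral of f from 0 to infinity) is 1/lam.
    """
    n = len(times_h)
    ys = [math.log(f) for f in frac_remaining]
    xbar = sum(times_h) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(times_h, ys))
             / sum((x - xbar) ** 2 for x in times_h))
    lam = -slope                      # clearance rate constant (1/h)
    return 1.0 / lam                  # residence time (h)

def therapeutic_activity_mci(activity_hours, res_time_h,
                             target_cgy, reference_cgy=75.0):
    """Patient-specific administered activity (mCi).

    activity_hours: mCi*h needed to deliver the reference total-body dose
    (in the real regimen, taken from patient-specific tables; hypothetical
    here). The activity is scaled so slower clearance (longer residence
    time) means less activity, and the platelet-adjusted target (e.g.
    65 cGy instead of 75 cGy) scales the dose down proportionally.
    """
    return (activity_hours / res_time_h) * (target_cgy / reference_cgy)

# Illustrative patient: three whole-body counts after the dosimetric dose,
# consistent with a clearance rate of 0.01/h (residence time ~100 h).
times = [1.0, 72.0, 144.0]
fractions = [math.exp(-0.01 * t) for t in times]
rt = residence_time_hours(times, fractions)

full_dose = therapeutic_activity_mci(5000.0, rt, 75.0)   # standard target
reduced_dose = therapeutic_activity_mci(5000.0, rt, 65.0)  # platelet-adjusted
```

The point of the sketch is the ecosystem burden, not the math itself: every patient needed scheduled count measurements on multiple days, a curve fit, and a calculated individualized activity before therapy could even be ordered—exactly the cross-departmental choreography the paragraph above describes.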

Zevalin’s diagnostic aspect looked different but still illustrates the same ecosystem issue. Because yttrium-90 is a pure beta emitter (not imageable by conventional gamma cameras), Zevalin historically used indium-111–labeled ibritumomab tiuxetan as an imaging surrogate to evaluate biodistribution prior to giving the Y-90 therapeutic dose. Even where later practice and labeling evolution reduced the centrality of mandatory imaging, the early market reality was that “radioimmunotherapy” arrived as a package requiring coordination across oncology, nuclear medicine, radiopharmacy, and radiation safety—and often multiple visits. The Zevalin kit itself also made the ecosystem dependencies explicit: critical components (including rituximab and radiolabeling inputs) were sourced and handled through different channels, reinforcing that this was not a single-product transaction but a multi-actor choreography.

This is where the Reed innovation-ecosystem lens (and, frankly, Cano’s “leaving the sandbox” tone) becomes explanatory. Bexxar/Zevalin did not fail because the core technology was nonfunctional; they failed because the ecosystem complementors required for scale—billing conventions, referral pathways, inventory and handling norms, nuclear medicine bandwidth, standardized protocols, and clear professional responsibility boundaries—were not aligned. Each additional “required step” increased the number of handoffs, and each handoff multiplied the probability that a patient would default to the path of least resistance. In real-world care, a therapy can be “better” and still lose if it is “harder,” because the ecosystem is optimized for throughput, predictability, and reimbursable routine.

Market forces sharpened that dynamic, especially the rise of rituximab (Rituxan). Rituximab was approved in 1997 and became foundational in B-cell lymphoma regimens because it was comparatively easy to administer in existing oncology infusion workflows and fit established reimbursement patterns. Once a “good enough” anti-CD20 biologic became standard, Bexxar and Zevalin were forced to compete not just on efficacy but on friction. For many clinicians and sites of care, the choice was between (a) a familiar infusion that could be delivered and billed entirely inside the oncology clinic and (b) a more complex radiopharmaceutical regimen requiring coordination with nuclear medicine, radiation precautions, and sometimes patient concerns about radioactivity. Even an objectively strong therapy struggles when adoption requires clinicians to fight their own institutional logistics and financial incentives.

Reimbursement and site-of-service economics were the other decisive ecosystem variable. Early 2000s payment systems were not designed to gracefully pay for a hybrid object that is simultaneously a biologic, a radiopharmaceutical, a nuclear-medicine procedure set, and an infrastructure burden. When hospitals or practices perceived that offering the therapy would be financially negative—because costs landed in one department while revenue (or savings) landed elsewhere—the rational response was to limit availability or avoid it. That created a self-reinforcing cycle: limited offering reduces utilization, low utilization reduces institutional learning and comfort, and the therapy becomes “rare,” which further increases friction and clinician reluctance. Ultimately Bexxar was discontinued for commercial reasons despite its clinical performance, becoming a lasting cautionary tale about what happens when a sophisticated therapeutic concept depends on an under-built adoption chain.

In retrospect, the most interesting “diagnostic radiopharmaceutical” lesson from Bexxar and Zevalin is not that companion imaging is bad—today’s theranostics largely prove the opposite—but that the type of diagnostic dependency matters. A simple, widely available imaging step that slots into routine nuclear medicine is one thing; a regimented dosimetry-and-clearance protocol that demands cross-departmental choreography is another. Bexxar and Zevalin were early proofs that, in oncology, clinical value is necessary but not sufficient: a product succeeds when the ecosystem can deliver it reliably, pay for it coherently, and assign responsibility cleanly.


#4  THE POINT-OF-CARE TEST INDUSTRY

Entry Point:  Folder of 20 articles about POCT.

Point-of-Care Testing as an Ecosystem Stress Test

Point-of-care testing has long been marketed as an obvious win for health systems: faster answers, decentralized decision-making, improved patient flow, and care delivered closer to the patient. Yet its history is marked less by smooth diffusion than by repeated cycles of enthusiasm, stall, retrenchment, and selective success. Seen through the Reed ecosystem framework, this pattern is not primarily a story of immature technology or insufficient evidence, but of persistent ecosystem fragmentation. POCT exposes, in unusually sharp relief, what happens when technically sound innovations are introduced into a health-care system whose incentives, workflows, regulatory structures, and professional norms remain poorly aligned.

A central insight from Reed et al. is that innovators consistently apply a narrow pipeline logic—optimize the device, validate performance, secure regulatory clearance—and then assume the “last mile” belongs to someone else. POCT developers have repeatedly fallen into this trap. Analytic accuracy, turnaround time, and portability are optimized, but questions of ownership, oversight, training, quality control, and downstream clinical action are deferred or externalized. The result is a device that works perfectly in isolation yet struggles once embedded in real clinical environments where nurses, medical assistants, pharmacists, laboratorians, and physicians all intersect. POCT thus becomes not a plug-and-play solution but a trigger for unresolved questions about responsibility and authority.

This disconnect is amplified by the underuse of implementation knowledge. For decades, laboratory medicine has generated a rich literature on quality systems, error propagation, operator variability, and post-analytic risk—precisely the issues that become acute when testing migrates away from centralized labs. Yet POCT innovation cycles have rarely integrated this knowledge early. Instead, implementation science enters late, often framed as “change management” rather than as a co-equal design constraint. Reed’s critique that health care repeatedly “forgets what it already knows” applies squarely here: POCT programs rediscover, at scale, problems that laboratorians have long understood, from calibration drift to result misinterpretation, but without embedding that expertise structurally into product development.

The perspectives of frontline professionals—especially laboratorians—have also been inconsistently incorporated. In many POCT deployments, laboratory medicine is cast as an external compliance function rather than as a co-innovator. This framing creates predictable friction. Clinicians value speed and autonomy; labs value control, traceability, and standardization. Without explicit ecosystem coordination, POCT becomes a contested space rather than a shared project. Reed’s distinction between “work as imagined” and “work as done” is critical here: POCT workflows imagined by developers rarely match the realities of understaffed wards, retail clinics, or emergency departments, where training time is scarce and accountability is diffuse.

Incentives further entrench fragmentation. POCT often improves system-level performance—shorter length of stay, fewer return visits—but generates ambiguous or even negative signals at the departmental level. Laboratories may absorb oversight burdens without proportional reimbursement; clinicians may bear cognitive load without clear liability protection; hospitals may hesitate to invest when savings accrue downstream or to payers. As Reed emphasizes, individuals can be doing their jobs well while the ecosystem fails collectively. POCT has suffered precisely this fate: no actor is structurally rewarded for ensuring that decentralized testing works end-to-end.

Where POCT has succeeded, the pattern strongly supports the ecosystem thesis. Success is most evident in bounded, high-cohesion environments—ICUs, operating rooms, dialysis units, neonatal care—where roles are clear, workflows are stable, and governance is centralized. In these settings, co-innovation occurs naturally: training is standardized, lab oversight is embedded, and clinical action pathways are explicit. The technology did not change; the ecosystem did. These cases underscore Reed’s argument that innovation success is less about device capability than about coordinated readiness across adoption-chain partners.

Retail and home-based POCT reveal the opposite dynamic. While technically impressive, these models often lack a focal ecosystem leader capable of underwriting coordination costs. Questions about result interpretation, follow-up care, data integration, and reimbursement are left unresolved or fragmented across organizations. The technology may be sound, but the ecosystem is thin. Bendersky’s observation that health care lacks an “Amazon-like” coordinating firm is especially salient here: POCT diffusion stalls not because no one believes in its value, but because no one is structurally responsible for making that value real.

Reed’s proposed remedies—wide-lens thinking, shared value propositions, ecosystem leadership, and local ownership—map cleanly onto POCT’s unresolved challenges. A shared value proposition for POCT would not be “faster tests,” but better clinical decisions under real-world constraints. Ecosystem leadership would mean explicitly funding and protecting boundary-spanning roles—often laboratories, sometimes health systems—that coordinate training, quality, IT integration, and clinical pathways. Local ownership would mean designing POCT programs around actual care settings rather than generic deployment models.

Ultimately, POCT illustrates Reed’s central claim with unusual clarity: health care does not lack good innovations; it lacks cohesive ecosystems to absorb them. When POCT is treated as a device problem, it disappoints. When treated as a system redesign problem, it often succeeds. The lesson is not that POCT needs better technology, but that innovation strategies must shift from product delivery to ecosystem orchestration. Until that shift becomes explicit—and rewarded—the POCT industry will continue to oscillate between promise and partial adoption, serving as a recurring reminder that in health care, innovation fails not at the edges, but in the spaces between.

###

CONCLUSION

Taken together, these four cases demonstrate that innovation outcomes in health care are rarely mysterious in retrospect, even when they feel baffling in real time. BillionToOne’s IPO, digital pathology’s stall, the premature rise and fall of Bexxar and Zevalin, and the uneven diffusion of point-of-care testing all reflect the same structural truth: value emerges only when adoption-chain partners are aligned, incentives are coherent, and the slow, unglamorous work of coordination is treated as a first-order design constraint rather than a downstream nuisance. In each case, the technology itself was sufficient—or more than sufficient—but the surrounding system was not. Where alignment existed, even partially, innovation compounded; where it did not, technical excellence became stranded.

What makes this exercise especially useful is that it replaces comforting myths with a practical diagnostic lens. It moves analysis away from hero narratives (“great founders,” “bad regulators,” “conservative clinicians”) and toward a repeatable way of interrogating failure and success across sectors. By applying the same framework to molecular diagnostics, AI in pathology, radiopharmaceuticals, and POCT, patterns become visible that would otherwise be dismissed as idiosyncratic or historically contingent. The Reed ecosystem lens clarifies that timing, leadership continuity, reimbursement realism, and workflow ownership are not peripheral variables—they are core mechanisms that determine whether innovation translates into routine care.

The broader implication is both sobering and empowering. Health care does not need fewer ideas, faster algorithms, or more capital chasing novelty; it needs more explicit ownership of ecosystem design. Investors, executives, and policymakers can use this framework to ask better questions earlier: Who is absorbing coordination costs? Where do incentives break? Which actors are being asked to change without being paid, protected, or supported? Innovation fails most often not at the edges of science, but in the spaces between institutions. Recognizing that reality is the first step toward building health-care innovations that last.