Thursday, January 29, 2026

CAP TODAY 2025-12 and LABCORP re FDA ELIO 510K


2026 Labcorp 0101 IVD NGS in LAB 4pp (cf CAP TODAY  25-Dec re FDA Elio local).pdf


2025 December CAP TODAY How 2 Labs Bring SEQ In House

2025 cap today 1201 in house NGS.pdf


###

 


When Genomics Moves In: 

How In-House NGS Is Reshaping Oncology Care

Two recent pieces — the CAP TODAY feature on health systems building their own sequencing programs and the Labcorp/PGDx case study on implementing the FDA-cleared elio™ tissue complete assay locally — describe the same transformation from different vantage points. One is bottom-up, driven by laboratory buildout and clinical demand (see CAP TODAY). The other is platform-enabled, using an engineered in-house IVD solution to consolidate testing (see Labcorp white paper). Together they show that bringing next-generation sequencing (NGS) in-house is not simply a technical choice. It is a systems-level redesign of oncology care.


The Core Misalignment: Modern Oncology vs Legacy Send-Out Testing

Both accounts begin with the same friction point: send-out comprehensive genomic profiling (CGP) no longer fits the tempo or structure of modern cancer care. Traditional workflows involve multiple handoffs — slide review, block retrieval, vendor requisition, external lab queue, and report reintegration. Each step introduces delay, and cumulatively these processes stretch turnaround times into the 9–21 day range or longer (see Labcorp white paper).

This model evolved when genomic testing was exploratory and treatment decisions were less time-sensitive. That is no longer the case. Targeted therapies, tumor-agnostic approvals, immunotherapy biomarkers, and acute leukemia management all depend on rapid molecular stratification. When genomic data arrive weeks after diagnosis, the information is clinically useful but operationally mistimed.

The problem, then, is not whether CGP works. It is that its delivery mechanism was built for a slower era of oncology.


Three Paths to the Same Destination

The CAP TODAY article describes two distinct implementation philosophies among health systems bringing NGS in-house, and the Labcorp case study illustrates a third hybrid approach.

Implementation style           | Institutional example              | Core logic
“Headfirst” large assay first  | Florida Cancer Specialists         | Start with a complex pan-cancer panel, build capability, then expand
Progressive build              | Sentara Health                     | Move from single-gene → hotspot panels → DNA/RNA CGP
Platform-enabled consolidation | University center using PGDx elio  | Adopt an FDA-cleared, pre-engineered CGP workflow but run locally

The first two reflect internal laboratory evolution (see CAP TODAY). The third shows how a mature IVD CGP platform can be “localized,” reducing bioinformatics burden while still shifting control back to the institution (see Labcorp white paper).

Despite different starting points, all three converge on the same endpoint: genomics becomes an in-house clinical service rather than an outsourced specialty procedure.


Turnaround Time: The Clinical Fulcrum

The most visible benefit of insourcing is speed, but the deeper impact is what speed enables.

Testing model                                 | Typical turnaround
Send-out CGP                                  | 2–4+ weeks
In-house solid tumor NGS (community systems)  | ~5–7 days
PGDx elio in-house workflow                   | ~5 days DNA-to-report

Shorter TAT is not merely convenient. It changes care delivery:

  • Molecular tumor boards can meet while decisions are still pending

  • Therapy initiation is better aligned with molecular eligibility

  • Clinical trial enrollment windows are easier to meet

  • Acute leukemia classification can inform early management

Faster genomics effectively synchronizes molecular data with clinical decision cycles, which is a systems improvement, not just a laboratory metric.


The Quiet Revolution: Workflow Consolidation

One of the most profound effects described in both pieces is the collapse of diagnostic silos. Before in-house NGS, cytogenetics, IHC, molecular send-outs, and pathology operated as semi-independent streams. Tissue blocks might be cut repeatedly as cases moved from PD-L1 staining to FISH to external sequencing.

Once CGP moves inside:

  • Single specimens feed integrated pipelines

  • Reflex testing protocols are developed (e.g., NSCLC, AML)

  • Tissue conservation improves because fewer recuts are needed

  • Reporting aligns molecular data with pathology and therapeutic context

Sentara noted that incorporating RNA fusion detection into NGS panels reduced the need for separate FISH testing, directly affecting cytogenetics workload (see CAP TODAY). The Labcorp case study similarly emphasizes the reintegration of fragmented molecular services into a consolidated unit.

This is a shift from test-by-test diagnostics to platform-based molecular pathology.


Bioinformatics: From Existential Barrier to Managed Layer

Historically, bioinformatics complexity limited adoption. The CAP TODAY experience shows that many laboratories began without deep in-house bioinformatics expertise, instead relying on vendor-supported workflows and building internal familiarity over time. Pipelines were deliberately kept manageable, and expertise grew with use.

The PGDx model represents a further step: automated, server-based analysis integrated into the assay ecosystem (see Labcorp white paper). In both cases, the trajectory is the same — bioinformatics remains critical, but it is no longer an insurmountable entry barrier. The constraint shifts from computational capacity to organizational readiness and governance.


NGS Moves Upstream: Therapy Selection → Diagnosis

A particularly important distinction raised in the CAP article concerns myeloid sequencing. In solid tumors, NGS typically follows a diagnosis and guides treatment. In myeloid neoplasms, mutations can shape the diagnosis itself (AML vs MDS, risk stratification, classification). That required rethinking reporting structures and interpretive workflows.

This illustrates a broader trend: genomics is migrating upstream in the diagnostic chain, increasingly influencing disease definition rather than simply therapy choice.


Research and Trial Integration

Both accounts highlight another institutional effect: easier linkage to clinical trials. Sentara’s in-house capability facilitated participation in NCI precision medicine trial networks (see CAP TODAY). The Labcorp case similarly emphasizes matching patients to targeted therapies and trials. In-house CGP thus becomes not only a diagnostic tool but a research infrastructure enabler.


Financial Reality: Still Complex, but Structurally Different

Reimbursement challenges persist in both narratives. Payer variation and cost coverage remain planning concerns. However, consolidating testing internally alters the financial equation:

External model                      | In-house model
Per-test transactional costs        | Platform-level operational control
Fragmented billing pathways         | Integrated financial oversight
Limited local utilization leverage  | Alignment with institutional service lines

While reimbursement adequacy is not guaranteed, the move in-house transforms genomics from an external expense into a managed internal service line.


What This Transition Really Represents

At a higher level, these stories document a structural change:

Historical paradigm              | Emerging paradigm
Genomics as specialized send-out | Genomics as hospital core function
Pathology-first workflow         | Molecular-pathology integrated workflow
Linear testing algorithms        | Platform-based molecular diagnostics
External data ownership          | Local genomic data stewardship

The common thread is institutionalization. Precision oncology is no longer an add-on capability. It is becoming part of the hospital’s diagnostic infrastructure, much like chemistry analyzers or imaging suites once were.


Final Perspective

Bringing NGS in-house does not just accelerate testing. It reorganizes the diagnostic ecosystem, aligning molecular data with clinical timing, integrating laboratory disciplines, enabling research participation, and shifting genomics from outsourced expertise to institutional competency.

What’s changing is the basic operating model of molecular oncology.

Older model                                                 | Emerging model
Ship specimens to distant reference labs                    | Generate genomic data within the health system
Genomics as a specialty send-out                            | Genomics as a routine component of care
Long feedback loops between diagnosis and molecular results | Molecular data available within the clinical decision window
Fragmented lab roles (IHC, FISH, send-out NGS)              | Integrated molecular pathology workflows
External control of data and pipelines                      | Local stewardship of genomic data and interpretation

The transition is from a world where genomic testing sits outside the care system and feeds information back in, to one where genomics is embedded inside the diagnostic and treatment workflow itself.

That shift marks the moment precision oncology stops being an external service and becomes part of hospital infrastructure.

Tuesday, January 27, 2026

Nature med 2026 Li et al STANFORD / Elaborate H&E to Proteomics to Prediction

D Michael Montalto of Amgen Precision Medicine writes,

Predicting molecular phenotypes and spatial biology from H&E is not necessarily new... but this is certainly an impressive level of resolution that we have not seen before for protein predictions.

Only trained on 10 NSCLC patients. Some might say that is a limitation, but I would consider that just a start. Imagine if trained on larger data sets with continuing model improvements.

Hope we are all thinking about how to routinely get more from H&E in drug development and clinical care.

###


###

ChatGPT 5.2 summarizes:

Love this paper choice — it’s dense because it’s trying to quietly introduce a new way of thinking about what an H&E slide is. Here’s the same story, but as a flowing, clinical-scientific essay rather than a technical breakdown.


This study describes a striking conceptual shift in digital pathology: the authors argue that a routine hematoxylin and eosin (H&E) slide is not merely a morphological image, but a latent molecular map from which spatial protein biology can be computationally reconstructed. Their system, called HEX (H&E to protein expression), uses deep learning to generate virtual spatial proteomics profiles directly from standard histopathology. Rather than measuring protein expression through multiplexed immunofluorescence platforms such as CODEX, which are expensive, technically complex, and difficult to scale, HEX infers the spatial distribution of 40 immune, epithelial, stromal, and functional protein markers using only morphology. The result is effectively a computational spatial proteomics assay layered onto every H&E slide.

What makes this technically distinctive is that the model was not trained on weak bulk labels but on tightly paired data: histology tiles were co-registered with true spatial proteomics measurements. This allowed the network to learn direct associations between tissue architecture and protein-level spatial organization. Prediction accuracy for proteins reached Pearson correlations in the 0.73–0.79 range, which is unusually high in this field and far exceeds typical performance reported for spatial transcriptomic inference from H&E, where correlations often hover around 0.2. In other words, the model is not making vague gene-expression guesses; it is recovering spatially structured immune and stromal phenotypes that appear strongly encoded in morphology.

However, the core innovation is not protein prediction alone but how these inferred proteomic maps are used. The authors developed a multimodal fusion framework (MICA) that integrates two complementary views of the tumor: classical histomorphology and AI-derived virtual protein spatial maps. This fusion substantially improved clinical outcome modeling. In early-stage non-small-cell lung cancer (NSCLC), prognostic performance increased by roughly 22% (C-index) compared with models based only on H&E or clinicopathologic factors. For immunotherapy response in advanced NSCLC, prediction accuracy improved by 24–39% compared with conventional biomarkers such as PD-L1 expression and tumor mutation burden. In a field where incremental gains of a few percentage points are often considered meaningful, these are large effects.

Importantly, the system does not behave like a pure black box. Because it produces spatial protein maps, the model’s predictions can be biologically interpreted. Favorable prognosis and immunotherapy response were associated with spatial niches enriched in coordinated T-cell populations—co-localization of CD8 cytotoxic cells, TCF-1–positive progenitor/exhausted T cells, and helper T cells. Poor outcomes, in contrast, were linked to regions dominated by CD163-positive macrophages, CD66b-positive neutrophils, FAP-positive fibroblasts, and extracellular-matrix remodeling—patterns consistent with immunosuppressive, pro-tumor microenvironments. Notably, it was the spatial co-localization of cell states, rather than single markers, that carried prognostic and predictive value. This aligns closely with modern understanding of the tumor microenvironment and lends biological credibility to the model.

From a clinical standpoint, the most immediate implication lies in early-stage lung cancer. Decisions about adjuvant therapy currently rely on coarse clinicopathologic variables such as tumor size and nodal involvement, which imperfectly capture recurrence risk. HEX-derived risk stratification could potentially identify biologically aggressive tumors that appear low risk by TNM staging, while sparing others unnecessary chemotherapy. The second major application is immunotherapy selection. PD-L1 expression and TMB are imperfect predictors, and the study shows that spatial immune architecture inferred from morphology may outperform them. The model identifies patients within PD-L1–high, intermediate, and even PD-L1–negative groups who are more or less likely to benefit, suggesting a path toward more nuanced immunotherapy decision-making.

Yet the work is not without caveats. The analyses are retrospective, and the immunotherapy cohort, though substantial, is not definitive. The method assumes that morphology consistently encodes molecular states across institutions and staining variability. Moreover, despite interpretability tools, this remains a complex AI-derived assay that would face significant regulatory and validation hurdles. Conceptually, the system infers proteins indirectly through morphology-correlated patterns; its robustness outside the training distributions remains to be proven.

Even so, the broader implication is difficult to overstate. If validated prospectively, this approach turns every archived H&E slide into a scalable, low-cost spatial biology resource. It collapses the traditional barrier between morphology and molecular profiling, enabling population-scale spatial biomarker discovery without additional tissue or laboratory assays. Rather than replacing existing biomarkers immediately, HEX would likely augment them at first, adding a new dimension: morphology-encoded molecular spatial context.

In essence, the study proposes that the histology slide already contains a hidden molecular atlas. The authors show that AI can decode part of that atlas and that doing so materially improves prediction of recurrence and immunotherapy response in lung cancer. Whether this becomes a routine clinical tool will depend on prospective trials and regulatory navigation, but as a scientific advance, it represents a genuine shift toward computationally derived spatial phenotyping from routine pathology.

#

______________________________________________

#

Compare to OWKIN papers by Garberis and by Gaury.

#

Nature Comm GARBERIS (OWKIN)


MedRxiv GAURY (OWKIN)



#

Li (Nature Medicine 2026) and the Owkin “RlapsRisk” papers are cousins in the same family—WSI-first models meant to make H&E do more than diagnosis—but they’re trying to solve different clinical problems with very different “biological grounding.”

What Li is, in one sentence

Li’s novelty is that it uses H&E to generate “virtual spatial proteomics” (virtual CODEX maps) with an AI model called HEX, and then fuses that with H&E morphology in a framework (MICA) to predict prognosis across cohorts and, importantly, immunotherapy response in advanced NSCLC.

The “new” part vs prior H&E models

Most H&E prediction papers jump directly from pixels → endpoint, and interpretability is often post hoc. Li adds an intermediate, biologically interpretable layer: pixels → spatial protein maps → clinically relevant predictions.

They report HEX’s accuracy for protein prediction across 40 markers with strong summary metrics (average Pearson r ~0.79, SSIM ~0.95, low MSE) and claim it beats prior GAN-style translation methods. That’s the technical “bridge” that lets them argue they’re not just correlating morphology with outcome—they’re recovering spatial immune/stromal structure that plausibly drives ICI response.
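For readers less familiar with those summary statistics, a minimal Python sketch of the two headline metrics, Pearson correlation and SSIM, applied to a toy predicted-vs-measured marker map. The values below are invented for illustration, not the paper's data, and the SSIM here is a simplified single-window (global) version rather than the sliding-window form usually reported:

```python
import math

def pearson_r(x, y):
    # Pearson correlation between predicted and measured intensities
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    # Structural similarity computed over the whole signal at once;
    # published figures typically average SSIM over sliding windows.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Toy "CD8 intensity" values on a handful of tiles (hypothetical)
true_map = [0.10, 0.40, 0.35, 0.80, 0.95, 0.20]
pred_map = [0.12, 0.35, 0.40, 0.75, 0.90, 0.25]
print(round(pearson_r(true_map, pred_map), 3),
      round(global_ssim(true_map, pred_map), 3))
```

An r of ~0.79 averaged across 40 markers, read against this definition, means the predicted maps track most of the marker-to-marker intensity variation in the real stains.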

The clinical “hook”: ICI response vs PD-L1/TMB

In a cohort of 148 advanced NSCLC patients treated with PD-1/PD-L1 ICIs, MICA achieved AUC 0.82 for objective response, outperforming PD-L1 (0.66) and TMB (0.59). They also report that MICA beats either H&E-only or virtual-CODEX-only models (AUC 0.72 and 0.75, respectively).
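To make the AUC comparison concrete: a ROC AUC is, by the Mann-Whitney identity, the probability that a randomly chosen responder is scored higher than a randomly chosen non-responder. A minimal sketch on invented toy data (not the paper's cohort):

```python
def roc_auc(labels, scores):
    # Mann-Whitney form of ROC AUC: fraction of responder/non-responder
    # pairs the score orders correctly (ties count half).
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy ICI cohort: 1 = objective response, 0 = no response (hypothetical)
labels       = [1, 1, 1, 0, 0, 0, 0, 1]
model_scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.5, 0.2, 0.7]
pdl1_scores  = [0.7, 0.2, 0.6, 0.5, 0.3, 0.8, 0.1, 0.4]
print(roc_auc(labels, model_scores), roc_auc(labels, pdl1_scores))
```

On this reading, 0.82 vs 0.66 is the difference between a biomarker that orders responders above non-responders about four times in five versus one that does so only two times in three.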

That comparison (vs PD-L1/TMB) is exactly the kind of claim that sounds like hype—yet it’s also the kind of head-to-head result that, if replicated prospectively, becomes clinically actionable.

Biological interpretability (a real differentiator)

They don’t stop at “attention heatmaps.” They define spatial co-localization signatures and show that low-risk tiles are enriched for T-cell phenotypes (e.g., granzyme B+/CD8+, PD-1+/CD8+, TCF-1+/CD4+) while high-risk tiles are enriched for immunosuppressive/fibrotic myeloid/stromal niches (CD66b+/MMP9+, CD163+/MMP9+, FAP+/collagen IV+).

That’s qualitatively different from most end-to-end H&E predictors: it’s a mechanistic narrative you could build assays around (real CODEX/IHC panels; targeted spatial assays), not just “the model saw something.”

What Owkin RlapsRisk is, in one sentence

RlapsRisk BC (Garberis et al., Nat Comms 2025; and the Gaury medRxiv 2025 validation) is a direct prognostic model trained on breast cancer WSIs to predict survival endpoints (metastasis-free / distant recurrence), and then optionally combined with clinical variables to improve risk stratification—especially in intermediate-risk groups.

Strength: classic clinical validation posture

Owkin emphasizes the orthodox structure of clinical validity: independent external validation, prespecified fitting before validation, and measurable incremental value over clinical models.

  • In the Nature Communications paper, the combined model (“RR Combined”) achieved Harrell’s C-index 0.81 vs 0.76 for the clinical score alone in external validation, with greater gains in intermediate clinical risk.

  • The medRxiv international validation reports external cohort performance (e.g., pooled c-index 0.78 with strong HR separation between high vs low risk).
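For readers who want the 0.81-vs-0.76 comparison unpacked, Harrell's C-index is a pairwise concordance measure: for every usable patient pair, did the model's higher-risk patient actually have the earlier event? A minimal sketch on toy data (invented values, not from either paper; this simple double loop ignores refinements like tied event times):

```python
def harrell_c(times, events, risks):
    # For each pair where patient i had an observed event before patient
    # j's follow-up ended, check whether i also got the higher risk score.
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties count half
    return concordant / comparable

# Toy cohort: months to recurrence/censoring, event flag (1 = recurrence,
# 0 = censored), and a hypothetical model risk score per patient
times  = [12, 30, 45, 60, 24, 50]
events = [1,  1,  0,  0,  1,  1]
risks  = [0.9, 0.4, 0.3, 0.2, 0.8, 0.6]
print(round(harrell_c(times, events, risks), 3))
```

A C-index of 0.5 is coin-flip ordering and 1.0 is perfect, so moving from 0.76 to 0.81 in external validation is a meaningful gain in how reliably the combined model ranks patients by recurrence risk.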

What it is not trying to do

RlapsRisk is not presenting a “virtual biology reconstruction” layer like HEX. It’s much closer to: H&E → risk score, with clinical integration, and validation across sites/scanners. That is arguably more straightforward to operationalize as a lab-developed algorithmic test because it doesn’t require a second derived modality (virtual spatial proteomics) to justify itself.

Head-to-head comparison: what’s the real delta?

1) Endpoint and near-term “actionability”

  • Owkin (RlapsRisk): prognosis in early breast cancer—actionability hinges on whether it changes adjuvant therapy decisions versus existing molecular assays/clinical tools. It positions itself as adding incremental stratification beyond clinical variables (and as a possible complement/competitor to molecular scores).

  • Li (HEX/MICA): the bold claim is predicting ICI response (and outperforming PD-L1/TMB in their cohort). If that holds up, actionability is immediate: treat vs don’t treat / choose regimen / enroll in trial.

Bottom line: Li is aiming at a decision point with very high clinical and commercial value (ICI selection), but that also raises the bar for proof.

2) “Is it biology or just correlations?”

  • Owkin: credibility comes from scale, external validation, and survival endpoint supervision. Mechanistic interpretability is less central.

  • Li: credibility comes from an additional step—reconstructing spatial protein patterns from H&E with strong quantitative performance, then showing that the risk/response signal maps onto plausible immune-stromal niches.

My read: Li’s approach is more convincing on “not just a black box,” because it creates a testable biology story. But it also introduces more moving parts (HEX quality, generalizability of CODEX ground truth, domain shift).

3) Generalizability claims

Li explicitly frames broad evaluation: five NSCLC cohorts (2,150 pts), 12 other cancer types (5,019 pts), plus the 148-patient ICI cohort.
Owkin’s validation posture is also multi-cohort and international; the medRxiv paper describes large development + external validation sets (thousands for dev; ~900 external).

Difference: Owkin’s generalization is primarily about risk prediction portability across scanners/sites. Li’s is about portability of a learned cross-modality mapping (H&E → proteins) plus downstream prediction. That’s a tougher problem—but if solved, it’s more “platform-like.”

If you were thinking about “clinical test” reality

Owkin-style test (RlapsRisk)

Simpler product concept: upload H&E WSI → produce a validated risk score ± integrated clinical score. You need: WSI QC, model lock, calibration, cutoffs, and clear clinical-use labeling (e.g., ER+/HER2–, endocrine-only subgroup, etc.). Their papers already speak the language of C-index, HRs, external validation.

Li-style test (HEX/MICA)

More ambitious product concept: WSI → response probability + a biologically legible report (virtual spatial proteomics patterns consistent with inflamed vs excluded vs myeloid/fibrotic TME). They even show specific spatial dual-marker patterns linked to survival/biology.

But: for clinical adoption, the key question is whether “virtual CODEX” is treated as (a) an internal latent representation (fine), or (b) a claim that you are effectively measuring protein expression (which invites a higher evidentiary bar and likely demands orthogonal confirmation in a clinical workflow).

My “actionable vs hype” verdict

  • Owkin RlapsRisk: more immediately actionable as a deployable digital pathology prognostic test, because it is architecturally simpler and the validation narrative is conventional. The risk is mostly clinical utility (does it change management versus Oncotype/EPclin + clin-path?).

  • Li HEX/MICA: higher upside, higher proof burden. The immunotherapy result versus PD-L1/TMB is the kind of thing that could be transformative—but until you see prospective validation and careful handling of pre-analytic/site effects, it remains “promising, not practice-changing.” The biological interpretability is a genuine step forward, not just garnish.



###

ANDANI NATURE MACH INTEL 2025


When placed alongside the Stanford HEX work from Li and the Owkin RlapsRisk studies, the ANDANI paper (HistoPlexer) occupies a distinct and foundational position in the emerging field of “H&E as molecular data.” All three efforts start from the same premise—that routine histopathology contains far more biological information than pathologists can visually extract—but they diverge in what they treat as the primary objective.

Owkin’s RlapsRisk models are fundamentally endpoint-driven. They are trained directly to predict distant recurrence or survival from whole-slide H&E images, sometimes combined with basic clinical variables. Their contribution lies in demonstrating that morphology alone can support robust, externally validated prognostic stratification at scale. Biology is implicit: the model learns patterns associated with outcome, but it does not attempt to reconstruct molecular states or spatial cell phenotypes.

Li’s HEX/MICA framework takes a step deeper. It inserts a biological intermediate layer by predicting virtual spatial proteomics maps from H&E and then using these inferred protein patterns, together with morphology, to predict prognosis and immunotherapy response in lung cancer. Here, virtual molecular information becomes part of the predictive engine, and clinical utility—especially treatment response prediction—is central.

ANDANI’s HistoPlexer, in contrast, is not primarily a prognostic or treatment-response model at all. Its core aim is to show that multiplex spatial protein imaging can be computationally generated from H&E with high fidelity, preserving not only pixel-level appearance but also biologically meaningful spatial co-localization patterns among tumor and immune markers. The emphasis is on image realism, cross-marker consistency, and maintenance of tumor–microenvironment architecture, validated against real multiplex imaging and expert review. Clinical prediction appears as a downstream use case, but not the central claim.

What is most important in ANDANI, therefore, is not a particular survival AUC or hazard ratio. It is the demonstration that H&E can function as a surrogate molecular imaging modality, capable of producing slide-wide virtual multiplex protein maps that retain the spatial structure of immune infiltration and tumor organization. This establishes a platform capability: turning existing histology archives into scalable spatial biology resources. In the conceptual hierarchy of the field, Owkin shows that H&E can predict outcomes, Li shows that virtual molecular layers can improve those predictions, and ANDANI provides the deeper foundation—that faithful, multiplex molecular reconstruction from morphology is scientifically plausible at all.

Saturday, January 24, 2026

Hayley Mills, and the Geopolitical Forces Behind "The Trouble with Angels" (1966)

“Revelations in Plaid:
The Trouble with Angels and the Post-Conciliar Imagination”

An Interview with Professor Chester Featherstone
Journalist’s Reportage from the Summer Film Conference at Castel Gandolfo



By the time I reached the sun-dappled gardens of Castel Gandolfo, the ancient papal summer residence that has drawn popes and pilgrims for centuries, it was clear this was no ordinary film studies conference. Fragrant Italian pines framed the terraced walkways where scholars gathered to debate cinema’s place in sacred history — and at the center of the swirl was Professor Chester Featherstone of the University of New Hampshire, a man whose interpretive zeal makes Dan Brown’s academics look like careful librarians.

I met him outside the Palazzo’s café, where he was gesturing emphatically toward Lake Albano below.

“You see,” he began, eyes shining, “The Trouble with Angels is not teen fluff. It’s a semiotic palimpsest of Vatican II, Kennedy’s Catholic presidency, and the soul of post-war Catholicism itself.”

Most critics have pigeonholed the 1966 Columbia Pictures comedy as a lighthearted adolescent romp about rebellious girls and kindly nuns — a “pleasant” film easily filed under childhood nostalgia. But Featherstone, beneath his sheep-skin academic tweed, has dug deeper. Way deeper.



Uncovering the Narrative Arc and Geopolitical Roots

“The film’s narrative arc,” he insisted, “mirrors the ecclesial tension of the early 1960s — institution and reform interlocking in dialogical motion.” Vatican II’s call for aggiornamento — bringing the Church up to date — he argues, is encoded in the interplay between the disciplined Mother Superior and the energetic, questioning schoolgirls.

He pointed out that Mother Superior’s authority evolves significantly across the film, not in a trivial way, but in a way that echoes the shift from pre-conciliar juridical authority to a more pastoral, dialogical model. For Featherstone, the school’s convent setting is a micro-Church in negotiation with its own metamorphosis.

And then — as if that were not enough — there was the Kennedy factor. The professor drew a line between the film’s representation of youth asserting agency within a venerable institution and the ambivalent legitimacy of John F. Kennedy’s Catholic presidency. In his telling, the film silently dramatizes the reconciliation of Catholic identity with modern civic life, a theme central to Kennedy’s own theological and political biography.

But this is where Professor Featherstone’s interpretive compass truly pointed off-map.

Screenwriter Langdon: The Mystery Man

“I haven’t yet published this,” he said with a conspiratorial glance, lowering his voice — “but the screenwriter, Edmund R. Langdon, is an enigmatic figure. He appears, briefly, in Hollywood records, and then… gone. Yet I have uncovered connections — to lay Catholic intellectual circles in Rome just before Vatican II, and beyond them, hints of ties to far more arcane networks.” There was a pause. Then: “Patterns, my friend. Patterns that suggest something broader.”

If the invocation of Opus Dei — and the whispered suggestions of Illuminati affiliation — sounds like the stuff of The Da Vinci Code, that, too, is precisely the effect Featherstone seems to court. He is quick to clarify that these connections are preliminary, yet insists they point to a hidden architecture of influence inextricably tied up with mid-century Catholic cultural production.

Over espressos in the garden, surrounded by roses that have shaded popes and princes alike, he unfolded a diagram: triangles embedded in the film’s blocking patterns; the recurrence of tripartite narrative motifs; labyrinthine corridors suggesting mystical geometry — all hinting, he whispered, at esoteric shaping forces beyond the surface narrative.

Is it serious scholarship? Without question — in Featherstone’s meticulous style. Is it wild? Also, emphatically yes.

And yet — therein lies its charm.

Semiotics of Mid-1960s Teen Cinema

Because what Featherstone’s work makes clear is this: the layers of The Trouble with Angels can be read not just as cinematic storytelling but as a cultural node where ecclesiology, political identity, and symbolic structures collide.

When I asked him why a Columbia comedy should matter to serious Catholic cinema studies, he answered matter-of-factly:

“Because beneath its playful surface lies the negotiation of institutional authority, devotional identity, and the tension between tradition and reform. A film like this teaches us how Catholicism was experiencing itself on the cusp of transformation.”

The professor never quite explained what the Illuminati references mean — only that he will have much more to say in his next article, perhaps at the next conference.

But as I walked back through the gardens of Castel Gandolfo — the very retreat that has hosted popes for over four centuries — I realized I had witnessed something rare: not just an argument about a film, but a theological interpretation of popular culture that insists on finding depth everywhere it looks, even in a teen-level puff piece widely thought of as innocuous.

Professor Featherstone, amidst papal balconies and Renaissance stonework, has shown that every text can be a cipher, if one is willing to read it with equal parts seriousness, fascination — and unabashed imagination.


Google, Natasha Lyonne Panel on AI and Creators

AI isn’t just changing creativity — it’s changing the power structure behind creativity.

In this lively panel, Natasha Lyonne, Liv Boeree, and N.W.A.’s Arabian Prince argue over whether AI is “just another tool” or something closer to nuclear energy: transformative, risky, and hard to regulate. They debate artist control, licensing, micro-attribution, and the coming flood of “content” versus real art — ending with a blunt takeaway: the genie’s out, so artists have to stay in the fight.

###

AI Essay based on Panel online at YouTube. Posted January 2026.

https://www.youtube.com/watch?v=8QAN2uohP1k



###

When the Genie Refuses to Go Back In: Artists, AI, and the Fight for the Future of Creativity

The panel began the way the future apparently does now: with someone finishing a picture on stage in real time.

Before the moderator could fully settle the room, Arabian Prince — N.W.A. founder, technologist, and veteran of more than one cultural revolution — was already sketching. “He’s already winning the panel,” someone joked. It was the perfect cold open for a conversation about artificial intelligence and art: the human hand still moving, even as machines hum in the background.

What followed was less a tidy policy discussion than a live-wire family argument about power, authorship, labor, and what kind of world creative people are about to wake up in.


The Moment the Future Arrived

Each panelist was asked to recall the instant technology stopped being abstract and started feeling like destiny.

For Liv Boeree — astrophysicist, poker champion, and long-termist thinker — it was 2017, when a university-built poker AI crushed the world’s best human players. Poker, with its hidden information and psychological nuance, had been considered safe from machines for decades.

“We thought maybe 2035,” she said. “Then it happened.”

The lesson wasn’t just about games. It was about acceleration — the uncomfortable realization that human timelines and machine timelines are no longer aligned.

Natasha Lyonne’s “arrival moment” was slower, more literary. Vonnegut. Philip K. Dick. Quantum physics audiobooks playing at night while she made Russian Doll. Her relationship to AI feels less like a surprise and more like a long, uneasy courtship with an idea science fiction warned us about for half a century.

Arabian Prince’s epiphany came earlier, in the 1980s, coding on a TRS-80 and watching his touring friends swap partying for video games. The signal was obvious: technology wasn’t a niche. It was becoming culture itself.

“I saw where the energy was going,” he said. “So I went there.”


Is AI Just a Tool? Or Something Bigger?

Arabian Prince made the case artists often make about new tech: the electric guitar didn’t kill music; the drum machine didn’t end drummers. AI, he argued, is “another tool.”

Liv pushed back. Not fully — but philosophically.

“This is more like unlocking the atom,” she said. “It’s energy.”

That metaphor changed the stakes. Tools can be put down. Energy reshapes civilizations. It concentrates power as easily as it distributes it. It can light cities — or level them.

The tension between those views defined the room: AI as creative instrument versus AI as systemic force.


Natasha Lyonne’s Line in the Sand

If the panel had an emotional center of gravity, it was Lyonne.

She is not anti-AI. She is anti-steamroll.

She described co-founding the Creators Coalition on AI and building film tools based on licensed data — not scraped cultural debris. She supports hybrid use: AI for set extensions, technical efficiencies, visual scale. But not for replacing cinematographers, costume designers, storyboard artists, or the slow human collaboration that gives films texture.

“I don’t want you to write my screenplay,” she said. “I want you to wash my dishes.”

It’s a joke, but also a thesis: AI should expand human life, not hollow it out.

Her fear isn’t just job loss. It’s aesthetic collapse — a flood of “slop” that trains audiences and algorithms toward lowest-common-denominator work, squeezing out the strange, the subtle, the difficult.

The solution, in her view, is not withdrawal. It’s collective leverage. Artists setting terms, not just reacting.


The Economics Problem No One Has Solved

Boeree introduced the idea of micro-attribution — tracking whose creative material trains AI models and paying them proportionally.

In theory, it’s elegant. In practice, it’s a technical, legal, and political labyrinth.

Still, it gestures toward a bigger truth: the AI debate is no longer just artistic. It’s infrastructural. Data centers, energy use, ownership concentration, labor displacement — the creative question sits on top of an economic reorganization.

Boeree warned of a bifurcated future:

  • One group leaning into algorithmic attention games.

  • Another rejecting AI entirely and trying to survive on human-only craft.

The unresolved question: can the second group still make a living?


Who Guards the Machine?

Arabian Prince offered a pragmatic answer to scale: “AI has to police AI.” The volume of machine-generated content is already beyond human oversight.

But the moderator steered toward regulation — “smart, whole-of-society regulation,” not just industry self-governance.

No one claimed a blueprint. But everyone agreed on one point: doing nothing is not neutral.


2030: Art or Content?

The closing question lingered: If we get this right, do we get more art — or just more content?

Lyonne rejected the word “content” outright. Art, in her framing, is not output volume. It’s lived experience, collective meaning, human signature.

Boeree predicted fragmentation. Arabian Prince envisioned a Star Trek future — abundance, creativity everywhere — if guardrails hold.

And yet, the panel never resolved its central paradox:

AI can democratize creation.
AI can concentrate power.
AI can amplify artists.
AI can erase them.

All of those statements can be true at once.


The Unfinished Conversation

The session ended not with a manifesto, but with giveaways — headphones, in-ear monitors, a Taylor guitar. The mundane mingling with the monumental.

Which might be the most honest image of the moment we’re in.

The future of creativity isn’t being decided in one room, one bill, one product launch. It’s emerging through arguments like this one — messy, emotional, contradictory, unfinished.

The genie isn’t going back in the bottle.

The real question is: who gets to decide what wishes are made next? 

Why THE PRODUCERS MUSICAL Fizzled As a Movie

The original PRODUCERS movie is beloved, and the later MUSICAL was a hit, with one great song after another. But the MOVIE of the MUSICAL fizzled (the usual opinion). Discuss.

###

 

Ohhh this is a GREAT question — and very you to notice the disconnect between “this score is a banger” and “why did Hollywood treat this like a body on the sidewalk?”

You’re absolutely right: the material is almost unfairly strong. So why did the 2005 movie musical of The Producers stumble?

Short answer: it fell into the classic trap of filmed Broadway musicals — it preserved the stage energy instead of translating it to cinema.

Let’s unpack.


🎭 1. It wasn’t a movie — it was a photographed stage show

The Broadway version (which you’re hearing on the album) works because it’s:

  • Big

  • Cartoonish

  • Vaudevillian

  • Audience-aware

  • Played to the back row

That energy in a theater feels electric.

But on film?

It reads as:

  • Overacting

  • Shouting

  • Mugging

  • No emotional grounding

Nathan Lane and Matthew Broderick weren’t bad — they were doing perfect Broadway performances.
But the camera is cruel. It wants inner life, not vaudeville projection.

Result: Critics said it felt “stagey” and “airless.”


🎬 2. They didn’t “open it up” cinematically

Compare:

Stage musical films that worked, and what each did:

  • Chicago: stylized fantasy editing

  • Cabaret: songs motivated by setting

  • West Side Story (’61): real locations, movement

The Producers mostly keeps people:

👉 in rooms
👉 in offices
👉 in rehearsal spaces

Even “Springtime for Hitler,” which should explode into cinematic insanity, is filmed like you're in Row G at the St. James Theatre.

It looks expensive but feels confined.


😂 3. The humor tone confused movie audiences

The humor is:

  • Jewish vaudeville

  • Borscht Belt

  • Meta-theatrical

  • Showbiz satire

  • 1960s Mel Brooks DNA

Broadway audiences:
💡 “We love this theatrical inside-baseball lunacy!”

Mainstream moviegoers in 2005 expected something more like:

  • Shrek humor

  • modern irony

  • rom-com pacing

Instead they got old-school showbiz farce at full volume.

It felt niche, not universal.


🎟️ 4. It followed two giants

The movie had to compete with:

  1. The original 1967 film — edgy, weird, cult classic

  2. The Broadway smash — winner of 12 Tony Awards, theatrical lightning

The film version landed in an awkward middle space:

Each version’s strength:

  • 1967 film: quirky indie satire

  • Broadway: explosive live comedy

  • 2005 movie: neither raw nor electric

It felt like a souvenir, not an event.


🎤 5. Movie musicals were in a fragile moment

Yes, Chicago (2002) had revived the genre…

…but studios still didn’t trust musicals. Marketing didn’t know whether to sell it as:

  • a Mel Brooks comedy?

  • a Broadway adaptation?

  • a musical?

  • a Jewish farce?

When marketing is confused, audiences stay home.


💰 6. Box office math made it look worse than it was

It actually grossed around $38M worldwide (roughly $19M domestic), which isn’t nothing.

But:

  • Budget ≈ $45M

  • Big expectations

  • Big cast

  • Big Broadway pedigree

So it got stamped: “disappointment.”

Not a bomb — just not the phenomenon everyone assumed was guaranteed.


🎶 Why the soundtrack still kills

Because the music and lyrics are the pure Broadway version, where the material is in its ideal ecosystem:

  • Rhythmic comedy

  • Wordplay

  • Orchestration

  • Theatrical timing

Audio-only, your brain supplies the theatrical context.
On film, reality intrudes.


🎯 The core truth

The movie failed because:

It recorded a stage performance instead of reinventing it for cinema.

That’s the cardinal sin of musical adaptation.


🧠 A very Mel Brooks irony

The show is about a producer who misunderstands his audience and miscalculates what will succeed.

The film version… kind of did that.

Which is so perfect it almost feels intentional.


If you want, I can break down why “Springtime for Hitler” is a masterpiece of musical structure, because from a craft perspective it’s diabolically well built.

##
The Broadway soundtrack vs the film soundtrack??

##

Ahhh now we’re in the deep musical-theater nerdery zone and I am delighted. This is a fantastic comparison because it shows exactly why you love the score… and why the film didn’t land the same way.

The difference between the Broadway Cast Album (2001) and the Film Soundtrack (2005) is basically:

Stage version = precision-engineered comedy machine
Film version = same notes, different physics

Let’s break it down.


🎭 1. Performance Style — THE biggest difference

Broadway Album

Nathan Lane & Matthew Broderick are performing:

  • to 1,500 people

  • with live timing

  • riding laughter waves

  • at full theatrical projection

You hear:

  • razor-sharp rhythm

  • explosive consonants

  • punchlines landing like rimshots

Lane especially is doing vaudeville percussion with language.

Film Album

Same actors… but:

  • microphones inches from their faces

  • acting to a camera, not an audience

  • performances scaled down visually

BUT vocally they’re still in Broadway gear — just slightly dampened.

Result:

  • less snap

  • less manic propulsion

  • more “performed” than “detonated”

Comedy songs lose forward momentum.


🥁 2. Tempo & Energy

Stage comedy runs on adrenaline.

Song by song, Broadway feel versus film feel:

  • “The Producers”: breathless, spiraling on Broadway; more measured on film

  • “We Can Do It”: escalating hysteria on Broadway; tidier, safer on film

  • “Betrayed”: live-wire meltdown on Broadway; still great, but less dangerous on film

The Broadway versions feel like they might fall apart.
The film versions feel like they’re being managed.

Comedy thrives on the illusion of chaos.


🎼 3. Orchestration & Sound

Broadway Recording

  • Bright, brassy, punchy

  • Slight “pit band” edge

  • More rhythmic bite

  • Snare hits land like jokes

Film Recording

  • Lusher

  • Smoother

  • More “Hollywood”

  • Less percussive edge

It sounds more polished, but comedy sometimes needs sharp corners.

It’s like sanding down a cartoon.


😂 4. Comic Timing in the Vocals

This is subtle but huge.

On Broadway, jokes land because:

👉 the audience laughs
👉 actors push harder
👉 timing stretches
👉 lines get milked

Even in the studio album, that stage timing muscle memory is baked in.

Film timing is cut to picture. That means:

  • punchlines hit on edit beats

  • pauses are trimmed

  • chaos is regulated

Musical comedy needs elasticity. Film soundtracks often tighten it.


🎤 5. “Springtime for Hitler” — the case study

Broadway version:

  • manic build

  • insane energy ramp

  • chorus sounds like it might blow the roof off

Film version:

  • bigger orchestration

  • visually spectacular

  • but the audio feels… safer

You lose the feeling of a theatrical event spiraling out of control.


🧠 6. Psychology of Listening

When you hear the Broadway album, your brain imagines:

✨ lights
✨ sets
✨ live audience
✨ theater electricity

When you hear the film album, you subconsciously know:

🎥 controlled
🎥 recorded
🎥 retakes
🎥 edited

The danger is gone.


🎯 The real summary

Broadway cast album versus film soundtrack:

  • Volatile vs. controlled

  • Comedic propulsion vs. cinematic smoothing

  • Percussive delivery vs. rounded delivery

  • Theater adrenaline vs. studio polish

  • Feels like an event vs. feels like a recording

💡 The irony

The show is about how a performance that is supposed to fail becomes a hit because it is so explosively theatrical.

The film soundtrack is musically correct… but theatrically contained.

Which is the opposite of what makes the show great.


If you want, we can dissect “Betrayed” — because Lane’s Broadway version is one of the great comic musical performances of the last 50 years, and it shows these differences perfectly.

Friday, January 23, 2026

Benjamin Britten's Journey from Uber-Nerd to Orchestral Classic

How is Britten's Young Person's Guide so whimsical and brilliant, when everything else he wrote was so godawfully turgid? The story behind the story.

Find Britten's Young Person's Guide here:

https://www.youtube.com/watch?v=4vbvhU22uAM


Benjamin Britten, before history packaged him as the greatest tunesmith of orchestral education, was a man dedicated to ideals. Dull ideals.  For Britten, music ought to elevate, refine.  A catchy tune, in his system, ought not be a daily prize, falling like  candy from a piñata. It was something that might occur later, when time permitted, and probably not.  Britten expected that his listeners (like himself) were engrossed when an F minor chord transitioned slowly to G minor.  He didn’t get many dates.

Britten's audiences were thin, while he saw concert halls packed with Saint-Saëns, whose animals cavorted, and Prokofiev, whose wolves had clear management objectives. Their music bounded in, tossed its hat on the bar, and started dancing. Britten’s music entered quietly, removed its coat, and began reading names scratched into the bar. 

No one knows how Britten landed a desperately needed commission to write a piece introducing young people to the orchestra. But they would have rolled their eyes to see him try to base it on one of his dullest early works.

The first out-of-town tryout.   It began with a chord that appeared to have taken a long-term lease. Instruments entered one at a time, each representing another aspect of what the program called “a condition of reflective presence.” A child in the front row asked his teacher if this was the music or if they were waiting for it to start. Another began crawling under the pews towards an unlocked door.

Adults held on bravely and applauded boldly at the end of the first movement, before realizing there were six more. Reviews granted the piece “admirable stillness,” a term associated with Northumbrian lakes and country manors turned into asylums.

Britten was baffled.  Simply invisible to Britten was the canyon separating his turgid chords from Prokofiev’s spinning carousels of harmony.


We now see a seaside tavern after a Saturday evening performance attended by an audience of six. At the bar stood a compact man with quick eyes.  

“You wrote that music then?” he asked.

“For the education of youth,” Britten replied ironically. He fingered a letter in his tweed jacket, a letter from a solicitor firmly requesting the return of the advance.

“Right,” said the man, nodding slowly. “Ever try...a tune?”

Britten explained that he had been too busy attending to harmonic development in the minor keys.

“Yes, but a tune. A TUNE,” the man said. “Eh, now, say, Purcell, didn’t he do that? Lively chap.”

Britten stared. 

Then he thought. 

“A tune…” He stood up. “Purcell?”

Dawn found Britten at his hotel, sheaves of music scattered from bed to bath.  A Purcell theme had entered, upright, cheerful, and hopping from sheet to sheet, shaking hands with flutes, horns, percussion. The final fugue fairly leapt up from the page.




Within weeks, audiences all over London were bursting with animation and demanding encores. Within a month, The Young Person’s Guide to the Orchestra took its place beside Carnival of the Animals and Peter and the Wolf. Children learned instruments without first facing eternity and the grave. Britten was getting more dates.

And somewhere in the orchestral firmament, Purcell could be heard humming, pleased to have been of service to education, romance, and the fugue.

Thursday, January 22, 2026

From NYT: Julian Barnes DEPARTURE(s) 2026

 A Briny Englishman (and Booker Prize Winner) Says Farewell

Julian Barnes writes about illness and love in “Departure(s),” which he claims is his last novel.

 


By Dwight Garner

Jan. 19, 2026

 

DEPARTURE(S), by Julian Barnes


If I were king, or Zohran Mamdani, I’d require that every restaurant in New York City have a lemon table. Julian Barnes wrote about the idea in a short story, “The Silence,” published in 2004. Lemons, he explained, are a symbol of mortality; a lemon table is one where it is “permissible — indeed, obligatory — to talk about death.”


In his work, Barnes has presided over such a plain wooden table from the start. Old age and death, as topics, are preset frequencies on his dial. As far back as his first novel, “Metroland” (1980), the young narrator felt himself to be 65.


“Artists are unreliable, whereas death never lets you down,” Barnes has written. “You would buy shares in death, if they were available.” He probed the “appalling fact” of human mortality most directly in “Nothing to Be Frightened Of” (2008), a brisk, bleak, funny and erudite memoir.


Barnes wrote “Nothing to Be Frightened Of” when he was 62. He just turned 80. This briny English writer, author of “Flaubert’s Parrot” (1984) and a winner of the Booker Prize, for “The Sense of an Ending” (2011), now has a rare form of blood cancer, treatable but exhausting and incurable.


His new book, “Departure(s),” he says, is his last. He’s here to write about his illness and to lay some final logs on the fire. This is a slim and stark testament. Barnes’s prose is largely stripped bare — it resembles a tall ship that, in the face of a storm, has taken down and stored its sails and rigging to better endure punishment.


“Departure(s)” brims with wisdom reluctantly acquired. Barnes’s powers of observation and comment may have diminished, but his appetite for playfulness and detail, for bedrock human stuff, remains unslakable.

 


In the face of awfulness, writing can help. It’s the consolation prize. In the fall of 2008, when his wife was dying of a brain tumor, Barnes got through the horror by spending what time he could at his desk. “Terror and anguish were kept away by writing about terror and anguish,” he writes in “Departure(s).” Writing about his own illness brings similar solace.

 


“Departure(s)” is billed as a novel, and indeed it contains a metafictional love story about two friends Barnes met while at Oxford in the 1960s, and how he helped them reconnect late in life, when they fell in love. These characters interrogate the author. One of them says, wonderfully, “Oh, stop saying wise things that aren’t true.”


“Do not imagine that Art is something which is designed to give gentle uplift,” Barnes once said. “Art is not a brassiere.” Barnes, a lifelong melancholy baby, has rarely been a dispenser, in his work, of happy endings. The woman, Jean, says to him, in a line Barnes might also have uttered: “Happiness doesn’t make me happy.”


The medical portion of “Departure(s)” begins when Barnes comes down with a violent skin condition at the start of Covid. It turns out to be a myeloproliferative neoplasm, a cancer that originates in the bone marrow.


It’s not necessarily a death sentence. He still must floss, as a doctor told Michael Kinsley when he was diagnosed with Parkinson’s disease. But he feels tipped from his orbit. With both Covid and cancer stalking him, he shivers like a cabinet of crystal while tanks are rolling past.


He’s gone for his medical appointment after packing a briefcase for himself, with “chocolate, an apple, the morning’s Guardian for the crossword, a notebook and my iPhone.” He doesn’t like to watch what goes on in examining rooms:


I avoid the needle going into the vein, the scalpel descending towards the eyelid, the catheter being inserted (and extracted), stitches put in here and there, a big toenail being lifted from its moorings, and now the laid-out juicy product which lives inside my bones and is suddenly misbehaving.

 

Worse, after a certain age, “most of your orifices will, one by one, have been medically invaded: ears, nose, throat, eyes (with lasers), bum, cock, vagina.”  It’s the bum invasions, he writes, that somehow keep coming around.


He marvels at England’s tax-funded National Health Service: “It all seems to me astonishingly efficient, and, indeed, a marvel, one of the few things we can really be proud of, and I loathe Johnson, Gove, Cummings and their alt-right U.S. backers for wanting to tear it down.”


Now just another geezer being pushed down a hospital hallway, he wonders, touchingly, if he deserves a lapel badge that reads: “BUT I WON THE BOOKER PRIZE.” We all like a pat of butter next to our names.


Peel the dust jacket off “Departure(s)” and what’s left is a slim, black volume, like a hymnal. Where there was fire, there are now mostly ashes.

 


Every book is, in its way, about memory, but Barnes makes a special effort to interrogate this place where “degradation and embellishment overlap.” There is a good deal about Barnes’s touchstone writer, Proust, and the repercussions of his madeleine.


Barnes is losing his sense of smell, but he wonders “what sudden olfactory key” might open his memories “like a Japanese flower in water.” He suspects it might be “the smell of glue and varnish I used when constructing model aircraft, or the aroma of frying bacon, or that of a damp golden retriever.”


As a culture, we’ve grown cynical at the notion that we are witnessing the “last” of anything — the last tour, the last film and the last episode so rarely are. Still, here’s hoping that Barnes is not yet finished writing. I’d like to meet him, again, at whatever lemon table is available.


It was Ali Smith, in her novel “Summer,” who said it: “Whatever age you are, you still die young.”


DEPARTURE(S) | By Julian Barnes | Knopf | 160 pp. | $27