Original blog here.
Here’s a Lewis Black-style, black-humor rant built on the “meta-overfitting” PDF, with nods to the two rival LLMs, Notebook LM and ChatGPT-5.
Overfitting: Or, How I Learned to Stop Worrying and Love the Data Tsunami
What follows is that rant recast as the transcript of Lewis on stage, pacing, waving his arms, occasionally about to explode.
[Lewis strides to the microphone, eyebrows halfway to his hairline.]
“You know what I love about AI in medicine? It promises us the moon and then delivers a wet sock. Every single week some bright-eyed genius tells me, ‘We’ve solved precision medicine! We just need a model with ten million variables and ten thousand patients.’ TEN million variables! That’s not data, that’s a statistical crime scene. And they feed it into a GPU the size of Cleveland and—boom—they tell us it predicts SURVIVAL. Survival! Like it’s reading the obituaries in advance.

And then these geniuses wonder… wonder!… why the thing over-fits. It’s like trying to teach your dog calculus by throwing it a steak every time it drools. Of course it learns the wrong lesson.”
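(An aside for the statistically inclined: the crime scene is easy to reproduce at home. The sketch below is my own scaled-down illustration, not anything from the PDF: ten thousand noise features, two hundred “patients,” a coin-flip outcome, and a nearly unregularized scikit-learn classifier that aces the training set anyway.)

```python
# A scaled-down "statistical crime scene": far more features than patients,
# labels that are pure noise, and a model that "learns" them anyway.
# Illustrative sketch only -- not anyone's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_features = 200, 10_000          # p >> n, as in the rant (scaled down)
X = rng.normal(size=(n_patients, n_features))  # ten thousand columns of nothing
y = rng.integers(0, 2, size=n_patients)        # the "outcome" is a coin flip

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression(C=1e6, max_iter=5000).fit(X_tr, y_tr)  # C=1e6: barely regularized

print("train accuracy:", model.score(X_tr, y_tr))  # ~1.0: memorized the noise
print("test accuracy: ", model.score(X_te, y_te))  # ~0.5: back to the coin flip
```

The train score says genius; the held-out score says coin flip. That gap is the whole rant in two numbers.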
[He points a finger at the audience, eyes bulging.]
“We’ve got two AI sages here. In the blue corner: Notebook LM—the yoga instructor of machine learning. It pats us on the head and says, ‘Yes, yes, over-fitting is a concern… but we have multimodal early fusion, calibration, weather metaphors—don’t worry, the future is bright.’

In the red corner: ChatGPT-5—the bitter old stats professor at the end of the bar. It growls, ‘Kid, you’ve got a feature-to-patient ratio that would make a hedge-fund algorithm blush, and your perfect survival label is actually predicting the ZIP CODE of the hospital. Fix. Your. Cohorts.’
So one model wants to hold your hand while you drive off a cliff. The other at least yells, ‘Brake!’ before you hit the guardrail.”
“Everybody’s in love with MOBER. They say, ‘Look! We can align cell lines, PDX, and clinical tumors!’ Yeah, terrific—now all your data are harmonized… into the same statistical ditch.
And SURVPATH! It links image tokens to EMT pathways. They brag, ‘Look how interpretable it is!’ Interpretable? That’s like catching your dog chewing the sofa and congratulating yourself that at least you know why the sofa’s ruined.”
“Don’t even get me started on this ‘path-level forecasting’ craze. Now we’re comparing cancer progression to traffic and weather. Notebook LM swoons—‘We’re modeling the whole journey!’ Great. ChatGPT-5 just mutters, ‘It’s still regularization with a fairy-tale attached. You still have to drive through the storm.’
And here’s the kicker—these people act as if a good saliency map will save them. As if the model’s neon pink heatmap lighting up on a batch effect is proof it’s learning biology. It’s not learning biology—it’s memorizing which hospital still uses a fax machine.”
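(Another aside: the fax-machine joke is a real failure mode, usually called shortcut learning or confounding by site. Here is a toy version I made up for illustration: the outcome is entangled with hospital site, a scanner-artifact column tracks that site, and the model’s heaviest weight lands on the artifact instead of the biology.)

```python
# The "neon pink heatmap" problem in miniature: the label is confounded with
# hospital site, so the model leans on the scanner artifact, not the biology.
# Illustrative assumptions: one artifact column, one weak biology column,
# everything else noise.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1_000
site = rng.integers(0, 2, size=n)     # 0 = hospital A, 1 = hospital B
biology = rng.normal(size=n)          # the signal we WANT the model to learn
# Outcome driven weakly by biology, strongly by case mix:
# hospital B happens to see the sicker patients.
y = (0.3 * biology + 2.0 * site + rng.normal(size=n) > 1.0).astype(int)

artifact = 1.5 * site + rng.normal(scale=0.1, size=n)  # scanner/stain offset per site
noise = rng.normal(size=(n, 8))
X = np.column_stack([biology, artifact, noise])

model = LogisticRegression(max_iter=1000).fit(X, y)
weights = np.abs(model.coef_[0])
print("weight on biology :", weights[0])
print("weight on artifact:", weights[1])  # much larger than the biology weight
```

A saliency map over this model would glow neon pink on the artifact column, and it would look exactly as convincing as a heatmap that had actually learned biology.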
“Benchmarks? Don’t talk to me about benchmarks. Virchow, PRISM—every lab optimizes to the same leaderboard until the model can diagnose the dataset but not the patient. That’s not progress, that’s a circular firing squad.”
[He leans in, voice lowering to a growl.]
“Fukushima’s right about one thing: you can’t just drop a black-box survival predictor into the clinic and pray. You need lab-in-the-loop, real calibration, longitudinal validation, the whole unsexy slog. Because the danger is REAL. It doesn’t go away just because you called it ‘early fusion’ and put it on a slide with pretty arrows.”

[Now shouting again, hands flailing.]

“Look—if your fancy survival model is so smart it predicts the weather, the traffic, AND the patient’s death all in one go… maybe it’s just memorizing the ZIP code! And if THAT’S what you call intelligence… then God help the patients.”

[He throws up his hands, pacing.]

“So, yeah—give me Notebook LM’s optimism to sell the dream, but for heaven’s sake keep ChatGPT-5’s grumpy skepticism in the room. Otherwise you’re not doing precision medicine—you’re just running Vegas for oncologists.”

[He pauses, glares, then growls the closing line.]
“You want my advice? Stop worshipping the model. Start interrogating it. Because if you don’t… the next ‘survival predictor’ you deploy won’t just over-fit—it’ll over-kill.”
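(Last aside, since the rant keeps invoking “real calibration” as part of the unsexy slog: here is what the first step of that slog looks like in code. This is a generic sketch on synthetic data using scikit-learn’s calibration_curve; it assumes nothing about Fukushima’s actual models or pipelines.)

```python
# The unsexy slog, in miniature: before trusting a predictor, check whether
# its probabilities mean anything. Minimal sketch on synthetic data.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(2_000, 20))
logit = X[:, :3].sum(axis=1)   # 3 real signals, 17 distractor features
y = (rng.random(2_000) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Compare what the model SAYS (predicted probability, binned) against what
# actually HAPPENS (observed event frequency in each bin).
prob = model.predict_proba(X_te)[:, 1]
frac_pos, mean_pred = calibration_curve(y_te, prob, n_bins=10)
for p, f in zip(mean_pred, frac_pos):
    print(f"model says {p:.2f} -> reality delivers {f:.2f}")
```

If the model’s stated probabilities and the observed frequencies don’t line up bin by bin, its “survival predictions” are vibes with decimal points.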
Source: Built on Bruce Quinn’s meta-comparison of Notebook LM vs ChatGPT-5 overfitting answers, with references to Fukushima’s MOBER, SURVPATH, calibration, and path-level modeling themes.