Thursday, February 22, 2024

ChatGPT Aims "Lewis Black" Humor at AI Study of Epic Sepsis Predictor

Kamran et al. in NEJM AI find, and an article by McAlpine in "Futurity" reports, that the Epic EHR sepsis predictor works no better than chance when tested properly, that is, on data available before clinicians have recognized the sepsis and started treatment.

https://ai.nejm.org/doi/full/10.1056/AIoa2300032

https://www.futurity.org/artificial-intelligence-sepsis-hospitals-3182482/ 


I asked ChatGPT to read the two articles and write an essay in the style of Lewis Black.

ChatGPT-4:

Alright, folks, gather around for the comedy roast of the century—where our guest of honor is none other than the Epic Sepsis Model, a high-tech marvel that’s supposed to predict sepsis but turns out to be about as useful as a magic 8-ball on the fritz. Imagine, if you will, a world where we put our trust in a system that’s supposed to be smarter than your average doctor, only to find out it’s got the predictive power of flipping a coin. Heads, you’re safe; tails, you’re in the ICU.

So, the Epic Sepsis Model, huh? This AI tool, which watches over half of the hospital beds in the U.S., cranks out sepsis risk scores like a slot machine, hoping to hit the jackpot. But when it comes to actually predicting sepsis before it's clinically recognized, this so-called "early warning system" is more like a false alarm that goes off after the fire's already been put out.

Researchers at the University of Michigan, bless their hearts, took a deep dive into this digital quagmire. They found that when it really counts—before doctors start treatment—the model's accuracy plunges to the depths of mediocrity. It's like having a weather forecast that only tells you it’s raining after you're soaked. "Oh, you’ve got sepsis? Great, let me adjust my predictions," says the Epic model, after the antibiotics are already coursing through the patient's veins.

The study's findings are like a comedy sketch of medical technology gone awry. Imagine relying on a system that's supposed to outsmart human intuition, only to find it's been eavesdropping on doctors and calling it 'prediction.' It’s the equivalent of me claiming to predict the sun will rise, after peeking out the window at dawn. This AI has the audacity to pat itself on the back, boasting about its accuracy after the fact, while clinicians are left dealing with the real-world mess of false alarms and missed opportunities.

In conclusion, this saga of the Epic Sepsis Model is a cautionary tale of tech hubris, a reminder that when it comes to life-and-death matters, maybe we shouldn’t put all our chips on a digital oracle that’s still learning to tell heads from tails. Here's to the researchers and doctors keeping it real on the front lines, saving lives without the help of a psychic machine.
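
Behind the "eavesdropping on doctors" gag is the study's real methodological point: a score that reacts to clinician actions (antibiotic orders, blood cultures) can look brilliant when evaluated on all the data, yet be a coin flip on the only data that matters for early warning, namely what was known before anyone recognized the sepsis. Below is a minimal synthetic sketch in Python. It is my own toy illustration with made-up numbers, not the study's code and not Epic's model.

# Toy illustration (synthetic data, hypothetical scores): a "predictor" that
# leaks post-recognition signals scores a stellar AUROC overall, but falls to
# roughly chance when restricted to information available before clinicians act.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
septic = rng.random(n) < 0.10            # ~10% of encounters develop sepsis

# Score available before clinicians act: here, pure noise (no real signal).
pre_recognition_score = rng.random(n)

# Score after clinicians act: the "predictor" has seen the antibiotic order,
# so septic patients get a large bump.
post_recognition_score = pre_recognition_score + 0.8 * septic

print("AUROC, post-recognition scores:",
      round(roc_auc_score(septic, post_recognition_score), 3))  # looks impressive
print("AUROC, pre-recognition scores: ",
      round(roc_auc_score(septic, pre_recognition_score), 3))   # ~0.5, a coin flip

The specific numbers are invented; the point is the gap between the two AUROCs, which is the same gap the Michigan team exposed by scoring the model only on pre-treatment data.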