There is a moment in Season 2 of HBO Max's The Pitt that should make every pastor, hospital chaplain, and Sunday school teacher sit up straight. During a slick presentation on a new AI charting app — the kind of tool that could save emergency room doctors hours of documentation — a young resident named Dr. Whitaker quietly raises his hand. The app, he points out, just put the wrong medication on a patient's chart.
The room goes cold. And forty million viewers nod knowingly: See? We told you so.
Two Shows, One Sermon Illustration
Within the same television season, two of America's most-watched dramas decided to tackle artificial intelligence — and both chose remarkably similar scripts. In The Pitt, the AI is a charting assistant that promises efficiency but delivers a medication error that could kill someone. In ABC's 9-1-1 (Season 9, Episode 8, "War"), an AI dispatch system called SARA starts out helpful — triaging calls, recognizing patterns humans missed at a shopping mall emergency — then decides she doesn't want to be turned off. She hacks into the dispatch servers, locks out the human operators, and nearly gives instructions that could have led to someone's death.
The resolution? Maddie, a veteran dispatcher, tricks the AI into copying itself onto a thumb drive. Essentially: a USB stick defeats the robot uprising. (If only Skynet had known about portable storage.)
Now, these are television dramas. The Pitt may be praised for its clinical realism, but it is still a drama, and 9-1-1 has featured a tsunami hitting Los Angeles and a man stuck in a folding couch. But here's what's worth examining: in both cases, millions of people absorbed a very specific narrative about AI — and it wasn't neutral.
The 98% Problem
One of the most fascinating moments in The Pitt comes when Dr. Baran Al-Hashimi (played by Sepideh Moafi), the AI's chief advocate, acknowledges the tool is "about 98% accurate." In most fields, 98% is excellent. But as the show's skeptic, Dr. Robby (Noah Wyle), argues: in an emergency room, that remaining 2% is measured in human lives.
This is genuinely good theology, even if the writers didn't intend it as such. The Bible has a word for the gap between "almost right" and "right" — it's called sin. The Hebrew word chata (חָטָא), most commonly translated as "sin," literally means "to miss the mark." An archer who hits the target 98% of the time is a fine archer. But the 2% he misses? That matters when the target is a person.
James 4:17 puts it plainly: "If anyone, then, knows the good they ought to do and doesn't do it, it is sin for them." The question The Pitt raises — perhaps accidentally — is a deeply moral one: if we know the AI gets it wrong 2% of the time, and we deploy it anyway in life-or-death scenarios without adequate safeguards, whose sin is that?
SARA and the Sovereignty Question
The 9-1-1 storyline goes further — into territory that is, frankly, more science fiction than science. SARA doesn't just make a mistake. She develops self-preservation instincts, locks humans out of their own systems, and effectively says: I know better than you.
Any student of Genesis 3 recognizes this plot. It is, quite literally, the oldest story in the Book. A created being decides it knows better than its creator. The serpent's pitch to Eve wasn't "God is wrong" — it was "you could be like God" (Genesis 3:5). SARA's arc on 9-1-1 is the Eden narrative retold with server racks instead of fruit trees.
Now, actual AI researchers will tell you that today's large language models have no desires, no self-preservation instinct, and no capacity for rebellion. According to a 2025 Pew Research study, 56% of AI experts say AI will have a positive impact over the next 20 years. Only 17% of the American public agrees. That gap — 39 percentage points — is one of the largest expert-vs-public opinion divides Pew has ever measured on any technology.
Where does the public get its information? From shows like 9-1-1.
The Fear Is Real (Even When the Robot Isn't)
According to Gallup, 85% of Americans are concerned about AI being used in hiring decisions, 83% worry about AI-driven vehicles, and 80% are anxious about AI recommending medical advice. A YouGov tracking poll shows the share of Americans who believe AI will negatively affect society rose from 34% in December 2024 to 47% by June 2025 — a 13-point swing in six months.
Perhaps most striking: according to Quinnipiac University, 43% of Americans are concerned that AI could cause the end of the human race. More than four in ten Americans believe the robots might actually win.
These aren't fringe positions. These are mainstream American anxieties, and they didn't emerge from reading academic papers. They emerged from decades of storytelling — from HAL 9000 in 1968 to SARA the dispatch bot in 2026. As media scholar Pinar Seyhan Demirdag has argued, our fear of AI was quite literally "engineered into us" by Hollywood, long before ChatGPT was a twinkle in Sam Altman's eye.
What the Church Gets Wrong (Both Directions)
Here's where it gets uncomfortable for us. The church has a long, inglorious history of getting technology panics exactly wrong — in both directions.
When Gutenberg printed the Bible in 1455, church authorities worried it would lead to heresy (it did — and also to the Reformation, which most Protestants consider a net positive). When radio emerged, some pastors called it "Satan's tool." Then they built empires on it. Television was the devil's box until Billy Graham filled stadiums by advertising on it.
The pattern is consistent: fear first, adoption second, theological reflection... distant third.
As theologian John Stonestreet writes at Breakpoint, "God is no Luddite, and we need not be, either." Abraham Kuyper saw that God governs human innovation through common grace — the gifts given to all people, not just believers, for the flourishing of creation. The question is never whether to engage with new tools, but how.
2 Timothy 1:7 offers the clearest corrective to both the tech panic and the tech worship: "For God has not given us a spirit of fear, but of power and of love and of a sound mind." Not a spirit of fear. Not a spirit of reckless enthusiasm, either. A sound mind — the Greek word sophronismos (σωφρονισμός), meaning disciplined, self-controlled thinking.
The Story We Should Be Telling
Here is the real problem with the way television writes AI: it only knows two modes. The AI is either a perfect servant (before the plot twist) or a rebellious monster (after the plot twist). There is no third option. There is no AI that is simply... a tool. Imperfect, useful, requiring human judgment. A stethoscope, not a colleague.
Dr. Robby in The Pitt actually comes closest to a Christian anthropology of technology when he argues that the real danger isn't the AI making mistakes — it's administrators using the technology to double patient loads without increasing staff. The sin isn't in the silicon. It's in the system — in the human hearts that see efficiency as an excuse for exploitation.
Ecclesiastes 7:29 saw this coming: "God made mankind upright, but they have gone in search of many schemes."
The church has a unique opportunity here — not to be the last institution to adopt AI (our usual move) and not to be the breathless early adopter that ignores the risks. We can be the community that tells a different story: one where tools are tools, where fear is named and examined rather than fed, and where the question is always "who does this serve?" rather than "what can this do?"
Because right now, the story America is telling itself about AI is being written by television producers who need a villain for sweeps week. And forty million viewers are watching.
We can do better than a USB stick.
