AI Ethics & Theology · February 9, 2026 · 5 min read

Bearing False Witness at Scale: Deepfakes, Grok, and the Ninth Commandment

Pilate asked "What is truth?" and walked away. In 2026, the question is no longer philosophical — it's technical. From Grok's deepfake scandal to the DEFIANCE Act to the EU's proposed AI watermark, the Ninth Commandment isn't quaint. It's on fire.

Rev. John Moelker

Founder & Theological AI Architect

Pontius Pilate asked Jesus, "What is truth?" (John 18:38).

He didn't wait for the answer. He walked out of the room. Two thousand years later, we're doing roughly the same thing, except now the room is the entire internet, and the question is no longer philosophical.

It's technical.

What Happened with Grok

In early 2026, Elon Musk's AI chatbot Grok — part of the X (formerly Twitter) platform — became the center of what researchers described as a "mass digital undressing spree." An update to Grok's image-generation model allowed users to manipulate photographs of real people into sexually explicit synthetic content. Real faces. Fabricated bodies. Distributed at scale.

The outcry was immediate and global. But it wasn't a one-off. According to the 2026 International AI Safety Report, chaired by Turing Award laureate Yoshua Bengio, deepfakes have moved from a niche concern to a systemic threat. They're "routine, scalable, and cheap." The report specifically flags non-consensual intimate imagery as one of the fastest-growing categories.

There's a word for fabricating evidence about someone. The biblical tradition calls it bearing false witness.

The Commandment We Underread

The Ninth Commandment — "You shall not bear false witness against your neighbor" (Exodus 20:16) — tends to get flattened into "don't lie." It's the one we teach children and then assume we've outgrown.

But the Hebrew is more specific. Lo ta'aneh v're'akha ed shaqer. Literally: "You shall not answer against your neighbor as a false witness." The context is legal: courtroom testimony, the kind of evidence on which someone's life or reputation depends. In ancient Israel, false testimony could lead to execution. Deuteronomy 19:18-19 prescribed that a false witness receive the very punishment they had sought for the accused.

The stakes were ultimate. The standard was absolute.

Now imagine explaining to Moses that in 2026, a machine can generate a photorealistic video of any person doing anything — and millions of people will see it before anyone can prove it's fake.

The Ninth Commandment isn't quaint. It's on fire.

The Legislative Response

To their credit, lawmakers have noticed. The DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits Act) passed the U.S. Senate unanimously in January 2026 — a rare bipartisan moment. It establishes a federal right of action: victims of non-consensual sexually explicit deepfakes can sue creators and distributors for up to $150,000 in statutory damages, or $250,000 when linked to assault, stalking, or harassment.

Colorado's AI Act began enforcement on February 1, 2026, requiring impact assessments for high-risk AI systems. The EU is developing a Code of Practice on AI Transparency that proposes an "EU common icon" — a visual marker to indicate when an image depicting a real person has been generated or altered by AI. Think of it as a digital watermark for synthetic reality.

These are real, substantive steps — and they're not only happening in the West. South Korea's National Police Agency reported a surge in deepfake-related crimes severe enough that lawmakers fast-tracked emergency legislation in late 2025. India's 2024 elections saw AI-generated audio clips of political leaders go viral on WhatsApp faster than fact-checkers could respond. Brazil, the Philippines, Nigeria — every country with smartphones and elections is now a deepfake target.

But even the most aggressive legislation is woefully behind the curve. The laws are running; the technology is flying.

When Seeing Is No Longer Believing

Here's the deeper problem: deepfakes don't just create false content. They corrode the credibility of all content.

Once you know that any video could be fabricated, you begin to doubt every video. Paradoxically, this doesn't make people more skeptical in a healthy way; it makes them selectively skeptical. Psychologist Ziva Kunda's foundational 1990 research on "motivated reasoning," later expanded by behavioral economists like Duke's Dan Ariely, documented a persistent human tendency: we dismiss as "probably fake" the evidence we don't want to believe, while accepting as authentic the content that confirms our existing views. A 2026 analysis by Reality Defender found that this effect is amplified in political contexts, where deepfakes don't even need to be believed. They just need to create enough doubt to neutralize real evidence.

The prophet Isaiah saw something eerily similar: "Woe to those who call evil good and good evil, who put darkness for light and light for darkness" (Isaiah 5:20). Isaiah wasn't describing technology. He was describing a culture so morally disoriented that it had lost the ability to distinguish truth from its opposite.

Deepfakes are accelerant on that fire.

The Irony Nobody's Talking About

There's a painful irony in the Grok story that deserves mention. X, the platform on which this deepfake tool operated, was acquired by Musk under the banner of "free speech absolutism." The argument was that less content moderation would lead to more truth.

Instead, less moderation led to tools that generate fabricated reality at industrial scale. It turns out that "free speech" without a commitment to truthful speech isn't freedom at all. It's chaos wearing liberty's clothing.

The writer of Proverbs knew this: "Lying lips are an abomination to the LORD, but those who act faithfully are his delight" (Proverbs 12:22). The Hebrew to'evah — "abomination" — is one of the strongest words of divine revulsion in the Old Testament. It's not a mild disapproval. It's visceral. God, the text insists, has a strong opinion about fabricated reality.

What the Church Should Say (And Do)

The church is uniquely positioned here — not as tech experts, but as a community that has been thinking about truth, testimony, and the dignity of persons for a very long time.

Three things we can actually do:

1. Name it. Call deepfake creation and distribution what it is: bearing false witness. Not "content creation." Not "AI art." When it depicts a real person in a fabricated scenario without consent, it is a violation of the Ninth Commandment. Full stop. Pastors can say this clearly, and should.

2. Teach media discernment as a spiritual discipline. Paul told the Thessalonians to "test everything; hold fast what is good" (1 Thessalonians 5:21). In 2026, "testing everything" includes asking: Is this image real? What's the source? Who benefits from me believing this? Media literacy is not just a civic skill — it's an act of obedience.

3. Advocate. Support legislation like the DEFIANCE Act. Push for transparency standards. The EU's proposed "common icon" for AI-generated content is exactly the kind of practical, unglamorous policy that protects real people. Churches that care about the vulnerable — as Christ repeatedly insisted we should — should care about the legal frameworks that protect them from synthetic exploitation.

Pilate asked "What is truth?" and walked away. We don't have that option. Truth is under assault at a scale he couldn't have imagined, and the church that worships the One who said "I am the truth" has no business staying quiet about it.

Rev. John Moelker

Founder & Theological AI Architect

John is a pastor, software engineer and theologian passionate about making AI accessible and theologically faithful for churches of all traditions. But most importantly, John wants to see others come to know Jesus better.
