Church Technology · October 29, 2025 · 15 min read

The Dangers We Must Not Ignore: Church Leaders Sound the Alarm on AI

While 91% of church leaders support AI, prophetic voices issue urgent warnings: loss of humanity, algorithmic manipulation, pastoral replacement, and truth crises. Not fear-mongering—wisdom for navigating AI's genuine risks without abandoning its potential.

Rev. John Moelker


Founder & Theological AI Architect

Wisdom requires understanding both promise and peril



⚖️ THE PARADOX

91% of church leaders support AI in ministry. 82% believe it will make churches more effective.

Yet... some of the Church's most thoughtful leaders are issuing urgent warnings.

These aren't voices to dismiss. They're prophets calling us to wisdom, helping us see what our excitement might blind us to.

Not because they're Luddites or afraid of progress. But because they see dangers that enthusiasm alone cannot overcome.


"An Empty, Cold Shell": The Pope's Personal Rejection

The story is striking in its simplicity. Someone approached Pope Leo XIV with an offer: create an AI version of the pope so people worldwide could have virtual audiences with him. The AI would be trained on his teachings, his speaking style, his pastoral approach. Anyone could ask questions and receive answers as if from the pope himself.

🚫 PAPAL REJECTION

"I'm not going to authorize that."

"If there's anybody who should not be represented by an avatar, I would say the pope is high on the list."

Why? Because he understands something crucial:

"It's very difficult to discover the presence of God" in AI. Technology, no matter how sophisticated, is "an empty, cold shell."

— Pope Leo XIV

The Loss of Humanity

Pope Leo's concern goes deeper than just his personal image. He warns that "extremely wealthy" people are investing billions in AI while "ignoring the value of human beings." The result could be a society where most people are economically superfluous, where human dignity is measured by productivity, where those who can't compete with machines are simply left behind.

"If we automate the whole world and only a few people have the means with which to more than just survive, what kind of society are we creating?" he asks.

This isn't hypothetical. It's already happening.

The Replacement Danger

Adriaan Adams, speaking to African church leaders, emphasized that AI "is not the Holy Spirit." The statement seems obvious, but it addresses a subtle danger: the temptation to let technology do what only God can do.

Adams warns: "Technology in evangelism must always be a tool that leads to genuine human connection, relationship and discipleship. It's a doorway. It is not the destination."

The danger is treating the doorway as if it were the room itself. Using AI for outreach while forgetting that the goal is encounter with Jesus, not just information about Him.

The Substitution Creep

It often happens gradually:

Stage 1: AI helps with administrative tasks. This is genuinely helpful and freeing.

Stage 2: AI assists with content creation. This multiplies impact and saves time.

Stage 3: AI provides pastoral responses. This seems efficient but starts crossing lines.

Stage 4: AI becomes the primary interface between church and community. This is catastrophic.

The Presbyterian Church USA captures the danger: "AI may be able to predict the next best word in a pastoral prayer, but can it really pray for a congregation? Programs trained on massive sets of language can write a sermon in less than a minute, but can they deliver the Word of God?"

The questions answer themselves. But the temptation toward efficiency can blind us to what we're losing.


The Truth Crisis: Deepfakes and Misinformation

🎭 THE DEEPFAKE MOMENT

Pope Leo XIV experienced this personally: a deepfake video showed him falling down stairs. It was so convincing that people asked if he was okay.

His reflection:

"You end up creating a false world and then you ask yourself: What is the truth?"

This is more than an inconvenience. It's an epistemological crisis.

Christianity is a faith grounded in historical reality—Jesus really lived, died, and rose again. But in a world where any reality can be fabricated convincingly, how do people discern truth?

The Temptation to Believe Lies

The problem isn't just that fake content exists. It's that people want to believe it.

"The temptation is for people to believe it, and they believe it because there seems to be a need in some people to receive it," Pope Leo XIV notes. "Why are all these people consuming this fake news? Something is going on there. People want to believe in conspiracies, people want to seek out all these false things, and that is very destructive."

AI doesn't just make misinformation easier to create—it makes it more convincing, more targeted, more emotionally manipulative. And when people are already predisposed to believe lies, AI becomes a weapon against truth itself.


The Data Training Problem

🗑️ GARBAGE IN, GARBAGE OUT?

One critical concern: AI is only as good as its training data.

Ministry leader Andy Morgan explains that large language models were trained on:

  • ❌ Reddit arguments
  • ❌ Facebook comment sections
  • ❌ Random internet forums
  • ❌ Unverified blog posts

Think about that. Your AI ministry assistant was trained on uncurated internet content from sources like Reddit, Quora, and Facebook.

"The goal was to gather the largest possible dataset to help the AI learn statistical patterns of language."

— Andy Morgan

The Bias Problem

This creates multiple dangers:

Theological Bias: AI trained on internet data will reflect whatever theological perspectives are most common online—which may not be orthodox, biblical, or helpful.

Cultural Bias: AI reflects the biases present in its training data, which can perpetuate rather than challenge cultural blind spots.

Accuracy Issues: AI confidently states things that sound right but may be completely false—a phenomenon called "hallucination."

Value Neutrality Myth: AI is not neutral. It reflects the values, assumptions, and biases of both its training data and its creators.

As Morgan emphasizes, AI "needs a discerning mind to know when and how to use it."

The Control Question: Who Decides?

Pope Leo XIV consistently asks: "Who directs [AI] and for what purposes?"

This isn't abstract philosophy. It's about power, justice, and human dignity.

Right now, AI development is largely controlled by a handful of massive corporations driven primarily by profit motives. The ethical frameworks, if they exist at all, are self-imposed and voluntary. The concentration of power is staggering.

The Justice Issue

As the Vatican's document on AI warns, there's real danger "of its misuse for selfish gain at the expense of others, or worse, to foment conflict and aggression."

AI can be used to:

  • Manipulate public opinion
  • Suppress dissent
  • Enhance surveillance states
  • Automate discrimination
  • Concentrate wealth and power
  • Create autonomous weapons

The Catholic social teaching tradition has always emphasized the common good and the preferential option for the poor. But AI development isn't happening with these priorities. It's happening in Silicon Valley boardrooms focused on market valuation and shareholder returns.

The Relational Atrophy

Perhaps the most insidious danger is the most subtle: AI's erosion of genuine human relationship.

As Pope Francis warned before his death, AI risks turning human relations into "mere algorithms." Pope Leo continues this concern, emphasizing that "our human life makes sense not because of artificial intelligence, but because of human beings and encounter, being with one another, creating relationships, and discovering in those human relationships also the presence of God."

The Isolation Factor

Pope Leo warns about technology creating "isolation" and "superficiality, individualism and emotional instability." When our primary interactions become mediated by algorithms, several things happen:

Connection Without Commitment: We can interact with hundreds of people without truly knowing any of them.

Efficiency Over Empathy: Quick algorithmic responses replace the slower, messier work of genuine care.

Performance Over Presence: We curate digital personas rather than showing up authentically.

Consumption Over Community: We consume religious content rather than participate in the body of Christ.

The Developmental Concerns for Children

Pope Leo XIV has expressed particular concern for children and young people, warning about "the possible consequences of the use of AI on their intellectual and neurological development."

This is a critical area where the Church must be prophetic. A generation is growing up with AI-mediated reality as normal. What does this do to:

Critical Thinking: If AI provides instant answers, do children learn to think deeply, question assumptions, and wrestle with complexity?

Attention Spans: When AI delivers content optimized for engagement, do children develop the patience for slow reading, contemplative prayer, or sustained attention?

Creativity: If AI can generate any image or text on demand, do children develop their own creative capacities?

Moral Formation: If AI makes decisions easy (just ask the chatbot), do children develop the moral muscle to wrestle with difficult ethical questions?

Spiritual Discernment: If AI can answer religious questions, do children learn to listen for God's voice, sit with mystery, or navigate doubt?

The Work and Purpose Crisis

The social implications of AI-driven automation are staggering. Jobs that once provided meaning, purpose, and dignity are disappearing. And Christianity has always taught that work has intrinsic value—not just economic value.

Pope Leo XIII, whose name the current pope honors, wrote "Rerum Novarum" about workers' rights during the Industrial Revolution. Pope Leo XIV sees AI as requiring the same prophetic social teaching today.

More Than Economics

This isn't just about unemployment. It's about human purpose and dignity. If AI can do most human work better, faster, and cheaper, what are humans for?

The secular transhumanist answer is troubling: humans are just obsolete technology, to be upgraded or replaced. But Christianity offers a radically different answer: humans are made in God's image, valuable not for what we produce but for who we are.

The danger is that churches, caught up in AI enthusiasm, fail to advocate for this vision of human dignity in the public square.

The Autonomous Weapons Crisis

The Vatican has been explicit: autonomous lethal weapons capable of "identifying and striking targets without direct human intervention" represent an "existential risk" and should be banned.

This is about more than just war. It's about the principle that humans, not machines, must make life-and-death decisions. Moral responsibility cannot be delegated to algorithms.

Yet development of these weapons continues, often funded by the same tech companies developing the AI tools we're using in ministry.

We cannot compartmentalize. We cannot use AI for evangelism while ignoring that the same technology enables autonomous killing machines.

The Question We're Not Asking

Writer Bonnie Kristian poses a challenge: instead of immediately asking "How can we use AI?", we should ask "Should we use this technology at all?"

She invokes Pope John Paul II's teaching about intrinsically evil actions—things that are always wrong regardless of context or intention. She then argues we must also recognize intrinsically evil technologies.

"Just as we can identify inherently evil actions, we must recognize intrinsically evil technologies and inventions," she writes. "Contrary to the popular notion that all technology is neutral and all that matters is how you use it, some technologies cannot be used for any moral purpose."

The Amish Model

Kristian points to the Amish as a model. They "see technology adoption as a choice subject to real review by the tenets of our faith rather than an inevitability."

This doesn't mean rejecting all technology. It means being thoughtful, intentional, and willing to say no when technology threatens core values.

"If most Christians use a new technology without thinking about morality, the church should not celebrate but groan in prayer," she writes.

The Spread of Relativism

Pope Leo XIV identifies another danger: "the spread of patterns of thought weakened by relativism."

AI, optimized for engagement rather than truth, can accelerate relativism. When algorithms show people content that confirms their existing biases, when AI generates "truths" tailored to what people want to hear, when reality itself becomes customizable—objective truth becomes increasingly difficult to maintain.

Christianity has always insisted that truth exists, that God has revealed Himself, that some things are actually right and wrong. But AI-mediated reality can undermine these convictions by making truth feel subjective, personal, and negotiable.

The Spiritual Maturity Question

There's a danger that AI makes Christianity too easy—removing the friction that produces spiritual growth.

Consider:

  • Instant Bible answers without the slow work of study and meditation
  • Generated prayers without the struggle of authentic conversation with God
  • Automated devotionals without the discipline of personal spiritual practice
  • Algorithmic community without the messy work of real relationships
  • Optimized content without the challenge of wrestling with difficult texts

James tells us to "count it all joy when you meet trials of various kinds, for you know that the testing of your faith produces steadfastness" (James 1:2-3). But AI can eliminate the very trials that produce maturity.

The Wisdom Deficit

Perhaps the deepest concern is this: AI gives us more information and less wisdom.

Wisdom in Scripture isn't about data or knowledge. It's about character, discernment, and the fear of the Lord. It comes from walking with God, being shaped by community, learning through experience, and being formed by the Holy Spirit.

But AI can create the illusion of wisdom—instant answers, sophisticated language, apparent insight—without the formation that produces real wisdom.

The danger is that we confuse the two. That we think having access to information is the same as having wisdom. That we believe AI's articulate responses are equivalent to the Spirit's guidance.

The Path Forward: Wisdom, Not Paralysis

These warnings aren't meant to paralyze us or make us reject all AI. They're meant to sober us, to help us engage thoughtfully rather than enthusiastically.

Principles for Heeding Warnings

1. Maintain Healthy Skepticism: Not everything tech companies promise is true. Not every efficiency is good. Not every innovation is progress.

2. Preserve What's Essential: Some things must never be automated, including the sacraments, pastoral presence in crisis, genuine prayer, human-to-human discipleship, and the slow work of formation.

3. Ask Justice Questions: Who benefits from this AI? Who is harmed? Who controls it? Who is excluded? What does this do to the poor, the marginalized, the vulnerable?

4. Prioritize Children's Development: Be especially cautious about AI's impact on children and youth. The formation of the next generation is too important to sacrifice for efficiency.

5. Demand Transparency and Accountability: Don't accept "black box" AI. Understand what data it's trained on, what biases it has, how decisions are made, and who controls it.

6. Build in Human Oversight: Never let AI make decisions without human review, especially in sensitive areas like counseling, theology, or pastoral care.

7. Cultivate Analog Spaces: Create intentional spaces in church life where technology is absent, where silence is valued, and where face-to-face encounter happens without mediation.

8. Teach Discernment: Help people, especially young people, develop the critical thinking skills to evaluate AI output, recognize manipulation, and distinguish truth from fabrication.

The Prophetic Calling

The Church's role isn't to be cheerleaders for technology or cynical rejectionists. It's to be prophetic—speaking truth about both promise and peril, celebrating genuine goods while warning about real dangers.

We must:

  • Celebrate AI's potential to break down language barriers and reach the unreached, while warning about its capacity to erode human relationships and dignity

  • Embrace AI's ability to free us from administrative drudgery, while resisting its temptation to replace genuine pastoral presence

  • Use AI's tools to multiply ministry impact, while refusing to let efficiency trump faithfulness

  • Engage with AI thoughtfully and strategically, while maintaining clear boundaries about what must never be automated

Not Fear, But Wisdom

These warnings aren't rooted in fear. They're rooted in love—love for God, love for people, love for truth, love for the Church.

We issue warnings not to stop progress but to guide it. Not to reject technology but to use it wisely. Not to paralyze the Church but to protect what's most precious.

As Pope Leo XIV reminds us, the Church "is not against technological advances, not at all." But it insists on maintaining "a relationship between faith and reason, and science and faith."

"I think to lose that relationship will leave science as an empty, cold shell that will do great damage to what humanity is about," he warns. "And the human heart will be lost in the midst of the technological development, as things are going right now."

The Call to Vigilance

The dangers are real. The warnings are urgent. But they don't require us to reject AI entirely. They require us to engage vigilantly, thoughtfully, with eyes wide open.

We must be as wise as serpents and as innocent as doves (Matthew 10:16). Wise enough to see the dangers, innocent enough to maintain hope. Shrewd enough to protect what matters, open enough to embrace genuine goods.

The future isn't predetermined. The path AI takes depends on the choices we make now. And the Church has a vital role in making those choices well—not for the sake of efficiency or innovation, but for the sake of human flourishing and God's glory.

May we have the wisdom to know the difference between what can be done and what should be done. And the courage to say no when necessary, even when the whole world is saying yes.


In our final article, we'll explore practical frameworks for implementing AI in your ministry wisely—combining the promise and the warnings into actionable guidance for faithful engagement.


Related Articles in This Series

  1. The AI Awakening: Church Embracing Technology
  2. AI Tools for Ministry: A Practical Guide
  3. The Image of God in an Age of Algorithms: Theological Reflections
  4. The Dangers We Must Not Ignore: Church Leaders Sound the Alarm (you are here)
  5. A Framework for Faithful AI Engagement: Implementation Guide



© 2026 ChurchWiseAI | Seeing Jesus through Wise AI

Rev. John Moelker

Founder & Theological AI Architect

John is a pastor, software engineer and theologian passionate about making AI accessible and theologically faithful for churches of all traditions. But most importantly, John wants to see others come to know Jesus better.

Ready to Add AI to Your Church?

Join churches already using ChurchWiseAI to answer every call, engage every visitor, and free their staff for real ministry.