When Doing the Right Thing Costs: Anthropic, AI Safety, and the Christian Conscience
The U.S. government ordered agencies to stop using Anthropic after the AI company refused to remove safety guardrails. For Christians, this is more than a tech story — it's a conscience story.

Rev. John Moelker
Founder & Theological AI Architect
A Company That Said No
On February 27, 2026, President Trump ordered every U.S. federal agency to cease using technology from Anthropic, the company behind the Claude AI model. The reason? Anthropic refused to remove two safety restrictions from its AI: a prohibition on mass domestic surveillance of Americans, and a prohibition on fully autonomous weapons systems without human oversight.
CEO Dario Amodei wrote that Anthropic "cannot in good conscience accede" to the Pentagon's demands. That phrase — in good conscience — should resonate deeply with every Christian reading this.
What Actually Happened
The Pentagon demanded unrestricted access to Anthropic's AI for "all lawful purposes." Anthropic offered to work within the contract but insisted on maintaining two guardrails: no mass surveillance of civilians, and no lethal autonomous weapons without a human making the final decision. The Pentagon rejected those conditions.
Defense Secretary Pete Hegseth then designated Anthropic a "supply chain risk" — a classification historically reserved for foreign adversaries like Huawei. The contract at stake was worth up to $200 million. Anthropic walked away from it rather than compromise their principles.
In a remarkable turn, OpenAI then negotiated a competing Pentagon deal that included the same safety safeguards Anthropic had requested. Sam Altman publicly confirmed he shared Anthropic's "red lines." The guardrails Anthropic was punished for defending were adopted by their competitor — and accepted by the same Pentagon that rejected them days earlier.
Why This Matters to Every Church
This is not just a technology story. It is a story about what happens when a company chooses conscience over contract — when doing the right thing carries a real cost. And it is a story that should matter to every pastor and church leader using AI tools in ministry.
At ChurchWiseAI, Anthropic's Claude AI powers our voice agent — the one that answers your church's phone at midnight, captures prayer requests, and connects visitors to your congregation. We chose Anthropic deliberately because their commitment to safety aligns with our own. When we read their Responsible Scaling Policy, we saw a company that thinks about the consequences of its technology the way we believe Christians should think about stewardship of power.
The Biblical Case for Conscience
Scripture is clear that there are times when obedience to authority must yield to obedience to a higher standard. The apostles faced this directly:
"We must obey God rather than human beings." — Acts 5:29 (NIV)
This is not a call to lawlessness. Paul instructs believers to submit to governing authorities (Romans 13:1). But the same tradition that produced Romans 13 also produced the Daniels and Shadrachs who refused to bow. Christian ethics has always recognized that compliance has limits — that conscience, formed by Scripture and the Holy Spirit, is not optional.
When Anthropic said they could not "in good conscience" hand over AI capable of mass surveillance without safeguards, they were making a moral argument that Christians should recognize. The power to watch every citizen without oversight is not a neutral capability. The power to make lethal decisions without human judgment is not a technical detail. These are questions of human dignity — and dignity is a biblical concept before it is a legal one.
"So God created mankind in his own image, in the image of God he created them." — Genesis 1:27 (NIV)
Every person surveilled without cause bears the image of God. Every life taken by an autonomous system is a life made in that image. Christians cannot be casual about this.
Standing Firm When It Costs
Anthropic lost a $200 million contract. They were labeled a "supply chain risk" by their own government. They face potential investor fallout as they approach a major IPO. The cost of conscience was tangible and immediate.
But the alternative was worse. Retired Air Force General Jack Shanahan warned that the government's approach "garners spicy headlines, but everyone loses." Senator Mark Warner expressed concern that the decision reflected political motives rather than security ones. And ultimately, the Pentagon accepted the same safeguards from a different company — suggesting this was never about capability. It was about control.
The prophet Micah asked what the Lord requires of us:
"He has shown you, O mortal, what is good. And what does the LORD require of you? To act justly and to love mercy and to walk humbly with your God." — Micah 6:8 (NIV)
Acting justly sometimes means refusing a powerful customer. Loving mercy sometimes means insisting on guardrails that protect the vulnerable. Walking humbly sometimes means accepting the cost of doing right.
What This Means for ChurchWiseAI
Our About page states our core values plainly: Ethical AI — "AI should serve, never replace. Our tools amplify pastoral wisdom, not substitute for the Holy Spirit's guidance. You remain in control — always." And Transparency — "We're honest about what AI can and cannot do."
These are not marketing slogans. They are commitments we renew every time we ship a feature, every time we train a model, every time we choose a vendor. We chose Anthropic because they share these commitments. This week, they proved it at a cost most companies would not pay.
We are proud to build on a platform whose creators chose conscience over compliance. And we commit to doing the same: your church's data will never be used for purposes outside your ministry. Your conversations will never train models you haven't consented to. Your AI will always escalate to a human when the situation demands it.
A Call to Thoughtfulness
We are not asking churches to take a political position on this dispute. We are asking something simpler and harder: be thoughtful about whose technology you trust with your congregation's data, conversations, and spiritual care.
Ask your AI vendor: What are your safety guardrails? What would you refuse to do, even if a powerful customer demanded it? What are your red lines?
If they don't have clear answers, that tells you something. If they do — and they've proven it under pressure — that tells you something too.
"Do not conform to the pattern of this world, but be transformed by the renewing of your mind. Then you will be able to test and approve what God's will is — his good, pleasing and perfect will." — Romans 12:2 (NIV)
The pattern of this world says take the contract, remove the guardrails, maximize revenue. The renewed mind says: some things are worth more than $200 million. Anthropic understood that this week. We pray the church does too.
Rev. John Moelker
Founder & Theological AI Architect
John is a pastor, software engineer, and theologian passionate about making AI accessible and theologically faithful for churches of all traditions. But most importantly, John wants to see others come to know Jesus better.