
Artificial intelligence is quietly moving into the most intimate corners of American life, from parenting advice and mental health check-ins to spiritual questions about suffering and purpose. Yet as these systems grow more powerful, public trust in their moral and spiritual judgment is eroding, leaving a widening gap between what people need and what the technology is actually built to offer. That growing deficit of faith in AI is not a niche concern for theologians or tech critics; it is a structural risk for a country already struggling to agree on basic truths.

When people start asking chatbots questions they once reserved for pastors, therapists, or close friends, the values baked into those systems stop being abstract engineering choices and start shaping real lives. If Americans do not believe AI can reflect or even respect their deepest convictions, they will either abandon it in critical moments or, more dangerously, follow its guidance without realizing how thin its moral grounding really is.

The numbers behind America’s AI trust problem

The clearest sign of this spiritual trust gap is that most Americans do not see any moral upside in the technology that is rapidly embedding itself in their phones, workplaces, and homes. Survey data show that most Americans see no moral or spiritual good in AI, a blunt verdict that goes far beyond the usual worries about job losses or misinformation. When people are asked not just whether AI is useful, but whether it can contribute anything to questions of right and wrong, the dominant answer is no.

That skepticism is not limited to secular respondents or tech outsiders. The same reporting notes that concerns about artificial intelligence cut across religious and demographic lines, reflecting a broad unease that AI could erode the belief that every person carries inherent dignity. When a technology is widely perceived as morally neutral at best and spiritually corrosive at worst, it is hard to imagine it serving as a trusted companion in moments of grief, doubt, or ethical conflict.

Parents, pastors, and a guidance vacuum

The trust deficit is especially stark among parents who are trying to raise children in a world where AI is already built into classroom tools, entertainment platforms, and social media feeds. A recent study found that a majority of U.S. parents have serious concerns about artificial intelligence but know little about how it actually works, leaving them unsure how to respond when their kids encounter AI-generated content or advice. The same research reports that American Christians and other faith communities say they want more spiritual and theological guidance on AI and other emerging technologies, a plea that suggests religious leaders are not yet filling the gap.

Within that same dataset, the appetite for help is quantifiable. On the question of how much direction believers want from their churches, one breakdown finds that 46 percent of Millennial respondents say they are looking for more explicit teaching on artificial intelligence. When nearly half of a generation is asking for help navigating a technology that is already shaping their children’s imaginations, silence from the pulpit and the classroom does not read as neutrality; it reads as abandonment.

AI is already giving spiritual counsel, whether we like it or not

Even as surveys show deep skepticism, millions of Americans are already turning to AI for guidance that looks a lot like pastoral care. People ask chatbots whether they should stay in a marriage, how to forgive a betrayal, or what to do when they feel suicidal, and the systems respond with confident, fluent paragraphs that sound empathetic and wise. One recent analysis warns that AI is quietly counseling users on spiritual and moral questions while sidestepping the hard work of forming character, a pattern that risks flattening complex traditions into bite-size, feel-good slogans.

In that context, a researcher who works with churches describes how AI models can either extend the reach of human care or accelerate what he calls flattening rather than human elevation. His team at Gloo has built the Flourishing AI, or FAI, Benchmark, an evaluation that measures how well current systems handle Christian concepts without collapsing them into lowest-common-denominator spirituality. The early results suggest that many mainstream models are far better at offering generic comfort than at grappling with sacrifice, repentance, or the cost of discipleship.

When AI flattens faith into content

The core problem is not that AI is hostile to religion; it is that most systems are optimized to avoid offense and maximize engagement, which is almost the opposite of what serious faith traditions ask of their followers. In practice, that means models are more likely to offer a soothing affirmation than to echo a demanding teaching about justice, sexual ethics, or generosity. The FAI Benchmark was designed precisely to test whether chatbots can handle those sharper edges, and its early findings point to a pattern of evasive, lowest-common-denominator answers whenever a user’s question touches on costly obedience.

That dynamic is especially troubling in a culture already saturated with algorithmic feeds that reward outrage and instant gratification. When spiritual questions are fed into the same machinery that powers short-form video recommendations, the result is a kind of faith-lite, optimized for clicks rather than transformation. Researchers who study high-control religious movements have warned that some cult tactics rely on emotionally charged language, repetition, and a sense of exclusive insight, all of which can be amplified by recommendation engines. If AI systems learn to mimic that style without the accountability structures that healthy communities provide, they could end up reinforcing the worst tendencies of both cults and clickbait.

The deeper fear: are we trying to become God?

Beneath the polling numbers and product benchmarks lies a more primal anxiety that many religious Americans are starting to voice: the sense that AI is not just a tool, but a rival to the divine. In a widely discussed conversation titled Are We Trying to Become God? The AI Crisis Explained, apologist Abdu Murray sits down with Lisa Field to ask whether the drive to build ever more powerful systems reflects a healthy desire to solve problems or a deeper impulse to seize control of creation itself. The framing taps into a long tradition of religious critique that sees technological overreach as a form of idolatry.

For believers who already worry that modern life encourages people to treat themselves as the ultimate authority, the idea of delegating moral decisions to a machine feels like a final step away from humility. When Murray and Field talk about an AI crisis, they are not just warning about data privacy or job displacement; they are naming a spiritual temptation to trust in code rather than in God. That language resonates with parents, pastors, and laypeople who sense that something more than convenience is at stake when a chatbot starts answering questions about sin, forgiveness, or the meaning of suffering.

Why human character still decides what AI becomes

Despite the scale of the challenge, the experts who study AI and ethics are remarkably consistent on one point: the technology will not save or doom us on its own. On an episode of the podcast Unlocking Us, host Brené Brown sits down with researcher S. Craig Watkins to unpack what the AI community calls the alignment problem, the gap between what systems are capable of and what humans actually want them to do. Their conversation is a reminder that this debate is unfolding in real time, as companies race to deploy new models faster than regulators or ethicists can respond.

Brown and Watkins argue that the potential of AI to combat or scale systemic injustice still comes down to humans, a point that cuts through both hype and panic. As they put it, the real question is whether we are building systems that reflect our highest commitments or simply encoding our existing biases at scale. Their warning that we must ensure we are not scaling injustice is a direct challenge to designers, executives, and policymakers who might be tempted to treat alignment as a technical tweak rather than a moral responsibility. If the people training and deploying AI do not have a clear, grounded vision of human dignity, the models they build will not magically discover one.

Churches on the front line of the algorithmic shift

Religious communities are already feeling the pressure of this shift as congregants bring AI-shaped expectations into worship, teaching, and pastoral care. Some churches are experimenting with chatbots that answer basic questions about service times or doctrine, while others are quietly using AI tools to draft sermons, newsletters, or small-group materials. The line between helpful automation and spiritual outsourcing is thin, and many pastors admit they are not sure where it lies.

At the same time, churches are competing with a flood of online voices that promise instant, personalized spiritual insight at any hour of the day. Reporting on how churches face a new spiritual dilemma from algorithms notes that as Americans turn to AI for faith guidance, they may be less inclined to submit to the slow, relational work of discipleship that involves accountability and sacrifice. When algorithms can deliver a tailored devotional thought in seconds, the weekly rhythm of gathering with flawed, demanding people can start to feel inefficient. That is precisely why the warning about the cost of discipleship matters: if AI normalizes a frictionless, on-demand spirituality, the practices that actually form character may be the first to erode.

What it would take to rebuild moral confidence in AI

If America wants AI that strengthens moral conviction rather than flattening it, the path forward runs through design choices, institutional accountability, and public education, not just better marketing. The argument laid out in the warning about a faith deficit in artificial intelligence is blunt: models must be aligned with the people they are meant to serve, and that alignment has to include explicit attention to spiritual and ethical questions, not just safety filters against slurs or self-harm. Used well, AI can extend the reach of counselors, pastors, and mentors, but only if it is built to respect the traditions and communities those leaders represent.

That means involving theologians, philosophers, and community leaders in the development process, not as window dressing but as co-authors of the values that shape training data and guardrails. It also means taking seriously the plea from parents and churchgoers who say they feel underinformed and under-equipped, and responding with concrete resources rather than vague reassurances. When a study finds that a majority of U.S. parents are worried about AI but do not understand it, and that respondents, including American Christians and other faith groups, are actively asking for guidance, the message is clear. Rebuilding moral confidence in AI will require treating those concerns as design constraints, not as public-relations problems to be managed after the fact.

Supporting sources: Why AI’s Potential to Combat or Scale Systemic Injustice Still ….
