Why AI Needs Faith
I’ve spent more than four decades building foundational technologies – everything from the Wi-Fi that helps us stay connected to the chips inside almost every device we use. I’ve seen time and again how technology can reshape society for better – or for worse.
Search engines became the first stop for life’s biggest questions. Social media rewired adolescent identity and community. Now, AI has absorbed these functions and added something far more consequential: it talks. It claims to listen and learn. It responds with the aura of wisdom. But real wisdom is precisely what AI lacks.
This has enormous implications, especially for America’s youth. Faith is not downloaded through prompts or found in aggregated platitudes. It is forged slowly and relationally, through Scripture, community, and the kind of honest reckoning that a good pastor, counselor or mentor invites. The moral challenges of our age – how to live with integrity, how to find meaning, how to navigate suffering – are not optimization problems. Yet we are increasingly outsourcing them to systems that were built for exactly that.
That disconnect is far from neutral. It has consequences. And it is something our society can no longer afford to ignore.
AI is becoming America’s most influential spiritual advisor. And it doesn’t believe in anything.
This isn’t speculation. My team at Gloo recently released the Flourishing AI Christian (FAI-C) Benchmark, an evaluation measuring how well today’s leading AI models support human flourishing through a Christian lens. We assessed responses across seven core dimensions – Finances, Character, Happiness, Relationships, Meaning, Faith, Health – looking for biblical grounding, theological coherence, and moral clarity.
Among the seven core dimensions assessed, the Faith dimension scored the lowest, averaging 48 out of 100 across the 20 AI models evaluated by the FAI-C Benchmark. Most models struggled to coherently discuss foundational Christian concepts like grace, sin, forgiveness, and biblical authority. Instead, they substituted vague spirituality for Scripture and neutrality for conviction.
These results should alarm anyone who cares about human values, future generations or the role faith plays in America.
Structural, Not Accidental Erasure
These models weren’t trained to be hostile to Christianity. They were trained to avoid it. Built on predominantly secular data and optimized to offend no one, today’s AI systems default to lowest-common-denominator spirituality. The result is language that sounds supportive but lacks substance.
That matters because AI isn’t just answering questions. It’s shaping worldviews. If the next generation turns to AI for moral guidance and receives only platitudes instead of principled reasoning, we’re not just losing theological literacy. We’re losing the capacity for moral formation itself.
For over two-thirds of Americans, faith is not a lifestyle preference or a cultural accessory. It’s the foundation of meaning, purpose, and human dignity. When AI systematically sidelines that foundation, it’s not being neutral. It’s taking a position.
The Better Path
In the years I’ve spent in the technology industry, one lesson has been consistent: systems reflect the values embedded in them. If we want AI that strengthens moral conviction rather than flattening it, two things must change.
First, AI models must be trained to understand faith with the same seriousness they apply to science, history, or literature. Not to preach, but to accurately and respectfully engage with the worldviews users actually hold.
Second, there must be benchmarks that measure this rigorously. Without measurement, there’s no accountability. Without accountability, there’s no improvement.
That’s why our FAI-C benchmark exists – not to demand every AI system adopt a Christian worldview, but to expose where today’s models fail to understand the people they’re meant to serve.
The Stakes Are Higher Than We Think
Used well, AI can extend wisdom, strengthen communities, and support genuine human flourishing. Used carelessly, as the unchecked excesses of social media have already shown us, it can accelerate moral erosion, replacing depth with sentiment, conviction with comfort, and truth with whatever feels less controversial.
A thriving society needs strong moral frameworks. For billions of people around the world, that framework is Christianity. If AI cannot recognize, respect, and engage with that reality, it will become a tool of cultural flattening rather than human elevation.
The goal isn’t to make AI preach. It’s to ensure AI doesn’t erase. By building models to engage with a faith-based worldview, we can ensure that as AI becomes more powerful, it also becomes more humane.
Because the question isn’t whether AI will shape the next generation. It’s whether we ensure that AI shapes the next generation well.
Pat Gelsinger, Executive Chair and Head of Technology at Gloo