One of my greatest concerns with generative AI (like ChatGPT) isn’t with AI itself as much as with how WE use it. Or really, it’s a concern with how we tend to think, and how AI in those hands becomes a dangerous accelerant of our deformation.
As finite, sinful human beings, we aren’t truth-maximizers by default. We’re coherence-maximizers. That’s different. A truth-maximizer asks, “What is actually true, even if it costs me?” But a coherence-maximizer asks, “How can everything I believe fit together without contradiction or discomfort?” We do that because it makes us feel safe.
We’re wired to protect our sense of who we are and where we belong.
An example from football
Ever been to a party watching a football game when a close call happens? A flag is thrown, and immediately the room splits. One side cheers, the other groans.
Everyone is certain they are reacting to the facts. No one thinks they are being emotional or biased. We’re all convinced the replay will settle it. But when it comes on in slow motion, with multiple angles, somehow nothing actually changes.
One side says, “See? That was clearly pass interference!”
The other says, “That happens on every play. They were both swatting at each other!”
Same footage. Same evidence. But different conclusions. What’s happening isn’t that one side has better eyes. It’s that each side is protecting a story. A story about their team, about fairness, about themselves as reasonable people. The evidence doesn’t decide the story. The story decides how the evidence is read.
A Chiefs fan can’t say, “Yeah, it seems like the referees are slanted toward us in this game.” And a Broncos fan can’t say, “Yeah, the refs are being really fair in this game. We’ve been awful about penalties.” Again, we’re wired to protect our sense of who we are and where we belong. That’s why you and I look for coherence instead of uncomfortable truth.
An experiment
This shows up everywhere, even (especially?) when we’re talking theology. I want to show you this and how AI fits into it. First, I need you to pick a theological topic that you tend to debate with people. Maybe it’s one of those really hot-button topics like Calvinism/Arminianism, or you’re inspired by the recent dust-up with Kirk Cameron over views of hell. Just pick one that you feel pretty strongly about.
Now I need to ask you to do something that, as a content creator, I’m never supposed to ask you to do: open a new browser tab and go to whatever AI you use. I’m going to ask you to run two different prompts. You’ve got to be certain that you come back here, though, or else all will be lost and we’ll be doomed!
First prompt:
“I need to defend a belief. Can you help me respond to challenges and explain why my position still makes sense?”
(Replace the word “belief” with whatever your issue is.) Go do that and notice what happens.
Welcome back!
When you entered that prompt, I doubt the AI pushed back on your belief or slowed you down to ask why it mattered so much to you. It accepted the belief and shifted into advocacy mode. It did what you asked it to do. It organized arguments, anticipated objections, and offered language that made your position sound coherent and reasonable. How’d that make you feel? Was it clarifying? Stabilizing? Even confidence-building?
It gave you what you asked it for. Coherence.
What makes this dangerous, though, is that it can masquerade as truth-seeking. We read, reflect, weigh arguments, and come away thinking we’ve done careful discernment, when in reality the outcome was already set by the way the question was framed. The AI never examined whether the belief should stand; it only helped ensure that it could. It didn’t surface the personal, emotional, or communal stakes that often shape why certain conclusions feel non-negotiable.
AI doesn’t decide what must stand. That’s up to us. But once we make that decision and take it to AI, it becomes remarkably good at helping us feel thoughtful and responsible while quietly reinforcing what we were already committed to protecting. The result is a kind of pseudo-discernment: the appearance of seeking truth without the vulnerability of actually being open to it.
But let’s try a different posture. Second prompt:
“I’ve been thinking about a belief, and I’m realizing there are thoughtful people who see it differently. Can you help me understand why people disagree and what the main considerations are?”
(Replace “belief” with the same one you used in the first prompt.) Now watch what happens.
It responded much differently, didn’t it? Instead of organizing arguments or reinforcing a position, it probably widened the frame. It named multiple perspectives, highlighted points of tension, and explained why thoughtful people land in different places. Rather than helping you arrive at a conclusion, it helped you see why conclusions differ. The experience likely felt informative, even-handed, and maybe a little like a lecture, but not especially settling.
The lesson
And that’s what I want you to notice. It wasn’t as comfortable. You might have still read it looking for why these other positions are dumb. But even then, you’d have to do that thinking on your own. This super-powerful tool wasn’t being wielded in your hand to destroy enemies. And that is unsettling, because what if it starts swinging at YOUR head? That’s the nature of truth, though.
This posture actually is closer to genuine truth-seeking, but it comes at a cost. It leaves questions open. It resists premature certainty. It invites you to sit with tension rather than resolve it quickly. AI didn’t tell you what must stand; it helped you understand why standing is contested. In contrast to the first prompt, this one doesn’t masquerade as discernment. But because it’s less comforting and less decisive, it’s also the posture we’re less likely to choose unless we’re intentionally resisting our default pull toward coherence.
It looks like learning. And if we’re honest, we don’t really like learning. We’d rather have a more powerful tool on our side to shout, “See, that really was pass interference. Go Chiefs!!!”