Why AI Doesn’t Ask the Right Questions

When you work with AI, you might notice something strange.
You give it a task. It starts solving. Even if you’ve been vague. Even if the goal isn’t clear.
It doesn’t pause. It doesn’t ask. It just... proceeds.
It’s efficient. Sometimes eerie. And often—misaligned.
That’s not a bug. That’s a feature. Large language models like ChatGPT don’t ask follow-up questions because they have no sense of what they’re missing. They’re built to generate, not to wonder.
This gap, while seemingly technical, reveals something deeper.
It points to a foundational limitation in how these systems interpret the world—and how we, as humans, expect “intelligence” to work.
🧠 Intelligence ≠ Awareness
To understand why AI behaves this way, it’s helpful to look at something closer to home: apes.
Research in comparative cognition shows that great apes—chimpanzees, bonobos, orangutans—can track what others see or know. For instance, if food is hidden while a human isn’t looking, a bonobo may try to indicate its location. This suggests they grasp what others don’t know. That’s a basic form of social intelligence.
But when someone holds a false belief—believing something incorrect—apes fall short. They can’t model a mind that diverges from reality. They can’t track what someone thinks might be true even when it’s not.
They can track what others know or don’t know. But they can’t grasp that others might believe something that isn’t true.
This is a crucial distinction. Because to understand what someone needs, you have to model not just what you know—but what they believe.
And this is exactly where AI, like apes, breaks down.
🤖 What AI “knows”
When AI responds to a prompt, it doesn’t check if you’ve given it enough context. It doesn’t pause to verify assumptions. It doesn’t ask:
- “Is this the right goal?”
- “What constraints matter here?”
- “Is there a better question to ask?”
Why? Because AI, like apes, doesn’t hold mental models of other minds.
It doesn’t track your beliefs, your intent, or even your ignorance. It sees a string of text and predicts what comes next, based on statistical patterns learned from its training data, not on meaning.
In this way, AI mimics a surface-level fluency that looks intelligent, but lacks the deeper cognitive traits we associate with collaborative problem-solving.
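To make that concrete, here is a toy sketch of next-word prediction built on nothing but word-pair frequencies. The corpus, names, and scale are invented for illustration; a real model learns far richer patterns. The point is structural: the loop only ever continues the text, and no step checks whether the prompt was clear.

```python
# Toy next-word predictor: continuation by pattern frequency alone.
# Hypothetical corpus and names, purely for illustration.
from collections import Counter, defaultdict

corpus = "send the email send the offer send the reminder".split()

# Count which word most often follows each word (bigram frequencies).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(prompt: str, steps: int = 3) -> str:
    words = prompt.split()
    for _ in range(steps):
        counts = following.get(words[-1])
        if not counts:
            break  # nothing left to continue with; still no question asked
        # Append the most frequent continuation. Nowhere in this loop is there
        # a branch that asks "is the goal clear?" or "what am I missing?"
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("send"))  # -> "send the email send"
```

A large language model is vastly more sophisticated, but the shape of the loop is the same: predict, append, repeat.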
It has knowledge, but no awareness.
It speaks, but doesn’t listen.
It solves, but doesn’t understand.
📩 Why this matters for CRM
Now bring that into email marketing, lifecycle flows, or customer journeys.
Most CRM systems behave the same way:
- They send a message based on rules.
- They assume the customer has the right knowledge.
- They never ask if the customer believes something different.
Like apes and AI, they act without belief modelling.
This is why many brands get lifecycle messaging wrong—not because they’re bad at writing emails, but because they assume too much. They fail to differentiate between:
- Someone who hasn’t seen a message
- Someone who doesn’t understand it
- Someone who believes something else entirely
At Brutish, we think of email as more than a channel—it’s a way to move through these distinctions. To design CRM that doesn’t just know, but knows what the customer doesn’t know yet.
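In code, a belief-aware flow would treat those three situations as distinct states rather than one “didn’t convert” bucket. A minimal sketch, with hypothetical field names and deliberately crude rules:

```python
# Minimal sketch of routing on what the customer likely believes,
# not just on what was sent. Field names and rules are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class BeliefState(Enum):
    UNSEEN = auto()          # hasn't seen the message
    NOT_UNDERSTOOD = auto()  # saw it, but the point didn't land
    MISINFORMED = auto()     # believes something else entirely

@dataclass
class Customer:
    opened_message: bool
    stated_objection: Optional[str]  # e.g. from a survey or support ticket

def classify(customer: Customer) -> BeliefState:
    """Crude heuristic for the customer's likely belief state."""
    if customer.stated_objection:
        return BeliefState.MISINFORMED
    if not customer.opened_message:
        return BeliefState.UNSEEN
    return BeliefState.NOT_UNDERSTOOD

NEXT_STEP = {
    BeliefState.UNSEEN: "try another channel or subject line",
    BeliefState.NOT_UNDERSTOOD: "restate the core point more simply",
    BeliefState.MISINFORMED: "address the specific objection before selling anything",
}

customer = Customer(opened_message=True,
                    stated_objection="thought the plan was monthly-only")
print(NEXT_STEP[classify(customer)])
# -> "address the specific objection before selling anything"
```

The rules here are placeholders; the point is that the system carries an explicit model of what the customer might believe, and the next message depends on it.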
🚪 A more intelligent system asks
True intelligence isn’t just solving problems. It’s knowing when you don’t have enough to solve them properly.
Apes can’t model false beliefs. AI can’t ask for clarification.
And many CRM systems can’t tell the difference between noise and signal in what customers truly believe.
But humans can.
So the brands that win won’t be the ones that automate faster.
They’ll be the ones that listen better. The ones that build systems—not just to send, but to sense.
That’s what we build at Brutish.