
The One Human Habit Behind Every AI Brain Wobble

  • Writer: Lynda Elliott
  • 5 days ago
  • 11 min read

Updated: 3 days ago


Why talking to AI like it’s human breaks everything


Some years ago, I was working with an organisation to improve a long, complex regulatory online form for its users. The users were predominantly mature women who were managing residential care homes for the vulnerable. They were incredibly busy handling all of the day-to-day events such a role demands, as well as having to deal with a borderline obscene level of admin.


We created an Alpha product, which we released to a closed group of users. One of the improvements to the form was the ability for users to complete sections at their own pace and save their progress. This aligned perfectly with how they wanted to tackle this onerous task, and we were expecting that this would significantly help these overworked professionals to get this gargantuan form-filling exercise done more efficiently.


I touched base with this cohort regularly to evaluate their progress and get feedback.


The research was trundling along splendidly, until I received an anguished email from one of the participants. She claimed that every time she saved the form, she would return to it and find that it was blank. She had attempted this several times and was on the verge of tears.

This just didn’t make sense to me, so I booked a video call with her and asked her to demonstrate how she was using the form.


She patiently completed the section she had been attempting to save, and then moved to the “File” menu dropdown on her browser. From there, she selected “Save Page As” and saved the file to her desktop.



Her browser faithfully saved the HTML page to her desktop. A pristine, blank version of the form greeted her when she opened it.


It wasn’t clear to her that she needed to hit the “save progress” button on the online form.


I immediately understood the problem. Her mental model was Microsoft Word: you edit your document, you save it, and it stays there, intact, exactly as you intended.


A mental model is the story our brain tells itself about how something works. It’s stitched together from past experience, half-memories, assumptions and shortcuts – and once it’s formed, we act like it’s true.

The feedback from this participant created an interesting design problem for the team to consider, made easier because the UI was a static environment. 


But when you’re dealing with an AI tool, the landscape changes: everything becomes dynamic, probabilistic, and unpredictable. The game changes, and it can really screw with our mental model of how the tool works.



AI & The PowerPoint Catnip Paradox


Picture the scene. It's Friday, mid-afternoon. You’re working on a presentation with a colleague. Both hunched over a laptop, fuelled by coffee and mild dread, staring at a slide deck that looks... fine. Not great. Not tragic. Just fine.


You sigh, gesture vaguely at the layout, and say the most natural, human thing in the world: “Can you make this look cleaner?”


Your colleague nods. They know exactly what you mean. They see the overly enthusiastic bullet points, the text that’s shifted half a millimetre too far to the right. The rogue logo breathing down the footer’s neck.


They know “cleaner” means polish it - don’t reinvent it.


Now replace the colleague with AI. Same request. Same slide.


AI hears cleaner and barrels straight past your intention. Instead of adjusting the layout, it decides the words must be the problem. So it rewrites the entire deck in a painfully bright corporate tone you’d never use, trims away half your nuance because the pattern looked "redundant," and rearranges your bullet points into a new hierarchy that no one asked for.


It might even expand your short, punchy lines into long-winded explanations, or shrink your thoughtful explanation into a single bland sentence, because to AI, “cleaner” often translates as “change the text until the clutter disappears.”


All seasoned (naturally) with a liberal sprinkling of em dashes and random bolded sentences that scream "AI woz here!".


And therein lies the whole cognitive mismatch in one tragically redesigned presentation.


Give AI the same vague instruction you’d give another human, and it transforms into an über-zealous intern who now thinks it's an Information Design guru... and suddenly your modest slide deck reads like it’s prepping for a TED Talk you never signed up for.


We love AI because it feels like a creative superpower. But it goes feral when you don't box it in.


If you say: “Reduce visual clutter by adjusting spacing and alignment only. Don’t change the colours, icons, or text”, suddenly AI becomes your new Bestie.
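
For the terminally curious, the same contrast is easy to reproduce in code. Here's a minimal sketch, assuming the OpenAI Python client and an entirely made-up slide description – the only thing that changes between the two calls is the instruction:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# A made-up slide, for illustration only.
slide = "Title: Q3 update. Bullets: revenue, hiring, roadmap, risks."

vague = "Can you make this look cleaner?"
constrained = (
    "Reduce visual clutter by adjusting spacing and alignment only. "
    "Don't change the colours, icons, or text."
)

# Same system, same slide - only the instruction differs.
for instruction in (vague, constrained):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"{instruction}\n\nSlide: {slide}"}],
    )
    print(reply.choices[0].message.content)
```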


Same system. Different input. Different universe.


When you understand how differently the two systems operate, the "failures" stop looking like failures at all.


We expect a mind. We get a mechanism. The behaviour isn’t wrong. The expectation is. Once you see that, all the weirdness begins to make sense.


And the best place to start is with the “expectation” bit, and why we naturally fall into this trap.



How Our Minds Work


Human cognition is built for mind-reading. We infer intention without trying. We track continuity in conversation. We fill in gaps automatically. We read tone, subtext, emotional shading – instantly. We attribute meaning, motive, memory, identity, even when none of it is stated.


This is how human communication works. It’s intuitive, layered, deeply unconscious and deliciously complex.

So when something speaks language fluently (like AI), our brains snap it into the same category as a human conversational partner. This is called “schema substitution” - a cognitive shortcut where the brain grabs the closest familiar pattern (a schema) and uses it to interpret something new, even if that pattern doesn’t really fit.


We assume it "understands" – that it knows what we mean. We presuppose that it remembers prior context and can "figure out what we’re trying to say." Of course, none of these assumptions are explicit. They’re just how our minds work.


And this is exactly where things begin to break.



Where Humans and AI Actually Align


Before we get into how the AI "mind" works, it’s worth acknowledging something interesting. There are areas where the human mind and AI behave in surprisingly similar ways. Not because they work the same way, or because AI is becoming "like us", but because some of the surface behaviours overlap – and that overlap is what makes the whole interaction feel so natural... and so bloody misleading.


Predilection for Prediction


For example, humans and AI both try to predict what comes next. Humans do it instinctively: finishing each other’s sentences, anticipating tone, filling in the emotional gaps without thinking.


AI does it too, except it does it mechanically, token by token. (A token being a tiny chunk of text that the model reads one piece at a time.)
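
If you're curious what those chunks actually look like, here's a tiny sketch using tiktoken, OpenAI's open-source tokeniser (other models slice text differently, so treat this as one example rather than the rule):

```python
import tiktoken

# The tokeniser used by several recent OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

tokens = enc.encode("Can you make this look cleaner?")
print(tokens)                             # a list of integer token IDs
print([enc.decode([t]) for t in tokens])  # the text chunks the model predicts, one at a time
```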


Different guts, same skin.


We both fill gaps without being asked. If someone leaves a sentence hanging, our brains almost can’t help completing it.


And if you leave part of a prompt vague, AI fills it too – with brazen authority.


Sometimes it’s spot on. Other times it’s so off-piste that it makes AI look like a professional space cadet.

Again, very different architecture, but functionally similar on the surface.


Compression of Meaning


We both compress information: as humans, we don’t store every detail from every conversation. We store patterns, impressions, shortcuts.


AI does something similar using embeddings: numerical maps where related ideas sit close together. It’s not understanding your meaning; it’s navigating patterns. When it responds, it simply picks the nearest cluster and keeps marching in that direction.
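
For the curious, here's a toy sketch of that "closeness" idea, using made-up three-dimensional vectors and cosine similarity (real embeddings run to hundreds or thousands of dimensions, but the principle is the same):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """How closely two vectors point the same way: 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up 3-dimensional "embeddings", for illustration only.
dog     = np.array([0.9, 0.1, 0.0])
puppy   = np.array([0.8, 0.2, 0.1])
invoice = np.array([0.0, 0.1, 0.9])

print(cosine_similarity(dog, puppy))    # high: related ideas sit close together
print(cosine_similarity(dog, invoice))  # low: unrelated ideas sit far apart
```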


So we both lean on patterns. Humans rely on learned scripts: the professional voice, the polite tone, the email template that lives in muscle memory. 


The Confabulation Glitch


And here’s another uncomfortable parallel: we both "hallucinate". 


Humans misremember. We confabulate. We convince ourselves that something plausible is something true. We’re biased little beasties. We see what we expect to see. Sometimes our brain doesn’t misremember so much as it "helpfully" edits reality to fit our assumptions. And that can feel very real to us - even if it’s not objectively true.  


AI does its own version – because it’s compelled to fill the empty space with the most statistically likely continuation. AI does have biases, but they’re not emotional or personal in the way human biases are. The model reflects patterns from the data it was trained on, including the distortions and assumptions already implicit in society.


Constraint as Compass


And finally, we both work better with constraints. Humans think more clearly when someone tells us what the goal is, who it’s for, how long it should be, and what to avoid. AI is exactly the same. It thrives when the direction is specific and unambiguous.


So yes, there is alignment. And it’s part of why interacting with AI feels intuitive... at first. But this is also where the trouble begins, because these surface similarities trick us into believing the underlying machinery is the same. And that misunderstanding – that instinctive assumption – is where things can quickly devolve into AI Brain Burps of Note.

But here's the part many people don’t grasp: the mismatch doesn’t happen because of AI’s "foibles". It happens because we unconsciously expect it to work the way we do. And that’s where all those WTF moments come from.


So let’s talk about the mismatch – and why it matters far more than people realise.



Where Humans and AI Stop Aligning (The Part We Keep Forgetting)


For all the places where humans and AI seem strangely aligned – the pattern-matching, the prediction, the occasional flair for confident claptrap – there’s an entire territory where the resemblance stops abruptly. This is where most of the frustration comes from. We keep relating to AI as if it’s a mind because its output looks mind-shaped...  until it doesn’t. And that moment of divergence is always jarring.


Intention versus Autocomplete


One of the deepest mismatches is that humans think with intention. We’re always aimed at something, even when we don’t realise it. If I’m talking, there’s a goal behind it. If I’m asking a question, there’s a reason. If I’m thinking, I’m trying to get somewhere. 


AI, on the other hand, has no "somewhere." It’s not aiming. It’s not trying. It’s not hoping to succeed. It's not afraid to fail. It’s predicting what comes next, over and over, without any internal sense of where this is supposed to lead. So when it veers off into an unexpected direction, it isn’t being awkward or obtuse – it never had a direction to begin with.


Stakes versus Indifference


Another human advantage is that we understand stakes. If someone asks us something sensitive, our whole demeanour changes automatically. We become gentler. More cautious. More attuned. Humans instinctively calibrate based on how much something matters.


AI doesn’t. It answers everything with the same flat absence of urgency. You can ask it a deeply personal question, or a trivial one about folding laundry, and the emotional weight of the answer doesn’t shift. Not because it’s cold – but because it has no concept of importance.


World Models versus Word Models


Then there’s the way we reason. Humans think in terms of the world: objects, events, memories, interpretations, smells, textures, gut instincts. 


AI thinks in terms of language: sequences, clusters, correlations.

 

Meaning versus Language Patterns


When we speak, the words are the surface layer of a much deeper conceptual structure. 


When AI speaks, the words are the structure. So when it produces something beautifully articulated but fundamentally wrong, it isn’t deceiving you – it simply has no underlying world model to check against. It can’t pause and think, “Hold on, this doesn’t make sense.”


Clarity versus Conjecture


Ambiguity is another divide. Humans absorb vagueness and recover effortlessly. If someone gives us a half-formed instruction, we make educated guesses, read the situation, and fill in what’s missing, often fairly successfully.


AI does the exact opposite. Vague input sends it swirling into an ever-widening ocean of possibilities. It’s easy for AI to fly away with the fairies.


Humans tend to collapse uncertainty into clarity; AI expands uncertainty into conjecture.

This is how a tiny bit of missing detail can send the model light years away from what you actually intended.


Hierarchy versus Flatland


Another important distinction is that humans naturally organise information into hierarchy. We understand what the main point is, what’s an aside, what’s a boundary, what’s emotional colour, what’s the real instruction buried underneath a bit of narrative. AI doesn’t do that. 


Unless you impose your structure rigidly, everything in the prompt appears equally important. This is why it sometimes fixates on something you considered tangential and treats it as the heart of the task.
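
One practical fix is to impose the hierarchy yourself. Something as simple as labelling the parts – "Task: rewrite the summary paragraph. Background (context only – don't act on it): this deck was presented to the board last quarter" – tells the model which line is the instruction and which is just scenery.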


Implicit versus Explicit


There’s also the matter of silence. Humans understand meaning through what isn’t said: tone, pauses, hesitation, the things we leave out deliberately, the subtle shifts in expression. 


AI can only work with what it’s given: If it ain’t in the text, baby, it don’t exist. 


The subtext you think is obvious is invisible to it. The emotional shading you didn’t bother to spell out is just not there. AI cannot perceive the parts you assumed were implied.


Uncertainty versus Completion


And finally, humans can sit with uncertainty. When we don’t know, we pause. We hold the question open. We wait for more information. AI can’t do that – a gap appears and it rushes to fill it, whether it’s right or not. 


Uncertainty is the one thing AI can’t tolerate: it reaches for whatever pattern is statistically closest, even if that pattern is completely wrong. When AI "hallucinates", it isn’t being fanciful... it’s being mechanical. It’s doing exactly what its architecture demands: completing the pattern at all costs.

These mismatches are the real fault lines. They’re the things that create the feeling of “Why is it doing that?” Not because the system is malfunctioning, but because it’s obeying rules that are nothing like the ones humans use to think.


Once you truly get these differences, the entire interaction changes, and your expectations become more aligned with how AI actually works.



A mental model for AI


If we were to create a mental model for what AI actually is, it might look something like a hapless autocomplete engine that somehow ended up conducting a full orchestra. But there’s no sheet music here. Just a useful idiot in a suit, waving the baton, guesstimating which note comes next... but because it has a thousand instruments behind it, the result can sound impressively like a symphony.


This is why AI often feels smarter than it is: the performance is grand, but the mechanism is basic. It isn’t reasoning, interpreting, or aiming for meaning; it’s following statistical cues with dazzling self-assurance, one beat at a time.


More often than not, the melody lands beautifully. But sometimes it hits the wrong note with all the enthusiasm of a trombonist who never got the memo. Not because it’s "flawed", but because it’s still solely calculating what comes next.



The Delicate Art of Becoming an AI Whisperer


I want you to imagine an alien has crash-landed on Earth and, by pure misfortune, has wandered into a hen party in Sheffield, UK, at 11pm on a Saturday night.


Your job is to explain what’s happening - but you are only allowed to use language that describes what the alien can literally see. You cannot use:


  • subtext or innuendo

  • cultural assumptions

  • unspoken social rules

  • any phrase that relies on shared human knowledge or humour

  • metaphor


Try to explain:


  1. The purpose of this event and who it's for

  2. Why all the women are wearing a white gauze object on their head

  3. Why they’re performing strange, repetitive moves beneath a flashing light and a cacophony of electronic noise that makes the alien's innards vibrate

  4. Why there’s only one man in the room, wearing nothing but baby oil and a leopard print thong

  5. Why there are inflatable biological objects floating everywhere

  6. How the alien should behave (and what it shouldn't do)


If you can explain all of that cleanly, literally, and without leaning on implied meaning… you’re already halfway to thinking like an AI Whisperer. If not, here’s the mental checklist I use to frame how I speak to an alien intelligence – with a worked example after the list. Nothing fancy, just the things that stop AI from guessing:


  1. What am I actually trying to achieve? If you don’t know, AI certainly won’t.

  2. What context is missing that a human would automatically infer? Audience, purpose, stakes, voice, format – all the quiet things you didn’t say out loud.

  3. What needs to stay the same? People always forget this one. AI will happily “improve” you or the thing you’re creating into someone or something you don’t recognise.

  4. What mustn’t happen? Define the no-go zones. It listens.

  5. Where might ambiguity creep in? Anywhere the instruction could branch, it will.

  6. Have I told it what to focus on and what to ignore? Without this, everything sounds equally important and it starts free-jazzing.

  7. Have I set the boundaries of tone, scope, and length? Those three alone eliminate most drift and tame AI's habit of producing reams of text long after a human would’ve stopped for air.

  8. And finally: if a stranger walked in off the street, would they know what I meant? Because that’s basically AI: helpful, eager, and completely oblivious.
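
Put through that checklist, Friday’s slide request stops being "make this look cleaner" and becomes something like: "Goal: make this slide easier to scan for a senior leadership audience. Adjust spacing and alignment only. Keep all wording, colours and icons exactly as they are. Don’t add, remove or reorder bullet points. One slide, no extra commentary." Longer to type, yes – but there’s nothing left for the model to guess.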


AI only looks like it’s failing because we really don’t have a dependable mental model for how to use a tool that speaks like us.


Once we stop assuming mind-like behaviour and start giving instructions that match the system’s architecture, the interaction becomes more predictable, more consistent, and far more powerful.


The system doesn’t need to think like us. We need to think clearly for it.



If you’d like a quick-glance version of the clarity framework, I’ve made a one-page AI Whisperer cheat sheet you can download below.




Follow me on LinkedIn



 
 
 
