AI that Glitters is not Gold
- Lynda Elliott

- Jan 5
- 10 min read
Updated: Jan 23

Imagine that - for whatever reason - you decide to register on a dating app called Blunder one cold and windy Friday evening. You’re scrolling through the profiles with a mixture of curiosity and mild anticipation. There’s a mug of herb tea next to you.
You’re happily swiping and evaluating a bunch of profiles, none of which are really jumping out at you.
And then you land on this one:
Alex, 42, London · Coffee · Walking · Clear communication
Looking for a meaningful relationship with someone emotionally available and intentional.
I’ve spent over two decades learning what actually makes relationships work.
Not through theory — through experience.
Most people think dating is about chemistry.
It isn’t.
The real issue is misalignment, unclear intent, and people avoiding the hard conversations.
Here’s what no one talks about 👇
→ Consistency beats intensity
→ Clear communication beats guessing
→ Effort is attractive when it’s sustained
I’ve done the work to understand this so you don’t have to decode mixed signals.
My approach is simple:
→ I show up
→ I say what I mean
→ I don’t disappear when things get real
I believe there is a right way to do this — and it starts with honesty, clarity, and follow-through.
If this aligns, feel free to DM me 👉
Hmm. Some good keywords in there, your Inner Judge points out. They trigger a flicker of interest in your brain, but at the same time, something feels a bit off.
Your Inner Detective detects a faint whiff of intellectual halitosis, but it could be the herb tea. You’re not 100% sure.
Is this a new Love Language you haven’t yet heard of? Has this person ever suffered the grinding despair of heartbreak or the injury of being ghosted by someone they really liked? Or did Alex simply arrive on earth with this gleaming Love Roadmap, fertilised with copious amounts of Love Island and Love is Blind?
Let’s assume you are genuinely looking for a partner. You've invested more than a few minutes crafting your own profile, curating your selfies and optimising your carefully chosen words to appeal to Person Right.
You decide to give Alex a miss and continue swiping.
Next up are a couple of profiles that sound genuine but not really your type, one or two that say nothing much at all, some dubious photos, and a handful that are just plain awkward.
And then you eyeball another profile that sounds eerily like Alex. Okay, different wording, but the same kind of shape. And then you spot another. OMG.
Has catfishing gone full-on corporate? Are these people for real? It’s starting to feel like the lights are on, but no one’s home. What’s with the staccato sentences, those bullet lists?
Who writes like this?
Perhaps you sigh and shut down the app because modern dating has just become a bit too weird. Maybe even a little intimidating in ways you might not have imagined a few years ago.
You get on with your weekend and forget about it for now. You have a week’s free trial, so no great rush here.

Monday morning you flip open your professional networking app, your steaming mochaccino standing by, and scroll through the posts.
You take a sip of your coffee, and a post catches your eye that stops you in your tracks.
Wait … is that Alex from Blunder? It sounds uncannily like the dating profiles from Friday. Only this time you can’t blame the digital bad breath on herb tea.
You begin to wonder if the barista slipped some magic mushrooms into your coffee. But no, you’re not hallucinating. You’ve noticed a pattern.
The tone and structure are the same: confident conclusions delivered without any trace of how they were reached. It doesn’t feel like someone thinking out loud. The posts aren’t challenging or polarising; they end up feeling as if they were composed to invite agreement, not questions. And that isn’t really how we tend to think.
People are increasingly using AI to generate content in networking spaces.
Why is this becoming a thing?

The Cold Kiss of the Machine
When the algorithm shakes hands with AI-generated content, that steely machine embrace raises visibility.
It’s a bit like junk food: quick, tasty, and found on just about every corner. It satisfies for a brief moment but doesn’t deliver much nutritionally.
The algorithm doesn’t give a toot about mental nutrition, but it does look for signals that point to expertise, value probability, relevance, and engagement potential.
AI is good at producing those signals without doing the underlying work. It optimises for fluency, completion and certainty, not for thinking, intellectual hesitation, or judgment.
This raises value probability without raising value depth. AI also tends to use shorter sentences, which works well for mobile. Job done, as far as the algorithm is concerned. The post is amplified and shown to more people.
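To make that concrete, here’s a deliberately crude sketch in Python. Nothing here reflects any real platform’s ranking code (none of that is public); the `surface_score` function, its features and its weights are all invented purely to illustrate how a feed could reward surface fluency without ever measuring substance.

```python
# Purely illustrative: no real platform's ranking logic is public.
# The features and weights below are invented to show how a feed
# could reward surface fluency without ever measuring substance.

def surface_score(post: str) -> float:
    sentences = [s for s in post.replace("\n", " ").split(".") if s.strip()]
    words = post.split()
    avg_sentence_len = len(words) / max(len(sentences), 1)

    # Signals that correlate with engagement but say nothing about depth.
    short_sentences = 1.0 if avg_sentence_len < 12 else 0.0  # mobile-friendly
    scannable_list = 1.0 if ("→" in post or "\n- " in post) else 0.0
    confident_tone = min(sum(post.lower().count(w) for w in
                             ("isn't", "beats", "simple", "the real issue")) / 2, 1.0)
    call_to_action = 1.0 if post.rstrip().endswith(("👇", "👉", "?")) else 0.0

    # Note what is absent: no term for judgment, context, or lived experience.
    return (0.3 * short_sentences + 0.25 * scannable_list +
            0.25 * confident_tone + 0.2 * call_to_action)
```

Run Alex’s profile through something like this and it scores highly on every term, while a hesitant, genuinely reflective post scores lower. That’s the junk-food analogy in miniature: the metric is satisfied long before any nutrition is delivered.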
It’s likely that this is why we’re starting to see posts that don’t show thought process, but sprint from authority markers to important-sounding solutions to some type of call to action.
I’m sure the person creating the post has identified a need, and perhaps they believe they are offering something of value, but AI has a knack of taking human intent and reshaping it into an elegant, eloquent word salad that takes on a life of its own.
In fact, the structure itself isn’t wrong per se. If we look at the TED Talk format, we also see a confident opening, followed by a problem we all recognise. There might be a personal credential or anecdote. Often there’s a neat framework, and always a hopeful resolution.
When it’s done well, that structure can be genuinely nourishing. It’s relatable, human and useful. The arc carries lived experience, uncertainty, risk and resolution. The structure serves the thinking.
But once the structure becomes detachable from lived experience (something AI naturally leans toward), once it can be generated on demand - that same shape can carry very little substance at all.
So that goes some way to explaining why we’re seeing more and more of these posts appearing on our feeds, but it doesn’t explain why people are doing this.
The answer lies in yet another sly way AI exposes human behaviour. Yes, it has to do with cognitive shortcuts and the allure of output that sounds complete and compelling (the “I couldn’t have said it better myself” kind of thing). But I’m going to focus on something we unconsciously do every day, across many different social scenarios - especially professional ones.

Impression Management
Erving Goffman’s Impression Management Theory goes something like this: In professional settings, people naturally shape how they present themselves to fit what the environment rewards. Certain cues read as credible, while others invite challenge or confusion. Over time, people learn which signals travel well and which ones don’t. It’s an unconscious mechanism for controlling one’s reputation.
LinkedIn is a textbook environment for this. In fast-moving, competitive professional spaces, people rarely have the time to evaluate ideas in depth. Most of what we absorb is scanned rather than read, and judgments are made within seconds, based on surface cues.
When something looks impressive, we’re often inclined to assume that the thinking behind it is sound. Clear structure, fluent phrasing, and a decisive tone create a kind of cognitive shortcut - they reduce the effort required to assess credibility.
From the poster’s perspective, this creates a powerful incentive. Producing a piece that looks like expertise is far less demanding than producing one that exposes real thinking. The payoff is immediate, and the costs are low.
It’s important to point out that I don’t think this is manipulation. It’s more likely a very human response to the conditions of the game and the availability of the tool.

The Paradox: How This Undermines the Goal
Most people who share posts like this are trying to do something perfectly understandable. They want to signal that they know what they’re talking about. They want to be taken seriously, and they want their thinking to be recognised.
The paradox is that the very move that helps create that impression in the short term, leaning on AI-generated posts, weakens it in the long term.
Authority, in practice, doesn’t come from sounding certain or well-formed. It comes from judgment exercised in specific situations. It shows up when someone is willing to take a position, name a trade-off, or explain why one path was chosen over another. That kind of authority carries risk, because real decisions can be wrong and invite criticism.
Generative systems are designed to smooth away the uneven edges where context actually matters most. When their output stands in for thinking, judgment becomes abstract and intangible, and context becomes elusive.
This erosion doesn’t necessarily happen all at once. No single post breaks trust universally, but as more writing converges on the same shape, the signal loses its appeal. After a while, the posts start to feel a bit vanilla and templated. Maybe even repugnant.
What once may have felt like insight starts to feel interchangeable. The very thing people are trying to amplify becomes harder to distinguish from all the other cookie-cutter posts trumpeting their authority.

The Longer-Term Effect
There’s another layer to this that unfolds more insidiously.
When AI is used to generate posts that sound smarter or more insightful than what we might have written ourselves in the moment, it doesn’t just change how others perceive us. It also changes what we practise internally.
In the short term, the feedback is rewarding. But over time, a mental association forms: less graft for more validation. That's brain candy from a reinforcement learning perspective.
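For anyone who likes to see the mechanics, that loop is easy to caricature in code. This is a toy sketch using the standard incremental value update from reinforcement learning; the two “actions”, the effort costs and the reward numbers are invented for illustration, and only the update rule itself is textbook.

```python
import random

# Toy caricature of the reinforcement loop described above. The actions,
# effort costs, and reward numbers are invented; only the update rule
# (incremental value estimation) is standard reinforcement learning.

values = {"draft_it_yourself": 0.0, "generate_with_ai": 0.0}
alpha = 0.1  # learning rate

def reward(action: str) -> float:
    if action == "draft_it_yourself":
        # High effort, uncertain social payoff.
        return random.choice([0.0, 0.0, 1.0]) - 0.5   # minus effort cost
    # Low effort, and fluent output reliably collects likes.
    return 1.0 - 0.1                                   # minus effort cost

for _ in range(1000):
    # Mostly repeat whichever habit currently feels more rewarding.
    if random.random() < 0.1:
        action = random.choice(list(values))           # occasional exploration
    else:
        action = max(values, key=values.get)
    values[action] += alpha * (reward(action) - values[action])

print(values)  # "generate_with_ai" ends up valued far higher: brain candy.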
The issue lies in what it's actually training us to do.
Thinking isn’t just the finished position. It’s the hard slog that happens beforehand: deciding what matters, sitting with ambiguity, shaping language around lived experience, translating evidence, and tinkering and teasing your words into something solid that you stand behind. And it leaves you genuinely curious to hear what others think.
When a tool reliably supplies the polished surface, that important middle part gets less exercise.
At first, it can feel like striking gold; the output glitters and the social response delivers a nice little dopamine hit. But in reality, it’s closer to panning for fool’s gold. What looks like insight and performs like authority can displace the underlying work that makes either real.
Over time, judgment can become easier to defer to the machine. Language can start to feel less owned, and eventually, critical thinking can start to feel optional.
The risk here isn’t deception or decline, but rather a gradual drift away from thinking, evaluating, taking a stance, and deciding how much effort we’re willing to invest before publishing something that looks like complete thought. We need to be careful here.

Reframing AI’s Role
Generative AI is efficient at producing representations of knowing. It can muster language that sounds informed, structured and decisive by recombining statistical linguistic patterns at speed. In many contexts, that’s genuinely useful.
What it doesn’t do independently is make judgments: it doesn’t decide what matters in a given situation, it doesn't weigh competing concerns, or take responsibility for the consequences of a position once it’s taken. That’s our province: we still decide which questions are worth asking, what context to include, and where a neat answer is misleading.
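If “recombining statistical linguistic patterns” sounds abstract, here is the idea at its absolute smallest: a bigram model. It’s vastly simpler than a real LLM, and the toy corpus below is invented, but it shows how fluent-shaped text can emerge from pattern recombination with no judgment anywhere in the loop.

```python
import random
from collections import defaultdict

# A bigram model is a drastically simplified stand-in for a real language
# model, but it makes the point: fluent-sounding text can be produced purely
# by recombining patterns, with no model of what matters or what's true.

corpus = ("consistency beats intensity . clear communication beats guessing . "
          "effort is attractive when it is sustained . "
          "the real issue is misalignment .").split()

follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word]) if follows[word] else random.choice(corpus)
    output.append(word)

print(" ".join(output))
# e.g. "the real issue is misalignment . clear communication beats intensity"
# Statistically plausible, confidently shaped, and nobody decided anything.
```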
If we can see it this way, AI isn’t doing the thinking for us. It’s just taking over the presentation. It’s a bit as if we buy the ingredients and do the cooking, and AI does the plating up.
The risk isn’t in AI use itself, but in mistaking fluent output for proper thinking, and in allowing our own voice to drown in the river of words AI spews out.

A Question of Signal, Not Morality
So it’s not about banning AI from our writing, or drawing hard lines between what counts as authentic and what doesn’t. Those debates tend to generate more heat than clarity. If enough people start criticising and disengaging from obvious AI writing, we may find this pattern slinks off into the corner by itself.
What matters here is understanding what different kinds of writing actually signal, especially in environments where attention is scarce and fluency is cheap. Language carries traces of how ideas were formed, not just what they are.
"Show don’t tell" still applies when we’re trying to demonstrate authority.
The open question, I guess, is whether the signals we’re sending with AI-generated posts still line up with what we really want to be recognised for.

At this point, it’s worth saying the obvious.
The dating profile is ridiculous. No one wants to read a manifesto while looking for a partner. Who wants a framework for intimacy or a bullet-pointed approach to emotional availability? If you encountered enough profiles like that, you’d probably do exactly what most people do when something feels a tad stinky: swipe left.
And that’s precisely why I used this analogy.
Dating apps are one of the few places where we’re still highly attuned to how something sounds, not just what it claims.
We’re scanning for tone, presence, vulnerability, humour, contradiction ... all the small human signals that suggest there’s a person on the other side of the screen. When language starts to feel templated and over-polished but empty, we don’t necessarily analyse it in the moment. We simply recoil.
But here’s the point - what’s uncomfortable on a dating app is becoming normalised elsewhere.
On LinkedIn and similar platforms, the same language patterns are now being applauded as “professional”, “thoughtful”, or “insightful”. Or at least, the many likes and comments they attract would certainly imply this.
But the underlying effect is the same. The dating profile exaggerates it, so the Emperor’s invisible cloak is blatantly easy to spot. It strips away the professional camouflage and places machine-shaped language back into a social context, where it becomes immediately obvious how little of it we’d tolerate from a human being.
What’s ultimately being exposed is a growing mismatch between how language is being produced and how humans actually recognise one another in the wild. We’re fluent at detecting presence, and equally fluent at detecting its absence, even if we can’t always articulate why. It’s just instinctive.
It wouldn’t be reasonable to object to AI use in itself. But when its language becomes a proxy for authority, experience, and judgment in places where those things used to be earned through blood, sweat and tears, then we must speak out.
The dating profile looks absurd because it collapses that mismatch in one move. LinkedIn just lets it hide a little longer.



