What happens when algorithms mistake patterns for people?

Last month, my 75-year-old neighbour was flooded with crypto ads after clicking on a retirement planning article. The algorithm mistook his financial research for crypto interest. He wasn't an investor. Just a retired veterinarian trying to understand his pension options.
Watching him scroll through endless Bitcoin promotions while searching for actual retirement advice made me realise something unsettling about how these systems really work.
Your feed isn't reading your mind
The algorithm doesn't actually know you. It knows what people like you clicked on yesterday. Every "recommendation" you see has already been tested on millions of others.
Think about Netflix for a second. When it suggests your next binge, it's not intuiting your mood. It's reading data from viewers who watched the same three shows you did. Same thing with Instagram flooding you with fitness content after you follow one trainer.
My friend, a paediatrician, researches childhood diseases for work. Her Instagram now thinks she's a hypochondriac parent. Every ad is for anxiety medication and parenting books about "managing worry." The algorithm can't distinguish between professional research and personal concern.
The "For You" feed isn't actually for you. It's for the version of you that fits into a data cluster. You're grouped alongside thousands of statistical twins you'll never meet.
The brutal math behind the recommendations
Here's what's really happening under the hood. These recommendation systems are optimised for one thing and one thing only: predicting which content will make you engage within seconds. Click, like, share, comment. Anything that signals you're still paying attention.
The system doesn't care if something makes you think for three days. It cares if you react in three seconds.

This creates a feedback loop that's hard to escape. Every click trains the system to serve more of what's already working, which means more of what's already been served. Your feed becomes an echo chamber not because the algorithm is biased, but because it's designed for maximum engagement through minimum variance.
Innovation is unpredictable. Repetition is measurable. The system works exactly as designed - but not for you!
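
If you want to see that loop in miniature, here's a toy sketch in Python - not any platform's actual code, and the topic list and 70% click rate are invented for illustration - showing how engagement-weighted recommendations collapse onto whatever got clicked first.

```python
import random
from collections import Counter

# Hypothetical topic catalogue: every topic starts with an equal weight.
topics = ["fitness", "cooking", "crypto", "travel", "news"]
weights = {t: 1.0 for t in topics}

def recommend() -> str:
    """Pick the next topic, weighted by how often it has already been engaged with."""
    names, w = zip(*weights.items())
    return random.choices(names, weights=w, k=1)[0]

def simulate(steps: int = 500) -> Counter:
    served = Counter()
    for _ in range(steps):
        topic = recommend()
        served[topic] += 1
        # Assumed engagement signal: the user clicks whatever is shown 70% of
        # the time, and every click boosts that topic's future weight.
        if random.random() < 0.7:
            weights[topic] += 1.0
    return served

if __name__ == "__main__":
    random.seed(1)
    print(simulate())  # a couple of topics end up dominating the "personalised" feed
```

Run it a few times with different seeds and the same thing happens: whichever topics win the first handful of clicks soak up almost all the later recommendations. Nothing in the loop asks whether those clicks meant anything.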
When different people become identical users
Three people search "best depression treatment" at 11 PM on a Tuesday.
- Person one: a desperate teenager questioning everything.
- Person two: a journalist fact-checking a mental health story.
- Person three: a parent researching options for their child.

But the algorithm identifies identical behavioural signatures. Evening searches. Medical sites. Multiple tabs open. Treatment comparisons.
The data points are identical. Professional research looks exactly like personal anxiety. To the algorithm, they're the same person. Same digital fingerprint. Same response.
All three receive the same pharmaceutical ads, upbeat wellness podcasts, and one-size-fits-all solutions.
The teenager's urgency, the journalist's scepticism, and the parent's protective instinct all collapse into the same data cluster.
This isn't theory. Researchers demonstrated this by creating 128 fake accounts with completely different interests (cooking videos, tech news, travel content). After sixty days, every single account saw identical feeds. The algorithm had erased their artificial personalities entirely.
Context is invisible to the machine. Intent is irrelevant.
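
Here's a small, hypothetical sketch of that collapse. The feature vectors and cluster names are made up, but the mechanism is the standard one: each session is reduced to a handful of numbers, and identical numbers get assigned to the same audience segment.

```python
import math

# Invented behavioural features for one late-night session:
# (hour of search, medical pages visited, open tabs, treatment comparisons)
sessions = {
    "desperate teenager":       (23, 6, 4, 3),
    "fact-checking journalist": (23, 6, 4, 3),
    "worried parent":           (23, 6, 4, 3),
}

# Two made-up audience clusters an ad system might target.
centroids = {
    "health-anxious consumer": (22, 5, 3, 3),
    "casual news reader":      (20, 1, 1, 0),
}

def nearest_cluster(vector):
    """Assign a session to whichever centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda name: math.dist(vector, centroids[name]))

for person, vector in sessions.items():
    print(f"{person}: {nearest_cluster(vector)}")
# All three land in "health-anxious consumer"; intent never enters the data.
```

The why behind the search never appears in the vector, so it can't appear in the output.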
It's like a party where everyone's been coached
Imagine you're at a party, moving between three different groups. Each one recommends the same book, suggests the same weekend activity, and praises the same travel destination.
You'd quickly realise these aren't authentic conversations. Someone's been coaching them.
That's your personalised feed. Different platforms, identical recommendations, all pretending to know what you need.
A skilled bartender would handle this differently. They'd read your expression, notice your posture, and ask about your day before suggesting a drink. The algorithm only sees "user consumed alcohol-related content" and serves more of the same.
The difference between researching wine for a dinner party and drowning sorrows after a breakup is invisible to pattern-matching systems. That's because both generate identical data signatures:
- Searches for alcohol.
- Time spent on beverage sites.
- Purchase consideration behaviour.
What real personalisation would look like
True personalisation would distinguish between curiosity and crisis. Between research and desperation. Between planning and impulse. It would ask not just what you engaged with, but why you engaged with it.
Instead, we get sophisticated broadcasting systems that whisper our browsing history back to us and call it intimacy.
Your "For You" page isn't for you. It's for the data shadow you cast across the internet. And shadows, no matter how detailed, remain just shadows.
Breaking free from the pattern
Start clicking unpredictably. Follow accounts outside your usual patterns. Search for topics you'd never typically explore. Confuse the algorithm deliberately.
Better yet, recognise that your feed isn't you. It's a statistical guess about you. The real you exists in the spaces between the clicks - in your context, your intent, your actual needs.
That's something no algorithm can capture. Yet.
But at least now you know why your neighbour keeps getting those crypto ads.