What happens when algorithms mistake patterns for people?
5 min read

Last month, my 75-year-old neighbour was flooded with crypto ads after clicking on one retirement planning article. The algorithm mistook his financial research for crypto interest. He wasn't an investor. Just a retired vet trying to understand his pension options.

Your feed isn't personal

The algorithm doesn't actually know you. It knows what people like you clicked on yesterday. Every "recommendation" you see has already been tested on millions of others.

Think about Netflix for a second. When it suggests your next binge, it's not reading your mind. It's reading data. Specifically, the "data exhaust" from viewers who watched the same three shows you did. Same thing with Instagram flooding you with fitness content after you follow one trainer.

The platform isn't intuiting your goals. It's just applying a template. And it uses the same template for everyone who exhibits that behaviour pattern. This isn't personalisation; it's demographic profiling, at scale. And you can find plenty of real-life examples all around you.

A friend of mine is a paediatrician who researches childhood diseases for work. Her Instagram now thinks she's a hypochondriac parent. Every ad is for anxiety medication and parenting books about 'managing worry.' The algorithm can't distinguish between professional research and personal concern.

The "For You" feed isn't actually for you. It's for the version of you that fits into a data cluster. You're grouped alongside thousands of statistical twins you'll never meet.

Real personalisation would be different. It would account for context, mood, timing, and your actual intent. Instead, you get algorithmic assumptions based on your digital footprint.

Your feed reflects something interesting: not who you are, but who the algorithm thinks people like you should be. And it seems algorithms are rewarded for predicting clicks but never penalised for intellectual stagnation.

Here's how the algorithm's incentive structure actually works

The math is simple. Brutally simple.

The primary objective of almost every recommendation algorithm on social media is to rank the available content by how likely you are to engage with it. Algorithms are rewarded for predicting clicks, but they never get penalised for intellectual stagnation. The research confirms it (something we probably already suspected!).

And all this sounds reasonable until you think about what that actually means.
This engagement-first approach favours content that gets broad reactions. Immediate reactions. Not content that provides deeper value.
The algorithm doesn't care if something makes you think for three days. It cares if you click, like, or share within three seconds.
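
Stripped down, that objective fits in a few lines. Here's a minimal sketch in Python (the predict_click_probability function is a hypothetical stand-in for whatever engagement model a platform actually trains; this is an illustration, not any platform's real ranking code):

```python
# Engagement-first ranking, reduced to its essentials.
# predict_click_probability(user, item) stands in for a trained model that
# estimates how likely this user is to click this item. Nothing else is scored.

def rank_feed(user, candidate_items, predict_click_probability):
    scored = [(predict_click_probability(user, item), item) for item in candidate_items]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # highest predicted engagement first
    return [item for _, item in scored]

# Note what's missing: no term for "did this make the user think for three days",
# no penalty for serving the same narrow cluster of content again and again.
```

Whatever isn't in that score simply doesn't exist as far as the ranking is concerned.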

This creates a feedback loop. Every click trains the system to serve more of what's already working, which means more of what's already been served. Your feed becomes an echo chamber. And that happens not because the algorithm is biased (though it might be) but because it's risk-averse. Innovation is unpredictable. Repetition is actually measurable.
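
You can watch that loop close in a toy simulation (a deliberately crude sketch: the "model" here is just a click counter, which no real platform uses, but the incentive structure is the same):

```python
import random
from collections import Counter

# Toy feedback loop: the "model" is a count of which topics earned clicks,
# and each round it serves whatever has worked before.
topics = ["cooking", "tech", "travel", "history", "science"]
model = Counter({t: 1 for t in topics})  # start with no real preference

def serve(model, k=3):
    # Risk-averse ranking: show whatever has the best click record so far.
    return [topic for topic, _ in model.most_common(k)]

def simulate_user(feed):
    # A user who clicks roughly 60% of what they're shown, whatever it is.
    return [topic for topic in feed if random.random() < 0.6]

for day in range(30):
    for clicked in simulate_user(serve(model)):
        model[clicked] += 1  # every click trains the system

print(serve(model))  # a month later: the same few topics, every time
```

Topics that never make the first feed never earn a click, so the system never learns that anyone might want them.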

But here's what's really disturbing

The system isn't broken, by the way. It's working exactly as designed. Maximum engagement through minimum variance. You're not getting personalised content. You're getting the statistically safest bet for someone with your click history. The proof is stark. And repeatable.

Researchers deployed 128 social bots on Weibo with zero human intervention. The bots started with completely random interests—one liked cooking videos, another followed tech news, and a third engaged with travel content. After 60 days, all 128 saw identical feeds. The algorithm had erased their artificial personalities.

A 2023 Nature study found that Facebook's algorithm predominantly shows users content from "like-minded" sources, the raw material of echo chambers. Facebook's internal research team (called "Drebbel") reached the same conclusion: the platform's own algorithms push users toward extremes. That tells you something about the problem's seriousness.

Every algorithm optimised for click prediction will converge on the same solution: Serve the content most likely to generate engagement from the largest possible audience segment.

Same digital fingerprint. Same response.

Three people search "best depression treatment" at 11 PM on a Tuesday.

Person one: a desperate teenager questioning everything.

Person two: a journalist fact-checking a mental health story.

Person three: a parent researching options for their child.

Same search, different worlds.

However, the algorithm sees identical behavioural signatures: evening search patterns, extended time on medical sites, and multiple tabs comparing treatment options. Same digital fingerprint. Same response.

All three receive the same generic pharmaceutical ads, upbeat wellness podcasts, and one-size-fits-all solutions. The algorithm cannot distinguish between crisis, curiosity, and concern; it only recognises engagement patterns. Context is invisible to the machine. Intent is almost irrelevant. 

The teenager's urgency, the journalist's scepticism, and the parent's protective instinct all collapse into the same data cluster.

This is the fundamental limitation of pattern-based personalisation. It treats symptoms of behaviour. Not sources of need.
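
In feature terms, the collapse looks something like this (an illustrative sketch; the field names and the canned response are invented for the example):

```python
# Three very different people, one behavioural signature: these fields are
# all a pattern-matching system ever sees.
signature = {
    "query": "best depression treatment",
    "hour": 23,
    "minutes_on_medical_sites": 40,
    "tabs_comparing_treatments": 4,
}

teenager_in_crisis  = dict(signature)   # the urgency is not a field
journalist_checking = dict(signature)   # the scepticism is not a field
worried_parent      = dict(signature)   # the protectiveness is not a field

def respond(behavioural_signature):
    # Intent never appears as a feature, so the response can't depend on it.
    return ["pharmaceutical ad", "upbeat wellness podcast", "one-size-fits-all programme"]

assert respond(teenager_in_crisis) == respond(journalist_checking) == respond(worried_parent)
```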

Think of it this way

Imagine you're at a party. You move between three different groups, and each one recommends the same book, suggests the same weekend activity, praises the same travel destination.

You'd quickly realise these aren't authentic conversations. Someone's been coaching them. That's your personalised feed. Different groups (different platforms), identical recommendations. All pretending to know what you need.

A skilled bartender would handle this differently

Bartenders read your expression, notice your posture, and ask about your day before suggesting a drink. On the other hand, the algorithm only sees "user consumed alcohol-related content" and serves more of the same.

The difference between researching wine for a dinner party and drowning sorrows after a breakup is invisible to pattern-matching systems. That's because both generate identical data signatures:

  1. Searches for alcohol.
  2. Time spent on beverage sites.
  3. Purchase consideration behaviour.

The algorithm treats the symptom (your click) as the complete diagnosis of your need. But the bartender's approach reveals what's missing from algorithmic personalisation entirely.

What true personalisation would look like

It would distinguish between curiosity and crisis. Between research and desperation. Between planning and impulse. It would ask not just what you engaged with but why you engaged with it.

Instead, we get sophisticated broadcasting systems. They've learned to whisper our browsing history back to us. And they call it intimacy.

Your "For You" page isn't for you. It's for the data shadow you cast across the internet. And when it comes to data shadows, no matter how detailed they are, they remain just shadows.

So what can you do?

Start clicking unpredictably. Follow accounts outside your usual patterns. Search for topics you'd never typically explore. Confuse the algorithm deliberately.

Or better yet, recognise that your feed isn't you. It's a statistical guess about you. The real you exists in the spaces between the clicks: in your context, your intent, and your actual needs.

And that's something no algorithm can capture. Yet.