Be a Bad Data Point

Terje Rutgersen · 2026 · Opinion / Guide

Most privacy advice is about hiding. Use a VPN. Block trackers. Go invisible. That's fine. It works. But there's another approach that doesn't get talked about enough, and it's more fun.

Don't hide. Be visible. Just be wrong.

Every tracking system, every ad platform, every behavioural analytics pipeline depends on one thing: the assumption that the data it collects is real. That when you click something, you meant it. That your location is where you actually are. That your browsing pattern reflects your actual interests. The whole model falls apart when the data is garbage. And you, personally, can make your data garbage.

How tracking actually works (short version)

I've worked on the professional side of this. Endpoint telemetry, SIEM platforms, behavioural analytics, identity correlation engines. The enterprise versions of these tools are built to profile users across devices, sessions, and networks. The consumer-facing versions (ad tech, analytics, recommendation engines) use the same principles, just pointed at your wallet instead of your security posture.

They all rely on building a profile over time. Individual data points are noisy and unreliable. But enough of them, correlated across enough sessions, produce a picture that's accurate enough to sell. The key word is "enough." The model needs consistency. It needs you to behave like yourself, repeatedly, across platforms.
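Here's a toy illustration of why the model needs that consistency. The categories and numbers are made up, but the mechanic is real: a profile's commercial value tracks how confidently it can name your top interest, and decoy events dilute that confidence directly.

```python
from collections import Counter

def top_interest_share(events):
    """Fraction of events in the most common category: how 'sharp' the profile is."""
    counts = Counter(events)
    return counts.most_common(1)[0][1] / len(events)

# A consistent user: 80 of 100 events in one category.
consistent = ["cycling"] * 80 + ["news"] * 10 + ["cooking"] * 10

# The same user after injecting 100 decoy events across junk topics.
decoys = ["tractors", "ceramics", "dog-grooming", "philately"] * 25
polluted = consistent + decoys

print(top_interest_share(consistent))  # 0.8: easy to monetise
print(top_interest_share(polluted))    # 0.4: half as confident
```

Real profiling models are far more sophisticated than a frequency count, but they all share the weakness: they can't distinguish a decoy event from a genuine one at ingestion time.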

So stop doing that.

Poisoning the well

This isn't hacking. It's not illegal. It's just being unpredictable in ways that cost you nothing and cost data brokers everything.

Search for things you don't care about. Not randomly. Deliberately. Pick a hobby you have zero interest in. Spend ten minutes a week searching for it, clicking results, reading pages. Competitive dog grooming. Vintage tractor restoration. Uzbek ceramics. Whatever. Your ad profile is built on what you search for. Feed it nonsense and the profile becomes useless. A useless profile is worth less money. That's the point.
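You don't even need to type the junk queries yourself. A minimal sketch, assuming DuckDuckGo's standard query-string format (any engine with a `q` parameter works the same way); the topic list is obviously yours to invent:

```python
import random
from urllib.parse import quote_plus

# Decoy topics: pick hobbies you genuinely don't care about.
DECOY_TOPICS = [
    "competitive dog grooming tips",
    "vintage tractor restoration parts",
    "uzbek ceramics glazing techniques",
]

def decoy_search_url(engine="https://duckduckgo.com/?q={}"):
    """Build a search URL for a randomly chosen decoy topic."""
    return engine.format(quote_plus(random.choice(DECOY_TOPICS)))

url = decoy_search_url()
print(url)
# Open it in your everyday (logged-in) browser so the visit lands
# in your real profile, e.g.:
#   import webbrowser; webbrowser.open(url)
```

The point of opening it in your normal browser, not a private window, is that you want this noise attributed to you.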

Use multiple browsers for different contexts. One for shopping, one for reading, one for everything else. Don't log into Google on all of them. This isn't about hiding your identity. It's about fragmenting it. Correlation across sessions is how platforms stitch your profile together. Make the stitching harder.

Rotate your DNS. If you run your own DNS resolver (and you should, it's easier than you think), switch upstream providers periodically. Cloudflare one month, Quad9 the next. Each provider sees a partial picture. None of them get the full timeline. If you want to go further, run Pi-hole or AdGuard Home and blackhole the tracking domains entirely. But even just rotating breaks the continuity that makes DNS-level profiling useful.
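The rotation itself can be a one-liner in cron. A sketch of the idea, deterministic by calendar month so you don't need to store state; the provider list is an assumption, swap in whoever you trust, and feed the output to however your resolver actually takes its upstreams rather than hand-editing /etc/resolv.conf if something else manages it:

```python
import datetime

# Assumed upstream providers and their primary resolver IPs.
UPSTREAMS = {
    "Cloudflare": "1.1.1.1",
    "Quad9": "9.9.9.9",
    "AdGuard": "94.140.14.14",
}

def upstream_for(month: int) -> tuple[str, str]:
    """Deterministically rotate providers by calendar month."""
    names = sorted(UPSTREAMS)
    name = names[(month - 1) % len(names)]
    return name, UPSTREAMS[name]

name, ip = upstream_for(datetime.date.today().month)
print(f"# this month's upstream: {name}")
print(f"nameserver {ip}")
```

Monthly is arbitrary; the only requirement is that no single provider sees your full timeline.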

Lie to location prompts. When an app asks for your location and you don't strictly need it, deny it. When a website asks, deny it. When a service requires a postcode or city to function, give it one that's close enough to work but wrong enough to matter. You don't need to spoof GPS coordinates. You just need to not hand over the real ones for free.

Use throwaway emails. Services like SimpleLogin or AnonAddy let you create aliases per site. When that alias starts getting spam, you know exactly who sold your data. Kill the alias. Create a new one. Every dead alias is a broken link in someone's CRM pipeline, and if enough links break, the pipeline is worthless.
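SimpleLogin and AnonAddy do this as a hosted service, but the underlying idea fits in a few lines if you control a catch-all domain. A sketch, where the secret and domain are placeholders: derive a stable per-site alias, so the moment spam arrives at one, you know exactly which site leaked it.

```python
import hashlib
import hmac

SECRET = b"rotate-me"      # assumption: a private key only you hold
DOMAIN = "mail.example"    # assumption: a domain with catch-all delivery

def alias_for(site: str) -> str:
    """Derive a stable, per-site alias. Spam to it means that site leaked."""
    tag = hmac.new(SECRET, site.encode(), hashlib.sha256).hexdigest()[:8]
    return f"{site}.{tag}@{DOMAIN}"

print(alias_for("shop-a"))
print(alias_for("shop-b"))
```

The HMAC tag means a spammer who harvests one alias can't guess the others, and killing an alias is just a rule in your mail filter.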

Click ads you don't care about. This one is beautiful. Ad networks charge per click. Every click on an ad you'll never convert on costs the advertiser money and teaches the algorithm that you're interested in something you're not. Do it occasionally, not obsessively. You're not trying to commit fraud. You're just being an unreliable signal. Unreliable signals get deprioritised. That's the goal.

Why pollution beats blocking

Blocking is defence. Ad blockers, tracker blockers, VPNs. They work. I use them. But the industry adapts to blocking. They fingerprint browsers. They use server-side tracking. They find new ways around your walls. It's an arms race, and they have more engineers than you do.

Pollution is different. You can't filter out bad data if it looks exactly like real data. A fake search looks the same as a real one. A throwaway email alias behaves like a real address until it doesn't. A wrong postcode passes validation. The system ingests it all because it can't tell the difference. That's the vulnerability. Not in the code, but in the assumption that users are honest.

In security, we call this "poisoning the training data." It's an attack vector against machine learning systems. Turns out it works just as well when the machine learning system is an ad network trying to predict what you'll buy next.

The cost to you

Basically nothing. A few extra clicks a week. A browser you don't log into. An email alias instead of your real address. None of this requires technical skill beyond what a moderately curious person already has. The guides on this site cover the harder parts like running your own VPN and hardening a web server. The data pollution stuff is just behaviour.

The cost to the data economy is cumulative. One bad data point is noise. A thousand is a pattern. A million is a crisis. Every person who makes their data slightly less reliable makes the whole system slightly less valuable. And the system only exists because the data is valuable.

Ten percent

Imagine if ten percent of internet users started doing this. Not all of it. Just some of it. Throwaway emails. A fake hobby in their search history. Location denied by default. What happens to the ad-supported internet when the ads stop working? What happens to the data brokers when the data stops being trustworthy?

I don't know. But I'd like to find out.

Be a bad data point. It's the most legal fun you can have with a browser.