Published Apr 30, 2026
Should you leave red herrings about yourself online?
Short answer: for most people, no. Planting fake jobs, cities, and life details all over the web is a weak default. It rarely wins against systems that ingest public records, commercial data, and whatever you already leaked. It can confuse you on recovery questions, create collateral hassle, and still leave the real trail intact.
The idea is easy to sympathize with. Privacy guides and OSINT-minded writers sometimes suggest muddying the water so data brokers and search aggregators end up with a mess instead of a clean dossier. Michael Bazzell’s Extreme Privacy line of thinking is one place that mindset shows up. The instinct is not silly. The execution usually is, unless your threat model is narrow and you treat deception as a small, controlled tactic.
To decide what is worth doing, it helps to separate three things people blur together:
1. Pseudonyms and compartments (different name or handle, different email, keep those worlds from touching).
2. Broad fake personal facts (invented employers, cities, birthdays sprinkled across profiles and forums).
3. Targeted decoys (honeytokens and canaries that fire when someone touches something they should not).
(1) and (3) often make sense. (2) is what this article argues against as a default lifestyle.
What “red herrings” usually means here
People mean: leave enough false trails that an automated profile or an amateur OSINT sweep picks up noise instead of signal. Maybe a fake hometown on an old forum, a junk LinkedIn-adjacent crumb, or made-up “about me” text somewhere indexable.
The imagined adversary is often a people-search site, a marketer’s graph, or a stranger with Google and patience.
The appeal
Real dossiers are built from scraps. The Federal Trade Commission explains that people-search companies compile reports from other brokers, public social posts, and government public records. Starting from something small (a name or phone), a buyer can get a report with age, past addresses, associates, and more.
If the machine is correlation-hungry, maybe garbage in means garbage out. That hope is the whole pitch.
Where the tactic usually breaks
Strong sources beat weak fiction. Brokers do not only scrape your Substack and your forum signature. They buy and merge feeds. The FTC notes public-record inputs can include property records, voter files where applicable, licenses, and court filings. A fake bio on a hobby site does not delete your deed history, and it cannot claw back the data you already handed over, with consent, by signing up for services.
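The merging logic is easy to caricature. Here is a hypothetical sketch (source names and weights invented for illustration, not any broker's actual method) of why a planted fact from a weak source loses to corroborated public records:

```python
from collections import defaultdict

# Invented trust weights: how much a hypothetical merger trusts each feed.
SOURCE_WEIGHT = {"property_deed": 0.9, "voter_file": 0.8, "hobby_forum": 0.2}

def resolve_field(observations):
    """Pick the value with the highest total source weight.

    observations: list of (value, source) pairs for one profile field.
    """
    score = defaultdict(float)
    for value, source in observations:
        score[value] += SOURCE_WEIGHT.get(source, 0.1)
    return max(score, key=score.get)

city_claims = [
    ("Springfield", "property_deed"),  # strong public-record input
    ("Springfield", "voter_file"),     # corroborates the deed
    ("Gotham", "hobby_forum"),         # your planted red herring
]
print(resolve_field(city_claims))  # → Springfield: the fake loses
```

The point of the toy: your lie is one low-weight observation competing against feeds that corroborate each other.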
You are outgunned on scale. You might plant ten lies. The ecosystem has automated refresh, many secondary sites, and years of accumulated transactional data. Opting out can push your file down; it does not turn the internet into a clean slate. The FTC also warns that after opt-out, information can reappear when public records change, and it may still show up through relatives or neighbors.
The data is already dirty. Commercial profiles often mix fact and error without asking you. In one Brennan Center piece on broker data, the author described LexisNexis Accurint acknowledging that it does not verify its data and that consumers cannot simply correct errors, and Thomson Reuters data containing odd inaccuracies. If the baseline is noisy, adding more noise is not clearly an upgrade. It can even help a sloppy system “confirm” a wrong story because another negligent source repeated it.
Serious adversaries do not stop at Page 1. Nation-states, litigation, dedicated harassers, and well-funded investigators use records, legal process, financial footprints, and interpersonal graph data. Hobby disinformation does not harden you against that tier. It mostly changes what low-effort search aggregations say, sometimes.
Costs you might not budget for
Account recovery and identity checks. Security questions, “verify your previous address” flows, and support tickets are built for consistency. The EFF notes recovery answers can be mined from social details and suggests false answers stored in a password manager. That is a controlled lie with a system behind it. Random old fibs scattered online are not the same. They are debt you forget until you are locked out.
Self-doxxing through inconsistency. If you reuse patterns, links, photos, or usernames across supposed compartments, the fake story and the real one collapse into one graph anyway. EFF guidance stresses keeping profiles separate: separate email, avoiding phone numbers where possible, and not reusing photos that tie accounts together.
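A toy illustration, with invented account data, of how a single reused selector stitches compartments back together:

```python
from collections import defaultdict

# Hypothetical accounts across supposedly separate compartments. Any shared
# selector (username, avatar hash, linked site) joins the compartments.
accounts = [
    {"id": "real-blog",  "username": "jdoe84", "avatar": "hash_a"},
    {"id": "anon-forum", "username": "jdoe84", "avatar": "hash_b"},
    {"id": "work-site",  "username": "j.doe",  "avatar": "hash_a"},
]

def linkable_groups(accounts, selectors=("username", "avatar")):
    """Group account ids that share any selector value."""
    by_value = defaultdict(set)
    for acct in accounts:
        for key in selectors:
            by_value[(key, acct[key])].add(acct["id"])
    return [ids for ids in by_value.values() if len(ids) > 1]

for group in linkable_groups(accounts):
    print(sorted(group))  # the reused username and reused avatar each link a pair
```

The reused username ties the "anonymous" forum account to the real blog, and the reused avatar ties the blog to the work site, so all three collapse into one graph.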
Forms, banks, and fine print. A joke city on a forum is not the same as a false line on a government form or a loan application. The practical line is: harmless theater stops where a statement gets treated as a formal declaration.
Harm to bystanders is rare but real. Invented addresses or phone blocks can land on real people. Invented employers can point at real small businesses. If your red herring is another person’s nuisance, you traded their time for your comfort.
What tends to work better
Start with threat modeling in plain terms: what you protect, from whom, how bad failure is, how much hassle you will accept. Without that, dramatic tactics jump ahead of basics.
Then prefer:
Less submission, not more performance. Every signup is a data event. The Privacy Rights Clearinghouse notes that many everyday actions can end up in broker-derived listings because records and surveys proliferate.
Opt-out hygiene where it matters. The FTC describes DIY and paid approaches and warns that removal from people-search sites does not erase government public records. Repeat checks matter because data can return.
Pseudonyms where the platform and your life allow it. A pseudonym is a chosen public name that is not wired to your everyday identity. That is different from maintaining a contradictory trail of alleged facts under your real name.
Compartments: distinct email, distinct payment method when feasible, distinct browser profile or device for sensitive work, and discipline about not cross-posting links that stitch worlds together.
Security questions: use random answers stored in a password manager, per EFF, not a second fake life you will forget.
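For example, a minimal sketch (Python standard library only) of generating such an answer; the output goes into the password-manager entry, not your memory:

```python
import secrets
import string

def random_security_answer(length=24):
    """Generate a random, unguessable 'answer' for a security question.

    The string carries no personal information, so it cannot be mined
    from social posts, and the password manager remembers it for you.
    """
    alphabet = string.ascii_lowercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Use as the "mother's maiden name" answer, then save it in the
# password-manager record for that account.
print(random_security_answer())
```

Unlike scattered fibs, this lie is stored, retrievable, and scoped to one account.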
When targeted deception does make sense
Canarytokens and similar decoys are for detection, not for cosplaying a second life. Canarytokens are tripwires: things nobody legitimate should touch, so an interaction is a signal. Use small planted artifacts with a defined purpose and an alert path.
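As a sketch of the pattern, not any particular product's API: mint a unique token, plant it somewhere nobody legitimate should touch, and treat any appearance in your logs as an alert. (Token format and log lines are invented for illustration.)

```python
import secrets

def new_canary_token():
    """Mint a unique token to embed in a decoy file name, URL, or record."""
    return f"canary-{secrets.token_hex(8)}"

def check_access_log(log_lines, token):
    """Return the log lines that touched the decoy: each hit is an alert."""
    return [line for line in log_lines if token in line]

token = new_canary_token()
log = [
    "GET /index.html 200",
    f"GET /backups/{token}.xlsx 200",  # nobody legitimate requests this
]
alerts = check_access_log(log, token)
print(len(alerts))  # → 1
```

In practice the alert path would be email or a webhook rather than a log scan, but the shape is the same: deception sized for detection, with a defined trigger.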
Limited operational cover also belongs here: a throwaway email for one project, a press alias that colleagues know is a mailbox, a PO box that is a real mailbox but not your bedroom. These are bounded, documented, and consistent inside their scope.
Bottom line
If you want privacy from data brokers and casual searchers, subtract data, segment identity, and repeat opt-outs more often than you add convincing lies.
If you want anonymity in a specific corner of the internet, a pseudonym plus real compartmentalization beats a litter of implausible facts tied to your legal name.
If you want warning shots that someone accessed something they should not, use decoys meant for alerting, not a fictional resume you half-maintain for a decade.
Red herrings read clever in a book. For most readers here, the sharper move is dull: fewer accounts, harder linking, tools sized to the adversary you actually have.