Parents once taught simple rules: don’t follow strangers, don’t share personal details, and don’t trust someone just because they seem friendly.
These rules can feel outdated in the online world, where meeting new people is normal. We work, date, shop, play games, and chat with people we’ve never met in person.
Still, the old advice is not useless. Technology just makes it harder to spot when we’re breaking those rules.
Online, strangers haven’t disappeared—they just look more convincing, with good photos, polished language, and sometimes even AI help. The warning itself doesn’t need to change, but how we apply it does. Today, the updated advice is: don’t trust an unverified identity just because the website or app feels familiar.
The internet makes trust faster and identity cheaper
One reason the old rule still matters is psychological. In The Online Disinhibition Effect, psychologist John Suler wrote that online, some people “self-disclose or act out more frequently or intensely than they would in person.”
He connected this to things like anonymity, invisibility, asynchronicity, and the minimization of authority. He did not mean people act irrationally online, but that digital spaces weaken the usual social brakes that slow us down.
In real life, strangers come with more friction: body language, place, timing, distance, and immediate risk. Online, those checks are weaker, slower, or easy to fake.
This helps explain why strangers online often do not feel like strangers. They can seem familiar before you know who they really are. They might know your interests, copy your slang, use a realistic profile photo, or contact you through a platform you already trust.
In Social Media Identity Deception Detection: A Survey, researchers say social media is “a popular source for identity deception” and group these attacks as “fake profile, identity theft and identity cloning.” In short, the internet makes it cheaper to create, copy, and misuse identities.
“Stranger” online often means “someone pretending not to be one”
That is why the old rule needs to be updated, not just repeated. Online, the main danger is not always a clearly unknown person. It is often someone pretending to be someone else.
The FTC’s New insights about imposter scams says people reported almost half a million business and government imposter scams in the past year. It highlights three trends: scammers are starting more schemes with text or email, are pushing bank transfers or cryptocurrency more often, and often pretend to be more than one organization in the same scam.
In other words, the digital stranger often looks like someone you are supposed to trust.
The FTC’s Social media: a golden goose for scammers explains why these platforms are such useful hunting grounds. The agency says scammers can easily manufacture a fake persona, hack a profile and pretend to be you, learn from what you share, and even use advertising tools to target people based on age, interests, or purchases.
It also says one in four people who reported losing money to fraud since 2021 said it started on social media, with reported losses of $2.7 billion over that period. This is what the online stranger looks like now: not random, but optimized.
All these sources show that the modern internet does not just connect strangers. It gives them tools to personalize their approach. Someone reaching out may know enough to sound less like a stranger and more like someone you already know. This changes the risk. The threat is not just unknown contact anymore—it is carefully created familiarity.
Romance scams show how quickly digital intimacy can be manufactured
Romance scams are a clear example of this. In the FTC’s What To Know About Romance Scams, the agency says romance scammers create fake profiles on dating sites and apps or contact people through social media, then strike up a relationship to build trust, sometimes chatting several times a day, before making up a story and asking for money.
The same FTC page says moving the conversation off the original platform is a warning sign: someone special suddenly wants to email, call, or message somewhere else, often claiming to live far away and avoiding meeting in person.
This matters because it shows the old rule was never just about staying silent. It was about building trust slowly. Romance scams work by speeding up emotional certainty before you really know who someone is. Technology makes this easier.
Constant messages create instant closeness. Photos and voice notes add emotion. Platforms reward quick replies. The scam does not have to look obvious—it just needs to feel ongoing. What once took weeks in person can now happen quickly on a phone screen.
The same FTC guidance gives a clear modern version of the old warning: Never send money or gifts to a sweetheart you haven’t met in person. This advice is not just for romance. It is really about verification: do not commit before you have proof.
Research shows online identity deception is real, measurable, and incentivized
Academic research shows that online identity deception is not just a few stories. In Fake it till you make it: Fishing for Catfishes, researchers describe catfishing as users creating fake profiles to deceive other users regarding their true identity. The paper looks at age and gender deception and says these fake accounts can directly harm others.
It also notes that some platforms may encourage people to misrepresent themselves in their profiles. The study is specific to certain platforms and the authors are careful about its limits, but the main point stands: some online spaces actually reward people for being deceptive.
This matters because the internet often treats identity like something you can edit. Age, location, job, relationship status, interests, and even photos can be chosen, changed, or stolen.
When profiles are easy to make and hard to check, people start trusting weak signals like quick replies, a polished profile, shared interests, or emotional skill. This is why the old advice still matters. It is not that everyone online is a threat, but that many platforms make it easier to fake identity than to prove it.
AI is making digital strangers more convincing
If the internet already weakened old trust signals, generative AI is making that problem sharper. In Digital Deception: Generative Artificial Intelligence in Social Engineering and Phishing, researchers say generative AI amplifies social engineering through three main pillars: Realistic Content Creation, Advanced Targeting and Personalization, and Automated Attack Infrastructure.
The paper argues that AI can help attackers produce more persuasive content, tailor lures more effectively, and scale operations more easily. That is not a futuristic warning. It is a description of how old manipulation methods become more efficient once language and targeting are partially automated.
The UK’s National Cyber Security Centre makes a similar point in “The near-term impact of AI on the cyber threat.” The NCSC says AI will almost certainly increase the volume and heighten the impact of cyber attacks over the next two years, and says AI gives a capability uplift in reconnaissance and social engineering, making both more effective, efficient, and harder to detect.
It also says generative AI can already create convincing fake documents without the spelling and grammar mistakes that used to give phishing away. This matters because bad grammar was once an easy warning sign online. AI is taking that away.
This is why the old rule feels more urgent now. A stranger online does not have to be obvious or awkward anymore. They can sound polished, patient, and emotionally aware. The scammer who once sent nonsense can now send messages that fit your situation, copy a brand’s style, or answer you convincingly.
Technology has not changed the risk—it has just made it harder to spot.
Phishing is just “don’t talk to strangers” in a corporate dialect
The same logic applies to phishing. The NCSC’s guidance on phishing scams says the purpose of a scam email is often to get you to click a link that might download a virus or steal passwords or other personal information.
The guidance says suspicious emails should be treated cautiously, and bluntly advises people: Don’t click on any links in a suspicious email. It also says that reporting phishing helps make you a harder target for scammers, and notes that the NCSC had removed 430,000 scam URLs as of January 2026.
Phishing works because it looks like trusted communication. The stranger hides inside a package notification, a payroll email, a password reset, a delivery alert, or a message from a manager.
The old lesson comes back in a new way: do not treat a familiar format as proof of identity. A logo is not proof. A convincing tone is not proof. A quick, emotional request is not proof.
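To make the link trick concrete, here is a minimal sketch using only Python's standard library. It shows that the text a link displays and the place it actually leads are two independent pieces of data: the label can say anything at all. The bank and scam domains below are invented for illustration.

```python
# Sketch: the visible text of an HTML link proves nothing about its destination.
# The real target lives in the href attribute, which the reader never sees.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    """Collects (displayed text, actual host) pairs for each <a> tag."""
    def __init__(self):
        super().__init__()
        self._href = None   # href of the <a> tag currently open, if any
        self._text = []     # text fragments seen inside that tag
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []
    def handle_data(self, data):
        if self._href is not None:   # only collect text inside a link
            self._text.append(data)
    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(),
                               urlparse(self._href).netloc))
            self._href = None

# The label claims to be a bank; the href points somewhere else entirely.
email_html = ('<p>Please verify your account: '
              '<a href="https://secure-login.example.net/bank">'
              'www.yourbank.com</a></p>')
checker = LinkChecker()
checker.feed(email_html)
for shown, actual in checker.links:
    print(f"displayed: {shown!r}  actual host: {actual!r}")
```

Running this prints a displayed label of www.yourbank.com next to an actual host of secure-login.example.net, which is exactly the mismatch the old rule tells you to check for before clicking.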
This same idea explains why scams often try to move people off the original platform or to a different payment method. The FTC says impostor scammers are steering more victims toward bank transfers and cryptocurrency, and romance scammers often ask for payment methods that are hard to trace or recover. If someone tries to move you to a channel with less protection and less verification, that is not a small detail—it is the main goal of the scam.
So what should the rule mean now?
It does not mean you should avoid all digital contact. That would not be realistic and would often be unhelpful. The internet is full of good interactions between strangers: networking, working together, learning, support groups, and real friendships.
The old rule online should be more focused: do not trust someone’s identity before you have checked it. The problem is not talking—it is trusting too soon.
A practical update to the rule could look like this. Slow down if a new contact tries to get close quickly. Do not move sensitive conversations off the platform without a good reason. Do not send money, codes, credentials, or personal information to someone you have not checked out yourself. Look up their name, photo, company, or job title. And remember that any message creating urgency is probably trying to rush you past verification.
These tips are not separate from technology—they are ways to handle how things really work: easy identity creation, simple impersonation, targeted actions, and AI-powered persuasion.
The internet did not end the don’t talk to strangers rule. It made it more specific. Online, the most dangerous stranger is often the one who does not seem like a stranger at all. So the real reminder is not never speak. It is never let the screen do the verifying for you.