For a while, the internet had a favourite trick for spotting deepfakes on video calls: ask the person to hold three fingers in front of their face. If the image glitched, blurred, warped, or failed to render properly around the hand, people took that as proof they were looking at a fake.
It made intuitive sense. Early real-time face swaps and low-grade deepfake tools often struggled when something passed in front of the face. Fingers, glasses, mugs, microphones, poor lighting, or fast movement could confuse the model and reveal the illusion. The trick spread because it seemed simple, clever, and easy to use in the moment.
The problem is that it is no longer a reliable test.
What once exposed weak systems has now become a public checklist item for attackers to work around. As with many viral security hacks, the more visible it became, the less useful it became. Today’s real-time deepfake and face-swap tools are significantly better at handling occlusion, facial tracking, motion smoothing, and reconstruction of hidden facial features. That means a person on a video call can pass the “three-finger test” and still be fake.
This matters because businesses are increasingly relying on video calls for trust-sensitive interactions: payment approvals, executive conversations, recruitment interviews, customer onboarding, legal discussions, supplier requests, and remote identity checks. A visual trick that once caught a few low-quality fakes should not be mistaken for a modern security control.
To understand why the trick is obsolete, it helps to understand why it ever worked at all.
Many early deepfake and live face-swap systems depended on relatively fragile tracking of a person’s key facial landmarks: eyes, nose, mouth, jawline, eyebrows, and overall face shape. These systems tried to map one face onto another in real time. As long as the face was clearly visible and reasonably well lit, they could often produce something convincing enough for a short call.
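To make that concrete, here is a minimal Python sketch of the alignment step such systems rely on, using OpenCV. The landmark coordinates and the single similarity transform are illustrative assumptions; real tools track dozens of points per frame and use far more sophisticated models, but the principle of "fit the source face onto the target's landmarks" is the same.

```python
# Minimal sketch of landmark-driven face mapping (illustrative assumptions only).
import numpy as np
import cv2

def warp_source_face(source_face, source_landmarks, target_landmarks, target_shape):
    """Fit a rotation + scale + translation that aligns the source landmarks to
    the target landmarks, then warp the source face into the target frame."""
    src = np.asarray(source_landmarks, dtype=np.float32)  # e.g. eyes, nose tip, mouth corners
    dst = np.asarray(target_landmarks, dtype=np.float32)
    matrix, _ = cv2.estimateAffinePartial2D(src, dst)
    h, w = target_shape[:2]
    return cv2.warpAffine(source_face, matrix, (w, h))

# While the landmarks are tracked cleanly, the warped face sits plausibly on the
# target; if tracking jumps or drops points, the face visibly slides or distorts.
```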
But once a hand moved across the face, the model could lose track of what it was supposed to be rendering. This is known as an occlusion problem. If fingers blocked the nose, mouth, or cheek contours, the software sometimes failed to correctly separate the foreground object from the generated face underneath. The result was often an obvious visual glitch: fingers appearing semi-transparent, the face bleeding through the hand, warped skin, unstable jawlines, or a brief collapse in the fake image.
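A compositing sketch makes that failure mode easy to see. The array shapes and the single blend step below are assumptions for illustration; real pipelines are far more involved, but the role of the mask is the same.

```python
# Minimal sketch of the compositing step where early face swaps failed.
import numpy as np

def composite(frame, generated_face, face_mask):
    """Blend the generated face onto the live frame.

    frame, generated_face: HxWx3 float arrays in [0, 1]
    face_mask: HxWx1 float array, 1.0 where the synthetic face should show
    """
    return face_mask * generated_face + (1.0 - face_mask) * frame

# The occlusion problem in one line: if face_mask stays at 1.0 where the fingers
# actually are, composite() paints fake skin over the hand, producing exactly the
# "bleed-through" glitch the three-finger test was looking for.
```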
Those glitches made for compelling social media clips. The “three-finger test” looked like a neat little secret — an instant lie detector for the AI era.
But security shortcuts rarely stay effective once they become popular.
Deepfake systems have improved for the same reason all useful AI systems improve: weaknesses become known, data gets better, models are retrained, and tools become easier to use. Occlusion handling is no longer a niche edge case. It is one of the first problems developers of real-time face manipulation tools try to solve.
Modern tools are better at maintaining facial alignment even when parts of the face are temporarily hidden. They use improved segmentation, better tracking, stronger temporal consistency across video frames, and more robust estimation of what the obscured parts of a face should look like when the object moves away. In plain English: they are much less likely to fall apart when someone moves a hand in front of their face.
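As a rough illustration of the temporal-consistency idea, here is a small Python sketch that smooths landmark positions across frames and holds the last stable estimate when detection confidence drops. The class name, the exponential average, and the thresholds are assumptions, not any particular tool's method.

```python
# Illustrative sketch of temporal smoothing for facial landmarks.
import numpy as np

class SmoothedLandmarks:
    def __init__(self, alpha=0.6, min_confidence=0.5):
        self.alpha = alpha                  # weight given to the new measurement
        self.min_confidence = min_confidence
        self.state = None                   # last stable landmark estimate

    def update(self, landmarks, confidence):
        """Blend new landmarks with the previous estimate; hold the previous
        position when confidence drops (e.g. a hand passes over the face)."""
        measured = np.asarray(landmarks, dtype=np.float32)
        if self.state is None:
            self.state = measured
        elif confidence >= self.min_confidence:
            self.state = self.alpha * measured + (1 - self.alpha) * self.state
        # else: keep the previous estimate rather than letting the face collapse
        return self.state
```

Even this crude version removes the sudden collapse that made the old glitches so visible; production systems combine it with segmentation and learned priors that are stronger still.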
That means the old social test has changed from a detector into a performance prompt. Instead of catching unprepared attackers, it can now simply tell them what kind of artefact to avoid. If a scammer knows people are likely to ask for a hand gesture, they can use a better tool, rehearse around the motion, improve lighting, reduce movement speed, or choose software specifically designed to survive that kind of challenge.
In other words, once a detection trick becomes part of internet folklore, it stops being an advantage for defenders and starts becoming a design requirement for attackers.
The biggest problem with the three-finger test is not that it is outdated. It is that it gives people a false sense of security.
If someone on a call performs a gesture smoothly and the video feed appears stable, many people will unconsciously downgrade their suspicion. They feel reassured because the person “passed the test.” That is dangerous. Security failures often happen not because there was no check, but because the check looked persuasive while being fundamentally weak.
In business settings, that kind of false confidence can have real consequences. A finance employee may approve a payment after a convincing video conversation with what appears to be a senior executive. A recruiter may move forward with a candidate whose identity has not been properly verified. A staff member may trust a supplier requesting urgent changes to account details. A customer service team may accept a video interaction as sufficient evidence that the caller is who they claim to be.
When organisations rely on “it looked real to me” or “they passed a visual trick,” they are putting too much weight on human perception in an environment designed to manipulate it.
The right response to deepfake risk is not to keep searching for the next clever gesture challenge. It is to design processes that remain safe even when a call looks and sounds authentic.
That shift in mindset is important. Deepfakes are not just a technical problem; they are a verification problem. The question is not whether you can always visually spot a fake in real time. Often, you cannot. The more practical question is: what business controls remain effective even if the person on the screen is not who they appear to be?
The strongest organisations treat video as one signal, not the deciding signal. They assume that a face, a voice, and a convincing manner may all be manipulated. From there, they build verification steps that are harder to fake because they depend on process, context, authority, and independent confirmation.
That approach is less flashy than viral advice, but it is far more robust.
If your team handles money, identity, contracts, hiring, confidential data, or privileged access over video calls, there are far better protections than asking someone to wave three fingers in front of their face.
The key idea is simple: do not ask whether a person can survive a meme test. Ask whether your process would still stop a fraud attempt even if the fake looked excellent.
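To show what that can look like in practice, here is a deliberately simplified Python sketch of an approval check. Every field name and threshold is an assumption for illustration; the point is that the decision rests on out-of-band confirmation and independent sign-off, not on how convincing the video was.

```python
# Illustrative policy sketch only: names, fields, and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Request:
    amount: float                     # value of the payment or change
    changes_payment_details: bool     # e.g. supplier asks for a new bank account
    requested_over_video_only: bool   # no ticket, email thread, or system record
    confirmed_out_of_band: bool       # callback to a known number or separate channel
    second_approver: bool             # independent sign-off recorded

def decide(request: Request, high_risk_threshold: float = 10_000) -> str:
    high_risk = (request.amount >= high_risk_threshold
                 or request.changes_payment_details
                 or request.requested_over_video_only)
    if not high_risk:
        return "approve"
    if request.confirmed_out_of_band and request.second_approver:
        return "approve"
    return "hold: verify via an independent channel and obtain a second approver"
```

Notice that nothing in the decision depends on whether the caller's face looked real; a flawless deepfake still fails the check if the out-of-band confirmation never happens.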
Many organisations still think of cybersecurity in terms of phishing emails, malware, password theft, and network intrusion. Those threats remain real, but the attack surface has expanded. Video conferencing is now part of normal business operations, which means it is now part of the social engineering landscape too.
That changes the assumptions teams should make. A live video call used to carry a certain built-in trust. Seeing a face and hearing a voice created a strong psychological sense of presence. But generative AI has started to erode that trust layer. Not completely, and not in every case, but enough that businesses should stop treating visual realism as proof of identity.
This does not mean every video meeting is suspicious. It means organisations should separate communication convenience from security assurance. Video is excellent for communication. It is no longer sufficient, by itself, for verification.
If your staff have seen the three-finger test circulating online, the message to them should be clear: it was never a formal security control, and it should not be treated as one now.
A sensible internal message might be: "The three-finger test is not a security control. Treat video as one signal, not proof of identity. For any request involving payments, account changes, credentials, or sensitive data, confirm it through an independent, pre-agreed channel before acting, no matter how real the call looks."
That kind of guidance is much more useful than circulating another supposed “one weird trick” to beat AI fraud.
The rise and fall of the three-finger test is a good example of a broader pattern in cybersecurity. People love silver bullets — one easy trick, one visual cue, one question, one checklist item that separates real from fake. But attackers adapt. Public tricks degrade. Heuristics age. What felt clever six months ago becomes ordinary background noise.
That is why mature security programmes are built on layered controls, not on moments of intuition. A resilient organisation assumes some signals will fail and designs the process so that one failure does not become a breach, a fraud payment, or a major compromise.
Deepfakes should be approached the same way. Do not expect a single on-camera gesture to save you. Expect attempts to get better, cheaper, and more convincing. Then build your business practices accordingly.
The three-finger deepfake test is dead not because it never worked, but because the world moved on. It was a glimpse of a temporary weakness in earlier systems, not a lasting defence strategy.
Businesses that understand that early will be in a better position than those still relying on whatever viral trick is doing the rounds this week. In the age of realistic synthetic media, the smartest defence is not sharper guesswork. It is stronger verification.
Ready to strengthen your verification processes against modern AI-driven fraud? Get in touch with us to review your approval workflows, identity checks, and high-risk communication controls before a convincing fake slips through.