AI-generated video, audio, and images are now convincing enough to fool almost anyone. Here's what to look for — and how to protect yourself.
Beth Andress
Digital Self Defence & AI Governance Educator
"The most dangerous deepfake isn't the one that's perfect. It's the one that's just good enough."
In 2019, a UK energy company transferred €220,000 to a fraudster after receiving a phone call from what sounded exactly like the CEO of its German parent company — right down to the accent and speech patterns. The voice was AI-generated. The CEO never made the call. This was not a sophisticated state-sponsored operation. It was a fraud using tools that, by 2026, are available to anyone with a laptop and a free account.
Deepfakes — synthetic media created using artificial intelligence to fabricate or manipulate video, audio, or images — are no longer a niche concern. They are an active fraud tool used in romance scams, investment fraud, executive impersonation, identity theft, and political disinformation. The technology has advanced to the point where a convincing AI voice clone can be created from as little as three seconds of audio, and realistic video deepfakes can be generated in minutes. Understanding how to recognize them is now a basic digital self-defence skill.
**What deepfakes are actually used for.** The fraud applications are more varied than most people realize. Voice cloning is used in grandparent scams — a caller sounds exactly like your grandchild in distress, asking for emergency money. It's used in CEO fraud, where a 'senior executive' calls an employee to authorize an urgent wire transfer. It's used in romance scams, where a fabricated video call 'proves' the person is real. Synthetic images are used to create fake identity documents, fake social media profiles, and fake evidence in disputes. Video deepfakes are used to fabricate statements by public figures and to create non-consensual intimate imagery as a form of coercion. The common thread is that the technology lowers the barrier to deception dramatically.
**The visual tells — what to look for in video.** Deepfake video has improved enormously, but it still has characteristic weaknesses. Watch the edges of the face, particularly where hair meets skin — deepfakes often show unnatural blurring, flickering, or colour inconsistency at these boundaries. Eyes are another indicator: blinking patterns may be irregular, and the eyes may not track naturally with head movement. Lighting is often inconsistent — the face may be lit slightly differently from the background or the rest of the body. Teeth and the inside of the mouth are notoriously difficult for deepfake models to render convincingly. And watch for what researchers call 'temporal inconsistency' — subtle flickering or morphing when the subject moves quickly or turns their head.
**The audio tells — what to look for in voice cloning.** AI-generated voices have improved to the point where they can fool people who know the person being impersonated. But there are still indicators. Listen for unnatural rhythm — AI voices sometimes have slightly off timing between words, or pause in places a human wouldn't. Emotional range is often flattened; a cloned voice may sound like the person but without the natural variation in tone that comes with genuine emotion. Background noise is sometimes inconsistent or artificially added. And in phone calls, be alert to any resistance to video verification or to answering questions only the real person would know. A scammer using a voice clone cannot pivot to video without preparation.
**The image tells — what to look for in synthetic photos.** AI-generated images have a distinct aesthetic that trained eyes can recognize, though it is becoming harder. Look at the hands — AI image generators have historically struggled with hands, producing extra fingers, fused digits, or anatomically impossible positions. Jewelry, glasses, and accessories often show distortion or asymmetry. Text within images is frequently garbled or nonsensical. Backgrounds may show repeating patterns, impossible geometry, or objects that don't make physical sense. Ears are often asymmetrical or oddly shaped. And assess the overall image for an 'uncanny valley' quality — a face that looks almost right but triggers a subtle sense of wrongness.
**The verification approach — what to do when you're not sure.** Recognizing visual and audio tells is useful, but it is not sufficient on its own. The most reliable defence against deepfakes is verification through a separate channel. If you receive a video call, voice call, or message from someone asking you to take an urgent action — send money, share credentials, authorize a transfer, keep a secret — verify their identity through a different method before acting. Call them back on a number you already have. Ask a question only the real person would know. Request a video call if you only received audio. Slow down. Urgency is a manipulation tactic, and legitimate requests can wait for verification.
**Protecting yourself from being deepfaked.** Your voice, face, and likeness can be used to create deepfakes without your knowledge. Minimizing your publicly available audio and video reduces the material available for cloning. Be thoughtful about what you post on social media — extended video of your face and voice is training data. If you are in a position of authority at an organization, establish a verbal code word or verification protocol with your team for any unusual financial requests received by phone or video. This is a simple, effective defence against CEO fraud.
The technology will continue to improve, and the tells described here will become less reliable over time. The fundamental defence is not pattern recognition — it is verification culture. Treat any unexpected request for urgent action, money, or sensitive information as requiring independent verification, regardless of how convincing the source appears. The question is never 'does this look real?' The question is 'have I verified this through a channel I control?' That shift in thinking is the most durable protection available.
Next Step
Beth delivers Digital Self Defence training for organizations, municipalities, and community groups across Canada. Deepfake awareness is part of every program.
Book a Digital Self Defence Workshop