Digital Fraud & AI Scams
Voice-cloned grandchildren. AI-generated romance partners. Deepfake sextortion. The people in your community are being targeted by scams that didn't exist two years ago.
The Crisis
For years, we taught people to look for typos in emails, be suspicious of unknown callers, and never send money to strangers. AI has made all of that advice obsolete. Today's scams are grammatically perfect, emotionally manipulative, and powered by technology that can clone a voice in seconds, generate a fake face in milliseconds, and maintain a convincing conversation for months.
The most vulnerable — seniors, youth, newcomers, and isolated individuals — are being targeted with unprecedented sophistication. And the psychological damage goes far beyond financial loss.
The Canadian Anti-Fraud Centre reported over $530 million in fraud losses in 2023 — and estimates that only 5–10% of victims actually report. AI is accelerating these numbers dramatically.
Your community needs to be prepared.
Key Threat Areas
AI Romance Scams
Scammers use AI-generated photos, scripted conversations, and even deepfake video calls to build emotional relationships. Victims are manipulated into sending money, gift cards, or cryptocurrency to people who don't exist.
Voice Cloning & Grandparent Scams
AI clones a family member's voice from social media. The 'grandchild' calls in distress — arrested, in an accident, stranded abroad. The voice is identical. The panic is real. The scam costs thousands.
AI-Generated Sextortion
AI generates realistic fake intimate images of real people using photos from social media. Victims — often teenagers — are threatened with distribution unless they pay or provide more images. This is a growing crisis.
AI-Powered Phishing & Smishing
AI writes perfect phishing messages — no typos, no awkward grammar. Texts impersonating banks, CRA, Amazon, or Canada Post are personalized and convincing. Traditional 'look for errors' advice no longer works.
Fake Jobs, Rentals & Listings
AI creates convincing fake job postings, rental listings, and marketplace ads. Victims share personal information, pay deposits, or provide banking details to employers and landlords who don't exist.
Social Media Impersonation
AI clones social media profiles, generates new posts in the victim's writing style, and contacts their friends and family. The impersonator requests money, spreads misinformation, or harvests personal data from the victim's network.
Protection
Family Code Words
Every family should have a secret code word or phrase that only family members know. If someone calls claiming to be a relative in distress, ask for the code word before taking any action.
Digital Footprint Awareness
Every public photo, video, and voice recording can be used by AI to create deepfakes, clone voices, or generate fake intimate images. Privacy settings and digital hygiene are essential.
Youth & Teen Education
Age-appropriate education about AI-generated fake images, digital blackmail tactics, and how to respond if targeted. Young people need to know this is not their fault and that help is available.
Community Training
Community sessions that go beyond 'don't click suspicious links' — teaching people to recognize AI-generated content, verify identities through independent channels, and report incidents.
Reporting Pathways
Clear, well-publicized pathways for reporting fraud — including local police, the Canadian Anti-Fraud Centre, and community support organizations — reduce the shame and stigma that keep victims silent.
Real Stories
These scenarios are based on documented cases. Names and details have been changed, but the patterns are real and recurring.
AI Voice Cloning + Emergency Scam
A grandmother receives a frantic call from her grandson. He's been arrested, needs bail money, and begs her not to tell his parents. The voice is his — every inflection, every mannerism. She wires $8,000. Her grandson was at home the entire time. His voice was cloned from a 15-second TikTok video.
AI Romance Scam
A recently widowed man connects with a woman on a dating app. They talk daily for three months. She sends photos, voice messages, even video calls. Then she needs help with a medical emergency abroad. He sends $15,000 before discovering the entire relationship — photos, voice, video — was AI-generated.
AI Sextortion
A 16-year-old receives a message with a realistic intimate image of themselves — an image that was never taken. The sender threatens to share it with classmates and family unless the teen sends money or real images. The fake image was generated by AI from the teen's Instagram photos.
AI-Generated Employment Fraud
A newcomer to Canada applies for a remote customer service job posted on a legitimate job board. The 'company' has a professional website, LinkedIn presence, and even conducts a video interview. After 'hiring,' they request banking details for direct deposit. The company, website, and interviewer were all AI-generated.
Interactive Assessment
Answer the 12 questions below to assess your community's awareness of AI-powered scams. The results are a starting point for identifying where education and resources are needed most.
Do community members know how to identify AI-generated voice calls that impersonate family members (e.g., 'grandparent scams')?
Are people in your community aware that AI can now create convincing fake photos and videos of real people for sextortion and digital blackmail?
Do residents understand how AI-powered romance scams work — including AI-generated profile photos, scripted conversations, and deepfake video calls?
Are community members trained to recognize AI-generated phishing texts and emails that impersonate banks, government agencies, or delivery services?
Do families have a verification code word or phrase to confirm the identity of callers claiming to be relatives in distress?
Are community members aware of the risks of sharing personal photos and voice recordings publicly on social media?
Do people know how to verify the legitimacy of online job postings, rental listings, and marketplace sellers before sharing personal information?
Are residents aware of how to report suspected AI-generated scams to local law enforcement and national fraud centres?
Has your community organization, library, or senior centre offered educational sessions on AI-powered scams in the past 12 months?
Are there accessible resources available in your community for people who have been targeted by digital fraud or sextortion?
Do local schools educate students about sextortion, digital blackmail, and the risks of sharing images online?
Is there a community-wide awareness campaign about the rise of AI-enabled fraud targeting vulnerable populations?
Explore More
Explore more assessments, training, and articles on digital fraud and community safety.
© 2026 Beth Andress | Street Safe Self Defence. All rights reserved.
This resource may be shared freely within your community. For organizational use or redistribution, please contact us for permission.