Scammers are harvesting samples of users’ voices from social media posts and everyday conversations to craft convincing voice clones and orchestrate fraud.
Voice cloning — powered by generative artificial intelligence (AI) — is the latest tool scammers are using in elaborate phone call scams targeting people’s friends, relatives, or colleagues to extort “thousands of dollars” daily and harvest sensitive personal data.
“A cheap and effective voice cloning software based on advanced machine-learning algorithms now can create highly convincing voice clones of individuals, including public figures,” said Adrianus Warmenhoven, a cybersecurity expert at NordVPN.
Social media, Warmenhoven noted, is a goldmine of voice samples for cybercriminals.
Below, we’ve included actionable tips on how to recognize this new cybercrime frontier and defend yourself, your friends, and your family.
‘Cheap and Effective Voice Cloning Software’ Simplifies Fraud for Cybercriminals
NordVPN’s press release revealed that “cheap and effective voice cloning software,” based on advanced machine-learning (ML) algorithms, can now be used by cybercriminals to impersonate a victim’s family members, and even public figures or business executives.
Though the press release did not name the software in question, a simple Google search will turn up several Text-to-Speech (TTS) programs, as well as forum discussions on the matter.
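The press release named no specific tools, but to illustrate how low the barrier has become, here is a minimal sketch using Coqui TTS, one open-source library such a search turns up (installable via `pip install TTS`). The model identifier is Coqui’s published XTTS v2 checkpoint; the file paths and sample text are placeholders, and any reference recording should, needless to say, only be used with the speaker’s explicit consent.

```python
# pip install TTS  (Coqui's open-source text-to-speech library)
from TTS.api import TTS

# Load Coqui's multilingual XTTS v2 voice cloning model.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A few seconds of reference audio are enough to mimic a speaker.
tts.tts_to_file(
    text="This is a demonstration of how convincing cloned speech can sound.",
    speaker_wav="reference_sample.wav",  # placeholder path to consented audio
    language="en",
    file_path="cloned_output.wav",
)
```

That a voice clone takes roughly a dozen lines of freely available code is precisely why the tactic has spread so quickly.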
“The most popular ways scammers extort money are by impersonating people the scam victims trust and crafting stories about car accidents, injuries, drugs found by the police, and other urgent or sensitive scenarios,” the press release noted.
In just one real-life AI voice extortion case in March 2023, an elderly couple was tricked into sending thousands of dollars to a scammer they believed was their son calling for urgent help.
Cybercriminals particularly favor unsuspecting seniors and parents; once they gain a person’s trust, they can “extract sensitive information and money,” Warmenhoven added.
To do that, cybercriminals draw on the wealth of voice samples victims unwittingly make available online. These range from “the massive amount of video content that users voluntarily upload on social media to voice recordings from events, media interviews, or just everyday life,” the press release explained.
How to Defend Yourself and Your Family from AI Voice Cloning Scams
In line with Aura’s earlier findings and its 2024 forecast, we can no longer blindly trust what we see and hear online. The company found that AI voice fraud was “the standout scam” of 2023.
“In 2024, people should anticipate that this trend will expand to other mediums, as AI makes it easier to duplicate and edit legitimate videos, images, websites, emails, and text messages for nefarious purposes,” said the company’s chief scientist, Dr. Zulfikar Ramzan.
To defend against sophisticated AI voice cloning scams, Warmenhoven advised the following:
- Avoid posting voice samples on social media, along with sensitive or personal information about yourself and your loved ones.
- Scammers may first call you simply to collect a voice sample for future fraud or extortion, so hang up as soon as you suspect you are on the line with one.
- If someone you know calls you from an unknown number or without a caller ID, hang up and ask them to reach out to you through your usual communication channels.
To that, we’d like to add:
- Keep all of your software updated.
- Learn how to spot phishing lures, and be cautious of emails, messages, or websites that ask for personal information.
- Use strong, unique passwords for each account, ideally generated and stored by a password manager.
- Enable two-factor authentication to add an extra layer of security to your online accounts, or better yet, use passkey authentication. (For what the last two tips look like in practice, see the short sketch below.)
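To make the password and two-factor advice concrete, here is a minimal Python sketch of what a password manager and an authenticator app do under the hood. It uses the standard library’s `secrets` module plus the third-party `pyotp` package (`pip install pyotp`); the password length shown is an illustrative choice, not a requirement.

```python
# pip install pyotp
import secrets
import string

import pyotp

# What a password manager does: generate a long, unique, random password
# per account from a cryptographically secure source of randomness.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and symbols."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# What an authenticator app does: derive 6-digit time-based one-time
# passwords (TOTP, RFC 6238) from a secret shared at 2FA enrollment.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Password:", generate_password())
print("Current TOTP code:", totp.now())  # rotates every 30 seconds

# A server verifies the code the user types in like this:
assert totp.verify(totp.now())
```

Even a clone of your voice is useless for taking over an account protected this way, which is why layering technical safeguards on top of caller skepticism pays off.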
Our top recommendation aligns with Warmenhoven’s: be very suspicious of unknown callers, especially if you are being urged to do something. “While you might feel like you have the upper hand against the scammer in that particular situation, don’t underestimate the technology at their disposal,” he said.
With More Dangers Come More Solutions
Remember, AI can be used for more than cloning voices. Cybercriminals can craft deepfake content that combines video and audio to deceive victims during key events like global elections or the upcoming 2024 Olympic Games in Paris, sway public opinion, and exploit humanitarian crises. They can also generate increasingly convincing phishing emails with tools built on OpenAI’s ChatGPT and other generative AI models.
Thankfully, cybersecurity companies like McAfee are now working on tools to detect AI-generated content.
Australia’s Macquarie University has also developed an AI program that can pretend to be a victim and waste scammers’ time.