AI voice hijacking: How well can you trust your ears?

How sure are you that you could recognize an AI-cloned voice? If you're completely certain you could, you might be wrong.


Why it’s a growing threat

With only three seconds of audio, easily obtained from videos shared online or on social media, criminals can now clone a person's voice.

An American mother almost fell victim to a virtual kidnapping scam in which a cloned voice convincingly mimicked her daughter's cries for help. The case shows just how ruthless these criminals are willing to be.

AI voice hijacking scams have often targeted older individuals by impersonating family members or trusted figures. But now, these scams are also becoming a threat to businesses.

“AI-powered voice scams prey on business vulnerabilities such as weak security protocols, lack of authentication methods, and inadequate employee training,” said Mike Lemberger, Visa’s SVP, Chief Risk Officer, North America.

The financial impact of AI voice hijacking

An employee with access to company finances can fall victim if they receive a call they believe comes from the CEO, especially since these scams rely on creating a sense of urgency and pressure to act quickly.

If successful, the attack can result in significant financial losses and a loss of trust from clients, investors, and partners, which could harm the business long-term. Deloitte’s Center for Financial Services predicts that GenAI could enable fraud losses to reach $40 billion in the United States by 2027.

Italian police recently froze nearly €1 million linked to a scam that used AI-generated voice deepfakes. Scammers posed as the Italian defense minister, asking the country’s top tycoons for money to free kidnapped journalists.

AI advancements are outpacing voice biometrics

Voice biometrics is a convenient way for companies to authenticate users, but AI advancements are eroding its reliability, particularly in the financial sector. Industries that rely on the technology to verify customers, especially in call centers, are becoming more vulnerable as a result.

To stay ahead, businesses must integrate MFA and AI-driven fraud detection alongside voice authentication. Without these additional layers, voice biometrics alone won’t hold up against the rise of GenAI.
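To illustrate what that layering means in practice, here is a minimal Python sketch of the decision logic: no single signal, including a voice match, is enough on its own to approve a sensitive request. This is a hypothetical illustration, not any vendor's actual system; the function names, thresholds, and signals (a voice-match score, an MFA result, and a fraud-risk score) are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch of layered verification: a voice match alone is never
# sufficient; an independent MFA factor and a fraud-risk check must also pass.
# All names and thresholds here are illustrative assumptions.

@dataclass
class VerificationSignals:
    voice_match_score: float   # 0.0-1.0 from the voice biometric system
    mfa_passed: bool           # e.g., hardware token or authenticator app
    fraud_risk_score: float    # 0.0-1.0 from an AI-driven fraud model

def approve_sensitive_request(signals: VerificationSignals) -> bool:
    """Approve only when every independent layer agrees."""
    voice_ok = signals.voice_match_score >= 0.90
    risk_ok = signals.fraud_risk_score <= 0.20
    return voice_ok and signals.mfa_passed and risk_ok

# Example: a convincing deepfake may clear the voice check, but without
# the caller's MFA factor the request is still denied.
print(approve_sensitive_request(
    VerificationSignals(voice_match_score=0.97, mfa_passed=False,
                        fraud_risk_score=0.10)
))  # False
```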

“We will see phishing emails that are more convincing and more dangerous than ever before thanks to AI’s ability to mimic humans. If you combine that with multi-modal AI models that can create deepfake audio and video, it’s not impossible that we’ll need two-step verification for every virtual interaction with another person,” noted Pukar Hamal, CEO at SecurityPal.

How to protect against AI voice hijacking

Upgrade voice authentication: Basic voice authentication systems can be tricked by AI because they often rely on a single voice characteristic. Use technology that checks multiple aspects of a person's voice, such as tone and rhythm; these systems are harder to fool.
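To make "multiple aspects" concrete, here is a minimal sketch using the open-source librosa library. Extracting pitch (tone), a rough speaking rate (rhythm), and spectral shape, then comparing them against an enrolled profile, is an illustrative assumption of how such a check might work, not a production speaker-verification pipeline.

```python
import numpy as np
import librosa  # assumed available: pip install librosa

def voice_features(path: str) -> np.ndarray:
    """Extract several complementary voice characteristics from a clip."""
    y, sr = librosa.load(path, sr=16000)

    # Tone: median fundamental frequency (pitch) in Hz.
    f0 = librosa.yin(y, fmin=80, fmax=400, sr=sr)
    pitch = float(np.nanmedian(f0))

    # Rhythm: rough speaking rate, as detected onsets per second.
    onsets = librosa.onset.onset_detect(y=y, sr=sr)
    rate = len(onsets) / (len(y) / sr)

    # Timbre: average spectral shape via MFCCs.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

    return np.concatenate([[pitch, rate], mfcc])

def matches_profile(enrolled: np.ndarray, sample: np.ndarray,
                    threshold: float = 0.95) -> bool:
    """Cosine similarity against the enrolled profile; the threshold is
    illustrative, and real systems would normalize features first."""
    sim = np.dot(enrolled, sample) / (
        np.linalg.norm(enrolled) * np.linalg.norm(sample))
    return sim >= threshold
```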

Add behavioral biometrics: Combine voice recognition with behavioral analysis, which looks at speech patterns such as pauses, tone shifts, and stress. These subtle cues are difficult for AI to mimic, making it easier to identify fake or manipulated voices.
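As one concrete example of such a cue, the sketch below measures a speaker's pause pattern from raw audio using plain NumPy: frame the signal, mark low-energy frames as silence, and summarize how often and how long the speaker pauses. The thresholds, and the idea of comparing pause statistics between calls, are illustrative assumptions.

```python
import numpy as np

def pause_profile(audio: np.ndarray, sr: int,
                  frame_ms: int = 30, silence_db: float = -35.0) -> dict:
    """Summarize a speaker's pause habits from a mono audio signal.

    Thresholds are illustrative; a real system would calibrate per channel.
    """
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)

    # Per-frame energy in dB relative to the loudest frame.
    rms = np.sqrt((frames ** 2).mean(axis=1)) + 1e-10
    db = 20 * np.log10(rms / rms.max())
    silent = db < silence_db

    # Group consecutive silent frames into pauses.
    edges = np.diff(silent.astype(int))
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    if silent[0]:
        starts = np.r_[-1, starts]   # pause begins at the first frame
    if silent[-1]:
        ends = np.r_[ends, n_frames - 1]
    durations = (ends - starts) * frame_ms / 1000.0

    minutes = len(audio) / sr / 60
    return {"pauses_per_min": len(durations) / minutes,
            "mean_pause_s": float(durations.mean()) if len(durations) else 0.0}
```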

Educate employees: Train your staff, especially key decision-makers like executives, to recognize AI-driven social engineering tactics. Make sure they understand that fraudsters can use AI to replicate voices and manipulate them into taking action. Encourage them to question any unusual requests, particularly those involving money or sensitive information.

Limit personal info shared online: If possible, limit the amount of personal information shared online, including voice recordings. Criminals can use these sources to clone voices and carry out targeted scams.
