AI voice scams – CPD guide for financial advisers

Spotlight On Infosec (9)
For financial professionals only

Artificial Intelligence (AI) is rapidly transforming the financial services industry, bringing new efficiencies, insights, and customer experiences. From AI-driven investment advice to fraud detection and risk assessment, the possibilities seem endless. However, as financial firms embrace AI, they also need to stay alert to the new security risks it creates.

Financial fraud and voice cloning

Imagine your client getting a call from someone who sounds exactly like you, their trusted financial adviser. The voice requests a fund transfer, and it’s urgent. They comply - only to realise later that it wasn’t their adviser at all.

A recent report found that 25% of UK consumers encountered deepfake scam calls in the fourth quarter of 2024. Of those targeted, 40% fell for the scam, 35% lost money and 32% had personal information stolen. 

Deepfakes use advanced AI to clone voices and faces, often from publicly available voice samples taken from social media, voicemail greetings, or recorded calls. Fraudsters then use these cloned voices in impersonation scams that trick both advisers and clients into transferring funds or revealing sensitive data. Microsoft Teams is even launching an AI agent that will allow users to clone their voice and translate their speech in real time into nine languages: English, French, German, Italian, Japanese, Korean, Portuguese, Mandarin Chinese, and Spanish. "Imagine being able to sound just like you in a different language," says Microsoft CMO Jared Spataro.

Yes, this could revolutionise business communication, making international conversations effortless. But for financial advisers, whose credibility and trust are the foundation of their client relationships, this technology also raises serious security risks. With banking and financial matters the most common subject of deepfake scam calls in Q4 2024, what can advisers do to stay ahead?

How AI scammers target advisers and clients 

📌 Building trust: fraudsters call the target (adviser or client) multiple times, mimicking a trusted voice without making financial requests initially.

📌 Social engineering: they can gather personal details from breached websites, social media, or past conversations.

📌 Exploiting trust: once trust is established, they create urgency - convincing the target to authorise transactions or reveal sensitive data. Because AI-generated voices sound highly authentic, traditional security measures and standard processes, such as requiring additional passwords and PINs, may be bypassed.

The same approach works in reverse: fraudsters impersonating clients can target advisers, leading to fraudulent withdrawals, unauthorised transfers, or advisers being tricked into revealing confidential client information.

Why older clients are more vulnerable

📌 Older clients are often targeted by scammers because they may not be as familiar with emerging technologies and their associated risks.

📌 They may also be more isolated, and less likely to check a request with someone else before actioning it.

📌 Many seniors trust their advisers and are less likely to question a request if it sounds like it’s coming from a familiar, trusted voice.

📌 They may also struggle with hearing, and therefore with recognising the subtle differences between an AI-generated voice and a real one, making them more susceptible to these scams.

The sophisticated nature of the attack, paired with a lack of awareness, means that many victims might not even realise they’ve been defrauded until it’s too late.

Seven ways financial advisers can protect themselves and their clients

As a financial adviser, it’s crucial to understand the risks posed by AI voice cloning and take proactive steps to protect your clients. Here’s what you can do:

  1. Educate your clients: make sure your clients, especially older and vulnerable clients, are aware of voice impersonation scams and of new AI threats as they arise. Let them know that you will never ask for sensitive information, such as passwords, or request financial transactions over the phone or via unsecured communication channels such as WhatsApp.

  2. Use multi-factor authentication (MFA): always verify the identity of anyone requesting financial transactions - whether it’s over the phone, by email, or even in person. Implement a multi-step verification process that goes beyond voice recognition alone. A secondary confirmation via email, text message, or a secure app can prevent fraudsters from tricking you and your clients (see the first sketch after this list).

  3. Be cautious with your data: limit publicly available recordings of your voice and image where possible, make your social media accounts private, and limit followers to people you know.

  4. Listen for red flags: AI-generated voices may sound perfect, but they often have subtle flaws. Pay attention to odd phrasing, delayed responses, or background noise inconsistencies. 

  5. Consider the urgency: always be sceptical of any communication that urges urgent action, especially if it involves transferring large sums of money or changing account details. Scammers often create a sense of urgency to bypass rational thinking. Let clients know they can take their time to confirm a request before actioning it, such as verifying the request in a separate channel (e.g. by phone if the request arrived by email).

  6. Create security phrases: establish a pre-agreed security phrase or word with clients before processing high-risk transactions. If they can’t provide it, the call is likely a scam (the first sketch after this list shows how this can be checked alongside a confirmation code).

  7. Monitor for unusual requests: if a client suddenly requests a large withdrawal or makes an out-of-character demand, pause and investigate before acting. Similarly, if a client has recently been the victim of a scam or cyber breach, apply additional checks, such as a warning banner on their account, before processing high-risk transactions (see the second sketch after this list).
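
For firms whose back-office or platform systems can support it, the checks in tips 2 and 6 can be built directly into the workflow for high-risk instructions. The Python sketch below is purely illustrative - the function names, code format and phrase-matching rules are our own hypothetical choices, not any specific platform's implementation - but it shows the principle: a one-time code sent out of band plus a pre-agreed phrase, neither of which can be cloned from a public voice sample.

```python
# Illustrative sketch only: combining a second-channel confirmation code (tip 2)
# with a pre-agreed security phrase (tip 6) before releasing a high-risk
# instruction. All names and values are hypothetical.
import hmac
import secrets


def issue_confirmation_code() -> str:
    """Generate a short one-time code to send via a separate, verified channel
    (e.g. a secure client portal), never the channel the request arrived on."""
    return f"{secrets.randbelow(1_000_000):06d}"


def verify_caller(supplied_phrase: str, stored_phrase: str,
                  supplied_code: str, issued_code: str) -> bool:
    """Both factors must match; hmac.compare_digest gives a constant-time
    comparison of the supplied and stored values."""
    phrase_ok = hmac.compare_digest(supplied_phrase.strip().lower(),
                                    stored_phrase.strip().lower())
    code_ok = hmac.compare_digest(supplied_code, issued_code)
    return phrase_ok and code_ok


# Example: the instruction proceeds only when both checks pass.
issued = issue_confirmation_code()            # sent to the client out of band
if verify_caller("blue kingfisher", "blue kingfisher", issued, issued):
    print("Checks passed - proceed with normal adviser sign-off")
else:
    print("Do not action - escalate as a possible impersonation attempt")
```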
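
Tip 7 also lends itself to a simple rules-based check before high-risk transactions are processed. The sketch below is again only an illustration, assuming hypothetical account fields (a last-reported-scam date and a typical withdrawal amount) and arbitrary thresholds; in practice these would be calibrated to a firm's own data and policies.

```python
# Illustrative sketch only: a pre-transaction check that raises warnings (tip 7)
# when a client has recently reported a scam or the request is out of character.
# Field names and thresholds are hypothetical.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional


@dataclass
class ClientRecord:
    name: str
    last_scam_report: Optional[date]   # date of last reported scam/breach, if any
    typical_withdrawal: float          # rolling average withdrawal amount


def pre_transaction_warnings(client: ClientRecord, amount: float,
                             today: date) -> list:
    """Return human-readable warnings for staff to resolve before processing."""
    warnings = []
    if client.last_scam_report and today - client.last_scam_report < timedelta(days=180):
        warnings.append("Client reported a scam or breach in the last 6 months - apply enhanced checks.")
    if amount > 3 * client.typical_withdrawal:
        warnings.append("Request is well above this client's usual withdrawal size - verify out of band.")
    return warnings


client = ClientRecord("A. Example", last_scam_report=date(2025, 1, 10), typical_withdrawal=2_000)
for warning in pre_transaction_warnings(client, amount=15_000, today=date(2025, 3, 1)):
    print("WARNING:", warning)
```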

AI-powered voice translation and cloning technologies are transforming the way businesses communicate, but they also present new opportunities for cybercriminals to exploit. Financial advisers must be vigilant in protecting their clients, particularly vulnerable clients, from scams that rely on these technologies. By taking proactive steps, including educating clients, implementing secure communication practices, and using multi-factor verification, advisers can help safeguard their clients against this growing threat whilst mitigating the risk of major financial losses and reputational damage.

For more ways AI can affect advisers and their clients, check out our previous article – How will GenAI and LLM impact advisers and their clients?

Take the CPD-accredited AI voice scams test here

Test your understanding with these multiple-choice questions and receive a CPD certificate worth 30 minutes of CPD.  

Never miss an update

The cyber landscape is constantly evolving. Staying informed and proactive can help businesses mitigate risks.

Sign up to our fortnightly 'Adviser Insight' newsletter for expert insights - use the 'Sign up' button on the left-hand side to receive our updates. 

This article is for financial professionals only. Any information contained within is of a general nature and should not be construed as a form of personal recommendation or financial advice. Nor is the information to be considered an offer or solicitation to deal in any financial instrument or to engage in any investment service or activity.

Parmenion accepts no duty of care or liability for loss arising from any person acting, or refraining from acting, as a result of any information contained within this article. All investment carries risk. The value of investments, and the income from them, can go down as well as up and investors may get back less than they put in. Past performance is not a reliable indicator of future returns.