How will GenAI and LLMs impact advisers and their clients?


Generative Artificial Intelligence (GenAI) is a relatively new type of AI that uses large amounts of data and deep learning techniques to create new content. That content can take all kinds of forms, including text, code, images, videos and music. 

GenAI tools are often built on Large Language Models (LLMs). LLMs are trained on vast data sets, which they draw on to generate the most suitable answer to your request.

ChatGPT is one of the most popular GenAI tools. Within just two months of launching, it reached an estimated 100 million monthly users [1], making it the fastest-growing consumer internet app ever. A recent survey revealed that 73% of UK investors believe that ChatGPT could give reliable financial advice in the future [2].

Yet despite global investment in AI jumping to record highs, Forbes Advisor research found that 59% of Brits have concerns about its use, ranging from misinformation to privacy and data security issues [3].

Using AI to support advice

AI technology can be a great resource for advisers, helping to:

  • make better decisions in shorter timeframes, as it can analyse data using complex algorithms much faster than a human could.
  • spot market trends and/or patterns in client behaviour.
  • identify fraud and protect clients by spotting patterns, flagging irregularities, and predicting future activity.

As with any new technology, there are risks and key challenges we need to consider:

False sense of security

It’s important to remember that AI tools can get things wrong and produce inaccurate results, such as trading and portfolio-rebalancing errors, flagging legitimate activity as fraudulent, or providing incorrect advice, or the same generic advice, to numerous clients.

Without appropriate oversight, this could frustrate clients and lead to a poor experience.

Client trust in AI

Understandably, some clients may feel uncomfortable with their data being handled by AI and may need additional reassurance that their data remains protected.

Recently, the release of Microsoft’s AI feature Recall (a tool that takes screenshots of almost everything you see or do on your computer, so you can search for and retrieve items you’ve seen) was pushed back after public concerns that these screenshots could create a privacy nightmare [4].

Data privacy concerns

AI technology will inevitably have security vulnerabilities which can be exploited by attackers. Advisers will need to review how their data, and their clients’ data, will be used, stored, and protected within AI tools, and consider whether there’s a need to restrict the amount of data these tools can access. If staff are using AI, it should be clear which data can or can’t be uploaded, to reduce the likelihood of a data breach.

Enhanced impersonation attacks

Attackers are already using AI for sophisticated attacks, creating deepfakes and targeted phishing messages. Rapid advancements in the technology make it increasingly difficult to differentiate between authentic and forged requests.

Recently we’ve seen AI used to trick:

  • Parents into sending ransom payments to release their ‘kidnapped’ children, after attackers used AI to clone a child’s voice from just a few seconds of social media clips [5].
  • An employee into transferring £20m to scammers, unaware that the video call he had attended with his CFO and several colleagues was made up of deepfakes [6].
  • A security firm into hiring a North Korean hacker, who used AI to create a fake identity, passed multiple stages of security clearance, and then attempted to install malware [7].

Advisers are likely to see an increase in fraudulent attempts that use AI to bypass identity verification (ID&V) and biometric authentication, or to impersonate individuals, for example using deepfake images or videos of clients.

Gartner analysts predict that by 2026, 30% of companies will lose confidence in facial biometric authentication owing to the sophistication of AI deepfakes [8].

Going forward

There is no doubt that AI technology will make advisers’ lives easier and be a great resource for them and their clients, in the same way that computers and the internet have been.

As with all new technology, AI will continue to evolve and will be used for both good and bad. You should therefore:

  • Assess AI technology and data privacy controls thoroughly before using it.
  • Ensure appropriate oversight of AI-generated output, remembering that AI tools can get things wrong.
  • Consider how AI might be used to circumvent your controls, using this to strengthen your defences.
  • Build multiple layers of defence into high-risk processes, so if one defence is breached, there is another defence in place to thwart an attack.
  • Ensure staff are aware of what data can and cannot be shared with AI tools.
  • Keep up to date with the techniques attackers use to trick you, such as deepfakes.
  • Implement multi-factor authentication (MFA) to protect client data in line with regulations.