In part two of his Professional Adviser series, our Chief Commercial Officer, Mike Morrow, explores six potential AI risks advisers need to be wary of.
Artificial intelligence (AI) tools can be powerful for financial advice businesses, but they also introduce security, safety, and compliance risks that are likely to need active management.
In this article, we explore six potential risks you need to be wary of, and ways to mitigate them.
1. Data confidentiality and client privacy
Many AI tools ingest the data users enter and use it to train and improve their models. If you enter client data (e.g. personal details, financial history or investment preferences) into a public AI model, you're transferring that data outside your company and into the hands of the AI firm. If that firm is not on your approved supplier list, this uncontrolled data leak could breach GDPR and FCA rules on client confidentiality.
To avoid this:
- Never put identifiable client information into a public AI system
- Only use AI systems that adhere to your security and compliance policies, such as Copilot linked to your Microsoft 365 environment (and not the public version)
- Make sure you read the data retention and model training policies
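For firms with in-house technical support, the first bullet above can be backed up with a simple pre-check that strips obvious identifiers before any text is pasted into a public AI tool. The sketch below is a minimal illustration only: the `redact` helper and its regex patterns are assumptions for this example, not a production data-loss-prevention solution.

```python
import re

# Illustrative patterns only - a real deployment should use proper
# data-loss-prevention tooling, not ad-hoc regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
}

def redact(text: str) -> str:
    """Replace obvious client identifiers with placeholder tokens
    before the text goes anywhere near a public AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A check like this catches the easy cases (emails, phone numbers, National Insurance numbers) but not names or free-text context, which is why it complements, rather than replaces, a policy of never entering identifiable client information in the first place.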
2. Maintaining regulatory compliance
Using AI without proper oversight could breach Financial Conduct Authority (FCA) expectations under the Senior Managers & Certification Regime and the Consumer Duty. It's your regulatory responsibility to ensure outcomes are fair, transparent and easy to understand, and that includes any AI-driven advice.
To be confident in compliance:
- Document any AI use within your product governance, suitability assessments and record keeping
- Don't rely on AI for decision making – all regulated advice should have your input
- Keep on top of guidance on AI governance – you can expect to see more of this in 2026 as the FCA and ICO expand their AI oversight
3. Hallucinations and accuracy
AI can generate misleading information. In AI jargon, these fabrications are known as "hallucinations". As an adviser, it's your regulatory obligation to give suitable, accurate, and verified advice. Use AI for drafting, summarising, or helping you with research, but never for producing your final client advice.
4. Cybersecurity
AI tools are not infallible. They can be targeted by phishing, tricked into bypassing safety protocols, made to leak confidential data, or prompted to generate offensive or incorrect content.
To maximise your cybersecurity:
- Have a clear policy around AI use in your business, covering how new AI tools are adopted, and monitor the permissions those tools are granted
- Make sure your clients and colleagues are aware of the risks of AI and how to watch out for malicious intent
- Keep on top of emerging threats and attacks, such as AI powered fraud and deepfakes
5. Biases and ethics
AI systems are trained on massive amounts of data, some of which could have embedded biases. Overreliance on these systems means these biases could come through in your advice, for example, in risk profiling or product recommendations.
To mitigate the risk of bias:
- Audit all AI outputs for consistency, fairness, and neutrality
- Make sure you consider all recommendations and personalise them for the client in question
- Always stay true to the FCA principles on treating customers fairly
6. Confidence in supply chains
It's your responsibility to make sure any AI vendors you work with meet regulatory expectations. That means you need to be confident they provide transparency, strong data protection and clear governance.
Perform due diligence on the vendor's data sources and model to assess risks like bias, and confirm the vendor has a process for ongoing monitoring and risk management. Vendors should also be able to clearly explain their AI's functionality, so you can confirm compliance with rules like UK GDPR and the FCA's Consumer Duty – third-party AI vendors won't always meet these standards by default.
To be a responsible AI user:
- Get early buy-in from your compliance and IT colleagues before you adopt any new AI tools
- Do your due diligence on the AI tool, including data storage locations and thoroughly assess the risks
- Review their service level agreements for data handling, breach response, and auditing rights
- Make sure the AI complies with UK GDPR and the FCA's Consumer Duty rules
AI can bring great advantages in operational efficiency, compliance and ongoing service, leaving you more time to dedicate to the most important part of your role as an adviser – personalised, life-enhancing financial planning. Enjoy the benefits it brings, and use it wisely.
Missed the first article?
To read the first part in the series, see Adventures in AI: separating hype from reality
This article is for financial professionals only. Any information contained within is of a general nature and should not be construed as a form of personal recommendation or financial advice. Nor is the information to be considered an offer or solicitation to deal in any financial instrument or to engage in any investment service or activity.
Parmenion accepts no duty of care or liability for loss arising from any person acting, or refraining from acting, as a result of any information contained within this article. All investment carries risk. The value of investments, and the income from them, can go down as well as up and investors may get back less than they put in. Past performance is not a reliable indicator of future returns.
