
Why AI needs to be secure before we scale it

Opinion
Ken Bastiaensen

 
Expert firms are moving fast on AI. Not just the big ones with innovation teams, but everyday advisors who are experimenting with tools like ChatGPT to speed up emails, summarise documents or sense-check technical content. The pace of adoption is exciting, but it’s also exposing a problem that isn’t being talked about openly: most firms don’t actually know where or how AI is being used inside their business. And if that’s the case, how can you control it?

The rise of shadow AI

From conversations we’ve had with partners and firm owners, it’s clear that AI use is already happening behind the scenes. No one admits to uploading client files into ChatGPT, but everyone quietly suspects it’s happening. Not out of bad intent, but simply because it saves time and feels helpful. 
 
The issue is that this kind of experimentation takes place outside any approved workflow or data safeguard. If sensitive information is being shared with a public model, or if advice is being influenced by AI-generated text that hasn’t been reviewed, firms are exposed to compliance breaches, inconsistent quality and even reputational damage.  
 
Most importantly though, they lose the ability to explain how an answer was generated, which leaves them accountable for outputs they can’t trace. 
 
Leadership teams often respond by blocking tools entirely, but let’s be honest: that doesn’t help, because people will just use AI on their phones instead. When the benefits are obvious, people will always find a way around the rules.


This isn’t a technology problem 

Shadow AI is a signal that leadership hasn’t provided a supported, secure way for teams to use AI responsibly. Advisors turn to their own tools when the firm hasn’t moved fast enough to meet demand, which makes this a cultural and structural challenge as opposed to a technical one. 
 
If firms want to reduce that risk, they need to normalise AI: give people tools they’re allowed to use, set clear guidance on what’s acceptable, and provide guardrails and human review rather than restrictive fences. That’s the only way to bring AI use into the open and give the whole firm visibility and accountability over what’s happening.

Where experimentation becomes exposure 

Experimentation is healthy, and every firm should be running pilots, testing use cases and learning from real scenarios. But there’s a clear line: once AI touches client data, client communication or anything that influences professional advice, it must sit inside a controlled system. That’s the point where AI stops being a test and becomes part of the firm’s real service delivery, and that means it needs oversight.

Explainability as the turning point

This is where explainable AI matters. It’s no longer enough to generate a polished answer. Advisors need to be able to see where that answer came from, what data was used and how the logic flowed, and they need confidence that every output reflects their firm’s quality standards, templates and tone of voice.
 
That level of transparency can’t be delivered by generic tools alone. It takes thoughtful design and close collaboration between the people who build the technology and the experts who use it.

Safe adoption unlocks real scale

AI is already reshaping how expert firms work, and that’s a good thing. But the real opportunity lies in building systems that let innovation scale without eroding trust. When advisors feel confident that AI supports their judgement rather than undermines it, they’ll stop working in the shadows and start engaging responsibly.
 
It’s tempting to assume that securing AI slows down progress, but the opposite is true: when firms get governance right, they give their people the confidence to move faster, not the fear of getting it wrong.
 
 
Our co-founder and CEO Joris Van Der Gucht recently wrote on this topic for Consultancy UK. Check out his article here:

https://www.consultancy.uk/news/amp/42039/why-professional-services-must-secure-ai-before-they-scale-it