So a quick word about me. I started off as a programmer as well, but very quickly I moved into information security and I've been working in information security for over 30 years now. Working for a variety of different security vendors and 15 years ago I set up my first business, which was focusing very much on securing mobile devices.
It led to a successful exit via merger and I worked at the end company converting it from a managed service provider into a managed security services provider before leaving that a few months ago and setting up this new venture. I've been very fortunate to always work with cutting -edge technology, and that's what attracts me here. AI is obviously very exciting, but I'm interested very much in securing AI.
So in this talk, I aim to cover the why, the what, the how. But I only have a short time, and Greg is a very hard taskmaster, so I'll crack on.
The reason why I set up the business and the reason why I'm speaking to you all here tonight is because I reckon that we've reached an inflection point.
Last year, in 2025 and previously, the questions that we were being asked were, how are you using AI? What are you using AI for?
Are you using AI? But the questions that we're starting to hear now are subtly different, yet really, really important. Now, we're at an inflection point where people are asking,
how are you securing your AI? And what are the points? Can you prove it? And this is what I aim to help you with this evening.
We have to understand that some of the risks are exactly the same as we've seen previously in information security and cyber security. So confidentiality, integrity and availability, CIA, that trifecta, is something that keeps coming up time and time again in AI security. but there's a whole bunch of new AI risks that we haven't come across before and this is where it becomes a little bit of an edge condition and really difficult for us to demonstrate that AI
security OWASP has even gone as far as documenting this now over a number of different years leading to just a few months ago the publication of the top 10 for agentic applications very much worth a good read because these are no longer obscure edge cases.
These we see every single day. It's everything from model hallucinations, data poisoning, sensitive information disclosure and prompt injections.
Can I just have a show of hands? How many people know what a prompt injection is? Okay, about half of you.
For the other half, this may actually explain that quite nicely and apologies have you seen this video before right hopefully you get the understanding here we are deploying AI more than ever before and of course it comes with risks now the risks that we're actually
seeing with AI can be generally categorized in four main types. You have the data risks.
We have to use real data for training AI systems. Sometimes we use even live data or near live data. This is very different to what we've done previously.
There's no test data. This is real data. So we need to secure it properly.
The data and the prompts that we put into the models now intermingled there's no difference between the two the system prompt the user prompt all of this is mixed together with the data when the AI performs its computations which makes it really hard to protect against prompt injections and the data classification is much harder as well
but we're also seeing some operational issues as well a lot of the people who are developing the AI systems at the moment don't come necessarily from a security background and have little or no SecDevOps experience.
They've learned Python over a few weekends watching YouTube and have started to code mission -critical systems for many enterprises. We need to make sure that we are protecting these as well as possible.
And these concerns become even more burdensome when we actually consider the power that we are giving some of the AI tools.
If you have any agentic systems for example that can perform actions on our behalf, MCP servers that have excessive agency, have lack of granular authentication and so on, they're creating a number of different ongoing issues.
And of course we have the unpredictability of the model. We can ask the model the same question three or four times and get three or four different variations on a similar answer. It's non -deterministic so we can't necessarily predict how it's always going to work
which makes it really hard when we're trying to prove that it's going to work in a given way. The models that we use unfortunately also allow us to reverse engineer the prompts and we can tease out from it a lot of the confidential information that went into into building the model. Not great.
And of course you get model and, model drift and other forms of drift as well that cause a tremendous issue in maintaining the guardrails as it's taken into operation.
Alongside all of that, we have a lot of immaturity in the whole system. The guardrail designs are still quite new and evolving. They're not perfect yet.
the AI penetration testing routines are still a work in progress everything is in its infancy we're all learning day by day so you think at this point I would say give up you know AI security is not going to happen well it's tricky but it's not stopping people from asking some awkward questions around this and we need to find answers and find them quickly these are some recent
examples from questionnaires in this particular one is actually asking for information around how the organization is protecting the data that it's putting into its AI systems and in particular asking for alliance outside alignment against the EU AI Act or the OWASP top 10 for LLM if you haven't looked at these, please have a close look.
But this becomes really insidious when you're actually using client data in the AI models, because now there's an expectation not only that you're securing it, but that you're making the users aware that you're using their data in that way.
This is a requirement of the EU AI Act, which doesn't apply here unless you're operating in the EU, but we start and see those questions coming up more and more often in the questionnaires and the important thing
to understand here is that these questions are asked because investigations don't start with what went wrong they actually start with a history leading up to that incident and that is what will determine the penalty the fine and also the outcome of whether the insurance payout will come or not.
There are many different regulations that apply and legislations as well. We have obligations in the UK from GDPR, Data Protection Act, the Cyber Security and Resilience Act, also of course the Equality Act and across Europe we have the EU GDPR, the EU AI and sector -specific requirements.
We also have a number of interpretations of this that we need to comply with and adapt to from NCSE, ICO, DCET, Cabinet Office, and in Europe and in the different industries as well.
1We cannot prove any compliance against individual regulations in any systematic way unless we can translate it into something that we can all understand. And the way we do that is with a control framework.
And the control framework is what allows us to make this more accessible and understandable you'll have heard of some of these cyber essentials cyber essentials plus ISO 27001 42001 the NIST AI frameworks or the CSF the cyber security frameworks SOC 2 type 2 all of these are well known trusted because they can evidence
evidence alignment against those different regulations and legislations for the key controls and that evidence comes in the form of a chosen control set which are then maps to the frameworks regulations and legislation.
The reason I mention this is because you can actually take hold of your your AI project and actually start to build the evidence that sooner or later you will be asked for to prove that you're securing your AI systems.
The questions that you need to be able to answer is your asset risk register, your asset register as well. So in other words what AI systems are you using?
Those that you're aware of but also if you can understand how they're also being used in in other parts of the organization.
This is known as shadow AI.
But there's also another more insidious form of AI, which is partly shadow AI, but I prefer to call it stealth AI. This is where you actually see AI embedded in applications
that have already been approved, but have been updated to now have AI within.
You need to understand what data is flowing into them, whether it's customer data, you need to be able to evidence that you are treating it properly and in accordance with the GDPR requirements and that you're classifying it in a way that you can separate the data out if it's so required.
You need to have an
understanding of what is acceptable in terms of AI usage so policies documentation and similar need to cover that and you also need to ensure that you're aware of who your third party suppliers are around AI.
And if you can understand the risks that they are exposing to you in the supply of AI systems and data sometimes.
The way to look at that is really, it's your system, but it's my data that I'm putting into it. So how are you securing it?
1Finally, but by no means least, is understanding what the evidence trail looks like. This includes includes understanding that the models are making decisions on your behalf and you need to keep an audit trail of how these decisions are being reached so that if you ever have to review them you can go back and find that.
But governance isn't just about paperwork, you'll be glad to hear.
Barclays Bank had a very interesting use case with something they call digital eagles. This is going back to, oh, the 2010s, when they deployed over 10 ,000 iPads to all of the different branches. What they were trying to do was enable mobile banking.
And the very first time that they tried this, it failed, and it failed spectacularly. The reason it failed was nothing to do with the software, nothing to do with the devices. It was down to the fact that when the devices arrived in the branch, Sue, who had worked in the front office for the past 30 years, didn't know how to use it, was scared of it, thought it would replace her job. So she put it in the bottom drawer and left it there.
Phoning IT to ask for help in how to use these devices never went down well with them either. so therefore they didn't get used as a result of which Barclays completely revamped the whole program and they created a concept of a digital eagle and this was actually a change in the
culture they got a person from every branch and they trained them up in how to look after and support their fellow peers and then they sent them out again and this time around it was a tremendous success because there was a cultural fit and a cultural support that actually worked with the peers within the teams that already existed.
If you're interested in that by all means have a read of the story.
Another way in which you can actually accelerate the AI security element is by rolling out an amnesty program. This actually makes use of screensavers, poster campaigns, email sorry, email banners as well,
to try to get people to open up and explain how they're using AI, which you can then adapt and start to give them approved solutions for.
I'm actually looking for 10 more organizations who have an AI program, an active AI program, who are willing to beta test some of this with me.
If you're interested in getting copies of these kits for deployment across the organisation, please see me after.
So, in summary, there's a story about two intrepid explorers who go off across Africa and they're working their way through the jungle and they come across this lion. and the lion is staring at them they're staring at the lion and suddenly one of them throws his knapsack to the floor and he starts rummaging around and he takes out a pair of running shoes kicking off his boots he ties up his
shoelaces on the trainers and the other guy is looking at him as if he's mad he said what are you doing you can't simply cannot outrun the lion says you're right but I can outrun you and this is the story I'm going to say here is it's the
same with AI security you can't do perfect AI security but you can do it better than your competitors if you accept that perfect security doesn't exist but that you keep trying to improve it it demonstrates that you're taking it seriously and if you can do that that gives you a competitive
advantage because when it comes time to a questionnaire or a tender you can actually differentiate yourself from all your peers by demonstrating that you're taking AI security seriously and have something in place and you don't get eaten by the line thank you