How to Leverage AI Securely

Introduction

Hello everyone, my name is Rafan Knight, and before we get started I'll tell you three things about myself that I think are really important. The first is that London is infested with mice, so she's great and convenient. The second is that I'm actually Dutch, I'm from the Netherlands. Wave if you're from the Netherlands, any Dutch people? Oh, nice.

But I'm actually married to an Englishman, which means that when we watch the Euros it's quite interesting at home. And the last thing is that I'm absolutely, madly passionate about AI and cyber security, and I genuinely believe that by the end of this talk many of you in this room will also be really interested in AI and cyber security. Now I'm going to say hello again, and I want to hear everyone say hello back, because we are going to bring the energy up. So: hello. Hello! One more time: hello. One more time: hello.

About the Speaker

Nice, awesome. So I'll tell you a little bit more about who we are and what we do. My name is Rafan Knight, and I head up an organization called Secur. We help organizations across the public and private sector to adopt AI securely.

Cyber security is incredibly important, in particular in the age of AI. Now, we are backed by the UK's largest cyber security accelerator, which is funded by the UK government through the Department for Science, Innovation and Technology, and we've also recently been shortlisted for the techUK Cyber 10. So we're doing interesting and exciting work.

Now, who are we? We are AI cyber specialists. Our mission is to empower organizations in the public and private sector to adopt AI securely, and our vision, although ambitious, is to upskill every single person and organization in the entire world to understand AI security and online safety. Now, what is the problem? 73% of organizations do not have a formal AI safety policy in place, 76% of current GenAI projects are not secure, and 89% of organizations are experiencing AI-powered cyber attacks.

Now, what is the solution? The solution starts with AI cyber workshops, because it starts with people; it's always about people. Next, we do AI cyber assessments, and finally AI cyber policy, strategies and target operating models.

Today's Agenda

So that's a little bit about the organization and who we are. Now, today we're going to cover some definitions, we're going to look at the difference between traditional and AI security, we're going to look at the problem statement, and we're going to look at best practice and governance, which is really exciting, so we're all going to get excited about governance. Then finally we'll look at some use cases, and then there will be a chance to connect.

Key Definitions

Now, let's just establish some basic definitions. I'm not going to go over the AI definitions, because I think as a whole they've been covered and there's a good understanding in this room, but let's take a look at some of the security concepts we work with here in the UK.

Traditional vs AI Security

Now, what is the difference between traditional cyber security and AI security? Put your hands up: rank yourself on a scale where 10 is extremely knowledgeable about cyber security and 1 is least knowledgeable. Anyone at five or up, put your hands up? Okay, nice. Seven or up? Okay. Eight? All right.

So traditional cyber security deals with things like digital systems and networks, viruses, phishing attacks, denial of service, data security breaches and unauthorized access. It deals with much more than that, but these are just some of the things it covers. Now, with AI security we're talking about things like bias mitigation, data manipulation, data poisoning, model inversion, model theft, model integrity, AI governance and privacy. So there are different challenges associated with AI security versus traditional cyber security.
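To make one of those AI-specific threats concrete, here is a minimal, self-contained sketch of data poisoning. It is an illustrative example of my own rather than anything from a real incident: a toy 1-nearest-neighbour classifier, synthetic data, and an attacker who slips a handful of mislabelled records into the training set so that one chosen input is misclassified while overall accuracy barely moves.

```python
# Illustrative sketch of targeted data poisoning (hypothetical example, numpy only).
import numpy as np

rng = np.random.default_rng(42)

def nearest_neighbour_predict(X_train, y_train, X):
    """1-nearest-neighbour: each point gets the label of its closest training point."""
    dists = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[dists.argmin(axis=1)]

# Clean training data: class 0 clustered around (-2, -2), class 1 around (+2, +2).
X_train = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y_train = np.array([0] * 100 + [1] * 100)
X_test = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y_test = np.array([0] * 100 + [1] * 100)

# The one input the attacker cares about, e.g. a transaction they want waved through.
target = np.array([[2.5, 2.5]])

# Poison: five near-copies of the target, deliberately labelled as class 0.
X_poisoned = np.vstack([X_train, np.repeat(target, 5, axis=0) + rng.normal(0, 0.05, (5, 2))])
y_poisoned = np.concatenate([y_train, np.zeros(5, dtype=int)])

for name, (Xt, yt) in {"clean": (X_train, y_train), "poisoned": (X_poisoned, y_poisoned)}.items():
    acc = (nearest_neighbour_predict(Xt, yt, X_test) == y_test).mean()
    target_label = nearest_neighbour_predict(Xt, yt, target)[0]
    print(f"{name:9s} model: test accuracy {acc:.2f}, target classified as class {target_label}")
```

The point of the sketch is that the deployed system is never touched: the attack lives entirely in the training data, which is why it sits outside the traditional network-and-perimeter view of security.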

Problem Statement

Now, what is the problem? We went a little bit into these stats, but let's look a little bit wider. 83% of customers surveyed reported they would take their business elsewhere following a security breach, and most of us probably would. Now, 87% of organizations in the UK are vulnerable to AI-powered cyber attacks.

Just imagine things like national critical infrastructure: that could be the NHS, it could be local authorities, or imagine your bank. That's a huge number.

And then when we look at what cyber security professionals think, 95% of them agree that we need AI-powered cyber solutions for AI-powered challenges, and 96% of organizations surveyed reported that they are actually seeing more sophisticated cyber attacks. So that's the problem statement.

Best Practices and Governance

Now, when we talk about best practice and governance today, we're looking at the OECD principles, then the NIST AI RMF framework, and then finally the AI Act, all in 15 minutes. So this is going to be a little bit rapid, but we're going to have fun with it, and if you have questions, ask me at the end or grab me for a drink afterwards and we can talk about it.

OECD Principles

When we talk about the OECD, there are five principles that underpin this framework. The first covers inclusive growth, sustainable development and well-being, and it focuses on how trustworthy AI can contribute to prosperity for everyone, not just a few people but every single person across the world. Next, we're looking at human-centred values and fairness: AI designed in a way that respects the law, human rights, democratic values and diversity. Next, we look at transparency and explainability. This is incredibly important, and actually we've seen a lot of regulation come out in the UK where there's a disconnect between the regulation and whether a model can even meet it. But this principle is about transparency and responsibility, making sure that AI systems are explainable, essentially.
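As one small, concrete illustration of what "explainable" can mean in practice, and very much a sketch of my own rather than anything from the OECD text: permutation importance scrambles one input feature at a time and measures how much the model's accuracy drops, which gives a first, model-agnostic answer to the question of which inputs a system is actually relying on.

```python
# Minimal permutation-importance sketch (illustrative example, numpy only).
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: only the "signal" feature carries information, "noise" does not.
def make_data(n):
    y = rng.integers(0, 2, n)
    signal = np.where(y == 0, -2.0, 2.0) + rng.normal(0, 1, n)
    noise = rng.normal(0, 1, n)
    return np.column_stack([signal, noise]), y

X_train, y_train = make_data(500)
X_test, y_test = make_data(500)

# A very simple model: assign each point to the nearest class centroid.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in (0, 1)}

def predict(X):
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in (0, 1)], axis=1)
    return d.argmin(axis=1)

baseline = (predict(X_test) == y_test).mean()
print(f"baseline accuracy: {baseline:.2f}")

for j, name in enumerate(["signal", "noise"]):
    X_shuffled = X_test.copy()
    X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])  # break this one feature
    drop = baseline - (predict(X_shuffled) == y_test).mean()
    print(f"importance of '{name}' feature: {drop:+.2f}")
```

Shuffling the signal feature collapses accuracy, while shuffling the noise feature changes almost nothing, so the importance scores make visible which inputs the model actually depends on.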

Next, we're looking at robustness, security and safety. This essentially deals with the operational resilience of an AI model and the impact it has on an organization, on critical infrastructure, or on our society as a whole. And finally, the OECD's fifth principle looks at accountability: the organizations and individuals developing, deploying or operating AI systems. Who is accountable, and who is responsible? Is it the person coding and developing it? Is it the person deploying it to customers? Is it the regulators? Who owns, and who is accountable for, the consequences of these models? So that's the OECD. Now you all know the OECD values, which is exciting, right? Yes.

NIST AI RMF Framework

Next, we're looking at the NIST AI RMF. This is another really exciting one; I can tell everyone is loving the governance session.

AI Act

Now, when we look at the AI Act, cyber security is particularly important in certain sections. The first parts cover things like operational resilience, transparency and explainability. The next chunk deals with safeguards and testing; that's what your developers and engineers will be looking at. The next bit looks at product safety and cultural awareness; your people teams will be really interested in that. Then finally we're looking at data protection, information exchange and confidentiality; that's where we're really looking at things like privacy.

Now, the AI Act: is anyone familiar with this? Put your hands up if you've seen this before. Cool, nice, okay. So what's really interesting is that the AI Act categorizes risk, and the approach they've taken isn't to just say, here is this AI tool and we think this tool on its own is really high risk or really low risk. They've looked at what the tool is and then where you're applying it, and based on those two things they then apply the level of risk that's associated with it.
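As a rough sketch of that idea, and very much a simplification rather than the legal text: the risk tier follows from what the system does and the context it is deployed in, so the same kind of tool can land in different tiers. The tier names below (unacceptable, high, limited, minimal) reflect the Act's commonly described categories, but the specific mapping is illustrative only.

```python
# Simplified, illustrative sketch of use-based risk tiering -- not the legal text.
from dataclasses import dataclass

# (purpose, deployment context) -> tier. The same purpose can land in a
# different tier depending on where it is applied.
ILLUSTRATIVE_RULES = {
    ("social scoring", "public authority"): "unacceptable",
    ("cv screening", "recruitment"): "high",
    ("biometric identification", "law enforcement"): "high",
    ("chatbot", "customer service"): "limited",   # transparency duties apply
    ("spam filtering", "email"): "minimal",
}

@dataclass
class AISystem:
    purpose: str
    context: str

    def risk_tier(self) -> str:
        # Default to "minimal" when no rule matches -- purely for illustration.
        return ILLUSTRATIVE_RULES.get((self.purpose, self.context), "minimal")

for system in [
    AISystem("chatbot", "customer service"),
    AISystem("cv screening", "recruitment"),
    AISystem("social scoring", "public authority"),
]:
    print(f"{system.purpose} in {system.context}: {system.risk_tier()} risk")
```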

Gosh, we're done with the AI Act within 15 minutes. Look at that.

Use Cases

Now, let's look at some use cases where things have gone wrong; this is one of the reasons why cyber security is so important. These are all real incidents, by the way. There was an incident where the CEO of a UK-based energy firm was deepfaked, and a huge financial transaction was able to take place because of that deepfake. This is real; it's happening now. The next was an autopilot crash in which AI software was used, and this resulted in negative media coverage, safety concerns, all sorts of things. The next one is the one most people worry about, and again it's real: an AI chatbot started putting out a lot of hate speech, and that was really difficult. And then finally, a tech company breach where facial recognition was being used. This one was interesting because the company used something called web scraping: they were taking data that was publicly available in order to build their tool. The company experienced negative media coverage and a huge loss of trust.

Common Issues

Now, I know there are brave people in this room, so I'm going to ask: just shout out what you think these incidents have in common. Just shout it out; it's quite a loud room, actually. Interesting, anyone else? Keep them coming. Yes, money, okay. Anyone else? Yes, someone at the back? Cool, okay. So, who said money? You said money: financial loss is one of them. Loss of customer trust, brand erosion, loss of public trust, operational disruption.

The Importance of Cyber Security

Why does this matter? This room is filled with incredibly intelligent, high-energy, excited people, and you can go and build a great AI tool, or you could teach your company how to use an AI tool, or you could go and invest in an AI tool. But cyber security is so important, especially in this day and age, and that means thinking about cyber security at every step of the life cycle, not just right at the end because you think it's something you need to tick off a list. Actually making sure that things are operationally resilient is incredibly important. There is genuine human loss associated with things going wrong: there have been a lot of cases where AI algorithms have been used in the Netherlands, or even in the UK, and they've made some big mistakes that have resulted in serious impacts on human beings. So AI is really exciting and AI is really cool, but we also need to make sure it's safe and secure. Now, one of the terms that I hate is "secure AI" or "responsible AI" or "ethical AI", because it's not the tool itself: it's how we code it, how we develop it, how we govern it, how we roll it out, and how we teach people about it. And actually, 96% of cyber security breaches and attacks come down to human error, so at the end of the day it really comes down to humans.

Conclusion

Now, I see pizzas at the back, so let's connect; you all know how to scan a QR code. I'll finish on that note, but thank you so much.
