Thanks for the introduction.
I'm going to talk about controlling AI.
So it basically means
using AI in a safe way.
I think most of you are here because you're interested in AI,
because you realize that AI is an extraordinary new technology we have.
But because it's so powerful, it's also important that we're able to use it in a safe way.
Especially
because of the nature of AI which I'll talk a bit more about later.
So I hope I
can in this presentation give you a bit of an idea of how organizations are
working on making AI safe in their workplace.
So I also have my QR code right here, which is for my personal profile. I'm quite active on LinkedIn, and if you're interested in AI you can add me there.
Okay so Andrea has already talked a bit about this.
Yeah so how did I end
up in AI governance?
I have a bit of a different path I would say than most people working in
governance because I have a more technical background.
I started with studying mathematics in Delft. Then I took a year to work as a software developer at Hydrogen Race Team, and after that I started my master's in math. But I realized that, now that I had worked on something tangible, it was quite difficult to get back into abstract mathematics, so I decided to start Lexfriend and stop my master's. When working with Lexfriend and talking with clients about AI, I realized they were quite hesitant to use it, mainly because of possible risks: information security risks, not knowing where their information is going, not knowing whether the output will be correct. So I realized quite quickly that it's very important that you can give them some sort of assurances, some sort of rules or frameworks, so they're able to trust AI. Because using it personally is one thing, but if you have client data that's sensitive, you don't want that to just disappear somewhere. So what I did is get certified in AI, and I changed my role from software developer, which I used to be exclusively, to AI governance and information security. At Lexfriend, where I work, we help organizations with AI through trainings and consultancy, and starting from today I also give talks about AI.
Yes, so the first question you may have is: okay, it sounds interesting, but why is it necessary to have some sort of governance? Well, I picked three real-world examples that show what can go wrong if you don't have a good structure in place.
So for instance
the first one is with Amazon.
They used AI for hiring, and it was trained on data where, generally speaking, women had far fewer opportunities to be hired than men. So what the AI did is it basically penalized women, privileging men in that way. That's obviously not what they intended, but they saw it happening in real time, and it's simply a form of discrimination. Then the second one is from Air Canada. People were interacting with their chatbot, and it made up a policy: it told customers, as a client you are able to claim this and that. But that was completely made up, so they had to compensate those clients financially. And the last one is about Samsung. It's more about information security, but also AI: sensitive data got leaked into ChatGPT.
And what's interesting here is that each one of them illustrates a different issue.
So the first one is about bias in the training data itself.
The second one really has
to do with hallucinations, which is also a big issue now.
And the third one, as I mentioned,
is more about information security.
So: where is your data going when you're using AI?
So there's lots more that can go wrong.
I'll touch upon that a little bit later.
And yeah, when I tell people that I govern AI, I always say that the first thing you need to do is know exactly what an AI system is. That may sound a bit strange as a place to start, but when it comes to AI, most of us have some idea of what it is. I mean, most of us use it every day, and we can express what it is in some way. But to really make it specific, to have a definition that determines exactly what it is, and also what it isn't, is actually quite difficult to do. Now, people at the European Union thought about that for a long time, and they came up with something. So what I want to do is build up that definition step by step, in a way that I hope you will understand and follow, so that you understand better what AI is and, as I said, what it really isn't.
So it starts with basically two principles. The first one is that it's machine-based. That sounds a bit strange, but what it means is that it's not human or animal intelligence; it's something that runs in software and hardware. Then you have another one, which might seem a bit more far-fetched: it has a specific objective. How AI works, simply put, is that it learns from its training data, and in that way it's able to pursue a certain goal. If you take away the goal, there is no learning. It has to have some form of objective it works towards, because otherwise there is no intelligence. So those are the two core principles, and if we build upon them, we get the following, next to what I just showed: you put something in.
So for instance, when you use ChatGPT,
you put some text in, you put an image in,
that's the input, and something comes out.
That's their answer, so that's the output.
That can be predictions, recommendations, whatever.
And the real thing that makes AI special
is obviously what happens in between.
The inference part, as they call it. And that's also what people call the black box: that's where the patterns or the logic sit that infer an answer to your inputs.
So that's also where if you ask the same question,
you don't get the same answer necessarily twice,
which in that sense makes it a very interesting new piece of technology.
Yes, and then we have the last one.
So the last layer on top.
And these are things that I also think will make sense to you.
That's first of all, the autonomy part.
So we're used to having software
where you tell it what to do and it does that.
But because AI can sort of reason in a way,
it can actually do more on its own.
So what really qualifies or what really is unique about AI
is the fact that it can actually have humans
out of the loop.
So it can continue by itself without human intervention,
like agents, for instance.
And then we have adaptiveness, meaning it can keep changing even after it's in use. So when you're talking to ChatGPT, it can still learn and improve even as you're using it. That's also new, because with traditional software, what you see is what you get; it doesn't really change after you've used it.
And then there's influence, which is, I think, more of a philosophical point: because AI can give answers in the form of recommendations or decisions you should take, it has a real impact on the way we perceive it. So it has a much more direct influence than traditional software has.
So this all put together gives you this overview. These are all the things we just saw, put together in one big map, and here you see how it all basically interacts. It's a bit of a simplification, but this is the legal definition that we now use to categorize AI.
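To make that a bit more tangible, here is a minimal sketch of my own, just an illustration, not anything from the AI Act text itself, that collects the elements we just walked through in one structure:

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """The building blocks of the definition, as discussed above."""
    machine_based: bool      # software/hardware, not human or animal intelligence
    objectives: list[str]    # the goals the system learns to pursue
    inputs: list[str]        # e.g. text, images
    outputs: list[str]       # e.g. answers, predictions, recommendations
    infers_output: bool      # the "black box" between input and output
    autonomy: bool           # can it act with humans out of the loop?
    adaptiveness: bool       # can it keep learning after deployment?

# A chat assistant, described in these terms:
chatbot = AISystemProfile(
    machine_based=True,
    objectives=["answer user questions helpfully"],
    inputs=["text", "images"],
    outputs=["text answers", "recommendations"],
    infers_output=True,
    autonomy=False,
    adaptiveness=True,
)
```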
I'm showing you this because, when people think about controlling AI, they generally only think about controlling the output.
So we think, okay, something comes out
and the only thing we can do to control AI
is making sure that that output is correct.
But the problem is that the part before that,
the inference part, that's the black box.
There's things happening that we don't understand.
So controlling the output is, in that sense, extremely difficult or even impossible.
So what I'm trying to show you is that the way AI is governed is by controlling everything around the output, in order to ensure that the output itself can be kept within boundaries, that it has some sort of guardrails on it.
So instead of saying, okay, the output has to be correct, we can, for instance, say,
hey, we have to look at the autonomy.
The AI shouldn't be able to do that much on its own.
Or when we look at input, we could say it's only allowed to use this sort of input,
so no sensitive information gets in.
Or we can look at the objectives.
Is the AI actually doing the right thing?
or is it going in a different direction than we want? So governance is very much about changing all these things around the output and the inference, to make sure that you can control it. Now, that's still a bit abstract, but I will explain how we can actually do that. Below is a rough sketch of the idea: guardrails around the inference rather than on the output.
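This is a minimal illustration of mine, with a made-up `call_model` function standing in for whatever AI you use; none of it comes from a real framework:

```python
ALLOWED_ACTIONS = {"answer_question", "summarize"}   # autonomy: limit what it may do
BLOCKED_TERMS = ["password", "client_name"]          # input: keep sensitive data out

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for the actual AI system (the 'black box')."""
    return "model answer for: " + prompt

def guarded_call(prompt: str, action: str) -> str:
    # Guardrail 1: restrict autonomy to pre-approved actions.
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action '{action}' is not allowed")
    # Guardrail 2: restrict the input so sensitive information never goes in.
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        raise ValueError("prompt contains sensitive information")
    # The inference itself stays a black box; we only control what surrounds it.
    return call_model(prompt)

print(guarded_call("summarize this public report", action="summarize"))
```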
So we know what an AI system is; now let's look at how we are able to control it. Yeah, this was the difficult word Andrea mentioned. I just took two frameworks as an example. You don't have to remember the names; I just put them there to be complete. We have the EU AI Act, the European Union's law on AI systems, and then we have a management system standard, basically the world standard for managing AI. Those are the two frameworks that we have implemented in our organization, so I'll walk you through how they work and their approaches to managing AI, because I think it gives you an idea of how you yourself can handle AI better.
So this is the AI Act approach. It's risk-based, and what it basically says is: you look at your AI system, whatever you have, whether it's a chatbot or something that predicts certain results, and you only look at the intended goal, the purpose. And it falls into a risk group. So there is, for instance, a list of unacceptable risks: when you're thinking of, you know, manipulative AI, that's simply not allowed. If your use case falls in that group, you can't do it. Then you have high risk, with things like medical devices, a whole list. If your AI system falls under that, you're categorized as high risk and you have a lot of obligations: you have to do a lot of logging, you have to do a risk analysis, you have to do a lot of stuff. When it's in the limited-risk group, which covers the majority of AI, there's actually very little regulation in Europe. People have the idea that there's a lot of regulation, but for most use cases there really isn't. And yet another big group even has no specific rules at all; think about video game AI, things that aren't really affecting people's lives. A rough sketch of this tiering follows below.
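Here's a rough sketch of that risk-based logic. It's a simplification of mine, not the actual legal test, which works from a finite list of use cases rather than a lookup like this:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "heavy obligations: logging, risk analysis, ..."
    LIMITED = "light transparency rules"
    MINIMAL = "no specific rules"

# Simplified: the tier follows from the *intended purpose* of the use case.
EXAMPLES = {
    "manipulative AI": RiskTier.UNACCEPTABLE,
    "medical device": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "video game AI": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    return EXAMPLES.get(use_case, RiskTier.MINIMAL)

print(classify("medical device"))  # RiskTier.HIGH
```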
So what does it mean when you fall into a certain risk group? Well, I took the other framework as an example. As you can see here, and as I was trying to make clear earlier, you're not really trying to make the output better in any way; you're trying to make sure that everything around it is better organized. So first of all, a very important point is making sure that when you're using AI, you know exactly when you're using AI. That sounds very obvious, but what we see is that a lot of people don't realize they're interacting with AI anymore. Using ChatGPT is obvious, but a lot of applications now have AI in them, and a lot of those applications also deal with your sensitive information. So the first step is always looking: where am I using it? Then we have the risk assessment. You look at the AI system you're using and ask: where are the possible risks, what can go wrong, what could really have a negative influence? And you try to mitigate those risks. You also want roles and responsibilities: if something goes wrong, who is responsible for what? You need some sort of incident management in place, people who will, for instance, let your clients know that something is wrong, or who will contact the organization behind the AI. You really need a good structure there. And the most important part is policies. Those are basically the rules within your organization, where you say things like: we only use AI from registered developers; if you want to use a new tool, you have to contact the IT person; we don't allow sensitive data in AI. That's basically the most important part; a minimal sketch of such rules follows below.
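Such rules can even be written down in a checkable form. A minimal sketch, with made-up tool names and contact details:

```python
# A tiny, illustrative AI usage policy: rules *around* the AI, not on its output.
POLICY = {
    "approved_tools": {"ChatGPT Enterprise", "Copilot"},  # hypothetical approved list
    "allow_sensitive_data": False,
    "new_tool_contact": "it@example.org",                 # who to ask about new tools
}

def may_use(tool: str, contains_sensitive_data: bool) -> bool:
    """Check a planned AI use against the organization's policy."""
    if tool not in POLICY["approved_tools"]:
        print(f"'{tool}' is not approved; contact {POLICY['new_tool_contact']}")
        return False
    if contains_sensitive_data and not POLICY["allow_sensitive_data"]:
        print("sensitive data is not allowed in AI tools")
        return False
    return True

print(may_use("ChatGPT Enterprise", contains_sensitive_data=False))  # True
```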
And it's about continuous improvement. A lot of people think: okay, I have a policy in place with some sort of rules and tools. But AI adapts; that's one of its key properties, and organizations change too. So it's really important that you make a system that's actually able to change in a natural way, so it doesn't get stuck right away when some small things change.
And the
last part is very important, AI literacy.
And that means that people in your organization
know what AI is and what its risks are, which is also often overlooked.
Because what you maybe also see today is that, because it's so new, the level of experience between people can vary a lot. I may assume someone is very good with AI while they've never heard of it, and also the other way around. So we see a lot of difference in literacy.
So yeah, now that you hopefully have some sort of idea of how to manage AI, I want to share from our own experience how we did it. As I mentioned before, taking inventory is a very important part. So I'm also basically asking you tonight to think about how, and in which ways, you're already using AI. It's probably a lot more than you think, because there are already small applications, like spam filters or Google search, that use AI, where you might also put in sensitive information or interact in other ways that can actually be harmful.
So that's important.
Yes, then what's important is that you're able to categorize the systems in some way. If you want an overview, for your organization or for yourself, of "okay, I know my systems", then you should put them in some sort of framework, like an Excel sheet or anything else you like, and then go through all the questions: is it using sensitive data? Does the provider have some sort of certification for how they use data? Do they explain how they use your data? Do you know that it's AI, and what sort of AI? Those are all the things you're going to put in an overview; a sketch of what one entry could look like follows below.
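Here's a sketch of one row of such an inventory, expressed as a record with the questions above as fields; the entries are made up:

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    """One row of an AI system inventory, mirroring the questions above."""
    name: str
    kind: str                 # what sort of AI is it?
    uses_sensitive_data: bool
    vendor_certified: bool    # do they have a certification for data handling?
    data_use_explained: bool  # do they explain how they use your data?
    clearly_ai: bool          # do users know they're interacting with AI?

inventory = [
    AIInventoryEntry("spam filter", "classifier", False, True, True, False),
    AIInventoryEntry("support chatbot", "LLM chat", True, False, True, True),
]

# Flag entries that need attention first.
for entry in inventory:
    if entry.uses_sensitive_data and not entry.vendor_certified:
        print(f"review needed: {entry.name}")
```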
What's also important is that everyone in your team is on board. What we see a lot is that the management of organizations, for instance, is not really invested in AI. They're usually a bit more senior people, of course, so AI is not something they may have picked up naturally. So it's very important that those people also get a better idea of where the risks may lie. And then, of course, something that's often overlooked: the people who actually make AI. In our organization we have people building AI systems, and we involve them heavily in making our policy, because they know how it works. Then AI literacy is an important part; I think we're doing that tonight all together, making sure people better understand AI and its risks. Also, if your organization already has things in place for compliance, for instance with information security or the GDPR, then a lot of it can be reused.
And then I have my last point, which is maybe a bit controversial: what we actually found really useful was to use AI to govern AI. So what we did, say we're looking at making certain rules to bound AI: we would write out the rules we have, give them to some sort of chat interface, and ask, okay, is this exhaustive, does this really handle all cases? And it says, no, these things you're not covering yet. Or we could say, the AI Act says we have to do this, but what does it mean exactly? And it says, yeah, in the AI Act it says this and this and this. So it's actually quite a helpful tool for governing itself, in some strange way. That really helped us; it saved us a lot of time. Obviously you have to make sure you use it in the same controlled way you're trying to set up, but it was extremely useful and made our whole process a lot faster too. A rough sketch of that review loop follows below.
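Here's a rough sketch of that loop, using the OpenAI Python client purely as an example; any chat interface works, and the model name is just a placeholder:

```python
from openai import OpenAI  # pip install openai; any chat interface would do

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

rules = """
1. We only use AI tools from registered developers.
2. Sensitive client data must never be entered into AI tools.
3. New tools must be approved by the IT contact person.
"""

# Ask the model to review the policy: is it exhaustive, what's missing?
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You review AI governance policies."},
        {"role": "user", "content": "Are these rules exhaustive? "
                                    f"Which cases are not covered yet?\n{rules}"},
    ],
)
print(response.choices[0].message.content)
```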
So the most important takeaway I want to give today is that AI governance is something a lot of organizations are working on, but since it's so new, there's, at least the way I see it, a sort of gap in experience. When we look at older fields like information security or GDPR, there are tons of people who have been doing that for years and know a lot about it. AI is very new, and you see organizations constantly trying to figure out how to do it best. One of the most common reactions is to just not do anything. So we see a lot of shadow AI, where organizations ban the use of AI, or limit it extremely, which means people are going to use it anyway, but secretly, which is much worse because then there's no policy at all. Or they don't want to use any AI at all. Now, I think AI is going to be maybe the biggest thing in my lifetime, at least, so I don't think that's a good strategy either.
So my whole point here today is
that when we look at our own organization we really feel in control and confident about using
our AI because we follow these rules.
So there is a way to deal with AI in a useful way and also
make sure that it's very effective and can really help you. Yes, so that brings me to the end of my talk. In case anyone is interested, if you're thinking about AI in your organization or you want to learn more about the things I said, we also have a website you could check out. But I'm also very curious to hear your own experiences, or your questions if anybody has them. If you have a question, please try to speak up and speak slowly. Yes, please.
Do you also feel like everyone is just trying something, investing in AI, and then seeing what works and what doesn't, and in the end what they started with can change completely?
Yeah, good to summarize the question: even when you speak up, it's better to summarize. So you basically said, if I have it correctly, that a lot of projects start with AI and might end up completely different from where they started, but everybody's just trying things out. Yeah, we see that a lot. We actually had a talk here a couple of weeks earlier by someone whose job it was to go to organizations, write all those ideas out, and basically select the interesting projects. And we do see that: we have people who don't want to do anything with AI,
but also people want to do a lot,
but then they don't really know the use cases.
And that leads them to very weird solutions that don't really work in their organizations.
So it's actually quite difficult, I think,
or more difficult, but you really have to know
a bit about AI and about your organization
to actually apply it in a good way, I would say.
So we do see a lot of organizations piloting new projects
and then they don't really work.
Yeah, that happens.
Yes.
I'm wondering: in your industry, do you meet more people who don't know how to use AI, or more people who are misusing it?
So you ask, do I meet more people that misuse AI or don't really know how to use AI?
More people that don't use AI.
So, surprisingly enough, especially in fields where it would be super useful because people work a lot with text, like the legal field, we see that a big part of the people we speak to don't really use AI that much. And the problem is that because they don't, they don't see the potential it has, so it's very difficult to explain to them why it's interesting, because they don't care. Yeah, that's something we do see.
Yes.
Yeah, there are quite a few different AI tools. And I know in my organization we started with Copilot. And then you said goodbye to Copilot and now use ChatGPT. Do you feel like you need to know all the different tools, or can you choose one and just use that?
Yeah.
Yeah, so you're asking, there's a lot of different tools.
How do you know which one to use?
Yeah, so that's a good question.
You have specific AI tools that might be very good at specific things, but you also have these bigger ones, like Claude or Copilot, that have significantly improved, especially recently, and that I would say are somewhat similar in the way they work. I do think some can be a lot better, but the way they work is quite similar. For instance, I use Claude a lot; I prefer it over the other ones. But I also use the others, because I sometimes want to check one answer against another AI, to see if it actually makes sense. What you do see is that in the beginning there were lots of different AI products, each doing one specific thing, and now the bigger companies are doing a lot more, so the variety of different AI products is sort of decreasing. So what it seems like is that there are going to be a few big organizations that are very good at a lot of things. You do depend on the tools that the company chooses to work with.
Yeah.
Yeah.
I have a question too.
I'll come to you then.
Yes.
The different standards of, let's say, security or the questions you have to ask yourself.
It looks like quite a big program.
Yeah.
And even me personally, when I look at all of these steps I should be aware of, I know I would probably skip half of them, because either I don't want to, or it takes too much time, or it's too confusing. Let alone a company that has to instruct its people to go through all these steps. So how probable is it that this actually happens? Yeah, that's a good point. What people usually say is to draw the comparison with the GDPR, the AVG in the Netherlands, the data protection law that came into effect a while back. All of a sudden, organizations also had to do a lot of things they didn't have to do before. So the problem, of course, is communicating that with everyone in your organization.
And in a certain sense, I mean, what they do is give trainings and tell people about the policy and where they can find it, but in practice it's going to be difficult.
So what generally happens is, if people don't know what to do, they have someone they can contact who will tell them what they can do, so they really know what they shouldn't do.
And yeah, in that way, you try to make it work,
but it's difficult, especially with larger organizations.
So you're basically asking: is the AI Act sufficient for really creating ethical AI? Yeah, exactly. Yeah, that's a good point. There's quite a lot of debate about that. What's important to know is that the AI Act is coming into effect in phases, so it's not fully in effect now, and they keep postponing it. But in a while the rules for the high-risk group, which is the most regulated group, will come into effect; right now there's no regulation for it at all. But there's a lot of debate about that, because, as I said, you're high risk only if your use case is on a specific list. And you can imagine that list, I mean, it's only a finite list, so there may be specific use cases that aren't on there that can be very harmful.
So the AI Act is quite vague about things having to do
with the algorithm or the AI itself.
So what it does is talk a lot about things like: do logging, do risk analysis, that kind of stuff.
But it also says, yeah, there shouldn't be any bias
in your data, but yeah, I mean, how are you gonna do that?
And in a lot of those areas it can be quite abstract, which makes it very open to interpretation, and that's not good.
But also, yeah, how could you fix it? It's such an intangible thing that it's very difficult to regulate. So it's expected that there are going to be some amendments to the law to make sure it will work in practice. Thank you. Where you had your banned and high-risk groups, those had fairly simple examples, like chatbots and things like that. But what about when agents come in? Because that's much more multi-field, I feel like: you start with one use case, and then it keeps building on itself, and it might move to a different one by itself.
Yeah, that's one of the biggest issues now.
So it's a very good question, because as you rightly said, it's the use case that matters. So as your use case dynamically or naturally changes, you can move into different risk groups.
But yeah, how can you manage that? That is a serious problem. I mean, at the time the AI Act was written, it wasn't really expected that these types of things would happen anytime soon. If I think now about how I would do it, you would probably have to assess beforehand what could happen, and then make sure you can keep assessing it. But, I mean, I spoke to you earlier and you said it's very unclear where it will go, so yeah, that's a very interesting point, and difficult to do. And honestly, I just don't think companies would do it now, because it's so all over the place. So what he's talking about is agents: a new kind of AI technology that's much more autonomous than before. You can ask it to do things like presenting you the news every morning, and then it does that on its own: it goes to different websites, it can log in for you, it can decide, oh, maybe I want to go there too, and it does all this type of stuff. But it can also do things on its own that were not intended at all. For instance, it could log into a news site and all of a sudden place a comment for you that's super politically charged; that actually happened to someone. So the problem there is that you can't really assess beforehand the risk the AI can have, because you don't know where it's going to go. So that's a very interesting thing right now.
There was one last question, I think. Yeah, that's a good question: how do other countries or regions deal with regulation?
It varies a lot.
So actually I was working on making a sort of world map to assess it because every day
I see new things like Ireland has something, Taiwan has something, you know, you see it
all over the place.
But it's really sort of individual.
So there's no real worldwide network that does this.
It's each on their own, and a lot of countries don't have anything at all. So the Europeans are among the first to actually push this. And what generally happens, Apple's charger is maybe a good example: because the EU says they have to use USB-C, they do it everywhere, because it's easier for them that way. So generally the European standards will also influence the rest of the world.
I have one question though.
Since we have the EU AI Act, do you think in the future, if any individual or organization uses AI for bad purposes, they are going to face more legal consequences?
Yeah, so you're basically asking if an organization does something bad, will they face consequences
in that way?
Yes, because I think when the Cambridge Analytica scandal happened, there weren't too many people concerned.
No, exactly.
And at that time there wasn't really regulation in place.
So what people expect will happen is the same as what happened when the GDPR, the data protection law, went into effect: they'll probably fine big companies a lot at the start, as a sort of deterrent, and then people will quickly see, oh, I don't want this to happen to me, so they'll make sure they comply. It's important to know that right now you can make an AI system, well, not in any way you want, but one in the high-risk category, and not do any sort of control, because that part is not in effect yet. Anyone else? Okay, we'll wrap this up. Yes.
Thank you, Lauter, give him another round of applause.
Thank you.