Ethical AI by design

Introduction

So it started on a bit of a negative note.

I'm sure that we will end on a positive note.

Current AI Landscape and Challenges

But I know that we are all here from companies that are developing AI. There is a hype going on.

Of course, most of you are very serious, and I'm not saying that your work isn't. But because there is a hype, because there are so many companies developing AI, there are already predictions, from Gartner for example, saying that over 40% of agentic AI projects, and AI projects in general, will be canceled by the end of 2027.

Why is this? It's because although there is the creation of the AI tooling and the technology behind it, and even the workforce, there are no controls and no business value attached to what is being created.

Issues with AI Adoption

Yeah. And there is also a study by Deloitte saying exactly the same thing: although companies are investing more and more in AI, research shows that 80% of companies are not getting any kind of revenue from it. So there is no ROI being achieved.

And that is mostly because, as I see it, AI is not just about the technology. There's a very good graphic that shows this: there is a huge part of software, infrastructure, and data behind the tip of the iceberg that we see. Of course there is a front end, but behind it there is memory, authentication, tool use, agent orchestration, model serving, foundational models, ETL, the database, the infrastructure, the CPUs.

And most importantly, and that's what I want to discuss a little bit more, there is the governance. That is what attaches all of this to a bigger business strategy, and makes sure that companies are aware of the risks and can apply mitigations in a good way, so that anything that is launched does not end up creating harm, being called back, and consuming budget, and so that the company has a future.

AI Governance

So in that sense, I would like to bring a few steps on how to create good AI governance. It doesn't have to be anything too big. It doesn't have to be a big company; any startup can do it.

And so I bring this framework. Of course, there are several frameworks. You can use the ISO framework, the NIST framework. Happy to discuss all of those.

I like this one because it's simple and straightforward. And so it's called Trust, and it's from the Global Innovation Framework.

So the link is over here as well.

Steps to Effective AI Governance

So the first thing the company needs to do is triage. That means having a risk analysis in place. Most companies already have a risk analysis based on GRC, and it's just a matter of adapting that to AI.

And when the developers are creating, working on the infrastructure and the software development, it's a matter of understanding those risks: does the AI raise red signals (very dangerous), yellow signals, or green signals? And based on that risk approach, attach the corresponding mitigations. We'll look at that in a few slides.
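As a minimal sketch, the triage step could look something like this in code. The risk questions, scores, and thresholds here are purely illustrative assumptions, not part of any specific framework:

```python
# Illustrative risk-triage step: classify an AI use case as
# red / yellow / green and attach mitigations to each flagged risk.
# Questions, weights, and cut-offs are hypothetical examples.

def triage(use_case: dict) -> tuple[str, list[str]]:
    """Return a (signal, mitigations) pair for an AI use case."""
    mitigations = []
    score = 0

    if use_case.get("processes_personal_data"):
        score += 2
        mitigations.append("Run a DPIA and check the GDPR lawful basis")
    if use_case.get("automated_decisions_about_people"):
        score += 3
        mitigations.append("Add human-in-the-loop review")
    if use_case.get("customer_facing"):
        score += 1
        mitigations.append("Add output filtering and a feedback channel")

    if score >= 4:
        return "red", mitigations      # very dangerous: needs sign-off
    if score >= 2:
        return "yellow", mitigations   # proceed with mitigations
    return "green", mitigations        # low risk


signal, actions = triage({
    "processes_personal_data": True,
    "automated_decisions_about_people": True,
    "customer_facing": False,
})
print(signal)   # "red"
print(actions)
```

The point is not the scoring itself but that every flagged risk comes out of the triage with a mitigation already attached to it.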

Aside from that, there is also the necessity of having the right data. We all know the saying about garbage in, garbage out. If you don't have clear, consistent, quality data to feed and train your AI, you can have the best technology and you're still not going to get anywhere.

And so you have to have clean, ethical information. Aside from the quality, that information needs to reflect the company's values, which also means it needs to be attached to the company's strategy and the business strategy. Otherwise, the inputs you get are not going to reflect that, and you can end up with anything: hallucinations, bias. The automation will not be valuable for the company, so it will not be used.

The third one is supervision: clear rules that the teams know they should follow.

And that also means ownership. Of course, who should own the creation of the AI, but also who should own the responsible role of governing AI. Is that legal? Privacy? Compliance? It doesn't really matter. There just has to be clear ownership, a clear hat: the area of the business that is doing the supervising and the risk analysis, and is making sure that those rules are applied in the company.

And the fourth one is that there has to be technical documentation. All of this, the risks, the data, the rules, needs to be documented somewhere, in frameworks, in policies. And those documents also need to stay in dialogue with what the product teams are doing, and be revisited, making sure that they remain valuable for the company.

So it's not just a matter of: I created a policy, it's up there, and I don't need to do anything anymore. No, the policy always needs to be kept up to date, so that it remains valuable for the business and for the teams that are conducting the AI research and end up building it.

The Role of Culture in AI Governance

But aside from those four steps, I always maintain that there is a very important fifth step: culture. A lot of you might have seen that a lot of companies, in a lot of areas, are very excited about AI.

But there are some areas that are not that excited. There will be people that are not excited about AI and will be very, very resistant to it. That might be leadership, that might be HR. And that clash in culture might end up ruining a project.

And so culture is very, very important to AI governance, because it shapes how AI is perceived by the company and the business, and how it is regulated. And of course, each company's culture carries different values, and the AI that is being built or adapted for that business has to reflect that culture.

Otherwise, it's not going to be valuable or efficient for the business. Culture is also important to shape the perception of and trust in AI, and to enable its responsible development and deployment.

Creating an AI-Friendly Culture

And, I'll be very quick about this one, a little bit on how to create that culture. Of course, it has to align with the culture of the business and the business strategy for AI. You need to develop clear communication, which also means keeping transparency about what you are doing in the company: how the leadership views AI, how the leadership understands it and wants to invest in it.

And so: how AI is being used in the company, how it can be used, and how it cannot be used.

The third one is to advance experimentation. I know that a lot of companies are very, very wary of how to implement AI and what they should or should not do, and there is easily a lack of control.

So for example, one employee can start using ChatGPT because he thinks it might be efficient. But if the company has control over what people are doing, it can create hackathons, it can run testing, to let the employees try AI in a secure and safe way.

You can use sandboxes, you can run hackathons. And if you have that control, you can even surface new ideas, because sometimes we assume that only the technical people can create AI. But actually, the other areas of the company are experts in their own fields: HR will know what's best for HR, finance will know what's best for finance. So they might come up with ideas on how to deploy AI that lead to something new.

And the fourth one is to prioritize training. This is directly related to AI literacy, which is mandatory under the AI Act and will be mandatory under most AI regulations. And it's directly about empowering people to understand AI.

Of course, not everybody needs to understand how to code. But it is important that all the employees of the company understand the business strategy on AI, and what AI can and cannot do. What is AI?

What's not AI? And how to use the tooling provided by the company in an efficient way.

So that will match with everything that is being said.

Focus on Risk Assessment

And so focus a little bit on the risk assessment. That was the first step on the AI governance framework that I just presented.

I would like to focus a little bit on the by-design approach. The idea of by design, and of course the slide says privacy by design, but think of it as ethical by design for AI. It's not just privacy.

So the idea of by design is that you can be proactive about what you are doing and conduct the risk analysis before deployment, so that you can follow the whole AI lifecycle, from the ideation through to the launch.

By Design Approach to AI

And so what we do in practice is exactly related to that. We have a first step that is called consultation.

Then we have an assessment, and then approval. In the consultation stage, we know that, of course, any idea for a project related to AI is still very immature.

So we don't have full answers to questions like: what model versions are going to be used? What data sets are going to be used? What data is going to be shared? What is the full purpose of using that AI system?

Are we using one system? Are we using several models? Are we going to train? Are we going to fine-tune? Are we going to just do benchmarking? How are we going to do evaluation?

How are we going to deal with bias? But it's a very good early signal of what the company wants to do, of what this project wants to accomplish that half-year or that year specifically, and it gives the team that is conducting the governance framework the opportunity to give early signals of attention.

So, for example, let's say they have a project that involves training on an extensive dataset that contains personal data. That gives you a signal: okay, you need to be concerned with GDPR, you need to be concerned with what data is in there, and some level of transparency will need to be provided to the users. Are we sure how we are collecting this data from third parties, from our users? Or, for example, do we know the potential outputs we are going to have? How can we make sure those outputs will not have bias in them? What are the potential harms this new model can create?

Continuous Monitoring and Adaptation

And of course, I'm happy to go into more detail afterwards, but I would just like to finalize with the last part, which is continuous monitoring. Because of course, AI harm, and the usage of AI in general, doesn't end when we launch the AI. Actually, the big work just starts when we deploy it, because that's exactly when we find out

how it will work and how it will react to real user cases and real data, because until then it's just testing, under control, with the data sets that we have. In the real world, we can have unexpected harms, we can have regulatory changes, we can have changes that affect how the AI works and how it reacts.
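One way to make that monitoring concrete is to capture baseline metrics at deployment and alert when live behavior drifts past them. This is a minimal sketch; the metric names, baseline values, and thresholds are hypothetical examples:

```python
# Illustrative continuous-monitoring check after launch: compare live
# output metrics against a baseline captured at deployment and alert
# when they drift. All numbers here are made-up examples.

BASELINE = {
    "hallucination_rate": 0.02,   # measured on the pre-launch test set
    "refusal_rate": 0.05,
}

THRESHOLDS = {
    "hallucination_rate": 2.0,    # alert if live rate > 2x baseline
    "refusal_rate": 3.0,          # alert if live rate > 3x baseline
}


def drift_alerts(live_metrics: dict) -> list[str]:
    """Return alert messages for metrics that drifted past their threshold."""
    alerts = []
    for name, baseline in BASELINE.items():
        live = live_metrics.get(name, baseline)
        if live > baseline * THRESHOLDS[name]:
            alerts.append(
                f"{name}: live {live:.3f} exceeds {THRESHOLDS[name]}x "
                f"baseline {baseline:.3f}"
            )
    return alerts


# A weekly sample of real-world traffic shows hallucinations climbing:
print(drift_alerts({"hallucination_rate": 0.09, "refusal_rate": 0.06}))
```

The same shape works for regulatory changes: update the thresholds when a rule changes, and the check keeps the deployed system honest against the new bar.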

Conclusion

It can start hallucinating, it can start giving different answers once it is being used with real-scenario data. So we need to be always aware, with continuous monitoring, to make sure that even if those harms happen, they are addressed in a timely way. And so, yeah, I will stop here and take questions.
