Good afternoon, everyone. My name is Alejandra Ayala. I'm a lawyer at Área Digital Abogados.
First of all, thanks, Sapin, for inviting us to share our knowledge with you. What I'm going to talk about this evening is actually very closely aligned with what Melinda has talked about.
We're going to talk about the new European AI Regulation, the first comprehensive legal framework for AI, built around trust and innovation.
As you know, artificial intelligence is changing industries, helping businesses, and improving decision-making. But these advances also bring risks.
Risks like bias, lack of transparency, and unfair decisions.
So the European Union created a regulation to reduce these risks and to help people trust AI.
Today I will explain the main points of this regulation, of this law: first of all, who must follow it, who is bound by it; how AI is classified by risk; what companies must do; how it will be enforced; and what impact it will have.
Okay, so the AI Regulation applies to providers, to developers, and also to users of AI systems operating in the European Union, and even to those based outside it. This is important: it is a European law, but it applies even to companies based outside the European Union when their systems are used within it.
The main objectives are the following. First of all, ensuring AI safety and ethical compliance by establishing clear guidelines. Secondly, harmonizing regulations across the EU member states, avoiding fragmentation. Also, enhancing transparency, so that users understand when they are interacting with AI. And finally, protecting fundamental rights.
Ensuring AI does not reinforce discrimination or pose risks to privacy or security.
This is very important; this is the main idea.
Now, the risk-based approach. There are four levels of risk.
First of all, there is unacceptable risk: AI systems that are completely prohibited.
What are they? First of all, AI systems that manipulate human behaviour in a harmful way. But also social-scoring AI systems, similar to those used in China. And finally, systems used for mass biometric surveillance in public areas. These three are completely prohibited.
The second level is what is called high risk. This level covers certain critical sectors: healthcare, education, employment, and also justice.
For high-risk AI systems, the requirements include transparency (again, it is always the same, based on the principles), human oversight, always, and a risk assessment before deployment.
Next level: limited risk. Limited-risk systems include chatbots, content generation, and virtual assistants. You are familiar with these. In these cases, companies must inform users that they are interacting with AI.
And finally, minimal risk. This includes video games, recommendation engines, and general search algorithms. For minimal risk, there are no further requirements.
So by adopting this risk-based approach, the regulation ensures that higher-risk applications are subject to stricter requirements, while low-risk ones can continue to develop without unnecessary restrictions.
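If it helps to make this concrete, here is a minimal sketch of the tier-to-obligation mapping we just walked through. The tier names and obligation lists are my own illustration, not the Act's official taxonomy; the real classification depends on the regulation's annexes.

```python
# Minimal sketch of the four-tier, risk-based logic described above.
# Tier names and obligation lists are illustrative, not the Act's
# official taxonomy.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # critical sectors: health, education, employment, justice
    LIMITED = "limited"            # chatbots, content generation, virtual assistants
    MINIMAL = "minimal"            # video games, recommendations, search

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["banned -- may not be placed on the EU market"],
    RiskTier.HIGH: ["transparency", "human oversight", "risk assessment before deployment"],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: [],  # no further requirements
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```

The point of the sketch is simply that the obligations follow mechanically once the level is fixed, which is why placing the system at the right level matters so much.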
So the levels are more or less clear. I think this is very important, because if you can place your AI system at the proper level, you will know clearly what your requirements and obligations are.
Let's see now what the obligations are. The obligations fall on companies, and on developers and providers.
Companies have to make sure that AI systems follow European law. It is also required that human supervision is in place. And finally, they have to regularly check how the AI makes decisions. So as you see, human support is very important to meet all these obligations.
But there are also obligations for developers and providers. First of all, they have to assess risks before deploying AI systems, that's clear.
Risk is the principal message here.
Then, they have to register high-risk systems in a European database. And finally, they have to use high-quality training data to reduce bias. A rough sketch of how such a pre-deployment checklist might look follows.
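This is only an illustration, assuming hypothetical field names; real conformity assessment under the Act is a formal legal process, not a function call.

```python
# Rough sketch of a pre-deployment compliance checklist for a high-risk
# system. Field names and checks are hypothetical.
from dataclasses import dataclass

@dataclass
class SystemRecord:
    name: str
    risk_assessed: bool = False          # risk assessment done before deployment?
    human_oversight: bool = False        # human supervision in place?
    registered_in_eu_db: bool = False    # entered in the European database?
    training_data_audited: bool = False  # training data checked for bias?

def unmet_obligations(rec: SystemRecord) -> list[str]:
    """Return the list of unmet obligations; empty means the checklist passes."""
    gaps = []
    if not rec.risk_assessed:
        gaps.append("assess risks before deployment")
    if not rec.human_oversight:
        gaps.append("put human oversight in place")
    if not rec.registered_in_eu_db:
        gaps.append("register the system in the EU database")
    if not rec.training_data_audited:
        gaps.append("audit training data for bias")
    return gaps

print(unmet_obligations(SystemRecord(name="triage-model", risk_assessed=True)))
```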
Now, the law's entry into force. The deadlines depend on where your system is allocated: whether it is prohibited, high risk, limited risk, or minimal risk.
So in 2024, there was an immediate ban on prohibited AI systems; this already applies.
In 2025, implementation of the obligations for high-risk systems, including transparency for generative AI models. And finally, in 2026, full enforcement of the regulation, including compliance checks and oversight.
There is a small exception for small and medium enterprises and also for startups, where deadlines are extended to ensure that innovation is not stifled.
The regulation presents both opportunities and challenges. Let's start with businesses.
For businesses, compliance will require additional investment, but at the same time it builds consumer trust and promotes fair AI practices.
For legal professionals, the demand for AI compliance expertise will increase, and it also opens new areas for legal advisory and AI risk auditing.
And finally, for the European technological ecosystem, it's good, because the European Union is setting a global gold standard for AI regulation.
But at the same time, some people think it may represent an excess of regulation, and it could push companies to relocate outside the European Union. So we need to be careful with that.
Actually, this is already happening with privacy. It's the same.
I forgot to explain that if the requirements of the regulation are not followed, there are very large fines. The fines go up to €35 million, or up to 7% of a company's global annual turnover, whichever is higher.
So as you see, this is very aligned with the fines established in the GDPR for privacy. They are following the same line with these enormous fines.
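To make the cap concrete, here is the arithmetic as a small sketch; the turnover figure is invented for the example.

```python
# Illustrative arithmetic only: the maximum fine is the higher of a fixed
# amount (EUR 35 million) and 7% of global annual turnover.
FIXED_CAP_EUR = 35_000_000
TURNOVER_SHARE = 0.07

def max_fine(global_annual_turnover_eur: float) -> float:
    """Return the fine ceiling: whichever of the two limits is higher."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# For a company with EUR 1 billion turnover, 7% is EUR 70 million,
# so the percentage cap dominates the fixed EUR 35 million cap.
print(max_fine(1_000_000_000))  # 70000000.0
```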
And the conclusion is that this regulation will clearly help a lot, because it was needed. It's focused on safety, transparency, and responsibility, while at the same time respecting innovation and development, which is very, very good.
So let's think about it; I don't know what you think. Do you think we are ready for this? Do you think it's good? Do you think it's going to be a barrier to innovation?
The debate is open.
Thank you very much, and let me know if you have any questions.