AI and Human Rights

Introduction

What are human rights, and why should AI care? Firstly, a little bit about me. By background I'm a human rights lawyer. I did a lot of work on migration and human rights over the years, and then I had a sabbatical, did a little bit of programming, and learned how to make video games in Python. After my 10-week course of making things like poker card decks, I worked out that I could probably make a set of if-else statements do most of my job. So then I started looking at what AI might do to my job, and at what the consequences of AI might be for the rest of society as a result.

After a few years of researching and writing, last year I wrote a chapter of a legal textbook; it's called The Law of AI, and my chapter is on AI and human rights. All very snappy titles. I'm on the Council of Europe's expert committee on AI and equality, and we're trying to develop recommendations for states who are seeking to regulate AI to reduce bias. I'm also doing a lot of work around children's rights and digital technologies, looking at tech that both helps and harms children, and I'm on an advisory board to a project which is looking at predictive technology in the justice system. So that's my background.

Understanding Existential Risks of AI

I'm going to first look at existential risk through the medium of Helen. This is Helen. Helen has an AI vacuum cleaner called Harry, and Harry has been designed to hoover up things that are no good. Over time, Harry learns from Helen: he's listening to her on the telephone and generally following her around the house. Helen frequently talks to her friends about her other friends behind their backs, which Harry doesn't like very much. She also discusses her favourite political party, her views on abortion, maybe her views on Elon Musk. And over time, Harry learns that this is no good. Harry decides that Helen needs to be cleaned.

Now, this is the image that ChatGPT gave me, and I did wonder whether a semi-naked woman was a good slide for a human rights presentation, but I do think it goes some way to illustrating perhaps some of the problems that we're facing. So then we put this at scale. Harry's learning is repeated at scale, and Harry and his friends start to learn that human beings really are no good, all of them. Finally, the AI vacuum cleaners decide that no humans are good and try to hoover all of them up. And at this point we start wondering: do we need to get off planet?

Embedding Human Rights in AI Development

I think that we have a framework, and this is where it gets a little bit more serious, that some people think might be able to solve some of the problems that existential risk might cause. That would be, in a simple way, to embed human rights into the data, design, development and deployment of AI, and to hit the kill switch if it's going to go wrong. We can use AI to identify patterns where good human rights practices have not previously existed: we can use it to identify patterns of racism, of sexism, of discriminatory housing practices and sexist employment practices, and we can fix them. So that's one good, positive thing. It's possibly the only good, positive thing I might say throughout the entirety of this presentation, but there you go.

The Human Rights Framework

There are some basic reasons why the existing human rights framework is good. The framework works on the basis of having a set of rights for prevention; then you monitor and put in place oversight; and then you provide effective remedies for where things go wrong. This maps quite conveniently onto the life cycle of an AI product: concept and initial analysis, the design and testing phases, deployment, and then monitoring and evaluation.

The benefit of this is that we don't need to reinvent the wheel: we have this framework already. It is internationally agreed, which was quite a feat at the end of the Second World War; I'm not sure we'd manage it now. We have this established framework that's enforceable, and it has existing jurisprudence. (This diagram is reproduced with consent; I haven't stolen it.) One of the things that human rights law does is place the obligation on the state to prevent harms and to protect human rights, so a failure to regulate properly would make the state liable for the harm done to the humans. But that doesn't exempt businesses from their responsibility: they have corporate responsibility, and they have human rights due diligence responsibilities. States, if they're acting properly, should put in place laws that regulate properly but also respect innovation. These rights provide the ability to strike the balance between competing obligations; one that you're probably familiar with is freedom of expression on one hand and the right to privacy on the other.

What Are Human Rights?

Just in case: what are human rights? This is the basic set contained in the European Convention on Human Rights, which is the one I'm going to focus on, partly because we are in Europe and partly because I'm also going to talk about the EU AI Act, which is focused on fundamental rights that, broadly speaking, mirror these.

These are: the right to life and the prohibition of torture, which are absolute; the right to liberty and security; the right to a fair trial; the right to respect for private and family life; freedom of thought, conscience and religion; freedom of expression; freedom of assembly and association; and then the prohibition of discrimination, which is the other quite important one. Freedom of thought is quite interesting when you start thinking about brain-computer interfaces, because freedom of thought is absolute: once you start interfering with somebody's brain, you can potentially interfere with their freedom of thought, and that gets quite complicated.

These rights are not just about criminals and migrants, as some would have us believe. They are a set of fundamental rights and freedoms designed to protect essentially what makes us human, what differentiates us from machines, maybe an element of human dignity. I'm going to leave that one for a debate afterwards, because I think this issue of human exceptionalism is definitely one for beer and not my talk.

Some of these rights are what are known as qualified rights, which means that you can breach them or interfere with them if the interference is in accordance with a law, if it's justified, if it's necessary in a democratic society, and if it's a proportionate measure. So I can wiretap you if I've got a court to say I can do it because I think you're going to commit an offence. That's the sort of thing that that means.

AI and Human Rights Issues

So now I'm going to give a couple of examples of AI and human rights problems, and the first one is recommender algorithms, looked at under the right to life. They're great if you want to find a pair of trousers or a hat; however, they've also been implicated in genocide, and in child suicide and harms to child mental health. In the case that's on the screen at the moment, there was a coroner's report into the death of a child called Molly Russell, which found that particularly graphic content, portraying self-harm and suicide as an inevitable consequence of depression that could not be recovered from, had contributed to her suicide. The coroner in that case recommended legislation from government and self-regulation by platforms, and that prompted some regulation in the UK. There's also concern that recommender algorithms pose a general threat to democracy: skewing elections, fomenting hate and limiting access to information. I told you this was going to be fun.

Then there are deepfakes and generative AI. The puffer jacket image is kind of funny, but there are image foundation models and digital manipulation issues at play as well: this isn't porn, this is salad. So how do we avoid inadvertently building tech that contributes to these kinds of abuses?

And what might happen if we don't? That's the thing we're trying to avoid. What I would suggest, and what the UN suggests, and what the EU suggests, is that in appropriate cases you use human rights impact assessments, also known as fundamental rights impact assessments in EU legislation, to assess what risk is likely to be caused, how likely it is to happen, who it might affect, and whether or not you can mitigate it.
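To make those four questions concrete, here's a minimal sketch of what one entry in such an assessment might look like in code. The class and field names are my own hypothetical illustration; neither the UN guidance nor the EU legislation prescribes any particular structure.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of one entry in a human rights impact
# assessment, mirroring the four questions above: what harm is likely,
# how likely it is, who it might affect, and whether it can be mitigated.
@dataclass
class ImpactEntry:
    right_affected: str                 # e.g. "right to life", "privacy"
    harm: str                           # what risk is likely to be caused
    likelihood: str                     # how likely it is to happen
    affected_groups: list[str]          # who it might affect
    mitigations: list[str] = field(default_factory=list)  # how you can mitigate it

entry = ImpactEntry(
    right_affected="right to life",
    harm="recommender surfaces graphic self-harm content to children",
    likelihood="high without age-appropriate filtering",
    affected_groups=["children", "vulnerable users"],
    mitigations=["content classification", "age assurance", "human review"],
)
```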

EU AI Act and Risk Assessments

The EU regulation, aimed at combating unacceptable harms, divides AI systems into different risk categories, and we're going to look at two of them:

Prohibited and High-Risk AI Systems

firstly, prohibited systems, and secondly, high-risk systems.

Prohibited systems create unacceptable risk. These are the ones that, if we follow them through, might lead to us needing to get off planet, or to destroying ourselves in ways that we don't want to: subliminal manipulation, exploitation of vulnerabilities, and social scoring, which I've always found quite strange, in that we object to it happening in China by government but we seem to object to it much less happening by private companies in the West. I'm not entirely sure how we square that one; the same goes for profiling. These are very specific examples: you can't profile people to determine whether or not they're likely to commit crimes. Facial recognition databases have been quite widely looked at, and there are two circumstances where you can't use them: one is where you create the database using untargeted scraping of data, and the other is real-time biometric identification in public spaces. The other one that's interesting, and that I don't think people are paying attention to, is emotion recognition in workplaces and educational institutions. So whiteboards at school that watch you and decide whether or not you're paying attention, or whether you're sad, are currently prohibited under the EU AI Act. There are some narrow exceptions, but I'm not going to go into those, because I've been speaking for 12 minutes already.

High-risk systems are the second group that I'm going to talk about, and these are governed by the requirement to conduct what's known as a fundamental rights impact assessment. A high-risk system can be one of two things. The first group are AI systems that are safety components of products governed by EU product safety law, so they would include things like medical devices, agricultural vehicles and various other products. The second list is: biometrics; educational and vocational training; employment, workers' management and access to self-employment; access to and enjoyment of essential private services and essential public services and benefits; law enforcement; migration; and the administration of justice and democratic processes. Broadly speaking, things that government does that affect all of us and that we can't opt out of. A rough sketch of how those two tiers fit together follows below.
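As a sketch of the two tiers just described, here's how they might be encoded as a simple lookup. The category strings paraphrase the lists as presented in this talk; this is a hypothetical illustration, not a compliance tool, and it ignores the Act's precise definitions and narrow exceptions.

```python
# Toy triage of AI use cases into the two EU AI Act risk tiers discussed
# above. Illustrative only: the real Act has precise definitions and
# narrow exceptions that this sketch ignores.
PROHIBITED = {
    "subliminal manipulation",
    "exploitation of vulnerabilities",
    "social scoring",
    "crime-prediction profiling",
    "untargeted scraping for facial recognition databases",
    "real-time biometric identification in public spaces",
    "emotion recognition in workplaces or schools",
}
HIGH_RISK = {
    "safety component of a regulated product",  # e.g. medical devices
    "biometrics",
    "education and vocational training",
    "employment and workers' management",
    "essential private and public services",
    "law enforcement",
    "migration",
    "administration of justice and democratic processes",
}

def risk_tier(use_case: str) -> str:
    """Return the (simplified) tier for a described use case."""
    if use_case in PROHIBITED:
        return "prohibited: unacceptable risk"
    if use_case in HIGH_RISK:
        return "high-risk: fundamental rights impact assessment required"
    return "lower risk: outside the two tiers discussed here"

print(risk_tier("emotion recognition in workplaces or schools"))
```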

Responsibilities for AI Risk Assessments

This is why I'm not so worried about generative AI, at least just yet, because that picture is what it drew for me when I put that list into it and asked it to do a picture. And again, some systems are exempted from fundamental rights risk assessments; they tend to be to do with essential critical infrastructure.

So who needs to do a risk assessment? Generally it's the deployer, where the deployer is a body that is governed by public law. This has a specific definition, and when I ran this presentation past someone, they said you'll lose your audience at this point, so I'm going to do it very quickly. Some of it is really easy: government departments. Some of it is slightly more complicated, where governments contract out their basic obligations to other bodies; for example, if you contract out prisons, sometimes those private prison contractors would be quasi-public bodies. And then there are deployers that are private entities providing public services. Part of this is because the Act governs all of Europe, so in some cases healthcare is private, in some cases it's public, and in some cases the distinction is a bit blurred. It's not very clear who it's going to include yet, so there are likely provisions in the recitals. And then there are deployers of certain high-risk systems: credit systems and credit scoring, unless used to detect fraud, and risk assessment and pricing in insurance policies.

If you're the deployer, this is your responsibility, but you can rely on an assessment that's done by a provider, as long as you keep it up to date and take into account changes that are relevant. Which means that if you're a provider of an AI system, your customer may require you to have done a fundamental rights impact assessment. I think legally this is going to create lots of contract and other challenges that are going to be an absolute nightmare for people to sort out.

Conducting a Fundamental Rights Impact Assessment

This is what you have to include in a risk assessment: a description of the processes, because you might have a system where some parts need to be risk-assessed and some don't; a description of the period of time you're going to use the product for; the categories of persons and groups likely to be affected by its use in the specific context; the specific risks of harm; and a description of the implementation of human oversight measures. That's the human in the loop: what measures are you going to take to govern it, and what are you going to do if it goes wrong?

And how do you do this? The first step is planning and scoping: what are you examining, and who are the relevant stakeholders? Then comes data collection and the baseline development of your risk assessment. It's not just the product, but also workers and the community you're deploying it in; it's not even just the people who are going to use it, as the product may have knock-on effects for the wider social community. Then: what are the potential impacts? These should be assessed according to both the likelihood of them occurring and the severity if they do. And what are your red lines? That's quite fundamental. Then consider impact mitigation and risk management (I suggest using something like ROAM), and then reporting and accountability. And bear in mind that this is not a one-off process; it needs revisiting.
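As an illustration of that likelihood-and-severity step, here's a small sketch of a scoring function with red lines built in. The scales and thresholds are invented for illustration; nothing here comes from the Act or any official methodology.

```python
# Toy likelihood x severity scoring for a fundamental rights impact
# assessment, with a "red line" that blocks deployment outright.
# Scales and thresholds are invented for illustration.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "almost certain": 4}
SEVERITY = {"minor": 1, "significant": 2, "severe": 3, "irreversible": 4}
RED_LINES = {"irreversible"}  # severities you refuse to accept at any likelihood

def assess(likelihood: str, severity: str) -> str:
    """Score a risk and say whether it is acceptable, needs mitigation,
    or crosses a red line."""
    if severity in RED_LINES:
        return "red line: do not deploy"
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 6:
        return f"score {score}: mitigate before deployment, then monitor"
    return f"score {score}: accept and keep under review"

print(assess("likely", "significant"))  # score 6: mitigate before deployment
print(assess("rare", "irreversible"))   # red line: do not deploy
```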

Conclusion

Ultimately, I would say that if we get this right, we can live happily with our robot overlords and not have to go to Mars. Thank you very much.
