My name's Louise and I work in regulation, doing a bit of regulation and then training regulators. I also, in my spare time, try and build small things with Gen AI because I find it fun.
But by background, I'm a lawyer.
So I'm going to shoot through this quite quickly.
Firstly, where are we now? So we've got some regulation in the EU and in China, and the NIST framework in the US.
We've got a lot of resistance in the EU and in the US, with the moratorium of 10 years announced the other week in the US, and with EU regulation generally stuck at the moment.
The Code of Practice on general-purpose AI was due out, or agreement on it was due this month, and we failed to reach agreement, and it's unclear at the moment what's happening next with that.
And there's a lot of political pressure, both in terms of tariffs and the potential imposition of tariffs, and also from within the EU bloc itself and from companies and people who want to innovate, about how complicated the regulation is and how difficult it will be to comply with.
And then there are things like IP: what training material should be available? Is copyright really an issue? All of that kind of thing.
What's quite interesting for me is that the people who want all of my work to be available for them to train their models on seem quite reluctant to release their models for other people to steal. So there doesn't appear to be any equality of approach in terms of who owns what and who should be able to profit from it.
What happens next? Well, I've got three possible scenarios, a brave new world, a dystopian nightmare, or Luddites united, where we are so traumatized by what happens to us as a result of AI that we ditch it altogether and go back to the pen and paper.
Well, I think it could be worse.
This is a timeline.
At the moment we're here, and as I say, on the rules on general-purpose AI, this is in Europe, we can't reach agreement, so who knows what happens next.
My tentative view is that there will probably be a renegotiation, it will get held up, and at the same time we're also going to be looking at changing the rules on data protection.
So that's all fun.
The Council of Europe is separate from the EU.
No, loosen them. Yeah, I think they'll loosen them.
I think they'll change the rules on the requirements around automated decision-making, like telling people whether they're subject to automated decisions. And I think they'll change what constitutes personal data.
And you see a little bit of that potentially in the EU AI Act to start with. They've changed the way they look at biometric data in the AI Act, actually to make it tighter, but different.
I'll talk to you about it later if it's interesting to anybody.
This is the EU AI Act implementation timeframe.
God knows what the UK is doing. They're having a chat about it. That's basically where they're up to.
They are changing data provisions in the Data Bill, but in terms of AI regulation, we're so far behind and don't want to do it.
So I'm now going to talk about what I call the pistachio problem, because I'm interested in what people's ideas are about where the limits of regulation should be and why we should have it.
So this is Sophia. She is 32. She's a data analyst. She uses a wellness app. It's an AI app.
It tracks fitness and it logs meals. She loves pistachios. She often buys them. She logs all of the pistachios she buys on the app.
And she ends up getting lots of really useful recommendations about pistachios: personalized snack recommendations, pistachio-themed workouts, exercise clothing, all of this kind of thing. The ad experience is relevant and helpful, and she buys more and more pistachios. So her conclusion is: it feels like the AI really knows me.
So my question at this point is, is this a good example of helpful AI and would you sign up?
Straw poll across the room, who would sign up to a health tracking app and then use it?
Nobody. We've got one person at the back. Two? Two. So everybody else, concerned about health tracking apps, doesn't have a Fitbit, doesn't use Google Fit, doesn't use MyFitnessPal, any of those things. None of them at all.
Wow. You tried them all. Nothing works. OK.
Do you think it should be regulated?
So can we do a show of hands for no? Do you think an app like this should be regulated? In any way? In any way?
Hang on. Yes? No? Okay.
This app is called Happy App. The regulator is very worried about the amount of personal data it's collecting and the profiling that takes place, and wants to regulate how it's used. In particular, they want to make it possible to opt out of data collection and use. Happy App says this will cost too much money and will prevent them from being able to develop proper health recommendations.
Is this type of regulation for this kind of app handcuffs or guardrails? And I'd like a show of hands for handcuffs and a show of hands for guardrails.
So what happens next is that Happy App is doing really well. And the company expands. And the data is now being used in insurance and hiring analytics. So they've repurposed themselves.
And they're using their AI systems to say: here's all of the data we've collected from this fitness app, we're going to profile people from it, and we're going to use it to decide insurance and hiring outcomes. And it incorporates behavioral data and behavioral indicators from this consumer data.
So what happens next is that pistachio consumption is oddly correlated with impulsivity. This has an impact. Sophia applies for a job and her risk profile is slightly negative. And the consequence for Sophia is that she is screened out. No interview, no explanation.
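To make that concrete, here's a minimal, purely hypothetical sketch of the kind of scoring step that could produce this outcome. Every feature name, weight and cutoff below is invented for illustration; none of it is taken from any real screening system.

```python
# Hypothetical illustration only: how an innocuous purchase signal,
# once turned into a "behavioral indicator", can tip an opaque
# screening score. All features, weights and the cutoff are invented.

candidate = {
    "years_experience": 8,
    "qualifications_match": 0.9,
    "snack_impulsivity": 0.7,   # derived from pistachio purchase logs
}

weights = {
    "years_experience": 0.05,
    "qualifications_match": 1.0,
    "snack_impulsivity": -0.8,  # spurious correlation baked into the model
}

SCREENING_CUTOFF = 1.0          # candidates below this never get an interview

def screening_score(features: dict) -> float:
    """Weighted sum of features; the candidate never sees the weights."""
    return sum(weights[name] * value for name, value in features.items())

score = screening_score(candidate)
print(f"score={score:.2f}, interview={'yes' if score >= SCREENING_CUTOFF else 'no'}")
# With these invented numbers: 0.4 + 0.9 - 0.56 = 0.74, so no interview,
# and nothing in the output says which feature caused it.
```

The only point of the sketch is that the decisive weight sits inside the model, which is exactly the challengeability problem that comes up in the discussion below.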
How did pistachios become such a problem?
So my questions at this point, and there are quite a few, and I don't think we'll deal with all of them, are: what about regulation here?
Should we allow data obtained from our personal lives to be used at all in employment decisions? Yes or no. Yes first. No.
So interestingly, a 2023 report estimated that about 90% of employers look at things like social media, which contains personal data, to decide job applications. And 21% of them said they had rejected candidates based on information those candidates had posted online.
It's not. It is looking at stuff that's public. So there are slightly different considerations because it's publicly posted. But if you're putting something into your private app, say on a device that you've got on your wrist, you're not necessarily expecting that data to become public.
Yeah, right. I mean, if you post something publicly, it's fair game, right? Potentially. I mean, there's an argument that it shouldn't be, but yeah.
In terms of, so, I think the difference here is how the data is combined in the back of the app to decide that pistachio consumption indicates personality traits, because there's no... I guess maybe my question is more: is it because it's harder to place blame when you say this is an AI making a decision versus a person?
It's a good question, and there are different questions to be asked in terms of regulation about what you can know, how you can know it, who should tell you it, and whether that is different for an algorithm, where it's not clear or not disclosed, than for "I am a scientist, I've written a paper on this, this is the outcome and this is what we're applying." Yes?
It might not be about the data in and of itself. It might be about the combination of data with... It's about, I can't hear you very well, sorry.
It's about how you put the things together. It is about how you put the things together, but there is, for me anyway, a difference between a person having written down "I've got this bit of data, this bit of data, this bit of data and this bit of data, and I am making the decision that that's the outcome", and all of that data going into a black box that comes to an output with no understanding of how it got to that decision.
And the reason there's a difference is that when you look at how you challenge it, you don't know what you're challenging. If you've got it stage by stage, "this is my process", you can say, well, I object to that because it's got no rational foundation, or you shouldn't have reached it. But if you've just got "here's a set of characteristics, I don't know what you've put into this, but it's made me not get a job and I don't know why", then you're in a slightly different situation. So for me it's about the process that happens in the middle.
Should pistachios come with a warning on the packet? Anyway, I'm going to skip the other questions, but Happy App is happily going off with its business expansion, and it is now selling to law enforcement.
And it's being used for predictive policing. And risk scores are based on zip codes and consumer data.
Pistachio purchases appear in a flagged cluster. And this is my example of possible clusters that it might appear in.
The risk score alerts an officer at a routine traffic stop. The stop becomes tense. And unfortunately, Sophia is briefly detained.
So a snack has turned into a signal for suspicion.
And the actual reason that I use pistachios is because I started thinking about this in the context of a border control system that's being introduced in Europe, which takes in data from your social media. It takes in data from Interpol, which is obviously legitimate. But it's unclear how much of that data could lead to profiles being built on people
about things that are totally irrelevant to whether or not they should be granted visas, whether they're terrorist threats or otherwise. And I also didn't want to use the standard things that people talk about because they just get really dull if we talk about facial recognition all the time.
So my question really is, what if the correlation rate is accurate and 10% more crime is committed by pistachio fans?
Are there any problems with using these kinds of systems, or should they be regulated?
We're talking about regulation of private companies, but I think the problem is more a government or a police force using citizens' data without any proof of wrongdoing.
So is there an argument that government should be more heavily regulated than companies?
I think the onus is on putting restrictions on the government. So that would be an example of a good guardrail rather than a handcuff.
Yes, in my opinion. I think government should be subject to stricter rules. At least governments are regulated. They are heavily regulated.
Where you have a private company, who audits them? Who audits the auditors? Who regulates the regulators? You have a lot of auditors regulated.
Citizens' data is being shared with a private company without any suspicion or evidence of wrongdoing. So I think one of the things I was trying to get at is: if the tech is made by a private company, you're saying it's okay for it not to be regulated, but if it's used by a government, it ought to be.
Okay. Yes.
So we think it's the pistachios problem. Okay.
So I am going to finish with how would you regulate? What would you allow? What would you allow with safeguards and what would you ban?
I have one question. So how does this set of data get to the public authorities?
So you weren't following. Sorry.
So the idea is that you have an app that develops behavioral indicators based on consumer data. That is then sold on.
So that was one of the questions about that. It's not quite true. It's not quite true, and it has in fact happened, so there's one example.
So in that case you would say regulation is a good thing? So you think if we... I'm not saying this is the AI Act. I'm saying this is about regulation of AI, and I think regulation of AI comes through data protection legislation, product safety legislation and EU AI Act legislation, all taken together.
So really the issue was... and there are examples of data being legitimately transferred on the sale of a company for a different purpose. There are guardrails and safeguards, which are that you're required to tell people that's what's happening. And we're not just working in a GDPR state.
So that's... Yeah, but we are living here in a GDPR environment. So here it's forbidden. That's not allowed.
So I get the problem, and this is a huge problem. I think the bigger problem, by the way, and that's why I'm completely with you, is that there's so much information out there about us that actually someone who puts it all together knows more about you...
What if the data had been de-identified before sale so that what's actually sold is the behavioral insights about pistachio eaters and not the personal data of the person? Because by the time that you've de-identified it, you can, you know.
Yeah, I think we can move this upstairs. I think we move this upstairs.
So what I really wanted to do was walk through that scenario to show how one set of data collection, one set of predictive analytics, could be acceptable in one circumstance but not in another, and that regulation may be needed as a guardrail to ensure safety in those different circumstances.
Thank you.