My name is Ashley. I am a researcher at Fair Work Foundation based at the Oxford Internet Institute, University of Oxford.
So we are basically an action research project and we deal with labor markets. And my particular specialization within this group is AI and the labor market.
So AI and work in the present, and AI and work in the future, which is why the title, AI and the Present and Future of Work.
Fantastic stuff, because it involves all of us, and if it doesn't involve all of us, I'm sure it will very soon. So thank you guys for having me here, and thank you John Michael. How do you pronounce your name? John Michael? John Michelle? Okay, John Michelle. And Joe. This will be a much lighter talk. So I'll just set a timer here so I'm on track, and get started. I'm an academic, you know, it comes with the job. Okay, so, right, can I?
So, as we all know, the world thinks AI is a bubble and there's a high chance it can be a bubble if we don't treat it with a certain degree of seriousness.
And one part of that seriousness is that we don't see beyond trends, beyond what's trendy. Everyone wants a piece of AI today. Everyone wants to include AI in something.
And I was just telling my friend right there that, as a consultant, I often get inquiries from companies asking: we have a product, can we somehow insert AI into this? Can you make us have AI in some shape or form?
And the reason they ask that is because of the real-world implications: if you have AI somewhere in your project, investors, angel investors, can give you 10 million, 20 million, 100 million. You can become a billionaire in a year. So the reasoning behind their asking is real.
But with that comes this issue that we forget the real world implications of AI.
And as you can see in the pictures, that's a data center, an Etisalat data center in Dubai, where I'm from. On the top right, those are the undersea cables which power our internet. And on the far right, down there, that's an office of one of the companies we work with. That company powers AI, because the workers you see there are data annotators, content moderators, et cetera: the hard labor that goes into creating the AI
systems that we have. To give you a better idea of what I'm trying to get to here: have you ever seen this? Yes, you're all familiar with it. So I'm trying to link something here, and I'm sure you already know what I'm trying to
link. For those of you who don't know, this is called the Mechanical Turk. It was an actual, ancient machine where, if you can see, there's a chessboard on the top. And that's a robot.
And this particular man took this robot all around the world, hundreds of years ago, and told everyone: hey, look, I have a robot that can play chess and beat any grandmaster. And people, actual grandmasters, went and played, and they failed.
But it turns out that right here, if you see, behind this was a man. He had an upside-down chessboard and would play in real time, and no one knew that until way later, when someone bought it and opened it up. It was quite profitable, you know; you don't want to share your trade secret.
So the reason I bring up the Mechanical Turk is that, just as the man playing chess was invisible, today in the world we have lots of human labor that is invisible but necessary, and with time it's only going to increase. Without these workers we can't move ahead with AI, because, as we all know, there are lots of things we don't know about AI. We kind of saw that in the last talk: we were confused about what the right answers were. We are learning interpretability, we are learning how to control AI's hallucinations; AGI in the future, how is that going to work out? We are trying to reason with all this, and there is actual labor involved behind the scenes. So if we don't protect these workers, if we don't see their work, then how can we ensure that the AI systems we create will lead to a fair, a better world?
Because that's what we all say, that we want a more ethical world. We want an inclusive world.
But if we don't know what's happening behind the scenes, can we really trust companies? Can we really trust people that develop these AI systems to then
be like, oh, the AI should have unbiased answers? But if you have a boss giving you a deadline, that in one day you have to finish ten hours of work in seven hours, do you think the person helping to create these AI systems will then say, oh, I'm going to make the most gentle AI you've ever seen? That's not really going to work. Their emotions play a part, and I'm not trying to be very philosophical here, but it's a real-life thing: what they feel, what they think, will be reflected in their output.
So, from data to development: the people powering AI. AI systems don't just run on code. We are, you know, evidence that it doesn't just run on code.
Behind every algorithm lies a vast, often invisible human supply chain.
And that's what I'm trying to get to, the AI supply chain.
And as you can see here, it's a bit complicated, but if you read carefully, it's quite simple.
Data collection, data curation, annotation, model training, evaluation, verification. And then you have sourcing materials, data centers, algorithms, operating systems, hardware, commodities, suppliers, manufacturers, shipping, distribution, lead firms, and at the bottom end, all of us, the customers.
We are part of this, whether we like it or not. And we do have a moral responsibility to ensure that anything that we do, that we are cognizant of this.
All I'm trying to do through this talk is make you think that behind the things you use, the things you work on, there are humans doing the work. AI is not just out there in a vacuum. Think of it like oxygen; that's an example I like to use. Oxygen is important even though we don't see it. These people are important even though we don't see them. So I'll just show you the scale of things.
We don't have a hard number on how many people work in this field, because it's not a very regulated field.
So we sort of try and estimate that from the platform economy.
So platform economy is everyone who works on a platform.
So transport and logistics, professional services, AI-related services, domestic services, personal care services, et cetera.
So, you know, your Deliveroo, your Uber Eats, they all come under platform economy. It's all gig work.
And with every day, gig work and platform work are also integrating with AI. So the AI is becoming an increasingly common thing.
All these companies that you use, they all have AI. They're increasing their AI capacities.
So we sort of use the size of this to estimate how many people actually work purely on AI. We can't really tell, but to give you an idea: the value of the platform economy is 15 trillion, and it has about 404 million workers. These 404 to 600 million workers are about 12% of the global labor force, in gig work. So that's not a small number.
And this is from our own study.
And this 404 million is a lower estimate; the upper estimate is over 600 million.
And even if you assume that, within that, maybe 3% is AI work, that's still a huge number, and still a lot of people we have to be aware of.
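The arithmetic here can be sketched in a few lines. The 404 and 600 million figures are the study's own estimates quoted above; the 3% AI share is the illustrative assumption from the previous sentence, not a measured value.

```python
# Back-of-envelope estimate of AI-related workers within the platform
# economy, using the figures quoted in the talk. The 3% AI share is an
# illustrative assumption, not a measured value.
lower_workers = 404_000_000   # lower estimate of platform workers
upper_workers = 600_000_000   # upper estimate of platform workers
ai_share_pct = 3              # assumed percentage doing AI-related work

# Integer arithmetic keeps the result exact.
lower_ai = lower_workers * ai_share_pct // 100
upper_ai = upper_workers * ai_share_pct // 100
print(f"AI-related workers: roughly {lower_ai:,} to {upper_ai:,}")
# AI-related workers: roughly 12,120,000 to 18,000,000
```

Even at a conservative 3% share, that is on the order of twelve to eighteen million people.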
So I'll just bring out, like I was saying, the keywords. Companies, governments, international organizations, and even us: we all like to talk about humane, intelligent, inclusive, trust, et cetera.
But for humane outputs, the system has to be humane too.
We can't really build an inclusive, modern world on invisible labor, because then what's the difference between us and, you know, these forms of modern slavery?
You can't just build a shiny new city and then say, oh, people were treated wonderfully, it was all good.
Just like that, we have to be cognizant that if you're building AI systems for a better world, then the people working behind it also have to be treated fairly.
So what are the issues that they face?
Commonly, low and unstable earnings, precarious working conditions, lack of labor protection, insecurity, limited avenues for collective representation.
And if you take a look at the map on the right, the purple section, the lavender section, that's where we are, and that's where the companies are. So that is North America and Western Europe: we are where the companies are.
And if you see the blue and the green: in the blue, there are both companies and workers. But the green is mostly where workers are.
Most workers, and these are data annotators, content labelers, et cetera, are in the Global South.
They are in Asia. They are in Africa. In Asia, most of them are in India and the Philippines.
And in Africa, it's a bit spread out, but the rising trend is South Africa, Nigeria, and Ghana. And Ghana especially, the government is heavily investing.
A lot of our work at the moment is in Ghana.
So it shows you the sort of disparity at a global level too, because the benefits of these workers' labor go to the Global North, but the workers themselves are based in the Global South.
And again, this is all about predicting the future, right? But if you take a look at this, give this 100 years, what do you think the result will be?
If we don't ensure that the people working in the South have basic resources, basic amenities, then when the age of AI arrives, when everyone uses AI, they will be subjugated.
It'll be a modern form of, what's the right word? Modern form of? Slavery. Slavery is a word.
You can say, give me words. Neocolonialism. Neocolonialism.
That's another harsh word. Yeah, exactly. So, like I said, I'm just throwing ideas out here. It's not doom and gloom; I'm just trying to bring into the discussion things we don't often talk about. Because, besides this, I also work in hard AI research, and what I see wherever I go is that people don't talk about this. When we talk about ethics, we're talking about: oh, we have an AI model, how do we make the model more ethical? How do we make the model more responsible? We don't check whether the very systems we use to build these models are ethical. Are our employees getting paid minimum wages? Are we checking that? That's something that often gets lost in the discussions around AI, and that's what I'm trying to bring to your attention. Lots of talking from my side; I'll give you a few words from people I've interviewed. This was from Latin America.
I work 14 hours a day and still don't earn minimum wage. My monthly salary can't cover even two weeks of expenses.
So I borrow and I sink deeper.
This person was from Africa.
And this person in particular was a very sad case, because he has a bachelor's in science and is currently pursuing his Master's in Business Administration.
But the trouble is, and this is why the Global South is a popular destination: there are young people in the Global South who are very educated, literate, fluent in multiple languages, but they just don't have the opportunities.
The proportion of youth in the population is much higher compared to the opportunities that they have.
So what lots of companies do is tap into this. All the big companies that you know, all the companies that you might be working for, or might work for in the future, they're all tapping into this resource.
But what happens then, on the ground, is that there are people like him, who want to study, who want to better their lives, but whose one month's work won't sustain a family for two weeks. And he's very educated.
And this is another thing that I heard very often.
This is what companies say when people complain. You are replaceable. If not you, we have hundreds ready to take a job. And this is a reality.
It doesn't affect us here as much, but it is a very, very, very scary reality in Asia and Africa.
You must finish your daily target.
And this one in particular was really heartbreaking.
So this was a case of a content moderator, and she was reviewing her daily content: things like violent crimes, assault, the worst things you can imagine.
And there was a protest happening in the city. And I'm not going to say which city, but there was a protest happening and her father happened to be out and the protesters caught him and attacked him.
And she got that video while she was working, on her screen, but her manager wouldn't let her leave, because she had tool time that she had to work through. This is a reality, even if it means you just saw your own father being attacked. Compared to this, the next one is much simpler, but for us women it is a big, big issue. This was another content moderator. She was saying that in the city she works in, they give her night shifts, and they say: you can go home after your night shift, we'll pay you a bit extra. But there's no public transportation home. It's not a safe city to walk home in. And if she takes a cab, it costs more than her week's salary. How can she afford a taxi? So she said: my pay disappears just to get home safely.
This one, if you do a little bit of math, you'll understand why this is particularly annoying.
So this was something that a manager said: your screen will be actively monitored, by AI again.
So now a lot of these companies are using AI to moderate the workers.
In a nine-hour working day, you have to work 8.5 hours. You get a 15-minute toilet break twice, so that's already nine hours. And then you can take a one-hour lunch break, in a nine-hour working day. How can they take half an hour plus one hour on top of 8.5 hours of work? That's ten hours. It's just not going to work. And that's the whole point.
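The impossibility of that schedule can be checked directly; the numbers are exactly those quoted above.

```python
# The schedule quoted by the manager, in hours: the required work plus
# the permitted breaks simply don't fit into the nine-hour day.
shift_length  = 9.0
required_work = 8.5
toilet_breaks = 2 * 15 / 60   # two 15-minute toilet breaks
lunch_break   = 1.0

time_needed = required_work + toilet_breaks + lunch_break
print(f"Time needed: {time_needed} h in a {shift_length} h shift")
print(f"Overrun: {time_needed - shift_length} h")
# Time needed: 10.0 h in a 9.0 h shift
# Overrun: 1.0 h
```

Taking all the permitted breaks requires ten hours, so one of the "allowed" breaks exists only on paper.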
And the last thing, like I was saying, I'm scared to complain, I will lose my job. This is a reality.
And all I wanted to do was to bring your attention to what really happens all around the world with people working behind AI.
So now I'll quickly go
through what we do at Oxford. And this is not an advertisement; I'm not trying to, you know, sell you what we do.
All I'm trying to say is that think about this. And there are people like us who can help you with, you know, making sure that the AI supply chain systems that you are a part of are more ethical.
If not us, find someone else. And if neither, I'll share the principles that we use; you can try and implement those yourself.
So Fair Work: we are an action research project established to assess labor conditions and engage constructively with organizations to enhance standards within the AI supply chain, the platform economy, and cloud work, and we've also worked with online sex work platforms.
So far we've done over 734 company ratings and directly impacted the work lives of over 16 million workers and the red spots that you see are all the countries that we've worked in.
A few more to add to that in two months' time. So far, 41 countries.
And we've enacted over 414 changes by companies.
And these are the principles that I was talking about. You can try and implement these in your organizations.
You know, I would encourage you to. Five principles: fair pay, fair conditions, fair contracts, fair management, and fair representation.
These are the five principles that we work on, and we've come up with them after lots of research. We've been running this for about a decade now, so it's a decade's worth of work. And this is what we do for our scoring system: we go to companies, we rate the companies, and we have two sub-points under each threshold. So, for example, in fair pay, the first sub-point is that you should pay the local minimum wage. You'd be surprised how many companies don't pass that, but that's a different point. If you pass that, then the second point is that you pay the local living wage. There's a huge difference between minimum wage and living wage. And you can only get the second point if you get the first point. So, like this, we have five principles and ten points. No one's ever got ten points.
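The scoring scheme described here can be sketched in a few lines. The principle names come from the talk; the example company's pass/fail results are invented purely for illustration.

```python
# Sketch of the scoring scheme described above: five principles, two
# points each, and the second point only counts if the first is met.
# Maximum possible score is therefore 10.
PRINCIPLES = ["fair pay", "fair conditions", "fair contracts",
              "fair management", "fair representation"]

def fairwork_score(results):
    """results maps principle -> (passed_first_point, passed_second_point)."""
    score = 0
    for principle in PRINCIPLES:
        first, second = results.get(principle, (False, False))
        if first:
            score += 1
            if second:        # second point is conditional on the first
                score += 1
    return score

# Hypothetical company: pays a living wage (both fair-pay points),
# meets the first threshold on conditions and contracts, fails the rest.
example = {
    "fair pay": (True, True),
    "fair conditions": (True, False),
    "fair contracts": (True, False),
    "fair management": (False, False),
    "fair representation": (False, False),
}
print(fairwork_score(example))  # 4 out of a possible 10
```

The conditional second point is what makes the scale demanding: a company that pays a living wage but not the minimum wage (say, off the books) would still score zero on fair pay.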
It's rare for a company to get even seven points. I think one time we got a six. Honestly, that's good enough; it's not a complaint. If a company is willing to engage with this genuinely, that's more than enough. If we can bring about even one change, brilliant. So why should you do it? What would it help you with? For companies in particular: you can stay ahead of regulations, earn trust, build stronger teams, sustain impact, and there's a financial upside.
And also in this world where everyone's using AI, how will one particular company stand out? When everyone's got a fantastic model, how will your model stand out?
It can be more ethical, you know, if that sells.
So for us, this is how we engage with companies.
We do audits, and we do certifications. That's audit versus certification: an audit is a one-time thing, while certification is a more regular thing, which we do one year, and then two years later, and then it can be a lifelong thing. We also work as a consultancy, so companies come, we give advice, and it's their choice whether they want to act on it or not. And there's the pledge: companies just take a pledge. We are not going to check whether they're following it through; they take it up on themselves.
This is sort of what the journey looks like: discovery.
May I ask a question? Yeah. The audit, who conducts the audit? We conduct the audit. You conduct the audit.
And, you know, there's no conflict of interest here, because we are not-for-profit, we are a research institute, and we're not funded by any company. No, that's fine.
I'm curious, maybe others are too, or maybe it's just me in which case we can take it offline.
What are the guidelines that the auditors have when conducting an audit?
And I ask this as a trained auditor, ISO 27001, where, on the first day of the training, one of the first things they said is to look for ways of finding conformity.
As an information security specialist, I'm always looking for the information security failings and failures. So actually trying to find the one case where it works, and using that as a tick in the box to show compliance, always seems ethically or morally imbalanced against the whole point of doing the audit.
So are the auditors doing this looking for signs of compliance or are they looking for signs of non -compliance?
So we sit directly in between. The reason is, first, we don't want to scare people away, because at the core of all this, all we want is for companies to take some sort of action. Like I said, we're not doing this for money; we're not earning billions from this, we're not even earning millions from this. So if a company comes to us, our first goal, I mean, we're not trying to appease them, but at the same time we're trying to be as honest as possible. If my data connects, let me see. So, for the principles that I showed you, there's a detailed... I don't think I have... my network doesn't work here. But if you just look up... We can take it offline if you like. No, no worries.
If you just search Fair Work AI principles, it'll take you to this website. And you can see five major points.
And then under each of the five principles, you can see two points. And under each of those two points, you'll find specific sub-principles.
So what we try and do is we try and look at that exactly as it is.
Do companies or governments...? Yeah, sort of, in a way, but it's not limited to that. Let's say, in one of our last audits, we saw that a company had a very strange privacy agreement. In that privacy agreement, they said that the employee must share their data with the company; the data is then used by, say, a third party, and if the third party misuses the data, the company is not liable. And the employees must agree to that; if they don't agree to it, they're terminated. I know that's complicated. I'll just say it in one line again.
Your data is being taken by your company, and your company is selling it to a third party. The third party says, we might misuse this, and they can misuse it. If they misuse it, you can't say anything. And if you refuse to give your data, you will be terminated.
And the thing is, this is hidden in clauses. This was an agreement, and most people might not notice this clause at the end. These are not things that we anticipate; this was not in our thresholds, but then we had to add it. So, in a way, it is prescriptive: we look at the thresholds and see if companies follow them exactly, but that doesn't mean we don't look at things around them. Sometimes they might follow the thresholds to the T, but there are some things that are a bit suspicious around the borders, if that helps. It's new, and if any of you have any ideas, I'm open to them.
We've been trying to work this out, because we started off with the platform economy, and over the past two years we've been moving into the AI space. But because of the pace at which AI is improving, it's a bit tricky to keep the thresholds updated. The last time we updated them was a month ago; we might have to update them again in two months' time. So, as it goes. I think it's the right approach, for what it's worth.
I mean, the third -party risk management process has made tremendous changes.
Obviously, legislation will follow, but in many of the countries where people are being recruited, the jurisdiction and, shall we say, governance of the legislation is a little bit lax, which is why opportunities exist for companies to invest in cheaper staff, resourcing their requirements at will, in a flexible manner, in these countries. That is the challenge that many organisations will face in any form of supply risk management, AI especially.
Most of these countries don't have a minimum wage law at all. So even our first threshold, if we only follow the local laws, no company would even agree to go through with this.
This is a sample of some of the people who have taken the pledge, just to show you: that's another way a lot of people show support.
So that's it.
I hope that gave you some food for thought, and not too much depression.
All I'm trying to say is: let's place fairness at the heart of our digital futures by humanizing the AI supply chain, amid all the tech jargon and discourse.
Let's not forget that there are people working behind the scenes so that we have the things that we have.
That's all. Thank you so much.