Human-AI Gap: Ethical and Equitable Design for Good & Gains

Introduction

My name's Dr. Shanta Brant-Larsen.

I really very much take after my granddad.

Background and Personal Journey

I'm autistic and ADHD, so I'm both an architect and a builder.

Early Work in AI and Transformation

Actually, back in 2016, I built out AI chatbots across 21 countries, so I know your pain. That was before generative AI. It was very much like this. And I was completely fascinated.

I know this is going to go off in a second.

Fascinated by the fact, and the pain, of working in transformation for what was then 20 years (now it's more like 25) and by why it so often just fails, or, even when it works, seeing why it fails. Wasn't that on cue? Oh no, it's back up.

Research into Why Transformations Fail

So I undertook a second piece of doctoral research and went into companies like SVB and Schneider Electric, many different companies, to understand what was going wrong.

If I had the slides, I would have shared the depth of the research with you. But you're gonna have to listen to the story.

What the Research Covers

I'm gonna share with you what the research said, and some of the insights around how you can make gains.

And I would have shared a last slide because, like I said, I'm not just a solutioner. I'm also a builder.

From Chatbots to AI Agent Teams

I went into creating AI agent teams, so beyond the chatbots, to where we really need to go, which is actually giving cross-functional teams to small businesses or big businesses to get shit done, basically.

So yeah, hopefully that will, well, not hopefully, but I think that will go off in a second.

A Human Model of How AI Is Built

So if I take a step back, oh, we've got two women in the room. Oh my god, this might actually work.

If you kind of stand up, that would be great. I do really mean stand up.

Interactive Exercise: Revealing Bias and Exclusion

So I did a session at Cambridge, and I built a human model with them on how AI is built. And what I'm going to do is, in this human model, I am going to invite you to sit down at different stages.

And for example, if you're neurodivergent, please sit down. You don't have to come out as neurodivergent at this point if you don't want to; that's fine. That applies to all of these.

So if you are neurodivergent, please sit down. I want to invite you to sit down.

If you are LGBTQIA+, please sit down. If you come from the Global South in any way, South America, India, Africa, I invite you to sit down. And if you are female or identify as non-binary, please sit down.

This is not you. This is the human model.

I want you to look around the room and look at the difference between who's sitting down and who's still standing up. Hmm.

AI Mirrors Society’s Data

This is the model of AI at the moment, as many of you know, because you're data scientists or coders developing it and finding these loopholes in languages, because they also work across the Global South. Unfortunately, this is the perpetuation.

AI is just working off the data that we give it. So it's mirroring society.

Imagine working in Arabic, where you have 200 different dialects, right?

Clinical Trials and Data Gaps

So when we go into more sophisticated builds: for example, I was at a session with Novartis, and they're doing clinical trials. They're taking all of their data, and they're swearing at PowerPoint, because any data locked in PowerPoint, as you know, is a complete bitch. So they're doing all that, but what it actually means is that for women in clinical trials, or global majorities in clinical trials, there is a real fundamental issue.

And that's basically the crux of what I wanted you to feel today. You can all stand up. And how does that feel?

Designing for All Customers

Like, actually, if you're building AI that is for all of your customers, whether they speak Romansh or Swiss German or Italian or French, that's actually what we want to do. We've always wanted to come from a customer-design point of view; think about your agile. OK, you can sit down or stand up, whatever you want.

Human-Centered Design as a Business Imperative

So what did I do? So the first thing was really looking at the inadequacies.

The Cost of Neglecting Human Design

And actually, if you go to the research layer, there's a guy called Preece. In the research, a lot of the overspends (and I've worked for many big companies) have come from lack of human design. 45% of overspends are from lack of human design, exactly what you just modeled.

Because we think it's all about Azure licenses, or AWS, or whichever licenses. It's all about the tech. But we forget the human design.

Well, it's not just the tech.

Field Studies with Enterprises

So then I went to go and spend time with the companies to understand what they were doing.

Case Study: AXA’s Fraud Team and Motivation

And I'll take the example of AXA. So AXA Insurance, car insurance, by the way, not health insurance. They've built something amazing.

They started to look at fraud, right? And fraud can be highly turbulent, because we've had bias in fraud detection in many countries based on race and other things. So they're really looking at the core data.

What they managed to do with human design was really motivate and speak to the people and understand what they wanted from it. Can anyone guess what motivated the fraud team, before everyone got scared about their jobs? What would motivate you to hand data over to AI and shape AI? What motivates you this year?

Yeah, they wanted higher bonuses. So they said, actually, if we identify more fraud, we're going to get higher bonuses. They're like, OK, how do we do this?

They identified 10 million more in fraud, and they got higher bonuses. So that's just one example.

If only I could show you my freaking human model. I'm not even going to bother going through the slides.

Eight Human Factors: Trust, Explainability, and Positive Emotion

I identified eight human pieces of the design: things like trust and explainability, of course, but also things like positive emotion.

Aligning AI with Human Motivation and Values

If we want to get more out of AI, we need better data and we need better models.

You actually attach it to the motivation of people. A lot of it comes down to bonuses, if you look at their behaviour.

So I identified these human models.

And then, before I was lecturing at Cambridge University, I got invited to something called the International Swiss Talent Forum. Does anyone know it? It's a very old organization.

They said, can you come and work with 70 students across 34 countries?

And I said, I will if I can collect some data, because, you know, I come from a data background. And I said, yeah, I really want to test something out.

Testing AI in Complex, Cross-Country Problems

How AI starts to behave when you have complex problems across different countries, how it can deal with bias, how we can actually start to see if bias exists, and how we can start to intervene at the RAG model level. So it very much connects with Halsana's point.

And so what happened was I had people from Guatemala, Finland, Israel at the time, Denmark, basically a lot of countries. And we took a complex problem and we linked that to an ONA poly-node, an organizational network analysis node, to understand how they were making decisions with the AI as a teammate. I'll tell you why we did that in a second.

Intervening at the Vector Database Level

And it was fascinating that some of them didn't even bring AI into the equation, and others did, in terms of solving the complex problems. But what happened was they couldn't remove the bias unless they intervened at the vector database, at RAG level one: not just on the what, by feeding in, actually, it would be this in Guatemala, but also around the how.

Think about you at whichever company you're at. It's not just what you do, it's how you do it.

Embedding Values to Reduce Harm

So we started to feed it sustainable development goals to get it to behave in a way that reduced inequity.

So one of the things you might want to consider in your vector database, as you go out to customers, is values. It really increases what you're getting back and decreases the chance of any harm, if you're looking at the EU AI Act. By the way, I'm the European chair for AI governance, so I have this EU AI Act piece in my head in terms of making sure that you do no harm.
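To make the idea concrete, here is a minimal sketch of what "values in the vector database" could look like: a toy in-memory store whose entries carry values tags (e.g. SDG labels), with retrieval nudging value-aligned context up the ranking. The store contents, the toy embeddings, and the boost heuristic are all illustrative assumptions, not the actual build described in the talk.

```python
import math

def cosine(a, b):
    # Plain cosine similarity over toy embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Each record: (text, toy embedding, values tags). Hypothetical data.
STORE = [
    ("Local guidance for Guatemala", [0.9, 0.1, 0.0], {"sdg10_reduced_inequalities"}),
    ("Generic global guidance", [0.8, 0.2, 0.0], set()),
    ("How-to: inclusive facilitation", [0.1, 0.9, 0.0], {"sdg10_reduced_inequalities"}),
]

def retrieve(query_vec, k=2, values_boost=0.1, required=None):
    """Rank by similarity, boosting entries tagged with the values we care about."""
    scored = []
    for text, vec, tags in STORE:
        score = cosine(query_vec, vec)
        if required and tags & required:
            score += values_boost  # nudge value-aligned context up the ranking
        scored.append((score, text))
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]

print(retrieve([1.0, 0.0, 0.0], required={"sdg10_reduced_inequalities"}))
# → ['Local guidance for Guatemala', 'Generic global guidance']
```

In a real RAG build the same pattern applies at ingestion time: tag each chunk with the values metadata, then filter or re-rank on those tags before the context reaches the model.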

Business Gains Across Product, Customer, and R&D

So all of these things are really important, essentially, in making the human design upstream. Why? From a business gains point of view, there are four things that came about.

Product, customer, R&D. And now I have to remember the fourth off the top of my head; menopause brain. Oh, it'll come to me.

But these four areas mean that you can get more value from your customer. You can do more sales. And that's what I started to go into.

And make sure I didn't miss anything out. You can see it. OK.

So the thing that I wanted to say, I'll come on to why it was important.

Coalition Situational Awareness

After this test that we did, the other thing that came up was coalition situational awareness. Coalition situational awareness.

It seems like there's lots of AI people in the room. Has anyone else come up across that in their research or in their work?

That's right, but do you know what it is? Coalition situational awareness, or team awareness.

Okay, so I think we all know, or we should, I hope, that some of the most advanced research here comes from the military.

From Military Research to Organizational Silos

If you look at the awful stuff that's happening now. Basically, the missing gap they are trying to close is increasing situational awareness between humans and machines.

Yeah, so this piece started to come in when I was putting it into an organizational network analysis: how do we have situational awareness, regardless of whether you're a human or a machine? And think about the power of that in organizations, because the thing that cripples organizations is silos.

Knowledge Flow and ONA

Knowledge. That's the key reason you were doing it: the older people had more knowledge than the younger people. But that also happens across departments, right?
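As a toy illustration of the silo point, one simple ONA-style metric is the share of communication ties that cross department boundaries: a low share suggests knowledge is trapped inside silos. The people, departments, and edges below are invented for the sketch; this is one crude indicator, not the speaker's actual model.

```python
# Hypothetical who-talks-to-whom network with department labels.
dept = {"ana": "R&D", "ben": "R&D", "cho": "Sales", "dia": "Sales", "eli": "Ops"}
edges = [("ana", "ben"), ("cho", "dia"), ("ana", "cho"), ("ben", "eli")]

def cross_dept_share(edges, dept):
    """Fraction of ties that cross a department boundary (silo indicator)."""
    if not edges:
        return 0.0
    cross = sum(1 for a, b in edges if dept[a] != dept[b])
    return cross / len(edges)

print(cross_dept_share(edges, dept))  # 2 of 4 ties cross departments → 0.5
```

Real ONA tooling goes much further (centrality, brokers, knowledge-flow direction), but even this ratio makes silos visible, and it works the same whether a node is a human or an AI teammate.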

So hence the advancement that I was looking at later.

I'm going to skip that, because I already mentioned it.

Product Safety and Biased Data

I mean, from products, we already know this, right? We've known since the 1950s that women are more likely to be harmed or die in cars. But we don't fix it.

And then we're going to give an AI CAD tools to design something that will kill us, because we're not fixing the data. So, you don't want your wife, your sisters, or your daughters to die.

I love it. You're all agreeing. That's fine.

The model, the ONA. I have more complex ONAs now.

Is there anything I've missed there? No. See, I do know my shit.

From Researcher to Builder

The piece that it then came onto was where I work and the space I'm in now.

So I don't use Azure because, well, I can, I'm just not a corporate-corporate person, as you can tell. I'm more of an n8n, Make, CrewAI type of person.

And for me, this is where we go back to the solutioner, the builder.

Three Principles by Design: Ethics, Equity, Environment

From the builder side, I started to want to see how, by design, you put in these three principles: ethics, equity, and environment. Ethics, from a privacy point of view, is really quite important. I heard you say that when you were talking about Azure.

But with smaller companies, I often get: we can just go and scrape this. And I'm like, AI governance? Oh no, you can't.

Ethics and AI Governance in Practice

Maybe in Switzerland or the UK you can have one cold reach-out. But in the EU, you can't. The German courts have 30,000 cases at the moment.

This is just one of the builds, a lead-generation build. So how do you do that ethically?

And then, from a human perspective, how do you design from an equity point of view?

Designing for Equity with RAG and Memory

If you are designing product, or if you are in R&D doing clinical trials, how do you make this human? I mean, this one's a very simple memory; yours wouldn't be a simple memory. But how do you have these RAG databases that literally level up the data to reduce bias?

Environmental Impact and Model Optimization

And then the final piece is environment. And I love what you said about optimizing: do you really need a 5, or do you need a 3.5?

But the other thing I build in for my companies is actually calculating how much we're using, so that we can offset it.
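A minimal sketch of that usage-to-offset calculation might look like the following. The per-1,000-token energy figures and the grid carbon intensity are placeholder assumptions for illustration, not published numbers; in practice you would substitute measured or vendor-provided values (or use a library such as CodeCarbon).

```python
# Assumed energy cost per 1,000 tokens, in watt-hours (illustrative only).
WH_PER_1K_TOKENS = {
    "large-model": 3.0,
    "small-model": 0.3,
}
GRID_G_CO2_PER_KWH = 400  # assumed grid carbon intensity (g CO2e per kWh)

usage_log = []

def log_call(model, tokens):
    """Record one model call so usage can be totalled later."""
    usage_log.append((model, tokens))

def estimated_co2_grams():
    """Convert logged token usage to an estimated CO2e figure for offsetting."""
    wh = sum(WH_PER_1K_TOKENS[m] * t / 1000 for m, t in usage_log)
    return wh / 1000 * GRID_G_CO2_PER_KWH

log_call("large-model", 20_000)   # 20k tokens on the big model
log_call("small-model", 100_000)  # 100k tokens on the optimized model

print(round(estimated_co2_grams(), 2))  # → 36.0
```

The same log also answers the optimization question above: if most tokens can run on the small model, the estimate (and the offset bill) drops by an order of magnitude.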

So having these three fundamental things as part of whatever you build is the key to the execution.

Operationalizing AI Governance

And then there's the model, which is pointless to show because the screen's going to go off, of what I do as the AI governance chair. See, it's going to go off. But it came back up at the right moment.

And I hope that was adaptable enough for you, Reggie. You were like, I knew you were going to ad lib it.

Conclusion and Q&A

Any questions?
