The basics of Gen AI Ethics - Guillermo Miranda

Introduction

Look, a little introduction.

My Journey from Law to Technology

Let me confess: I am a lawyer. But I've been redeeming myself from that mistake for the last 20-plus years. I even went all the way to Deusto for an MBA to keep redeeming myself.

And then I worked at IBM, at Boeing. And in the last couple of years I have been doing more of a set of advisory roles. They call it a portfolio life.

Startup Experiences and Learning

And in that, I am learning a lot about startups. We just launched a startup in London called Atlas Copilot. And it has exactly the RAG system that they explained.
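For readers unfamiliar with the term, RAG (retrieval-augmented generation) can be sketched in a few lines. This is a hypothetical, deliberately naive illustration: keyword overlap stands in for the vector search a real system would use, and the function names are mine, not the product's.

```python
# Naive sketch of a RAG loop: retrieve the documents most relevant to
# a question, then put them into the prompt so the model answers from
# that context. Real systems use vector embeddings, not word overlap.

def score(question, doc):
    """Count how many question words appear in the document."""
    q_words = set(question.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words)

def retrieve(question, docs, k=1):
    """Return the k documents with the highest overlap score."""
    return sorted(docs, key=lambda d: score(question, d), reverse=True)[:k]

def build_prompt(question, docs):
    """Assemble the prompt the language model would receive."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The point is only the shape of the pipeline: retrieval first, generation grounded in what was retrieved.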

Exploring the Boundaries of Technology

And the idea today is to talk a little about the boundaries, how we are going to use this technology.

Setting the Stage with Information Points

And let me start with three points of information.

One, how many of you know Qantas? Qantas? Qantas? Yeah, Qantas.

Australian airline? Yep.

The Iberia of Australia. Well, Air Europa will be upset that I call Qantas the Iberia of Australia.

Qantas AI Booking Error: A Case Study

So Qantas last week got a fine from the Federal Aviation Commission because an AI bot made a mistake with a passenger and booked this person on a different flight. So this passenger arrived at the airport: oh, this is Bangladesh, what am I doing here? So boom, fine.

IBM's Workforce Reduction Through AI

Second point of information: like eight or nine months ago, the CEO of IBM, Arvind Krishna, gave an interview. And he said, oh yeah, the AI technology is fantastic. We are going to reduce at least 10,000 people.

We will fire 10,000 people and be more productive. Immediately, two things happened. All the media started to say, how can you say that, that you are firing people, et cetera. The works council in Germany immediately filed a preventive action against IBM Deutschland.

And then the IBM stock went up on those declarations as well.

AI and Automation in Everyday Life

Third point of information. It actually happened last weekend.

We were looking for a new Peruvian restaurant near the Tetuán area. We found it. But we had to take a second taxi to get there.

It was a heavy day for some reason, a demonstration or whatever, and we got talking with the taxi driver. I don't know how, but we landed on ChatGPT, or artificial intelligence in a broader sense. And he clearly said, look.

I am very clear about my future. I don't know if Señora Ayuso will permit this, but my future is: I'm going to hire three ChatGPTs, put them to drive my cars, and I will control this from home with good air conditioning, so I am multiplying my income. And I have seen, and he mentioned a couple of YouTube videos, the new autonomous service at the Singapore airport.

How many of you have tried the new fully autonomous service in Singapore? Okay, next time in Singapore, go to the taxi rank, and there is a line that goes to the no-driver taxis. It works. You cannot get out of a certain area, but it's fully autonomous there. And it's standard.

So this guy was very clear about what the future was. So in this context of things that are happening in society, the question is how we manage what we are going to do with this capability, this technology, this science.

The Presidio Principles

And from different conversations and different things, there is one document that was prepared about six months ago, under the sponsorship of the World Economic Forum, called the Presidio Principles. Have any of you heard of it? OK.

So the Presidio Principles: a group of large technology providers got together in San Francisco, in an area that is called the Presidio. Remember, all the names in California sound very Spanish, and the Americans don't know what they mean. So they say, I'm going to Presidio.

So they got together in a conference center, in the Presidio Center, and they put together a series of reflections, trying to have clarity on the idea that it is better to self-regulate than to wait for governments to impose regulations. And by the way, as a point of information, Meloni tried to put a ban on ChatGPT in Italy. Any Italians here? Well, the ban has not succeeded yet.

But the justification of the Italian government is that the companies have no permission from the Italian citizens to use their information to train the AI models. And then, well, this is more of an anecdote, not a point of information: the prime minister, the president of the government in Spain, two weeks ago made a very nice announcement with the president of Microsoft and said, we are going to create LLMs in Spanish, but real Spanish. And then somebody says, well, probably you don't understand what an LLM is, because you don't need to do it in Spanish. But nobody dared to ask that question at the announcement. They are not going to do anything similar to that, but the announcement was there.

Key Aspects of the Presidio Principles

In those principles, there are four things that I think are important to remember. And we can debate them, but the first thing that is very clear is: what is the purpose of AI? Here it's important to have what we sometimes call extreme clarity. The purpose of AI is to augment human capabilities.

So if all the actors in the market agree that the purpose of AI is to augment human capabilities, it's a good directional force. Why? Because you can use AI for many things that actually have nothing to do with augmenting human capabilities. But if we directionally agree on why we are exploiting these new technologies, to augment human capabilities, then we come back to our taxi driver in Tetuán who wanted to have three ChatGPTs managing three different cars from home, with nice air conditioning.

He says he already proved with a joystick that he can move his son's Tesla up and down. I don't know how he did it, but that's progress.

So first thing, clarity: what is the purpose of AI? The second thing that is very important in this context is that when we create new technology, we need to have a sense of explainability. A black box will not do the trick. Oh yes, you put something in here and something comes out? No, because you don't know how everything is being processed. So when you create new technology, having a basis of explainability is very important in order to manage how this technology progresses.

Even with the hallucinations that we get from the models from time to time, there is an explainability. There is a mathematical sequence that creates the output. In the most simple way to explain it: what is an LLM model?

It's a mathematical sequence that predicts the next word, and based on a corpus of information it is able to have semi-conversational capabilities, or fully conversational capabilities in many cases. Explainability.
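The "predict the next word" idea can be made concrete with a toy model. This is a deliberately simplified sketch of my own, not how a real LLM is built: a bigram counter over a tiny corpus, where real models use neural networks over billions of tokens, but the generation loop has the same shape.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count, for each word in a tiny corpus,
# which word most often follows it, then generate text greedily.

corpus = "the purpose of ai is to augment human capabilities".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

def generate(start, max_words=8):
    """Repeatedly predict the next word, starting from `start`."""
    words = [start]
    while len(words) < max_words:
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)
```

The explainability point is that even here every output word traces back to a countable, inspectable rule; scale changes, the principle does not.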

The purpose, explainability. The third topic that was discussed there, and is core to the conversation, is data. And who owns the data?

Because we already have all this precedent with the social media platforms: you don't own your own data. Actually, everybody that still uses Facebook, and I will not ask who uses Facebook because that would be inappropriate, even if you have not read the terms from many years ago, you are giving away your data.

And Instagram is exactly the same. I have not read TikTok's terms, but they probably say the same: you are giving away your data.

And when you have much more complex models, like Clarity AI, which is building a whole sustainability framework, who owns the data? That is a very important thing to discuss up front. And finally, the final point that they agreed is important in terms of guardrails is full disclosure of non-human interaction.

What it means is: don't fake the bots. So if this is a bot, be up front. Hello, I am Diego.

I am an automatic service machine for Telefonica. We are going to reconnect your service in 10 minutes, maybe 15. I don't know.

But it starts with: hello, I am Diego, I am a service engine. That disclosure is very important because it puts the interaction in context.

Not so long ago, actually, it was three years ago. There is a company called LivePerson.

And this company has the ability to switch a conversation between the chatbot and people. So all the easy questions go to the chatbot first, and then it switches. At the time I was working for IBM, and we did an integration with Watson, and one of the first things we asked for was: the moment that we switch, disclose that.

Hello, I'm here to help you. I want to automate your ticket. Oh, I cannot help you anymore.

Let me pass you to a live agent. Let me pass you to a person. But the full identification that this is an interaction with an automated entity is very important, to avoid confusion. Or maybe the guy from the Qantas flight would have said: oh, I'm not sure what this bot did for me; is this the right ticket?

But let's have clarity. What is the purpose of AI? Extremely important.

Why? To augment human capabilities. Second, explainability.

How it works, even in a basic way. Third, the question of who owns the data. Very important.

And finally, disclose where there is no human interaction. How will this work?

Balancing Innovation and Governance

On one side, you have the more traditional thinking. And the traditional thinking is: oh, we will put a governance system in place. Or we will have a regulation.

In the end, what is going to happen is that this will play out more in prompt engineering and interactions. And this is where reality will continue to evolve.

So put these guardrails in place wherever you are experimenting with the technology, launching a new company, or discussing this. Having that clarity is going to be great for seeing how we get human superpowers, and are able to drive three taxis, do many things in life.

Sharing Knowledge and Encouraging Dialogue

I will share the principles of Presidio with Carmen.

But any questions, any comments, anything except

Legislation and Technological Innovation

Are you talking about the need for clarity when coming up with these as one of the principles? When you are creating new things for the future, coming from, yes, I confess, I was a lawyer, I'm still a lawyer, I guess, legislation always goes way behind reality.

What are your thoughts on Elon Musk's issue with OpenAI not being open as a company and for humanity? Elon Musk was one of the original founders of OpenAI, when it was a nonprofit. The moment that they became for-profit, he left. And he said, I have the principle that this should not be for profit.

Do I have an opinion? Yes, I have an opinion. And you know the history.

Seeing the success and having the recent incident with , et cetera, he wanted to reaffirm his persona in the middle of the discussion. People don't necessarily know this, but Sam Altman has zero ownership of OpenAI. He's the CEO, but he does not have a single share of the company. So he only runs the company; he has zero direct interest in the company.

Yeah, this is a voluntary declaration. The sponsor was the World Economic Forum and there is no enforcement. So it's important to have a voluntary framework to start shaping reality. And then legislation will catch up, will create contradictions, and then eventually will facilitate things.

And innovation really happens when there are no constraints and there is good faith. The best example, probably globally, is something that happened in Kenya 15 years ago. They created a money transfer system by chance. Because there was no legislation, somebody at Safaricom said, why not?

I have to send money to my mama in Mombasa. It started to happen, and now 55% of the GDP in Kenya is transacted over mobile money, without smartphones or anything. This was 15 years ago.

Any other questions?

AI and Its Societal Impact

What happens when connecting, for example, AI to ? So I think that part of what you do in terms of guardrails for technology is to understand what limits society puts.

In the end, if you take this from 30,000 feet, the military is a system of coercion in society, in order, first, to maintain civil order and, second, to keep your neighbors at the borders. So the principle of the military is coercion.

The Military Aspect of Technological Development

Ideally you don't use it, and it's just a psychological force, like the Swiss; or you use it, like is happening in many places in the world; and then you try to automate it as much as possible. And it's a contradiction, because many of the things that we enjoy today as a society were created with a military purpose. The Internet was created by DARPA, and DARPA was funded by the U.S. Army.

So we need to have more of a philosophical, or principled, conversation. I just rationalized for you that yes, there is a purpose for the military, and yes, there is coercion.

I understand. Let me consider how much I can say. When I was working for Boeing, I had access to something called Phantom Works. And Phantom Works is the most advanced technology of the US military. I was not an engineer or anything; I was the learning and skills person getting the right people into Phantom Works. But there was a code, ethical rails for what you can do, and it was all about interconnectivity to avoid mistakes.

So the big thing about, for instance, the F-35, which is the most advanced military plane in the world, is its capability of software interconnectivity: it cannot go rogue. It has to be part of a central command. Now, it depends who you put behind the shooting orders in the central command. You can have Trump. You can have Biden. You can have, I don't know, whoever is next.

Instilling Common Sense in AI

How would you introduce or put common sense inside artificial intelligence? How do you introduce common sense into artificial intelligence. It depends on prompting. We just had a demonstration of how you put common sense into ChatGPT, because you are prompting. I think that the whole prompting art is going to be like a parallel area of software engineering, one that does not follow mathematical rules, because you are talking about models that behave in a self-assertive mode.
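One common way "common sense via prompting" is done in practice is through a system prompt prepended to every request. This is a hypothetical sketch: the rules and function names are mine, the message format mirrors common chat-completion APIs, and no real API is called here; the code only assembles the prompt.

```python
# Guardrails expressed as a system prompt: every user request is
# wrapped with standing instructions before it reaches the model.

COMMON_SENSE_RULES = (
    "You are a customer-service assistant. "
    "Disclose that you are an automated system at the start of the conversation. "
    "If you are not sure of an answer, say so instead of guessing. "
    "Never confirm a booking you cannot verify."
)

def build_messages(user_request):
    """Assemble the message list that would be sent to a chat model."""
    return [
        {"role": "system", "content": COMMON_SENSE_RULES},
        {"role": "user", "content": user_request},
    ]
```

Note how the rules echo the talk's own principles: disclosure of non-human interaction, and not booking a Qantas passenger to Bangladesh by guessing.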

Conclusion

The pizzas, the beer. Thank you.
