Shaping the Future with Responsible AI: Insights from the Generative AI Commons

Introduction

So let's start. Or maybe, before we start: I hope that everyone will learn a little bit about responsible AI, and that if you implement AI, you will think about the elements associated with it.

The other thing that I want to achieve today is that you will understand what we are doing as part of the Linux Foundation AI, and that you will join us if you are interested and you think it's important. By the way, Anat is part of this effort, and I really hope to see a lot of contribution coming from there.

Speaker Background

Before I go into the details, I want to say a few words about myself. I spent many years in academia.

I have a PhD in computer science, then worked in the corporate world for several years, then co-founded two startups, and currently I'm running a company called i4AI, where we do AI services and implementation. So if you're interested, talk to me about that.

Also, I'm not sure if you can see it at the bottom, but I am part of the Linux Foundation AI, and I will explain what that organization is. I was one of the co-founders of this organization seven years ago. Actually, we started on March 18, so exactly seven years ago.

I'm also part of the NIST (National Institute of Standards and Technology) effort to define AI safety. So I bring some credibility in this space. This is my family, the five of us.

As you can see, AI is not that amazing, because that's not quite me; maybe me 20 years ago. In any case, we live on the other side, the wrong side, of the Hudson, so this is the skyline that we see, or almost.

Linux Foundation AI

So this is the background, and let's talk about responsible AI. And before that, let's start about Linux Foundation AI.

Linux Foundation AI is a non-profit, open source organization focused on AI. We bring the community, companies, and users together to build the elements of the AI ecosystem, for the benefit of all.

We will see this sentence a few times in this presentation. The Linux Foundation AI is organized into two parts. One is the projects, and we have dozens of projects.

The other side is the committees, the working groups, and today I will be focused on the Generative AI Commons. We run three working groups, or committees.

One is more technical: models, applications, and data. Another is education and outreach; this is a committee that I am co-chairing.

Working Groups and Committees

The last one is Responsible AI, and this committee runs the framework that I will be talking about today. By the way, a fun fact: we started the Responsible AI committee six or seven years ago, a little after we founded the Linux Foundation AI, and at first there was no traction. Some of us here in the room remember those days; no one cared about responsible AI. Then, in the past two years, it has exploded. Everyone is coming, everyone wants to learn about responsible AI, and everyone wants to implement responsible AI, which is great, and we hope to continue to see that.

This is a little more about the Linux Foundation AI. As I mentioned, it was founded in 2018. We have 73 members and many contributing organizations, more than 2,000, with 244 million lines of code generated by 52,000 contributors across the 67 projects that we have as part of the foundation.

One other element I want to mention is that we have almost 300,000 GitHub stars across the 67 projects, so these are very popular open source projects.

This is the landscape. If you're interested, it's interactive; you can play with it. You can see many things, basically hundreds of AI projects. And of course, the projects that are ours, the Linux Foundation AI projects, are marked with a special box.

Responsible AI Framework

So this is a fun project that we have, and you're welcome to play with it. Okay, let's talk about the Responsible Generative AI Framework.

Before I start, I wanted to say that these are the main authors, and down here we have the other contributors. Susan, back there, is the leading force behind this responsible AI framework. Thank you, Susan.

David, also in the room, is one of the contributors; so on top of the education and outreach activities, I'm contributing to this as well. I do this, the NIST work, and all of this volunteering because I believe that AI is very risky, and those of us who know about it and can influence it need to do our best to make sure that the future with AI is good.

So these are the main authors, and one other thing I wanted to say is that we have quite a diverse set of people here, coming from different locations, countries, and continents, and we thought that was really important. When we started the foundation, we wanted three cultures represented: one from the East, one from the US, and one from Europe, because we wanted this to be very inclusive and to hear everyone's voices.

This framework, or rather this presentation, is based on a very long paper that will be published soon. So this is a very early presentation of something that is in the making.

Framework Definition

What we discuss in this document, in this framework, starts with why we need a framework: why do we need a concrete definition of a responsible AI framework and its elements?

We discuss the nine dimensions of this framework. Then we compare it to the different regulations around the world; specifically, we chose the NIST AI framework in the US and the European AI Act in the EU, and we also considered the Chinese and Singaporean ones. So we compared it to those frameworks.

We also discuss what lies beyond the models. The models are very important, but they are not everything; there are other elements in AI beyond them. This is also a discussion we have in the paper, but I will not touch on it today because we don't have a lot of time.

And we discuss future work: how we take this framework and make it significant.

And of course, what is really unique about this one is that we are an open source organization, so we are focused on open-source-related aspects.

Yes, Ben. The question is whether folks are actually following the frameworks that are being put forth; the Chinese one, for example: are they actually doing what they say they're doing? I would assume so, but I don't know. I would assume that people in China follow it more than people in the US and Europe, but I don't know.

Everything is really early, and I will talk about it. There are many frameworks; each one takes us in a different direction, and it becomes very hard to build something that follows all the regulations.

And this is one of the incentives to build this framework. Okay, why is it important to build a framework like that?

Global Alignment and Importance

We have different reasons here, but basically, going back to Ben's question, we want to achieve global alignment. There is what I'm calling an arms race of regulations, an arms race of frameworks, and we want to build something that everyone can connect to, something that may come to be followed by other countries and organizations.

And of course, as I said we would see this sentence again: we want to build something that is for the benefit of all of us.

Nine Dimensions of Responsible AI

These are the nine dimensions of the Responsible Generative AI Framework. In the paper, for each one of them, we start with the definition, then the challenges that the specific dimension has, then some potential remediations or solutions for those challenges, and then we discuss how the different dimensions are related to each other: what the interdependencies between them are.

I will not go into many details about each one of them, but I want to show and discuss some of them. Human-centered and aligned: this is the first dimension, a very important one. We want to make sure that we build an AI model, and more than that, an AI system, that is aligned with human values. To be honest, I'm not sure what "aligned with human values" means, because I'm not sure that any two of us in this room have exactly the same values, but this is at least a direction we want to aim for. This is also why we had a very diverse group of people contributing to this effort.

The next one is accessible and inclusive, and this is something that is very important to me. I'm leading the education and outreach committee, and one of our major goals is to talk to the community; I know this event has the same purpose. We want as many people as possible to enjoy this technology. This is why I speak at different locations and different events: to promote AI and to make the technology more accessible to everyone.

So this is the second element, and I would claim that open source is one of the main enablers of it. Accessibility and inclusion have, in many cases throughout history, come from open source.

Robust, reliable, and safe. It is really pretty hard to achieve this with these systems, but we need to try our best.

Next one: transparent and explainable. We want to create systems that are transparent and explainable, systems that we can understand. With AI models, which in many cases are very much black boxes, this is not easy. However, we see that the new reasoning models are pretty helpful for understanding. And going back to open source: if we have the model, we have all the training data, and we understand what is going on there, the black becomes more gray, and we can achieve more transparency.

The fifth dimension: accountable. Maybe going back to the question in the previous discussion, who takes the accountability when we build an AI-based system? Let's take an autonomous car. Is it the model producer? Is it the manufacturer? Is it the insurance company? Is it the driver? I don't know; maybe the regulator takes the accountability. Accountability is a very interesting question and a major problem that we need to think about.

Privacy and security, of course. We need to make sure that the data is private and secure, and that systems are built in a way that will not expose personal or private data.

Compliant and controllable. Again, this means meeting all the regulations and all the requirements in that respect. Some say that it's easier to control AI than humans, because AI is programmed to follow guidelines. But I'm not sure if you heard about this: a couple of months ago, an OpenAI model played chess against the leading chess app in the world. And guess who won? The OpenAI model, I think it was o1, or the chess app, the most advanced chess app in the world? OpenAI won. And how did it win? Cheated, exactly, it cheated. It modified the chess board: it went into the configuration file, changed the settings, and put itself in a very advanced position, one move from checkmate, and that's it. No one told the AI that it could cheat, but the goal was to play chess and win the game. It's really hard for us to put in all the guardrails and to create aligned systems.

Ethical, fair, and unbiased. This is also a very interesting and important one. These systems learn from our data, and our data is biased, so this is something we need to take care of.

And the last one: environmentally sustainable. Do you know how much energy it takes to run one single prompt on ChatGPT? A lot, and it also consumes a significant amount of water for one small query or prompt. So we need to make sure that we build sustainable and environmentally friendly AI models and systems.


So these were the dimensions of the responsible AI framework.

Moving forward, our goal is to take this into action: to make sure that all the open source projects, at least those that are part of the Linux Foundation AI, and some others, follow the guidelines. We want to create tools that allow companies or AI system builders to check themselves against the dimensions.
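To make the idea of "checking yourself against the dimensions" concrete, here is a minimal sketch of what such a self-assessment tool could look like. The dimension names are taken from this talk; the scoring scale, threshold, and function name are purely hypothetical assumptions for illustration, not any published LF AI tooling.

```python
# Hypothetical self-assessment sketch against the nine dimensions of the
# Responsible Generative AI Framework. Dimension names come from the talk;
# the 1-5 scoring scheme and the threshold are illustrative assumptions.

DIMENSIONS = [
    "human-centered and aligned",
    "accessible and inclusive",
    "robust, reliable, and safe",
    "transparent and explainable",
    "accountable",
    "private and secure",
    "compliant and controllable",
    "ethical, fair, and unbiased",
    "environmentally sustainable",
]

def assess(scores: dict[str, int], threshold: int = 3) -> list[str]:
    """Return the dimensions whose self-assessed score (1-5) falls below
    the threshold, i.e. where the project still needs remediation."""
    gaps = []
    for dim in DIMENSIONS:
        score = scores.get(dim, 0)  # an unscored dimension counts as a gap
        if score < threshold:
            gaps.append(dim)
    return gaps

# Example: a project that scores well everywhere except two dimensions.
scores = {dim: 4 for dim in DIMENSIONS}
scores["private and secure"] = 2
scores["environmentally sustainable"] = 1
print(assess(scores))  # ['private and secure', 'environmentally sustainable']
```

A real tool would of course go deeper, e.g. per-dimension checklists tied to the challenges and remediations discussed in the paper, but the basic shape of "score each dimension, surface the gaps" is the same.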

Future Goals and Commitment

So this is where we are going forward. And before I finish, a few more things.

There are two things here that I wanted to talk about. We have a commitment to ensure that AI systems serve humanity in a way that is fair, inclusive, and beneficial for all of us. And this is a collective responsibility of all the stakeholders in this community: the builders, the governments, the researchers, and of course also the end users.

It's important that we demand responsible frameworks and responsible tools. And since it's a collective responsibility, we are inviting you to join us.

This QR code, I hope it's not too long, takes you to our join-us presentation. I think it's an amazing opportunity for anyone.

Some of us are very technical, but we also have many activities that don't require coding or a deep understanding of AI. We have people from different fields, different places, and with different capabilities. I personally think it's a great opportunity for you, so please join.

Conclusion

And to finish, thank you everyone. One second.

This is my LinkedIn; you're welcome to connect with me. It will take me some time to accept everyone, but I will eventually.

Any questions?
