One, make sure you define the real problem. Really understand what it is you're trying to accomplish and what metrics you would use to measure that. This also applies if you're implementing AI within a team process: what metric tells you it's working? Are you being more efficient? Are you saving time? Is the cost kept in check?
Two, question things, validate. Is it actually working? Look at it with a critical eye. Is it really solving that problem? Is it diverse enough that it's actually giving good, robust answers?
And three, iterate transparently. Make sure you've explained your system, reveal your processes, and test it like a game: play around with it, explore, run different scenarios. Understand that you can't just release it and let it exist there, because ultimately models do change; sometimes something that's worked one way for a very long time will shift and all of a sudden start to work another. So be curious about your own products and services that you're building, because they will change over time with the unpredictability of AI. (A sketch of what that kind of ongoing scenario testing could look like follows below.)
And that's it.
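As a rough sketch of that kind of ongoing testing, here is what re-running fixed scenarios against a model could look like. Everything in it is hypothetical: the scenarios are invented, and the fake model stands in for whatever AI call your product actually makes.

```python
# Sketch: scenario regression tests for an AI feature that may drift over time.
# The scenarios and the fake model below are invented for illustration; in a
# real product, get_answer would be your actual model or API call.

SCENARIOS = [
    {"prompt": "Summarize: the meeting moved to 3pm.", "must_contain": "3pm"},
    {"prompt": "Translate 'bonjour' to English.", "must_contain": "hello"},
]

def run_scenarios(get_answer) -> list[str]:
    """Re-run the fixed scenarios and describe any behavior shifts."""
    failures = []
    for case in SCENARIOS:
        answer = get_answer(case["prompt"]).lower()
        if case["must_contain"] not in answer:
            failures.append(
                f"{case['prompt']!r} no longer mentions {case['must_contain']!r}"
            )
    return failures  # run on a schedule; alert when this is non-empty

# Demo with a fake model standing in for the real call:
fake_model = lambda prompt: "Hello, the meeting is now at 3pm."
print(run_scenarios(fake_model))  # [] -- behavior unchanged for both scenarios
```

Run on a schedule (for example, in CI or a cron job), this turns "models change over time" into something you observe yourself rather than discover from users.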
Any questions from anybody on this topic?
I think you can hear me. Yeah, I can hear you.
Can you expand on explainable AI? Maybe with an example. How do you explain it?
That's covered later in the course too, but it's a great question, and there can be many ways to do it. So the question was: what is explainable AI?
Some of this can be done through documentation and transparency: clearly stating, for example, where in your platform AI is being used. Making it clear whether that's an actual human or AI is a component of explainable AI, because you're clearly showing where AI is being used.
Another example: let's say you're inputting information into a platform like a job recruitment tool. Clearly stating where AI is used to match you with results, or to show you which jobs are open, is important in explainable AI because it helps people understand: okay, the AI system had a place here, and it's doing this action that leads to a certain result. So I understand: this is what your AI is looking at on my profile to determine whether I'm a match for this position, in the recruitment example.
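To make that recruitment example concrete, here is a minimal sketch of a match result that carries its own explanation. The field names, scoring logic, and reasons are invented for illustration, not any real platform's API; the point is only that the result discloses it was AI-generated, which profile data was considered, and why.

```python
# Hypothetical sketch: a job match that discloses and explains itself.
from dataclasses import dataclass

@dataclass
class MatchExplanation:
    ai_generated: bool            # disclose that AI produced this result
    fields_considered: list[str]  # which profile data the system looked at
    reasons: list[str]            # human-readable reasons for the match

@dataclass
class JobMatch:
    job_title: str
    score: float
    explanation: MatchExplanation

def match_candidate(profile: dict, job: dict) -> JobMatch:
    """Score a candidate against a job and explain the result."""
    overlap = sorted(set(profile["skills"]) & set(job["required_skills"]))
    score = len(overlap) / max(len(job["required_skills"]), 1)
    return JobMatch(
        job_title=job["title"],
        score=score,
        explanation=MatchExplanation(
            ai_generated=True,
            fields_considered=["skills"],  # only what was actually used
            reasons=[f"Your profile lists {s!r}, which this role requires."
                     for s in overlap],
        ),
    )

match = match_candidate(
    {"skills": ["python", "sql"]},
    {"title": "Data Analyst", "required_skills": ["sql", "excel"]},
)
print(match.score)                # 0.5
print(match.explanation.reasons)  # ["Your profile lists 'sql', which this role requires."]
```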
It's very broad, and there's not necessarily one right way to do it, but it's committing to being transparent about where AI is being used, how that AI works, what data it's looking at in order to make a decision, and how much influence you have over opting in or out of that decision. So would there be a case, in your example, where the person would have an option? Let's say they see that they're interacting with a chatbot: would they have the option to wait for a human instead, if there's a human behind there? Is that part of the whole explainability idea?
Absolutely. I mean, the idea here is that products should be designed, AI or no AI, with consent in mind. At all points in time, we should be making sure people understand what experience they're entering, what data they're giving, how that data is stored and used, and, if there are AI decisions being made or AI systems at work, where they are and how they work. And hopefully a lot of legislation, like parts of the EU AI Act, will help force companies to be more transparent about these things.
Because honestly, a lot of companies are not super transparent, not just about whether there are algorithms, but about how those algorithms work. And that's something explainable AI really tries to encourage: making sure people really get it, that they understand what you're taking into consideration, because that's the only way you can also know whether it's working or not.
If it's a black box and it's not clear what data it's using or not using, how can you complain? How can you challenge it? How can you iterate upon it or give feedback as a user?
Any last questions? Where have you seen explainable AI at a company, as an example?
Because I'm thinking there's a lot that users get information about. I'm a lawyer, but not everyone will read the license agreement for their iPhone. So what's a good example that's not overwhelming for the user?
That is a great question. And honestly, I've not seen many great examples of people being transparent about where AI is being used.
And yeah, a lot of times they do try to hide it in legal documentation: okay, hey, we mentioned in here that we're using it. But ideally, let's take a platform like Instagram. Having more transparency, where if you tapped on an aspect of a post you could see "this is recommended for you because of this, this, and this," would be explainable AI, because you're literally being shown why the AI algorithm chose this post for you as an individual to view. Now, a lot of companies don't like to do that, because they don't really want to show you why they're showing you something, or what data they know about you or are using.
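As a sketch of that tap-to-see-why pattern, assuming made-up signal names, weights, and threshold, the explanation layer could be as simple as mapping the strongest ranking signals to user-facing reasons:

```python
# Hypothetical sketch of a "why am I seeing this?" payload for a recommended
# post. Signal names, weights, and the threshold are invented for illustration.

def explain_recommendation(signals: dict[str, float],
                           threshold: float = 0.3) -> list[str]:
    """Turn the strongest ranking signals into user-facing reasons."""
    templates = {
        "followed_topic": "You follow topics similar to this post.",
        "friend_engaged": "Someone you follow engaged with this post.",
        "watch_history": "You recently watched similar content.",
    }
    ranked = sorted(signals.items(), key=lambda item: -item[1])
    return [templates[name] for name, weight in ranked
            if weight >= threshold and name in templates]

reasons = explain_recommendation(
    {"followed_topic": 0.6, "friend_engaged": 0.4, "watch_history": 0.1}
)
print(reasons)
# ['You follow topics similar to this post.',
#  'Someone you follow engaged with this post.']
```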
For example, a lot of your phones may actually have computer vision software that is looking at all the photos you have and making tags of things, and advertisers are actually using those tags to show you content or adjust things. Technically, it's not a data privacy violation, because it is not actually taking your photos off the device. So there's an example where most organizations that use that technology are not necessarily upfront about telling you they're using it, even if it might be in the terms of service.
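To sketch the data flow being described, with classify_locally() as a made-up stand-in for a real on-device vision model: the photos are analyzed locally, and only the derived tags ever leave the device.

```python
# Sketch of the on-device tagging flow described above. classify_locally()
# is a placeholder for a real on-device model; no vendor's actual API is shown.

def classify_locally(photo_bytes: bytes) -> list[str]:
    """Placeholder for an on-device model that tags a single photo."""
    return ["beach", "dog"]  # illustrative output only

def build_interest_profile(photos: list[bytes]) -> dict[str, int]:
    """Aggregate tag counts; the photos themselves are never transmitted."""
    profile: dict[str, int] = {}
    for photo in photos:
        for tag in classify_locally(photo):
            profile[tag] = profile.get(tag, 0) + 1
    return profile  # only this summary is what would be shared onward

print(build_interest_profile([b"photo1", b"photo2"]))
# {'beach': 2, 'dog': 2}
```

The raw photos never cross the network, which is exactly why this sits in a gray zone: the data stays local, but the derived profile still travels.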
Any last questions? And I think we're out of time anyways.
So yeah, I want to thank you all for hearing my product tips, and I look forward to chatting more later.