Hi, everyone. My name is Aglevina Oskaita. I'm a Learning Strategist and Director of Nodes, a learning innovation consultancy.
I promised you everyday AI use cases across L&D. So here's what's going to happen in the next 15 minutes.
I'm going to first do a quick overview of the results of the study that Ronald Taylor, who's going to be on stage next, and I did a few months ago, where we surveyed a lot of people in the learning industry. We asked them how they are using AI in L&D. What are the use cases that are emerging? So I'm going to do a quick overview of that.
By quick, I actually mean quick, don't hate me. I'm going to share a link, a QR code to the full report where you're going to find more of them.
Then we're going to do a demo and we'll see how that goes.
So before I go into these AI use cases, I'd like to preface this by saying that there are AI use cases that have been around in L&D for the last few years that I'm not going to talk about.
And specifically, they are content curation, adaptive learning, talent marketplaces, and skills intelligence. So I'm not going to go into these, but they do exist and some companies use them.
What I'm going to do is look into generative AI. And because generative AI is so focused on content, that inevitably gets us into learning experience design, because the bulk of the use cases, according to our report, and not really surprisingly, are happening across the learning experience design process.
And if you think about it conceptually, there are roughly four parts. First, discovery: what is it that we do? Who is the target audience? And so on.
Then we do strategy and design: what is the solution going to look like? Then delivery, which is creating pretty much everything, the content. And then follow-up.
So roughly, what the use of AI looks like in L&D right now is this: in discovery, we have very little going on. It's mostly understanding the topic, as I said, understanding the subject matter.
There's a little bit more happening in strategy and design, which is pretty much, so what is it that we are creating, high-level design, perhaps some learning objectives, that sort of thing, and then the majority of the action is taking place in delivery, which is where people are creating a lot of content, both text and video.
And finally, here we have anything that happens after you ship, after you release the solution, whether it's face-to-face or digital, that's sadness and that's crickets, and pretty much no one is doing anything with AI here.
If we look at it step-by-step, and this is going to be the quick part, is that we have common practice and interesting use cases. Common practice are the use cases of AI at that stage that we discovered across many organizations, many people reported doing this. Interesting use cases are one-offs that I fished out of the answers that are just for inspiration.
If we think about AI-enabled discovery, most of what's being done is initial content research: pretty much prompting the AI to explain the subject matter that I'm being asked to create some training around.
More interesting use cases include, for example, writing scripts, and understanding what knowledge and skills are required, so sort of helping inform your learning design.
Then if we go to the second stage, which is AI-enabled learning strategy and design, mostly what's happening is pretty much: give me some suggestions for learning activities, what can I do here? Give me a high-level course outline. Generate an assessment, quiz questions, that sort of thing, learning designy things.
Some interesting use cases include creating examples, finding case studies, projects that people can do to actually make sure that the students, the end users, the learners, they learn the thing.
If we go to delivery, this is where a lot of stuff is going on across L&D. So unsurprisingly, we have content writing and scripting. We have translation. And then we have talking heads videos, synthetic audio, and generating images for courses.
So this is not surprising at all. Some interesting use cases were simplifying SME content, especially if it's very technical: just asking GPT, can you explain this to me so that I, as someone who's not a subject matter expert, understand it? That sort of thing.
Rewriting perhaps some procedures so that they apply to certain roles. But the bulk of what's happening with AI in L&D right now is here; I would say 80% of all the prompts in L&D are happening here.
And finally, there are literally no common practices in evaluation and follow-up. A few people, really four out of the 185 in our survey, reported one bullet point each of what you can do with AI once you have released: for example, sentiment analysis, analyzing the course design and suggesting improvements, identifying opportunities to personalize the course, and trying to understand user behavior by analyzing and interrogating data sets.
So that's pretty much where we are with everyday AI use cases.
So this is the full report, the link to the AI and L&D state of play report. You're going to find a lot more use cases there, including administrative use cases, as well as a good understanding of the common blockers and of what other L&D professionals are facing.
But before I move on, I want to make a point here. What we looked at is, I believe, representative of what's happening in the industry, and a lot of it is content creation.
On the one hand, it is understandable that when you have a tool such as generative AI, the first thing that you want to do is use it to speed up your current ways of working, your tasks, and make them faster.
However, we have a chance right now to take a step back and consider how we can use generative AI, and AI in general, to do different things in learning. Because what is the purpose of L&D?
If you ask me, the purpose of L&D is to support the business strategy, meaning that if the business wants to pivot, if it wants to throw some resources at creating a new product, a service, a process, whatever, the business needs to have the skills and the performance support to do those things.
When you start thinking about that, you realize that content has never been the purpose, although content right now pretty much rules what's happening in L&D. The purpose of L&D is to enable people with skills and with performance support. Content is only as useful as it leads to these two things.
So first of all, content has always been but a means to, say, upskill. But it can never do the job by itself. We all know that people do not learn just by consuming content.
And if we think about some of the popular e-learning interactions, such as branching scenarios, they have always been a proxy for skill development. They could never quite get there.
I do not learn to give feedback by playing with a little avatar which asks me questions and I have to respond based on a multiple choice selection of four answer options. And I don't think that using AI in that way to speed up creation of these kinds of interactions is where the power of it is.
And probably you see what I'm alluding to: one of the most powerful ways L&D can use AI is for upskilling.
So what I want to do now is I want to do a demo of what this could look like.
How many of you have created a GPT? OK, so interestingly, at least half of the room, which is amazing. So for those of you who have created a GPT, well, this is going to be at least amusing.
We're going to create a feedback GPT that creates a sandbox for me to practice giving feedback. So if I go to explore GPT, let me just increase the screen, create a GPT. And let's start with creation. What would you like to make?
And I would like to make a roleplay partner to help me practice giving feedback at work. Wow, when you're on stage, this really feels like it takes forever. You need to delay suddenly. Yeah.
OK. Sure. OK. We really need some elevator music here.
Yes, yes, it just begins with a picture. Are you happy with the picture? It's fine. Great, let's define how it will work.
How detailed do you want the feedback scenarios to be? Should they cover a wide range of workplace dynamics and roles? Specific, wide range. Please, I need to practice my manners.
The thing is that my personal learning from creating GPTs is that it starts with asking very specific but quite rudimentary questions. And then it encourages you to launch the GPT. And then that GPT just does not do what you want it to do.
And you need to write an extra prompt three times as long for it to actually work. The Azure Persuasive Feedback Goals scenarios.
Yeah. So this is before today. I created another GPT just to see what it's going to ask me. So this one is asking me very different things, by the way.
So is something happening here, or? You might want to stop it and ask it again. OK.
Oh, something's happening, finally. Perfect. OK.
So what I like doing is doing some research beforehand, so a little cheating here: giving positive feedback. While it's doing that, what I'm going to do is configure some prompts.
So what we have here is: give feedback to a peer after a meeting. The other one is: give feedback to a report in a monthly 1:1, which is excellent. Anything else? No?
Good for now. Oh really? A hiccup? Let's try again.
Oh really? Yes. So pretty much, let's just cut my losses here. Just to give you the background of what I wanted to put into this GPT, and what I put into the other one that I'm going to demo for you: I created
a list of things that it needed to do and how it needed to behave. Specifically, I told it to give me context about the situation when it starts speaking, and I gave it an example. I told it to assume the role of the person receiving the feedback in the scenario, because its instinct was to give me tips; I would start a scenario and it would still be throwing tips at me. And then I told it to give me feedback according to the STAR framework, which I gave it, and to make the scenario different every time. So what happened then is, and this is the good part, hopefully.
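For anyone who wants to try the same thing programmatically rather than in the GPT builder, those instructions boil down to a system prompt plus a message list. Here is a minimal sketch; the prompt wording, the `build_messages` helper, and the example turn are my own illustrations, not the exact instructions from the demo:

```python
# Sketch of the role-play GPT's behavior as a chat system prompt.
# The wording here paraphrases the instructions described in the talk;
# it is not the verbatim prompt used on stage.

SYSTEM_PROMPT = """You are a role-play partner for practicing giving feedback at work.
1. Start by giving me context about the scenario: who the other person is,
   what happened, and what I need to address.
2. Then assume the role of the person receiving the feedback and stay in
   character. Do not break character to give me tips mid-scenario.
3. When the conversation ends, step out of character and give me feedback
   on how I handled it, structured with the STAR framework
   (Situation, Task, Action, Result).
4. Generate a different scenario every time."""

def build_messages(user_turns):
    """Assemble a chat-completions style message list from the user's turns."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
    return messages

# Example: kick off the scenario from the demo.
msgs = build_messages([
    "Teach me to give feedback to a colleague who has been dominating meetings."
])
```

The same message list can then be passed to whichever chat API you use; the point is that the "sandbox" behavior lives entirely in a few lines of instructions.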
Okay. Well, this is not any bigger, is it now? I wonder what's going on with the internet that it is being so unwieldy. One of the things that is interesting is that as you get closer to the end of the day, the US wakes up, the server load on OpenAI becomes higher, and so you get slower response times. This happens every single day. Well, lucky me.
OK. So I asked it to teach me to give feedback to a colleague who was dominating the meeting. It gives me a scenario: a colleague named Alex who has been dominating discussions during team meetings. My role: I've noticed that during team meetings Alex dominates, it has been a recurring issue, and I need to address it with Alex privately. And it gives me some tips.
Okay. Let's begin. Cool, cool. Hey, Alex. What's up? I just wanted to talk to you real quick. Good.
So I notice you dominating these meetings. I'm getting really bad feedback. And I don't like that. Could you please stop doing that? I wasn't aware. Oh, Alex is very understanding. Help me understand better how this is happening and how it's affecting the team.
The team really hates you right now. I'm trying to be quick, because when I would give it more considerate answers, it would take a while to process, even as I was practicing the demo. Oh, it's really taken aback to hear that the team feels this way. It was never my intention. I'd like to understand more about what actions specifically... this is how I can make changes. OK, we can do that tomorrow.
Can you just stop doing that for today? If you remember, he was feeling a bit taken aback, and then we finished the conversation, and then my GPT is giving me feedback about how I handled this whole thing: what I did, the result, feedback. Going point by point: what I did well, my recommendations for improvement, be more specific, offer clear examples, that sort of thing. Encourage dialogue, which is a bit of a tough one for me. Offer solutions. OK, so this is fair and quite polite.
What I wanted to show here, and this is where the demo ends, is that because it offers me different scenarios every time, it is something that's getting closer to a sandbox. And this is something that I did in five minutes. Obviously, it's imperfect. But imagine pairing this up with a curated list of resources on feedback giving, and perhaps adding those resources as additional knowledge for the GPT, so that the feedback it gives is more in line with, as I said, the STAR framework, the BIFF framework, and so on.
So it's not just giving people content. Imagine including it as continuous practice in a blended learning program. Because right now what happens is that we have an event, then you do some work, then another event, then some work, usually some reading. So imagine using this as an integrated part of the learning journey. When you think strategically about where to put it in the learning journey, if you actually integrate it properly, not just plop it there, you think about: where does this happen? How do I follow up on it? How do I discuss how people experienced that feedback? Maybe ask them to submit that feedback so that we can all discuss it when we get together in that blended program, with reflection and so on. Then it becomes quite a powerful tool for actual skill development that people can do in a real way, and that can really enable knowledge transfer.
And obviously in this case there are additional considerations, such as the fact that right now this would have to be accessed through the GPT interface; it's not really integrated anywhere else. It can be integrated, but that requires additional work. You need to build in boundaries and test the GPT rigorously. One thing that is quite a good practice, once you release that sort of thing in the organization to some sort of piloting group, is creating a feedback mechanism, which is a fancy phrase for saying: just have an email inbox that people can send bug reports to, so you can address them and keep improving your creation. And set expectations for use: again, we all know that GPTs and generative AI are a bit unpredictable, so set the expectation that it can and might get weird. But with all of those things resolved and in place, this is something that actually promises to move us away from content.
And I would encourage you to just... consider what part it can play in your learning strategy and in how you upskill people in your organizations, instead of creating more of the content that people are already drowning in. So that's me. Sorry, Josh. I promised 15 minutes. It was longer.