Your Brain on Gen AI: Insights from the Science of Learning

Introduction

I want to talk about your brain on AI and the kinds of things we can do to ensure that we're using it in a productive way for learning. I wanted to title it Brains, Bots, and Bullshit, but Josh was like, how about we go with something more professional? So I threw that on there and we'll get around to what that actually means as we get into it.

Audience Interaction

So I'm curious, who's recently used generative AI to learn something? (Hands raise.) OK, so many of you.

I want you to talk to somebody close to you. This is a meet up. I want you to share a story of a time that you have used generative AI to try and learn something.

Make a friend. You have two minutes.

All right. We will keep this conversation going.

Examples of Learning with AI

I want somebody to shout out something they learned or learned that somebody else learned. Yeah, go. REST programming. REST programming.

Okay, what else? Yep. Postgres database. Postgres database.

Okay. And someone learned how to cook something. You can do all sorts of things, right?

You can ask it for information. And what I will hopefully convince you in the next 10 minutes or so is that you probably didn't do it as well as you should have. But there's hope, and you can do it better.

Personal Journey and Work at Minerva

And the reason I am optimistic is because I was a very traditional academic until I met some crazy people who were going to build a university from scratch. So for the last 10 years, I've been working with a group called Minerva. We built Minerva University.

It has a really unique active education model and a global rotation. And when that got its own accreditation in 2021, a group of us started working with other universities around the world to help them rethink their curricula and their teaching practices so that college is more effective, more experiential, and more relevant for the 21st century.

And in higher ed, everyone is freaking out about AI. They're like, my students aren't learning anymore. And my answer is usually they weren't learning before, right? So if your practices are not resistant to AI, you are doing something wrong.

Common Fallacies in Learning

And I'm going to start to illustrate that with a couple of different fallacies that we try to address. So we talk about these a lot at Minerva in the work that we do at our own university, as well as with our partners.

Exposure Doesn't Equal Knowledge

And the first one feels kind of obvious. The first one is that exposure doesn't equal knowledge.

We know this based on the science of learning. If you listen to something, it's not sticky. The times lectures work are when experts talk to other experts, when they can actually already understand the level somebody is speaking at. Usually you are not having that conversation. You're working with a novice. And so expecting them to read something passively, expecting them to listen to you stand on a stage, it's not getting in their heads.

Knowledge Doesn't Equal Learning

The second fallacy we talk a lot about is the fact that knowledge doesn't actually equal learning. And this one is weird.

We think that if we have knowledge, it is something that we now own, and we assume that we have learned. But if you think about the real definition of learning, which is taking in information in a way that actually changes your behavior so you do something differently in the world, just knowing something isn't enough to actually say you've learned.

And so my CEO likes to illustrate this by saying he bets that at some point in your life you have sat in a seat quite a bit like this one, listened to someone give a talk, and found them very boring. And it was very easy for you to listen to me and say, oh, she's doing this wrong, or I wish her slides were still there, or all of these things that you can critique really well. But all of you have probably gotten up in front of other people and been boring too. So just knowing what something is or what it looks like doesn't mean you've learned enough to apply it.

The Role of AI in Learning

So this is an AI thing. This is not just a learning thing.

And my slides are gone forever.

Generative AI and Quality of Work

So essentially what they find is that consultants produce about 40% higher-quality work when they use generative AI. They actually ran a really nice experiment: some people didn't have it, some people had it, and some people had it with an explanation of how to use it well.

And so what you see here is not my data. It's from a group across lots of different schools, some at HBS. This is the distribution of work that was done without generative AI.

And as you can see, if you get a tool, you become better. This is actually quality. So they're turning in better work. That's the headline.

The part of the paper that nobody talks about much is that it's disproportionately helping people who weren't good to begin with. So if we break down their data and look at the bottom half of participants in terms of quality, they're getting quite a big boost. And people who were already good are getting a little bit of one, but it's not as large.

So AI can help us do things better and turn in higher quality work. But if we're already good, it's not helping us that much.

Pitfalls of Misusing AI

The finding that nobody talks about is that it also actively hurt people who used it on a task that was designed so AI wouldn't be helpful. The authors were really sneaky: they came up with something that would give the AI a hard time, where it would have the wrong answers.

What you find there is that if you don't have access to AI, you are about 84% accurate. But once you're given misinformation from the AI, it actually hurts your performance. You turn in worse work.

So it really is about how you're using this tool and whether or not it can give you reliable information.

AI in Creative Collaboration

There's another more recent study out of Stanford where they look at collaborating with AI to generate creative ideas. Generative AI should be pretty good at this. Its job is to kind of make things up. And so if you're trying to ideate with it, it should help you. And it turns out it does in terms of volume.

But what these people found was that AI-assisted teams produce more ideas. When judges rated them, though, they were actually pretty average. They were just fine.

So they bucketed them into A ideas, B ideas, C ideas, and D ideas, kind of a grading scale. And when teams use generative AI, they got more ideas overall, but they were all sort of Bs. They didn't produce more A ideas. That came from real humans working together collaboratively without an AI assist.

What I love about this study is that they actually also asked people how they felt about their own work. It wasn't just an independent rater saying, hey, I think that this is good work or bad work, the same way a teacher would grade you. It was asking performers about their perception of their own work.

And what they found that I love is that AI-assisted teams with worse solutions felt better. They're like, we did great. And AI-assisted teams with better solutions felt worse. So the raters are looking at this and saying, that's pretty good work. And the person who produced the work with their group is saying, I don't know. That wasn't great. And exactly the opposite.

If I'm thinking it was really great, other people are looking at it and being like, bleh, not that great. So they dug in and said, why? And the reason comes down to iteration. How much did they iterate and go back and forth with the tool? And what they found was that the AI teams who iterated less had worse results. They didn't press the AI. They didn't interact with it. They just said, sounds good. And they felt great about it. And the teams that iterated a lot, which actually led to better results, felt worse about their performance.

And this is kind of fascinating because it mirrors exactly what we know about teaching and learning in lectures versus active learning classrooms. So which one is better? Anybody? A, raise your hand, or B, don't raise your hand. All right. Perfect. Okay.

Effective Use of AI for Learning

This gets us back to this idea that your brain's superpower should be to learn. It doesn't matter whether you're talking about teaching and learning in a physical classroom or interacting with AI. The challenge is we've been conditioned against doing so.

If you think about how an LLM or a social interaction works, most of what we're doing is input, something hidden and magical happens in the middle, and then there's output.

And so in a teaching context, that input might be something like, teach me about cell signaling. This is something you could reasonably ask of an educator.

And so they would have some sort of hidden nodes inside of their brain and you would generate output. And it is typically somebody who is standing in front of you telling you things about cell signaling. You can do the same thing with an LLM.

Teach me about cell signaling. The hidden nodes look a little bit different. Philip did a really beautiful job of talking about the space that a large language model actually inhabits.

So it's taking each one of these words and trying to predict a response with the words in the middle. And then the output looks something like this.

This is literally from ChatGPT about an hour ago. So if you say, teach me about cell signaling, you get a lecture. You get a freaking wall of text.

Improving Inputs for Better Outputs

But the challenge here is that the problem is not the output. The problem is the input. We asked for something very lazy when we could have asked for something much more intelligent.

And whether it is a human trying to teach you something or an LLM trying to teach you something, if you give it better input, I promise you will get better responses. So all I've done is change the prompt to be slightly more specific: be the best active learning facilitator ever. Help me learn something about cell signaling by asking questions one at a time and critiquing my responses as we go.

You get your hidden nodes. And ChatGPT says, great, let's dive into cell signaling. First question, what is it? Why is it important? Take your time, answer, and I'll provide feedback afterwards.

So this is going to provide you with a way more structured experience to really dig into it and not let your brain do the lazy thing, which is read a wall of text and be like, I've learned it. So when we come back to these three things that your brain doesn't do very well, instead of letting it be lazy, ask LLMs to ask you questions.
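As a minimal sketch of this point, here is the difference between the two inputs as plain strings. The helper names are made up for illustration; any chat-style LLM interface would accept either prompt.

```python
# Two ways to ask an LLM about the same topic. The function names
# are hypothetical; the only thing that changes is the input string.

def lazy_prompt(topic: str) -> str:
    # The input that gets you a wall of text back.
    return f"Teach me about {topic}."

def active_learning_prompt(topic: str) -> str:
    # The input that turns the model into a facilitator:
    # one question at a time, with critique of your answers.
    return (
        f"Be the best active learning facilitator ever. "
        f"Help me learn about {topic} by asking questions "
        f"one at a time and critiquing my responses as we go."
    )
```

Either string can be pasted into any chat interface; the second one makes you do the thinking instead of the model.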

Interactive Learning with AI

I do this all the time. I'm like, I want you to revise this proposal for me, but before you do that, ask me five questions that will help you do a good job. And what that turns it into is a thought partner. It makes me think, it doesn't let it do the thinking for me.

The other thing you can do is ask it to break down your response. So if you think you understand how something works, say, ask me about this process step by step. Then you're actually pushing on this idea of getting over the illusion of explanatory depth.

And then my favorite is asking it to disagree. Academics write papers, and then they want to send them to reviewers so that they can get published, so they can get mini famous in their tiny little niche. And the best thing to do with a paper is not ask an LLM to write it for you.

It's to say, imagine you are the world's worst reviewer. I want you to critique all of my ideas and tell me how to make them stronger. So what you're doing is you're just changing the input, and you're going to get much better output.
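The three patterns above can also be sketched as reusable prompt templates. Again, these function names are hypothetical; the point is only that each pattern is a small change to the input.

```python
# Hypothetical templates for the three input patterns from the talk:
# interview me first, quiz me step by step, and disagree with me.

def questions_first_prompt(task: str, n: int = 5) -> str:
    # Turn the model into a thought partner instead of a ghostwriter.
    return (
        f"I want you to {task}, but before you do, "
        f"ask me {n} questions that will help you do a good job."
    )

def step_by_step_prompt(process: str) -> str:
    # Push on the illusion of explanatory depth.
    return (
        f"Ask me about {process} step by step, "
        f"and critique each of my answers."
    )

def worst_reviewer_prompt(subject: str) -> str:
    # Invite disagreement to stress-test your own ideas.
    return (
        f"Imagine you are the world's worst reviewer. "
        f"Critique all of the ideas in {subject} "
        f"and tell me how to make them stronger."
    )
```

Each template changes only the input, which is the whole argument of this section: better input, better output.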

Conclusion

If you want more, I'll plug Steven's book. He comes to these sometimes. He's wonderful. He was my former boss.

So the picture I kind of want to leave you with is that we want our experience of what we think learning is to be like riding the Eurail. We want it to be smooth. We want it to be beautiful. We want to sit there and let somebody tell us something.

And that's just so wrong, because what is really going to help you learn is going on a road trip where the signs are confusing, everybody's switching who's driving all of the time, and you're really trying to engage with the map, what's happening, what your goals are, and thinking about where you've been. So instead of smoothly gliding through your experiences whenever you try to learn, try to make them as messy and failure-filled as possible so that you actually remember all of the things you intended to learn in the first place.

And that is the end of the talk.
