AI & Mind perception - Christine Looser - NY

My name is Christine Looser. I work for an organization called Minerva. I'm a professor at the university, and I work on our business development team for our global partnerships. I'm going to talk to you, and hopefully with you, about psychology. I'm a neuroscientist, and I know this is an AI meetup, so some of you may have seen this great explanation from the Guardian visual explainer. It kind of explains what
AI does visually. You put in some input, it has some hidden nodes, you get output. But what I want to really focus on is the hidden nodes that exist between our ears. And the reason this is really interesting is that your brain is ostensibly, at least for now, more intelligent than anything we have built, or at least it's using far less computational power.
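To make that input, hidden nodes, output picture concrete, here is a minimal sketch of a tiny feedforward network in NumPy. The layer sizes, weights, and activation are arbitrary illustrative choices, not anything taken from the Guardian explainer.

```python
import numpy as np

# A toy feedforward network: input -> hidden nodes -> output.
# Sizes and random weights are arbitrary; this only illustrates the shape of the idea.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # 3 input values feed 4 hidden nodes
W2 = rng.normal(size=(4, 2))   # 4 hidden nodes feed 2 outputs

def forward(x):
    hidden = np.tanh(x @ W1)   # the "hidden nodes" between input and output
    return hidden @ W2         # the output the network hands back

print(forward(np.array([0.5, -1.0, 2.0])))
```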
And brains are awesome because they are old. I am biased here, but one of the things I love about this graph is that it shows how the brain has evolved over time. And if we zoom in to these two little dots right here, this is actually a time span of 150,000 years.
And there's a book called Sapiens. You all may have read it. It's got this great quote in it, which says that if a corpse from 150,000 years ago found its way into a morgue, a pathologist wouldn't notice anything that was particularly different about it. We have very, very similar structures, but we live in a world that's massively different.
We've gone from solving problems like what to eat to solving problems like what not to eat. We don't have to worry so much about finding potable water in a lot of the world, although that may change, because we put so much of it in plastic. And we don't worry so much about shelter. We ask ourselves very first world questions like, which global city should I live in this week?
But there's one challenge that is just as hard, if not harder, than it was back then, which is, how does our brain deal with other brains?
And this idea of trying to find minds is really interesting because minds are something we can't really see. We have to attribute them to other things. So we have all this visual information in the world. We have auditory information. We look at these pictures and we tell ourselves stories about them. We do the same thing when we interact with things like AI.
But when we see these pictures, we don't actually care that much about the visual information. We care about the mind that's hiding behind it, because those are the minds that can interact with us and affect our lives in meaningful ways. And the only reason those pictures are important is because they give us clues to what the mind behind the facade is trying to tell us.
There's all of these different ways that we attribute intentionality and sort of what's going on underneath the hood to the things we interact with in the world. And it could be the way something sounds. It could be the way something moves. Those actually dance if I slow this down. There we go. So what's the top one if you had to give it a feeling?
Yeah, it's happy, right? And the other one's plodding along, and it looks really terrible and sad, like something bad has happened to it. And we also have lots of information from faces. This one's neutral. But we've created a world full of things that blur this line. So you might come in and ask Siri if she's alive. Hopefully you'll be able to hear this. She says, "That's a rather personal question."
And then you can say, no, but really, are you? And she might say something like, close enough, I'd say. And that was recorded in 2012. And so this is outdated in terms of the technology. You can think about all of the interactions you might have had more recently with generative AI saying kind of ridiculous things. We'll get into some of them.
There's also creepy mannequins that exist in the world. There are AI-generated influencers. And so all of these things don't actually have minds the way our brains have evolved to detect them, but they're mimicking the cues that we rely on to find other people who can help and hurt us in the world.
There's also Roombas, which are kind of fun, or Boston Dynamics robots and things like that. So they can imply things about their mental state, even though we wouldn't say that they have a mind the way that humans do. I want to take a minute, because it gets a little bit quiet in here, to think about edge cases. And the thing that I want you to do, since this is a meetup, is to come up with one or two in your heads. I'll give you 30 seconds to do that, and then you're actually going to talk to the person sitting next to you, maybe somebody you don't know, and I want you to debate which one has more of a mind.
So think of something weird. I know this is a strange question, so I'll give you a little bit of help with it. You might want to think through questions like: which of the two things you're comparing is more capable of forming a plan, feeling pain, feeling hungry, or forming a memory? We're gonna do this for three minutes. Does everybody feel like they have a weird thing in their head? This could be a frog. It could be a particular instance of a GPT.
All right, you guys seem up to the challenge. Talk to someone. All right, I'm going to ask you guys to wrap it up in 10 seconds. I like when people keep talking after the wrap-up, because it means they were having a good discussion. Anybody have a particularly interesting debate, like two weird things that you didn't settle on really clearly?
We had puppy versus ChatGPT. Oh, which wins? There's no winner. They were different on different things. Forming a plan, ChatGPT was better, but executing on it, the puppy was. One of the more interesting ones was forming memory. Both of them can. ChatGPT just has a short context window, I guess. My puppy doesn't remember a lot.
It learns over time, but it depends, right? So there are different ways to think about these distinctions. I once did this with a, we'll call them snotty, group of Harvard students who debated whether Yale students or rocks had more of a mind. And they came out with rocks, and I was like, all right, good for you.
But what we could have done was do this very experimentally. What we could do is say, here's a bunch of things, we're going to have you rate them on all sorts of different traits, and which one wins? And then we could collapse all of the data in a factor analysis. We'd have targets, the things we're rating. We'd have traits like planning, pain, hunger, and memory. And then the ratings are what you guys are all saying: which one's kind of winning?
And if we had put all that together into a factor analysis and we came up with different factors that these things weighted on, what we would have done was write a science paper. And so this science paper is going through and thinking about the way people perceive minds out in the world.
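For anyone who wants to see what that analysis looks like in practice, here is a small sketch using scikit-learn's FactorAnalysis. The targets, traits, and ratings below are invented purely to illustrate how trait ratings collapse into a small number of latent factors; they are not data from the study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical ratings: rows are targets, columns are traits, values 0-5.
# These numbers are made up for illustration, not taken from the paper.
targets = ["adult", "baby", "robot", "frog", "rock"]
traits = ["planning", "pain", "hunger", "memory"]
ratings = np.array([
    [5, 4, 4, 5],   # adult
    [1, 5, 5, 2],   # baby
    [3, 0, 0, 3],   # robot
    [1, 3, 4, 1],   # frog
    [0, 0, 0, 0],   # rock
], dtype=float)

# Collapse the trait ratings into two latent factors; in the original study
# the two factors that emerged were named "experience" and "agency".
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(ratings)   # each target's position on the two factors

for name, (f1, f2) in zip(targets, scores):
    print(f"{name:>6}: factor 1 = {f1:+.2f}, factor 2 = {f2:+.2f}")
```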
They basically had people come to a really old-looking website, where they met a cast of characters. These were the targets they had them rate. So you went from somebody who's a full-grown adult, to a baby, to God. My favorite one is the person in the persistent vegetative state. Also you, and somebody who's deceased. They had people rate them on different mental capacities, things that indicated what had a mind and what didn't, and they basically pitted them against each other the way I asked you to do, and then you had to say which one had more.
This is the cast ranked for joy. So the little girl has the most joy. All the way at the bottom is the robot, which is below both the person in the persistent vegetative state and the dead person. So humans are making weird judgments about this. And if you do this for the whole cast of characters and all of the attributes they asked about, you end up with two factors, which they named experience and agency.
And so experience is things like having a personality, feeling embarrassment, all the things that mean you experience the world. The world kind of happens to you. And the other factor is agency. It means that I can do things in the world, like make a plan. So we can actually visually represent this.
And so what we're looking at here is: if something is high on experience, meaning I think it can feel a lot of stuff, it's going to end up over by that one. And if it's high on being able to form plans and do things in the world, it's going to be over there. If it's high on both, it lands in the top right-hand corner.
So we can look at where all of these things fall, and robots are OK on agency. This is from 2007, at this point. People have replicated this, and this is where ChatGPT still falls. Even though it's capable of more, people don't like to give it a ton of agency, and they think it doesn't have experience.
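As a rough picture of that two-dimensional space, here is a small matplotlib sketch. The coordinates are invented to echo the pattern described in the talk (robot middling on agency but low on experience, the adult high on both, the dead person low on both), not the study's actual values.

```python
import matplotlib.pyplot as plt

# Illustrative, made-up positions on the two factors:
# x = agency ("can act on the world"), y = experience ("the world happens to it").
characters = {
    "adult human": (0.9, 0.9),
    "baby":        (0.2, 0.8),
    "robot":       (0.5, 0.1),
    "dead person": (0.15, 0.15),
    "frog":        (0.25, 0.5),
}

fig, ax = plt.subplots()
for name, (agency, experience) in characters.items():
    ax.scatter(agency, experience)
    ax.annotate(name, (agency, experience), xytext=(4, 4), textcoords="offset points")
ax.set_xlabel("Agency")
ax.set_ylabel("Experience")
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
plt.show()
```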
The weird thing about this is that there's kind of this giant gap in the middle. This is kind of the closest to something that's sort of balanced on both but low, and that's the dead person.
So the question is: why don't we like to put things in the middle of this? Why is this gap there? And my lab wanted to go in and ask, OK, if we force people to make these choices about things on a continuum, things that look like they're somewhere in between having a mind and not, what does their perception do?
So we did it visually, and we can get into some stuff that's not visual, but I made a continuum of faces that goes all the way from ones on one side that you hopefully perceive as alive to ones on the other side that you perceive as not alive. If you don't, come find me after the talk; I would love to study you. But to show you what this data looks like, I can plot this continuum.
And I can say, OK, we're forcing you onto this continuum from having a mind to not having a mind. Then I can ask, do you perceive it as definitely having these traits or not at all having them? And if we look at mind, we see this sort of stepwise function. And it looks exactly the same if we ask if it's alive. It looks the same if we ask, can it feel pain? And can it form a plan?
What you notice is that even though we gave people this continuum and they could have made a diagonal line, they don't. They have these two buckets that things sit in: a sort of wide bucket for things we're saying don't have a mind and aren't alive, and a narrow bucket for things we're willing to say have agency and experience, mind and aliveness.
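One way to see that bucketing in data is to fit both a straight diagonal and a step-like logistic curve to ratings along the morph continuum. The ratings below are fabricated to mimic the pattern described, with the boundary pushed toward the alive end (a wide not-alive bucket, a narrow alive bucket).

```python
import numpy as np
from scipy.optimize import curve_fit

# Morph level 0 = clearly inanimate face, 1 = clearly alive face.
# Ratings (0-1, "has a mind") are invented to mimic the step-like pattern from the talk.
morph = np.linspace(0, 1, 11)
rating = np.array([0.05, 0.05, 0.08, 0.10, 0.15, 0.20, 0.40, 0.80, 0.90, 0.95, 0.95])

def logistic(x, x0, k):
    # x0 is the category boundary, k the steepness of the step
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, morph, rating, p0=[0.5, 10.0])

linear_mse = np.mean((rating - morph) ** 2)                  # the diagonal people could have drawn
step_mse = np.mean((rating - logistic(morph, x0, k)) ** 2)   # the step they actually produce

print(f"boundary near morph level {x0:.2f}, steepness {k:.1f}")
print(f"diagonal fit MSE = {linear_mse:.4f}, step fit MSE = {step_mse:.4f}")
```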
So we don't like things in the middle. Does anybody know what happens when you put stuff in the middle? Oh, this is also important: this seems to be instantiated in your brain. So we stuck people in a scanner. We showed them categories of things that were ambiguous in terms of visual information, but not ambiguous in terms of whether they're alive or not. And there are certain parts of your brain that respect the visual similarity of these things, certain parts that respect whether it's something that moves around, but
The higher level social areas of your brain say, nope, these are the things that are specifically human, and everything else is different from it. And those are areas that do high level social cognitive skills like empathy, understanding, social resonance. When you give people stuff in the middle, it starts to look a little weird.
And it could be that you have this tension in your brain around not liking the thing that doesn't fit neatly into either one of these categories. So does anybody know the term for when things start to look creepy in the middle? Uncanny valley. Yes. OK. Great. So I'm not going to actually show the uncanny valley curve, but it's this idea that you don't really mind anything that doesn't look human until it starts to look too close to human, and then you're like, that's freaky. And if you are a researcher who studies these things, people send you stuff like this. This will go on for a while. These are all emails people have actually sent me.
And this was the best advice, right? So the point is, we don't like things that sit in this weird in-between space. And I promise this is coming back to AI, because it's not just visual information.
A neat follow-up study looked at what they called an uncanny index. They thought about what happens when a person lacks one, the other, or both of those dimensions we talked about that make up a mind. We don't like people who lack experience. We don't actually love people who lack agency, or who lack both. But the really hard one is if people don't seem capable of experiencing the world; that freaks us out a bit.
And it also freaks us out when computers, or things that we think are not alive, act as though they have experience, which is where you get interesting things like this. I'm picking a little bit on Bing here, but it was bad. So this is back in February: Bing had a chatbot called Sydney. It did really bizarre things, like trying to convince somebody to leave their spouse, telling them that they weren't in love with their spouse, they were in love with the AI.
It really tried to say that it was going to harm some people. It said things like, and it's not on here, but "My integrity is more important than your safety," I think, was one of the quotes. And this one, which was when they finally shut it down, and I love the quote on it. It says Microsoft is stopping conversations with Bing if a user asks about the AI's feelings. So Microsoft is really realizing that humans behave bizarrely when they're experiencing something that's not alive but has all of this experience associated with it,
and also that the AI responds really bizarrely when you ask about its feelings. So this is sort of the big picture about artificial intelligence, but I mostly see people talking about this side of it: how do we build the better neural network? How do we make sure it has more effective stuff on the other side of it? How do we get it to understand what we're saying and give us more useful and valuable information? But what I'm seeing less of is the recognition that this output is the input for us, and we're attributing a mind to whatever is on the other end of it.
And your hidden nodes, the ones in between your ears, are really judgy, and they're predisposed to be particularly judgy about other people's minds and about the minds of things we wouldn't necessarily think of, because we didn't evolve to deal with them. And so I think if you want to get to the point where you're building products that use AI, you have to remember this.
You might want to leverage artificial intelligence, but at the end of the day, your user experience, how they're going to judge that artificial intelligence, is really the thing that's going to give you a competitive advantage. So that is my lightning talk, and I'm happy to take any questions.
