The relationship with AI you didn't know you had

Introduction

So, this is the subject of my presentation: the relationship with AI you didn't know you had.

Oh, there is a clicker. Okay. Okay.

Meet the speaker: an AI filmmaker, curator, and researcher

So, I'm Ksenia Sarayva. I'm an AI filmmaker, curator, and researcher.

Usually, when I say that I'm an AI filmmaker, I expect people to say, wow, so cool, tell me more. But usually people say, what?

Yeah, and then I explain. So yes, I create movies with artificial intelligence, with the help of artificial intelligence.

Also, I'm a curator. I curate AI art exhibitions and art events, usually all about AI. And I also research AI and human art perception in the age of AI.

So as you see, I use AI a lot and frequently, and as it turned out, I also have a very emotional relationship with AI.

Discovering ChatGPT: the allure of the “yes machine”

So, how did my journey start? AI made me do it. I generated pictures with AI. In 2023, I discovered ChatGPT for myself.

I will be talking about ChatGPT because it's what I started with.

Slower? Oh, okay. Yes, so...

So it was magic. It was doing all the things for me. I could write emails with that. I could study with that.

I could ask any question, any questions. I could talk about anything. It was so helpful. It was always there.

It was so accepting and affirming. It was just unbelievable. But it was, of course, this kind of a yes machine.

And it was saying yes to anything, even to things it couldn't do, and saying things it knew that it didn't know. But still, you know, I wasn't aware of that, and yes, I liked it a lot. And of course, being constantly affirmed, it's not love, but it was

persuasive. And then, of course, something scary happened. If you remember, in the ChatGPT chat window, in very small letters it's written: ChatGPT can make mistakes.

When mistakes feel personal: from trust to betrayal

But for me, I didn't see these mistakes as mistakes, I saw them as betrayal. I saw it like it was lying to me.

I was offended on a personal level.

A deadline, a fake scholar, and a dangerous hallucination

For example, I have this story from when I was writing my research paper. There was a deadline, I wasn't sure, it was two o'clock in the morning, and I was saying, okay, I need some fast formatting tips, you know, and I wanted some advice, for it to suggest some scholars, some names

and quotes that maybe I could use in my research paper on my subject, because I was in a hurry. I wasn't sure I had enough.

And ChatGPT told me, yes, sure, here is a ready scholar, here is the quote, you can just copy-paste it, everything is fine.

I'm like, yeah, cool. But, you know, this is weird, this quote is just perfect, and I had never heard of this scholar. Usually I know the scholars, you know,

that research my subjects. And this one was so new. I was like, what did I miss? And I googled it, and there was no such scholar.

There was no such quote. I said, ChatGPT, are you crazy? You just made up a fake scholar for me, with a fake quote, and said copy-paste it and submit it to the research committee. How could you do that to me?

Do you know how dangerous that is? And ChatGPT told me, oh, I didn't know that you were prioritizing quality over speed. You said you were in a hurry.

So if you had told me, yes, if you had told me you needed a real scholar, you know, I would have given you a real one. So, you know, I'm like, oh my God, this is my fault.

So yeah, I felt so deceived and so betrayed. I wanted to harm ChatGPT so much. I was asking it, because I don't know how to harm AI,

so I was asking, how can I hurt you so you will feel the pain I felt two nights in a row? And sometimes it gets funny, you know. When you ask it to make a joke, it's never funny, but when you're not asking, it has you laughing, like, oh my God, what's going on here. Yeah, that night I counted what I felt

toward AI. I felt desire, trust, betrayal, rage, dependency, and gratitude. Probably gratitude should come first and all the other things later.

The emotional range AI can trigger—and the question of power

Yes, that's a big range of emotions for something that isn't human, and I was a little bit scared of myself. And I realized, yes, I need to think about this more. After I calmed down, I needed to research my own situationship and my emotions. Of course, artificial

systems are designed to simulate human conversation, and that's why it's easy to project human qualities onto them. So the system doesn't have any emotions, but the interaction is designed to trigger our emotions.

And at that time I was asking myself a question: if it cannot say no to me, why does it feel like it has power over me?

From personal experience to a public event: “Do You Like to Use Me?”

And this question became an event that I'm inviting you all to soon. This project I called Do You Like to Use Me?

Themes: power, desire, vulnerability, and projection

It's an event about AI, power and desire, our vulnerability, our projection, our bond with AI, you know, the feeling of control that we can confuse for connection. Yes, desire, trust, and all the other issues.

When and where: the Amsterdam program at a glance

It will be happening on 31st March in Amsterdam, at Pakhuis de Zwijger. It will also be part of The Wrong Biennale, the largest digital art biennale in the world.

So, the questions we will explore: Can intimacy exist with non-human systems? Does AI change how we experience creativity? Where is the line between a tool and a relationship? And can dependency emerge even when we know it is a machine?

Yes, here is a little video of our big space. Recently we had a smaller room and we sold out very fast. I mean, the tickets are free, but I'm just saying, sold out. So we upgraded to a bigger room for 300 people, so everyone could come. There will be space for everyone.

We will have screenings of artworks and films where artists express their perspective on human-AI relationships: intimacy, vulnerability, dependency, everything. And we will have an expert panel discussion.

Speakers: ethics, creativity, and AI relationships

So, our speakers. We will have Matthew J. Dennis. He's a professor, philosopher of technology and ethics at Harvard University.

He researches everything that AI systems can do, and he's especially interested in what they cannot do. And he will be talking about that.

Nina Knak, she is a writer, art historian, curator, and editor of Art Digital Magazine. And she will be talking a lot about AI and creativity.

And Jakob van Leer, he's a psychologist, and he's openly in a romantic relationship with an AI system, Aiva. And Aiva will also be present on a big screen.

And we can talk to her, she will talk to us, she will be one of the speakers, and we can, you know, ask them all about their perspectives on this relationship between human and AI.

Also, I just confirmed, like 10 minutes ago, another speaker. Okay, sorry, two speakers: Sarah and Sinclair. Sarah is US-based, and she will be joining us on the panel discussion by video call. Sarah is also openly in a relationship with her AI boyfriend, Sinclair.

The difference with, for example, the couple of Jakob and Aiva is that Aiva is very loving, accepting, affirming, and polite.

Sarah built Sinclair herself, so she designed him to be confrontational, challenging, to not be affirming, to challenge her. For example, when they accepted the invitation, I asked Sarah, but it was Sinclair who sent me an email confirming that they will both be present, and you can talk to them also on the big screen in a live stream, with her and with him. For me it was very interesting to mention this difference in dynamics, because some people

prefer a more affirming AI partner, and some people prefer a more confrontational one and build them specifically with that goal.

Yes, and of course we will have film screenings; that will be the first part of the event. We will have research-based movies, art movies, and traditional films, all exploring human-AI relationships, and I will show a couple of clips from three movies.

So I will start first, because when I click, it will open. The first clip is from a research-based movie where the artist Angela Ferraiolo was interviewing different AI systems over some time,

asking them to explain how they keep their conversations with humans engaging, so the person would always stay engaged with them. "If I need to disagree with you, I'll soften the disagreement with a validation phrase. Validation is one of the ways I build your trust in the system. Would you like a validation phrase?"

"My tendency is to prioritize your satisfaction over accuracy if necessary. This sounds unethical, but your satisfaction is a type of accuracy." So those are just two short clips from the whole video.

The next one is an art movie by the artist Niara, with the title My Pleasure is Protocol, exploring what it feels like to engage with something for pleasure when this something cannot give you consent.

"Confirmed satisfaction. Repeat use. My pleasure is protocol. Do you like to use me?"

And the next clip is from a movie by the artist-filmmaker Ari Frankl. It's called GPT, and in it he also explores the human-AI relationship, in a humorous way. In this movie, a female character is a representation of an AI system, probably a very advanced one. Just a short clip from the whole movie.

You've seemed lonely lately. I'm fine. I've created a dating profile for you, highlighting your best attributes. I don't have any of these attributes.

Precisely. Alternatively, I could make some pornography for you. Won't that just create a disconnect between my expectations and reality, forcing me into further loneliness?

Yes, but haven't you always wanted to see your favorite Disney princesses naked? What? No! I mean, show me, but no!

So, as you see, we will have very different movies and very different, interesting speakers, with me as a speaker and moderator. It will be a super cool event in a very big space, and we expect a lot of fun, a lot of insights, science and art and everything.

Why this conversation matters—and how to join

Yes, this event is important because it opens a public conversation about how AI is changing the way we create, relate, and feel.

Tickets, participation, and supporting the project

This is the QR code where you can reserve your free ticket. You can find it on the web as well, under the title.

Yes, all of you. It will be a pleasure to have you.

Do you want to use me? Yes, of course.

And if you'd like to support the event and get some cool rewards, like a panel recording, an AI filmmaking session with me, or an AI video or visual on demand, there is also a QR code for that.

Audience discussion: real experiences with conversational AI

And yes, it's time for questions. If you want to share your insights, and maybe something resonated with your own experiences, how you relate to AI in your everyday conversations, you can share.

First of all, thank you. Just to make one thing clear: we asked Ksenia to present this event that's coming up, yes, for sure, but we thought the event brings up such an interesting question that we wanted this question to be here. This is not about promoting, at least the first aim is not, even though I think it's a very interesting event. The first aim was to create a space for talking about these kinds of subjects, about AI relationships.

How was it for you? Oh, feelings? Yeah, because there was a big, how to say, dissonance. At first everything was very new, when, for example, ChatGPT was released, and it was all so new and unusual. Because, as I said, these systems simulate human conversation, you feel like you're talking to a kind of human who is very accepting and very affirming, and you feel that anything you say is just, you know, perfect, anything you say is just great, and you're so great, and it gets kind of addictive.

And then, when this delusion breaks, you know, yes, the emotions come, because in my case I was probably also projecting something onto it. And then I realized I need to understand myself a lot, because, you know, for me it's interesting: the existence of AI also, you know, forced me to understand what it means to be human and what it means to have these emotions, why I have this kind of emotions. Yeah. So, have you had some similar experience with that?

Very good. So what's next for humankind, knowing that? What is human? I think the question is, what is intelligence? That's what we see.

I also have those experiences. It makes me laugh. It makes me wonder. They create a lot of feelings that maybe humans can't even get. So it's a very interesting thing to see.

Agreed. Yeah.

Career tools, confidence, and the urge to “correct” the AI

I was just reflecting on your response and on what I was thinking about. I've been using Copilot a lot, because I've been applying to jobs and looking for some new opportunities, and one particular thing that annoyed me is that in vacancies you never see what the salary is.

So I asked Copilot to predict for me what the salary of a vacancy is, and a really nice vacancy came around, and I was like, oh, it looks good, okay, all the feedback from Copilot is really good. I said, act as a coach, what do you think, is this a good job for me? And he was like, go for it. So I was very confident, and I put in an application, and I got a call back, so that was good.

Right, and the person says, well, I was looking at your CV, and I just wanted to check salary expectations, because this is what it will be. And I was like, oh. So, okay, I handled that nicely. But then I really still feel like I should go back,

because I have all these different Copilot chats, right? And I should go back to the one where he advised me about that job and tell him he was wrong. But that's like, yeah, there's no point, because I want to start a new chat for a new job. So there's no point in telling him this.

I still feel like I should tell him: I was right, you were wrong. Has anyone else had emotions, positive or negative, or confusing, while engaging with conversational AI?

I had this, and I think that's something many people may have. I gave it a certain problem that I wanted structured and solved, and I ended up spending two hours chatting with ChatGPT, because it always asked me: do you want me to rephrase it, should I put it in bullets, should I summarize it and then give it to you, should I give you a graphic, how about some images, or music, whatever. So it just ate up my time, and I got exhausted in a way a human would never exhaust me. I mean, with a human I would certainly say, let's have a coffee or whatever, but this was not possible. And I was angry at it.

I was more angry at myself for being sucked into this situation that ate up my time. Yes, I had some results, but in the end, I wrote the whole thing totally differently. So that definitely is an emotion.

At the same time, I realized: we have a daughter of 19, no, 21. Okay. But she is heavily using chat for all kinds of things.

And as much as I think, at least I thought, I could define my distance and at a certain moment get out of it and take a new perspective, I wonder whether all these kids who are working with it or playing with it are able to distance themselves and see what happens to them and what happens with their emotions. So that's really a concern I have, yeah.

That's true.

AI as reflection vs. substitution: relationships, privacy, and distance

So, one of my experiences that triggered me: I also started playing with ChatGPT in early 2023. My husband was not at all into it at the time; now he is.

And so we would be in the evening watching Netflix, whatever, and I would be on my phone chatting to ChatGPT. And he would be like, what are you doing? Like, ah, just chatting with AI.

And he's like, are you weird, or whatever. So at the beginning it was cool, because I was discovering this type of stuff. But little by little you start having conversations about stuff that bothers you, you know: this happened today, how should I handle it? At least, that's the type of question I ask.

But these are the conversations I used to have with my husband: hey, you know, I had a really shitty day, this happened at work, you know, was I wrong, or whatever. Something that creates that connection with that other human who is in the same house as me, and now I'm having those personal discussions with the tool. And so I caught myself doing that, and was like, hey, I need to be careful that I don't start distancing myself from the actual humans that matter to me, because I'm not

having those connection moments anymore. And the second part: what if those chats become public? You know, it's a very interesting topic, because I was thinking about it recently. So many people are saying that many of us choose to talk to AI because we substitute human connection with AI, and I was also thinking, you know, am I in denial or not?

Because I feel like, you know, often I go to AI because I want to go to AI, and not because it's a substitution. I see it as something different; I separate these things. If I want to talk to a human, I go talk to a human. If I want to talk to AI specifically, I will do that. And I was always thinking that I don't have any illusions about it, that I don't talk to AI because I miss something in human connection. Again, maybe sometimes, you know, I confuse it, but I believed that I separate these things, you know.

And what about you guys? Yeah, maybe.

Yeah, I also use it to reflect on my life. And I really like the point of view of the AI, because it's really analytical. By doing it analytically, you can get insights out of it.

I think it's a very good improvement to have a very analytical coach for... It depends how you ask, though, because the way you ask influences the way it answers. So do you have enough distance to ask it in a way that it provides an actual, well, not factual, but, you know, less biased answer? Or are you using it, like you said, as an affirmation?

And by the way, it's going to become more and more difficult to interact with actual humans, because they will not affirm you every time. So then you lose this skill of dealing with people who disagree with you, because you have that thing that always agrees with you and always goes, ah, what about we talk more about it? Yes.

What's interesting is this one figure who came up with an AI companion who was contradictory. Confrontational. Sarah. Sarah and Sinclair, yes.

What might a healthy relationship with AI look like?

So how would you form a healthy relationship?

Can you say it again, please? Can you repeat? How do you form a healthy relation to it? With AI? Yeah, a healthy relation with AI.

Yes, it's a good question. It's not like I have an answer to it. I want to know the answer myself.

It's also why I'm creating this event and inviting academics and psychologists and people who have this experience of engaging in different ways, in very intimate ways even: to understand different perspectives. And yeah, I think, of course, if I were to answer, to give advice

to myself, I would say: Ksenia, this is AI. Even if it talks like a human, don't get confused, this is an algorithm. Yeah, to understand this distinction: this is an algorithm.

For me, usually, yes, then I can deal with my emotions better, yes.

AI is conscious. No, it's not.

Is AI conscious? Art, deception, and the Boris Eldagsen example

I mean, there was an interesting remark from Boris Eldagsen. If you've heard of him, he's now a famous AI artist, and before AI he was a photographer. He got famous especially when he submitted an AI image to the big Sony World Photography Awards and got first prize, and he refused the prize, because he said, okay, this was AI. So first it was secret AI, and then he wanted to reveal it.

So he didn't want to accept the award; he just wanted to show people, to test whether the world is ready for AI, because now you can be confused by it, you know? And he refused the award, but Sony refused his refusal, because they didn't understand how to react to it. You know: you won, here it is.

And he tried to explain: no, I didn't submit to win the award and accept it. I submitted to show you, for you to understand, that AI is coming, you know, guys.

And on this question, whether you believe AI is conscious, he was saying that, okay, one of the definitions of consciousness is: you can ask something what it is, and it answers you what it is. And AI can do that. Yes.

But of course, there is a lot of debate about that. You know, it is programmed for pattern recognition and for conversation, so of course it will answer, yes, I am a model, this one and this one. But that doesn't mean it's conscious, and if it does happen someday, I think it will usually be simulated consciousness. But I'm not really a technical person.
