Fireside Chat

Introduction

So now it's time for the fireside chat. We have the two speakers we've already met today, plus a third speaker we've added to the mix.

In terms of how technical our speakers are: Bora is of course quite technical, Stefano at the other end of the spectrum is more of a business person, and we have someone right in the middle, who is a leader and a business person, has been heading a technical company for decades now, and is a serial entrepreneur.

Raido is the chairman of the Thorgate Group. For anyone who doesn't know, he has founded quite a lot of companies. If you've been in the startup scene in Estonia, if you've been to conferences like Latitude or PyCon, if you know anything about Thorgate, Waybiller, and a few other companies, then he's probably the founder behind it.

He's also an angel investor. And you might have seen him in the news, for example, talking about AI, something he has been talking about for many, many years, long before it was hyped up.

And yeah, he will also weigh in today alongside Bora and Stefano. And yes, by the way, please come here.

Guys, take a seat. You'll have to pass the mic between you. I'll read the questions from here.

Quick-fire questions on AI

How we will do it is that I will start with quick-fire questions that you can only answer yes or no. So don't explain yourself, just say yes or no. If later you guys want to hear explanations, then you can always raise your hand and they can explain themselves. But I want to hear an answer from all three of you, so I will start with the first question.

Jobs, AGI timelines, and hype levels

Do you think AI will create more jobs than it replaces? Yes. No. Yes.

Do you think artificial general intelligence, AGI, will arrive within the next five to ten years? No. It was, okay, sorry, yeah. Nice, yeah.

I was imprimed on this. It's irrelevant.

You say next decade, right? Next decade, yeah, five, ten years. I say no. Okay.

Do you think today's hype around AI is bigger than what the reality is? Is there a maybe answer? Yeah. Yeah.

Nope. Okay.

AI replacing top management

Do you think AI will ever be able to replace a CEO or other top management positions? Don't be afraid. I might go with maybe. It depends on which company.

I think in general the question was quite broad and the answer is yes. Same, yes.

Is AI overhyped or underhyped?

I want to hear a little debate on the hype around AI. I agree with you as well. I think it's about the capabilities, what we can do with AI.

We're still learning about it, and it's much, much larger than the hype. People are still forming their judgments, but what we can actually do is much more.

So you guys disagree with each other. And I invited the guy to come over. But yes, maybe you can go first.

Well, you asked about the hype around AI and where we stand on it. And we are kind of on opposite sides of it. You're saying it's not hype, I say it is hype. And then you wanted an explanation, a small debate, on that topic.

A decade of AI: from early warnings to mass adoption

I've been talking about AI since 2009, 2010, something like that. And since the first big yearly slide decks in 2014 for the Thorgate group, there was basically, on the last slide, a kind of warning for all the developers, all the engineers in the group, that you need to be ready. At one point, knowing programming will be like knowing English, and being a teacher of programming will be like being a teacher of English.

It's not that I'm saying it's not important to know English, or that being an English teacher is a bad job. But the hype around it is now dissipating, and the actual value of AI, I would say, is getting more and more real. So when I was talking about this like 10 years ago, there was no hype.

There was like nothing. And then there was a build-up until about five years ago, when even our NGO, Python Estonia, and the Thorgate Group tried to get those discussions going: AI is coming, let's be ready. But actually it was really hard to predict what would then break through the human instinct of fighting anything that's new and being scared of it, and so on.

I was kind of giving up a bit around five years ago, but a couple of years ago ChatGPT and everything else really got the hype going, though they got it a bit too hyped up, I would say. And then Claude 3.7 came around, February this year, maybe the beginning of March, and it increased the hype further. What it basically managed to do is that many technical people started to understand that you can replace junior developers, thanks to Claude 3.7 and the stuff coming after it.

And that fueled the hype. And then non-technical people who had been hiding away their use of AI, marketing people mostly, started speaking openly about that stuff. Before, like two years ago, people were using AI, being secretive about it, and thinking that the CEOs didn't know about it.

But then, with developers having to use it, with everybody having to use it, I think they fueled it a bit too far. And now the hype is happening. There are many solutions, many different technological solutions, for programmers, for copywriting, and everything else.

Tool glut and market shakeout

So I would say 70% of the tools out there at the moment will be gone in two, three years. And currently, like 70% of the companies in AI will make a big loss for their investors.

And then, like, 20% to 30% of those tools will remain, the stronger ones, the better ones, the ones that are actually really trying to figure out the real-life use cases. So I think the hype is extremely real. Like, this morning I was listening to three startups pitch to me for new investments, and I was like...

fuck, not again.

So the hype at the moment is for sure a reality, but I would also say that even though we are on different sides of the coin with Mindstone, it's not that I necessarily think the end result will be bad. I'm just hoping we get through the hype really fast.

I don't think... Oh, I don't think we disagreed that much. It's the beauty of yes or no questions that you end up framing it differently in your mind.

Economic impact versus investment hype

And I think Raido had two things in mind: a comparison to the past, to the foresight he had in thinking that AI would deliver some impressive things, and also the investing landscape. And I think I agree on both points.

What I had in mind when I answered the question was more the economic and social side of things. And I think the companies where AI adoption is happening do see impressive productivity benefits that are in line with the hype, regardless of the tools that you use.

And I agree with you that a lot of these AI companies will go the same way as what happened in the 90s with all of these browsers, Netscape, and all of those startups. Browser wars. Huh? Browser wars. The browser wars.

I mean, we will see something. We will see something similar, but I still think the impact will stay.

And there will be, on the investing side, a space for picks and shovels, and for organizations like us that teach people how to use the picks and shovels, even if we are in a little bit of a gold rush.

I'm predicting Cursor will be like Netscape. Who remembers Netscape? But let's see.

Audience Q&A: AI and education

OK, any other questions? Yes, go ahead.

We are a really great generation; we've grown up step by step with technological progress.

But one of my priorities is teaching people. I see that we use AI for making money, for getting more knowledge, but as a teacher, and as a parent, I see a big problem: our kids, our teenagers, are just declining in knowledge, just using ChatGPT. This is a big problem.

And my question is: how can I, as a teacher, and others, for example as parents, integrate AI into teenagers' and kids' lives so that their future, our future, is brighter, more normal?

Since you have the microphone.

Calculator analogy and gradual introduction

Yeah, let me start with a thought or with an analogy that maybe helps to frame the discussion. I think a good analogy here is the calculator. So would we ask kids who are age seven or eight to just use a calculator straight away? No.

We would ask them to first learn how to do math by hand. Same thing, same principle, and then, I mean, we would still ask them throughout the education system not to use a calculator in specific settings because we value a different set of skills and we test for a different set of skills. Now, there is a principle then about gradual introduction.

Of course, the point of your question is how do we ensure that that happens at the social level, especially in a situation where, with previous technologies, we didn't always get it right, thinking about social media and thinking about the internet itself.

And I think it comes to probably a mix of regulation. So try to restrict access across certain age groups and enforce that rule quite stringently. We cannot just ban things. I think we need a more comprehensive approach where we also bring educators and parents on board, and we teach them how to teach these skills to the kids at different points.

Would somebody else like to add on, or do you feel good with the answer? I can always add something.

Parenting with AI: phased exposure and values

Well, I have three kids. The first one just went to school. So knowing my background, as everybody here does, you can understand that I have this concern. And I know how easily AI dumbs people down. That's a real thing.

But I've compared it, in my family, to my father, in a way that, for example, my father taught me how to take a car engine apart. And I can take an old engine apart. Give me a tractor, I can do that. Now, give me a new Audi, or a BMW, or a Jag, I'm fucked. I don't have the skills.

So it's not necessarily a bad thing. Sometimes the technology just moves to a new level. Like Stefano already explained, I think the calculator is a great comparison. It gives the right context up to a point, not before. And from a certain point onwards, you notice the technology is more powerful than you.

And now we went from doing math in my head, which I used to be rather good at, to the calculator fucking me up a bit, and now I can give a large data set to AI. Have I forgotten the underlying logic of maths? No, I haven't. Or physics? No, I haven't.

So I would just say, this being such a new technology, there are for sure open frontiers in so many places, and the leap will be much more difficult, because the human brain will at one point be, well, less capable to be polite, stupider to say it out loud, and will not have all the facts. Even if AI will sometimes hallucinate and give you the wrong ones, it will hopefully get data quality to a new level, and that will change things.

So I think the kids are already using AI. I'm hoping to keep my kids away from AI until the age of 10 to 14, but they have talked to it. I've explained what it is to them by the ages of seven and five. So again, I'm trying to introduce it to them, explain it, warn them, kind of ease them into it. But I hope the discussion will go on for the next three, five, ten years and we'll figure out the right balance between when to restrict, when to give access, and at what point to let them loose.

They will for sure be able to do much better things with it than we are able to do. We are going to be too old, I think, most of us here, by the time the real results of AI start to happen, and hopefully those will be good ones. Thank you, Raido.

Family-ready tooling and regulation lag

Well, I think for now it's a bit complex, because these tools are relatively new and they are not built in a way that supports family use. So for example, ChatGPT, I believe, introduced a functionality where you can share chats. With that ability, I was thinking in the background, you could potentially have something like a family chat where you add some context or some knowledge, so that when your kid asks about something, it also uses your answer.

So I believe the same thing will happen that happened with YouTube. Now there is YouTube Kids, you can see what your child has been watching, and you can restrict it for different age groups. I think the same stuff will come to AI as well; we will just have to wait a bit.

One thing though, regulations are always slow to catch up with what you actually need to do. The harm is often done before there are actual regulations, unfortunately. But yeah, doubling down on what everyone else said, that it just means you need to put in that extra effort, for sure.

Audience Q&A: Jobs, agents, and the future of work

We can take one more question before we, yes, go on.

I've seen a lot of people who have already been replaced by the LLMs. Many, many examples come to mind.

I'm getting the feeling that AI capabilities, the amount of artificial intelligence that is pouring into the labor market, are going to affect all of us. And I think we are at a kind of breaking point.

At the beginning, maybe one human with a lot of agents can do a lot of work. But at some point, companies are not going to hire people anymore; they're going to hire agents. And this is going to have a huge impact on the workforce.

So my question is, what's going to happen next?

Historical shifts in labor and where work moves

So I think if you told someone 50 years ago that 30 or 40% of the workforce would be sitting behind the desk looking at the screen and crunching numbers and putting words together, they would have probably thought that you were crazy. So just let's look at how our labor market functions from the perspective of someone in the 60s.

they would have probably thought that the world today would be a lot more similar to their world, where 40% of people were in manufacturing. And what you saw happening was that the manufacturing share of jobs fell over this time from 40% to 15%, at least if we take the OECD or developed economies, and a lot of jobs were created in, effectively, the processing of information.

So I think what will happen now is that, as these information-processing jobs start going, you will have a lot more people involved in sectors where we are currently seeing a lot of shortages: the education sector, the health care sector, the construction sector. And no, I don't mean to say that people will just need to take a hammer and start smashing things. There will probably be drones involved to a more significant extent 10, 15 years down the line.

So what I ultimately think is that we will take these productivity benefits that we now have because the cost of processing information is going down, and they will be redistributed and supply different jobs in the economy.

Robotics, UBI, and redefining human work

Well, as you remember, our answers were different. So I'm not going to be as optimistic. I'm trying to be not 100% honest publicly. Later on you can ask me what I actually think.

But it's not going to be easy. The social structure is built roughly in a way that 20% of people actually want to work, 30% are okay with working, and 50% don't want to work; they are forced to work. And out of that 50%, the last 20% are usually the ones we already have social issues with.

Compared to the time Stefano described, we are now macroeconomically at a level where, for example, solving world hunger would just be a decision by the richer countries, by the UN, and world hunger would be solved. Given the general productivity of the world, work is no longer something you actually need to do to remain alive as a human. So the amount of wealth we create, the amount of money available, that's not really the issue anymore.

We've kept going this way for many different reasons, social habit and so on, and AI alone will not break this barrier. But what will be different is if you have sufficiently developed robotics, and that's developing really fast as well. That means the constraint of a human being needing to be available to do physical work will be removed. A robot can do car mechanics. A robot can do cleaning, whatever. A robot can do all those things.

And they used to be clunky and funny, lawnmower-type things. But now you've seen modern robotics doing somersaults and gymnastics, really cool robots, and when you add AI onto that, it's really hard for me to see a job in the current economy, even a highly paid one, that can't be replaced.

Now, of those 50% of jobs, at least 20% of the people just really don't want to work at all. So it's easy to replace them if you change the tax system and potentially consider civil pay. Maybe we don't need those people to actually work. There is a share of people where, in a company, I would rather pay them to not work and go away. The same thing works in society: there are about 20% of people where, if they just weren't stopping the rest of society from functioning, we would be better off. So if we're going to have robotics and AI, and we finally start acknowledging that access to wealth and capital is sufficient and rich countries could do this, then if we introduce civil pay, thanks to AI and robotics, we might be better off just telling 20% of people: you don't need to work.

You have one average salary coming in every month. Just don't drink yourself to death. Be in virtual reality. Enjoy yourself. Don't cause a war. And if we pre-think those moves and try to create entertainment and social structures for those people, we can hopefully do it in a way that they don't drink themselves to death. And actually, it's a good thing: they will no longer be holding back the rest of society.

Will more than 20% opt into civil pay? Will it change the structure of society and how things work further? It's really hard to predict those things, but I'm saying it's for sure going to create issues. I think the wealthier countries will adopt some type of measures to allow some number of people to step away from work and not work, and that's not the issue. It's more about...

How fast can we bring back creative jobs and work that humans want to do? Not because AI does the job worse. No, I think AI will be able to do creative work and other stuff at the same level or even better. But we're humans.

Human-made value and creative premiums

So I have been predicting to my wife, who's a fashion entrepreneur, for 10, 15 years: you are sometimes envious that I work in IT, but believe me, in 20, 25 years I'm going to be envious of the fashion industry and creative people and musicians, because I'm going to want to pay 10 times more for an art piece when I know for sure it was made by a human. So that will potentially give you a silver lining: some work will be considered human, some places where you look at it and say, I want this done by a human. Arts and crafts, somebody doing woodwork, and you're no longer paying five euros for a wooden horse; maybe you want to pay a thousand euros for it, because you know it was done by a human.

So maybe behind this doom and gloom, and some bad times along the way, there might be an interesting future ahead. I'm not optimistic about the near future. It's going to really fuck up the economy, and jobs will be lost, and smaller countries like us will hopefully be able to move faster on this, but hopefully we will get through it intact.

Do you have any guesses as to which country would be the first to have this universal social income, this civil pay?

Estonia.

Well, it has to happen. Who else?

AI as a tool, not total replacement

Yeah, well, for me, regarding the job market: imagine when Excel was introduced to accountants. They were done for, right? But no, accountant is still a title, and they utilize Excel to do their jobs.

So I am optimistic about how AI will be used in different industries. Obviously it will affect how many positions are available in a company, as Raido shared, but there will still be a lot of things for the human race to do.

So I think it's another fancier tool in our tool belt.

Cool.

Deep dive: Developer productivity with AI

Okay, we'll move on now to some more detailed questions that you don't have to say yes or no to that can actually explain. I'll start with you, Bora, since you have the mic. Sure.

Where AI boosts coding—and where it doesn’t

What do you think is the biggest increase in productivity that developers can see when they're using AI today? Hmm. I have mixed feelings on that. So...

So I would say if you are vibe-coding, if you are creating something from scratch, if you are working on a hobby project or a weekend adventure, then yes, the productivity increase is, I don't know, 500%.

But if you're working on an already sizable or large project, then the help of AI at its current level is not as much as companies would hope. Actually, there has been research published on this, and there will be more, so it's early to say what the actual productivity increase is, but at least the early signs indicated as much. One study I read said that developers think they are 15, 20% more productive, while it actually turned out to be slightly negative.

So, for now, I would say it's early to tell, but definitely, you know, these tools: imagine everything that came out around '22, '23, the earlier phase of, you know, commonly known AI. All the images that were generated were crap, and whatever question you asked, it didn't reply in a meaningful way.

So already we can see how much it progressed, and this will only go in a nonlinear way. So it will be much better in a shorter amount of time. So I have hopes.

I've definitely heard someone say that a new industry might be senior developers cleaning up the work of AI and the kind of code it's writing.

Choosing AI tools for engineering teams

Since you lead an engineering team, then how do you choose what is the right set of AI tools for you and your team? I would say... you kind of have to take the time to build a demo and evaluate.

These GPTs I showcased are the product of a real need and something that works well enough that we came up with. I think you kind of have to take the time to test things out, otherwise it will just stay an idea, as with everything.

Any follow-up questions for Bora? Okay, then I'll move on to Raido.

Don’t overcommit: tools change fast

By the way, adding on: don't get stuck on any one tool, like I warned before. Whatever tool you think is winning, like I think Cursor was winning three months ago, two months ago. Yeah, the business people fucked them up.

So whichever tool you like at the moment, it's highly likely it will not exist or will not be the winner. I really want to emphasize Bora's point: just don't fall in love with one product at the moment.

Just play around. Look for boyfriends and girlfriends. Don't get married to any tool just yet.

There's so many changes happening. But anyway.

CEO and investor perspectives on AI ROI

Okay, I have two questions from you, one as a CEO and one from, if you wear your investor hat, I'll start with a CEO question. So where do you see AI is actually giving the biggest ROI right now? Is it operations, product, people, something else?

Where AI delivers ROI inside companies

Well, the simplified summary is that in the places where information is as digitalized as possible, so you can give as much data and access to AI as possible, accounting, the stock market, anywhere information is highly digitalized, AI is making a big difference. If the question is, as a CEO, where is the biggest benefit, then the quick answer is: wherever the most data is available in digital form. That's nearly everywhere nowadays, from stock markets and accountants to marketing people, and I would say in content writing, in any type of content creation, for sure I've seen it. So, summarizing again: where you have digital data, AI makes a big impact.

Yeah, take the marketing job, what it was two or three years ago, how people could argue "I'll be doing SEO and all those things" and that a full-stack marketer is not possible because of this and that skill set. Now, if a marketing person tells you that, it's: OK, thanks for coming, the door is that way. So there are many jobs where the number of people needed is changing: not three marketing people anymore, but one; not that number of accountants anymore, but one accountant and a CFO who hands the jobs to agentic AI, and so on. Three people, four people replaced by one.

Some of it is not public information, some of it is kept secret, but take lawyers: they're still charging you 150 euros per hour, and I don't know of any successful law firm where the two hours or four hours they're billing you for are actually still being worked. It's done in five minutes with a semi-AI-supported solution.

Accountants, again, like Bora said, accounting is still a job, but the best accountants are saying, you know, they have this many people and this amount of work getting done, when actually it was done at least 10x or 100x faster.

Even in my company, I have had to explicitly say to people: don't be worried. I know what you've been doing for the last six months. You told me something took you six hours when you actually did it in one hour, for the last six months, and I'm still giving you a bonus. It's fine.

So I would say, as a CEO, that a smart CEO often knows who is using AI first; they just sometimes choose not to tell you, so as not to scare you, because you're actually a great example. I think a smart CEO should not give you more work because you did the job faster with AI or whatever. They should try to find a way to make you feel at ease with it, let you be lazy and smart, have you teach others, and increase your salary.

Spotting substance vs. buzz in AI pitches

And now from an investor's point of view, you mentioned you heard three different pitches today. If you're listening to an AI startup's pitch, how do you know if it has substance or it's just buzzwords?

After doing this for 20 years, the first thing I see is body language. The guy comes in, and to be honest, in the first three to five seconds, if you've seen thousands of presentations, you can usually read it automatically: okay, this is going to be a funny one.

Sometimes, one out of 50, maybe one out of 25, that read is a mistake. So you shouldn't treat it as always being the case. You should always leave room, after you find out more, for the possibility that the person was just nervous, and so on.

So first, experience gives you this intuition, this gut feeling, and you understand body language. That kind of emotional intelligence is still a human skill so far.

Then again, experience helps, because we had the mobile bubble. Some people know the dot-com bubble in the early 2000s; then around 2010 there was the mobile bubble, apps and stuff like that with iPhone and Android; and there have been different bubbles around 2017 with B2B and SaaS. You just start to notice when the keywords start to repeat a lot. At one point, two years ago, I remember that everyone's slide deck had the word blockchain in it. Just blockchain here, blockchain there.

And the first question, after like 30 seconds, was: wait, before you go anywhere else with this, why do you need blockchain for this? The funniest example was somebody doing a random mobile app and saying, we will use blockchain. And it was like: whoa, stop. Why?

So even with AI being really useful, and after everything I've explained here, in many cases we stop. I've also stopped a person mid-pitch just to say: wait, you don't need to use AI for everything.

In some cases, you can say you're building a glorified Excel or whatever, and leave AI for the future when you have enough data. If you don't have enough data points to be even close to producing a useful hint, what the hell are you talking about? So the second thing, besides body language, is that you can tell people don't understand AI just from the initial question of how much data is needed.

And then you get to high-precision use cases, where people think that AI is right all the time, that AI doesn't make mistakes, and they pitch you a context as if AI were a superhuman robot that never makes mistakes. And then again: no, it makes mistakes, so if you're trying to use it at the moment, in some places you need a human fail-safe. So those are three examples of where I can say it's easy to spot the fluff.

But by far, whoever is pitching to you, a human being pitched by another human, if they have high empathy and an understanding of body language and psychology and stuff like that, that still gives you a huge edge. And that's something AI can't replicate: you can tell, without AI, who's actually not confident in what they're doing and is most likely trying to bullshit their way through the pitch.

So... human to human, it's easy to pick up on. Okay.

I will take any follow-up questions for Raido later; just to stay within time, I will move on to Stefano now, and then we can do questions at the end. So if you have a question, just keep it in your mind.

Power user habits: Everyday AI workflows

Stefano, I know you're kind of an AI power user, someone who also actively advocates for it. I'm very curious and would like to understand...

Like, what are you doing in your day-to-day life? What tools are you relying on the most? What are you exactly using every single day that's making you more productive with AI?

Personal automation and the elusive email copilot

I think I'm at a point where the repetitive tasks, and a lot of the tasks that can be generalized, are done through a chatbot or a custom GPT of one kind or another. So proposal writing, spotting leads in an Excel spreadsheet, sales emails: all of those tend to be AI-generated.

What I've still not done, and it's high on my list as we were discussing earlier, is an email assistant that has enough context to actually reply to 70-80% of my emails. And I think the reason that is not happening yet is that there are limitations in how much context you can give to these models. Once that is there, I think that too could be gone, and there could be a filtering system where the whole infrastructure knows which kinds of emails I should be responding to and for which ones a draft can be prepared.

Go-to tools beyond LLMs

What are your go-to tools besides the LLMs? I think I use Gamma quite a lot for presentations. I use Suno for music.

Actually, I write poetry in my spare time, so I sometimes do some songs out of my poetry. These are some of the things that... I mean, Super Whisper. We have seen it in action.

Then there's probably Napkin AI, some things here and there.

Not yet. I'm being lazy. I've also not had enough free time to properly play with it.

Why companies stall on AI—and how to unblock

Okay, but since you also train companies regarding AI: what do you see as the biggest blockers that stop companies from adopting AI more widely? And then, if you can also comment, what do you think they can actually do to overcome them?

So, the two main blockers are leadership and internal politics. So I think few C-levels have actually played with the tools and understand their capabilities. I've been in situations in executive workshops that we sometimes organize where it was clear that some of the executives had never opened an LLM.

And I found myself just showing where the previous chat would be. So that's the level of detachment from what might be happening on the ground, especially in large companies.

And then there is the internal politics, because ultimately, for AI to deliver on its promise of productivity benefits and time savings, there needs to be some reallocation of resources. Reallocation of resources means that someone loses out. Some people are fired. Some people are reallocated across departments. So that internal politics can really slow things down.

I mean, one of the reasons a lot of compliance departments have been against AI is also that it might actually reduce their work to some extent and reduce their team size, which also means you can hire fewer people, you have smaller teams, you have less influence and less power within the company.

So I think what we do, I mean, I don't think there is a solution to the internal politics, but what we can do is really bring the leadership on board by hosting an executive workshop where we show what these tools can do, so that at least the leadership is able to see the potential. And this is a C-level, CEO or COO conversation. It's not an L&D conversation.

Makes sense.

Audience Q&A: Agents and AI-led companies

Okay, but we can take questions now. So if you have a question, please raise your hand.

Yes.

What do we mean by “agents”?

Agents. Nobody talks about agents. Why?

Because they mean different things to different people, and I think there is a lot of hype and it's actually difficult to understand what they are. That's why I don't really use the word right now; I prefer to speak more generally about AI solutions and what they can do.

Privately, we can chat. There are some really extreme cases where executives and producers have cloned themselves, to the level of the agent being able to do their job, to a point where junior partners or co-writers aren't able to tell that they might be co-creating a new film or show with the producer's clone.

So I think this can be a private discussion. Like Stefano said, there is still a lot of confusion in the wordings and terms, what is what and what's being used. People put different names on things, but I don't think anything important was left out of today's discussion in terms of the general trends, so I don't think we left out anything important.

I think LLMs have also actually caught up. ChatGPT-5, for example, is quite agentic. So now when we say AI, it often includes agents as well.

But as Stefano said, agents actually, I've seen non-technical people call automations agents. Like you create an automation and you're like, yep, I created an agent. It's not an agent, but okay. So this is why often it makes sense to just say AI.

Yeah. It's become a spectrum, right? But yes, I agree with Bora's definition though.

Okay. Any other questions? Yes.

When AI-led firms compete with human-led firms

So at some point humans are going to compete with AI: AI is going to run a company and humans are going to run a company, and at some point AI-led companies are going to outperform human-led companies. What happens next? Just to repeat the question: at some point AI and humans are going to compete, AI is going to create companies, humans will continue creating companies, so which of the companies wins? I would actually give this question just to Raido, since you are the one who would be investing in both the human and the AI companies.

My job is on the line, so... Well... I would say it like this.

If you play by the same rules that we've been taught since the Industrial Revolution, then, as you know, our education system was built up to train factory workers. Most people who have listened to TED Talks or Sir Ken Robinson most likely know that our education system in Western countries is tilted towards maths and physics and not towards the creative side. So what will most likely happen applies to management as well:

You're able to be, I would say, a really big CEO and still bring financial results up to a point, up to a level, and so on. So AI will most likely outperform most CEOs at some point, with all the intelligence, facts, business know-how, and so on. But...

When, and whether at all, and there I'm not 100% sure yet, will it be able to outperform the humanity of the CEO, the empathy of the CEO, the ability to find solutions ahead of the curve, or to take risks? AI is going to be always calculating, always thinking the same way. They bring up this great example with the Tesla cars, right? You know, the old person and the baby.

If you're a human being, even if you weigh the odds one way or the other, if you need to kill one of them, you will kill the old person. Highly likely, because you will hopefully think: okay, both options suck, but at least there's maybe some reason in that the other one still has the majority of their life ahead of them. So reasoning like that can happen in the human brain, most likely not in the AI brain.

So, even though the logic might differ with those stories, empathetic thinking, thinking CEOs, will be an important factor for competing companies as well. And up to a point, like I said, it's maybe like a futuristic movie. Somebody has seen I, Robot, right? You know, the character just hating robots because they are robots, and hating his arm because it's robotic, and so on.

So there's going to be some large shift happening in the human brain in the next five, 10, 15 years, where most of us most likely will treat a human-created solution, a human-created product, in a different way, with a different value, than something that's done by AI.

So I will say AI will outperform most CEOs. Like, 95% of CEOs suck at their job. I hope I'm part of the 5% and not that 95%, but without putting myself in either category: CEOs are usually bad at their job.

Sometimes they're made out of specialists, sometimes they haven't gone to school, sometimes they don't have any empathy. And AI will only be as good as the input you give it. That means garbage in, garbage out. So CEO garbage goes in, CEO garbage comes out, AI or no AI.

So until it gets really, really good, the competition is hopefully going to be on the human part, where you can make your humanity a competitive edge for the company. So, to give you some hope, it's not so bad.

Okay, one last question and we can wrap up then.

Working less, producing more: Will society accept it?

Yes, go on. I have a question. You gave an example from your company.

If somebody starts using AI, and instead of six hours the job takes one hour, and it's the same job, then instead of punishing them, you celebrate them. Is that really a future you think you can live with as well: that, I don't know, four days of the week you don't come to work, you only work one day, and you have the same salary? You said that 5% of CEOs are good and don't suck at the job, but is this something that will be accepted in society as well, or is it just some exclusive cases?

They used to say, back when there was an IT guy in the company for hardware and stuff like that, that the best IT guy is the one you never see. Things are running, things are happening, things are going in the right direction, so, like a smart person, you don't ask questions.

Rewarding leverage, not hours

So I do think, to summarize your question: this is happening, some people are doing four days of work in one day, and will society accept it? I don't know about society as a whole. Like I said before, there's a normal distribution of people; as always, there are going to be 20% of early adopters and people who understand that it's okay.

As long as you have set up your agents, your systems, in a way that you are creating more value than other competitive people, then why would it be a smart decision, as an owner or a CEO, to let this person go? And then the question is going to be, if this goes on for the next five, 10, 15 years, like I said, I predict we need to figure out something for the people who don't want to work, who aren't able to create those systems.

There would be civil pay. Otherwise, I think there will be demands that those people be taxed more, and so on. So the societal feedback on this will be something we need to figure out. Civil pay is one option.

So don't just say: okay, now I want you to do six times the work. I'm happy to give Stefano and Bora a word on this as well. And most likely, there will be a balance.

Because let's be honest, if you treat a smart person like that, you can do it once: hey, your reward for doing a great job is more work. And a lazy smart person will most likely go: fuck you. First of all, I will now hide everything I do for as long as I can. And I will go to a competitor, tell them how bad you are, and do two times more work for them, because I hate you and I like them. So there are reasons why, from a human point of view, it's a bad decision to be a dick as a CEO.

But for sure, you're totally correct. I'm predicting many cases will be like that, where the "benefit" is more work. Even I, as a really junior CEO at the age of 20, felt like: well, that's great that you got something done by Wednesday, you have a whole day free tomorrow, let's fill that up with shit.

As you get more experience, I think, you understand that short-term you might win, but long-term you lose the game. So think about people in the long run, I would say. Do you want to make a last comment on that?

The role of policy and coordination

Yeah, I think it also goes beyond the individual actions of the CEO and of the worker. If we look at previous technological shifts, there was a role for political action and coordinated action, thinking about the trade union movement. Am I saying we need to bring trade unions back? No, not necessarily. But we need to think about ways in which we decide to coordinate as a society on how much we work.

Because there is the civil pay model that Raido mentioned. There is also a model where we start having collective bargaining or some form of wage and hours coordination, and we decide that each person works less. And I think the role of politics here can be very important. Okay.

Conclusion

With that, we'll wrap up this session. Thank you so much for answering all of the questions.

Thank you.
