Welcome again. I'm Jorge. I have presented before.
I have worked at SAP for three years in generative AI, and I have seen many views on how AI is going to impact people, and how it is not.
So my mission here, in 15 minutes, is to give you a quick view of what is happening in AI and what is going to happen, as far as we know. Okay.
First of all, I want this to be interactive, so let me ask all of you: how many of you had used AI before 2020? Okay.
If you didn't raise your hand, probably either you lived in a cave without any technology before 2020, or you had used it and you are going to discover that now. Okay.
A striking number: in March 2025 we reached the same number of LLM publications as in the whole of 2024. Three months versus twelve months. So AI is growing really quickly.
We're going to go back to the past.
Many of you have heard about this, but there were moments that were shocking for humans: when an AI won at chess, and when an AI won at Go. Does anyone know what Go is? Okay, who doesn't know it? Don't be scared. Okay, perfect.
Go is like chess, but a bit more complicated, from East Asian culture. So it felt like something deeply human. How could an AI beat a human at it? It did.
Later, fun fact, people found a gap and managed to beat the AI, but then the AI was improved and now humans can't beat it anymore.
Okay, so what is behind generative AI?
Generative AI is the newest, most popular kind right now, but before it there was Siri, there was Alexa, there was AI for traffic, for agriculture, for preventing diseases. It has been around for a long time.
If you have used Excel, we could say it has its own AI, but it's really classical AI. AI is not something new. We have been using AI for a long time; we just didn't call it AI.
So this is the point I want to get to.
We see the AIs from the movies, and we think that's the only AI.
Okay, autonomous driving is another example of AI that maybe we don't call AI. And not only Tesla or the fanciest autonomous driving: many of you have a collision alert, and when your car starts to beep, beep, beep, that's also AI.
Okay, virtual assistants, strategic AI, the transformation, okay.
GPT stands for Generative Pre-trained Transformer.
The turning point for growth in AI was in 2017, when Google, the main laboratory at the time, came out with "Attention Is All You Need", a paper on what transformers are and how this technology works.
And it wasn't until about three years later that another company took this research and released GPT and, eventually, GPT-3.5.
I'm also a bit of a geek about business, so for me this is a great example of why a small company, and it was small, can innovate much more than the biggest company in the field, like Google.
Okay, so in AI right now we are seeing two different paradigms. The first is what OpenAI, Google, and so on tell us is possible, what is going to happen, or what is already happening for some of us.
And then there is the reality of startups, SMBs, and freelancers, and between the two there is a big gap. So I'm going to try to take a look at both of them. They are quite different.
What you see in the latest keynote as "possible to be done" is not the same thing: that demo is prepared for the keynote and, under some specific conditions, it can be done, but it's not the reality we are going to live.
I have introduced that before.
How many of you knew what GPT means before I said it two minutes ago? Okay. So, the fun fact is that GPT is not a 2017-2020 term.
In fact, in economic theory GPT means general purpose technology, and that term has existed for years and years; the fun fact is that the two are connected. And what is a GPT in economics?
A GPT is a technology that is not like a chair. A chair is technology, in fact, but it's something specific. A GPT is not something narrow, it's something broad.
It's like electricity. Electricity is a general purpose technology. It doesn't have only one use; its infrastructure is integrated into all our day-to-day life, all our tools.
This is what GPT, generative AI, also means. It's not "oh, I might be able to use it to prepare this ad and do some marketing with it", no. It's something that gets integrated into our day to day.
Okay, AI limitations. First of all, generative AI limitations.
How many of you have asked ChatGPT something and the answer just wasn't right? Okay.
And that's the problem: we see a machine that answers, but ChatGPT, or any generative AI, is a super powerful autocomplete machine. So hallucinations come when we ask it to do something it is not meant to do.
That's why we have to distinguish when to use classic AI and when to use generative AI. A really fun example: if you want investment decisions from generative AI, it might give you something that could work, but please don't do it.
Because there you lose something called explainability. That is: okay, why did you give me this answer?
You can ask ChatGPT, Gemini, or Grok the same question, the same prompt, and receive different answers. Why is that?
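In rough terms, it is because these models sample the next token from a probability distribution instead of computing one deterministic answer. A toy sketch of that, with a made-up token list and probabilities that do not come from any real model:

```python
import random

# Toy next-token distribution: the tokens and probabilities below are
# invented for illustration; they do not come from any real model.
next_tokens = ["stocks", "bonds", "gold", "crypto"]
probabilities = [0.40, 0.30, 0.20, 0.10]

# A generative model samples from a distribution like this at every step,
# so the same prompt can produce a different continuation on every run.
for run in range(3):
    choice = random.choices(next_tokens, weights=probabilities, k=1)[0]
    print(f"run {run + 1}: next token -> {choice}")
```

Unless the temperature is set to zero, so the model always takes the most likely token, some run-to-run variation like this is expected.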
So don't stake important decisions in your lives on that. Okay.
Also, reasoning gaps. Now we have the "thinking" models, like o3, which have been out for a while. That's great, but I want you to get the point.
Generative AI for text right now is a super powerful autocomplete. It thinks word by word, sometimes confidently stating incorrect information. It doesn't process the problem as a whole.
In fact, I'm going to show you an example that I think will make this much easier to understand.
AI doesn't think in words, it thinks in tokens. So when you see a sentence like "OpenAI's large language models process text using tokens" and you give it to the AI,
it doesn't see that. I keep saying "he", but it doesn't see words or letters; it sees numbers. What you give it is converted into these kinds of numbers, and that is what gets processed.
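As a minimal sketch of that conversion, assuming OpenAI's open-source tiktoken package is installed; the example sentence is just the one from the slide:

```python
import tiktoken

# Load the byte-pair-encoding tokenizer used by recent OpenAI chat models.
encoding = tiktoken.get_encoding("cl100k_base")

text = "OpenAI's large language models process text using tokens."
token_ids = encoding.encode(text)

print(token_ids)                   # a list of integers; this is all the model "sees"
print(encoding.decode(token_ids))  # turning the numbers back into the original text
```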
So "reasoning" faces a really strong challenge in reality: more than reasoning, it is reusing what it has been trained on across the whole internet. That is how it works. Okay.
GPT-Evolution.
Have any of you tried GPT-3? Not GPT-3.5, GPT-3. Three of you. Okay, great. So you can see the difference.
The first one had 117 million parameters. GPT-3.5 had 175 billion parameters. And I have to be careful saying this, because OpenAI has never confirmed the number, but GPT-4 is rumored to be in the trillions of parameters, and GPT-5 is presumably much more.
Well, I'm not going to go deeper, because sometimes I have the problem that I go too deep into other things, but there are models like GPT-4o and GPT-5 that are reported to use something called Mixture of Experts: instead of training one model on more and more information, they train a couple of models specialized in different topics. Later, we can discuss that.
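As a toy sketch of that Mixture of Experts idea: the expert functions and the keyword-based gate below are invented for illustration; real MoE layers learn the gating network and route individual tokens to a few experts out of many.

```python
# Toy Mixture-of-Experts routing sketch (illustration only).

def code_expert(prompt: str) -> str:
    return f"[code expert] handling: {prompt}"

def finance_expert(prompt: str) -> str:
    return f"[finance expert] handling: {prompt}"

def general_expert(prompt: str) -> str:
    return f"[general expert] handling: {prompt}"

EXPERTS = {
    "code": code_expert,
    "finance": finance_expert,
    "general": general_expert,
}

def gate(prompt: str) -> str:
    """Pick an expert; a hand-written stand-in for a learned gating network."""
    lowered = prompt.lower()
    if "python" in lowered or "bug" in lowered:
        return "code"
    if "invest" in lowered or "stock" in lowered:
        return "finance"
    return "general"

prompt = "How should I invest my savings?"
print(EXPERTS[gate(prompt)](prompt))
```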
Okay, AI evolution. This is Midjourney, okay? Version one, March 2022.
Somebody said from one to ten. What score do we give it? Two? Two, okay. Ah, 0.5, okay, okay.
We are going to come now to... It's the same prompt in all the pictures, by the way. That's a really good point. Same prompt.
Two, 0.5, let's say we can see a one. Okay.
July 2024. What score? Okay, what we can say is an eight. Okay, it's an eight. From a one to an eight.
But that took only two years and a few months. So this is where I like to make the point: where are we going to be in two and a half more years?
Now we are seeing videos on social media that are completely generated by AI, and sometimes I go, "is this AI?". And I work with AI videos all day, and even then, in some cases I'm not 100% sure.
Later I watch it a couple of times: okay, it's AI. But it's shocking. And it's proof of what can be done. Okay.
I was discussing this earlier, and it's about what is coming next in AI. The thing is that right now we are here on the mountain, going up, up, up, and we don't know how far we are going to get. We might go to the moon, or we might get this.
Because the real fact is that AI is trained on data, and one possibility is that we run out of data. Then the AI levels off and there is no more peak, no more growth.
I don't have superpowers; I cannot read the future. From now on I'm going to talk about things that might happen and, in fact, about the market.
It might go somewhere around there, but the fact is that we are here, we are heading here, and maybe in two months, or one year, we hit the limit.
Or it might be ten years, and in ten years we are so technological that we are cyborgs. Who knows? I don't have the answer.
But, what are the emerging strategies?
Okay, so, first of all, the limitations it has right now and solving them, and the legal problems, mainly in Europe. For example, many of you have an iPhone or a Mac, and we are not getting the Apple Intelligence AI features that other parts of the world are getting. Why? Because they don't want to take the risk of a huge fine over privacy or whatever.
Okay, I'm gonna go back here. Okay, this is what I have told you before.
In three months, we have had more than in one whole year. And on the map, we can see where things are happening in AI.
Which model do you use? Where are they from? United States? China?
Does any of you use a European model? Which one?
[Audience member] I use models from all three countries.
From the three countries? Sorry, I didn't understand; can you repeat?
[Audience member] From the countries you've got listed there, I use models from all three places, for different reasons.
Including Europe? Yeah? So you are using Mistral, or which model?
[Audience member] Yeah, it's not my favorite, but I think it's good to keep in mind because it's local, it's European. But I think for open source, China is my cup of tea; I'm just being honest.
Okay.
Okay, great. In the US you've got Grok, ChatGPT, Gemini; and then China.
The thing is, I ask the same question everywhere, and unless some of you are just shy, which is okay, don't worry, only one person here uses models from Europe: Mistral. Okay, Mistral.
Good, good, good. But the fact is, that's not really the point. If we had to choose only one model, most of us wouldn't choose Mistral. So that's a huge challenge.
Okay, there are different things that could change. GPT is released now and it has the reasoning limitation. So, what could we do with more energy, more processing time? That's a good question, but the thing is that it may not be worth pushing it much further that way.
Okay, I'm running out of time, so: how many of you know about agents? How does an agent work?
Okay, an agent is built from a GPT driven by a prompt, or connected to different databases, which we are going to hear about from Dimitri now, in fact. So, MCPs, I'm going to skip those because Dimitri is going to talk about them, but it's something
with a lot of potential. Then specialization, that is: instead of training one really broad model, train small models, well, "small", they are still really big, but focused on only one topic. And the laws around AI, which we can talk about later.
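Going back to agents for a second, here is a minimal sketch of that loop, with invented tool names and a stubbed call_llm function standing in for a real model or agent framework:

```python
# Toy agent loop: an LLM decides which tool to call, the tool runs, and the
# result is fed back until the model says it is done. The tool names and the
# call_llm stub are invented for illustration; a real agent would call an
# actual LLM API or framework here.

def search_database(query: str) -> str:
    return f"rows matching '{query}'"      # stand-in for a real database query

def send_email(body: str) -> str:
    return "email queued"                  # stand-in for a real side effect

TOOLS = {"search_database": search_database, "send_email": send_email}

def call_llm(history: list[str]) -> dict:
    """Stub for the model call; it asks for one search, then finishes."""
    if any("rows matching" in step for step in history):
        return {"action": "finish", "answer": "Q3 revenue summary is ready."}
    return {"action": "search_database", "input": "Q3 revenue"}

history = ["user: summarise Q3 revenue"]
while True:
    decision = call_llm(history)
    if decision["action"] == "finish":
        print(decision["answer"])
        break
    result = TOOLS[decision["action"]](decision["input"])
    history.append(f"{decision['action']} -> {result}")
```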
Okay, so now, AI on the edge is basically when you don't need a server, when you can run AI on this laptop without going out to any third party or whatever.
So what happens when we have AI on the edge, which already exists even if those models are not the most powerful, and we combine it with omni-modal AI, that is: I can see in real time and I can listen in real time?
That is why robots are making so much noise, and why so much money is being invested in them. Because if you have an AI that can see, process, speak, and listen without transcribing the media first, and you don't need the internet, that is when robots make sense.
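As a minimal sketch of "no server needed", assuming the Hugging Face transformers library and a small model such as distilgpt2: after the weights are downloaded once, generation runs entirely on the laptop, with no internet connection required.

```python
from transformers import pipeline

# Load a small text-generation model; after the first download it is cached
# locally, so this runs fully on-device, no server or internet required.
generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "Edge AI means the model runs",
    max_new_tokens=20,   # keep the generation short for a quick demo
    do_sample=True,
)
print(result[0]["generated_text"])
```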
I have run out of time, so I will only ask you one thing, and this is more about the geopolitical future. United States law allows the state to say: okay, I need data from a United States company. Give me your data, Microsoft.
What happens if we build our government systems on another country's companies, which that country can access?
So, about Eva's question: okay, technically, if you upload something to ChatGPT on a paid plan with training on your data turned off, it is as safe under the law as using OneDrive, Google Drive, or whatever.
But in fact, if tomorrow we entered a war with the United States, or they thought we were drug dealers and needed information, they would have access to all our information if we use their systems.
Okay, so if there are any questions, we have five minutes now, and more time later during the networking.
It's hard to summarize all of AI in 15 minutes.