So I thought I would introduce myself briefly. You've probably not seen me before. My name is Stefano.
I'm the head of the community in Paris.
So I'm here. I came to Toronto for a chance to reconnect with our community here and meet some of you. I joined MindStone recently, and before that I was an economist at the OECD in Paris, where I advised ministers and heads of state on the future of work, education and skills in 10 to 15 countries, including Canada, actually.
Now, many of you have asked me, is this really happening in a church? Strictly speaking, we're not in a church, but we are in a room that is part of a church complex.
So are you here to preach about AI? Is that what you're going to do?
Well, we are not. AI is very different from religion; we are here to talk about very practical things. But they do have one thing in common, which is that they have, and are going to have, an impact on the lives of many people. And at MindStone we really think that AI has the potential to transform the world we live in, in line with other general purpose technologies.
And so this is why we organize these meetups and really try to share with you some of the advances and interesting use cases that are happening in AI, so that we can reflect together on the impact on our society, productivity and well-being.
So for how many of you is this the first meetup? Oh wow, quite a few.
A few other things about what we do at MindStone: this is not the only meetup that we run.
We are in 10 cities, and we will try to get to 20 or 30 before the end of the year. In North America, we are also in New York and Boston. In Europe, we are in Lisbon.
We are in London, Paris, Madrid, Berlin. So we are in many different locations.
And our community has around 25,000 people. So we are on track to build the largest practical AI community in the world.
I see a hand, yes. I'm just curious, I mean, how did you get the church?
Do they actually charge you to rent the space? Yeah, they offer the venue.
And no, we don't have a religious affiliation, in case that's what you're asking.
So, without further ado. These meetups have a common format across all the different locations. The format is, we do a practical demo showing how you can be more productive using AI, then we do a technical talk.
So we actually show how large language models can be useful in building a product. And then we do a theoretical talk about the future.
And so I will get started now, if there are no more questions on MindStone and what we do, with a practical talk. All right. Let's get going.
You see I've got ChatGPT open, so I'm giving you a hint about what I'm going to talk about. So, this is a session on custom GPTs in ChatGPT.
So, how many of you are familiar with custom GPTs and have used them before?
All right. Not many hands. I'm not surprised.
I asked the same question in Paris, and it was probably 10% of the room, like here.
And is anyone brave enough to share how they have used custom GPTs? All right, no. Understandable.
Yes? And have you actually built your own? Okay, fantastic.
So I still hope I will be able to show something new, maybe a couple of tricks. I hope this will be useful for most of you.
So what I will show is how you can build a custom GPT. I really wanted to build one together on the spot, on a random topic that we would select, but I'm still a bit jet-lagged (I came here from Paris yesterday), so I actually pre-built one, and I will show you how I built it.
Now, before I share the GPT itself, let me describe the concept. Custom GPTs are a pre-programmed version of ChatGPT that can help you with a specific thing. You basically pre-program ChatGPT to do one thing, one task, very well.
This allows you to save a lot of time while prompting, and it's great for repetitive tasks. One of these repetitive tasks that we have at MindStone is answering questions that we get from potential customers about our learning programs. We get tens of questions every day, and it's obviously quite time-consuming to answer each of them.
And so I will show you how I've built, just for the purpose of this meeting, this meetup, a GPT that is basically able to answer questions from potential customers on our programs.
Is that clear? Any questions on this so far? Okay, clear.
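Custom GPTs live inside the ChatGPT interface, but if it helps to see the idea as code, the essence is just a fixed instruction block that gets reused on every request. Here is a minimal sketch using the openai Python client; the model name, the instruction wording and the helper function are my own illustrative choices, not how OpenAI implements custom GPTs.

```python
from openai import OpenAI

client = OpenAI()

# The "pre-programmed" part: one fixed instruction block, reused for every query,
# so you never have to retype the role, the tone or the constraints.
SYSTEM_PROMPT = (
    "You are the customer success lead at MindStone. "
    "Answer questions about our training programs using only the provided material. "
    "If the answer is not in the material, say so instead of guessing."
)

def answer_customer(question: str) -> str:
    # Only the customer's question changes from one call to the next.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_customer("Hey there, how long is the program?"))
```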
Imagine that I receive an email from a customer about the MindStone programs. The email could be: how long is the program?
And so, what would I do if I were not building a custom GPT? The old-fashioned way; and in the world of AI, old-fashioned means two years ago.
I would probably try to write a very precise prompt, like: imagine you are the customer success lead at MindStone, you receive a question from a user about the program: hey there, how long is the program? I'm just making up an email, all right?
And then I'm going to ask the model to answer using the attached documents. I've already saved them, so as not to waste too much time looking for them.
So while they load, I'm going to show you what they look like. These are just some standard presentations, nothing special.
They're just PowerPoint presentations describing our programs. I'm not going to open them; just think of them as normal PowerPoint presentations.
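By the way, written out, this old-fashioned approach is essentially one long, hand-crafted message with the role, the question and the source material all crammed in together. A rough sketch of what I mean; the wording and the placeholder variables are made up for illustration:

```python
# Made-up placeholders standing in for the real email and the real slide decks.
customer_email = "Hey there, how long is the program?"
program_deck_text = "...full text of the MindStone program presentations pasted here..."

# Everything lives in one long prompt, which you have to rewrite
# (or dig out of an old chat, or copy from a Word file) for every new email.
prompt = f"""Imagine you are the customer success lead at MindStone.
You have received the question below from a user about our training programs.
Answer it using only the documents provided.

Question: {customer_email}

Documents:
{program_deck_text}
"""
```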
Okay, we have loaded. So, we see that the model is taking a deep breath.
Have you seen this before? Then it's explaining its reasoning, then it's drafting what I think is a fairly good message. I can tell you that the information is right.
Now, why is the model doing this with a deep breath? So the reason has to do with how I customized it.
Are you familiar with the customization feature? How many of you know about it?
So you can actually customize ChatGPT, and it applies in all of your chats.
That's a very good question, because for custom GPTs you need at least a Plus account.
Now, you see that I've described what I do, three main things: MindStone, I'm a lecturer in economics and business, and I do poetry and philosophy. And then, in the traits, I've put a bunch of information that effectively tells ChatGPT that it should explain its reasoning and not hallucinate. Then there is also this, can you read it? There was some research suggesting that if you tell ChatGPT in the traits that you will tip it, then it performs better
by a significant margin; we're talking about five to ten percent. Then some people started to explore: okay, how much should I tip? Is it 20, 200, 2,000?
It doesn't change much; the fact that you tip matters more than the exact amount. OpenAI has a very hard time trying to explain this, but it works.
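For reference, the kind of thing that goes into those customization fields looks roughly like this; the wording below, including the tip line, is illustrative rather than my exact settings:

```python
# Roughly what sits in ChatGPT's two custom-instruction fields (illustrative wording).
ABOUT_ME = (
    "I work at MindStone, I lecture in economics and business, "
    "and I write poetry and philosophy in my spare time."
)

HOW_TO_RESPOND = (
    "Take a deep breath and work through the problem step by step. "
    "Explain your reasoning before giving the final answer. "
    "If you are not sure about something, say so rather than making it up. "
    "I will tip you $200 for a well-reasoned, accurate answer."
)
```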
So what's the problem with this old-fashioned approach? It's that if I want to answer a different question on the same documents, with the same prompt, I basically need to rewrite the prompt every time.
Maybe I could do it in the same chat, but then I need to find that chat, so it's a bit time-consuming. I could copy and paste the prompt from Word, but that's time-consuming too.
Luckily, there is a better option.
So I've already built this. I will show you how it works first.
So reply to email. I'm gonna pick the same email. Hey there, how long is the program?
And so the prompt here is a lot simpler. And the result is pretty much the same, probably a bit better because this sounds a bit more like me, like someone who spent a long time in the UK but remains Italian.
Do we want to try with other questions just to show that I have not prepared this and it actually works? Any suggestions?
Here we do "reply to question": what are the advantages of the program? Okay, this is great, and it has only taken three bullets. You see, it seems to work fairly well.
And now what I will show you is how we can build it together. We will build it together very quickly. And I will show you that it's actually very easy.
You go to My GPTs and you create a GPT.
Then you have two options: you can create it or configure it. If you create it, it's more like a chat;
you can do it in a chat format. If you configure it, then you effectively need to go through all the different fields, most importantly the instructions.
So I will type: I want to build an assistant that can answer questions on MindStone's training programs. The assistant must only use the attached documents. This part is really important, because by telling it this you are reducing the risk of hallucination. It's like saying: these are your sources,
this is where you should take the information from. The tone should be formal, and with a British twist. I mean, we are based in London, after all.
I attach again the same documents that I showed you before. This will take a bit. So now it is building.
"MindStone Mate," it suggests. So you really see the British twist here. Let's go with "MindStone Assistant."
And now it's generating a profile picture. I think we will just accept it. All right.
You see the British twist again. There's a cup of tea. All right.
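By the way, if you would rather do this in code than by clicking through the GPT builder, the closest programmatic cousin is OpenAI's Assistants API. A rough sketch, assuming the beta Assistants endpoints of the openai Python client; attaching the two program decks, which goes through a File Search vector store, is left out for brevity:

```python
from openai import OpenAI

client = OpenAI()

# Programmatic cousin of the GPT builder's "Configure" tab.
assistant = client.beta.assistants.create(
    name="MindStone Assistant",
    model="gpt-4o",  # illustrative model choice
    instructions=(
        "You answer questions about MindStone's training programs. "
        "Only use the attached documents as your source; if the answer is not "
        "in them, say you do not know. "
        "Write in a formal tone with a British twist, using British English spelling."
    ),
    tools=[{"type": "file_search"}],  # lets the assistant search the uploaded decks
)

print(assistant.id)  # reuse this ID for every customer email
```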
Now, let's see if it works. "Reply to question in email:
hey there, how long is the program? Best, Caroline." All right, it's not bad. It's probably a bit overly enthusiastic for the way I would write, but I think with some further tweaking we could get a very good result.
And I will quickly show you, because I think it's interesting, the guidelines of the assistant that I created, so you get a sense of how it actually functions. These are the instructions of the assistant, and you see it's pretty much what we said:
only use information from these two documents; be clear and accurate; and, this to me is very important because I really want the model to sound like me when it's writing, use a professional, warm but not overly enthusiastic, friendly and helpful tone, with British English spelling. And then I've also specified some rules: if I say "reply to question in email" or "reply to email", you need to respond in a certain way and finish in a certain way.
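Spelled out, the instruction block behind the assistant reads roughly like this; I'm paraphrasing from memory rather than quoting it verbatim:

```python
ASSISTANT_INSTRUCTIONS = """
Only use information from the two attached MindStone program documents.
Be clear and accurate; if the answer is not in the documents, say so.
Tone: professional, warm but not overly enthusiastic, friendly and helpful.
Use British English spelling throughout.

Rules:
- If I write "reply to question in email" or "reply to email", draft a complete
  email reply: open with a short greeting, answer the question in a few sentences
  or bullets, and close with a sign-off.
"""
```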
So all in all, a tool like this can really allow me to save a couple of minutes while drafting each email. And if I need to do 10 or 20 by the end of the day, I've probably saved 15 to 30 minutes. And over time, this really accumulates.
And I use this personally for a bunch of things. For example, I've created a small translator from Italian to French. I'm based in Paris, so French is still quite difficult for me, and it's a lot easier for me to write in Italian and then copy and paste the French output.
It basically saves me the time of having to tell ChatGPT: okay, I'm Italian, I'm doing all of this, and I need you to translate this text into French.
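The translator GPT works exactly the same way; its instructions are roughly along these lines, again paraphrased rather than copied:

```python
TRANSLATOR_INSTRUCTIONS = """
I am Italian, based in Paris, and I write my drafts in Italian.
Whatever text I paste in, translate it into natural, idiomatic French.
Keep the register formal enough for correspondence with the French administration.
Do not add commentary; return only the French text, ready to copy and paste.
"""
```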
So I hope this was useful. I'm really happy to take a few questions or comments.
Yes. I did not hear you very well. I was wondering if you could point out the advantages of using this versus a traditional translator, like Google Translate or
Very good question. So the advantage is that you can shape the style a little bit more. So for example, there are some writers that I like in Italian and some writers that I like in French.
So I can tell the model, I want you to sound a little bit like that writer while you translate. Or I want to use these variations in French. I don't want you to use any informal words, for example.
So I think you can be a bit more flexible in the type of translation that you're asking for. You can take this further, by the way, because if you have the patience and the time, you could even try to upload some documents with French that you like, some PDFs, like I did with the... Actually, this is a very good idea.
I might try that. Because I mainly use it for emails. When doing communications with the French bureaucracy, I basically write in Italian and I take the output in French.
But... It would be interesting to try to upload some PDFs of French, or whatever language you like, and then see if they improve the output.
And from a technical standpoint, this makes complete sense because what you're basically doing is you are telling the model to source words from a particular cloud. So think about an LLM as taking information from the entire universe. And then there is like a sky of information out there.
And you're basically saying, take this cloud. I want information that is relevant to this cloud. Could be a writer, could be a subject, and so on and so forth.
Does that help? Any more questions on this?
Yes. So you're basically asking me if I'm taking this custom GPT and linking it to other things? In the UI example you just gave us, is there a way to force it so that it only responds within constraints, not just through the instruction set, but as an actual object?
There should be. We can chat after the event, and I can explain further how to make it work. So, another question?
Yes.
Any other useful use cases?
So, I mean, I showed you the translator. I think, what else?
There is customer support. I mean, think about any sort of activity that you do a number of times per day.
And email drafting is generally great, because you tend to draft emails on a specific subject, and you do it quite a bit. And again, the point here is not to come up with a perfect answer.
The point is to save time. And even the fact that you are taking something that is already written and you are correcting it can allow you to save time with respect to writing it from scratch. And by the end of the day, all of that accumulates.
Does that help? Yes? Yeah.
Yeah, I mean, another thing: say you're doing job applications and writing cover letters. I think a lot of that can be automated. I've done it myself.
Of course, the important thing there is that you upload some samples of things you've written, and you cannot expect the model to do the research on the company for you. That's where the problems begin, because the tendency of the model to hallucinate will be very high, and you will probably end up with an output that is
problematic for the application. But it would still save you a lot of time if you pre-program a custom GPT with a bunch of cover letters that you have already written, and set it up to pick up on three or four bullets that are specific to that company and that application. In a sense, that's freeing up time: rather than just doing the writing, you are freeing up time that you can dedicate to the research and to thinking about why you are a good fit for that company.
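As a sketch, a cover-letter GPT along those lines might be instructed something like this; the wording is entirely illustrative:

```python
COVER_LETTER_INSTRUCTIONS = """
You help me draft cover letters in my own voice.
Base the style strictly on the sample cover letters I have uploaded.
For each new letter I will paste the job advert plus three or four bullets about
the company and the role that I have researched myself; build the letter around
those bullets and do not invent any facts about the company.
"""
```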
So here, the point is that I think we should really think about these models as calculators for words. They are basically tools that allow you to reduce the writing and typing time a little, in the same way that a calculator means you don't need to do 300,000 times 527 in your head. I think this is how I would think about these models.
Would you say there's any risk in taking that approach? I mean, it depends. You would need to check very carefully the policy that the company you're applying to has with respect to AI.
But it is possible that they will use an algorithm that scans the cover letters. To me, that's a reason to really ensure that the custom GPT you build has a lot of information from cover letters that you have written, and writes in a way that is consistent with your writing style, as opposed to just the average cover letter that is available on the web.
Does that help? Yes?
Yeah, it's a very good question. Thanks for clarifying.
Yes? Beautiful. Beautiful.
I think it's a great point, and it could be about laws, too.
So, the Social Security Agency in Italy, for example: Italian norms tend to change quite often, and the language is actually very difficult for people to understand, because it's this dense legalese that is mainly spoken in Italian ministries.
And so they are using some advanced version of this to basically translate the laws as written into everyday Italian.
Of course, one would think that the better option would be to write laws that people can understand. But as that is not happening, we are actually using the models to facilitate that translation.
It's the same concept, like with the blog.