What type of people are here in the audience? Okay, how many developers do we have here at the moment? Quite a few devs, and obviously a few gray hairs, not so much hair, out there in the audience.
And how many people are here from a marketing point of view? Couple of people in marketing as well, okay.
And how many people are here just to find out what this whole AI thing is and how do we use it a little bit more? So a few people there, okay, great. No problem at all, okay.
So what we're hoping to do is gonna span a few of those groups. We've got something for everyone here, I think, in what we do.
So I'm gonna dive in and kick off. Let's see if my computer comes up and we can show that.
Okay, so I'm an ex-investment banker as well. Don't hold that against me.
But I did that for about 15 years where I used to run global businesses in the city. And we built a lot of technology. And I've actually been using AI actively in business for over 20 years now because we were building AI-based trading systems 20 years ago when I worked for UBS and then Barclays Capital.
So we did a lot of work. I used to have 60 IT guys working with me in the business in those times. And we sort of built a lot of cutting-edge tech product during that stage as well.
But since I've left there, I've been in tech startups for 15 years as well. I'm sort of old now in all this stuff too.
And in my tech startup time, we've done a number of companies and I've had a few exits. One of them was a robotics company; we were doing robotics 10, 15 years ago. I left that basically because we needed AI to get robotics to work.
And that's what we're beginning to see now with all the robotics innovations that are happening out there alongside this. And I've been focusing exclusively just on AI for the last 10 years in a number of different capacities.
And I exited a FinTech startup a few years ago, which let me indulge myself in just playing with AI and bringing myself up to speed.
So I have done a lot of work with the latest AI and the latest AI tools over the last couple of years now. We've built a lot of stuff. I wanna share some of that stuff with you today.
David's gonna share how we do some of that, but I'm gonna show you a couple of demonstrations around this stuff as well, if that's good. Okay, does that make sense? Okay, cool.
So my current company's Fueled, I'll talk about that in a sec, but before we do that, let's get AI building something for us. Okay, so this is an AI tool that's come out from Google called AI Studio, their own version of a building tool. So how many people have heard of Replit or Lovable? Has anyone played with that stuff?
Okay, so this is Google's version of the same thing. So, let's see what we got here, right? We're gonna throw out and build something simple that everyone can understand.
Okay, so let me vibe into this and see what it does. We're gonna use the latest speech-to-text models, we use voice for a lot of what we do, for speed of input, which helps, right? So, let me see if this is good.
Okay, I'd like you to build me an app that's very simple, that just does a simple onboarding flow, please. And we want that onboarding flow to take in the name, the age, and the weight in kilograms and height in centimeters of the user. We're gonna use that to calculate their basal metabolic rate and their active metabolic rate.
And we wanna do that with both a UI and also voice. So do a voice-interactive session for all of that, please. Let's go. Off it goes.
Let's see how that goes, we'll come back in a sec, it's gonna spin along.
Now let me talk as that works in the background. Because that is what I do a lot of, multitasking with these AI agents. We set something off, we get it doing it, and then we kick on and do other stuff.
It's absolutely a nightmare for a person with ADHD like I have, but it is pretty interesting, and when you can channel it correctly, it can make you very, very productive.
So what are we doing with Fueled? I'm gonna talk about a little bit of this stuff.
So we're on a mission to solve health at the population scale. So it sounds very ambitious, and it sort of is, but we're pulling together a lot of the tools to do that.
Obesity, as we all know, is a big problem in the UK, and it's costing us a lot, from both a human and a financial point of view.
So what are we looking to do about that? We're looking to improve and optimize diet, which we do through both recording of diet and analysis of it. We're looking to pull in the exercise and lifestyle components too, so integrating wearable data to bring together sleep, exercise and stress. Then we apply personalized support on top of that, with a combination of humans and AI, to deal with various niche use cases, and then try to expand that to population scale.
That's the big vision of what we're trying to do. We're making good headway with a very small team.
Okay, so what have we got? This is going to give you some context of what I've been building, but also some examples of how we can use AI in production. Because we're building apps that use AI as part of the product, but we're also building those apps with AI. I'm going to show you some of those things as well as we go through this.
Okay, so our app is basically a food recognition app. So we basically just take a picture of food and it'll give you the full nutritional breakdown from just the image, okay? So that's the very simple, specific thing that we offer.
And we've had that out in the market now for about 15 months or so now, just over a year. And on the back of that, we've learned a load of stuff about building AI product in production, which is sort of important, how we're using these models and what we're doing with that. And we've got some cool features underneath this that have been brought out and even highlighted by Google with some of the work that we've done with those guys.
But it's not just data capture. We're working with universities to capture data and run experiments with it, because it's pretty easy to do. And with all the data we get, we wanted to analyze it as well.
So what we've also got, alongside this, is a portal that personal trainers, nutritionists or functional medicine doctors can use to analyze all the nutritional data we capture, and we let them build reports from it. We've just now applied the latest technology, MCP, the Model Context Protocol, to build agentic systems that help produce really good, accurate reports.
So what does that really mean in practice? And I'll show you some of this stuff, but it's really quite interesting how the AI begins to unlock the data analysis elements and bring it into personalized terms for the user to make it very accessible.
And I think that's the skill of what we're trying to do with some of these things.
Okay, so the initial bit of AI coaching that we started out with is interesting. You take a picture of a meal, and it'll then give immediate feedback. It'll give the items and the elements of that meal.
We're very conscious we're using stochastic models, right? So it's not absolute.
I don't know if any of you guys have watched Silicon Valley. Any of you guys watch Silicon Valley, fam? Few out there.
Do you remember that scene where they did the hot dog app? It goes: I've got an AI, it does hot dog recognition. This is hot dog.
It identifies the hot dog. Then it takes a picture of a slice of pizza, and it goes... not hot dog.
And basically, the thing we're doing is a much less clear-cut problem than that. How do you go about doing it? One of the things we do: when we're getting ideas of what the items of food on the plate are, we understand there's variability in that. So for each item, we give six options that the user can flick between to get an accurate recording.
And what's interesting about that is, actually, one of the things we learned about this, which a lot of people didn't get early doors, was how it actually works.
Okay, so ChatGPT came out, large language models, you can talk to it like a human, right? It has a conversation back and forth with you in text and language, okay? Then all of a sudden you were able to upload an image, and it was able to analyze that.
Okay, can anyone here tell me how those models are analyzing that image? Open up the room here a little bit. How do you think it analyzes a plate of food?
Okay, two answers. Let's go from the lady here first. Okay, go ahead.
It analyzes pixel by pixel. The model can recognize... okay. So then it's trying to identify, like, an item on the plate, that sort of thing.
Okay. Wrong. Any ideas in the back?
Well, I think it's pretty much the same idea. It's trying to predict what is next in a series of pixels. So it's going to try to say what's coming after this.
Okay, so what both of you said is similar, and it's not right, but it's not a bad guess. That's how people were trying to do this stuff for a long while. When I was doing robotic vision for my robots about 10, 12 years ago, this is what we were doing.
So we were doing different types of video analytics, pixel by pixel, trying to do exactly that. And there have also been startups trying to do this with food recognition for a while.
There was one company, guys who rolled out of Google, who had a camera on a mount to take the best high-resolution pictures of all these different food items and build a model just to recognize food items at scale. Then when OpenAI launched GPT-4o, it was able to take in pictures for the first time, and these turned from large language models into large multimodal models.
Most people, even professors of robotics and professors of AI, when I ask that question, give the same answer you did, right? So your answer is not a stupid answer, because that is how it was done. Everyone doing a master's or a PhD in machine learning, robotics or vision thought that's how you needed to solve that problem.
So transformer models came along and approached that problem in a very different way, right? So when a transformer model takes language, it turns it into vectors, it turns it into tokens, and it's looking at that in the vector space. When the model analyzes an image, it's taking the image and putting it straight into that same vector space, okay?
So it might be decoding it pixel by pixel, but what it does with those pixels is not to try to recognize that pixel as a car, right? It's taking this whole image of a plate of food and moving it into the same vector space the rest of the large language model lives in.
So the subtlety of this is: it's taking that food and putting it in the same place, vector-wise, as every recipe it's got from the internet, and every recipe book it's digested as well. So when it guesses what's in a plate of spaghetti bolognese, it's doing that with the context of every spaghetti bolognese recipe that's out there.
That's a very subtle thing a lot of people don't understand about how these large language models and large multimodal models look at images, and even video and audio; that is the mechanism of what they're actually doing. It's a very interesting mechanic, and it changes how you think about problems when you're solving things.
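For the devs, here's a very rough sketch of that idea in code: text tokens and image patches both end up as vectors in one shared sequence. This is purely conceptual; every function name and number is an illustrative stand-in, not any real model's internals or API.

```typescript
// Purely conceptual sketch: text tokens and image patches both become vectors
// in one shared sequence. All names and numbers here are illustrative stand-ins.

type Vector = number[]; // one embedding, a few hundred floats in practice

// Text path: tokens -> vectors in the shared embedding space (toy stand-in).
function embedTextToken(token: string): Vector {
  return Array.from(token).map((ch) => ch.charCodeAt(0) / 255);
}

// Image path: pixel patches -> the SAME vector space (toy stand-in for the
// learned projection). Note there is no "is this pixel a carrot?" step.
function embedImagePatch(patchPixels: number[]): Vector {
  return patchPixels.map((p) => p / 255);
}

// The transformer then attends over one combined sequence, so the photo of a
// bolognese ends up near every bolognese recipe it saw in training.
function buildModelInput(promptTokens: string[], imagePatches: number[][]): Vector[] {
  return [...promptTokens.map(embedTextToken), ...imagePatches.map(embedImagePatch)];
}

const input = buildModelInput(
  ["what", "is", "in", "this", "meal"],
  [[12, 200, 34], [90, 41, 77]], // two tiny stand-in "patches"
);
console.log(input.length); // 7 vectors: 5 text tokens + 2 image patches
```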
That's super technical, but I wanted to share that because it's sort of interesting.
Oh, my screen's gone dead. Oh, let me get this going again. Is that working? Let me take it out and plug it in. Oh, there we go.
Right, so coming back to coaching: we take a picture of food, we identify the items, but then we try to humanize it a little more. We do that by bringing AI in at a second level, on top of the picture analysis.
I can talk through this, so it's not gonna be the end of the world. What's that? Is that the mechanism there? Give it a spin... no, still not working. It looked as though it was about to work. That'll be annoying if we can't get this sorted.
And what we do is we have little AI coaches, and we let the user select how they wanna be communicated with: the user can choose different coaches to deliver the feedback.
We take the items in the meal they just had and give them coaching advice on how to improve the health of that meal, based on what it is. We don't try to say everything you eat should be a salad; we don't try to push some ideal at them all the time. We try to bring it to them in a way that meets them where they are.
Because if you want to make health change, it needs to be incremental. It doesn't generally happen in one fell swoop.
You're still playing there? Okay, cool. So it'd be good to be able to show you some of the reports that we do. So that's what we did initially in the app in terms of the AI analysis bit.
What we then wanted to do with that was work with people over time and with other professionals to help them analyse diet and capture. So we're doing some experiments with the University of Exeter in terms of diet and how people use diet. We're using our tool as a data capture app for that and we're giving them the tools in the back end to be able to analyse these diets quite a lot. And we're working with some personal trainers who want to use that for their clients as well.
So if you've got someone in the gym lifting a lot of weight, and you're going to make real gains in your body, whether that's to lose weight or to build muscle, you want to be able to add diet into that mechanic too.
So we have a series of tools and a portal in the back end.
Is that it? Let's see if I can pull that out and try it in again. Is that it? Okay. AV hassles.
So what we've done is give the university staff and the functional medicine people we've been working with a portal that can access all that diet data, securely shared with permission from the user. So it's very secure, all the work that we do. They give permission to share their diet and their health goals and things like that.
When they do that, we run a number of bits of analysis on the data; we've been building reports on this for over a year now, in terms of how to build a really interesting report that works well for the user. Then we use AI to help interpret some of that analysis, give us a result back, and turn it into language the user can use themselves.
And what we've done interestingly is that we've just recently rewritten all that piece of work. And we've done that now with agents and made that into a bigger agentic workflow as well.
So who's got an idea of what an agent is? It's one of the hot topics in AI. A few people, okay. So, agentic workflows.
Oh, now you're just teasing me. Agentic workflows are basically about breaking a task down into smaller tasks that differently skilled pieces of AI can each go and do a job on.
What the agentic flow looks like for our report building is this: we have a series of agents that work underneath an orchestrating agent. They go away, do jobs, come back, and feed information into the main orchestration flow, and then we let other agents operate on that information to give a final result.
Okay. As opposed to that being really abstract, let me pull that back into the use case that we have at the moment with this.
We have a user that's recorded their data. We've worked with them. They've given us health goals, et cetera.
So our first agent just goes off and gets the data that it needs. There's a couple of different sets of data that it needs to get.
So we have one agent that goes off and gets all the meals the user's recorded on the app for the last week. We have another agent that goes off and gets their profile, their health goals, et cetera.
When we come back with that, we have an agent that says, okay, let's take all this data and do some initial analysis on it: the top foods they've eaten for protein, their top calorie sources, whether they've met their targets. There's a bit of raw, straight data analysis we do first.
With those results, we then package it up and send it off to LLMs, asking specific sets of questions to report back on each of those items. And to put the report generation together, we built a reporting tool that works with humans.
So basically, we give the user back four or five different options for each item so they can choose which one they want to do to compile it into the report. When we get all that data back, though, still within the agentic workflow, we then say, okay, this is all good, but actually some of these suggestions are a bit bad.
So we've had to add in a chef-based culinary check to say: okay, this is telling us to add this food with that food. Does that make sense from a kitchen point of view? Would I put this on a plate, right?
So we've got a chef agent that works with us to say, that doesn't make sense for like, you're trying to combine two really strange ingredients. So that's an important bit of it.
And then with that, we get all that stuff back, and then we get an agent that summarizes all that's learned from this and writes a couple of nice summaries in a couple of different tones and a couple of different perspectives from the user.
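To make that concrete for the devs, here's a heavily simplified sketch of that kind of orchestration: fetch agents, a plain analysis step, a drafting agent that returns several options per section, a chef check, and a summarizer. Every name and shape here is a hypothetical stand-in, not our production code or any particular agent framework.

```typescript
// Heavily simplified sketch of the report orchestration described above.
// All names and shapes are hypothetical stand-ins.

interface Meal { name: string; protein: number; calories: number; }
interface Profile { name: string; goals: string[]; }

// Placeholder for a real LLM call.
async function askModel(prompt: string): Promise<string> {
  return `model answer for: ${prompt.slice(0, 60)}...`;
}

async function buildWeeklyReport(
  fetchMeals: () => Promise<Meal[]>,    // agent 1: last week's meals
  fetchProfile: () => Promise<Profile>, // agent 2: profile and health goals
) {
  // The orchestrator fans out to the data agents first.
  const [meals, profile] = await Promise.all([fetchMeals(), fetchProfile()]);

  // Plain, deterministic analysis before any LLM gets involved.
  const topProtein = [...meals].sort((a, b) => b.protein - a.protein).slice(0, 5);
  const totalCalories = meals.reduce((sum, m) => sum + m.calories, 0);

  // Drafting agent: several options per section so a human can choose one.
  const drafts = await Promise.all(
    Array.from({ length: 5 }, (_, i) =>
      askModel(`Draft option ${i + 1}: protein summary for ${profile.name} from ${JSON.stringify(topProtein)}`),
    ),
  );

  // Chef agent: sanity-check the food pairings before they reach the user.
  const chefReview = await askModel(
    `As a chef, flag any suggested combinations that make no culinary sense:\n${drafts.join("\n")}`,
  );

  // Summarizer agent: pull it all together in the right tone for the user.
  const summary = await askModel(
    `Summarize for ${profile.name} (goals: ${profile.goals.join(", ")}; ~${totalCalories} kcal this week): ${chefReview}`,
  );

  return { drafts, chefReview, summary };
}
```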
And we're working with some of the top health clinics in Harley Street at the moment, where people are paying tens of thousands of pounds for their services, and they've decided to use our tool for that analysis. We've helped develop some of this stuff with those guys, so it's been quite an interesting process over the last year.
And then we give all that back into the UI again, and that is then put together at the click of a button into a nice report they can send to the users, like a 12-page report. So it's pretty cool, and it seems to be working well in practice.
We're doing lots of stuff, and there's more going on besides.
We've still got no camera, though. We think it's the stick going into the projector there. So we're going to try and nudge it. So we're hoping he's going to come back with something that can reconnect with us. Do you want to open up to questions in the meantime?
Oh, right. Okay, right. Okay.
So that's some of the stuff that we've done there with that and how we're moving it forward. Now, with the latest models that have just come out, we've done a lot more development recently, and we'll be able to integrate a lot more stuff with the wearables that we're doing now. So we've got that going now.
So you've got Whoop and Oura, and we're getting data from Apple Watch now, pulling that into the mix. The reason that became very interesting is because, all of a sudden, when we get that data in, we can begin to do analysis of sleep quality, of exercise amounts, of real-time stress.
So we've had a bit of fun building real-time stress models and this stuff as well. I'm working with some professors from Oxford who are experts in that area. And in conjunction with that, we're putting together some nice output from that too. That's all coming together really well.
We're going to add those data into our agents and make another agentic flow that will help build and weave in that data to the process as well. With all of that in place, we can then do some really interesting stuff. Really give very good personalized advice, not just at any given time, but how people are using these things over time.
What does their health journey look like?
If you need to lose like... five stone or something like that. It's going to take a bit of time.
So you need to be kept going on the journey. And we try to stay there with the user on the journey.
So we can do a nutritional report where you capture your data for about a week, we do an analysis of it, we keep monitoring as you go through time, and maybe we capture another week in two or three months' time to see how you're going and how you've progressed.
So that's what our reports look like. As you can see, we can give some quite detailed reports.
Each of these sections of writing there can be edited by the user. We give them five options for each section. It's like a big 12-page report.
We're making things like food recommendations and things like that on there. So it's quite cool, quite comprehensive. But also what's really good about it, it's completely personal to the user, right?
What's been funny about this is I've been doing it myself. I'm pretty good with nutrition and know what's going on with this stuff, and it still calls things out from my diet, picks them up and says, you'd better watch this and this because of what you're getting.
I'm like, oh my God, right, okay, I hadn't clocked that that combination wasn't good. It's beginning to make some really intelligent insights into my own diet and how I can improve it. And I'm finding I'm learning from my own system that I'm building, which is quite interesting in itself.
So this is one of the other cool things we did. So one of the things people have said, okay, it's great that I can record what's on my plate, but I want some ideas for recipes before I do this. So we said, okay, how do we build a really cool recipe builder?
So what we've done, we've done a number of things. This was sort of nearly just like a pure experiment thing, just for the hell of it.
But the first window over here at the top is a little tool we've built where you just take a picture of your fridge and your cupboards, and it itemizes everything you've got in your fridge from just one picture. It's ridiculously good. I was like, this is insane, how can it do this? You have one picture of a messy fridge and boom, you get all these nice little items out of it.
One thing we added: when it gets the food items back, it was looking a bit messy when we tried to use different images for them. So we only show one style of picture, and we create each of those images with an image generation pipeline that runs in the background. They all look the same, nice little grey background, et cetera. So that works pretty well.
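As a rough sketch of that two-step flow, one multimodal call to itemize the photo and one image-generation call per item for consistent thumbnails, it looks something like the following. Both calls are placeholder stubs, not the real APIs we use, and all the names are made up for illustration.

```typescript
// Rough sketch of the fridge-photo flow: itemize the photo, then generate one
// consistently styled thumbnail per item. Both calls are placeholder stubs.

interface FridgeItem { name: string; quantity: string; imageUrl?: string; }

// Placeholder for the multimodal model call: one photo in, an item list out.
async function identifyItemsFromPhoto(_photoBase64: string): Promise<FridgeItem[]> {
  return [
    { name: "red pepper", quantity: "2" },
    { name: "tofu", quantity: "1 block" },
  ];
}

// Placeholder for the image-generation call, always prompted in the same style.
async function generateItemImage(itemName: string): Promise<string> {
  const prompt = `Studio photo of ${itemName} on a plain light grey background, soft shadow`;
  return `https://example.com/generated/${encodeURIComponent(prompt)}.png`;
}

async function buildPantry(photoBase64: string): Promise<FridgeItem[]> {
  const items = await identifyItemsFromPhoto(photoBase64);
  return Promise.all(
    items.map(async (item) => ({ ...item, imageUrl: await generateItemImage(item.name) })),
  );
}
```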
We then, from that, we have a tool that says, okay, what do I fancy tonight? So here's what's in my fridge. So I'm going to choose some chicken or some tofu or some peppers, et cetera.
And what cuisine do I like? Okay, I fancy a Thai cuisine tonight. So then we go, go.
It gives the user three recipe options, and they can pick one and get the full recipe.
And it puts out a proper set of outputs: the ingredient list, the method, et cetera. And then they can turn that into a grocery list.
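A minimal sketch of that step might look like this, assuming a structured prompt built from the chosen ingredients and cuisine, with the grocery list derived from whatever the chosen recipe needs beyond what's already in the fridge. The model call is a placeholder, not our production pipeline.

```typescript
// Minimal sketch of the recipe step; the model call is a placeholder.

interface Recipe { title: string; ingredients: string[]; method: string[]; }

async function suggestRecipes(chosen: string[], cuisine: string): Promise<Recipe[]> {
  const prompt =
    `Suggest 3 ${cuisine} dinner recipes using mainly: ${chosen.join(", ")}. ` +
    `Return a title, ingredient list and method for each.`;
  console.log(prompt); // in the real app this prompt goes to the model
  return []; // canned empty result in this sketch
}

// Whatever the recipe needs that isn't already in the fridge becomes the grocery list.
function groceryList(recipe: Recipe, fridgeItems: string[]): string[] {
  return recipe.ingredients.filter(
    (ing) => !fridgeItems.some((have) => ing.toLowerCase().includes(have.toLowerCase())),
  );
}
```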
And we're talking to some supermarket chains at the moment that they might be interested in getting access to that grocery selection.
This was also cool, even Google profiled it earlier on in the year. So they wrote some blogs as part of their Firebase setup because of the work that we've done. We've done a few things with them.
We're on the customer advisory council for Google's Firebase Studio and for their Gemini models as well, so we're actively involved in some of that and giving those guys feedback on how we're actually using this stuff in practice.
They actually had me in talking to some of their engineers, because they were even surprised by some of the output we were getting and the speed at which we were doing it. It's mostly me, and I've just recently hired a junior engineer, so one person has done most of this, and now there are two of us working on it, which is going well.
One of the things that's come up, and I'll come across to it now: we've been using so many of these tools over the last little while that I've had to build some tools to help me control my use of AI, because we're doing so much. Now, David's gonna talk about how we do it, so I'm not gonna steal his thunder, but we've been using these models so much in the background, and this is one for the devs in here: how do you keep control of how much AI you're using?
So what we've built is this AI Coder Guru setup: a tool that lets me analyze my token usage, and it's going to let teams analyze token usage across their developers to encourage more of them to get in and use these tools.
So this is sort of what it looks like.
In the last three months, I've gone through 2 billion tokens. That's a big number, and it actually is a big number in practice. This is a lot of usage and cost over there.
So we've spent $1,275 on AI. About half of that's been in the last month or so since the latest Claude Sonnet 4.5 model came out. We've built so much stuff in that time.
That sounds like a lot of money, but it's sort of not. If you understood what we built with that, it's insanely good: an amazing amount of productivity and quality output for that amount of spend.
But as you can see, it's getting to be a reasonable amount of money, so if you're a business with lots of engineers, you need some tools to manage it. That's why we're building this AI coder tool as a side project: we needed it ourselves. We were able to turn it into a SaaS app and lob it out there, and we're just finalizing that now.
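For the devs, the core of a tool like that is just rolling usage events up per developer and per model. Here's a minimal sketch with made-up shapes, not the real AI Coder Guru data model; it also shows the blended rate implied by the numbers above, roughly $0.64 per million tokens.

```typescript
// Minimal sketch of the kind of roll-up a usage tool does; illustrative shapes only.

interface UsageEvent { developer: string; model: string; tokens: number; costUsd: number; }

function summarizeByDeveloper(events: UsageEvent[]): Map<string, { tokens: number; costUsd: number }> {
  const totals = new Map<string, { tokens: number; costUsd: number }>();
  for (const e of events) {
    const t = totals.get(e.developer) ?? { tokens: 0, costUsd: 0 };
    t.tokens += e.tokens;
    t.costUsd += e.costUsd;
    totals.set(e.developer, t);
  }
  return totals;
}

// The numbers above: $1,275 over 2,000 million tokens.
console.log(1275 / 2000); // ≈ 0.64 dollars per million tokens, blended
```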
In the last week and a half, two weeks, we put this out as another little mini brand out there as well. And one of the other things we did, which is really fun, and I talked about this in our group last week, but this is super cool.
We got another... This output was one prompt from Claude Code, and it was super interesting.
So every day we have a team meeting in the morning. We have a remote team, so we all get together, and we'll have a team meeting where we discuss about, just get together, chat about how everything's going with just life and our team, and then we talk about a couple of tasks that we're doing through that.
They're on Google Meet, recorded in the background, and AI does a voice transcription of that every day and saves the files down to Google Drive. I don't normally touch them.
So this was one prompt. I basically went into Claude Code, sorry, not Claude Code, Claude itself, the UI, connected it to Google Drive, and said: okay, go into my Google Drive, look at all my morning stand-up meetings, go through each of them, and get all the tasks.
I want you to build a Gantt chart of all the tasks we've done as a team and also then split it up into per person, okay? That literally was the prompt, go. And the output was this.
And it actually built a little mini app to do this. It went off through all of that and produced this Gantt chart, which is what I'd asked for, a Gantt chart of the project work we'd done. And this is the output, and it's actually pretty damn spot on for the work that we've done.
It picked out all the tasks and who was doing each one. The first screen is for the whole team, showing when these things were completed, and the second screen is per individual person, so these are the tasks each person was working on and what they contributed over time.
It was pretty remarkable and pretty low effort. You have to get the right prompt into it, but we're getting better at prompting these things and these technologies all the time. So on that note, segue into, let's see how our prompts going on the other side of it here, okay.
Whoa, okay. Allow this to access the microphone. Okay, cool. Welcome to the BMR calculator.
Let's see how this goes. So that was just a prompt we said earlier on. In the background as we've been talking up here, it's built this. I don't know if this is gonna work or not.
It's live demos, let's see. Let's turn up the volume. Okay. Let's see, is it going, how is it picking this up?
Okay, microphone. Glenn. Let's turn on my microphone and my phone. My computer's doing that at the moment. Okay, let's try it.
Glenn. Is that working? No. Okay, the voice input's not just fired up yet, but let's see, let's go through and see what the UI looks like.
Okay, what's your age? Okay, I'm old, I'm 52. And a male, okay, cool.
Select my weight in kilograms, quite nice. Okay, here we go. 71 K. Is that weight, would I get that right?
Oh, weight and height are the wrong way around, so 71, okay, and I'm 175. Okay, boom. That's not too bad, right?
So it made a couple of assumptions in there, but that's about right, only slightly different. And that's from that one prompt.
It didn't get the voice bit right, and I can mess around with that; without too much difficulty I could get some of that voice stuff working, put a voice AI model in there in the background if I wanted to, and some other stuff, which is quite advanced in terms of these tool sets.
But you can see how quickly we've built a quick little calculator with just one prompt. It's given me two or three screens, connected them together, and got the data. It's even got the right equations to calculate basal metabolic rate and put those together. That's a very quick little mini app.
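For reference, the usual equations behind this are simple. Here's a sketch using the Mifflin-St Jeor formula, which is one commonly used version; I'm not claiming the generated app used exactly these constants or activity multipliers.

```typescript
// Sketch of a BMR / active metabolic rate calculation using Mifflin-St Jeor,
// one commonly used formula; constants and multipliers are the textbook ones.

function basalMetabolicRate(weightKg: number, heightCm: number, ageYears: number, sex: "male" | "female"): number {
  const base = 10 * weightKg + 6.25 * heightCm - 5 * ageYears;
  return sex === "male" ? base + 5 : base - 161; // kcal per day at rest
}

function activeMetabolicRate(bmr: number, activityFactor = 1.55): number {
  // Typical multipliers: ~1.2 sedentary, ~1.55 moderately active, ~1.9 very active.
  return bmr * activityFactor;
}

// The numbers from the demo: 52-year-old male, 71 kg, 175 cm.
const bmr = basalMetabolicRate(71, 175, 52, "male");
console.log(Math.round(bmr));                      // ≈ 1549 kcal/day at rest
console.log(Math.round(activeMetabolicRate(bmr))); // ≈ 2401 kcal/day, moderately active
```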
I'll give you another practical example for the marketers in the room, okay? And this was something we did last night that was actually pretty good.
So we've got some Facebook campaigns running at the moment, and here's the page we direct users to from the campaign, which gives them a little bit about what we do.
At the bottom they can sign up, and we've got a couple of subscription levels down there, a standard sort of page. We also have a promo code that can be added at the bottom. So if I put one in here, 'Hamish', it goes away, checks against our promo codes in the background, and gives a discount. So that's quite good.
But what was happening was that when users clicked through to the page with a promo code, they weren't getting anything at the top up here. So what we've just added is this little 'special offer has been applied' bar, and a bit of confetti, at the top of the page. So a user who clicks on an ad with a promo code gets immediate visual feedback that it's been applied.
Adding that little bar and the confetti at the top was, again, one prompt, because we now vibe code this website. We moved away from Webflow, which a lot of marketers have used and which I've used for the last three or four commercial websites I've done. I've taught my marketing team to vibe code this stuff as well, and their productivity is so much higher.
And we can add something like that in like... Literally, this took 30 seconds last night. It sort of even surprised me.
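The behaviour itself is tiny; something along these lines would do it. This is a hypothetical sketch, not the code the prompt actually produced, and the query parameter, element id and styling are made up.

```typescript
// Hypothetical sketch of the promo banner behaviour; parameter name, element
// id and styling are illustrative, not the real site code.

function applyPromoFromUrl(): void {
  const code = new URLSearchParams(window.location.search).get("promo");
  if (!code) return;

  // The "special offer applied" bar at the top of the page.
  const bar = document.createElement("div");
  bar.textContent = `Special offer "${code}" has been applied`;
  bar.style.cssText = "position:sticky;top:0;padding:12px;text-align:center;background:#e7f8ec;";
  document.body.prepend(bar);

  // Pre-fill the promo field further down the page so checkout picks it up.
  const input = document.querySelector<HTMLInputElement>("#promo-code");
  if (input) input.value = code;

  // A confetti burst would be triggered here as well.
}

applyPromoFromUrl();
```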
There's a new, faster model coming out from Cursor, and that was really, really effective. So on that note, it's a good point to hand over to David, who's going to show you Cursor and some of these tools.
Before that, any questions, actually, before I step down? We'll maybe catch up later if you want.
Yeah, how much of that is... Just your prompt engineering. I'm talking about your main app. How much of that is prompt engineering?
How much do you do custom embeddings? Yeah, good question. The prompt, it's not so much prompt engineering, it's how to talk to the model in the right way.
So I've got a skill set now where I can do most of the roles in a business, largely because of the nature of most of my failures, as much as anything else, which is part of the learning. I was effectively a chief product officer, or at least had strong product skills, in a lot of the work I've done. A product person is someone who takes the ideas for the product, talks to the customers and the market, turns that into a product idea, and then interfaces with the engineering team to create it.
But I've also got quite reasonable architecture skills and know how to put systems together. So when I talk to the computer and prompt it, I'm asking it to do things from the point of view of the customer, with an understanding of what's going on in the code, but also with an understanding of the architecture.
So when I ask it, it tends to get it right. I don't overly intellectualize the prompts. I know Joshua, who runs MindStone, does a lot of work in prompt engineering, but I tend not to do that. It's not my style.
We're finding different styles of people of how they interact with that. I voice talk to the computer a lot, so I find voice as an input is super, super good. We've now used that a lot, so that seems to work. You tend to get more information by voice than you would do if you had to type it all in, so it seems more natural.
I'm just talking to the computer, asking it to do stuff. That's sort of the way it works for me. It works pretty well.
Let's go here now. Good question. So you might have mentioned it, and I missed it.
When you did the Gantt chart... So, the interesting bit is, I just connected it to my Google Drive. My Claude is already connected to my Google Drive, so I just told it: go to my Google Drive, find the files that are the saved transcripts from my Google Meet morning meetings, extract from those transcripts the tasks, the objectives and the people allocated to them, and then produce a Gantt chart. That's all I asked it. That's an unbelievably flexible thing to ask, something super, super high level.
If you did that as a project, people could spend weeks doing exactly what I just asked the computer to do there. It did it in about a minute. I was blown away by the data structures that sit behind it. Your question's a really good one, because that is the non-trivial part of it, and it did a really good job of creating that itself in the background before putting a UI on the front of it.
So basically you recorded your meetings, you transcribed them, and then... Yeah, you just pointed to the source. Yeah. And that source was just a Google folder, right?
It wasn't even a structured data source. That's what's sort of even more surprising about that. Did you use any embeddings in the background?
Yeah. Literally, I just asked a question. I literally have just described to you what I asked. It was one prompt. It was literally one prompt.
And it was descriptive, probably about 12 to 15 lines long, and that's what it produced. It was one shot, and it was like, wow. It sort of surprised me; I didn't ask for that level of detail, or for it to be an app where you can click on users and see what they've done. It just gave all that out of the box. It's pretty impressive.
One thing I will say, one of the things I've learned over the last three weeks or so, is that I've now come to the conclusion that the AIs are doing a really good job of understanding me as a human and what I'm trying to do. What I mean by that is, they're understanding the context of the task I'm giving them really, really well.
So I'm asking it for a feature: okay, I want to add a feature, I want this button to do X, right? And the interesting bit is that when I ask other people on my team, I have to spend a lot more time explaining what that task and that job to be done is. The AI understands it faster, because it's got more context of the real world.
That's the thing that's just clicked for me: that's what it's doing, and that's the way I'm interacting with it. It's like a really, really smart colleague now, as opposed to a junior colleague, which it definitely was three or four months ago. It's interesting how fast these things are evolving right now. Okay.