It's nice to see it's getting crowded.
So this is our second event, as was already mentioned.
So I just wanted to mention something.
So Mindstone is our format.
Apart from that, here in Cambridge, what we are trying to do is build a community, an AI community and a hub, where we can collaborate and share.
So the talks and so on are not set in stone. If you have anything to share, please reach out to us.
And, yeah, back to the talk. My name is on the slide. I am one of the co-organizers, and I run a software consultancy company. Today we're going to talk about what's happening in the software development area. It is a good topic, but
not just because of the fancy software development tech: what is happening here is about to happen everywhere. So if you are not in software development but in some other industry, what you're going to see is about to happen everywhere.
So that's one thing. Yep, we will talk about AI-native software development and all the fancy things.
So, just before moving forward, I want to ask how many of you know at least one programming language?
That's good, quite good, yeah. And if you don't, actually you do: if you speak any human language, that pretty much counts nowadays.
I will walk you through, although most of you already know, how software development used to work, and still works for the majority of companies. I want you to imagine what we do as software developers, right?
Someone, some entrepreneur, comes with an idea or a product and wants to implement a feature. Then people collaborate: we create a ticket, we analyze it and break it down into actionable items, and a team or a developer takes it, digests it, and builds something, right? But while doing that, we as people, as humans, do lots of collaboration,
and we have the knowledge base, the subject matter experts, we have lots of interaction.
So what we were doing with AI agents was giving them just a little context and expecting them to behave like a proper software developer and create the same results a developer does.
So you might have heard this; it's a very new term, and I added it here as well: context engineering. Have you heard of it? I'm pretty sure you've heard of prompt engineering.
It's about how you talk to the AI, to the LLM, actually. If you give the LLM just a random, tiny prompt, you get what you put in: garbage in, garbage out, right?
But if you say: this is your role, this is the context, this is the example response I expect from you, this is your goal, then you get proper results.
Context engineering says: if you give the LLM better instructions, the prompt, an example of what you want, tool descriptions, access to your knowledge base, and tools so it can access an environment similar to your developers', and you treat it like an internal junior software developer, you get way, way better results.
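As a sketch of the difference, here is a minimal helper that assembles role, context, example responses, and goal into one structured prompt instead of a single throwaway line. The field names and the `build_prompt` function are illustrative, not from any particular framework:

```python
# Minimal "context engineering" sketch: instead of one tiny prompt,
# we combine role, context, examples, and goal into a structured prompt.

def build_prompt(role: str, context: str, examples: list[str], goal: str) -> str:
    """Combine the pieces of a well-engineered prompt into one string."""
    example_text = "\n".join(f"- {e}" for e in examples)
    return (
        f"ROLE:\n{role}\n\n"
        f"CONTEXT:\n{context}\n\n"
        f"EXAMPLE RESPONSES:\n{example_text}\n\n"
        f"GOAL:\n{goal}\n"
    )

prompt = build_prompt(
    role="You are a junior software developer on our team.",
    context="The project is a React dashboard; follow the repo's coding rules.",
    examples=["A pull request with a semantic commit message and passing tests."],
    goal="Add a dark-mode toggle to the dashboard settings page.",
)
print(prompt)
```

The point is not the string formatting but the habit: every prompt the agent sees carries role, context, examples, and goal, the same briefing you would give a new junior developer.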
That said, we will move to the demo part. For the demo, first I want to give you a sense of vibe coding. Just one prompt: I will give just one prompt and expect it to build a game, right? So let's start.
So it is just one tiny prompt. Let's use Claude; that's the one we're going to use. OK, I think I cleared it. Yes.
So this is the AI software agent I'm going to use. I didn't give it any more context, just one prompt. I just say: OK, I want a Flappy Bird, a simple React application, with scoring and collision detection. That's it.
What it does is, I would say, a better version of vibe coding. It's a bit smarter: it gets the context, creates actionable items itself so it understands what we have, then creates a to-do list and goes through it one by one. I don't have to wait for it, but I just want you to watch it. So let's switch to automatic mode.
Now it doesn't have to ask. There is a human-in-the-loop feature where it asks before everything it does, and I've just allowed it to proceed. So it's a slightly smarter version of that.
These software development agents generate a knowledge base for themselves, understand what they have, and decide their next action, right? And it is powerful not just for simple applications but for bigger enterprise applications. It does a great job if it has the proper tools.
So meanwhile, I think we can move on to the second demo, and come back when it's done to see what we got, and then maybe play with it a bit, just for the vibe coding.
For the second demo, just yesterday, I decided to create an application without coding, with a similar tool. I have over 15 years of experience; I write code. But for this one, I did not write a single line of code.
So this is... it was just a prompt: okay, how can we manage this event efficiently, right? It's a single prompt, but it's deployed.
It has its own database, Postgres running on the backend, and it has AI integration: an AI bot you can ask questions, like, hey, which speakers do we have?
And it's also AI-native software, as they say nowadays. I didn't implement this part, but you could say: hey, I would like to organize an AI event, give me some ideas, find me the speakers, find me the sponsors, whatever needs to happen. It should do everything, right?
So that's the idea. Other than that, this is just a prompt that took the Claude agent 30 minutes to build.
Now, for what I will show you, let's pretend this is proper enterprise software. Let's say we have hundreds of developers, a couple of other projects, subject matter experts, a knowledge base, a proper organization, right?
You just cannot write one prompt and expect some AI agent to follow your company rules and instructions and create something that really works. You have to make it follow the instructions step by step, line by line, and so on. So what can we do? I created a Jira integration to demonstrate what I'm going to talk about. We have a simple task, right? The idea is to allow your agents to access the things your developers access, in the same way.
Have you heard about MCP? Has anyone heard about MCPs and used them? I love this crowd. So, okay, it basically allows your LLM to talk to other tools.
It can be an API, it can be a product, whatever, right? It is the way of communicating: the Model Context Protocol.
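Under the hood, MCP is JSON-RPC between the model's client and a tool server. As a simplified sketch (the tool name `jira_search`, its arguments, and the result text are made up for illustration; a real server advertises its own tools), a tool call and its reply look roughly like this:

```python
import json

# Simplified shape of the JSON-RPC messages MCP exchanges.
# Real MCP servers speak this over stdio or HTTP; tool names and
# arguments here are hypothetical.

# The client asks the tool server to run one of its tools:
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "jira_search",                  # a tool the server advertises
        "arguments": {"jql": "project = CMD"},  # tool-specific arguments
    },
}

# The server answers with the tool's result as content blocks:
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "2 tickets found in CMD"}],
    },
}

print(json.dumps(request, indent=2))
```

The LLM never calls Jira directly; it emits a tool call like the request above, the MCP server does the actual API work, and the result flows back as plain content the model can read.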
So we have a ticket here. It's a simple ticket: for our dashboard, I just need a dark mode, right? So I can go here, and
this is the one. By the way, the other demo is also completed, so we can play with it, I guess. Our Flappy Bird, is this running on localhost?
Let's see. I hope it works. Flappy Bird, Flappy Dev. OK, while it works, let's go back to our demo.
OK. Seems like something is working. OK. So the funny thing is, OK, it's really hard to play.
I did this demo before, and I could ask it to make the game easier and so on. This is vibe coding, and this is the part I don't like. It's a kind of dopamine addiction: something is happening continuously, and then eventually you get frustrated.
But if you can build the pipeline... there is a saying, I think attributed to Lincoln: if I have six hours to chop down a tree, I'll spend the first four sharpening the axe. It is the same thing. If you build a proper system for your AI agent, eventually you get something better. And about the context bit: we just said we need a Flappy Bird, but we gave no context. It tried, it created something from whatever it was trained on or some example it found, I don't know, but it works anyway.
This is my backup in case it failed; I was about to show that the same kind of prompt can give a totally different result, right? I think this one is better. Anyway, let's go back and do the real thing. Let's go to Claude.
So one example: let's ask Claude, "how many tickets are in the Jira CMD project?" This is one example of connecting the dots and letting the agent get the things your developer gets. It says, yeah, okay, and it's using this Atlassian MCP to access the project.
It runs a query, gets a total of two, and says: okay, you have two. And how many are assigned to me? In the same way,
you connect your Jira, your ticketing system, and you can connect other things: your collaboration tool, your Slack, your knowledge base, whatever you use (Confluence is also connected here, or Notion), or your issue tracking system like Sentry or Datadog. Once you connect these dots, you create a seamless pipeline: you reduce the noise and hand the agent small, well-scoped tickets. And if you create a proper verification system, eventually it really starts doing something and working. So let's say: okay, CMD-2, right? Go and fetch the CMD-2 ticket and implement it.
Let's create a PR and push it. But that is still not enough, right? It doesn't know how it should do everything. It doesn't know how to create the PR.
What are our company policies? The commit structure, semantic commits, the whole flow: a bunch of things it doesn't know. For these kinds of things, we create rules.
The LLM is much better with human language: plain text or markdown is perfect. So the idea is to put all these rules and policies somewhere in the repository, somewhere accessible to your LLM, just as you would teach your intern or junior developer, right?
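For illustration, a rules file might look like this. The path and contents are hypothetical, not from the demo, but this is the general shape of the plain-markdown rules agents can read:

```markdown
<!-- .ai/rules/pull-requests.md (hypothetical example of an agent rules file) -->
# Pull request rules

- Branch names follow `feature/CMD-<ticket>-short-description`.
- Commit messages use semantic commits: `feat:`, `fix:`, `chore:`.
- Every PR links its Jira ticket and includes a test plan section.
- Never push directly to `main`; always open a PR and request review.
```

Because it is plain markdown, the same file teaches a human junior developer and the agent equally well.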
Okay, I will demonstrate it, and then I'll keep it short. I have a bigger implementation and ticket examples, but I'll keep it brief so we can move to the next thing.
Are you worried about the cost at any point? It's a perfect question, so I'm glad you asked. For myself, no, but it can be costly when your team grows.
For my customers, we use Devin. It's more advanced, et cetera, but it charges a lot. For other work, we use Windsurf or Cursor.
Those are manageable. In this demo, though, I'm using Anthropic's Claude; it's around $100 to $200 per month if you go for the advanced plan.
For me, the cost is small relative to the benefit, because I use it a lot and get most of the value out of it. However, it's not that cheap. So it depends.
You never hit the limit, so you pay $200 for a Max plan and you don't really need more than that? Well, yeah, for me, yes. I don't worry about it much. But for my clients, we are using Devin and Windsurf.
And there it can be a problem: not everyone likes Claude, and with these tools you get charged for extra usage, and it can just keep growing, especially if your developers don't really know how to use them efficiently. So it can be an issue if you have lots of developers. Right.
So, yes, I want it to go ahead and do whatever it needs to do, right? Let's see: this is what it is now. Let's go to localhost, and also make sure it's running here, npm run dev, so we have something visual and can see what's happening while it works. It's on another port, by the way. Is it properly visible? Yeah.
Oh, it's already implemented. It was so fast I couldn't even follow along.
So let's see what was implemented, right? It has already committed and created the branch and the PR. It's faster than me; I just can't keep up.
So yeah, it looks like it's happening.
So what was it, the Cambridge AI meetup? I think the event management demo is here.
Oh, it hasn't created the PR yet. So what does it say? Yes, proceed, please. Oh, it's going to comment on the Jira ticket?
Let's see if that happens. Yes, it did comment as well. I never comment like this myself, I've never done that, but yeah, it commented.
Now it's asking me to transition the ticket. Yes, let's transition it. I just wanted to show you what is possible and what you can achieve if you put in a little more context and integrate things. I would love to show you more, like how we automate PR creation: from issue tracking to the ticket system, ticket system to the agent, agent to PR, then automated tests, and then someone just reviews. It cuts out the noise, at least half of it for now.
But this is where I'm going to stop actually. Good question.
Sure. When it comes to small code bases, I can see where this is useful. Perfect. I'm glad you asked.
So for a client we are working with, they have software that is only partly modernized, with unusual frameworks and versions, and that is a bit problematic for the LLMs, because the LLM just assumes the latest versions and tries to use those, and it doesn't match.
For these things, we created workflows and rules, lots of them. The developer knows which rules to trigger to implement specific things.
Let's say for the UI we are developing an Angular component, right? There is a specific rule we mention in the chat. We say, okay, just follow it; in Claude, if you put a slash... no, not this one.
I will show you some examples. OK, so you can say: implement this component by following the Angular component creation rule. In this rule we say: all right, you are going to use this version; you are going to run the scripts we prepared to create the component initially.
And for the styles, these are the rules we follow, et cetera, et cetera. So you have to be conscious about the rules and policies and put them somewhere, and you also need to educate your developers on how to work with them.
For bigger code bases, it wasn't that good before, but nowadays it's really nice with indexing. When you ask specific questions, like how to connect to this framework, how to implement that, what the connections in the infrastructure are, it lets you connect the dots. Anyway, I think that's pretty much it.
Any questions? Go ahead.
This is the second time I'm seeing people use MCP and Claude Code to create tickets and PRs as a use case. Okay.
So do you think that in the future coders are just going to fix what the AI has written? Hmm. Kind of.
Actually, that is what I am doing. What happens nowadays is that none of my colleagues start a ticket by scaffolding it themselves.
They just get the ticket first. If it is LLM-friendly, they throw it to the agent right away, and then it's a review.
If the result is really a mess, they start from scratch or just fix it. So I think it's going there.
Do you have a sense of what the break-even point is? How much did you have to put into prompting and the rules you talked about to get everything working, versus the time and effort of just writing the code yourself or getting a team to write it? So what you are asking is the ratio...
I will give you one example. A client of ours wanted to transition their whole development team to AI-native developers.
And it slowed us down a lot; I would say productivity went down at the beginning, because people are very used to what they do. When you tell a developer to add a new component, they've done it a hundred times: copy from another module, change it, push it. That's fine. But what I think companies are trying to achieve is to automate as much as possible and use the same developers to create more competitive products and new products, and to become agile.
So I think it's an investment. For a company switching to AI-native development today, there is a cost at the beginning, but I think it's a very good thing to invest in, because it gets better and better.
When it reaches the top level, you will have an army of developers who can just build automation and run with it. So, yeah. All right, go ahead.
If your AI-powered development is able to output, let's say, three to five times the amount of code, do you see a greater need for more rigorous testing of the products?
Testing may become a bigger thing for companies using AI-powered code: more effort on testing, less effort on evaluating the code itself. It's a super important point.
I will give one example from Devin AI. I wanted to demo it today, but there was a lot of customer information I didn't want to expose.
What it does is: when a ticket comes in, the AI agent first runs through it in the browser and tries to reproduce it. If it reproduces the issue, let's say we're talking about a bug, then it implements the fix.
And then it runs the agent through the browser again to test it. If the test passes, okay, it's tested and fixed, and it sends it off.
I don't see people doing all the automation they should, like end-to-end automation and acceptance tests to make sure everything is fine. But I do see the AI agent companies, like Cognition with Devin, trying to copy what a regular developer does: run the tests before and after.
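That reproduce-fix-verify loop can be sketched roughly like this. The function names are placeholders for agent actions, not a real Devin API, and the bodies are stubs standing in for browser and coding-agent calls:

```python
# Hedged sketch of the "reproduce, fix, verify" loop for a bug ticket.
# reproduce_bug, implement_fix, and run_browser_test are placeholders
# for agent actions, not any vendor's real API.

def reproduce_bug(ticket: str) -> bool:
    """Placeholder: drive a browser and try to trigger the reported bug."""
    return True  # stub: pretend the bug reproduces

def implement_fix(ticket: str) -> str:
    """Placeholder: let the coding agent produce a patch for the ticket."""
    return f"patch-for-{ticket}"

def run_browser_test(ticket: str) -> bool:
    """Placeholder: re-run the same browser steps to confirm the fix."""
    return True  # stub: pretend the fix holds

def handle_ticket(ticket: str) -> str:
    if not reproduce_bug(ticket):      # step 1: confirm the bug is real
        return "cannot-reproduce"
    patch = implement_fix(ticket)      # step 2: implement the fix
    if run_browser_test(ticket):       # step 3: verify as a developer would
        return f"ship:{patch}"
    return "needs-human-review"

print(handle_ticket("CMD-2"))
```

The value is in the gating: nothing ships unless the bug first reproduced and the same steps pass afterwards, which is exactly the before-and-after testing a careful developer does by hand.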
Does that make sense?
Okay. Right.
I think that's enough. I just talked a lot.
We should be hungry.
Thank you.