Requirements-Driven Development: The Missing Layer in Agentic Coding

Introduction: Why Estimation Still Hurts

I've been chief technology officer of a bank in Switzerland, and also of a unicorn, where I had over 100 teams working non-stop. I don't recommend it. It's nice, it pays well, but the stress goes through the roof.

Throughout my career, all my bosses have asked me the same question: when will this be done? How will we do it? How much effort will it actually take to do this?

So when I go into a meeting with a client, I ask them the same thing: what tools are you providing to your team so that they can answer this question? The answer is: zero.

Okay? So that creates a huge amount of problems.

From Waterfall to Agile: The Missing Context Problem

But before that, let's take a little trip down memory lane.

So when we became professionals in software, we said: we will plan, then we will create the requirements, then the specifications, then we will code it, then we will deploy it, and then we will document everything.

Have you ever seen something like this actually work? Never.

And that's why we invented Agile, where we said: OK, we will know what we want, but we will skip the requirements, we will skip the specification, and we will go straight to the code. We deploy it, we like it or we don't, then we go back to planning, and if the cycle is fast enough, it more or less works.

But the same problem I mentioned before is still there, to the point that if you ask developers to give you an estimation, they size in T-shirts, in animals, in numbers, in whatever. They have no clue how to actually estimate things. There is a lot going on there.

AI Coding Shifts the Bottleneck to Requirements

And then AI appeared, and what happened is that suddenly the bottleneck was no longer creating the code. If you prompt properly in Cursor, Claude Code, or whatever tool you want, in five minutes you can have an MVP, or a modification of a small feature, or whatever.

But what happens? As you keep pushing the boundary further and further, AI gets lost, because it is built on human knowledge, so in many ways it thinks like a human.

So it needs the same context a human would need to get it right. If it doesn't have the context, it doesn't work.

So if you don't have requirements, which is partly why we are building this, and it doesn't have the documentation to check what you already have, it doesn't work out.

What happens? Some developers say that the code is the documentation, which is partly true.

The problem with code is that it doesn't explain why. It explains what it does, how it does it, how it's linked, but it doesn't tell you that, if I break this line, the feature a product owner thought up three years ago will break.

So in a way, AI has accelerated the process of discovering that we were wrong from the very beginning, and that we were lacking context.

And that's the next step in AI coding: requirements, which basically means using AI to create the requirements, the specifications, and the documentation.

The problem is that we are humans, and we are lazy by definition.

And the problem we have is that if you ask any CTO, CEO, or manager, they are only focused on one simple thing: the next feature.

Nobody cares about creating the documentation, the specifications, the requirements. They don't care: we have to ship the same feature as the competitor, we have to do the thing, et cetera.

So we basically have to think about how AI can create the requirements, the specifications, and the documentation as a byproduct of the ongoing work we are doing.

So that AI can use it without you even seeing that it exists; otherwise, nobody uses it. Either it's not up to date, or it doesn't really exist at all, because writing it takes time and people just want to dump the problem and keep moving.

Requirements as a Byproduct: Documentation from Ongoing Work

So we designed this system, which is basically a chat that allows you to interact with AI through this knowledge.

This knowledge can also connect to the code you already have. We have a Visual Studio Code extension, which I'll show you, that connects directly to your code.

This generates all the information and sends it to the bridge, and the synthesis of it is what we use to create the requirements and specifications at a high level.

How does that actually work?

How the System Works in Practice (IDE Extension + Knowledge Bridge)

Let's imagine this is Cursor, or any Visual Studio Code fork you like. You install the extension.

You run an npm package that basically starts a small daemon in your application.

Do you see these three dots? They indicate that it's connected to the three different services you have there.

And here you have the tasks. But what is more important is this part: this is an analyzer of your code.

Then we use your local agents, with access to your local code, to generate the full documentation of your ten-year-old application. Because one of the main problems we have is that, most of the time, you don't have time to actually write documentation. So we had to find a way to accelerate the process.

So you click there and generate all the different assets. You run all of these, and all the specific information of the knowledge map of your application is sent to glue term.

When you send it to glue term, you end up having something like this.

Generating a First Pass: Users, Domains, Features, and Data Models

So it ends up generating the documentation for you. It infers the users, the stages, the domains, the features, the data models, and so on. So you end up with a first version.
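To make the idea concrete, here is a minimal sketch of the kind of knowledge map that gets inferred: users, domains with features, and data models with their fields. All class and field names here are illustrative assumptions, not the product's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    use_cases: list[str] = field(default_factory=list)

@dataclass
class Domain:
    name: str
    features: list[Feature] = field(default_factory=list)

@dataclass
class KnowledgeMap:
    users: list[str] = field(default_factory=list)
    domains: list[Domain] = field(default_factory=list)
    data_models: dict[str, list[str]] = field(default_factory=dict)  # model name -> fields

# A tiny inferred "first version" for a hypothetical app:
km = KnowledgeMap(
    users=["admin", "customer"],
    domains=[Domain("billing", [Feature("invoicing", ["create invoice"])])],
    data_models={"User": ["id", "email", "role"]},
)
print(km.domains[0].features[0].name)  # -> invoicing
```

The point is not the schema itself but that the structure is navigable, so both humans and agents can drill from users down to individual scenarios.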

You can also navigate it from here: the different functional requirements, technologies, features, etc.

Turning Change Requests into Concrete Specs and Impact Analysis

But the nice thing is that all you need to do is say: what I want is this thing, I want a change, I want a new user classification system. That's it.

So you go over to the chat here and keep polishing it exactly the way you want it. That's the work a product owner or a product manager would do.

When you are ready, you approve, and then you end up having... sorry, it won't do it live because it takes some time; I think the internet is a little slow, and I'm running out of time. It ends up generating that specific change: it compares the change request you have made with the previous documentation it has already generated.

Then it tells you which features, which use cases, and which scenarios you will have to build. It tells you the visual aspect of it: the experience, the views, and the behaviors you need to change in each view. It gives you the data models, the structure of the database (new or changed), and the services and technology stack in case those have changed.

With that, your engineers can finally start analyzing whether it is a big job or a small job. I'm not sure why it's not loading... okay, something is broken. So, as the next step, we could see, for example, a suggestion of how the data model actually changes. Then you can go and review it through the chat system I showed you before; you can just click and change it, so it's very easy to steer the engineering if you want.

So in a very short period of time you have a full analysis that the waterfall-dream people would actually love, but done in one-tenth of the time, which gives you the big picture. And this information is later used by the AI, because the important thing is that you end up having all of this.
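The impact-analysis step above can be sketched as a diff between the data model a change request needs and the data model the knowledge map already has. This is purely illustrative; the names and the flat field-list structure are assumptions, and the real tool compares far more than database fields.

```python
def impact(existing_fields: dict[str, list[str]],
           requested_fields: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return, per model, the fields a change request would add.

    New models appear with all of their fields; existing models appear
    only with the fields they are missing.
    """
    delta: dict[str, list[str]] = {}
    for model, fields in requested_fields.items():
        have = set(existing_fields.get(model, []))
        missing = [f for f in fields if f not in have]
        if missing or model not in existing_fields:
            delta[model] = missing
    return delta

# Hypothetical "new user classification system" change request:
current = {"User": ["id", "email"]}
change_request = {"User": ["id", "email", "classification"],
                  "Classification": ["id", "label"]}
print(impact(current, change_request))
# -> {'User': ['classification'], 'Classification': ['id', 'label']}
```

An engineer looking at this delta can immediately judge how big or small the job is, which is exactly the estimation question from the beginning of the talk.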

Delivering Actionable Context to Developers and Their Tools

You end up with the user stories, and attached to each of the user stories generated for your specific request you have this information, which is the technical task that needs to be done. When you assign a user to do it, it will show up here for the developer.

So the developer will simply have it, click a button, download all the different aspects of the context, and then just start coding, with whatever methodology they want to use.

You can use your own workflow: an OpenSpec workflow, Claude Code, Cursor, OpenCode. We don't care how you actually code it; that's up to you.

But we have already accelerated the process of understanding what you want to do, how big or small it is, and what things you will have to change.

If you tell your AI, when doing spec-driven development, "you have to change these, I have to add these fields, and these fields have already been validated by the team," et cetera, then it works.

Keeping Docs Honest: Drift Analysis and Continuous Updates

Whenever you finish and complete one of these tasks, we detect that, and then we do what we call a drift analysis of the documentation.

So we analyze, locally, whether your specification actually matches what you have coded. Because sometimes the product owner comes and says, "oh, I want this, I want that, change that for me," or "I need an extra field," which was not in the original user story or the original specification. Then we feed it back to glue term, and you end up having all this accumulated information, which you can navigate in Glucer.
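A minimal sketch of the drift-analysis idea: compare the fields a specification promises against the fields actually found in the code, and report both directions of drift. This is an illustration under simplifying assumptions (flat field sets); the real analysis runs over the full knowledge map.

```python
def drift(spec_fields: set[str], code_fields: set[str]) -> dict[str, set[str]]:
    """Report specification drift in both directions."""
    return {
        "in_code_not_in_spec": code_fields - spec_fields,  # undocumented additions
        "in_spec_not_in_code": spec_fields - code_fields,  # unimplemented promises
    }

# The product owner asked for "nickname" mid-task; "role" was never built:
report = drift({"id", "email", "role"}, {"id", "email", "nickname"})
print(sorted(report["in_code_not_in_spec"]))  # -> ['nickname']
print(sorted(report["in_spec_not_in_code"]))  # -> ['role']
```

Anything flagged here is exactly the "extra field the product owner asked for in passing" case: the documentation gets updated instead of silently drifting away from the code.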

So let me go here. There, you will have this branch; each one is a change request. So you can literally see, for every change, which functional requirements it might affect.

And now let's go, for example, to users. It says whether they have been modified or not. So you can see all the different things, and then you can start editing.

You can look at the domains, or maybe the features. Let's pick a feature, which fully analyzes everything in that feature, including the use cases. You can go to a specific use case and see the scenarios of that specific use case.

You don't have to do any of that yourself; the AI will do it for you, and the AI will then use it to code for you. That is what we do, and what our tool is.

There are other tools for requirements-driven development, too. Questions?

Q&A: What “Coding” Means in an Agent-Driven Future

Question from the audience: I think that coding is going to change because, as you said, for now everything is built on how humans work, and now that is changing with human-computer interaction. How do you think this will change, and what will you do in Blue Charm?

It depends on what you understand by coding. With the level of technology we have today, without improving it, just with the tools we have and by making smarter systems, we are going to be able to automate full development, non-stop. There is no question about it. That's reality.

How long it will take, you can guess. But with more systems like mine, or the other type of tools, the control planes, agent controllers, et cetera, which basically let you analyze and run 15, 20, or 100 agents working for you, that will happen.

But the interesting part, at least from the developer's point of view, is that this is not actually the bottleneck. The bottleneck is understanding what you want your agents to do, and that holds both from the functional point of view and from the technical point of view. So one of the things we record in our knowledge map, for example, is which decisions you are taking: the decision log behind why you go in a given direction.

Because something might be done a certain way for a particular reason, but over time that reason might become completely false, and then you have to reassess it.
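A decision-log entry along these lines might record the decision, its justification, and when it was made, so an agent can flag stale reasoning for review. The shape and the age-based heuristic below are assumptions for illustration, not the tool's actual mechanism.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Decision:
    what: str          # the decision taken
    why: str           # the justification at the time
    decided_on: date   # when it was taken

    def needs_reassessment(self, today: date, max_age_days: int = 365) -> bool:
        """Flag decisions whose justification may have gone stale."""
        return (today - self.decided_on).days > max_age_days

d = Decision("Use a relational DB", "Strong consistency required", date(2022, 3, 1))
print(d.needs_reassessment(date(2025, 1, 1)))  # -> True
```

In practice the trigger would be semantic (the justification no longer matches reality) rather than purely time-based, but the record of "what and why" is what makes any reassessment possible at all.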

So it's a huge network of information that no human is going to be able to read or digest, but that is exactly what machines are good at. So you have 100 agents reading all the information, how it is, et cetera.

From Code as the Asset to Process and Decision Knowledge

So in a way, what will happen is that the asset of a company will no longer be the code. It will be knowing the process we wanted to follow.

So really large companies now have a big problem. Because in their accounting books, and I'm just looking at the financial part, it says that they have 20, 30, 100 million worth of code. And suddenly, generating a line of code costs $0.10, 1 euro, 20 euros.

Take the project from Cursor, where they invested $2 million to create a completely new browser from zero, with no human writing code. It took them $2 million, but it was fully automated. And one of the biggest insights you can take from it is that it only worked as far as the specifications were right; if the specifications were wrong, the result was worthless. That's why they used a browser as the test case: there is a standard for what a browser needs to do, it's very clear what that standard requires, and that's why they could use it as the specification. It's a very technical product.

The more we go toward products that are not necessarily technical, the more you will need tools like ours as an interface to talk to the workforce of agents that do the real execution work.

So I usually point out one thing: for a long time, knowing how to do a thing was the same as doing it. Doing it and knowing it were the same, because you could not do it if you didn't know how. But suddenly, that is no longer true.

You can get kids to code a video game when they have no idea even about game design principles. Does that mean that little person knows how to make games? It's a hard question.

So our language will have to adapt. That's what I meant about what it means to work on coding: it will for sure evolve in a completely different way.

Conclusion

And that's it, because I think my time is up. Any questions? Let me know after the session. Thank you.
