Anatomy of an AI Native App

Introduction

There are two things I would actually like to focus on in this talk. The first, of course, is to introduce you to Matty. The second is how costs are incurred when you are building AI native apps using vibe coding platforms or anything of that kind.

My name is Aditya Gudipudi. I have been in this space for about 11 years. I have worked with the European Union on self-driving vehicle regulations, and with Scania, Volkswagen, Audi and Porsche on their self-driving vehicles.

I am a former head of AI at one of the Swiss scale-ups here, and I am now working on my own startup.

Why Most Apps Don’t Make It: Distribution Matters

The reason I asked how many of the apps you are building on Lovable have actually gone public is that we, as engineers or non-technical co-founders, often build a lot of things, and a lot of them never see the light of day.

Software has become a commodity, meaning anyone can build something on Lovable that basically works.

The key part that turns a project into a startup and makes it successful is distribution. You have to distribute it.

There are a lot of distribution channels, and there is an entire framework for how to do this. Later on I can send the full Lovable case study on how they did it, but distribution is the new moat.

That is exactly what we are solving, because my first startup failed: we built the best tech in the world, but we did not sign up a single user, because we did not know how to distribute. That is the problem we want to solve this time with Matty, and it is our own problem: being too much of an introvert to post on LinkedIn or X or any of those kinds of platforms, or not even knowing how to do it. That is the baseline we are solving.

Matty: An Autonomous Distribution Agent

So what is Matty? Matty is an autonomous distribution agent.

What do I mean by autonomous?

From Workspace Knowledge to Distributable Content

All of our work lies in our workspace tools. That could be Lovable, Replit, Cursor, Google Docs, Notion, everything.

All our knowledge lies in those documents, and that knowledge has to be translated into content that can actually be distributed.

So this is where Matty comes in.

So Matty plugs into your workspaces, wherever you work. Without breaching GDPR or anything like that, you give it permissions, and it converts the knowledge you have built up across all these workspaces into distributable content.

So we capture that knowledge and then distribute it across all the channels, and all of this is done quite autonomously.
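To make that capture, transform and distribute flow a bit more concrete, here is a minimal Python sketch. The connector, the `Draft` shape and the channel names are all hypothetical placeholders, not Matty's actual implementation; the LLM step is stubbed out.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Draft:
    channel: str  # e.g. "linkedin" or "x"
    text: str

def capture(workspaces: Iterable[Callable[[], List[str]]]) -> List[str]:
    """Pull recent knowledge snippets from each connected, permission-gated workspace."""
    snippets: List[str] = []
    for fetch in workspaces:
        snippets.extend(fetch())
    return snippets

def transform(snippet: str, channel: str) -> Draft:
    """Turn a raw snippet into channel-specific content (an LLM call in practice)."""
    return Draft(channel=channel, text=f"[{channel}] {snippet}")

def distribute(drafts: List[Draft]) -> None:
    """Hand finished drafts to per-channel posting clients (stubbed here)."""
    for d in drafts:
        print(f"queued for {d.channel}: {d.text}")

# Example: one fake workspace connector feeding two channels.
notion = lambda: ["Shipped the supervisor/SLM re-architecture this week."]
drafts = [transform(s, ch) for s in capture([notion]) for ch in ("linkedin", "x")]
distribute(drafts)
```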

The Hidden Risk in AI-Native Apps: Runaway Model Costs

The reason I have picked my own platform for this talk is that AI costs are huge, especially when you are building an AI native app that requires a lot of AI compute. Your users come in and use your AI, and the costs grow exponentially.

If we used, let's say, the premium frontier models, Claude 4.6 or Gemini 3.1, we would bleed a lot of money. We observed that right from the beginning: when we started building and testing our own platform, we saw the costs going up and up just from us using it ourselves.

That is where we wanted to bring in a bit of a difference and a bit of thought: how could we spin this around so that AI costs stay low for us, and stay low when we onboard users as well? So that is where we started thinking.

Product Walkthrough: Conversational Content Creation

This slide is just an example of the number of agents we have in the platform. Actually, I would like to jump onto the platform itself, just to show the complexity and how many things we have in there.

Oh, by the way, in case I did not mention it: Matty is a conversational AI. So you just talk to it. Hopefully it works.

Matty: Hello, I'm here to help.
Me: Hello.
Matty: Hello. How can I assist you today?

Me: Yes, I would like to create a LinkedIn post.
Matty: I can definitely help with that. To get started, what topic do you have in mind? And what's your main goal for the post, like getting engagement or sharing an insight?

Me: I want to share an insight.
Matty: I've started a LinkedIn draft for you focusing on AI strategy. Does that look like a good start, or would you like to refine it? I've created a draft for LinkedIn based on that insight. How does it look, and what kind of call to action would you like to add?

Me: Just save the draft.
Matty: Great. The draft is saved. What would you like to do next?

We have a conversation layout, so you can actually talk to it. When you are trying to distribute something, you have your own ideas in your head and they need a translation; you need a springboard to bounce ideas off.

That is why we have the conversational AI, which controls most of the platform. You can talk to it, discuss ideas, and it will come up with all kinds of posts. It has access to the internet, your documents and everything, it can go through your knowledge base, and it can create content for different platforms. And of course you can also just chat with it about anything; it will respond.

The main part comes with integrations. You can connect your LinkedIn and X, and it will take care of what content to post and when, and come up with the entire strategy based on your profile. I can show that demo as well.

We can also integrate Telegram, so you can just talk to it on Telegram; you do not really have to come to the platform. It is multimodal, meaning you can send a voice note, images, videos, anything, straight from Telegram, and everything ends up here in your draft section.

With Lovable, you can ask your Lovable project to send information to Matty, and it will do so automatically. I can just say, okay, send my last day's update to Matty, and then it is my own Cursor basically talking, and all the information comes to my feed.

Based on your profile, we scout the internet, look at your documents, and talk to the different AI agents you are working with across different platforms, and everything ends up on your feed. You can check it and see: okay, over the last three days I have done this. Then I just click generate post, and it automatically converts that into posts. We keep a human in the loop, just so we do not give the AI full autonomy.
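Here is a minimal sketch of that feed-to-post flow, just to illustrate the idea that generation is triggered from the items accumulated over the last few days and that nothing is published without human approval. All names are hypothetical and the LLM call is stubbed out.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

@dataclass
class FeedItem:
    source: str          # e.g. "cursor", "telegram", "web-scout"
    text: str
    created_at: datetime

@dataclass
class Feed:
    items: List[FeedItem] = field(default_factory=list)

    def recent(self, days: int = 3) -> List[FeedItem]:
        cutoff = datetime.now() - timedelta(days=days)
        return [i for i in self.items if i.created_at >= cutoff]

def generate_post(items: List[FeedItem]) -> str:
    # Stand-in for an LLM call that summarises the last few days of work.
    bullets = "\n".join(f"- {i.text} ({i.source})" for i in items)
    return f"What I shipped recently:\n{bullets}"

def publish_with_approval(draft: str, approved: bool) -> None:
    # Human in the loop: the user explicitly reviews and approves the draft.
    print(("published:\n" + draft) if approved else "draft kept for review")

feed = Feed([FeedItem("cursor", "Refactored the supervisor agent", datetime.now())])
publish_with_approval(generate_post(feed.recent()), approved=True)
```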

Re-Architecting for Cost: Supervisor + Swarm of Small Models

In a nutshell, this is a pretty complicated architecture. We have one primary big brain that listens across multiple channels, and then we have research agents, scouting agents, Telegram, Slack, and so on. The moment we add more and more things, the agents get bigger and bigger, and if every single agent uses the latest premium model, the costs are going to skyrocket.

So that is where we have actually seen that it's not a good idea at all to go with this kind of infrastructure.

And we have decided to optimize our own architecture for our own sake.

So in the entire platform you have just seen, we use a premium frontier model in only one place. Even our main conversational AI is not a premium frontier model. The only frontier model sits somewhere in our back end: intent is captured from all the channels and sent to that frontier model, which we call our supervisor agent. It decides everything that needs to be done, and it has a swarm of agents powered by what we call SLMs, small language models of around seven billion parameters; it could be anything, Llama for example. All those smaller agents are powered by Llamas, those small language models take the individual decisions, and the results are sent back to the bigger model, which, with its huge context length, processes everything, and it all ends up in a draft.
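A minimal sketch of that routing pattern might look like the following. The two `call_*` functions are stubs standing in for a frontier API and a self-hosted ~7B model; none of the names are the actual Matty back end.

```python
from typing import List

def call_frontier_model(prompt: str) -> str:
    """Stub for the expensive supervisor model (the only frontier model in the stack)."""
    return "plan: [research, draft_linkedin]"

def call_slm(agent: str, task: str) -> str:
    """Stub for a cheap ~7B worker model (e.g. a Llama variant) behind each agent."""
    return f"{agent} result for: {task}"

def handle_intent(user_message: str) -> str:
    # 1. Supervisor: one frontier call interprets the intent and plans sub-tasks.
    plan = call_frontier_model(f"Plan sub-tasks for: {user_message}")

    # 2. Swarm: SLM-powered worker agents execute each sub-task cheaply.
    subtasks = ["research", "draft_linkedin"]  # parsed from `plan` in practice
    results: List[str] = [call_slm(agent=t, task=user_message) for t in subtasks]

    # 3. Supervisor again: aggregate worker output into a draft, relying on
    #    the bigger model's larger context window.
    return call_frontier_model("Combine into a draft:\n" + "\n".join(results))

print(handle_intent("Share an insight about AI strategy on LinkedIn"))
```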

So by making this small change, we have cut the cost a lot. Even if we have, let's say, five to ten thousand users using our conversational AI every day, across every single channel, we are definitely not going to break our wallet.
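As a rough back-of-envelope illustration of why this helps, here is a small calculation. The prices, token volumes and the 90/10 split below are purely illustrative assumptions, not our actual billing.

```python
# Illustrative assumptions only: real prices and usage will differ.
FRONTIER_USD_PER_M_TOKENS = 15.0   # assumed blended price for a frontier model
SLM_USD_PER_M_TOKENS = 0.20        # assumed cost of serving a ~7B model yourself

daily_users = 5_000
tokens_per_user_per_day = 20_000   # assumed total across all agents
total_tokens = daily_users * tokens_per_user_per_day          # 100M tokens/day

# Everything on a frontier model:
all_frontier = total_tokens / 1e6 * FRONTIER_USD_PER_M_TOKENS  # ~$1,500/day

# Frontier supervisor only (say 10% of tokens); the SLM swarm handles the rest:
split = 0.10
hybrid = (total_tokens * split / 1e6 * FRONTIER_USD_PER_M_TOKENS
          + total_tokens * (1 - split) / 1e6 * SLM_USD_PER_M_TOKENS)  # ~$168/day

print(f"all-frontier: ${all_frontier:,.0f}/day, supervisor+SLM swarm: ${hybrid:,.0f}/day")
```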

But this is, let's say, a classic example of re-architecting an AI native app that makes heavy use of AI within the platform.

Benchmarking Vibe-Coding Platforms and Cost Implications

That is one of the things. I have also added a small slide with a recent analysis for anyone who is vibe coding: I checked how Lovable is implementing things, since they want to be the best.

The SLM setup here is basically the cheapest option I have observed, but the key takeaway is bolt.new. They have been around for quite a long time and they have pretty good rates. I am not advising anyone; this is just my own observation, and the costs come from my own billing, so I have not taken anything out of context.

I have observed that bolt.new basically offers the cheapest option. I do not know whether they are subsidizing everything or whether they have a different architecture, but they have been the cheapest for building AI native platforms. I actually built my own platform on all of the vibe coding tools to see how much it would cost and how many tokens each one burns. Of course that does not scale, but it is nevertheless a way of evaluating costs across these different platforms.

Conclusion: Practical Takeaways for Shipping AI Apps

One of the key takeaways I want to give everyone in this room is that when you are building AI native applications, please bear in mind that it is not necessary to use the best and greatest model; a lot of previous-generation models can actually do the job. Please also keep the costs in mind for when you have a thousand users, because you cannot really guess how much each user will use your platform.

And bear in mind that if you are hosted on Lovable, all the billing is going to be on Lovable Cloud, which is going to be very expensive. So please keep all of these costs in mind, and try to think from a different angle when you launch to the public.

That is pretty much it. Thank you so much, and I hope you all have a nice evening.
