The Future of Work Won’t Be Learned, It Will Be Executed

Introduction: Making AI Frictionless for Everyone

The topic I bring is exactly the opposite: how do we make AI as easy and as frictionless as possible for the average Joe working in companies? We know this is coming, and we know that a lot of the tools that have been on the market for a while are still very complex for the majority of people. But innovation is arriving day by day.

Tool Overload and FOMO in the Workplace

People are completely swamped and lost, and, to be honest, for the average person the FOMO of knowing which tool is best for which task is getting extremely hard.

Introducing MyDRA: An AI-First Workflow Companion

And so that's why we built MyDRA.

Founding Story and Market Shift

To give you a bit of background: we actually started as a student-finance company back in 2020. We financed people to take on junior engineering training, so they would go to technology boot camps and become junior developers.

At that time, the huge problem the world had was that we didn't have enough engineers, enough firepower. Then COVID came, and that mindset was only massively reinforced, because all of a sudden we all went digital and we actually needed a lot more of them.

From Training Junior Devs to AI-Native Upskilling

All of a sudden, I would say in maybe 2023 or 2024, the world completely shifted again. Nobody really wants junior engineers anymore. I don't know if there are junior engineers in the house, and I'm sorry, but we were working in particular with a lot of people who were new to the market, right?

So these were inexperienced people who were suddenly seeing everything they were doing being automated overnight. We were sitting on a lot of global market data and realized: no, we should not be feeding people into fast three-to-six-month technology boot camps anymore. We really needed to start helping a lot more people get acquainted and comfortable with AI nativity and AI automation.

What MyDRA Does

And so we built MyDRA, which operates in a couple of different layers.

From Business Problem to AI-Native Workflow

So what do we want? We want people to know which AI tool is best for solving the business problem they have.

We want them to get their hands dirty. And by dirty, I mean we need to stop buying off-the-shelf, one-size-fits-all agents, which is what 90% of companies have done for, I would say, the past year and a half to two years.

If you go to corporate land, the AI box is being ticked with: I just bought something. I automated customer support with an off-the-shelf agent. I'm saying that I'm using AI. I'm happy for a couple of years.

But the teams behind it are doing exactly the same manual jobs, exactly the same things. That doesn't work, and that's what we want to solve.

Recommendations, Guidance, and In-Context Training

So we are bringing a tool where you put in a business problem, and I'll give you a couple of examples in a moment. It will generate an AI-first, or AI-native, workflow: a recommendation of the best automation and AI-native tools to solve your problem, plus step-by-step guidance on how to implement it.

And just-in-context training: we have a YouTube video section connected to the tool. So if I'm going to recommend you use n8n, Clay, or Lindy, there will be a section, which I'll show you in a second, with YouTube videos highly customized to the tool and the problem you are building.

Parallel Learning: Building AI Foundations

In parallel to that, we have a learning arm, because we want you to go one notch deeper into AI learning. Again, my team has been doing deep-learning trainings, Mindstone, et cetera. So we still want people to acquire that foundational training, which is absolutely fundamental to the world we are all heading towards. But we need to work on these two streams in parallel. Sorry, let me just go back to

Product Walkthrough

Generating a Competitor-Tracking Playbook

the MyDRA dashboard, right? So when you put a problem in here, it will generate a workflow. It actually takes a while to run, sorry, so I will show you an automation that I already have built. You give it the problem; here I said I want to track my competitors, and I put in a couple of features that I want to automate. It gives me

a very thorough mapping of what the agents will be doing, and then step-by-step guidance and, as you see here, a tool recommendation. Here it is suggesting that I use Lindy to build this.

Tool Choices for Non-Technical Users

This is designed, again, for non-technical people, right? So it goes into some detail on what the optimal tools are, plus alternative tools: here, Clay and n8n, which are the obvious choices for these tasks.

Configuration, Monitoring, and Troubleshooting

It goes deep into configuration steps, monitoring, and troubleshooting.

Prebuilt Prompts When LLMs Are Involved

Whenever prompts are recommended, say a step requires doing analytics with an LLM, we will include a detailed prompt that the user can implement directly.

I'll just run it so you can see.

Collaborative Execution and Progress Tracking

Each step, you just follow along and can mark it off. This is designed to be used in collaboration, with multiple team members having access: I can do step one, you do step two, I go back to step three, and we monitor how we are doing on this execution. You see, again, step four is monitoring, et cetera. If I were to mark it complete, I would have these parts done.

Training Materials and Metrics

I just want to show you the training materials here.

So, associated with the playbook, we will always recommend very specific YouTube videos for the tools we are suggesting, matched, obviously, to the relevancy of the use case.

Iterate Fast: Test, Learn, and Refine

What we are aiming at with this is that people become very lean and agile about testing, playing, and going back. Again, the playbooks don't need to be perfect at all.

Switching Tools and Auto-Updating Playbooks

If you speak with the agent, it will customize. Let's say I want to use n8n because I prefer n8n to Lindy: it will just redo the complete workflow.

To do that, it will update the videos, et cetera. So it's a very agile way of working.

Designed for Continuous, On-the-Job Upskilling

This is, by all means, an AI upskilling tool, right? But it's an AI upskilling tool that is not designed to be a one-off training left on the shelf. It's supposed to sit with you as you work, when you work, and take you deeper and deeper into knowledge layers.

Under the Hood

I'll just show you another example. So you can see a couple more.

So let me give some background on how this is built.

Agent Architecture Overview

We have an agent architecture.

Perplexity Search and Market-Aware Reasoning

So pretty much what the model does is take the user's problem and run a Perplexity search on what the best tools are. The model is always trying to be AI-native and to look at the market for the most up-to-date and recent workflows and solutions for this problem.

Filtering and Final Assembly

It then passes through a couple of agent filters, and then the final public-facing agent outputs this complete architecture.
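The pipeline described here (problem in, market search, agent filters, final assembly) can be sketched like this. The function names and the filter criterion are illustrative assumptions, not the actual implementation; each stage is a stub standing in for an LLM or Perplexity call:

```python
# Illustrative sketch of the agent chain: search -> filter agents -> final assembly.

def market_search(problem: str) -> list[str]:
    # Stand-in for the per-query Perplexity search over current tools.
    return ["Lindy", "Clay", "n8n", "Zapier"]

def ai_native_filter(tools: list[str]) -> list[str]:
    # Stand-in for the agent that rejects tools lacking an AI-native edge.
    legacy = {"Zapier"}  # hypothetical example of a filtered-out tool
    return [t for t in tools if t not in legacy]

def assemble_playbook(problem: str, tools: list[str]) -> dict:
    # Stand-in for the final public agent that outputs the full workflow.
    return {"problem": problem, "recommended": tools[0], "alternatives": tools[1:]}

playbook = assemble_playbook(
    "track competitors", ai_native_filter(market_search("track competitors"))
)
print(playbook["recommended"])  # Lindy
```

The shape to notice is that the search result never reaches the user directly: it always passes through the filtering agents before the final agent assembles the answer.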

Sidecar Agent for Hands-On Help

This agent on the side, this conversation, is designed to help you as you execute.

So if I don't know where my OpenAI API keys are, I can just ask: how the heck do I get the OpenAI API keys?

And it will guide us through that execution.

Anyone Can Build Simple Automations

So this is pretty much about translating into plain English, in a very comprehensive and easy way, what is happening, and, I believe, the miracles of AI automation. I truly believe that anyone, literally anyone, following the steps can build very easy automations.

Well, I guess you are all pretty technical here, but for the competitor one, I just went, oh, sorry,

Live Example: Competitor Monitoring in Lindy

to Lindy, and I actually built it step by step as described. So when I ran the competitor playbook, it told me to use Lindy; actually, it was telling me to do a web search.

No-Code vs. Low-Code: Lindy and n8n

So Lindy is an automation tool, but it's very much designed for real no-coders, right? We use Lindy and n8n a lot internally; n8n, I think, requires a bit more knowledge and is a bit more for technical people. Lindy, again, I set this up in 10 minutes. What it does is: every Monday it runs a Perplexity search for AI-native companies, sends the findings to a Slack channel, and then stores them in a database,

Results: Slack Notifications and Google Sheets

in a Google Sheet. I just want to show you: I ran this, and it's actually working. I ran it a couple of minutes ago so you can see it pulling all of the recent data that I requested on these competitors. It will do so every morning. This is super easy, I guess, for your level; I don't know how technical people here are in that sense.
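What that weekly Lindy automation does (search, notify, store) could be sketched in plain Python like this. The three helpers are stubs standing in for Perplexity, Slack, and Google Sheets; the real automation is configured in Lindy's no-code UI, and the data here is made up:

```python
# Sketch of the weekly competitor-monitoring flow: search -> notify -> store.

def search_competitor_news(competitors):
    # Stand-in for a Perplexity search over recent competitor activity.
    return [{"competitor": c, "update": f"latest news about {c}"} for c in competitors]

def post_to_slack(channel, items):
    # Stand-in for a Slack webhook call announcing the new findings.
    return f"posted {len(items)} updates to {channel}"

def append_to_sheet(rows, sheet):
    # Stand-in for appending rows to a Google Sheet.
    sheet.extend(rows)
    return sheet

sheet = []  # in-memory stand-in for the spreadsheet
updates = search_competitor_news(["Acme AI", "Globex"])
status = post_to_slack("#competitors", updates)
append_to_sheet(updates, sheet)
print(status, "| rows stored:", len(sheet))
```

In the no-code tool, the same three stages become a schedule trigger, a search action, and two output actions, which is why it can be set up in minutes.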

Who It’s For Inside Companies

But for a lot of the people we serve, and in the end this is designed to be used in companies, it's used by marketing teams, finance teams, and ops teams.

So we are talking about people who are really not used to automation or to getting the best out of AI and GPT tools.

Reimagining Workflows with Automation

And people are completely reimagining how they deploy their workflows, with a very, very strong bias towards automation. We want to scale that more and more, and we definitely believe this is the future of work.

I don't know if you have any questions. Yeah, sure.

Q&A

How Do You Validate Tool Selection?

How do you validate that the tools it's picking are the right tools? How do you know that Lindy is exactly the right tool for the problem?

So we run a lot of tests on the playbooks. Where did I have that? Let me go back.

This is about how we prompt the model, right? It is designed to be extremely centered on AI nativity. The first versions, when we set up the agent chains, would, if we were talking about ops for example, fall back to Zapier straight away.

But guys, we already know those, right? The magic is not in those tools; they're not keeping that AI-native edge.

Keeping Up with a Fast-Changing Tool Landscape

That works in several layers. On one side, there's the Perplexity search that runs every time we make the query, so we ensure we are always working from the most advanced information and datasets available on the market. And then we have two agent chains that are actually doing that validation.

One outputs the recommendation, and the other validates it: is this really AI-native? We have an internal database that is also constantly updating, and we run the recommended tools against that AI-native database.

The issue with this, as you say, is that tools are changing every day. So what matters is that the search and the database stay in continuous sync, to make sure we stay current.

Because a lot of this, which tool is right, will depend on integrations, as I was telling him before.
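The two-layer check described above, recommend and then validate against a continuously updated AI-native database, can be sketched like this. The database contents are made up for illustration:

```python
# Sketch: cross-check live search results against an internal AI-native allowlist.
# In the real system both sides update continuously and are kept in sync.

AI_NATIVE_DB = {"Lindy", "Clay", "n8n"}  # hypothetical internal database

def validate_recommendations(candidates: list[str]) -> list[str]:
    # Keep only tools that the AI-native database currently confirms.
    return [tool for tool in candidates if tool in AI_NATIVE_DB]

search_results = ["Lindy", "Zapier", "Clay"]  # fresh search-style results
validated = validate_recommendations(search_results)
print(validated)  # ['Lindy', 'Clay']
```

The reason for keeping both layers is the sync problem the speaker names: the live search catches tools the database hasn't ingested yet, while the database stops the search from recommending tools that have lost their AI-native edge.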

Choosing the Best End-to-End Stack, Not Just Parts

Something else that was happening at the beginning, when we started building this, was that it always chose the best tool for each step, which is wrong, right? We need the best tool stack for the whole solution end to end. Only if one step is done 10x better with a specific tool do I want to break the chain, or the logic.
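That selection rule, prefer one coherent stack and only break it when a specialist tool is roughly 10x better on a step, can be sketched as follows. The scores and the threshold are invented for illustration:

```python
# Sketch: pick one tool for the whole chain unless a specialist is ~10x better on a step.

def choose_stack(steps, default_tool, specialist_scores, threshold=10.0):
    # specialist_scores[step] = (tool, advantage factor over the default tool)
    stack = {}
    for step in steps:
        tool, advantage = specialist_scores.get(step, (default_tool, 1.0))
        stack[step] = tool if advantage >= threshold else default_tool
    return stack

steps = ["search", "enrich", "notify"]
scores = {"enrich": ("Clay", 12.0), "notify": ("Slack bot", 2.0)}
print(choose_stack(steps, "Lindy", scores))
# {'search': 'Lindy', 'enrich': 'Clay', 'notify': 'Lindy'}
```

A 2x-better specialist stays out of the stack, because the cost of adding another tool outweighs a small per-step gain; only the 12x case breaks the chain.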

Bias Toward Simplicity and AI-Native Approaches

If something can be built fully within GPT, say a customized GPT, then by all means we should do it, right? So we don't have a tool bias; we have a bias towards AI nativity and simplicity of approach to the task, which is what I think is important.

Yeah, sure.

Who Is MyDRA For—Companies or Individuals?

So actually, I didn't get the full big picture. So this was MyDRA? MyDRA, yes.

Thank you. It was a very good description.

But OK, is it leaning more towards companies, or is it for individuals as well? So it's a tool meant to be used in a work context, right? And at the end of the day, 90% of the time it's paid for by companies.

And the early adopters, since we launched this recently, who are pushing the platform are heads of ops, heads of digital transformation, innovation leads, et cetera, right? Those are the people deciding, within a company, who should be the V1 holders of the pen on rethinking these workflows.

Yes, but it can totally be used by independent users who want to optimize how they work. And the model is that you learn as you need: you learn as you need, contextualized and adapted to your needs.

Why Traditional Training Doesn’t Stick

Because what's happening now is that you do a training, on, say, Gamma, but then your stack is different.

And what happens is that people do the training, they feel very happy for a week or two, but nothing changes. Fundamentally, nothing changes.

People are not changing how they work at massive scale. We saw recent stats from OpenAI: less than 30% of ChatGPT use cases are work use cases, which kind of shows this.

Contextualizing to Your Stack and Use Case

The importance of MyDRA is putting this in the work context, completely customized to the problem you have at hand, your tech stack, and the specific problem you want to reimagine with an AI-first approach.

Pricing and Tiers

Does it have specific pricing for the different tools?

Yeah, so right now we have three tiers: free, pro, and team. Pro is 59 euros and team is 79.

Model Choice, Accuracy, and Cost Optimization

Currently, and we saw a massive difference here, we also tested a lot of models, as Pedro was saying. We are now actually using Claude Opus, which is by far the most expensive model on the market, so we will need to keep iterating.

But the level of accuracy and depth was miles beyond any other model.

Pricing Context for Suggested Tools

We need to keep optimizing for pricing, because it's aimed at companies and, at the end of the day, it's a productivity tool. As for the tools themselves: it always gives pricing context for the tools we are suggesting.

Onboarding Customization

It does consider open source, yes. On first sign-up, we do a customization of the user profile that runs in a couple of layers.

Company Type and Data Constraints

One layer is company type: enterprise, startup, scale-up, or SME, because that massively influences the depth of tools you can use. We know that in an enterprise use case, for instance, they are completely restricted in where the data runs.

So if they are only allowed to use ChatGPT, you cannot really go very far. Although a lot of the tools we recommend, like Lindy, n8n, et cetera, are orchestrators, so the data is not being exposed in a lot of these instances, right?

Skill Levels: Beginner to Advanced

And you should always use their CRM, et cetera. So we do that customization, and we also capture the user's AI level. We have three levels: beginner, intermediate, and advanced.

If you are advanced, for instance, I would never recommend Lindy; I would very likely go straight to n8n and more open-source tools, et cetera.
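The profile layers described here, company type plus skill level, amount to a routing decision that could be sketched like this. The mapping is a made-up illustration of the idea, not MyDRA's actual logic:

```python
# Sketch: route a user to a tool tier based on skill level and company type.
# Both mappings below are illustrative assumptions only.

TOOL_BY_LEVEL = {
    "beginner": "Lindy",      # strong no-code bias
    "intermediate": "Lindy",
    "advanced": "n8n",        # low-code / open-source leaning
}

def recommend(level: str, company_type: str) -> str:
    tool = TOOL_BY_LEVEL[level]
    # Enterprises may be restricted in where data runs; prefer an orchestrator,
    # since orchestrators avoid exposing the data to the underlying services.
    if company_type == "enterprise" and tool not in {"Lindy", "n8n"}:
        tool = "n8n"
    return tool

print(recommend("advanced", "startup"))     # n8n
print(recommend("beginner", "enterprise"))  # Lindy
```

The point of capturing the profile at sign-up is exactly this: the same business problem yields a different stack for a no-code beginner in an enterprise than for an advanced user at a startup.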

Current User Profile and No-Code Bias

What we are seeing at the moment is that the tool is mostly used by people at the beginner and intermediate levels, where the no-code bias is really extremely important.

Any more questions? Yeah, sure.

Staying Relevant as Users Advance

The only thing I will say, though, is this: we have a couple of routes for how this can go in the future, but if you become very advanced, you will definitely not need the platform anymore. That said, the speed at which new tools specialized to specific use cases are emerging is not going to stop anytime soon.

And the type of user we are approaching is not the type who searches the news every day for what n8n is or what this or that automation tool is. People simply don't know, right?

Continuous Tool Discovery and User Value

So I would say that we stay extremely relevant if we succeed in continually identifying the best tools available to solve the problem, and people get hooked on operating at the best of AI nativity, the best of the tools available.

Platform Familiarity vs. Best Tool Selection

That said, I don't expect people to switch constantly. I like working with Lindy, so for a decent period of time I will have a bias towards working with a platform that I know, right? So there are those checks and balances.

Nevertheless, 90% of the population needs this, and so we will obviously need to keep adding value and developing.

Scaling Adoption Inside Organizations

In the first layer, as I said, the way companies are developing AI is: OK, now there is a group of people who are the AI ambassadors, who are testing these things, doing the hackathons, the demos, and so on, and they need to train others. So it will go in layers. What we want to do is speed that process up a lot, so that you are less dependent on the one person who learned and will now do the demo for half the company, and everyone has the independence and the agency to actually search for the best way available to solve a problem in an AI-first world.

I have a question.

Team and Adoption Dynamics

How many people are working on this project? On this project, we are a team of 11.

Learning Curve and the ‘Addictive’ Effect

Have you found that people learn AI tools far quicker than they learn traditional development IDEs and that kind of thing? What I see is that this is extremely addictive. There is a resistance to start, but then you cannot stop. Once you run one, two, three workflows, you kind of want to change everything about how you're working.

So not at first. And I think a lot of the acceptance we're getting in companies is because people say: oh, but we already did 10 AI trainings, we already did that, and what changed? Really nothing, right?

From Demos to Real Change in 30 Minutes

And then, well, we're doing a lot of demos inside the companies themselves, showing how different this is from a traditional course, and how people can actually test things in 10 minutes or half an hour. It's about: OK, choose one very simple use case and run it.

A More Engaging Way to Work

Because then, like I say, it's very addictive. And I see people falling back in love with their jobs, which is a very curious outcome, because it is extremely intellectually stimulating to redesign a job you have been doing the same way for 20 years.
