Fine-tuning AI Agents based on user roles for enterprise adoption

Introduction

So, good evening everyone.

Thank you, Reginald, for having us here tonight.

Skilder is a project that we started roughly five, six months ago, so we are pretty new in the room.

And I will explain a bit about what we do, show you the platform, and of course I am happy to take questions, feedback, or suggestions to enhance the product afterward.


From automation to agentic AI

So for maybe non-technical people like me, just a quick intro about the shift we are seeing now with AI, coming from the process automation we used to know: platforms where we were just creating workflows, feeding in data, and getting some automation out.

Over the last few years we have then seen a shift to generative AI, where you basically have a prompt, an LLM processes it, and it generates some text, images, or videos for you.

And what we have been seeing since last year, and what is really pushing hard this year, is this agentic AI, where you basically give the AI agent a goal, and it then runs a loop: planning what needs to be done, doing things, and reflecting on the result in order to give you the output.
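That plan, act, reflect loop can be sketched in a few lines. Everything below (the function name, the stand-in plan and act strings, the stopping rule) is a hypothetical illustration of the pattern, not any specific framework's API.

```python
# Minimal illustrative agentic loop: plan, act, reflect until done.
# All names here are hypothetical stand-ins for LLM and tool calls.

def run_agent(goal, max_steps=5):
    history = []
    for step in range(max_steps):
        plan = f"step {step}: work toward '{goal}'"  # plan: decide the next action
        result = f"executed [{plan}]"                # act: call a tool or the LLM
        history.append(result)
        if step >= 2:                                # reflect: stop once the goal is judged met
            break
    return history

steps = run_agent("draft an attestation letter")
print(len(steps))  # the loop ran 3 iterations before stopping
```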

This agentic AI market is going to be huge.

We have heard projections of a 50 billion market in the next few years.

We might soon have three times more AI agents than humans on the planet, just to give you a sense of how many AI agents could be created by builders and by any one of us.

Some people say roughly 20 billion agents will be created in the coming years, and the pace is really high, meaning this market is doubling every year.

Someday, those will be the future you and I, as digital twins.


Anatomy of an AI agent

For non-technical people, maybe just a quick recap of the anatomy of an AI agent.

We like to say that there are two parts to an agent.

The brain, so who the agent is, is usually composed of a prompt that defines what it is, an LLM, a kind of memory, a capacity for reasoning, so planning, acting, controlling, and giving the answer, and some safety guardrails around the agent.

And on the other side, an agent is also composed of capabilities: what it can do, how it can influence the external ecosystem.

Currently this is mostly three things. First, the skills: instructions you give it as context to perform a task. We'll come back to this.

Second, the tools and connectors: the ability to connect with other systems, whether through an API or an MCP server, the protocol developed by Anthropic.

Third, access to a knowledge base, or RAG, which you may be familiar with.
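The two halves described above, the brain and the capabilities, can be sketched as a simple data structure. The field names below are illustrative assumptions, not Skilder's actual schema.

```python
# Sketch of an agent's anatomy: the "brain" (prompt, model, memory,
# guardrails) plus the capabilities (skills, tools, knowledge).
# All field names are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class Brain:
    prompt: str                                   # who the agent is
    model: str                                    # the underlying LLM
    memory: list = field(default_factory=list)    # conversation / long-term memory
    guardrails: list = field(default_factory=list)  # safety rules around the agent

@dataclass
class Capabilities:
    skills: list = field(default_factory=list)    # task instructions given as context
    tools: list = field(default_factory=list)     # APIs, MCP servers, connectors
    knowledge: list = field(default_factory=list)  # knowledge bases / RAG sources

@dataclass
class Agent:
    brain: Brain
    capabilities: Capabilities

agent = Agent(
    brain=Brain(prompt="You are a credit officer.", model="any-llm"),
    capabilities=Capabilities(skills=["credit-monitoring"], tools=["mcp:crm"]),
)
print(agent.capabilities.skills[0])
```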

The scaling problem with today’s agents

Now, the problem with AI agents today, or at least our point of view on it.

The thing is that there is a lot of duplicated effort. When you create an agent, you need to create that whole set of things I mentioned before, so agents are rebuilt more or less from scratch every time.

They are also loosely governed: today there are relatively few solutions to control what an agent can access or do. It's a part of the market that is currently being targeted as well.

And it doesn't scale well when it comes to adding new capabilities, because you need to change them as files within your agent. So for each agent, you need to rework these famous tools and skills attached to it.

Also, the collective learning from the agent's usage, what it has performed, the context it has processed, is often thrown away at the end of the transaction.

So what we see today is that AI agents have huge potential, but adoption is still pretty low, and scaling AI agents is something that will become a real question in AI in the coming months and years.

Duplicate effort, limited governance, and poor reuse

What we see at Skilder is a kind of n times m problem with capabilities.

Today, as I mentioned, capabilities are defined per agent. So if you take this agent, it gets some skills and some tools; this other agent has its own set of tools. That is basically not something meant for scaling when it comes to managing all those capabilities.

So what we have thought about at Skilder is addressing this n times m problem.

Skilder’s approach: reusable “hats” (roles) for capabilities

And what we created is what we call a hat, or role. The idea is really to decouple all those capabilities from the agent, from the brain, to make them a kind of reusable asset that you can share among all the agents.

So basically, you take the tools, the skills, the knowledge, and you create a role. I will give you an example later: it can be a credit officer, or an admin assistant that you want to perform some live process within an organization.

By doing so, you enable your AI agent to wear those different hats when it needs to. It offers the scalability that an AI agent can just grab a new hat depending on what you need to execute: it can be a legal assistant in the morning after being a compliance officer, and then work as a facility planner for manufacturing.

By decoupling those things, you really enable the power of using your AI agent in different contexts.
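The n times m argument can be made concrete with a small counting sketch. The agent and capability names below are made up for illustration; the point is only that per-agent bundling means maintaining n*m definitions, while shared roles reduce this toward n + m.

```python
# Illustrative n-by-m sketch: if every agent bundles its own copy of every
# capability, you maintain n*m definitions; sharing roles ("hats") means
# each capability lives once. All names here are hypothetical.

agents = ["support-bot", "finance-bot", "hr-bot"]                        # n = 3
capabilities = ["crm-lookup", "report-template", "policy-kb", "email"]   # m = 4

# Per-agent duplication: every agent carries its own capability definitions.
duplicated = {a: list(capabilities) for a in agents}
maintained_copies = sum(len(caps) for caps in duplicated.values())       # n * m

# Shared roles: capabilities are defined once inside reusable hats,
# and agents merely reference the hats they may wear.
hats = {
    "admin-assistant": capabilities[:2],
    "compliance-officer": capabilities[2:],
}
shared_definitions = len(agents) + sum(len(c) for c in hats.values())    # n + m

print(maintained_copies, shared_definitions)  # 12 vs 7
```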

How hats are packaged, distributed, and governed

How does it work? I will jump into the product right after this. We package those capabilities into those hats.

They can be distributed across any kind of LLM, so you can put them in Anthropic, in OpenAI, in Copilot. The user does not have the friction of changing the interface they are used to; we just tag along with the existing systems that are already in place.

Then it's governed, meaning you don't have agents built one way in the marketing team and another way by people in finance: you have all those capabilities as a single source of truth on Skilder, usable by every employee of the company. So you enforce some governance principles.

And what we are looking at is that, since we have all the transactions between the user and the AI agent, you now have the history of everything that has been exchanged. We want to build a kind of self-learning engine that can feed that pipeline, so the agents can self-learn and self-improve down the road based on the context you are giving them.


So, enough talking; I will jump into the product just to show you a bit, as a power user.

Product walkthrough: building skills and roles in Skilder

When you arrive on Skilder, that's the beginning of the journey.

You have here what we call the studio. This is a place to interact with the LLM and ask it to help you create a skill for a process. So basically, I can ask it: okay, can you help me create a skill to handle a credit request?

We're in discussions with a bank, and they basically want to automate the monitoring of the loans they have granted. Here, the idea was to see how we can generate a skill that represents the capacity to apply the bank's policies: check the data they have about the customer and see whether it fits the policies.

So basically, it assists you in creating those skills, those processes, mapping your internal policies. And then it also adds the related tools, the systems you need to perform those actions: the data may be in a database, you may have a template for how you process your credit monitoring somewhere else, and the financial reporting may be in yet another database, the kind of tools that Recarta is working on.

So you build all this with the studio.

Studio, connectors, and capability configuration

For the technical people, you can then set up the configuration of the systems you require. So basically, if you want to connect any kind of database or any kind of ERP, technical people can configure it here once, and it will be auto-discovered in the studio when you want to create a new capability.
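The "configure once, auto-discover" idea can be sketched as a tiny registry. The function names and connector configs below are hypothetical illustrations, not Skilder's actual API.

```python
# Hypothetical sketch of "configure once, auto-discover": a technical user
# registers a connector a single time, and the studio later lists it when
# someone builds a new capability. All names here are made up.

connectors = {}

def register_connector(name, config):
    # One-time setup by a technical user (e.g. a database or an ERP).
    connectors[name] = config

def discover_connectors():
    # The studio discovers everything already configured, with no re-setup.
    return sorted(connectors)

register_connector("hr-database", {"host": "db.internal", "port": 5432})
register_connector("erp", {"base_url": "https://erp.example/api"})

print(discover_connectors())  # ['erp', 'hr-database']
```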

Then you have the list of skills you have created. It's a horizontal product: you can really address needs in admin, but also in sales or in production. All those skills are then combined into those famous hats I mentioned, which are, at the end of the day, a digital twin of a position in a company. So you can imagine creating an org chart and having those digital profiles, or hats, mapped to it. And this is what the LLM will consume depending on the user request.

Demo: school administration use case in Copilot

A quick demo here: that's the case we have with a school. They actually receive a lot of requests to let kids out of school before the holidays, and that happens on a recurring basis. The admin team then needs to see which class the kids are in, provide some information, and issue an attestation saying whether or not they accept that the kid leaves. So it's basically a kind of admin hat for school kid management.

Here we have created those hats, and they have been introduced into Copilot. What you will see is the end user just saying: I have received a request from this person, please proceed with the creation of the attestation for this case. And it works with the usual tool they have, which is Copilot.

So here, basically, you see that they want to give Charlotte the authorization. Behind the scenes in Microsoft Copilot, what happens is that Copilot calls the Skilder platform and discovers the different hats and skills it needs to perform the action. There is access to a database, there is access to a template, and at the end, in just a breeze, the LLM, or the agent behind it, is able to generate the letter, which would otherwise have been a manual process done by the school secretary.
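The demo flow just described, front end forwards the request, the platform discovers a matching hat, and its skills run in order, can be sketched like this. The hat name, the keyword match, and the skill names are all invented for illustration.

```python
# Hypothetical sketch of the demo flow: the chat front end (e.g. Copilot)
# forwards a user request; the platform discovers a matching hat and runs
# its skills in order. Hat and skill names are made up for illustration.

hats = {
    "school-admin": {
        "keywords": ["attestation", "authorization"],
        "skills": ["lookup-class", "fill-attestation"],
    },
}

def handle_request(user_request, hats):
    # Discover a hat whose keywords appear in the request (naive matching).
    for hat in hats.values():
        if any(kw in user_request for kw in hat["keywords"]):
            # Run each skill in order; here they just annotate the output.
            return [f"ran {skill}" for skill in hat["skills"]]
    return []

result = handle_request("please create the attestation for Charlotte", hats)
print(result)  # ['ran lookup-class', 'ran fill-attestation']
```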

So that's a quick and simple example of what you can do, but beyond this use case, you can imagine scaling this to other activities of the person. And it definitely helps address the 70% of tasks that are usually low value but still take people a lot of time in their jobs.

Where Skilder fits in the landscape

So yes, a bit of category comparison. We are sitting in a space that is a bit different from coding agents.

So you can code everything yourself; you just need the appropriate skills, and for an engineer, it's maybe a no-brainer.

What we propose here is more of a level of abstraction for people who are keen to leverage AI capabilities and compose them, and to be able to create once and reuse everywhere.

Compared to coding agents and workflow automation tools

On the other hand, you have the traditional AI workflow tools; you may have tried n8n, Make, and others, but they still require some expertise in how you chain the different nodes and modules. And they are pretty rigid, whereas working with hats and skills offers some flexibility with relatively similar output, giving the agent the ability to choose what it needs.

The key benefits, I will not go into detail.

Benefits for end users and organizations

What we have seen is that there is a barrier for end users to use AI beyond just translating emails and making jokes about their schedule for the week. This is because they need more context, and they need the kind of skills and tools that allow them to perform the jobs they do on a daily basis, rather than a generic AI that does not understand their context.

For organizations, it's again all about governance and security: avoiding shadow AI everywhere because someone is building something on their own that the team next to them cannot handle. By using a platform like Skilder, you can ensure that everyone has tools that are approved and governed the right way.

Positioning and differentiation

Our positioning: of course there is competition, and I think that's a fair point. We see ourselves differentiating on two dimensions.

We do not consider ourselves an agent builder; as we focus on those capabilities, we believe we are more of an enabler of AI agents, because they can consume all the capabilities created on the platform.

And we are also model-agnostic, meaning it works with any LLM on the market, compared to the big players which leverage their own model, because that's their business model for making money.

So, why Skilder, just to close this discussion, and then I am happy to take questions.

Conclusion: Swiss-by-design, model-agnostic, built to scale

So, it's Swiss by design. That's something we see a lot with customers around here, especially mid-caps or larger enterprises: everything is hosted in Swiss data centers, and we can also leverage models that are based here in Switzerland, thanks to the API of Infomaniak, for example.

There is no lock-in in terms of LLM; we interface with every big agentic framework that exists. Our aim is to create something that gets smarter over time and is contextualized to every company, and it's definitely built for scaling in organizations.

So that's a bit of the vision we have with my co-founders here at Skilder. Thank you again for listening.
