I started my career as a computer engineer in Bologna, Italy. My first company worked on GIS, Geographic Information Systems, close to Rimini, where I was born. After that first company I joined Italiana Software, another company in the Bologna area, where they built a new language for microservices; I was the first employee. Then I moved to a start-up close to Geneva, near the CERN area, in the Pépinière technopark at the border between France and Geneva, where we worked with drones and sensors, doing IR inspections. I am going very fast now; maybe later I can give details. Then I worked at Station F in a startup called Pixis, on career upskilling and reskilling, and on a European reskilling project. Since 2019 I am independent, working as a transitional CTO, a fractional CTO. Before that I was a senior engineer and architect and then became CTO, and now I do missions specifically in AI and agentic AI.
Microservices were the focus of Italiana Software, a company that actually built a new language for microservices: not libraries that you import, but a language constructed with primitives for microservices, like aggregation and composition. We were building orchestrators as well.
After this experience I recently moved to deploying agents, so you can deploy an agent like a microservice; that is one link to what you asked. And yes, there is still orchestration: in microservices there is orchestration too, but also systems with events and even streams. With agents you have another layer: in microservices you don't have certain degrees of freedom, while an agent has more autonomy. So it's an evolution of the concept; I'd say an agent, compared to a service, is a higher-level concept in software engineering.
Actually I prepared a full presentation with the path and the main steps. When I was at Terra B we worked with drones and sensors and did IR inspections with drones: we prepared mission plans and reconstructions of terrains and of different environments or buildings, also biodiversity checks for natural parks, and here a reconstruction of a golf course. At that time I wasn't yet a CTO; I was a senior engineer.
Previously, for microservices, I was in the company whose two founders created this orchestration language. The technical part of the audience can follow here: you create a microservice with a proper language, with its own keywords, tailored for the domain. You can have types with constraints like regular expressions directly in the type definitions.
At Pixis, at Station F, we worked on reskilling and upskilling, using the SDGs (ODD in French), the Sustainable Development Goals; this could be interesting for what Chloé was mentioning before. Then in 2024 I started a prototype of a multi-agent application. It is an internal project where you have an infrastructure with different layers; the last layer is an orchestration layer between the different agents, so you can build multi-agent applications. But it's still an internal project, not yet in production for anybody.
So I provide different kinds of solutions for AI, like customised RAG, the thing I did most in the last year. You customise a pipeline where you have a vector database and possibly also a graph database; I elicit the requirements from the customers, identify the critical points, and customise the system. This is what I did mostly last year and at the beginning of this year.
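As a rough illustration of the retrieval step in such a customised RAG pipeline, here is a minimal sketch; the bag-of-words cosine similarity is a toy stand-in for a real embedding model and vector database, and all names and documents are hypothetical:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query; a real pipeline would
    # query a vector database (and possibly a graph database) instead.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "invoices are stored in the finance database",
    "the onboarding checklist lists tasks for new employees",
    "drone missions are planned from terrain reconstructions",
]
print(retrieve("tasks for a new employee onboarding", docs, k=1))
```

In a real engagement the elicitation phase decides which stores, chunking and re-ranking steps replace this toy ranking.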
I'm working now on a tool orchestrator, based on MCP, because there is this trend in which you no longer have an API platform but an MCP platform: you connect directly over MCP and the platform provides you services already; they may have an MCP gateway you connect through. So it is no longer just the developer writing prototypes: there is an actual orchestrator that calls different tools. It is different from an agent orchestrator; these are two different orchestrators. Here a single agent orchestrates different commands across MCP servers. A simpler version of this uses plain function calling in the LLM, which is the previous generation of the same idea.
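A minimal sketch of this single-agent tool orchestration, with the LLM and the tools stubbed out (all names hypothetical); a real system would route the calls through MCP servers rather than local functions:

```python
import json

# Hypothetical tools the orchestrator can call (stand-ins for MCP servers).
def get_weather(city: str) -> str:
    return f"sunny in {city}"

def get_time(city: str) -> str:
    return f"12:00 in {city}"

TOOLS = {"get_weather": get_weather, "get_time": get_time}

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model: always picks the weather tool.
    return json.dumps({"tool": "get_weather", "args": {"city": "Geneva"}})

def orchestrate(user_request: str) -> str:
    # One step of the loop: ask the model which tool to call, then call it.
    decision = json.loads(fake_llm(user_request))
    tool = TOOLS[decision["tool"]]
    return tool(**decision["args"])

print(orchestrate("What is the weather in Geneva?"))
```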
I also consider multi-LLM systems, where there are many LLMs and you compare their results in different ways: you can let them vote, or compare the answers and take the best solution, best-of-n, different kinds of patterns. These interaction patterns were the idea when I created that product in 2024: you can inject an interaction pattern into the orchestration layer, and then the distributed team of agents coordinates in that way. Multi-agent systems, as they are conceived now, still have many issues; we will see them later. When I work with clients I also provide some customisation tooling and, mainly at the beginning, an ROI calculator to check feasibility, because there are three main problems with these systems: reliability, feasibility in terms of ROI, and security. So there are some typical trade-offs when I design a solution.
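The multi-LLM selection patterns mentioned above, voting and best-of-n, can be sketched like this, with the model answers replaced by canned strings and a hypothetical judge function:

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    # Voting pattern: pick the answer most models agree on.
    return Counter(answers).most_common(1)[0][0]

def best_of_n(answers: list[str], score) -> str:
    # Best-of-n pattern: pick the highest-scoring answer under a judge.
    return max(answers, key=score)

# Stand-ins for three different models answering the same question.
answers = ["42", "42", "41"]
print(majority_vote(answers))
# A real judge would be another LLM or a task-specific metric; here a
# trivial length-based scorer stands in for it.
print(best_of_n(answers, score=lambda a: len(a)))
```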
The main key element is control.
Who has the control of the control flow of the application?
Is it the LLM or is it some sort of deterministic part of the code?
That's an interesting point, because at the beginning of my career, when I was at university, I studied multi-agent systems, many years ago. At that time there were no LLMs. So there were formal systems, formal negotiation protocols between agents, protocols like FIPA with its agent communication languages, and there were many frameworks and simulators with agents, but the definition of agent was different. At that time the definition from academia, from the historical papers, was: an agent is an entity situated in some environment and capable of autonomous action in order to meet its objectives, its goals. That is the weak definition. Then we also had an informal definition: a computer system capable of independent action on behalf of its owner. These are from Wooldridge, one of the references I was studying at university. And then there is the strong definition, based on beliefs, desires and intentions. These three are very old, an old view; I am making a flashback now.
So, coming back to now: why this flashback? Because at that time everything was deterministic, and the interaction with the user was not conversational or fluent; it remained something academic. Now, when we talk about agents, we usually talk about control given to an LLM, and that creates non-determinism and many problems. So when I design a solution I have to make trade-offs: which part is deterministic and which part is non-deterministic; how much we rely on natural language versus structure and typed rules, the typical systems from before LLMs; and also the context size and the pertinence of the context, keeping the context relevant. These are the trade-offs I take into consideration when I think about a solution.
This is one example, a custom RAG: you have ingestion and, very importantly, evaluation, so you can probe each step, section off different parts of the system, and see how well they are performing, whether the retrieval is correct, accurate and precise. Then you have real-time ingestion, when the user sends documents during execution, at runtime. There are many variations here, which is why the elicitation phase is so important for me: first understand, then choose the proper implementation.
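Probing one stage in isolation, for instance checking whether retrieval is accurate and precise, can be as simple as scoring the retriever against a hand-labeled gold set. This sketch assumes such labels exist; the document names are hypothetical:

```python
def precision_recall(retrieved: list[str], relevant: set[str]) -> tuple[float, float]:
    # Compare what the retriever returned against a hand-labeled gold set.
    hits = sum(1 for d in retrieved if d in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

retrieved = ["doc_a", "doc_b", "doc_c"]   # what the pipeline returned
relevant = {"doc_a", "doc_c", "doc_d"}    # what a human marked as relevant
print(precision_recall(retrieved, relevant))
```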
This is an example.
I masked a few things because it's very simple, but the client gives you all the documents and the connections to some databases, and they have some typical questions; then they can ask questions about the project they are doing. There are methodologies involved, so the system has to learn the ontology of the company and of the specific project, to contextualise each prompt internally. So you have this very simple example, but it's tailored for one company, one domain, one specific use case.
General agents can be difficult to tame, but when you target something specific, the first phase, the analysis of the customer's situation and needs, is very important. You can tell me the timing... okay. So, another thing, as I was saying before: now there are platforms where your system no longer calls APIs directly but calls MCP instead; remote MCP servers are already deployed on the platform.
Is the idea of an MCP clear?
Is everyone familiar with MCPs here?
It's the interconnect.
Sorry?
It's the interconnect.
Yes.
It's like an API.
An API.
It's an API for LLMs.
Okay.
Sorry.
I assumed there were mostly CTOs here, and there are many, but maybe many are not CTOs. So, sorry, and thank you for highlighting that. MCP is the Model Context Protocol, a protocol that helps an LLM use tools and take actual actions, not only retrieve knowledge.
Now I'm discussing, it hasn't started yet, but I have some prototypes for calling remote MCP servers and also a local MCP server that calls the platform's API. Sometimes you can do both, so that if something fails you have a more robust solution: you can use both the local MCP server, local to the agent that orchestrates the different tools, and the one deployed on the platform. And many providers, like Kong, the very famous API gateway provider, are offering their own MCP gateway: each call to the platform passes through the gateway, and the system gets a central point to enforce some rules or some
checks, etc. Sorry, the image is not graphically very nice, but here is the main concept: you have a single agent, let's call it a tool orchestrator. This agent can be just a simple loop, or it can be part of another protocol, like an agent-to-agent protocol. The agent has an MCP client, and the MCP client establishes a connection with the remote MCP server, which can be a single streaming HTTP connection. The MCP servers are deployed on the platform, and the platform internally also has an API, so when needed it is also possible to go through the API, or to use an MCP server local to the agent.
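A sketch of that dual-path idea, with stubbed clients in place of a real MCP connection and platform API (all names hypothetical):

```python
class MCPUnavailable(Exception):
    pass

def call_remote_mcp(tool: str, args: dict) -> str:
    # Stand-in for an MCP client over a streaming HTTP connection;
    # here it always fails, to exercise the fallback path.
    raise MCPUnavailable("remote MCP server not reachable")

def call_platform_api(tool: str, args: dict) -> str:
    # Stand-in for the platform's classical REST API.
    return f"api result for {tool}"

def robust_call(tool: str, args: dict) -> str:
    # Prefer the MCP path; fall back to the direct API for robustness.
    try:
        return call_remote_mcp(tool, args)
    except MCPUnavailable:
        return call_platform_api(tool, args)

print(robust_call("schedule_onboarding", {"employee": "Margaret"}))
```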
And here there is the user and the classical chat interface, this is an example.
And then another example: a rather long ROI calculator that I use at the beginning with some customers. You design some usage scenarios for the platform: here are two different user roles, the type of interaction they have with the system, how many tokens we think they will use per chat, how many chats a day, and so on. So it's a long, scenario-based ROI calculator; I usually do this at the beginning.
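The core arithmetic of such a scenario-based calculator might look like this; the numbers and parameter names are illustrative, not taken from any real engagement:

```python
def scenario_cost(users: int, chats_per_day: int, tokens_per_chat: int,
                  cost_per_1k_tokens: float, days: int = 30) -> float:
    # Monthly token cost for one usage scenario (one user role).
    total_tokens = users * chats_per_day * tokens_per_chat * days
    return total_tokens / 1000 * cost_per_1k_tokens

def roi(monthly_benefit: float, monthly_cost: float) -> float:
    # Classic ROI: net gain over cost.
    return (monthly_benefit - monthly_cost) / monthly_cost

cost = scenario_cost(users=50, chats_per_day=4, tokens_per_chat=2000,
                     cost_per_1k_tokens=0.01)
print(round(cost, 2), round(roi(1000.0, cost), 2))  # → 120.0 7.33
```

A full calculator would sum several such scenarios and add infrastructure and integration costs.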
And then I come back to the definition of agent because it's a key element.
So, this one is from Anthropic, but it's very well established.
Workflows are systems where LLMs and tools are orchestrated through predefined code paths. So workflows are deterministic pipelines with some steps in which you call an LLM; they have some non-determinism, but they are still quite stable.
Agents, and I will call them LLM agents to distinguish them from the original academic definition, have a more practical definition: systems where LLMs dynamically direct their own process and tool usage, maintaining control over how they accomplish the task. There are other mainstream definitions along the same lines, all very practical: an LLM-based agent is an artificial entity with prompts and specifications, a conversational trace, maintained state, and the ability to interact with environments, for example through tool usage. The tool is the element used to interact with and act on the environment, in the typical reinforcement-learning-style loop. As for multi-agent systems: if we think of agents under the academic definition, we have more reliable and rigid systems; if we think of agents under the new definition, there is some part of non-determinism.
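The workflow/agent distinction above can be sketched as two control-flow shapes, with a canned stand-in for the model: in the workflow the code owns the loop, while in the agent the model's choice does. Everything here is hypothetical illustration:

```python
def llm(prompt: str) -> str:
    # Canned stand-in for a model call.
    if "which tool" in prompt:
        return "lookup" if "?" in prompt else "done"
    return f"summary of: {prompt}"

def workflow(text: str) -> str:
    # Workflow: the code owns the control flow; the LLM fills in each
    # predefined step.
    extracted = llm(text)
    return llm(extracted)

def agent(task: str, max_steps: int = 3) -> list[str]:
    # Agent: the LLM owns the control flow, choosing actions in a loop
    # until it decides it is done.
    actions = []
    for _ in range(max_steps):
        action = llm(f"which tool next for: {task}")
        if action == "done":
            break
        actions.append(action)
        task = task.replace("?", "")  # environment feedback changes the task
    return actions

print(workflow("x"))
print(agent("find the HR policy?"))
```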
So why multi-agent systems? We can use them for complex cases, because multi-agent systems can model complex realities, complex systems: you can decompose a system where each element is proactive, or at least reactive, and there are continuous loops. They can also be used when there is heavy task parallelisation. A sequence of many LLM agents can increase hallucinations, because a chain can amplify the hallucination problem; but if you have a specific problem that needs parallel tasks, it can be interesting to use these systems. You can also split responsibility between different agents, give each one only specific tools, and then each can maintain a more coherent, better-adapted context. That can be another way to use them proficiently. And when you have to interface with numerous complex tools, you can use this separation.
But in reality, as we will see, many studies now say that there are many issues with multi-agent systems.
For example, we have system design issues, so even if you put a prompt with a role, a task, they don't follow it.
They just disobey, let's say.
They don't follow what is written in natural language.
They can repeat some steps without a real reason, or not as you intended. They can lose part of the history, or be unaware of termination conditions.
Another element is inter-agent misalignment. When the contexts reach a certain size, with many tools, they get compacted or reset, so there can be resets of the conversation.
I would have a question about this.
How would you solve this problem?
How would you improve alignment between two agents?
At the moment, in my system, I basically have a deterministic part, so the agents interact through a deterministic part, and they can choose which part of their own context to share in a common context. This way it is more reliable, because much of this inter-agent misalignment is due to the fact that everything is written in natural language; if you add some steps that constrain the system with a formal syntax, you have fewer of these problems.
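One way to sketch that idea: agents publish typed facts into a deterministic shared context instead of exchanging free-form prose. All names here are hypothetical, not the actual internal design:

```python
from dataclasses import dataclass

@dataclass
class SharedFact:
    # A typed, formal slice of context one agent chooses to publish,
    # instead of pasting its whole natural-language history.
    source_agent: str
    key: str
    value: str

class SharedContext:
    # Deterministic common context the agents read from and write to.
    def __init__(self):
        self.facts: dict[str, SharedFact] = {}

    def publish(self, fact: SharedFact) -> None:
        self.facts[fact.key] = fact

    def lookup(self, key: str):
        fact = self.facts.get(key)
        return fact.value if fact else None

ctx = SharedContext()
ctx.publish(SharedFact("hr_agent", "start_date", "next Monday"))
print(ctx.lookup("start_date"))  # another agent reads a constrained value
```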
Perhaps a few questions from the audience? Yes, okay; after this we will continue with another question.
In your presentation you distinguish two types of agents: the actual agents, which are LLM agents running on their own and deciding what to do, and the workflows, which are basically hard-coded programs, totally deterministic. My question is: given that LLM agents are heavily prompted to be directed to a specific task and to produce specific results, how do you feel this is not a hidden workflow rather than an autonomous agent working by itself?
Let me reframe the question: what is the difference between a typical workflow and an agent that is instructed, in natural-language text, to follow some steps? Why isn't that actually a workflow, just written in prose? A workflow is a sequence of steps, and you can give an agent a sequence of steps, so it depends on your flow; but the agent can be wrong, it can hallucinate. An agent is a loop, and an agent can trigger deterministic workflows: you can have an agent that triggers different workflows, each one a use case or a feature of the company. The agent is usually considered higher level because of some sort of autonomy: the agent can say no if it wants, while the workflow cannot, because you trigger it and it starts; it is purely reactive. This is a metaphor from humans, but I think it gives the idea. I don't know if that answers your question.

So functionally you end up falling back on traditional programming techniques: the pseudo-code, if you will, that you pass on to the agent. You use that structure and say, this is a multi-step sequential process, and you let it work through the individual steps, as in traditional programming, but you leave that to the agent?

Some parts are traditional, yes, and each time I try to see how much of the deterministic part I use and how much I can open up to the non-deterministic part. Are there other questions about this one? So, about the
issues of current multi-agent systems: tasks can derail, and this is very common for me. Even with the best models, at a certain point they stop answering exactly what you meant; maybe you didn't explain it perfectly, there are implicit assumptions you didn't put in the prompt, but sometimes they really hallucinate, because they can lose the most relevant part of the context, and after a certain percentage of the context is filled they start to drift. Then there is information withholding: they may not consider all the information, or they keep some information and don't use it; this is an effect we see sometimes. They can also ignore other agents' input. This is a group of topics you can find across different frameworks, because there are many of them: there is CrewAI, there is LangGraph, and by now every main company has its own agent framework. So this is very generic, it cuts across frameworks.
Then reasoning-action mismatches: an action that doesn't correspond to what the agent was reasoning about just before. And task verification issues: you can have premature termination, incomplete verification of what the agent did, or results that are simply incorrect. All of this can happen, and actually does happen, with LLM multi-agent systems under this definition of agent. There are things we can do to cope with it, but there is still no solution that covers everything. Inter-agent misalignment happens because when agents talk with each other they can misalign, and then they can break the main goal that was split between the different agents. Yes, sure, can we go to the case study?
Yes. So, the case study is an example; it is not something I deployed anywhere for any client. It is just an example to show one protocol for multi-agent systems, where one agent doesn't only coordinate different tools but coordinates different agents: it's coordination between agents, not between tools. Again, these images are not very nice, but this is what we have today. So there is A2A, the agent-to-agent protocol from Google, and this is its essential ontology, the key elements of the protocol. You have a discovery phase at the beginning: the agents are usually distributed, deployed as API endpoints for example, and you have an agent acting as client, the initiator of the interaction. At the beginning it searches for the proper agent it needs, so it can go to a registry, usually a centralised registry. But often when you start you already know what you need, and when you develop the first example you just put the agent card, the descriptor of that agent, directly inside your code. Usually, though, you find it at runtime with the discovery phase, much like an SOA service, where you have a registry you go to. The card contains all the key information about the agent: the endpoint, the skills, and so on. It contains the identity, because an agent also needs an identity, so as to be unique and searchable from everywhere; the endpoint; the provider; the skills, so the other agent knows what it can ask it to perform; the security schemes, and so on. And this is an example of execution: the client agent, the initiator, asks for a task.
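A simplified agent card might look like this; the field names follow the spirit of the A2A card but are abridged and partly assumed, and the endpoint is hypothetical:

```python
import json

# A simplified agent card: the descriptor another agent fetches during
# discovery. Field names are an abridged sketch, not the full spec.
agent_card = {
    "name": "hr-assistant",
    "description": "Coordinates onboarding tasks for new employees",
    "url": "https://example.com/agents/hr",   # hypothetical endpoint
    "version": "1.0.0",
    "skills": [
        {"id": "schedule-onboarding",
         "description": "Schedule onboarding activities for a start date"}
    ],
    "security": {"schemes": ["bearer"]},
}

print(json.dumps(agent_card, indent=2))
```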
This is mainly for task delegation; the interaction pattern is task delegation. I have a task, I send it to another agent that has the resources and the capability to perform it, and it gives back the result. It's a very, very simple example here.
So the task is the key element. The task is a unit of work, and basically the client agent sends a request to perform this task to the server, because in the first part it got the information about which tasks the server agent can perform. The server agent may run a longer process to perform the task, so there are some differences, and there can be exchanges between the two agents. These exchanges are all linked to the task; it's like two humans that want to perform a task together: the server agent can ask for more information and have an interaction. All of this interaction is tracked in the task in the form of messages, messages in natural language, usually English, or French. And each message is composed of parts, like files, images, text, etc.
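The task/message/part structure described here can be sketched with simple data classes; this is a simplification of the actual protocol objects, with hypothetical field names:

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    # One piece of a message: text, a file reference, an image, etc.
    kind: str
    content: str

@dataclass
class Message:
    role: str            # which side sent it: "client" or "server" agent
    parts: list          # the Parts composing the message

@dataclass
class Task:
    # The unit of work; every exchange about it is tracked as a message.
    task_id: str
    messages: list = field(default_factory=list)

task = Task("onboarding-001")
task.messages.append(Message("client", [Part("text", "Schedule onboarding for Monday")]))
task.messages.append(Message("server", [Part("text", "Which department?")]))
print(len(task.messages))  # → 2
```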
When the agent generates something, it generates an output, and the output is called an artifact; in the agent world it can be a file, it can be code, etc. And maybe, if we have time, the rest of the example.
So this is the example with the coordinator. The coordinator has a prompt saying how to coordinate the different agents; the agent-to-agent protocol I showed before sits between these two elements, and each agent is an MCP tool. The user asks a simple question in the chat, and there is also a protocol to interface with the coordinator, because here we have events to interact with the graphical user interface. So we are talking about this part here; this is an example, I took the screenshots. This agent is an HR assistant: the coordinator is an HR assistant, and the HR manager asks it to start some activities, because maybe a new employee has arrived and there are specific tasks to do when someone arrives. So she asks: schedule onboarding for the new employee starting next Monday. This agent here is the A2A part of the coordinator, because the coordinator is also a protocol mediator: there is one protocol for the GUI interface and one protocol for A2A. That's why you see Margaret twice; it is the same Margaret, and this green element is the same: it rephrases the question. Then you have a finance agent, an IT agent and a building manager agent preparing tasks for the arrival: one prepares the desk, finding the position where to put the person's desk, and here they are preparing the email, and so on. Then there are some typical workflows that can be enabled: during the interaction, some workflows are more probable, more likely, than others, so ideally there is a notion of state; but not in this case, these are simply put here as an example.