Roundtable Discussions: AI in Practice

Introduction: From Pilots to Production AI

So in the meantime, I would like to ask Matteo: based on your experience, what actually makes the difference between pilot projects that fail and solutions like the one you showed us, WSBI, that generate real, scalable value in production? In other words, what separates pilots from real use cases such as yours? I will start from there.

What Makes AI Projects Scalable?

Start with Scope and a Real Business Case

Well, nowadays everyone expects AI to be part of a solution. So I would say that one of the main things is to define a clear scope. The lack of one is maybe the reason there is so much experimentation with AI without a concrete business case. What also usually happens is that AI — or at least large language models, not AI in general — is used for tasks the model was not trained for, like asking it to compute a mean. LLMs are made for language, and that is basically what we should ask them to do, not analytics tasks.

Use LLMs for Language, Not Analytics

So you need a clear separation between the data itself, which has to be trustworthy, and the natural language interface, which is then provided by the AI.
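The separation Matteo describes — trusted code for the numbers, the LLM only for the language — can be sketched as below. This is a minimal illustration, not his actual system; `ask_llm` is a hypothetical stand-in for any chat-completion call.

```python
import statistics

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion API call.
    return f"(LLM reply to: {prompt})"

def answer_question(grams_per_tray: list[float]) -> str:
    # Step 1: the analytics are computed by trusted, deterministic code,
    # never by the language model.
    mean = statistics.mean(grams_per_tray)
    # Step 2: the LLM only turns the verified number into natural language.
    prompt = (
        f"Explain in one friendly sentence that the average food waste "
        f"per tray is {mean:.1f} grams."
    )
    return ask_llm(prompt)

print(answer_question([120.0, 80.0, 100.0]))  # the mean, 100.0, comes from code
```

The point of the design: if the LLM hallucinates, it can only mangle the phrasing, never the number.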

Getting the Right Data Into the Model (Fine-Tuning, RAG, MCP)

The debate about LLMs started with how to provide the right data. At first it was fine-tuning, adapting the model to a context; then the fashion was RAG, retrieval-augmented generation; now it is the Model Context Protocol (MCP). So providing the right data to the model is probably the simplest way to answer this question: not asking the LLM to generate the data itself, but using the data to answer a question.
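A toy sketch of the retrieval-augmented idea mentioned above — the documents and names are invented for illustration. The retrieval step here is a crude keyword overlap (real systems use embedding similarity), but the shape is the same: fetch trusted data first, then ask the model to answer from that data only.

```python
# Invented mini knowledge base.
DOCS = [
    "WSBI enriches patent data with smart summaries.",
    "The canteen camera measures food waste per tray.",
]

def retrieve(question: str) -> str:
    # Crude retrieval: pick the document sharing the most words
    # with the question (real systems use embedding similarity).
    words = set(question.lower().split())
    return max(DOCS, key=lambda d: len(words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    context = retrieve(question)
    # The model must answer from the supplied data, not invent its own.
    return f"Answer using only this context:\n{context}\nQuestion: {question}"

print(build_prompt("How is food waste measured?"))
```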

The Hard Part: Evaluation and Guardrails

Measuring Quality When Models Are Opaque

Can I also ask you: what was the most difficult part of this process? What were the main challenges? Some of them you already explained, but how difficult was training these systems, and adjusting the results — because you never finish adjusting the results, right?

Yeah, the big challenge is the evaluation itself. With traditional machine learning — specifically supervised machine learning, and I don't want to get too technical — you can derive metrics to assess whether the model is performing its task well or not. When it comes to using an LLM, which is often not even open source, you don't really know the tool you are using, so basically it is about building constraints — what people nowadays call guardrails.

Controlling Risk with Constraints and Human-in-the-Loop

These guardrails define the conditions under which the risk is controlled, so it's like moving from a pure bet to a controlled risk. There are technical things you can do, like setting parameters such as the temperature, which basically tells the model not to be too creative, and you can control the way the model's output is structured. So there are technicalities, but at the very end it is about applying all the tools you have: checking the data, using the technical configuration the model gives you, and keeping the human in the loop, as was said. That is probably — I mean, definitely — the only thing you can do, so the solution will always need to be improved.
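The guardrails mentioned here — a low temperature plus a constrained output structure — can be sketched as follows. `call_model` is a hypothetical placeholder for a real chat-completion request; the point is that the output is validated against an expected schema before it is trusted.

```python
import json

def call_model(prompt: str, temperature: float = 0.0) -> str:
    # Hypothetical model call; a temperature near 0 asks the model
    # to be less "creative" and more deterministic.
    return '{"category": "plastic", "confidence": 0.92}'

def classify(description: str) -> dict:
    raw = call_model(f"Classify this waste item: {description}", temperature=0.0)
    data = json.loads(raw)  # guardrail 1: the output must be valid JSON
    # guardrail 2: the output must match the expected schema and ranges
    assert set(data) == {"category", "confidence"}
    assert 0.0 <= data["confidence"] <= 1.0
    return data

print(classify("empty water bottle")["category"])  # prints: plastic
```

If either guardrail fails, the result is rejected instead of being passed downstream — that is the "controlled risk" in code form.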

Hallucinations and Operational Risk

Now, a question for you, to make sure you're still alive and present here with us. The question is: have you ever caught an AI completely making up a fact — hallucinating, as we said before — while you were using it for work? You have four options. Let's see what the room thinks about hallucination issues in AI. It would also be interesting to know whether these hallucinations were caused by the prompting, by the dataset, or by something else. And I would love to hear from our speakers about this too.

Why Errors Are Inevitable at Scale

Well, it is very good to know that the AI itself didn't cause any disaster. And what do we have here? Almost everyone could recognize one. Well, this also relates to the risk I described. There is AI, and we are definitely using it; most of the time a human can recognize an error, but what happens when you apply AI where the human cannot intervene? For instance — you probably don't remember everything about the presentation, but we were speaking about enriching patent data with smart summaries. We have billions of records, so there will be, I have to say, at least hundreds of errors, and you cannot simply ask a human to review all of that.

Safeguards: Acceptance, Monitoring, and Continuous Improvement

So, coming back to governed risk: it is simply about accepting that this is part of the definition of the problem itself, and continuously experimenting with how to build safeguards. The answer is always the same: you play with statistics. You have a model; maybe you want another model checking the outcome of the first one; and at some point you have to stop, because otherwise it would never end. But yes, this is part of the game.

Thank you. And of course, having safe, clean data rather than corrupted data will also help to avoid hallucinations and biases.

Adoption, Incentives, and Privacy Trade-Offs

Would You Trade Data for Lower Taxes and Lower Impact?

Another question for you: would you accept an AI camera monitoring your personal garbage and food waste if it guaranteed a massive reduction in city taxes and environmental impact? Because I believe that if the result impacts our daily lives, we are more likely to adopt an AI system, right? Adoption may be resisted by those who do not see a direct effect. So let's see what you think — only because of the taxes? Let's see. We have eleven people saying "absolutely, take my data and save the planet," and nine of you saying yes, but only if the data is 100 percent anonymous.

Case Study: Waste Monitoring Under GDPR

Designing for Anonymity and Scope Limitation

Okay, so this leads me to a question for Riccardo. We saw your system, which was really interesting, and I was wondering how you balance this hyper-granular, camera-based information you are collecting against the strict rules we have — we are in Europe, right, and we do have strict GDPR rules; we spoke about this when we prepared this talk. I was interested to know how you actually get permission for school canteens to have a camera facing the trays and the people working in the canteen. How did you manage it?

It's the most difficult part of running our service, because from this point of view too there is a lot of hallucination — not AI hallucination, but an unclear idea of what it means to have anonymous data versus data assigned to a single person. It is a pretty complicated thing to describe, but we have our own internal control system: an additional AI scans every image and deletes every picture that is not coherent with the scope of our system. So if we capture faces, or the identification number of a car, or anything other than waste, we completely destroy the picture. That is what we do from a technical point of view. From a practical point of view, I spend a lot of time negotiating, because if we are talking about optimizing waste management in Italy, we are talking about unions, so we have to make sure this is not only a GDPR matter: it is also about labor monitoring, which is not allowed — thanks to Amazon — so the space we can move in is very narrow. It is very difficult, but we use different kinds of solutions: the additional software I mentioned, and also, by design, we look inside the waste truck, so it is almost impossible to identify anyone in there.
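A heavily simplified sketch of the delete-by-default filtering Riccardo describes. The detector is a pure stub (the labels are supplied by hand); a real deployment would run face and license-plate detection models. Only the control flow — destroy anything outside the system's declared scope — is illustrated.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    name: str
    labels: set[str]  # labels a detector assigned to this image

# The declared scope of the system: waste monitoring only.
IN_SCOPE = {"waste", "tray", "bin"}

def filter_frames(frames: list[Frame]) -> list[Frame]:
    kept = []
    for frame in frames:
        # Delete by default: keep a frame only if it has labels
        # and every one of them is within scope.
        if frame.labels and frame.labels <= IN_SCOPE:
            kept.append(frame)
        # else: the picture is destroyed, not stored anywhere
    return kept

frames = [
    Frame("a.jpg", {"waste", "tray"}),
    Frame("b.jpg", {"waste", "face"}),       # contains a person: destroyed
    Frame("c.jpg", {"license_plate"}),       # car identification: destroyed
]
print([f.name for f in filter_frames(frames)])  # prints: ['a.jpg']
```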

Turning Measurements into Simple Decisions

You know, I really like your data-driven approach and the fact that you are providing actual data for making decisions. So I was wondering, beyond privacy, whether you can tell us about any particular decision that a school canteen, a municipality, or one of your other clients made based on the data you provided to them.

But the decisions are very simple ones: reducing the amount of a certain type of vegetable in the school or university; repeating a recipe that already worked in the canteen, instead of trying another one and wasting all the resources you put into it; or changing the collection rules — for instance, finding out which building the construction and demolition waste in a neighborhood comes from, when only one building actually has construction and demolition workers in it. So, very simple decisions. But actually, there is a lot of fear, and a lot of trying not to be watched by someone else.

Behavior Change, Gaming the System, and Looking at Aggregates

Whenever you see the data in black and white, is it natural that you change your behavior? How natural is it when you are put in front of evidence that you are wasting food, or recycling the wrong way? Also speaking from the experience here at Talent Garden: we had members who were told by the system that they were putting the wrong material in the wrong bin, and we tried to educate and train people to better understand how they need to recycle. Does this happen naturally?

No — because, as you can see, even when you tie it to city taxes (it is a very interesting question), two people would still try to trick the camera. I think this is the standard approach of human beings: try to make the system fail. And yes, in those cases the system failed because the AI failed. But our aim is to work with a large amount of data, so it is not about the single item, the single action, or the single dish: we are trying to get the big picture of what is happening in a city or in a specific country.

Thank you. Another question for you, our audience, is this one:

Trusting AI vs. Trusting Your Gut

Let's play out this scenario: you are an engineer, and an AI system tells you a bridge is 100% safe, but your human intuition tells you something looks off. What do you do? Trust the AI, which processed millions of data points? Or trust your gut, ignore the AI, and redo all the math manually? This is interesting because I believe human beings are skeptical by default. We say this on a daily basis: people are scared of AI taking over their jobs and their skills. So we still need to find peace and understand whether we can trust data — because AI is, in the end, an aggregation of data.

Avoiding Automation Bias in Engineering

Expert Review as a Non-Negotiable Step

And this leads me to the last question for our speaker, Dennis. Since your team trained your model using ten of your own expert inspectors, how do you ensure your engineers don't fall into automation bias? Because when you start using a new tool, you get used to it, and maybe your critical sense gets lost as you form the habit of using AI to process data and information.

So do you trust the machine instantly and then have second thoughts only if it is evident that what the AI is telling you is wrong? Or do your engineers always keep a critical eye on what they are told? Is using AI every single day changing the way engineers read data and take decisions?

Our point of view is that we would like to avoid this type of problem by keeping the professional expert central. We receive the result from the AI, but then we have the revision of the professional expert, and the case is not closed without that revision. If we hire someone new on our team, he can use this application and maybe, as you said, be driven by the AI. But our spirit is to always develop critical judgment in the professional expert. So what you describe could happen, but we want to avoid that situation.

And it is something that is part of our profession.

Yeah, I believe that each of us, if we think about the way we use AI, will find that it changes the way we think, right? It makes some decisions faster, because some information is provided more quickly. But there are still professionals, such as engineers, who need to be really careful and mindful when taking decisions. So thank you, Dennis.
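Dennis's rule — no case is closed without the professional expert's revision — amounts to a hard gate in the workflow. A minimal sketch, with all names invented:

```python
from dataclasses import dataclass

@dataclass
class Inspection:
    ai_result: str
    expert_approved: bool = False

def close_case(case: Inspection) -> str:
    # Hard gate: the AI result alone can never close a case.
    if not case.expert_approved:
        raise PermissionError("expert revision required before closing")
    return f"closed: {case.ai_result}"

case = Inspection(ai_result="no structural damage detected")
try:
    close_case(case)               # AI result alone: rejected
except PermissionError as err:
    print(err)                     # prints: expert revision required before closing

case.expert_approved = True        # the professional expert signs off
print(close_case(case))            # prints: closed: no structural damage detected
```

Making the gate a raised error rather than a warning is the point: automation bias cannot skip a step that the workflow physically refuses to skip.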

Barriers to AI Adoption in Traditional Companies

Trust, Cost, Privacy, and Skills Gaps

One last question for the audience. In one word, we would like to know what is preventing you from using and adopting AI in your company, if you have one. We see challenges in adopting AI, and many misconceptions and fears. So what is yours? One word to describe it.

I like "ignorance". Complex businesses, ERP, costs, different systems in place. And if any of our speakers have a comment on these words, please. Cost; trust — you can see the word getting bigger because many of you have used it. Trust really is a key word here; we talked about it before. Trusting a machine, an AI, can seem really futuristic. Costs — of course, for companies AI is a cost. Fear of innovation; credibility — I suppose because if we use AI we are not credible anymore. Lack of deep knowledge; privacy; closed-minded workers — so people are part of the problem in adopting AI, probably. Hallucinations.

Reducing Ignorance Through Training and Clear Scope

Any ideas from your side? We want one word from you too — you are using AI.

Okay, well, I do agree that trust is probably the main barrier. As mentioned earlier, it is very difficult to assess whether an output from an AI is plausible or not, and at the very end we end up having a human in the loop, so it seems like a loop itself. And there are usually also some concerns when it comes to replacing people, so there are ethical concerns that need to be considered. So I mainly agree with these two. But at the very end, I am also confident that it is not really a choice whether to use it, no? Just to stay competitive on the market, we need to find a way. So even if there are some barriers, in the end everyone is probably forced to include some sort of automation, since it leads to concrete benefits like saving money, saving time, and so on.

On the last thing he said: I think the scope is often missing. Why are you adopting AI? What do you have to do? If you know what you have to do, you will probably manage with whatever tool; if you are buying AI just to write your name faster, that is probably the problem. As I said before, of these words I think the most important one for me is ignorance, because where there is no ignorance you do not need trust: you already know whether what the AI is saying is right or not. So if we are able to remove ignorance, there is no reason not to use AI, because we are fully able to manage it, review it, and understand when it fails.

So training people is part of the key, and also probably helping people understand that if you are using AI, you are actually making some tasks faster — repetitive tasks that are not really valuable. The capability of a human being to take decisions based on different variables and goals, and having more time to be present in strategy and complex thinking, is actually part of the reason why we adopt AI: to reduce the manual, repetitive tasks that we do every single day.

This is, of course, my opinion, but we want to hear yours. So this is your last chance to ask questions to our speakers. Of course, if you are shy and want to write it down, you can do it through the Mentimeter; otherwise, feel free to raise your hand. That's great. In the meantime, I am really glad that we explored how AI can be applied and used in different industries. Tonight we had different angles on how AI can really be used and can create an impact in different industries, so I was really glad to have these different perspectives. Again, I want to thank our speakers. And I know that you are waiting — during the aperitivo we will definitely have more chances to talk with our speakers and ask other questions. But is there any question for our speakers before that? Yes, please.

How AI Changes Hiring and Professional Development

Decision-Making Skills vs. Technical Mastery

You can say it loudly if you want. Okay, I hope my voice is loud enough. I want to ask a question related to the hiring process of the first startup, because he was saying that now the human part is more about decision-making than about technical roles — more judgment than technical questions. So do you think that hiring for new roles will now focus more on decision-making than on technical skills? And would the training of collaborators then be about taking responsibility, decision-making, and these kinds of other skills that companies need to train for accordingly?

I believe this question is for Dennis. So, if I understood correctly, we are asking whether — since the AI was trained to do the technical analysis — this leaves room for people to be replaced on the technical analysis and to focus more on decision-making.

Okay, thanks for the question. On one side, maybe it could. But I think that in our business the professional expert must always be fully trained and needs a huge amount of knowledge to perform this kind of work: a civil engineer, a designer of bridges, someone who performs safety assessments of bridges. For sure we can use AI to make calculations faster. I usually use AI for fast calculations, but what I receive is always under my revision, and I immediately understand when the AI is giving me something false. What you are correctly saying is that someone else could have a different approach — that is, they could ask the AI something, receive the results, and just accept them. But I think someone who performs their professional job that way is going to lose it, because there is no way to avoid knowledge and skip some processes or some calculations. In my opinion, we are all free to have our own specific way of doing our professional job, but the consequences are always the results of that choice.

Conclusion and What’s Next

Before I leave you to the aperitivo: if you are interested, this is a monthly recurring event that we hold simultaneously in different locations — Rome, Milan, and Turin. If you are interested in the next one, it is going to take place on May 21st, here in this room, with different speakers, different perspectives, and different points of view on AI and how it is being used in different industries. So don't forget to register. And now the fun continues with the aperitivo. Thanks to our speakers, thanks to us at Talent Garden, and thank you for having been here with us tonight. Thank you so much.
