From Roll-Out to Enterprise Impact: Lessons Learned Scaling GenAI & Agentic Platforms

Introduction

Speaker background

Maybe just a few words on who I am to be jumping in on this topic. My name is Hannah. I'm a bit of a techie: I have a background in machine learning, I've been working at Merantix Momentum for the past four years, and I'm leading their delivery unit, so my job is to bring AI from research into practice. For roughly the past year I've been working with Paulina, who was originally planned to be here today, on enterprise-level GenAI rollouts. So I hope I can freely share my learnings and walk you through some takeaways from this journey.

Why AI ROI is under scrutiny

First of all, this is quite a catchy title of the kind we've been seeing in headlines, and not just this one; we've seen similar articles all over the news recently, doubting AI ROI and sharing stories about how hard it actually is to take GenAI from just playing around with it to having real impact in enterprises. So I also wanted to ask. I know we've had a couple of questions already, but I'm curious: who has access at their workplace to some sort of GenAI tooling, like ChatGPT? All right, perfect, keep your hands up, thanks. And now keep your hands up if you also see real effects on profitability from what you're doing with that tooling.

Also quite a few; that's actually amazing, because I think this is exactly what has been criticized a lot, and I can see that you in this audience are exactly the people who know and want to create actual impact with AI. That's perfect.

From copilots at the edge to core workflow impact

What we often see is that clients come to us because they realize they've started rolling out AI but are failing to get to the point where it actually creates impact: beyond, say, first email automations, to where they have deep integration into workflows and create that kind of value add. And this is also what the article highlights: you want to go where you see friction.

So we see this movement, let's say, at the edge: copilots, email drafting, summarization, and, let's say, operational change, where a lot of companies are already navigating within this purple outline. They are now on their way to navigate more towards the core, where you really go where the friction is. You go deeply into processes and challenge how you can optimize them. We just saw some examples of how you can use workflows to deeply change how we work. And this is the area where you're in the high-risk, high-ceiling zone, where you can really move the needle.

So I already said a bit about myself.

About Merantix Momentum

Just a few words on Merantix Momentum. We take our clients on this journey: from discovering which use cases make sense to realize with AI, through to actually realizing them. So we guide on a strategic level, but we also go in; we have an implementation team of around 60 engineers who realize these use cases end-to-end, and we want to drive this adoption change

Three common friction points in enterprise GenAI rollouts

through both strategic guidance and implementation. So let's look at some typical friction points.

I picked out three. There are many more, but these are three we typically see in projects with clients. I want to talk a bit about these patterns and offer some guidance on how to address them.

1) Place strategic bets: the AI pyramid of needs

So first of all, this one is more on the business level, let's say: placing strategic bets and navigating the AI pyramid of needs. What is the AI pyramid of needs? It's a mental model I like to use when thinking about AI use cases. At the bottom of the pyramid you have very broad acceleration with general-purpose AI tooling; in the middle you have optimization of core business processes; and at the top you have strategic innovation.

Bottom, middle, and top of the pyramid

If you think about typical use cases, the lower part of the pyramid holds your typical "accelerate me writing an email" use cases: essential needs in your day-to-day work, where you can write one prompt and immediately accelerate your work. But it's also really hard to capture, or even measure, that value at scale.

At the top of the pyramid, you're moving more towards growth needs, where as a company you can actually defend and build your competitive advantage: by thinking about maybe even moving into in-house development or partnerships, where you really go in, develop use cases and deep integrations, and leverage the strategic advantage you have. And this is exactly the higher-risk, higher-return area.

And I think it's important to acknowledge that there's a lot of value in this lower tier; I don't want to say only think about the top. As a company, it's good to keep in mind that you want a lot of the lower tier, which is AI adoption for everyone in day-to-day use cases, but you also want to start thinking about these strategic use cases quite early in your journey.

How to pick and de-risk high-impact use cases

How do you do that? For these top use cases, one thing we sometimes see is jumping to conclusions a bit too quickly when picking them. It's super important here to properly ideate, explore, and validate, especially on the business side: am I working on a problem that is actually a problem, and one I can solve with AI? I'm happy to discuss how to approach that over a coffee later, or maybe rather pizza and a beer. It's super important to de-risk early before going into actually developing use cases.


2) Double down on data—even in the GenAI era

Thank you. The second point I want to mention is more on the technical side. There's a lot to talk about here, but I want to talk a bit about data. This is probably not news to any of you, but especially in the era of agentic AI and GenAI, these questions pop up again: now that we can write prompts and so on, is data actually still important, or can we more or less just go ahead without it? I hear some laughing; that's very good, I love it.

Models vs. data: where value is created

That's the right attitude. So, just a quick recap, probably not news to most of you in the room: AI is not just the model, and it's also not just the data. The value is generated when you put data into algorithms and train them; that's where you get the AI solution, and that's what actually generates value.

What we also sometimes see when developing machine learning models is a lot of focus on the model or architecture choices, because there's a lot of innovation in this sphere, while data is often treated as a given, a constant you just need to deal with. However, in our experience it's often very much worthwhile to explore how you can improve the data, not only the models. What do I mean by that? For example, sometimes it's easier than you think to collect more data, and it's worthwhile to think about how to bump up your data quality, because in the end this is the asset of every single company; this is where the moat lies, not so much in the models, because those change fast and others can adopt the new models too.

Why data still matters with foundation models

So how does this translate to the GenAI era? Here we have the luxury of being able to navigate with instructions: we can steer foundation models without having to train a model from scratch on thousands and thousands of rows of data. This is amazing, because we can do a lot of things without training at all. In the past, for a summarization model, we would have had to train it on a very specific task, for example generating an abstract for an article. Now we can just say, "hey, make an abstract," and it works. That's fantastic.
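This shift, from task-specific training to plain-instruction steering, can be sketched in a few lines. Everything below is illustrative: `llm` stands in for whatever chat-completion client you actually use, and `fake_llm` exists only so the sketch runs without an API key.

```python
# Sketch: steering a general-purpose foundation model with an instruction,
# instead of training a dedicated summarization model on labeled pairs.

def summarize(llm, article: str) -> str:
    """Ask a general-purpose model for an abstract via a plain instruction."""
    prompt = (
        "Write a short abstract (2-3 sentences) for the following article.\n\n"
        f"Article:\n{article}"
    )
    return llm(prompt)

# Trivial stand-in model so the sketch is runnable without any API:
def fake_llm(prompt: str) -> str:
    article = prompt.split("Article:\n", 1)[1]
    return "Abstract: " + article[:60] + "..."

print(summarize(fake_llm, "Large language models can follow instructions."))
```

Swapping `fake_llm` for a real client is the only change needed; the task itself lives entirely in the instruction text.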

Evaluation datasets and quality control for automation

However, data is still super relevant. First of all, you can bake data into your workflows. Another point I really want to stress is evaluation data. Especially when you want to move towards automation with workflows (I see some nodding), you really want to make sure you optimize towards something that experts have validated. So you want to be aware of your typical cases and your edge cases, and bake those into a good, well-engineered evaluation data set that you can optimize your agents and workflows against, so you have some sort of quality control. That's super important, because otherwise, to be very honest, you just don't know what's going on. So data stays important.
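The evaluation-set idea can be made concrete with a tiny harness: a list of expert-validated cases, including edge cases, that every workflow change is scored against. The `workflow` function and the cases here are hypothetical stand-ins, not from any real project.

```python
# Minimal evaluation harness: expert-validated cases (typical and edge)
# that agents and workflows are optimized against.
EVAL_SET = [
    # (input, expected answer validated by a domain expert, kind)
    ("What is the notice period?", "30 days", "typical"),
    ("", "no answer", "edge"),  # empty input must not break the workflow
]

def evaluate(workflow, eval_set):
    """Return the fraction of cases where the workflow matches the expert answer."""
    passed = sum(1 for q, expected, _ in eval_set if workflow(q) == expected)
    return passed / len(eval_set)

# Hypothetical workflow stub so the harness runs as-is:
def workflow(question: str) -> str:
    return "30 days" if "notice" in question else "no answer"

score = evaluate(workflow, EVAL_SET)
print(f"pass rate: {score:.0%}")  # use as a quality gate, e.g. require >= 90%
```

The point is the gate, not the stub: any prompt or model change that drops the pass rate is caught before it reaches users.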

A practical way to generate training/evaluation data

How do we get data? One quick piece of advice. There are many ways to make data collection faster, but one mental model I find useful is to ask: is there a way to digitalize a process such that it automatically generates evaluation and training data? Often there's a super manual process, for example checking DPA documents or other legal documents. We actually had a project where this was done on paper. The first thing we did was build a front end, which already gave them a bit of acceleration because they could work in a digital tool more repeatably. And conveniently, on the side, we were collecting data that helped us in a later step to automate, or at least partially automate, the process, still with a human in the loop. That's a way to find clever solutions without having to sit down a lawyer, who obviously has plenty to do in the day-to-day, just to collect data. It's a much more feasible way to approach it.
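One way to sketch this "digitalize and collect on the side" pattern: the review tool records each expert decision as a labeled example, so training and evaluation data accumulates as a by-product of normal work. All names, labels, and the file path below are illustrative, not from the project described.

```python
import json
from datetime import datetime, timezone

LOG_PATH = "review_labels.jsonl"  # illustrative path for the label log

def record_review(clause: str, verdict: str, path: str = LOG_PATH) -> None:
    """Append one expert decision as a labeled training example (JSON Lines)."""
    example = {
        "input": clause,
        "label": verdict,  # e.g. "compliant" / "needs_revision"
        "ts": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(example) + "\n")

# Each time a reviewer clicks "approve" or "flag" in the front end, the
# tool calls record_review; no separate labeling session is ever needed.
record_review("Data is deleted within 30 days of termination.", "compliant")
```

After a few weeks of normal use, the JSONL file doubles as a training set and as the expert-validated evaluation set discussed above.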

3) Drive adoption: take the organization on the journey

Last but not least, and quite important: take your organization along on the AI journey and drive GenAI adoption strategically. This is a bit more on the soft-factor side, but you have to take your organization along. This topic alone could probably fill hours of talks and deep dives, but let me point out two aspects

Early rollout: focus on usage over outcomes

and circle back to the pyramid of AI needs. On the lower levels of the pyramid, especially for broad adoption of AI tooling, our experience is that when starting the rollout, it's crucial to focus on usage over outcomes. You want to give your organization the space to go into trial-and-error mode, nurture curiosity, and drive adoption. Three points are especially important here. First, encourage exploration: give people time and freedom to just try things. Second, offer peer-to-peer guidance: develop an AI champion community of people inside the organization who stay in close touch with the teams, help them get started, and answer questions. Third, celebrate wins: make sure you don't create the expectation of "OK, you saved time, so what are we doing with you now?" Give the team the time back; they will know best how to use it. Celebrate the wins, highlight that trying things out and saving time is a good thing, and take away the fear of automation.

Reduce friction by meeting users where they work

The second point, looking at the top layers: personally, I think it is essential to think about how to take friction out of using AI tooling. You can see some examples of interfaces we've built in our projects at Merantix Momentum, and you can see how different they look: from something super technical in the manufacturing space, to an interface we integrated into Power BI because the users already knew that tool and were quite accustomed to it, to something a bit more chic (that's actually the DPA checker from the example earlier), to a dashboard that's more techy. All of this was tailored to the specific audiences, with the goal of making the process as frictionless, and also as fun, as possible, so that you don't build a tool that's technically awesome but that nobody wants to use. How do you achieve this? Talk to your users early, loop them in, ask them for feedback, ask what their needs are, and make sure that you deliberately roll out and

Conclusion and discussion prompts

educate as you go. Approaching the end of this talk, to summarize: we brought three points today, ranging from placing strategic bets, to doubling down on data, to driving AI adoption and taking the humans along on the journey. My question to you, maybe to prompt a bit of discussion later: what kind of value gaps or frictions do you experience? What do you wonder about? What is missing from this list? I'm very much looking forward to the discussion.
