So today, this talk specifically is going to be about sales productivity in the age of AI.
And before I even start, I wanted to ask you just a quick question here. When was the last time you contacted a business and you had to wait for an answer?
It's something you wanted to buy, right? You were really interested. Think about the time it takes for them to get back to you: it could be a few hours, it could be 24 hours, 48, or sometimes a week.
So think about this feeling here.
And on the business side, think about the opposite effect. When I'm on the business side, I'm thinking, oh, I got back to that customer in 24 hours, oh my God, that was good.
But as a consumer, or as a business or client on the other side, anything is long. Even a few hours is sometimes already too long.
So this is what we're going to be talking about today.
And some industries obviously are more impacted by what we call speed to lead than others.
But today we're going to be discussing this notion of response time and speed to lead.
And I'm going to share a case study from one of our customers: how we solved for it, and how we also cut our teeth, because it wasn't a smooth ride.
All right, so let's get started. That's the agenda for today. We're going to discuss the speed-to-lead notion, and then we will dive into the case study.
This one is a legal firm that we serve, Fraser Partners, a mid-market US firm focused on probate search and litigation.
Then we're going to go quickly over the AI solutions, though that's not really the main point today.
And lastly, the three hard lessons that we learned that we're applying to other customers, especially in the legal space.
So I'm Jean-Michel. I'll go by JM if that's easier.
Thanks for the reminder here.
I've been building enterprise SaaS for the past 15, 16 years, and my last role was head of AI products for my organization. I spent all my time in the healthcare and pharmaceutical industries.
The product that I was focused on was CRM. So, think Salesforce for the pharmaceutical industries.
And as a product manager by trade, I've seen everything from 0 to 1 and from 0 to 80% market share. Just to give you some context, at that organization I was employee number 50-ish, and the first PM in Europe.
After 12 years, we were 8,000 people. So you can see the scale. It was a rocket ship. It was crazy.
But I learned so much, and I'm so blessed that I was part of that journey.
I quit my job last year to do my own thing with my co-founder. Right now, we're essentially building AI-powered revenue systems for scale-ups and growing businesses. So that's what we do.
Okay? I'm always happy to chat, and you have my contact here. Happy to chat later as well, after the three sessions.
Okay, so let's dive right into it.
I'm going to talk about the speed to lead concept here. Some of you might not be familiar with it.
But this is what we focus on in our organization.
So when you look at the right-hand side of the screen, essentially what it shows is a study across hundreds of organizations, looking at thousands and thousands of incoming leads, inbound leads we're talking about here,
the time it takes to respond, and the conversion related to that response time.
And if you look at this graph, the longer you wait, the lower the conversion will be, which is not surprising.
Now what's surprising about this chart is not so much the delay, but more the drastic cliff after the first hour. That was the most surprising part here.
And so the key takeaway is after the first hour, in many industries, you will lose the lead, essentially.
You just move to the next one. That's it.
This doesn't apply to heavy enterprise, because heavy enterprise has long sales cycles, so it's a different story.
But for many industries, and legal is one of them, when a lead comes in, you have to strike pretty quickly and engage with that lead.
So that's it on the key takeaways.
I'm going to jump now into the case study.
So again, Fraser Partners is a US legal firm specialized in probate search and litigation.
As I said, mid-market, so about $100 million in revenue.
Essentially, this is what we mapped out during discovery. It's really classic; you see this in a lot of lead engagement and lead gen.
But I'm just going to dive into this quickly so you understand the case study better today.
So the lead comes in, usually via website, email, form, or calls. We serve all of that with our AI agents: voice AI, AI on email, on the website, and so on.
About 15% of them are spam, that's what we found out, and about 20 to 30% come in after hours: think Saturday, Sunday, or even 7 p.m.
When you have a car accident, and we know that in the US lawyers go after car accidents right away, you don't want to wait. So that's the use case we're trying to solve here.
And then you have the human-style response, which is classic: prospect qualification, right?
One of the big things, beyond what you probably know already, is the conflict check. For those who aren't solicitors or lawyers: a conflict check essentially means you cannot represent both the wife and the husband if they're getting divorced, and you cannot represent both the employer and the employee.
There are a lot of things that are super important in the legal space that I wasn't aware of and found out along the way, and conflict checks are one of them. Anyway, then we move on.
We get into the meeting booking: classic calendar sync, checking people's calendars, confirmations, managing all of that with the leads, and then logging everything into the case management system.
Essentially, a case management system is a central location where they have all their cases. It's kind of a CRM, if you like.
All right, so that is just to set the context for that specific customer.
Now, what we did was map the cost of the delay. When we discussed it with our customer, they were saying that, first of all, they can't get to all the leads, and when they do get to a lead, they do a poor job on qualification and on getting back on time.
Some leads have to wait more than three days, simply because of the nature of the job. So, to map the KPIs that matter to that specific business:
they receive about 200 inbound leads from specific channels. They also have partners, but we didn't deal with the partners; we only handled the inbound leads from the two channels they wanted us to take care of.
For the average deal size, I asked them for just an average number, but obviously they can win big, right, like a litigation in the US, or if they lose the case they can lose a lot as well. So they gave me $25K just for the sake of the exercise.
Then they have a 10% close rate on those leads, and an average response speed of 24 to 72 hours. And what we found out is that it was closer to 72 than 24.
The owner, the partner, thought it was more like 24. I think that's what his staff was telling him, maybe. But when we looked at the data behind it, it was closer to 72.
Now, based on all the KPIs they provided, we told them they probably had about $3-4 million in missed annual revenue,
and obviously some wasted admin time, about $60K for the intake and admin team.
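To make that estimate concrete, here is a back-of-envelope sketch. The period for the ~200 inbound leads wasn't stated explicitly on the call, so treating it as monthly is my assumption; the point is simply that, at these numbers, every percentage point of close rate is worth real money, which is how a speed-driven uplift lands in the $3-4M range.

```python
# Rough cost-of-delay arithmetic for the Fraser Partners KPIs.
# Assumption: the ~200 inbound leads are per month (period not stated in the talk).
LEADS_PER_MONTH = 200
AVG_DEAL_SIZE = 25_000       # USD, the rough figure the partner gave
BASELINE_CLOSE_RATE = 0.10   # 10% close rate on inbound leads

annual_leads = LEADS_PER_MONTH * 12
baseline_revenue = annual_leads * AVG_DEAL_SIZE * BASELINE_CLOSE_RATE
revenue_per_point = annual_leads * AVG_DEAL_SIZE * 0.01  # value of +1pt close rate

print(f"baseline closed revenue: ${baseline_revenue:,.0f}")   # $6,000,000
print(f"each close-rate point:   ${revenue_per_point:,.0f}")  # $600,000
```

Under these assumptions, a 5-7 point close-rate improvement from faster response maps to roughly the $3-4M figure quoted to the customer.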
Now, in terms of the solution, quickly: for the pilot, before the production version,
we quickly spun up some low-code automation for them, just to prove the ROI and that it could work. We did that with classic tools you may know, like n8n and things like that.
After the four-month pilot we went into full production, but during the pilot that's what we did, and we also surfaced an intake dashboard, mostly for the partners.
Here you can see things like new intakes, auto-accepted, auto-declined, and so on. This is simply a dashboard surfaced on top of it, mostly for the leadership team.
The real users just get the automations; they won't really see anything, no dashboard specifically, even though they have access to it.
Right.
Happy to share, by the way, the template and the behind-the-scenes, but probably not now, in the interest of time.
And again, I know he'll be looking at me with those eyes, and I'm scared of him. So just let me know, keep me honest on time here.
After we delivered the pilot, which took about four months, we added three months of hypercare. That's usually what we do.
With AI, compared to classic SaaS implementations, I think it's closer to six months of hypercare, because we usually have to fine-tune everything after go-live. That's a bit annoying, but I think it's part of the game we have to play now,
especially in 2026, where customers expect faster outcomes from go-live. It's not like classic SaaS back in 2022 or 2023, where you'd go live and usually realize value after 12 months, especially with enterprise or mid-market customers.
Anyway, going back to the story: after about six months, we moved the close rate to 80 percent. It was a massive achievement for them, and the revenue impact was about a $300K uplift.
So obviously the customer was super, super happy. But more importantly, when we talk about the technology behind it, it was mostly thanks to the automated handoffs between all those AI agents,
and we managed to bring the response time down from 24-72 hours to really five minutes. And here we're talking about everything: from the moment the lead comes in and hangs up, that's where the automation starts.
We check for conflicts in the case management system. We want to make sure we're not already representing a related party at the same time as the client who is calling us.
Everything is being checked live with the voice AI agent or by email with the back and forth, asking more information about the case if needed.
Obviously, this looks good from the outside, but we really cut our teeth here, and I think the most interesting part of this presentation is on the next slide.
So, the three hard lessons that we learned.
The first one is context windows. What we realized with our customers, from the transcripts we got and the behavior of the leads, is that many, many times
those leads can go on for 10, 15 minutes without interruption. You talk about your case, you complain about your situation, to an AI bot essentially, and you can go on for 15 minutes.
And what we realized is that our robot was getting drunk. It couldn't capture the right information, it couldn't capture the facts correctly. And obviously that was really risky, even for us, in terms of our contracts with US firms.
So during the pilot, we were very cautious about this, and we communicated a lot with our customers: hey, we're testing our agents here, so please be mindful.
We took the scripts that were already available, made and recorded by the customers, and trained our AI with those.
Long story short, the way we fixed it is that every four minutes, we have an AI agent summarizing the conversation. We added one more agent whose only job is to keep summarizing the conversation every four minutes.
Don't ask me why four minutes; initially it was five, but given the typical length of the first call, we figured four minutes was the appropriate interval. It doesn't really matter much.
Ultimately, we had micro-summaries as structured facts. So that was the first point. We know the context window is always an issue anyway.
So our heuristic was: design around those limits. We know there are challenges there, so we design around them.
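The four-minute rolling summarization can be sketched roughly like this. Here `summarize` is a placeholder for the actual summarizer agent (an LLM call in production), and the class and field names are my own assumptions for illustration, not the real implementation:

```python
from dataclasses import dataclass, field

SUMMARY_INTERVAL_S = 240  # 4 minutes, tuned from the typical first-call length


def summarize(utterances):
    # Placeholder for the summarizer agent: in production this would be an
    # LLM prompted to extract only the stated facts from this window.
    return {"n_utterances": len(utterances), "text": " | ".join(utterances)}


@dataclass
class RollingSummary:
    """Keeps the live context small: one structured micro-summary per window."""
    facts: list = field(default_factory=list)
    _buffer: list = field(default_factory=list)
    _last_flush: float = 0.0

    def add_utterance(self, t_seconds: float, text: str) -> None:
        self._buffer.append(text)
        if t_seconds - self._last_flush >= SUMMARY_INTERVAL_S:
            self.flush(t_seconds)

    def flush(self, t_seconds: float) -> None:
        # Condense the buffered window into one structured fact summary,
        # so the agent never carries the full raw transcript forward.
        if self._buffer:
            self.facts.append(summarize(self._buffer))
            self._buffer = []
        self._last_flush = t_seconds
```

The design choice is that downstream agents only ever see `facts`, the list of micro-summaries, instead of a 15-minute raw transcript that would blow past the context window.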
The second one was also important: hallucinations. They're really unacceptable in the legal space. I'm not saying it's okay in other industries, but I can imagine that if you're serving marketing agencies,
I guess it's a bit more okay to overstate or understate something, right? No offense to marketing folks. I'm not a marketing person, but no offense here.
Obviously, we cannot provide legal advice, even though it's funny: when we listened to those transcripts, some leads would ask a bot for legal advice.
And that's even though we put up a disclaimer: do not ask for legal advice, do not try to ask anything, we're here just to qualify the case.
And when human beings realize it's a bot behind it, they just go crazy. So yeah, we'll keep those fun facts for later.
Essentially, what we did here is build two buckets, and we prioritize facts, because in the legal space it's all about the facts. One bucket captures only stated facts; the other captures unknown or unverified information.
On the unknown-information side, we just capture whatever we got from the agents, from the US legal partners, or from the client itself, and all the documentation states whether or not something is a fact. We put disclaimers everywhere, obviously, and we mention this.
We block legal advice: we put guardrails and fallbacks on the AI agents so they don't go in that direction. We're pretty, pretty strict in our workflow here; we want to block any intention from the LLM to go that way. So here we optimize for correctness over cleverness.
We don't want the AI agent to be clever here or to go haywire, just state the facts and stick to the lane we provided.
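A minimal sketch of those two ideas, the fact/unknown buckets and the legal-advice guardrail. The regex patterns, field names, and 0.8 threshold are illustrative assumptions; in production the intent check would itself be an LLM classifier with fallbacks, not a handful of regexes:

```python
import re

# Hypothetical patterns for advice-seeking replies; real guardrails would
# use an LLM-based intent classifier, with this kind of check as a fallback.
LEGAL_ADVICE_PATTERNS = [
    r"\byou should (sue|file|plead)\b",
    r"\byour case will\b",
    r"\bwe advise\b",
]


def guard_reply(draft: str) -> str:
    """Block anything in a drafted reply that looks like legal advice."""
    for pattern in LEGAL_ADVICE_PATTERNS:
        if re.search(pattern, draft, re.IGNORECASE):
            return ("I can't give legal advice. I'm only here to qualify "
                    "your case; an attorney will follow up with you.")
    return draft


def bucket_statement(statement: dict) -> str:
    """Route an extracted statement into the facts or the unknown bucket.

    Only statements explicitly made by the client, with high extraction
    confidence, count as facts; everything else stays unverified.
    """
    if statement.get("stated_by_client") and statement.get("confidence", 0.0) >= 0.8:
        return "facts"
    return "unknown"
```

The point of the two buckets is that anything ambiguous defaults to "unknown" rather than being presented as fact, which is the correctness-over-cleverness choice described above.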
The last one: we know that by definition, LLMs are probabilistic. So on the automation side, we try to provide as many guardrails and fallbacks as possible. We want things to be very deterministic.
Think of all the inputs and outputs: we force JSON format on both sides, for instance. We don't want free text going crazy again.
So when we pass output from one agent as input to another, we really put guardrails in place to make sure nothing falls off the cliff.
We also put confidence thresholds at multiple steps in our framework, so that if a step doesn't meet its threshold, the case is routed to a human to pick up.
We'd rather play it safe here than risky, obviously. Our heuristic was: AI is a co-worker, and we don't want it to be a single point of failure.
It's there to help the intake team and the senior partners focus on the cases that matter to them and the ones they can close.
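A rough sketch of that agent-to-agent handoff guardrail: forced JSON on both sides plus a confidence threshold that routes to a human. The field names and the 0.75 value are assumptions for illustration, not the actual schema:

```python
import json

CONFIDENCE_THRESHOLD = 0.75  # illustrative; tuned per step in practice
REQUIRED_FIELDS = {"case_type", "summary", "confidence"}


def handoff(agent_output: str):
    """Validate one agent's output before handing it to the next agent.

    Returns ("next_agent", payload) on success, or ("human", reason)
    whenever the output is not the deterministic shape we expect.
    """
    try:
        payload = json.loads(agent_output)
    except json.JSONDecodeError:
        return ("human", "malformed JSON")  # never pass free text downstream
    if not isinstance(payload, dict):
        return ("human", "malformed JSON")
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        return ("human", f"missing fields: {sorted(missing)}")
    if payload["confidence"] < CONFIDENCE_THRESHOLD:
        return ("human", "low confidence")  # fallback to the intake team
    return ("next_agent", payload)
```

Every failure mode falls back to a person rather than to another agent, which is what keeps the AI a co-worker instead of a single point of failure.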
That's it. I think I'm pretty much out of time, but just one second to close quickly.
Ultimately, if I zoom out a bit from the hard lessons we learned: for some businesses, speed is the new competitive advantage, especially in the age of AI.
As consumers, or as people who sometimes inquire about these services, we know we're super impatient,
and 24 hours, or 48 hours in some cases, just won't fly anymore.
Obviously AI eliminates these bottlenecks in many businesses, and the heuristic we use
is the two-lane model. The way we try to educate our customers: AI handles speed and repetition, it's really good at that, but humans should handle trust and closing at the end.
That's what we go for when we engage with our customers.
That's it.