My name is Luca Buccioli. I'm co-founder here at Accelerate AI.
And what we're here to present to you today is an introduction to the evolution of Gen AI, how we got to where we are today, and some deep dives into key business solutions we're seeing across multiple industries.
So when we look at artificial intelligence, it has been building a foundation for decades, paving the way for Gen AI and where we are today, and demonstrating clearly that it's here to stay and will shape the future going forward.
If we look from 1957, when a computer first beat a human at chess, up to 2012, when Geoff Hinton, nicknamed the godfather of AI as most of you know, and his team won ImageNet with their computer vision neural network model, those two milestones and all the events in between gave us two things.
One was the proliferation of neural network development, which led to the neural network models used in the LLMs we all know and use in our daily lives. And two, even though the technology back then was not commercialized, meaning it wasn't widely available for the public to use and consume, it was still generating enough of a use case to pull more and more capital in and get us to where we are today.
So there are a few key drivers that have allowed LLMs and Gen AI technology to revolutionize what solutions can offer us today. One of those driving trends is the increase in computational power. You can think of Nvidia and all the big companies in the stock market that have been so popular over the last few years, simply because their hardware provides the computational power we need to make these solutions work at scale.
Two, the ability of LLMs to be trained on both structured and unstructured data sets, as pre-training and fine-tuning techniques have advanced.
And lastly, advancements in reinforcement learning. These are essentially reward-based training methods where humans provide feedback on the model's output for a given input, telling it how to adjust its future outputs.
And so these three drivers have allowed us to go from specialized knowledge systems, which are essentially binary, to general, multipurpose knowledge systems that are highly non-deterministic.
So, is this a bird? No. Is this a bird? No, this is a corgi, it's a dog. So what is it doing here? It's able to contextualize the conversation and give you a meaningful output beyond a simple binary answer.
So what do corporations have to say about this?
Well, if we look at small and medium businesses, 75% are using some form of AI, and over 80% are expected to invest further in AI in 2024. That comes from a recent survey of about 600 small and medium-sized businesses in North America. If we take a look at the graph on the left, what it's telling us is that as businesses scale in revenue, and as enterprises with very large, robust balance sheets and income statements, they're going to incorporate more and more AI.
As for the drivers of this adoption, 51% are adopting AI to save time and money, and 29% are adopting it to stay competitive. We expect that number to increase, because as competitors continue to adopt AI automation solutions, it will force everyone else to adopt them too, or risk having their lunch eaten.
And then lastly, 26% invested in AI due to current inflation as well as cost-cutting measures. And if we look at the graph on the right, this is data from Goldman Sachs.
So Goldman Sachs predicts that one out of every four knowledge worker tasks has the potential to be automated in the near future. And so we look at the average bar, it's at 25%.
When you look at the distribution across functional verticals within a company, you start to see that functions where manual labor is very intensive of course have much lower automation potential. But as we move up the chain, even highly expert-based work such as legal tasks, the work of lawyers, is exposed. Because those professions are so entrenched in language-based output, no matter how expert or complex the reasoning behind the work, an LLM trained on exactly that kind of language is going to be able to figure out how to pass the bar exam.
It's going to have a knowledge base of all the theory books. It's going to be able to download all the relevant precedent cases in the public domain, as well as learn from the thousands of email exchanges between lawyers and clients that law firms have accumulated over the years.
And so even some of the most expert fields, as long as they're very text- and language-based, face a risk of being automated over time. It's not just the least complex work, like admin support tasks.
And so how are corporations looking at this from an investment perspective?
At a high level, it comes into two clusters. So efficiency and effectiveness.
Right now, most of it is about efficiency, because at the end of the day, you're looking at it from an 80-20 perspective as a corporation. If you can get enough quality in the output you need, even if it's 60% of the accuracy a human employee might have delivered, that might be good enough to implement. But a lot of it comes down to efficiency and time, because you're trying to cut costs.
And so these two clusters can be bucketed into a few themes. These themes would go under top line improvements and bottom line improvements.
On the top line, we see a lot of use cases proliferating around personalization of the customer experience, increasing retention, increasing sales and acquisition, and personalization through the ability to create new products and tailored offers in real time that are actually presented to consumers. That's going to enable new strategies for revenue streams.
Bottom line improvements are essentially efficiency and automation. The more monotonous work you can take off people's plates, the more they're able to focus on what matters, the high-value tasks in their job description. And then of course, with things like R&D, you can reduce innovation costs and improve risk mitigation and management.
And if we take a look at the global AI investment landscape, there are about a dozen countries leading it, but really it comes down to the US and China. The coloring here, on the left, represents the relative scale of public and private investment. The darker the color, the more capital is flowing into that country for AI; the lighter the color, the less. But these are still the leading countries across the world as of 2023.
If we look at the key technological focus, at the end of the day, everyone is going to be focusing on generative AI, natural language processing, which falls under that, and computer vision.
But what you will find is that countries with economies concentrated in certain industries will naturally focus on advancing commercial use cases for the industries that drive their local GDP and their net exports. So you can think of Germany, right? It's a very heavy manufacturing-based economy.
They're going to be focusing on computer vision for robotics applications, whereas a country with an enormous retail and consumer e-commerce space, like China, is going to focus computer vision mostly on those types of use cases.
And so now let's take a look at what are we focusing on in America. Well, it's, of course, going to be a mix of computer vision, Gen AI, and natural language processing.
But when we start to look at some of these applications, if we first look at computer vision, the top applications are retail and inventory management, medical imaging, and autonomous vehicles. I'm sure we've all heard a lot about the last two, and of those, medical imaging, I think, is much further along than autonomous vehicles in terms of commercial application today.
But even when you think of retail and inventory management: some of you might buy a house in the future, maybe very soon, and you're going to be able to pick out houses based on aesthetic features. Why? Because computer vision can look at markers within listing images and put them into a knowledge base from which they can actually be retrieved.
So if you want an avocado kitchen, that data wasn't normally on the real estate platform you were searching. If you want some sort of marble countertop, that might not have been there either. It was always numerical markers: two bed, three bath, under X price, within XYZ zip code, right? So it's essentially creating a whole new way for consumers to search based on their demands and their interests.
And then when we look at GenAI and natural language processing, they're together. NLP enables part of what GenAI does.
And what we want to do is we want to deep dive into a subject that we've heard a lot about here today so far. It's an industry that is exponentially growing due to the integration of LLMs over the last two years. And we don't see it stopping anytime soon.
It's currently the wild, wild west, because it's within its first 18 months of infancy, but we will see it mature over time.
And that industry is AI agents. So AI agents are using gen AI, large language models, for conversational flow.
So the way we look at it, an AI agent has two sides. There's the conversational flow, which is going to be used in either a text or voice-based format, and that is enabled through the integration of an LLM. You can choose which one you want depending on its quality and on the purpose of the agent. As we know, OpenAI alone has many different models beyond just GPT-3.5, GPT-4, GPT-4 Turbo, et cetera.
But to bring these agents beyond the purely conversational path, beyond providing relevant output for a given text-based input, you have to start custom programming or including off-the-shelf integrations. Custom programming, of course, allows for far fewer limitations than the off-the-shelf API integrations that some of today's AI agent building platforms provide. And what is that going to allow?
All of a sudden you've turned something that's purely conversational into something incredibly functional, because it can start performing tasks we would normally do on other platforms, such as booking meetings or booking video calls. We can also instruct it with frameworks, mathematical frameworks. So if you're a real estate lending company looking to qualify people for certain loans based on their investment style, and on whether it's residential or commercial real estate, you can program frameworks to actually quote a price on any type of product and/or quote the interest rate.
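To make that concrete, here is a minimal sketch of the kind of quoting framework an agent could be programmed to call as a tool. The function name, rate tiers, and thresholds are hypothetical and purely for illustration; they are not a real lender's pricing model.

```python
# Hypothetical quoting framework the agent can call as a tool once the
# conversational layer has collected the inputs. All rates and thresholds
# below are made up for illustration.

def quote_interest_rate(property_type: str, investment_style: str, loan_amount: float) -> float:
    """Return an illustrative annual interest rate, in percent."""
    base_rate = 6.5 if property_type == "residential" else 7.5  # commercial priced higher
    if investment_style == "buy_and_hold":
        base_rate -= 0.25   # longer horizon, lower risk premium
    elif investment_style == "fix_and_flip":
        base_rate += 0.75   # short-term, higher risk premium
    if loan_amount > 1_000_000:
        base_rate += 0.25   # large-loan surcharge
    return round(base_rate, 2)

# The agent gathers these three parameters during the conversation,
# then quotes the result back to the user.
print(quote_interest_rate("commercial", "fix_and_flip", 1_250_000))  # 8.5
```

The point of the split is that the conversation gathers the parameters while the framework does the deterministic math, so the agent never improvises the numbers.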
And so you can start thinking about all the types of applications that might be relevant to a business. It can also provide technical support, as well as any type of report generation based on voice input. A lot of people, say in a field where you have to do inspections, can record themselves, and then you can program an AI agent, with the right amount of customization, to identify the keywords within those recordings and put them into the right fields of a very structured report. And there you have it: you don't have to go back to your recording or copy-paste the notes summarized out of it.
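As a toy illustration of that report-generation idea, here is a sketch that pulls a few assumed inspection fields out of a transcribed recording with simple pattern matching. The field names, the sample transcript, and the patterns are hypothetical; in practice the LLM itself would handle the extraction, and this only shows the shape of the structured output.

```python
# Toy extraction of structured report fields from a transcribed inspection recording.
# Field names, transcript, and patterns are hypothetical stand-ins.
import re

transcript = (
    "Unit 12 inspection. Roof condition is fair, two shingles missing. "
    "Water heater installed 2018. Recommend follow-up in 6 months."
)

patterns = {
    "unit": r"Unit (\d+)",
    "roof_condition": r"Roof condition is (\w+)",
    "follow_up": r"follow-up in ([\w\s]+?)\.",
}
report = {
    field: (m.group(1) if (m := re.search(pattern, transcript)) else None)
    for field, pattern in patterns.items()
}
print(report)  # {'unit': '12', 'roof_condition': 'fair', 'follow_up': '6 months'}
```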
So what are the key technologies that are allowing this? Well, there's the model, which is of course the LLM of your choice.
The main techniques are going to be NLP and RAG, retrieval-augmented generation. The architecture is generally going to be generative pre-trained transformers. The databases and representations are embeddings.
Embeddings capture relationships between keywords so the agent can process the conversation in better context, and those embeddings live in vector stores.
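Here is a minimal sketch of how embeddings and a vector store support retrieval, assuming a toy bag-of-words "embedding" as a stand-in for a real embedding model. The documents and the query are made up for illustration.

```python
# Minimal sketch of embedding-based retrieval feeding an AI agent (the RAG step).
# A toy bag-of-words "embedding" stands in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: word counts. A production agent would call an embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# The "vector store": documents paired with their embeddings.
knowledge_base = [
    "The Q7 seats seven and offers a 3.0L engine with good fuel economy.",
    "New vehicles come with a 4-year, 50,000-mile warranty from the dealership.",
]
store = [(doc, embed(doc)) for doc in knowledge_base]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank stored documents by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# The retrieved context would then be prepended to the LLM prompt.
print(retrieve("Which vehicle has good fuel economy for a big family?"))
```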
So I want to take you through a framework that I think we can apply in a lot of cases. This is a sales funnel framework, but you can also look at it as any conversational framework between employees and managers, escalating up to VPs.
So picture a lead or a customer: when they're looking into buying a product, typically a high-ticket product, they're going to end up being educated through a live assistant. It's similar for an employee who needs approval from a manager or needs to escalate a situation and get feedback on it.
A lot of times, there's countless time spent going back and forth. And on top of that, there's a lot of manual processing, especially in a sales cycle, where you need to document all the follow-ups, document exactly what the conversation was about, and what status that lead has in the pipeline. And why would you need to do that?
Because eventually that information needs to get passed on to the next sales rep that is going to try and close the deal.
So when we look at the new normal, leads and customers can all be processed simultaneously by one tailored AI agent. You're not limited to the processing capacity of a single human or group of humans.
Two, that AI agent is going to know everything about your company: product specs, services, warranties, pricing, et cetera, whatever you would like it to know, including internal company policies and regulations if it needs to interface with employees. It's going to deliver exactly what the person needs at the right time, depending on how you customize it. And so this AI agent can make product recommendations by matching the user's inputs to its inventory of products.
For example, we built a custom AI agent chatbot for a public automotive dealership. If you told it, "I'm a father, I have five kids and a wife, and I'm looking to buy a fuel-efficient vehicle," it would tell you, "Well, you're probably going to have to go with the Q7, and I can suggest an engine size based on your fuel economy goals," simply because it has that information and it's able to contextualize the input from the user or buyer.
And so these AI agents can not only do product matching, they're also going to provide support, answer frequently asked questions, and more. If you need them to steer a conversation toward a given outcome for a user, whether that's a customer or an employee, you can instruct them with qualification pathways. And you can also instruct them to perform certain services based on what the person is asking.
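Here is a rough sketch of what a qualification pathway might look like once it's written down as code. The questions, fields, and routing thresholds are all hypothetical; the point is only that the agent asks what it still needs and then routes deterministically.

```python
# Rough sketch of a qualification pathway the agent is instructed to follow.
# The questions, fields, and routing rules are hypothetical.

QUALIFICATION_STEPS = [
    ("budget", "What budget range are you working with?"),
    ("timeline", "When are you looking to make a purchase?"),
    ("use_case", "Is this for personal or business use?"),
]

def next_step(answers: dict) -> str:
    """Return the next question to ask, or a routing decision once all fields are filled."""
    for field, question in QUALIFICATION_STEPS:
        if field not in answers:
            return question
    # All fields collected: route the lead with simple, deterministic rules.
    if answers["timeline"] == "this_month" and answers["budget"] >= 30_000:
        return "ROUTE: hand off to a human sales rep"
    return "ROUTE: keep nurturing through the AI agent"

print(next_step({"budget": 45_000}))  # asks about timeline next
print(next_step({"budget": 45_000, "timeline": "this_month", "use_case": "personal"}))
```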
So at the end of the day, the use cases are limitless and the range of a single use case can be very broad. It just comes down to how you're going to customize this AI agent at the end of the day.
Now I'll pass it off to Alex.
Alex is going to talk to us about some of the deployment platforms that we can use as well as the integrations that AI agents can go into. Yeah, so basically I want to talk a bit about deployments and integration platforms.
And I would like to point your attention to the top of the pyramid, where you can see a number of deployment channels. Depending on the business or the company using the AI agent, the channel is going to be different: for example, social media platforms, or a website, where most of you have probably interacted with some kind of widget, whether through a small widget window or a standalone interface.
The middle of the pyramid is workflow deployments and integrations. Those are the interfaces a company uses either to have the same kind of interaction through internal platforms, or to use those platforms as a workflow, either extracting some data and passing it over to the AI agent, or getting valuable information back that can be passed on as parameters. And at the base of the pyramid, you can see the infrastructure integrations.
At the left of the pyramid, you can see automations. Those are different types of automation platforms, like Zapier, for example, that allow companies to connect their various business management platforms so they can communicate with and enrich the AI agent's process, whether by connecting to their CRM, for example, to pass over information the AI agent can use, or by extracting data from the AI agent and processing it later for their use case.
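As a sketch of that hand-off, assuming a Zapier-style catch webhook sits in front of the CRM, the agent's final output could be pushed like this. The webhook URL and the payload fields are placeholders, not a real endpoint or a specific CRM's schema.

```python
# Sketch of the automation hand-off: when the agent finishes a conversation,
# push the captured lead to the CRM through a Zapier-style catch webhook.
# The webhook URL and payload fields are placeholders, not a real endpoint.
import json
from urllib import request

ZAPIER_HOOK_URL = "https://hooks.zapier.com/hooks/catch/000000/abcdef/"  # placeholder

def push_lead_to_crm(lead: dict) -> int:
    """POST the lead as JSON; the automation forwards it to whatever CRM is connected."""
    req = request.Request(
        ZAPIER_HOOK_URL,
        data=json.dumps(lead).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return resp.status

# Example payload the agent might assemble at the end of a conversation.
lead = {"name": "Jane Doe", "interest": "Q7", "stage": "test drive requested"}
# push_lead_to_crm(lead)  # would fire the automation against a live webhook
```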
So I want to talk a little bit more about the benefits of AI tools for businesses. Based on this survey of small and medium-sized businesses, on average an AI tool replaces one independent contractor's workload, and on average the work of 2.1 full-time employees can be done by AI tools. 93% of respondents agree that AI tools drive cost savings and improve profitability.
62% of consumers would rather talk to an AI agent than a live agent if the alternative is waiting 15 minutes for an answer. And 41% say the human capital savings let them redirect employees to higher-value work, so they're not focused on less valuable tasks.
So yeah, that concludes the presentation. I hope you guys enjoyed it.
Before we open up the room for questions, we were asked not to do any promotion, but we will shamelessly solicit one thing. If anyone here is interested in project-based work building AI agents, whether you have a background in Python or JavaScript and are familiar with libraries such as LangChain or LlamaIndex, please reach out. We're looking for project-based developers. And if you have a background in no-code building with any of the current AI agent platforms, reach out as well.
We're looking for developers. Thank you.