I'll quickly introduce myself before we get into the presentation. My name is Anna. I'm the founder at Kraft AI, where I help leaders make sense of AI and adopt it successfully into their business operations and processes.
I've been in the ML AI space for about a decade now. I was in academia, I was in management consulting, I was in big pharma leading Gen AI teams. I was part of many successful projects, but also a number of failed projects, which is what is really informing some of the work that I do at Kraft AI.
A disclaimer before I start. I am neither a techno-optimist nor a pessimist. I am a practitioner. I built my first neural networks back in 2014. I'm passionate about the technology, but I want to see real practical ROI-driven applications of this.
For that, we need to understand the hype cycle. and what you're seeing here is a graphical representation of the life cycle of a technology life cycle stages of a technology any technology and this particular representation or formula was put forward in 1995 and since then we have seen a lot of technologies typically follow this pattern be it virtual reality be it augmented reality voice over ip wi-fi cloud computing, contact centers.
So this is empirical-ish, but again, this, as you can see, it is not scaled. It doesn't talk about a specific timeline. It doesn't talk about the peak or the plateau, what to expect.
But for the purposes of this discussion, we can kind of like map generative AI through this process. So just to talk about some of the terms here, there is a technological event or a consequential development, let's say for VR, that triggers this cycle. And then a lot of people start talking about it. There is a hype built around it. And oftentimes, there's a lot of over-exaggeration around the capabilities of technology. And that peaks at the inflated expectations. And then we see a trough of disillusionment, which is driven by these over exaggeration.
So on a personal level, let's talk specifically about generative AI. The trigger event was really the introduction of ChatGPT, it being open to public. uh through open ai back in november 2022 and one and a half years have passed since then and i believe we are somewhere just shy of that peak of over exaggerated applications.
What this would mean is you might have subscribed to ChatGPT, and you were using it, say, five to 10 times a day back in start of last year. And now you hardly use it, but you're still paying for it.
1For a business or organization, what it would look like is you have paid tens of thousands to buy AI products. And right now you're not seeing the ROI.
And for investors, what this would look like is or what this might look like is they are investing in products that are AI washed or talk about being AI-powered as a core part of their offering, both products and services, and they invest and they are not seeing the returns. And this is a typical cycle that we have seen with other technologies as well.
That being said, I think as a whole, we are at this level, as I mentioned, just shy of the peak of inflated expectations. But there are already people and organizations who are at the trough of disillusionment, who have invested and have not seen ROI. There are also people and organizations who are at the plateau of productivity, who have utilized this technology, asking the right questions, adopting the right strategy, and seeing tangible ROI right now.
So as I mentioned, the first step is to delineate between the hype and the actual capabilities of this technology. And one thing to keep in mind is the first part of this cycle, the hype cycle, is really driven by a technology-first approach. As I mentioned, there is a lot of AI washing happening, like this AI-powered lead generation, AI-powered marketing. AI-powered, everything.
But at the crux of it, Is it really working? What are the performance indicators around these AI-powered products?
We see it with the technology that we use on a daily basis, like there is Slack AI, there is Gemini integrated across the G Suite, there is ChatGPT available through Microsoft as co-pilot to all our enterprise users. But do we really see the indicators behind it?
one characteristic of this part of the cycle. And how do we get through this hype.
I know this chart is a lot, but what I want to focus here is the evolution that this particular technology has gone through. If we really trace back the beginnings of AI, not just generative AI, it goes back to the cybernetics movements that started in the 1940s. which is basically the idea that humans and machines are essentially the same. And that really sparked the idea in scientists that maybe, maybe machines can mimic human cognition or human intelligence.
So what we are seeing right now It's not really the product of OpenAI releasing GPT or AlphaGo defeating the Go champion or neural networks, convolutional neural networks that was put forward by Yann LeCun or even before that, the work of Geoffrey Hinton in back-propagating through neural networks. It really goes back and sits on the stack of development that happened long before.
Which is why we have to think of AI not as a product, but as something we have to upskill to, we have to master. A lot of this is about getting literate about AI, what the possibilities are, what the technology actually is capable of doing, and that goes beyond some of the marketing material that is put forward in front of you by technology companies. But really focusing on the core of how to communicate with large language models. As previous speakers were talking about prompt engineering, like how do we get this particular technology to make value for us?
Having said that, step two of a good AI adoption strategy is to know the risks. And I mean know your risks in particular, be it your personal use case or your organizational use case or your business, whatever you're starting up. What are the risks associated with using this particular technology in your whatever use case that you define?
An analogy I can think of on why you need to know the risks before even adopting the technology is talking about getting a driver's license here in New York. Even before you learn how to drive, you need to get a learner's permit. And for that, you need to go through this five-hour course, which explicitly talks about the risks of being callous with driving or being not conscious enough when you are operating a machine like a car. And it also has like graphic, grotesque imagery of what would go wrong if you are not careful about its usage. And mind you, this is all before you actually learn to drive a car. And we are all fine with it. We think that is the right approach.
The same applies to any technology, especially ones like which have the capacity to cause a lot of harm downstream. LLM security by itself is a huge topic. I could talk hours and hours about it. But at the core of it is... At the core of it is misinformation that can be spread by these LLMs, which is a product of hallucinations. Everyone familiar with the nature of hallucinations of LLMs, right? It's still not fixed.
And then the... Capability of these systems to cause harm through biased outputs. We are seeing a lot of that daily. 1And these systems causing data breach and systems breach, mainly through security issues like prompt injection, typical cybersecurity, but exaggerated with additions of prompt injection capabilities.
So let's talk about the implications of this. These are real-world implications of people getting, yes, there is the potential for this technology to be really useful. But without thinking about the risks of this, this can have real-world implications. So this case in point being,
the chatbot that was implemented by the new york city government on their website using microsoft azure powered open ai basically but i am guessing it has like a knowledge base it goes into the knowledge base gets information out technologically yes it's doing all the right things but did they put enough guardrails in place no so One of the questions that was submitted to this chat board was, can I take a cut of my worker strips? And the chat board very authoritatively says, yes, you may. But the reality, as we all know, that you can't. That is not against the law and against common sense and common ethics. But that's a very clear-cut state, clear-cut policy. case.
There are also like other nuances which as somebody who is not well versed with the business laws of NYC may not initially catch. Another example I want to highlight is the Mobley versus Workday case, which I think at least some of you might be familiar with. So Workday provides AI-powered solutions for hiring. They basically use AI to sift through applications and put in front of the hiring manager or the HR, the best candidates for a particular job. And as expected, these are biased systems, and now there are legal ramifications to this being used and we have seen this before in the context of a system called COMPAS.
This was a machine that was used in understanding the risk of recidivism. This was back in the 1990s and again, ProPublica released a report saying this is biased and then there were lawsuits and now the same story is being repeated but in a different context. So that's, again, as part of like, we can say like big businesses, big corporations. But this is an example when a researcher who is just probably using ChatGPT to summarize parts of the research inadvertently included parts of that generated text in their research paper. So this is published in Science Direct. It is a very reputed paper. research publication that goes through a peer review process. But because of certain practices of overly relying on this technology and giving excessive agency to these technologies, some of these things get reflected in the work that we hold to the highest standards.
As I mentioned, those are like some of the worst case scenarios. But there are also people and organizations who are reaping the benefits of technologies to be productive for their specific use cases. And how this can be realized?
is through another framework of putting your users first. Not the technology, but the users you are serving. Your customers or your internal workers, if you are using these systems to improve the productivity, improve the understanding of your workforce, upskill your workforce, or put it in front of your customers to make it a better experience for them. But looking at it from the vantage point of the users.
So that brings us to step three, which is adoption, but not reactive adoption. You see something, it's cool, it's shiny, and you don't just adopt it, but you go through a strategic framework of understanding your business use case. I'll jump it in a second. So this is again a stacked approach when it comes to adoption.
So on the bottom of the stack is understanding your data systems and processes. I know it's overly simplified, but even for a personal use case, it's really understanding what data you're going to be putting into these systems what systems you have in place right now, what processes work for you as a business, as a team, as an organization.
And then on top of that, you align your personal goals or your business goals or your organizational goals. And think about the pain points. Think about the bottlenecks that you're facing right now. Even if you are generating a business case, think about those like, or the barriers to you getting to the goal that you have set for yourself.
And then on top of that, we put the AI tech in place. And this is where you think about questions like, what specific technology do I use? Do I buy it or do I build it? When do I upskill my team? When do I hire more engineers? What is the cost-benefit analysis I need to undertake?
what is the adoption rate and like my domain right now of this technology you go through the vetting process of your supply chain of your vendors of your procurement and then on top of that is where you put the tactical logistical details into place so this is where you continuously monitor these systems even if you have Procured OpenAI, for instance, for one of your use cases, you need to be continuously monitoring the performance indicators and the input as well as the output to make sure that there is no model drift or data drift or any of those issues that might eventually cause harm to your end users.
And then there is the concept of continuous integration, which is really relying on the idea that it's not the product or services you need to tie yourself to, but this technology, what is possible with this technology. Any year from now, OpenAI might be obsolete, or we might all have switched to open source models. And in that case, we should have the systems ready to integrate these models as and when it suits our needs or our customers' needs.
And then there is the idea of continuous deployment, which really relies on testing. So testing the models, testing to make sure that your services are in place, it doesn't get disrupted. A lot of concepts of LLM security, LLM ops comes here. And on top of that is the governance training and management stack, which is more like an umbrella initiative that must be implemented even on a personal level.
I could talk hours about that, but considering that the pizza is almost here, I'll wrap it up.
So I just want to like, sorry, was that a question? No.
So that being said, there are, as I mentioned, productive use cases. I use AI personally for a lot of work that I do at Kraft AI, precisely following this structure. Even when I'm using GPTs, as we demoed earlier, I have local GPTs. I have Zapier workflows that connects all of these.
I put in a lot of work to improve my productivity. So I do not like context switching. So I give a lot more privileges to these systems. But it is all governed by frameworks and policies on when to use AI and when not to use AI.
And then I use it for brainstorming, as was demoed earlier as well, for note-taking, for summarizing. I do have a lot of these workflows in place, but what makes me really sure is the things that I've put in place, the security measures, the governance models, the policies, and also upskilling my team to make sure that they know the ins and outs of what these technologies are capable of and how much privilege we have given in each of these workflows.
I do want to highlight the work of a friend. This is an example of a productive use case, I believe. To give context, his name is Nathan Hunter. He is the CEO of ChatGPT Trainings. He has been in the learning and development space for about 10 years. a great idea of like how to create value with content, with learning modules. And he has worked with a lot of corporates.
Let's see what he has to say. As an instructional designer, I love adding videos to the learning journeys I create. But video production takes too much time, money and expertise that I just don't have. So that's why both this video and my voice here are 100% AI generated.
And in this course, we'll teach you all our secrets. When you think of AI generated avatars, you probably think of videos like this. Or maybe videos like this. I'm guessing that seeing demos like this put you off the idea of AI generated videos in the past. But what if this kind of video could be your avatar instead? What if your avatar was actually you using your actual voice? When you use Eleven Labs' professional voice cloning tools, Heijen's video avatars and our own way of integrating the tools together, you'll be able to harness the true power of AI video generation within your company.
Basically, he's not relying on one single product, but it's a suite of products. But he has a team of production experts who are coming in, putting in different camera angles, basically taking the expertise from that space and augmenting it with AI.
And with that, I will wrap up my talk. Oops, sorry. Didn't realize that was also lighting. But if you're taking anything away from this talk, it is that AI is great as an augmenter at this stage. We need to understand the ins and outs of it, and we need to approach it more strategically than we think. Thank you.