For your interview with investors about AI, a controversial yet thought-provoking question could be: do you think the relentless pursuit of AI could lead to ethical compromises or even societal harm, and how should investors balance profit with responsibility in this scenario? Profit versus responsibility. Okay, nice one.
So the question was, essentially: is there an ethical dilemma in investing in AI, and how are you thinking about the balance between profit and responsibility? We'll go down the line. Give 10 seconds on who you are and why you're excited to invest in AI. Sure, sure.
Hey everyone, thanks for coming. My name is Ruthvik and I run a CVC called Go Ventures, based in the Mediterranean. I'm excited about AI because it's a technology shift like the one we had when the internet first arrived. Hello? I think it's just recording. Oh, it's recording. So yell into the mic.
I'm so glad you asked such a controversial question and that you're recording it. I'm Noah. I work at Inovia Capital. We're a large venture firm in Canada investing from pre-seed all the way to pre-IPO, and we've been around for about 15 years.
Is this OK? I have trouble when the mic doesn't actually project my voice. Can you guys hear in the back? I'm David Upani, investor at Form, investing in early-stage B2B software, so anything before your Series A. And yeah, I love AI, and I think investing in it is something we're all probably very aware of. I think my controversial answer is that the cat's out of the bag.
We lowly investors probably aren't the people who are going to be putting up the guardrails or stopping AI domination, but those are just my thoughts. Okay. I'm going to try to moderate this in a way that hopefully facilitates an interesting conversation for everyone afterwards, so we're going to rapid-fire this. I'm not going to use the mic. So let's get into it. You're essentially claiming you don't have any responsibility.
You're all profit, which is more or less what I'm hearing. Any of you guys disagree?
I think, just realistically, the amount of impact we have is limited, right? People are going to develop these solutions, and I think it's our responsibility to invest in the best ones but also be mindful of the guardrails. It's probably going to be beyond us; I think government officials and folks like that will regulate it. Okay, and what are the most important guardrails that you're mindful of?
Right. I think, honestly, the fine-tuning example is quite interesting, because at the end of the day, for AI to be most impactful, you're going to need a closed-loop system. And I think some sort of way to make sure that we're not all relying on one closed-loop system is probably the safest way to build AI. Can you speak up a little bit? Sorry.
How do I say this quickly? Basically, on the fine-tuning thing you mentioned, closed-loop systems are probably the biggest risk. It's kind of like having key-man risk in a business. So having several different closed loops and different ecosystems is probably the best way to defend yourself, in my honest opinion. What do you think, Noah?
I think the first part of that question is really easy. Are there ethical concerns with investing in AI or with AI in general? I mean, yes, that's pretty easy. I think everyone in the room would say that, as with any technology that you can invest in.
We generally have a tech-for-good investing thesis. We sit more in the Yann LeCun camp of this, for anyone who's following the everlasting Twitter debates between him and Hinton. So not so much on the existential risk, but more on misinformation and deepfake persuasion, especially individual persuasion. And realistically, LLMs are a tool, a new toolkit.
You need to have guardrails around how those are used, and that responsibility falls on companies and investors. The same way that if I made forum software, I might end up with a neo-Nazi forum somewhere, and it's probably my job to moderate that. That's what we expect from our social platforms, and we should probably expect something similar from our AI companies.
That's interesting. The social media comparison is interesting. We spent the last 10 years in court with social media companies essentially claiming that they're not responsible for moderating the content on their platforms, and it's only now that we've started to reach some level of consensus. I think the question with AI is: is 10 years of deliberation going to be too late? Do we have enough time to develop this regulatory framework?
Well, it's happening whether I say it or not. If you want a soundbite on some of these regulations: I think some of the regulations coming out of the US are often self-serving and tend to favor the incumbent technology companies, restrict innovation, and silo power. But I don't know exactly. Do we have enough time? I don't know. What do you think?
You know, we've touched on a lot of different ways to look at it. First of all, you said this thing about social media screening, right? I don't know if you guys know how that ended up. It ended up that the moderators couldn't handle what they were seeing, and they were going and doing drugs in the bathroom just to cope with all the stuff they had to moderate.
So are we going to expect the same thing? Are we going to be able to police to the extent of saying, okay, if somebody's figuring out how to make a homemade gun, is it for a research project or is it to hurt people, right? So yes, we can make the decision to invest in good things. If I see a deepfake startup, maybe I'll read it, but I won't go for it.
But then also, to Zain's first point. Sorry, David. David, my mistake. To David's first point: who's it going to be, right?
At OpenAI, we saw what happened recently with the board and the CEO. They kicked him out because they thought they couldn't police him, but he's invincible, so he came back. That's one interpretation.
Let's talk about what is most misunderstood about AI from an investment perspective. You guys are in the business of finding an edge. All of you are competing for access to deals, and there's a finite number of tier-one AI companies to invest in. What is it that each of you believes that your colleagues don't? David, let's start with you.
Honestly, even though AI is getting a lot of attention right now, I suspect enterprise adoption in the B2B world will be slower, because unless you're a big incumbent, it's hard to prove the value, or take the risk of leveraging AI, particularly in adverse circumstances, the reputation risk for example. And seeing the ROI initially might be challenging. So I suspect it might actually be a while before we start seeing some of these new AI startups get off the ground. That's my hot take, I guess.
I have a point to add to that, and then I'll do mine. Something that's been disappointing to us, because we have some more mature, at-scale, growth-stage companies, has been the reaction to use AI purely to reduce costs,
or to view it as a way to replace humans: you know, "I see GitHub Copilot's pretty good, when can I reduce my engineering staff?" We see a lot of that, often from the more financially focused partners at funds, too. And why is that disappointing? Because you cannot ignore the opportunity to use this new tool to deliver an expanded value proposition to your customer. Rather than just treating it as a way to
increase my margin, how should I be using this tool to improve the outcome for my customer, so that maybe I can end up charging more or expanding my product offering? That's been one frustrating thing about enterprise adoption: everywhere we look, it's "how can I cut costs in some way?" As for the thing my colleagues ridicule me for, and this is not a consensus Inovia view: I think AI is probably going to change our lives way more than we think, but it's probably going to take a lot longer than we think.
Right now, we're seeing customer service workflows that are really great. We're seeing marketing workflows. We're seeing legal workflows. But as our friends at Cohere like to say, this is a fundamental change in the way we interact with technology, the same way the smartphone was many years ago. And I think that
this is an opportunity, or a climate, that really changes maybe the interface of a computer or the operating system of a computer. It's exactly the environment that produces your next Steve Jobs type of visionary. And there's a lot of hard plumbing and a lot of product experimentation that's going to take longer than we think. Like I always say, we're on the iPhone 15 now. I don't think, in 20 years,
we're going to be waiting for the iPhone 35 or whatever it is. Many of you probably saw Humane; they kind of got laughed out of their demo, but I think they're on the right track this early.
Just give everyone some background on what Humane is. Humane is a Silicon Valley company that's raised a ton of money, and basically they've built a new device: a clip that you attach to your shirt, you interact with it by voice, and the screen is a projection onto your hand. Again, I'm not saying they're right, but I think we might get a new modality in the way we interact with technology, because it's not just generation; it's the ability for computers to understand language, which I think is
really, really exciting. That's really interesting, because he's pointing out that hardware also has to evolve alongside. So it's not just about building the software element; it's about how you build the physical devices we're going to interact with in the future, which might be glasses or might be this Humane thing. Or spatial computing.
Okay, a lot of folks in the room are building with AI today. What advice would you offer folks who are looking to get their emerging AI startup funded? David, do you want to pick this up? Or, Noah, if you've got energy behind it, go for it.
Well, until you said "funded," I had energy behind it. Get customers or something. Get people using it. Let me rephrase the question. I think five years ago, if you were starting a startup, there was a certain level of pattern matching that you would look for in the early days of a startup's growth.
I think it's fair to say that building an AI-native company has perhaps changed that pattern matching. I'm curious: what's most different when you're looking at early-stage opportunities? What's most different about this batch of AI companies compared to the previous generation? Is anything different in your eyes?
I think the important thing when building in AI is definitely the data, the training data. It's probably consensus that proprietary data sets are going to be a huge differentiator in the long term, so doubling down on that is quite important. When you phrase it like that, one thing I've noticed a lot more of is
a lot of companies that have user adoption but no monetization, and it's the result of what we're calling AI tourism. In C-suites everywhere, as I mentioned on the cost cutting, every board is telling the exec suite to reduce costs, and then they push it down to the VPs or the
product managers, who are probably simultaneously entering a buy-versus-build procurement process. So, you know, I want a chatbot: I'll probably talk to Ada, I'll probably talk to Intercom.
And then I'll probably play around in the GPT store as well, or maybe pull down another open source model and see if I can get to that quality. Then I realize how hard it is. And all along the way, there are developer tools being used. So we see a ton of companies with adoption curves like this and no ability to convert. Cool.
Building on that, now that I've had more time to process it, there are kind of three areas I'm excited about, which is hopefully also helpful here. The data set point falls into the vertical approach: being use-case specific, whether it's supply chain or marketing or sales or legal tech. There's also the horizontal approach, so Cohere, for example, or OpenAI, or folks like that; I think that's probably really excellent. But also the dev stack, right? That's another place that's probably really exciting, because it's a whole new way of building with computers. And I think
there are different ways, different approaches to maximizing that. Yeah, if you split it into revenue and fundraising, just how I try to look at it and bucket these AI companies: they're either in infrastructure,
in a foundational model, or in the application or tool layer. If you look at revenue, 90% of revenue goes to infrastructure right now. You're paying $20 a month for OpenAI; it's not that much. But if you look at fundraising, it's the other way around: people are putting the money into the foundational models and the apps, which haven't generated the yield yet. So the first thing, when I'm thinking about this, is:
is it going to be outdated in six months, right? Is OpenAI going to copy this thing? Is it going to be turned on its head so there's nothing you can really do, however hard you try to sell it?
Then the data set is definitely one. I had a guy demonstrating his app to me today in a very clear manner. And in addition to the general criteria of being venture-backable, let's not forget the industry: AI is part of the technical due diligence, maybe, but it doesn't stand on its own. It's not just the AI that you have to evaluate. So yeah, he took me through it in a really nice way today.
He explained it like this: look, here's the before, here's the after, we do this magic sauce, and this is what goes in and what comes out, just like was said earlier. So that's a little bit of what I'd like to share on that. One more thing, and I'm assuming here that you're a founder building on top of an LLM API.
We see kind of two camps of people. I'm going to use analogies; I hate using analogies, but this is how it was explained to me, and it really works. We see people whose philosophy is: use the best-in-class model all the time, even if it might be too expensive or too powerful for the use case, and get the best outcome that you can deliver to your customer until you get product-market fit.
So the analogy is: why would I take an open source model and fine-tune it? Why would I use an axe when I have a chainsaw over here?
Then you have another camp of people who say you don't need a chainsaw to cut a piece of banana bread. So they take all these smaller domain-specific models, tune them to your use case, or use a hierarchical architecture between them. This is advice that will probably change next week, but my learning so far is that the effort to fine-tune and host those smaller open source domain-specific models
is so expensive and intense, only for the next chainsaw to come out and open up a whole new world of possibilities. Focus on using the best-in-class tool until you're at scale and have some product-market fit, like these folks, and then you can worry about cost optimization. That's a second-order problem.
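[Editor's note: to make that "chainsaw first, optimize later" advice concrete, here is a minimal sketch of the pattern the panelist describes. All names and the stubbed model calls are hypothetical illustrations, not any real provider's API: a single routing switch defaults every request to the best-in-class model, and a use case is only moved to a cheaper tuned model once product-market fit justifies the optimization work.]

```python
# Hypothetical sketch: default everything to the best-in-class hosted model,
# with a per-use-case switch for a cheaper fine-tuned model later.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelRoute:
    name: str
    cost_per_1k_tokens: float       # rough unit economics for each route
    complete: Callable[[str], str]  # prompt -> completion

def frontier_model(prompt: str) -> str:
    # Stand-in for a call to a best-in-class hosted API ("the chainsaw").
    return f"[frontier completion for: {prompt!r}]"

def tuned_small_model(prompt: str) -> str:
    # Stand-in for a self-hosted, domain-tuned open source model ("the axe").
    return f"[small-model completion for: {prompt!r}]"

ROUTES = {
    "pre_pmf": ModelRoute("frontier", 0.03, frontier_model),
    "cost_optimized": ModelRoute("tuned-small", 0.002, tuned_small_model),
}

def answer(prompt: str, stage: str = "pre_pmf") -> str:
    """Before product-market fit, everything goes to the frontier model;
    afterwards, flip `stage` to 'cost_optimized' for the use cases where
    the smaller model is good enough."""
    return ROUTES[stage].complete(prompt)

if __name__ == "__main__":
    print(answer("Summarize this support ticket for the on-call engineer."))
    print(answer("Classify this ticket's priority.", stage="cost_optimized"))
```

[The design point is simply that the swap is one line of configuration per use case, so cost optimization can be deferred without rearchitecting the product.]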
I think that depends on what you're building, though. Yeah, if you're building something that's more general-purpose versus a healthcare tool or a supply chain tool, like mine for example, I think that's a different story. Yeah. There's some real debate right there; it's good. Nice job. Hey, these guys have been great. This is the customary interrogation that you guys signed up for, so thank you. Let's give them a round of applause.
You can ask any questions. We'll pass it back to Josh to wrap things up. Thanks to everyone for listening. Thanks, guys.