All right, so we've been hearing theoretical and technical talks, which are really exciting. But when is our next robot actually coming? I'm really excited too, but not just yet.
So right now, we're going to give a practical talk on AI: what we can actually use it for, and how it can help us day to day, whether in a personal or a business setting.
And this talk is about something our team has been working on for a long time: an AI that can help you with applications. I'm sure every single one of you has had to file lengthy applications, whether for yourself or for a business.
The thing about these applications is, one, there are a ton of them you have to send out, and you have to customize yourself or your company to each one. And second, because each application has so many applicants, you want to figure out how to stand out among them and identify the key points that differentiate you.
And because of all this hard work and these lengthy applications, sometimes you don't have time to do it all, so you're leaving opportunities, as well as potential money, on the table. So we want to walk you through how we can potentially help with that, with something our team has been working on.
Here is one sample category of applications, more on the business side: the RFP, which stands for request for proposal. Usually a government agency or a commercial company will put out an RFP asking vendors to apply to a specific challenge they have.
When responding to an RFP, you need to tailor your effort to each one based on your company's current status, its projects, and all its past materials. At the same time, the volume of documents you have to go through on your company's side to respond to an RFP is extensive.
The other side of applications is grant applications. Grants are usually issued by the federal government or other public-sector bodies. When the government issues a grant, companies, whether smaller or bigger, put together proposals to compete with each other for that particular grant.
To put it quantitatively: on average, a business spends about 150 hours on each grant application. Each application is around 10 to 15 pages long, but the success rate is only 10 to 20 percent. So statistically, the more grants you apply for, the more likely you are to win one. And the submission requirements are the same story: most applications need to be tailored, include your business context, and make you stand out among your competitors.
So Ben here, who has been working very hard on this, will give you a demo of how AI can help you speed up these applications. So we actually wrote some software that just uses an LLM to fill in these grant applications. I'll walk you through our code here.
To begin with, the federal government has a number of websites where they list grant opportunities, and it's very easy to export the data from those sites into a CSV. Now you have a nice, easy file that you can pull into a pandas DataFrame, and it just has a large list of grants with information about each one.
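As a rough sketch of that first step (the file name and column names here are assumptions for illustration, not the exact export schema):

```python
import pandas as pd

# Load the CSV exported from the federal grants site into a DataFrame.
# "grants.csv" and these column names are hypothetical.
grants = pd.read_csv("grants.csv")
print(grants[["Title", "Agency", "CloseDate"]].head())
```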
Once we do that, we then try to find the most relevant grants. We use a TF-IDF vectorizer to convert the natural-language title of each grant; each grant has a short title describing what it's about. We then take some keywords we're interested in searching for, find the titles most similar to those keywords, and tell it how many grants we want.
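A minimal sketch of that matching step, assuming the DataFrame from above (the keyword string here is a hypothetical example, not our actual search terms):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def find_relevant_grants(grants, keywords, top_n=10):
    # Fit TF-IDF on the grant titles plus the keyword query,
    # then rank titles by cosine similarity to the keywords.
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(grants["Title"].tolist() + [keywords])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return grants.iloc[scores.argsort()[::-1][:top_n]]

top_grants = find_relevant_grants(grants, "machine learning sensing autonomy", top_n=10)
```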
So here we're telling it we want 10 grants, and these are the keywords we want it to look for. Then, once we have our grants, we loop through each specific grant proposal and get a little more information about it; we actually get the full text of the grant. For each federal grant proposal, there's a very particular structured document that says: this is how you have to format your proposal, these are the key aspects of any proposal we're looking for, this is what the technology should do.
This is what we're hoping to achieve with this program, that sort of thing. So then we actually get ChatGPT to take the full text of the grant and prompt it to come up with a number of steps to fill in that grant. So: given the text of this grant application, give me the steps, where step one might be, you know, read the grant carefully. And I'll show you what this looks like. If I go to the steps, the first step it comes up with is just, yeah, read the grant carefully.
And then as you get into it, you know, outline your technical approach and so on, right? So we loop through each of these steps that we get from ChatGPT, and for each step, we prompt ChatGPT again. We have a database of all of our internal documentation that we've already built in Chroma, and we query the Chroma database to find the two documentation pages we think are most relevant to this grant proposal. So we prompt ChatGPT with the full text of those two documentation pages, the full text of the grant, and the specific step that it just generated. It gets the step, the full text of the grant application, and those documentation pages, and then it generates some output for each step.
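Putting that loop together, here's a rough sketch (the Chroma collection name, storage path, model name, and prompt wording are all assumptions for illustration; whether the retrieval query uses the step text or the grant text is also an assumption):

```python
import chromadb
from openai import OpenAI

client = OpenAI()
chroma = chromadb.PersistentClient(path="./docs_db")  # assumed path
docs = chroma.get_collection("internal_docs")         # assumed collection name

def ask(prompt, model="gpt-4o-mini"):  # assumed model; the demo just says "ChatGPT"
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def draft_grant(grant_text):
    # 1) Have the model break the grant into concrete writing steps.
    steps = ask("Given the text of this grant application, list the steps "
                "needed to fill it in, one per line:\n\n" + grant_text).splitlines()
    responses = []
    for step in steps:
        # 2) Pull the two most similar internal documentation pages.
        pages = docs.query(query_texts=[step], n_results=2)["documents"][0]
        # 3) Prompt with those pages, the full grant text, and the current step.
        responses.append(ask("Company documentation:\n" + "\n\n".join(pages)
                             + "\n\nGrant text:\n" + grant_text
                             + "\n\nComplete this step of the proposal: " + step))
    return responses
```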
In the end, we pull it all into one text file; it's just the list of responses from ChatGPT that we save out. So this is an example of what a response might look like. It says it reviewed the document for step one, and then for step two, it gives this outline, dives into specific company details, things like that. The idea behind giving it the documentation pages as part of the prompt is that it can use information specific to our company to actually fill in the grant, rather than the traditional approach with LLMs, where the model is limited to whatever it trained on. This is a pretty well-studied field, and there are a lot of folks doing this sort of thing, but we think it's an interesting application of that approach. So then we save that response to a text file. And then we do a fun little thing: we get ChatGPT to go back and revise the grant it just came up with. We give it the whole set of text and tell it: OK, you're a grant writer, this is the first draft, now make it better. So we get a revised response, and yeah, it's maybe a little bit better. And that's kind of what we're doing. As was mentioned earlier, we think a lot of people leave a lot of money on the table because this is just an arcane, tedious process. But it's non-dilutive funding, which can be huge for startups across a whole range of industries, and for more established businesses too. So yeah, we think there are some pretty interesting use cases here.
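For completeness, the revision pass is just one more call on top of the draft (again a sketch; the prompt wording paraphrases the talk, and the model name and output file name are assumptions):

```python
from openai import OpenAI

client = OpenAI()

def revise(draft_sections):
    # Join the per-step responses and ask the model to polish the full draft.
    first_draft = "\n\n".join(draft_sections)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user",
                   "content": "You are a grant writer. This is the first draft "
                              "of a proposal. Revise it and make it better:\n\n"
                              + first_draft}],
    )
    revised = resp.choices[0].message.content
    with open("revised_proposal.txt", "w") as f:  # hypothetical output file
        f.write(revised)
    return revised
```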
So with that, if anyone has any questions, we'd love to jump into them. Thank you. Yeah?
I recently was exposed to the whole AI stuff. With what you mentioned about it having access to, let's say, local documents or a database, is that considered something that would fall under, like, RAG? Yeah, exactly. So the idea is we're using RAG to give it that context about a specific company.
Yeah? So I'm curious why you chose TF-IDF versus something like hybrid BM25 or something like that? Honestly, it was just simple to implement. We might play around with it in the future; there wasn't any particular dogma behind it.
Yeah? So we just use the built-in Chroma similarity scores to find the most relevant documents to feed it. And we just set an arbitrary limit of two pages, because each page is fairly long and has a fair amount of information. But we were very careful not to exceed ChatGPT's context window.
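As an aside, one simple way to enforce that kind of check (a sketch using tiktoken, which is an assumption; the talk doesn't say how the limit is checked):

```python
import tiktoken

def fits_context(prompt, max_tokens=128_000):
    # Count tokens so the assembled prompt stays under the model's window.
    # The encoding and the limit here are assumptions, not the demo's values.
    enc = tiktoken.get_encoding("cl100k_base")
    return len(enc.encode(prompt)) <= max_tokens
```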
So it really is looking at everything we give it, at least. Any questions? Yeah. So, looking at the use case, it looks like it's a lot of tokens. So what would the cost of this be? You know, we haven't priced it out; I'm actually not sure off the top of my head. I don't know if any of you guys have any thoughts, but yeah, I don't know.
For the RAG, I want to ask: how much does it optimize the prompt size? The prompts around... you mean the RAG-specific content, or the stuff around the RAG-specific content? Yeah, so we use RAG and prompts together. Right. So first we start with a prompt, like a huge prompt. Right. So how much of that cost is saved, because now we're not spending it on tokens, but passing it as context through RAG? Oh, right now we're just dumping it all into the prompt, right? We get the text from our own database and send that as part of the prompt each time. So it's not really RAG, then? Well, it is, because you're still retrieving the relevant information and feeding it in as part of the prompt, right? It's just that you're doing it with a local vector database instead of something the model itself has native access to.
Yeah? So I'm assuming, because of the price, would some version of this be available as open-source code? No, I mean, the code could be open source, but I don't think we could host it for people for free, for sure. You could, of course, use Llama 3 or some other open LLM instead of a proprietary model, and that may be a decent solution. Yep.
So the way this is working, it's just an entirely local database. You're sending some information to the LLM, but presumably you're using an LLM provider that you trust, or maybe you're running Llama 3 locally, so it never leaves your premises, right? You would probably want to double-check your grant application once it's drafted and make sure there's nothing particularly sensitive in it that you don't want to disclose externally. But yeah, I think the fact that it's a locally hosted database helps alleviate that concern.
Yep. Thank you for the demo and the talk. I'm curious whether you tested the quality of this versus someone going the more traditional route. So we haven't tested against an actual human doing it the traditional way, but we have done a couple of iterations to improve the quality. We found that getting ChatGPT to generate its own steps for each grant, and then looping through each of those steps and prompting it again and again, works better than feeding it the whole thing and just saying, draft it, which gives you a fairly limited response, or even than breaking it up by section, things like that. The steps do seem to improve quality. I guess the ultimate quality measure is who gets funded. Right, that's right. So we'd have to... Right, yeah. This is an early-stage demo we haven't deployed yet.
Yeah, in the back there. So I've been working on the same problem for a while. Uh-huh. I kind of have a question. You sounded a bit unsure of the answers you're giving; you're not sure whether the quality of what's being generated through RAG is good enough to just use directly to fill in the grant applications. So I'm curious: what is your strategy to improve the results, and what's the plan going forward? Yeah, so I think there are a number of things. Right now, we use a very simple TF-IDF score to figure out which grants match the keywords we're interested in. Filtering the grants more specifically could help, because if a grant is more relevant to your company, maybe you're more likely to get it. And then also having better data sets to feed into RAG: we just pull in our API documentation and such right now, and we don't have a knowledge graph or anything about the overall structure of the company. That sort of thing might be helpful too. And then, I think, just kind of iterating. Again, this is a fairly early demo.
I think there are a number of other things we'd also want to play around with, maybe other models, things like that. How long do you think it will be before it gets out to the public? Yeah, I don't know. You guys have any thoughts on that? Yeah, I think, basically, this is something that, if it worked really well, would be super cool. But there are a lot of obstacles to having it be completely foolproof and completely great, and it takes a team of actually really good ML researchers to work on something like this. And there's always going to be the problem of hallucinations or adding wrong information, which limits how much of the process you can automate and how much time you can save; you'll probably still always need a human in the loop. And then, I guess, on Rehan's point on evaluation: to your point, really the only way to evaluate is to get the results back. If you have previous grants you've applied for, where you know this one succeeded and this one didn't, you have some structured data. Otherwise, you're left with proxy metrics like relevancy and faithfulness. And furthermore, I don't think it's a solved problem, right? This is something that right now LLMs probably can't do perfectly, but maybe over time the enhanced retrieval will get better. I think it's something that's worth building, basically.
And I think anything we can do to make the process less painful is helpful. And I can tell you, even just from building this demo, that the search function to find the relevant grants is hugely useful on its own, because these websites are so bad. Finding anything that's even remotely relevant to what you're trying to do is a painful, tedious process. So yeah. Did you think about inverting the problem? In terms of the ultimate outcome here, it's making sure the right companies get the money. Yeah. Could you invert it by helping the grant issuers find the right companies, rather than helping the companies find the money? Maybe. I think that's a pretty different task, though. I mean, maybe that's also something to be done, but I don't think it's within the scope of what we're doing here. I saw a question over there, yeah.
Thank you. How do you account for the risk of over-training? If you over-train your model, then how do you handle that? So we're doing RAG; we're not fine-tuning the model. So I don't think that's a big issue for us in this approach. Yeah.
Yeah. So I wanted to ask you about the validity of it. You also said it keeps changing, right? Six months later, this will not work the same; even next week, it will not work the same. Right. And I see there's a bias problem, where you first ask ChatGPT to create the steps, then it reads those steps to create the document, and then you ask ChatGPT again to verify it. So that's a closed loop. So on validity, I'm curious how you verify that whatever comes out, day after day, month after month, is going to be valid. Yeah, so I think the ultimate metric, as we were saying earlier, will just be funding rates, success rates, things like that, right? There are a number of proxies for that you could try to build, but I think that's the ultimate ground truth we care about. So do you keep testing it? Yeah, well, you keep track of how well the grants that actually get submitted ultimately do, right? But there's no version number I can pin in time, so that I can just use it and get the same output? Well, not right now, but that certainly could be something we could implement. Yeah. But this is still very early days, so we don't think it makes sense to freeze anything yet. Yeah.
So I guess we're out of time; we'll just take one more question, if any. Yeah. A quick question, more about the available data. Generally, for grants, are the winning proposals available, or are only the winners named? So right now, we just have the grant proposals themselves, and we would have to track the proposals that get generated and submitted and how they ultimately do. I guess there's probably some data set of accepted proposals; it's probably a matter of public record, but I don't know off the top of my head. Yeah, I think that's accurate. Well, thanks, guys, for your time. It was a great conversation. Thank you all.