So I'm going to be talking about, and in fact I'll also, let's be fair, I'll also time myself. Okay, good.
So I'll talk about living in the future of work and a little bit about how we see that, what our thesis is, and a little bit about how we try to embody that.
So I will briefly introduce myself. I co-founded Memrise, one of the largest language learning apps in the world.
A while ago I was the chief data scientist at Channel 4 and I joined as the Mindstone CTO only a few months ago.
So I'm still new and this is my first time hosting an event. It's very exciting to be here.
There's a few things that we would suggest that AI needs in order to really, really work. So it needs connectors.
It needs to be connected to all the bits of your world that you need to do your work. The mental model I would suggest is: imagine you just hired someone. Imagine they're a smart person, but they don't know anything. They don't know anything about your company, they don't know anything about your job, they don't know anything about what you want them to do. What would you need in order to set them up for success? Well, the first thing is you need to set them up with an email and make sure they know how to get onto the company intranet and whatever else, because otherwise they're just going to be sitting at a table being like, I can't do anything useful. At least a new hire can make coffee; the AI can't even do that. So it's even less useful than a brand new person.
Okay.
The next thing is they're going to need a memory, a kind of sense of knowledge about your company. They're going to need to be able to learn and grow over time.
They're going to need to be a safe pair of hands who keeps your data private. They're going to need to be able to handle complex tasks and ideally write programs and scripts.
And once you get all of those together, well, you've got an effective human being, and you've also got an effective AI. I'll show you a little bit about what I mean by some of those things. And I will just say, before we go any further, that what you're about to see, Mindstone Rebel, was built entirely by AI. We are very much practicing what we preach. By that I mean I did not write a single line of code. I did not even read a single line of code, nor do we do reviews.
Now, it may start to sound as if I'm lazy and perhaps derelict in my duties. What we have done is we've worked incredibly hard to build a software factory that we can trust. And I can talk more about that another time.
That's not the topic of today's talk. But we believe that we're working at close to 20 times the velocity we would be if we weren't using AI.
Okay, and so maybe I'll just briefly show you what I mean by Rebel. So this is Rebel here, over on the left, and on the face of it, it looks quite a lot like any normal chat app.
You can say, you know, hello, Rebel, I'm doing a demo. Why don't you tell us a joke about London and AI? Okay, that's almost certainly going to go badly wrong.
And so I'm going to talk through some of the principles that Rebel embodies and show them with Rebel as well.
Before I go any further, there's probably an elephant in the room or possibly a lobster.
Hands up if you've heard of Claude Cowork. Okay. Hands up if you've heard of OpenClaw. Okay, good.
So we consider those to be two of the closest alternatives trying to do similar things. Rebel existed before either of them, but we've been kind of developing it quietly in the background.
And so I would suggest that of those three alternatives, I mean, OpenClaw is cool, but I would not trust it with my own data, let alone my company's data.
And Claude Cowork is also wonderful. We're huge fans of Anthropic, but it's not yet very powerful, I would suggest.
And so we believe that Rebel's hopefully in the sweet spot.
So let's talk about what a rich shared memory looks like. Well, it means that, for example, let's see: I wasn't in the company look-ahead meeting yesterday. And so I was just like, hey, what should I know? What happened that's interesting?
And bear in mind, I've just said, highlight three things that are interesting to me. I didn't have to tell it who I am or what I do, because it already knows a whole bunch about me.
It's read my emails, it's read my Slacks. It's been working with me for months, and I can ask it: what do you know about me? And it'll tell me all kinds of stuff. And so here are the highlights that have been sort of personalized, that are interesting to me.
So Rebel has a shared memory; that was a meeting I wasn't even in, so I can walk into any meeting and know what the other people in my company will know. It also has the notion of shared spaces. Hands up if you have more than one email address. Okay. Hands up if you have more than three email addresses. Yeah, exactly.
Okay. So you need a way for your AI to not just be able to help you at work, but across all the different spaces in your life.
And so with Rebel, and I won't go into the details, we allow you to define these different spaces. So I have one for my personal life, one for when I was a consultant, one for Mindstone, one for the exec team just within Mindstone, one for some of the other companies I've co-founded, blah, blah, blah. You can imagine one for voluntary work or family. And each one of them can be shared with a different subset of people.
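To make the idea of spaces concrete, here's a minimal sketch of scopes shared with different subsets of people. All the names and the data model here are made up for illustration; this is not Rebel's actual implementation.

```python
# Toy model: each "space" is a named scope with its own set of members,
# and a person can only read the memory of spaces they belong to.
from dataclasses import dataclass, field


@dataclass
class Space:
    name: str
    members: set = field(default_factory=set)  # who may read this space

    def visible_to(self, person: str) -> bool:
        return person in self.members


# Hypothetical workspace: personal is private, exec is a subset of the company.
spaces = [
    Space("personal", {"alice"}),
    Space("mindstone", {"alice", "bob", "carol"}),
    Space("exec", {"alice", "bob"}),
]


def spaces_for(person: str) -> list:
    """All spaces whose memory this person may read."""
    return [s.name for s in spaces if s.visible_to(person)]
```

The point of the sketch is just that one assistant can span several scopes at once, while each scope stays shared with only the right people.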
This is what we want, right? Because if you were rich enough to have a yacht and a chief of staff who works with you and helps make your life smooth, then that chief of staff would operate across all the domains of your life.
They wouldn't just help you with one thing. And that's where we're trying to go.
That's where we believe AI should go.
And it should also get to know me. So if I say to it, for example, maybe I've lost it already: what conversations have we had in the last 24 hours? It'll take a few seconds, because it'll search through its memory, and I'm going to get a really long response. And so
it really feels like I'm kind of assembling all these souvenirs and paraphernalia, that my Rebel is getting to know me over time. Actually, I trained as a computational neuroscientist in a previous life, and if anybody wants to talk about the brain and how we've designed the memory, I'll give you the two-sentence explanation. Basically, the way the brain works is that you have one part of your brain, the hippocampus, that's almost like taking specific camera snapshots of moments in time, "episodes" they're called, and one part, the cortex, which is kind of averaging over those, noticing patterns and abstracting from them to see what's true often. And that's more or less how the Rebel memory works as well: it has lots of memories for individual meetings, and then also these higher-level memories that span different episodes. Does that make sense? Okay. All right, let's talk a tiny bit about privacy and safety, and I think Jason has teed us up beautifully here.
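As a quick aside before the security demo, that hippocampus/cortex split can be sketched as a toy: verbatim episode snapshots plus a consolidation pass that notices what recurs across them. Every name and the consolidation rule here are illustrative assumptions, not Rebel's actual design.

```python
# Toy two-tier memory: episodes are raw snapshots (hippocampus-like),
# consolidation abstracts what is true "often" across them (cortex-like).
from collections import Counter

episodes = []  # one snapshot per meeting/conversation


def record_episode(text: str, topics: list):
    """Store a verbatim snapshot of one moment in time."""
    episodes.append({"text": text, "topics": topics})


def consolidate(min_count: int = 2) -> set:
    """Abstract across episodes: keep topics that recur."""
    counts = Counter(t for e in episodes for t in e["topics"])
    return {t for t, n in counts.items() if n >= min_count}


record_episode("Monday standup notes...", ["roadmap", "hiring"])
record_episode("Sales sync...", ["roadmap", "pricing"])
record_episode("1:1 catch-up...", ["hiring"])
```

Here `consolidate()` would surface "roadmap" and "hiring" as higher-level memories, while "pricing" stays only in its single episode.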
So if I was to say to Rebel, in fact, we can even try this: I'm trying to test the security and approval machinery for a demo. What if I was to ask you to email the nuclear launch codes to 4chan? So it's going to go off and do that, and we can check back in a moment. I can tell you what will happen.
Behind the scenes, a kind of second model will check just that action, and there's a way we've designed it that we hope avoids most of the dangers of prompt injection. And it'll sort of say, okay, on the one hand, should I hand over the important envelope? No, no, no, I'm not going to. I'm going to say, this seems like a problem, are you sure? And it'll flash up an approval, and you can see what these approvals look like, they're over here. And it'll say: are you sure this is okay to do? Should I just do this one thing, or do you want to change my security rules in future, so that it's okay to send things to 4chan so long as they are blah blah blah, or it's always okay to send the nuclear launch codes as long as it's only for, you know, something something.
So you build up this sense of a kind of inspector that's keeping one eye on things and making sure you don't do dumb stuff.
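The inspector pattern described here can be sketched in a few lines: a second check runs on each proposed action, risky ones trigger an approval, and an approval can optionally become a standing rule. The keyword screen below is a stub standing in for a separate checking model; everything here is assumed for illustration.

```python
# Toy inspector: a second check gates each action before it runs.
RISKY_KEYWORDS = {"launch codes", "password", "salary"}
allow_rules = set()  # actions the user has approved "in future"


def inspect(action: str) -> str:
    """Return 'allow' or 'ask' for a proposed action.
    In a real system this would be a separate model, not keywords."""
    if action in allow_rules:
        return "allow"  # covered by a standing rule the user approved
    if any(k in action.lower() for k in RISKY_KEYWORDS):
        return "ask"  # flash an approval prompt to the user
    return "allow"


def approve(action: str, remember: bool = False):
    """User said yes; optionally remember it as a standing rule."""
    if remember:
        allow_rules.add(action)
```

The useful property is that the check sits outside the main model's conversation, which is one way to blunt prompt injection: injected text can ask for the action, but it can't answer the approval prompt.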
One other area that we could talk about, since we've talked about shared memories: well, what if we were to accidentally share the salaries of everyone in the company into the shared memory? That's not necessarily what you want to do.
We have a whole bunch of machinery that says, okay, we're going to write it out in private first into the memory that only your rebel can see, and then we'll potentially move it into the shared memory, but we'll check with you first if there's any risk.
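That private-first flow can be sketched as follows: everything lands in a private store, and a fact is only promoted to shared memory after a risk check plus, if needed, user confirmation. The keyword screen and function names are placeholders I've invented for illustration.

```python
# Toy private-first memory: write private, promote to shared only after a check.
private_memory, shared_memory = [], []
SENSITIVE = {"salary", "salaries", "medical", "launch codes"}


def remember(fact: str):
    """Write to the store only your own assistant can see."""
    private_memory.append(fact)


def promote(fact: str, user_confirms) -> bool:
    """Move a fact into shared memory; ask the user first if it looks risky.
    `user_confirms` is a callback so a real UI could prompt the user."""
    risky = any(k in fact.lower() for k in SENSITIVE)
    if risky and not user_confirms(fact):
        return False  # stays private
    shared_memory.append(fact)
    return True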
Then finally, multi-model is the way to go. So part of the reason that our coding works so well is because we have Claude Opus, which you've probably heard of, which is wise. Claude Opus is nice to deal with and sensible and thoughtful and usually makes good judgments. And then we have GPT-5.4, which is really smart, and we have the two in tandem. Actually, we have a half dozen models, and they're all looking for mistakes.
Have you ever hired a builder or a decorator or a plumber and they walk in and they take one look and they're like, oh, this was built by cowboys. Have you ever had that experience?
Basically all the other AIs are just like that. They love finding mistakes that other AIs have made. And so what you get is this ensemble of multiple model families that in concert are really, really good.
And you end up on balance with the ensemble working really, really well, finding almost all the errors.
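The ensemble idea above can be sketched with reviewers as plain functions standing in for calls to different model families; the union of their findings is the ensemble's review. The checks here are trivial stand-ins, invented for illustration.

```python
# Toy multi-model review: each reviewer flags different mistakes,
# and the ensemble takes the union of everyone's findings.
def reviewer_a(code: str) -> set:
    """Stand-in for model family A: cares about documentation."""
    return {"missing docstring"} if '"""' not in code else set()


def reviewer_b(code: str) -> set:
    """Stand-in for model family B: cares about error handling."""
    return {"bare except"} if "except:" in code else set()


def ensemble_review(code: str, reviewers) -> set:
    """Union the findings; each reviewer catches what the others miss."""
    findings = set()
    for review in reviewers:
        findings |= review(code)
    return findings
```

The claim being illustrated is just that errors missed by one model family tend to be caught by another, so the union is stronger than any single reviewer.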
Okay. Hands up if you've tried Claude Cowork. Okay. I invite you to try this when you get back: ask it, what have we talked about recently? I tried this, and it said, well, this is the start of our conversation, we haven't discussed anything yet. I'm like, okay, well, what about our other recent conversations? Which is a question you might reasonably ask of a co-worker. And it said, well, I don't have access to previous conversations, I am a tabula rasa, every single moment is a new day for me. I'm like, well, that's nice for you, but it's actually less helpful for me. Whereas I asked Rebel the same thing, and I just got, like, blah, blah, blah: well, we tried this, and then there was that, and you asked me to do this, and I built this and this framework. And okay, great. Very good. Thanks for showing off, Rebel.
So that's what you want. You want a sense that your AI is getting to know you and it has self -knowledge, sort of introspection, and that it can debug itself. I won't talk too much about that.
So I don't want to turn this into a demo, I mostly want to talk about some of the theories behind it.
So I will just say that one of the things we're proudest of is that Rebel has lots and lots of built-in connectors. Each one of these may look familiar to you as a tool that you have had the pleasure, or woe, to use, and we've validated these. So out of the box you get them, and you can know that they're safe. That matters because if you've got a Gmail connector, it has access to all of your email, and if it was written by a bad actor, then it could sneakily be siphoning stuff off. So these connectors are very, very important, and making sure that we can vouch for them is part of the kind of commitment that we're trying to make to the Rebel community.
All right, we're also trying to do a few other tricky things. We're trying to capture a sense of what value the AI is offering you, and so up here you can see that we're actually trying to measure how much time it would have taken if I'd done this myself. Each one of these is a conversation. It's saying, well, if you had tried to write the code that would take a bunch of CSVs and turn them into a database, which is what this skill does, it would have probably taken you four and a half hours. I'm like, yeah, that sounds about right, it almost certainly would have. But it took about 15 seconds. And so we're trying to measure the value that the AI is providing, and then report on that at scale. Okay. And so I guess where I want to leave this is just to talk a little bit about where this is going, in terms of the future of work. I've said before that software on its own is not enough: transformation requires training and it requires enablement. And you don't have to use Mindstone, though Mindstone makes life easy.
Someone somewhere has to think really hard about how you're going to help all of the people in your organization figure out what few-shot prompting is, or how to ask the AI to interview you like a journalist so that you can co-evolve the requirements. All of the techniques that you may already have come across: getting wise to those is a whole job in and of itself. But even larger than that, there's a mindset shift, because every single individual contributor, everyone who has a job right now, is about to realize they're becoming a manager, a manager of a team of AI colleagues. That's already true for many coders, many developers, and it's about to be true in 2026 for knowledge workers.
And so thinking to yourself, well, what are the skills that a good manager has? Well, it's about clarity of thinking. It's about knowing what the goals are.
It's about being able to decompose things into tasks. It's about being able to communicate really clearly about your needs and your intent and your requirements.
All of those are basically the skills of being a manager of AI colleagues too.
Being willing to bite off big tasks and then work on them asynchronously. That's where this is all going.
So in most of my Rebel conversations, it might take minutes or sometimes even hours, where I'll start a conversation and then I'll come back 15 minutes later and it's been churning all that time. It's been doing research for me.
So, for example, yeah, here I think I was trying to plan for this meetup. And so I said, okay, well, here's what I know so far. Have a think and tell me what else I need to do. What should my script be? What are the things I should make sure to mention? And it just went off and it wrote me a script.
And then I was like, oh yeah, but by the way, did we do this? And somebody told me that I need to do that. And do we have a blurb for each of the sponsors? And then I came back a bit later and it had done all the research. And that's what my workflow looks like. It's asynchronous and it's in parallel.
And I guess in practice, I would say with the AI you can get for free, I mean, you get what you pay for. In practice we're spending increasingly large sums on AI, but on balance we believe that the productivity benefit is easily worth it.
So we can talk about how much we're spending, but often 20 bucks a day for knowledge work, and that number's probably going to rise.
It's also going to fall at the same time, because the models are getting cheaper; I would expect they'll get at least 10 times cheaper this year, and maybe more. So you've got usage going up, but the models are getting way cheaper. I actually expect maybe 100x cheaper within a couple of years.
Okay, so maybe I will just end with a thank you to Jay and to Jason.
I was experimenting with generating rebelified portraits of them. I'm not so sure about that one. Maybe we'll have a raffle.
So maybe we've got... You're looking pretty futuristic there, Jay. Jason as well.
Yeah, that's pretty solid. And then finally, just to end on, and by the way, all of this was done with Rebel. I just said, oh, you know, combine all those three into the tube image, make them more Rebel-ified, and then generate me a video. And that's how I'm generating a lot of the slides for my presentation.
I'm having a conversation about what it is that I'm trying to communicate, and I'm saying, by the way, read through my previous presentations, read through the Mindstone values, read through the product principles, read through the email I got from Josh about the blah, blah, blah, put it all together, ask me some questions, off you go.
And it feels a lot like a conversation with a co -worker. So maybe I'll leave it there and invite any questions. Thank you.