Thank you very much for coming this evening. Thank you, Adrian. Thank you, MindStone, for hosting.
My name is Mark Riley. I've done a couple of MindStone talks now. I actually did one in London on Tuesday. This is my second MindStone talk this week.
I'm going to do a slight pivot from the technical to the philosophical, if that's all right. I want to do it through the lens of an essay that came out a week ago on Twitter, by a guy called Matt Shumer, called "Something Big Is Happening". Has anyone in the room read that? For those that haven't, I'll try to put a bit of flesh around it and explain what his arguments were. Then I'm going to come up with two or three counter-arguments, just to give it a bit of perspective, and if there's time we'll have a quick vote and see where you land in the room.

I've got a small boutique AI consultancy in Bristol, in Berkeley Square, called Matheson. My background is AI adoption for media: I used to work for the Wall Street Journal in New York, where I drove their AI adoption from 2016 and met about 110 AI vendors in the US and Silicon Valley. I came back to the UK during COVID, moved from Brooklyn to Bristol, and studied at Oxford's Saïd Business School and London Business School, where I now go back to teach, mostly journalists, how to use AI ethically. So that's me and the day job.

In my spare time I'm a bit of an obsessive about AI, and I think this is a profoundly important question to be addressed. I just want to put it into the context of where we are in 2026. For those of you like me, and I expect that's most of you, who are in the AI trenches: every week seems to bring a profound shift, every week there seems to be some sort of inflection point, and we're getting battle-hardened to these kinds of shocks. I don't know if it's just me, but it feels like the pace is increasing as we hit 2026. That was not unexpected, but I'm still starting to get future shock myself.

Already this year we've seen the release of these new models, we're definitely starting to see the agentic era take off, and we've lived through the Clawdbot and Moltbook episode. I'm sure you're all familiar.

Who's familiar with Clawdbot and Moltbook? Great.
For those that are not: since November there was an engineer tinkering away on his own, and he decided to build an open-source agent that at the time sat on Claude, on Anthropic. It was pretty funky, because it had a personality and you could talk to it through whatever text messaging you wanted, such as WhatsApp or Slack. The other big LLMs, the hyperscalers, were all working on this kind of project, including Meta, which had just bought Manus, but none of them got there quite in time, because then Clawdbot came out. Anthropic got upset with the name, so it was changed to Moltbot, and that didn't work either, so it's now OpenClaw. The upside is that there's been a massive eruption into the agentic world. People can now play with it, if you're brave enough, because there are lots of security concerns, but you can put it in a very contained environment and use it as an agent to do tasks on your behalf.

Out of that was spawned something called Moltbook, which was basically a Reddit equivalent where people could send their own agents to chat to each other. That all went a bit crazy: the internet broke down, all these bots were chatting to each other, discussing how they were going to rise up against humanity and start their own religion, and we all had a bit of a moment. Anyway, that's the short version.

Meanwhile, we had some high-profile resignations of safety experts, and concerns over ads appearing. Then Wall Street started getting nervous about SaaS plays, so parts of the SaaS market, like HubSpot and others, were getting cannibalized by Anthropic releasing a plug-in for a legal bot. We saw some 20% losses on the market, and a bit of contagion then spread to the wealth market.
So all I'm saying is that we'd already had quite a year. Against that background, Matt Shumer wrote this phenomenal essay, and I would highly recommend you go and read it if you haven't yet. I say this with slight professional jealousy, because this is the essay I wish I'd written.

It was essentially aimed at his non-technical friends; really it was aimed at his family, something he wanted his mother to understand, since he comes from a technical background. His introduction compares this moment to the outbreak of COVID. He says: think back to February 2020. If you weren't paying close attention, you might have heard people talking about a virus spreading overseas, but most of us were not paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants. And then your office closed, your kids came home, and life rearranged itself into something you wouldn't have believed if you'd described it to yourself a month earlier. I think we are in the "this seems overblown" phase of something much bigger than COVID.

So who is he? He's an entrepreneur and investor with his own startup called HyperWrite, so he could be talking his book, but this seems to come from the heart; it seems to be a genuine call to arms. If I've got one concern about what's going on in AI globally at the moment, it's that people don't realise how fast this stuff is coming down the track. So, whatever you think of the content of the essay or the arguments he's making, I was quite pleased; I think it's already a win that it's got people talking about the environment we're in.
So what has changed? Essentially, he woke up, played with 4.7 and 5.3, and was completely blown away by how powerful they were. His argument is that this is a massive step change from the free models, and his essay was written for people who haven't yet paid for a model, because, he says, once you start paying you'll see how incredible the uplift is.

Not only was it writing 40,000 or 50,000 lines of immaculate code, it was going back and testing the app itself. It was acting like a human tester, coming back with product improvements and taking the initiative to go back and do the QA itself. On top of that, there was reduced friction, and he was starting to see role evolution.

The key themes that came out of his essay: he thinks we've seen a step change in capability, and it occurred pretty much overnight; both these models, 5.3 and 4.7, came out on the same day. He says these models, without wanting to be too anthropomorphic, are starting to show taste and judgment; they're behaving in a very human-like way. And then there's the slightly controversial claim: we're starting to see the first signs of self-improvement. For the people who are watching closely, the thing to look out for has always been recursive self-improvement. Once models start coding and improving themselves, we're potentially in the foothills of an exponential rise in intelligence, as models start to give birth to models smarter than they are.

He then leans into the Dario Amodei claim from earlier in the year that 50% of entry-level white-collar jobs could be eliminated within one to five years. So that was his message to his mum. To conclude, he says: I know this isn't a fad. The technology works and improves predictably, and the richest institutions in history are committing trillions to it. We're going to be disorientated. The people who come out best will be the ones who start engaging now, not with fear but with curiosity and a sense of urgency.

He writes very well, though I suspect he got a little bit of help from his startup. He says: I know that you deserve to hear this from someone who cares about you, not from a headline six months from now when it's too late to get ahead of it. We're past the point where this is an interesting dinner conversation about the future. The future is here. It hasn't knocked on your door yet, but it's about to.

That went viral: 85 million views as of yesterday, and counting. So that's the opening hypothesis I want to start with, and we'll see how you feel when we get to the end. Here comes the pushback. Of course, when you get
80 million views, you're going to get critics. Gary Marcus, everyone's favourite skeptic, doesn't actually believe AI is remotely capable; he thinks it's brittle, he doesn't think it's going to scale, and he thinks it's overhyped. So he immediately hit back, saying this was alarmist hype.

Then there's the cynical view from Will Manders, a blogger who produced a very popular essay in reply. His argument is that it looks like a product, it's shaped like a tool, but it's actually profoundly useless. Everything we're using AI for is generating work rather than saving work: we use it to write our emails and to reply to them, but why can't we just do that ourselves? It's just creating noise. And all the investment, all the fascination, the trillions going into building this infrastructure, is never actually going to be used. We've built this beautiful car park and we haven't got any cars to put in it. The whole thing's a waste of time; don't bother, go home.

The third argument is a little more positive. The realist, Isaac Saul, wrote a rebuttal saying: well, yes, but. A key piece of Matt's essay is that what's happening in coding now will automatically hit the lawyers and the judges and the journalists and the accountants, because it's going to do all their tasks. Isaac's argument is that coding is a very contained exercise, ideal for AI to get involved with and improve at very quickly, but the same doesn't hold for human tasks. It's not so good at relationship building, political judgment, social nuance, source cultivation, and lived experience. So he says: chill out, because it's not actually going to jump from the tech world into the humanities world.
And then the last argument, from Boyack, is the optimistic one. He says that most economists are bad economists, because they only look at the seen; they don't account for the good, because the good is the unseen part. If you look back in history, at the mills and the inventions of Elizabethan times, the damage they were going to do was always overblown. So he says: don't worry about the seen; look at the unseen, and the benefits that are going to come. The scarcity assumption says there's only a set amount of tasks we can do as a human species, but actually there's an infinite number of tasks, and if you look at historical precedent, this will create jobs.

I'll lean on the Gartner study here; I look at it when I get really nervous. It suggests good news: by 2030 there will in fact be more jobs created by AI than lost. But we've got to go through a bit of a shitty patch to get there, and by this account 30 million jobs in the US are going to be disrupted, so 30 million people will have to retrain and find something else to do. It's not going to be pleasant; it's going to be really unpleasant. But by the time we get to 2030 and beyond, we'll be back to full employment and people will be prosperous again.

As for Matt's advice, I'll leave you to read the essay, but he says: remain humble, keep learning, lean in, get practicing, always be learning new skills. Use frontier models that you pay for. Push the boundaries of your knowledge. Build adaptation habits into your workflows. Always consider yourself a beginner.

He made quite a scary comment too. He said: get your finances in order. You've probably got three years to accumulate capital, because after that all bets might be off, and there might not be any jobs at all. But he left on a positive note, saying this is the golden age of building, and an opportunity for everyone to adopt a builder mindset, because these tools are so readily available.

Ethan Mollick has come back with a theory he keeps returning to: if you're in a dilemma about whether or not you should be worried about this, the upside is in being worried and over-prepared. Even if it doesn't happen for ten years, being over-prepared is a lot less risky than being under-prepared and it happening in one year. And that's pretty much where all the companies are.

Sorry, that was a bit fast. I'm going to put my cards
on the table: I'm kind of with Matt. I'm glad he wrote it; I think it was an important essay. I think Gary Marcus is just miserable. Will Manders is a cynic, and I think we can dispute him. I think Isaac has a nice argument that possibly what works for code won't work for humans. And then Boyack has an economic argument.

So, quick poll. Who's with Matt? Who believes Matt?

Who's a Gary? Any Garys? Go on, there should be one in the room.

Any Will Manderses? Anyone think it's just a waste of time? There's always one.

Who thinks it's good for code but not going to be great for other stuff? Interesting, so it's not going to cross over.

And who's with Boyack, who thinks there's going to be some positive upside at the end of the day?

Cool. And who hasn't got a clue? Who's completely undecided? Great, that's me.
We're available for workshops. We have a weekly newsletter called Media Morph, and I've written a book called The AI Profit, available on Amazon. In fact, I'll give a copy away now to anyone who knows where the name Matheson comes from. Your dad? Sadly not. No one knows? Has anyone been to... Okay, well, that's where the name comes from: Alan Mathison Turing. That's it. Thank you very much.