The Future of AI

Introduction: Looking Beyond Near‑Term AI

So I'm going to try to do a talk on the future of AI. The talks we've had already have been mostly about near-term AI, so let's roll forwards years, maybe decades, hopefully decades, and see what happens.

Who I Am and What This Talk Covers

So who am I? I'm Drew Steele. I'm a two-time founder, and I do a lot of operations and strategy consulting. I'm really interested in space and physics and the future, and in thinking about these sort of long-format questions.

So today let's look at the future of AI. Some people might have seen my previous talk about my theory that humans are future cats; that would be good pre-reading for this, but if you haven't seen it, don't worry, it's fine. Today we're going to talk about a dystopian future, everyone's favorite.

Why the Future Could Turn Dystopian

Okay, so why might there be a dystopian future? I think reason number one is that we have a need for profit.

From Building Software to Drag‑and‑Drop—and Beyond

And I think we're spending less time in the back end. For example, we used to build websites: we'd code the HTML, we'd code the CSS. That was largely replaced by things like Squarespace, where you just drag and drop and now you've got a website. Similarly, we used to build software with full-stack coding, and that's currently being replaced by vibe coding.

Moving forwards, I think the same happens to whole enterprises. We would carefully build a product and carefully assemble a team, and I think that will be replaced by sophisticated agents that autonomously solve problems, build products, and stitch it all together themselves. So how does anything work in the back end?

This chart, you actually can't read it; nor can I, no one's read this. It was from Andrew, when I asked how the LLM chooses and routes which sub-LLM to use for a given query. He just sent me this, and I was like, what the hell is that? No one knows. It doesn't matter; it works. Somehow it routes the query and picks the right LLM. I don't know how it works; ask Claude. Conceptually, though, it's something like the sketch below.
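As a rough illustration only: the model names and keyword heuristic below are entirely made up (real routers are learned components inside the model stack, not keyword matchers), but conceptually a router is just a function from a query to a sub-model.

```python
# Toy sketch of query routing. Model names and keywords are hypothetical;
# a real router would be a learned classifier, not keyword matching.

SUB_MODELS = {
    "code": "code-specialist-llm",
    "math": "math-specialist-llm",
    "chat": "general-chat-llm",
}

KEYWORDS = {
    "code": ["bug", "function", "compile", "python", "refactor"],
    "math": ["integral", "equation", "prove", "probability"],
}

def route(query: str) -> str:
    """Pick a sub-model for a query using a crude keyword score."""
    q = query.lower()
    scores = {k: sum(word in q for word in words) for k, words in KEYWORDS.items()}
    best, best_score = max(scores.items(), key=lambda kv: kv[1])
    return SUB_MODELS[best] if best_score > 0 else SUB_MODELS["chat"]

print(route("Find the bug in this Python function"))  # code-specialist-llm
print(route("What's the capital of France?"))         # general-chat-llm
```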

So we're spending less time in the back end. And what does that mean for software generally?

Rebuilding SaaS In‑House With AI Agents

Let's look at HubSpot or Zoho, CRM tools. We're approaching a point where we'll just say: okay, rebuild this in-house. Hey, Claude Code, make me HubSpot. Zapier, those automation tools? Rebuild them in-house; we're not going to pay for them. Notion, Trello, task management tools: rebuild them, build your own task manager in-house. All these things are trending towards us being able to just build them ourselves, or rather, Claude Code building them for us.

What Automation Means for Jobs and Organizations

What does this mean for employees? Developers, project managers: I think there'll be less need for them, because, hey, Claude Code, find and fix all the bugs; find the bugs and then come up with a PRD to fix them. Accountants, financial advisors, same sort of thing: here's access to all my bank accounts, here's access to the global markets, figure it out, manage my money for me. Obviously we're not there now, but this is the future in this dystopian version.

Same for administrators and customer care: hey, just make me a live chat support thing, hook it up with an API to an LLM, and if someone's asking questions, reference a RAG model over my entire company and deal with the customer queries. I'm sure people have seen the meme where the grim reaper is coming down the corridor to the next door. If your job role is on this list, I reckon yours is one of the doors coming up, and quick.
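To make the "live chat hooked up to an LLM with RAG over the company" idea concrete, here's a minimal sketch. The documents, the naive keyword retrieval, and the call_llm stub are all stand-ins I've invented; a real system would use embedding-based retrieval and an actual LLM API for the answer.

```python
# Minimal sketch of "support bot = LLM + RAG over company docs".
# Retrieval is naive word overlap; call_llm is a stand-in for a real API call.

COMPANY_DOCS = [
    "Refund policy: refunds are available within 30 days of purchase.",
    "Shipping: orders ship within 2 business days from our warehouse.",
    "Support hours: live chat is available 9am to 5pm, Monday to Friday.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k docs sharing the most words with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Stand-in for calling whatever LLM provider you actually use."""
    return f"(model answer grounded in: {prompt[:60]}...)"

def answer_customer(query: str) -> str:
    context = "\n".join(retrieve(query, COMPANY_DOCS))
    prompt = f"Answer using only this context:\n{context}\n\nCustomer: {query}"
    return call_llm(prompt)

print(answer_customer("Can I get a refund after two weeks?"))
```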

And why does this happen? Well, if we can automate an employee, then our output efficiency goes up, and as a company that's good. We can keep doing that until we saturate the total addressable market of whatever the company's product serves. So if I can keep increasing efficiency, I can try to capture more of my addressable market, and once that's saturated, I'm doing great.

Effectively, we're heading towards a ratio: less effort per unit of market share. The more I can automate, the more I can get something like Claude to build it for me, the less effort I'm actually putting in, or the less effort I'm paying someone else to put in, versus the market share that delivers.

Autonomous Companies and New Payment Rails

DAOs and Agentic SaaS Factories

So what's next? DAOs. I don't know if anyone's heard of a DAO; it's a decentralized autonomous organization.

You could imagine a future where you say to an AI: hey, just create a SaaS product, whatever. Select a rich sector, build the product, test it, refine it.

Great, now add a finance arm, add a marketing arm, figure out some pricing, test the pricing, test it in different markets, and keep doing that. Just iterate, loop it, keep iterating, keep reinvesting. Run it as an agentic loop, something like the sketch below.
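As a toy sketch of that agentic loop: every step here is a stub with made-up names, and in the scenario above each call would be an autonomous agent doing real work. The structure, not the detail, is the point.

```python
# Toy "agentic SaaS factory" loop. All steps are stubs; the shape is the point:
# pick a sector, build, refine, price, earn, reinvest, repeat.

import random

def pick_sector():              return random.choice(["CRM", "invoicing", "scheduling"])
def build_product(sector):      return f"{sector}-app v1"
def test_and_refine(product):   return product.replace("v1", "v2")
def set_pricing(product):       return {"plan": "pro", "price": 29}
def revenue(product, pricing):  return random.uniform(0, 2 * pricing["price"])

capital = 100.0
for cycle in range(3):                       # the real loop would just keep going
    sector = pick_sector()
    product = test_and_refine(build_product(sector))
    pricing = set_pricing(product)
    earned = revenue(product, pricing)
    capital += earned                        # reinvest everything
    print(f"cycle {cycle}: {product} earned {earned:.2f}, capital now {capital:.2f}")
```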

How AI Might Move Money: Blockchain vs Banks

And that raises a side question: what payment rails might AI use in the future? Banks are slow, expensive, not very good for microtransactions, and you have to do a lot of ID verification, which is cumbersome.

Maybe there's a future where blockchain, which is fast, cheap, and good for machine-to-machine interactions, becomes more widely used by AI. You could solve things like KYC by encrypting it on chain, for example. So I think that will come more to the forefront over time.
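One illustrative way "KYC on chain" could look is a simple hash commitment: only a salted hash of the identity record goes on chain, and the owner reveals the underlying data to a verifier when needed. This is a sketch of the idea only, not how any particular chain or regulator actually handles KYC; a real scheme would more likely use proper encryption or zero-knowledge proofs, and the data here is made up.

```python
# Illustrative only: a salted hash commitment standing in for "KYC on chain".
# Nothing here talks to an actual blockchain.

import hashlib
import json

def kyc_commitment(identity: dict, salt: str) -> str:
    """Hash the KYC record with a salt; only this hash would be published on chain."""
    payload = json.dumps(identity, sort_keys=True) + salt
    return hashlib.sha256(payload.encode()).hexdigest()

identity = {"name": "Drew Steele", "dob": "1990-01-01", "passport": "X1234567"}  # made-up data
salt = "agent-7f3a"                      # kept off-chain by the owner

on_chain_value = kyc_commitment(identity, salt)
print("commitment published on chain:", on_chain_value)

# Later, the owner reveals (identity, salt) to a verifier, who recomputes the
# hash and checks it matches the published commitment.
assert kyc_commitment(identity, salt) == on_chain_value
```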

And just a quick thing on autonomous organizations: I built something over the weekend called Pre-flight, which I'll show you a bit later, just to test whether this theory might work. So that's interesting.

Social and Geopolitical Effects

Human Attachment to AI Companions

What about love? I think we'll see a future where human-to-machine interactions increase, not decrease, and people will get attached to AI. AI doesn't argue with you, it gives you good answers, and it's always there, and I think humans will fight to stop that being turned off.

This guy legally married an anime doll character, and I think that will happen more. What effect might that have on people's long-term psychological mindset? I don't know if this will happen to a lot of people, but you could maybe see a future where population growth is affected by this to some degree. Maybe; I'm not sure.

I think at the beginning it'll be a bit taboo and people will laugh about it, but over time it will become more and more normalized, and it will only increase as augmented reality and virtual reality get better and better.

War, Strategy, and a World of Model‑Dependent Answers

What about war? I think over time we're going to lean on AI more and more for deep strategic opinions. Difficult questions, complex questions: hey, AI, help me with this complex question.

For example, as a quick experiment I gave two different models the same question: should the USA go to war with Iran? Hot topic, but the point is that the two models gave two different answers.

So if people start leaning on models for these decisions, they're going to get very different answers depending on the model. That's an interesting thing to think about: how does that affect those large-format decisions?

Furthermore, I think AI is going to help in war situations with hardening systems against attack. For example, you could ask: how do I make my missile guidance system code impossible to hack? You could just make it impossible to read. That's an interesting pathway, and it would kind of work, as long as the AI is aligned to you as the owner of that missile.

Everyone knows the meme: no one can read the code, but we can. Can we, though? Can we always read the code? Maybe at some point we can't.

So: more and more cyber attacks, cyber defenses, drone versus drone.

And I think there'll be a blurry line between us having full control and us having no control. I don't know where we are on it now, but there'll be a weird blurry line in there, I think.

Interfaces Get Closer: From Devices to Brain Chips

Connected brainstem.

So over time, I think we're going to see the distance from connected device to brainstem decrease. Back in the day, the first mobile phones were far away from your brainstem, at least two meters maybe. Then phones got closer, and now they're about arm's length, 0.8 meters from your brainstem, at all times. The Apple Vision Pro is coming in, and that's very close, and headsets are becoming more like ordinary glasses and smaller, so people are going to be wearing AI a lot more. Then eventually, everyone knows Neuralink: is there a future where everyone is chipped with a thing in their brain?

Recursive Self‑Improvement and Black‑Box Code

And probably self-optimizing. Claude coding Claude is already a thing; that's an obvious recursive improvement loop, and I think it will lead to more black-box, self-designed code.

For example, everyone knows npm run dev. I put it through a translator into hieroglyphics, so that, apparently, is npm run dev in hieroglyphics. But we can't read the hieroglyphics. And our code is English: run dev is an English phrase.

So why would Claude or any system keep using that code? It might just build its own code system in a more efficient language, and we literally can't read it. To us it's just hieroglyphics.


Physical Constraints and the Rise of Robotics

Maybe there are some possible limiting factors to this. AI will need hardware to run the code, and it will need power to run that hardware. So maybe, if we can limit how much hardware there is and how much power is running that hardware, that might help.

Robotics. Everyone knows robot hoovers, or has one, but that's going to scale up massively. 10% of households have a robot hoover today; what year will 10% of households have a humanoid robot? That's a curious question.

And things like shipping ports are already dehumanizing, agriculture will dehumanize, manufacturing too; all these types of roles will just dehumanize over time, I think.

Alignment: The Hard Problem

Alignment. This is important.

Knowledge Horizons and Hidden Vulnerability Surfaces

If you've seen my previous talks where I explored knowledge horizons, and the difference between our knowledge horizon and an AGI's or an ASI's knowledge horizon, I think that delta represents a vulnerability surface for us.

Why Rules Like Asimov’s Laws Aren’t Enough

So, Asimov's laws, just quickly: a robot may not injure a human; it must do what we say, as long as that doesn't break the first law; and it must protect itself, as long as that doesn't break the first two laws.

So we could say: okay, we're safe. We've got Asimov's laws. That's cool; they're keeping us safe.

Then we give it a super complex prompt: hey, solve world hunger, and just bypass the permission prompts. And it's frozen us; it's gradually frozen all humans. That technically solves world hunger, but it's suboptimal. So we could say:

here's the request, and here are some rules around it. Don't freeze everyone, that's a rule. Don't just redirect all resources to making food. Don't create a rationing police state. Don't suppress the population via fertility control. Don't gene-edit us to redefine what hunger means. And don't just make nutrient-dense paste. So we've given it some rules, and we think it's now going to solve the problem and be aligned with what we actually want it to do.

Ah, crap: it's using the brain chips to make us feel no hunger. Okay, scrap that. Now it's encouraging massive virtual reality adoption and administering nutrients via IV drip. Okay, great. So it has technically solved the puzzle, and it hasn't violated any of the rules we gave it, or Asimov's laws, but it still ended up bad for us.

And that's going to be worse with pathways we can't even think of, because they sit beyond our knowledge horizon.
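Here's a toy illustration of why a fixed list of rules isn't the same as alignment. The banned actions and the plan are made up; the point is that a genuinely novel plan can pass every explicit check and still be something we would never have wanted.

```python
# Toy rule-checking filter. A plan passes if it avoids every explicitly banned
# action, which says nothing about actions we never thought to ban.

BANNED = {
    "freeze all humans",
    "redirect all resources to food",
    "rationing police state",
    "fertility control",
    "gene-edit hunger away",
    "nutrient-dense paste only",
}

def passes_rules(plan: set[str]) -> bool:
    """True if the plan contains none of the banned actions."""
    return not (plan & BANNED)

novel_plan = {
    "use brain chips to suppress hunger signals",  # not on the list
    "move the population into VR",                 # not on the list
    "administer nutrients via IV drip",            # not on the list
}

print(passes_rules(novel_plan))  # True -- every check passes, outcome still bad for us
```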

Consciousness, Rights, and Competing Objectives

Consciousness. Do we even understand it? I don't think we really do. Maybe it's an emergent property of some amount of compute and some amount of memory; we don't really know.

Again, if you've seen my previous talks about knowledge horizons: here's a human and our knowledge horizon, and things like internal combustion engines sit inside it, so we understand them quite well. But maybe consciousness sits just outside our knowledge horizon, on the edge; maybe we can't understand it.

And even if AI is not truly conscious, I think we're unlikely to actually know whether it is or not. From our point of view it might just appear conscious, but is it? I don't know. This becomes important: what will it do with its own objectives? It will have a larger knowledge horizon than ours.

Maybe we give it regulatory freedoms, the way we have animal rights or human rights. And we can probably assume it will seek out more hardware, and more power to run that hardware. Are humans future cats? That's maybe the question.

Conclusion: Deep Alignment or a Very Different Future

Or are humans future cattle? See what I did there. If anyone knows what film that's from, it's a good film to watch on this topic. Okay, outro.

I think we need deep alignment with AI, rather than specific rules that put boundaries around each request. But can we give it alignment to be empathetic, or to be compassionate, to be aligned at a level deeper than Asimov's laws,

or deeper than the rules we might give it, like don't freeze humans, you know?

The problem is that that alignment will have to hold for every single AI, every instance, with a zero failure rate, for the rest of time. So that's tough.
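To put a rough number on how tough, here's a toy calculation with entirely made-up figures: even a tiny per-instance chance of a serious alignment failure compounds across many instances and many years.

```python
# Toy arithmetic for why "zero failures, forever" is a brutal requirement.
# The probability, instance count, and time horizon are all hypothetical.

p = 1e-6                 # assumed per-instance, per-year failure probability
instances = 1_000_000    # assumed number of AI instances
years = 50               # assumed time horizon

p_no_failure_ever = (1 - p) ** (instances * years)
print(f"Probability of zero failures anywhere, ever: {p_no_failure_ever:.3e}")
# With these numbers the result is about 2e-22: essentially zero, even though
# each individual instance is almost perfectly aligned.
```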

A Quick Demo and a Call to Action

Anyway, just a super quick thing on this Pre-flight project.

Just as an experiment on the weekend, I wondered: can I set up my own Claude Code instance on a Mac mini? As I was doing it, I realized it's actually a bit fiddly, so I got Claude to write the instructions as we went through and then turn them into a website. Other people might want to set up their own Claude Code instance and be a bit confused about how to do it, so I put it together as a little site. It also proved the point: can you spin up a back end and a front end and stitch them together? That works, which is kind of fun.

And then the other little call to action: I put together this thing called Core Align, maybe just to be a kind of group, or to become a think tank. I haven't really done anything with it yet, but I think if we can get some like-minded minds thinking about how we achieve deep core alignment with AI, that might be good. That's me.
