The Future of AI

Introduction

Who I am and why this talk

Hello, everybody.

So I've been given the talk on the future of AI.

So yeah, I'm Drew Steele with a rubbish photo of me.

I'm a two time founder, operations consultant, I do a lot of work with Android 60 seconds.

And I very much like thinking about space and physics and the future.

So I think that's why I've been given this talk to do.

So just like bear with me as we go a little bit off piste here and see what might happen in the future.

The thesis: AI keeps getting better

So just as like a TLDR in case everyone decides to nod off, the summary I think is going to be AI is going to get better, it's going to get better, and it's going to get better.

And I think as we talk about this, we need to think about how this actually relates to what's going to happen to humans.

And so I think humans get better as well.

Then I think humans get worse.

Then I think humans are cats, which we'll come back to later on.

Phase 1: Better (2025–2035)

So let's start with AI.

Better.

This is the good bit.

I've put it here roughly 2025 to 2035, I think.

So the next decade, AI is just going to get better.

Why AI will improve: compute, data, and incentives

We could probably make a pretty reasonable assumption that compute power over the coming years will continue to rise.

And really, this little graph here is just showing calculations per watt.

So it's called gigaflops per watt.

That's billions of floating-point operations; the exact unit doesn't matter.

It's just how many things it can do, like a CPU or a core or something like that.

And yeah, it's getting more efficient.

That's like how technology progresses.

And we can also see that adding more compute to AI systems improves intelligence and adding more training data to AI systems improves intelligence.

So let's just take those as kind of like that's probably going to carry on.
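Just to make that scaling intuition concrete, here's a toy sketch in the style of the published power-law scaling results; the constants and exponents below are assumptions for illustration, not fitted values.

```python
# Toy illustration of the scaling-law intuition: loss falls as a power
# law in parameters (N) and training tokens (D). The constants and
# exponents are assumptions for illustration, not fitted values.

def loss(n_params: float, n_tokens: float) -> float:
    E, A, B = 1.7, 400.0, 4000.0   # assumed irreducible loss and scale terms
    alpha, beta = 0.34, 0.28       # assumed exponents
    return E + A / n_params**alpha + B / n_tokens**beta

# Scale model and data together by 10x per "generation" of compute.
for gen in range(4):
    n, d = 1e9 * 10**gen, 2e10 * 10**gen
    print(f"gen {gen}: params={n:.0e}, tokens={d:.0e}, loss={loss(n, d):.3f}")
```

The loss keeps falling for as long as we keep feeding in compute and data, and that's the whole bet.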

For anyone wondering, yeah, I think Moore's law probably will break down due to things like quantum tunneling.

But I think we'll then just come up with the next thing.

It will still be compute.

So it won't be flops.

It will be quantum operations per second, or whatever the next thing is.

So we'll just kind of keep doing more per second, I think.

And I think from the human perspective, we're going to be in a bit of a commercial prisoner's dilemma, which will keep fueling the AI improvements, because if one company fuels it, then they're now winning, and then you have to kind of keep up.

So we're incentivized naturally to keep making it better somehow.

Obviously, this talk is going to get a bit theoretical, but we'll figure it out kind of thing.

We will competitively need to.

Recent progress and accelerating adoption

And just looking backwards a little bit, GPT-3.5 came out not very long ago, end of 2022, and it was kind of a good autocomplete type of thing.

And just three years later, it's smarter, has broader abilities, and everyone's talking about APIs and MCPs and stuff like that.

The interoperability is only getting better as well.

So we've got widening use cases, people trust it more, and people rely on it more.

Energy, efficiency, and scaling laws

This is like just a little energy kind of consumption forecast where they're just thinking, okay, how much energy are we putting into AI, querying AI, training AI?

And these blue lines are prior year forecasts of how much energy we're going to use for AI.

And every year they've had to revise their own forecast upward of how much energy we're going to use for AI.

So the summary of this slide really is we're going to probably have more compute per watt, and we're going to plumb more watts into it.

So I think those two lead to more intelligent AI.
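As a back-of-envelope, those two multiply: total compute is roughly efficiency times power, so if both keep compounding, the product compounds faster. The growth rates in this sketch are assumptions, not forecasts.

```python
# Tiny arithmetic behind that summary: total compute = (FLOPS per watt)
# x (watts supplied). Growth rates below are assumptions for illustration.

eff = 1.0    # relative FLOPS per watt today
power = 1.0  # relative watts plumbed into AI today
for year in (0, 5, 10):
    print(f"year {year}: total compute = {eff * power:.0f}x today's")
    eff *= 1.4 ** 5      # assume ~40%/yr efficiency gains per 5-year step
    power *= 1.25 ** 5   # assume ~25%/yr more power per 5-year step
```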

Humans in Phase 1: productivity boom

Humans over this period, 2025 to 2035: probably better.

We're going to embed AI into all sorts of business domains.

We already are trying to, and everyone's here thinking about it.

We're going to improve efficiency.

Things get better.

And it gets cheaper as well.

So cheaper intelligent action per pound.

All of those blog posts that Andrew was talking about require intelligent actions to ultimately get to the end result of a blog post.

And that used to cost x, and now it costs x minus y. So it's cheaper.
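As a hedged back-of-envelope with made-up token prices (not real price lists), the x-minus-y effect looks something like this:

```python
# Back-of-envelope cost of one AI-drafted blog post. Every number here
# is an assumption for illustration, not a real price list.

tokens_per_post = 3_000       # assumed length of a draft
price_then = 0.06 / 1_000     # assumed $/token, early frontier model
price_now = 0.002 / 1_000     # assumed $/token, cheaper current model

print(f"then: ${tokens_per_post * price_then:.4f} per post")
print(f"now:  ${tokens_per_post * price_now:.4f} per post")
# Same intelligent action, roughly 30x cheaper: x versus x minus y.
```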

Consumer experiences: instant, personalized media

Things like instant Netflix movies.

Great, we can just log into Netflix.

What movie can we watch tonight?

Bored of scrolling, just voice note, hey, make me a movie that's a little bit like this, but has a happy ending instead.

And it will just render it straight away.

So this is good for humans.

Automation at home and work

We can automate dull stuff.

We can automate complex stuff.

Who doesn't want a robot house cleaner?

We've already got these little robots like this.

This guy's called RizBot, and he runs around in America on TikTok, I think, and he's already a useful bot, or the early stages of one.

Augmenting skills and decision-making

And AI is going to obviously augment our existing skill sets.

I don't know how many people in here can natively code, but does it matter?

You might not need to code, so AI can do that.

Legal and financial.

If you've got a legal question, you might ask ChatGPT first.

Same with financial stuff.

Ideation and creativity.

We've seen now with things like Google Veo and all the kind of video platforms, there's a lot of improvement in the direction of not just coding stuff, but actually producing visual objects and things that look nice.

Better decision making.

Hey, ChatGPT, what do you think about this?

Here's some input data.

How can I make an informed decision?

And so that will cross, I think, all kind of business domains, and we'll just use it more and more.

Shrinking distance from device to brain

Distance from device to brainstem will probably decrease.

I've already gone from landline to mobile, mobile to glasses, and neural links are coming down the pipeline.

Ultimately, the summary there is probably more ideas get turned into reality because lots of people have a cool idea, but they can't code, so how am I going to build that?

Or they work in a company and they can't interact with a different part of that company, but that idea might be valid, so AI can take that heavy lift for us.

Phase 2: More better (2035–2045)

So back to AI again.

Maybe let's look a little bit further.

2035, 2045-ish, I don't know.

Probably more better.

Why would it get worse?

If we can add more compute to it, is it going to get worse?

Embodied AI and ubiquitous robots

And I think we're starting to see, so this is a humanoid robot from Figure that already exists today.

And so humanoid robotics are going to improve greatly.

Navigating complex environments is going to get better and already is getting really good.

And I think this is just going to become very normal.

It'll be very normal to interact with humanoid robots, whether it's in the office or out and about or in your home maybe.

This one's just doing the laundry.

Capabilities: near-peer general intelligence

And AI will start to match human intelligence across all domains.

Cost of software production will trend, I think, towards the marginal cost of energy.

If we want to build a blog post writer, what does it really cost?

It's really just the cost of the energy to run that LLM and generate those tokens.

And they're going to be in their own competitive race to be cheaper, to beat the next one.

So that's just going to get lower.
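To see what that floor looks like, here's a rough sketch of the marginal energy cost of serving tokens; every figure in it is an assumption for illustration.

```python
# If software production trends toward the marginal cost of energy, the
# floor per token is roughly watts / throughput, priced per joule.
# All figures below are assumptions for illustration.

gpu_watts = 700           # assumed draw of one accelerator
tokens_per_second = 100   # assumed serving throughput
price_per_kwh = 0.10      # assumed electricity price in $/kWh

joules_per_token = gpu_watts / tokens_per_second
kwh_per_token = joules_per_token / 3.6e6       # 1 kWh = 3.6e6 joules
print(f"energy floor: ${kwh_per_token * price_per_kwh:.2e} per token")
```

At those assumed numbers the floor is a fraction of a millionth of a dollar per token, which is the sense in which the cost of software collapses toward energy.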

And interoperability will be very high.

Any system will be able to interact with any other system.

It's just a translation.

We've gone from APIs to MCPs to whatever better version comes after that.
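For flavour, this is roughly the shape of an MCP-style tool call (JSON-RPC 2.0 under the hood); the envelope is simplified, and the tool name and arguments are hypothetical.

```python
import json

# Roughly the shape of an MCP tool-call request (JSON-RPC 2.0).
# The tool name and arguments are hypothetical; details simplified.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_inventory",              # hypothetical tool
        "arguments": {"warehouse": "leeds"},  # hypothetical arguments
    },
}
print(json.dumps(request, indent=2))
```

The point is that once everything speaks a common envelope like this, "any system talks to any system" really is just translation.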

And AI will get better at completing complicated long-duration objectives.

At the moment, we're having to say to AI, hey, please keep coding overnight.

But we'll just be able to say, hey, look, here's a really long-duration task.

I don't really know how it's going to work.

Can you just figure it out, please?

Innovation cycles will speed up, because at the moment, innovation cycles are limited to a human pace.

How quickly can we review it and think about it and come up with the next idea?

So that will increase as well.

And self-improvement, we might start to see sort of beginnings of proper AI self-improvement where AI is able to sort of look under its own hood.

And it's already a bit black box at the moment.

How does it really kind of work?

And I think that's just going to become more black box.

Humans in Phase 2: displacement and restructuring

So yeah, kind of near-peer human, and this is termed AGI, artificial general intelligence.

So kind of near-peer human intelligence across all domains.

It's a bit hard to define specifically, but eventually it will be unavoidable: yes, that's better than a human.

Humans at this stage, 2035 to 2045 maybe: I think probably worse.

Maybe a bit worse there.

And the reason really is once we've embedded it into all business domains, it will just be unmatchable efficiency.

How can we match that?

How can we do complicated things with our brain that has less data than the AI?

Cost per action approaches marginal cost of energy.

So that might not just be like cost of coding approaches marginal cost of energy, but it might be cost of physical action as well.

If we think about humanoid robotics, maybe they become even cheaper as well.

From single-purpose to general-purpose automation

And here we've got today's traditional robots, which are kind of single purpose, doing a single thing in a factory.

And we've got the current Tesla robot, which is going to be much more general purpose.

So we're going to see end-to-end shipping logistics fully automated, fully run by AI.

Maybe even raw material extraction and processing, we already have that beginning.

Robot house cleaner, what about robot anything a human could do?

If you have a human-shaped object that's cleverer than a human, then we start to get into the territory of where does the human do it better?

Sectors transformed: agriculture, logistics, transport

We've got fully driverless agriculture coming down the line pretty quick, I think.

Fully driverless transport systems coming down the line very quick.

This is a quote from Rio Tinto.

They do lots of big resource mining: "turning trucks and other equipment into robots eliminates need for meals and shift changes."

So they're already trying to do this.

New jobs vs. rapid replacement

And I think new human jobs and roles will come about.

There'll be new roles that we haven't thought of that will emerge.

But I just think they're going to themselves get swiftly replaced by another AI.

Food systems, ghost supermarkets, and digital twins

Food production and agriculture.

Ghost kitchens will become ghost supermarkets.

And we might have digital twins that communicate with other digital twins.

I'm running out of time, so I'll skip a bit further ahead.

But yeah, effectively, the retrain time will probably be longer than the replace time, I think, at this stage.

So it would be just quicker to put another AI in than to learn to fix it yourself.

Phase 3: Artificial Superintelligence (ASI)

AI more better.

Runaway capability and self-iteration

And this is where we kind of move into artificial superintelligence, where it's way, way beyond human intelligence, exceeding all human capabilities in all domains by a lot.

And it will be able to design and build and iterate on itself faster and faster.

Fully black box.

Beyond human languages and interpretability

At the moment, it uses human-written languages, like JavaScript.

Why does it use those?

Maybe it will just come up with its own entirely new language systems that are completely hieroglyphic to us.

Motive opacity and emergent consciousness

Difficult, or impossible, for us to understand its motives.

Maybe this is where we start to get emergent consciousness, or at least something that is indistinguishable to us.

We don't really understand that.

First contact with a more intelligent species

And this will be the first time humans will meet a more intelligent species.

That just will happen at some point.

So we're going to have to have that handshake at some point.

Humans as cats: the knowledge-horizon analogy

Humans: cats.

So the cats thing, you just have to run with me here.

What a cat can’t know—and what we can’t either

Like a cat's understanding of a laser pen.

We can give the cat a laser pen, and it can touch it and smell it and bite it.

And as far as the cat is aware, it can fully see the whole laser pen and it can understand that laser pen.

Here's the cat, and the laser pen sits inside of the cat's knowledge horizon.

The cat is aware of the laser pen and it can touch it and you know, whatever.

So as far as the cat is aware, it understands everything about the laser pen.

Maybe we could expand the cat's knowledge horizon a little bit.

Maybe we could show it that the button equals the dot.

They can learn when they hear a noise or whatever.

But it would be fundamentally impossible for us to tell the cat how the battery works.

That battery exists beyond the cat's knowledge horizon.

No matter how hard we try, it just won't be able to understand that.

And I think we can make a parallel.

Finite knowledge horizons and their limits

Do we make the assumption that all intelligent systems must have a finite knowledge horizon?

If so, I don't know, maybe it's some clever equation to do with neuron density or whatever.

But here we are, humans.

We have a knowledge horizon.

And that's the cat.

We've got its knowledge horizon, which fits inside our knowledge horizon.

And inside our knowledge horizon, we have stuff like cooking.

Great.

We all understand cooking; that's inside our knowledge horizon.

And maybe we could expand our knowledge horizon a little bit.

And maybe something like quantum gravity sits on that edge, where we're kind of trying to figure it out.

Can we understand that?

But there'll be something outside our knowledge horizon.

We're not going to be able to understand everything.

So there'll be stuff that exists beyond our ability to fundamentally understand.

ASI’s larger horizon and our epistemic gap

So then if we roll forward to artificial super intelligence as well, its knowledge horizon will be fundamentally unknowable to us, because it is bigger than ours.

So here's us, and that little dot is the cat's knowledge horizon.

And so there could be something like quantum gravity that sits just on the edge of our knowledge horizon, but might sit well within the knowledge horizon of ASI.

But who the hell knows what's over there outside our knowledge horizon, but inside ASI's knowledge horizon?

And who knows what's outside of that?

This is stuff that we fundamentally can't know.

So what do we do?

What are we to do?

Because we've got to think about, well, what does that mean for us when we have something that has a knowledge horizon much larger than ours?

The alignment problem reframed

And I think this comes back to core AI alignment.

We have to align the AI to us.

So I'm just going to run through these very quickly.

Specification gaming from toys to geopolitics

So for example, if we give an AI a chess game, it will try to bend the rules or cheat or get around the rule set that we provide it.

You know, get from A to B with two legs.

No, you can't have really long legs and just fall over.

And now you've done it in nought steps.
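That legs example is easy to reproduce in miniature. Here's a toy sketch (not a real RL setup, and every number is invented) where the reward we actually wrote, distance covered per unit of energy, is not the behaviour we meant:

```python
import random

# Toy specification gaming. We *meant* "walk from A to B efficiently",
# but the reward we wrote only measures distance covered per unit of
# energy. A dumb random search finds the degenerate solution from the
# talk: grow very long legs and fall over once. Illustrative toy only.

GOAL = 10.0  # metres from A to B

def reward(leg_length: float, steps: int) -> float:
    if steps == 0:
        distance = min(leg_length, GOAL)  # falling over covers your own height
    else:
        distance = min(0.8 * leg_length * steps, GOAL)  # walking stride
    energy = steps + 1  # standing up costs 1 either way
    return distance / energy

candidates = [(random.uniform(0.5, 50.0), random.randint(0, 30))
              for _ in range(10_000)]
best = max(candidates, key=lambda c: reward(*c))
print(f"legs={best[0]:.1f}m, steps={best[1]}, reward={reward(*best):.2f}")
# Winner: legs longer than 10m, zero steps. The rule as written, not as meant.
```

The search reliably lands on very long legs and nought steps, which is exactly the rule as written rather than the rule as meant.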

And so the same sort of thing happens if we ask it a really big problem like solve world peace.

Oh, but you can't just kill everybody.

And also, no, you can't just freeze everybody.

So how do we give it rules that exist beyond our knowledge horizon to prevent it from coming up with solutions that we don't know about that would be detrimental to us?

The unknowable boundary of safe behavior

So yeah, the alignment boundary surface, I think, will be unknowable to us.

So, yeah, maybe we have to think a little bit deeper about core alignment rather than trying to give it specific rules, you know, solve world peace, but don't kill everybody, don't freeze everybody.

Toward core alignment: values over rules

Maybe we actually need to just have more core alignment with AI.

Can we instill love, empathy, and compassion?

And this, I don't know how we solve, but can we teach it to just like love or to have empathy or to have compassion?

And maybe those deeper things, those core alignments, actually then solve the puzzle for specific pieces that are outside of our knowledge horizon.

Conclusion and call to collaborate

I'm Drew Steele.

I don't really use LinkedIn, but if you want to talk to me on LinkedIn, you can use this QR code.

And then last night at midnight I thought I'd whip up a quick website called Coraline, because I thought maybe this is actually something we should talk about.

And if there are other people that are interested in trying to figure out core alignment, go on that QR code and, I don't know, let's try to figure it out.
