So we're going to talk about AI.
We're going to get everybody on the same page in terms of what AI is and isn't.
And we will talk about the macro impact that these technologies are going to have on society.
I want to give you a different way of thinking about AI, hopefully that's more useful.
Hello, Peter.
So a little bit about me.
I wear two hats.
The past 25 years I've been involved in AI research.
My undergraduate degree, my PhD, and my postdoc were all in AI.
I ran a master's programme at UCL for about four years.
I had hundreds of students going out there applying these technologies.
And I am currently an entrepreneur in residence for UCL, so I help them spin out deep tech
companies.
I started a company 15 years ago that's been building solutions for big and small companies.
I sold it to WPP, the biggest media and marketing company in the world, two years ago, where
I'm the chief AI officer, so I'm responsible for coordinating AI across about 120,000
people, which is good fun.
You found it?
I found it.
Excellent.
All right.
So, okay, before we get into definitions of AI, what I usually do is take people through
a technology stack.
And that technology stack is sometimes detailed, sometimes not, but really there are three
elements and they're going to come up on the screen in a minute.
There's data, insights, and action.
And I know there's a real impulse at the moment for organisations to be building data
lakes and then putting Tableau or an analytics layer on top and thinking, we have AI,
so forgive me if you're doing that.
But I would argue that giving humans access to better insights doesn't typically lead
to better decisions.
For the past 10 years, I think people have been hiring data scientists, my students,
they've been building data lakes, hoping that better insights lead to better decisions,
and they typically don't.
Humans are bounded by our decision-making capability, and I'm a big advocate of trying
to solve the decision-making problem at the top, which, if you're old enough, used
to be called operations research: it's discrete mathematics, it's optimisation.
All right, that's me, it's boring.
All right, these are the three things.
Decision-making sits at the top. It's a completely different field in computer science: it's
optimisation, it's discrete mathematics, and there are probably fewer than about 3,000
people around the globe who are really good at solving these types of optimisation and
decision problems.
If I build a system that I give data to and it makes a decision, and tomorrow I give it
the same data and it makes the same decision, what I have is automation.
And automation is amazing because we can get computers to do things better than human beings.
Does anybody know the definition of stupidity?
Doing the same thing over and over again. I would argue that, by definition, automation is stupid:
it's not intelligent, it's not AI.
I know that everybody that currently touches this technology stack is calling themselves
an AI company, it's fine, we get more funding, we get more clients.
But there are unfortunately many definitions of AI.
The most popular definition, I think, is the weakest, which is getting computers to do
things that humans can do.
So over the past decade, we've managed to get machines to do things that traditionally
only human beings could do: recognise objects in images, understand natural language.
When we get machines to behave like humans, because humans are the most intelligent thing
we know in the universe, we assume that that's intelligence.
Now, I would argue that humans are not intelligent, that's a different conversation.
There's actually a much better definition of AI that comes from the definition of intelligence.
Oh no, it is there, it is there; it's coming up soon.
So I just want to give you an understanding about why decision making is hard.
So imagine these are five staff members.
Okay, and what we're going to do is we want to allocate these five staff members to five
jobs.
Ignore those rules, they're not important.
How many ways can I allocate five people to five jobs?
How many ways can I allocate five people to five jobs?
Yeah, exactly, five times four times three times two times one.
So there's 120 possible ways to allocate five people to five jobs.
Let's make the problem more complicated.
I've got 15 people, how many ways can I allocate 15 people to 15 jobs?
And don't say 15 factorial, please, it's cheating.
How many ways can I allocate 15 people to 15 jobs?
So, there are over a trillion possible ways.
If we expect a human to solve this problem, we're wasting our time.
One rule to take away with you today: anything with more than seven things to consider, don't use a human for.
Okay?
And industry doesn't have problems of this size; it's got problems of this size.
So here are 500 staff members.
How many ways can I allocate them?
It's a big number.
It's a number that goes on for over a thousand digits.
Just to put this number into context, this is how many atoms there are in the observable universe.
Once I reach about 60 things that I have to consider, in this case 60 people allocated to
60 jobs, there are more solutions than there are atoms in the universe.
People can solve problems of up to about seven things.
You can hire a good computer scientist that can solve problems up to 40, 50.
To solve problems at this scale, you need to have deep, deep specialized expertise in
optimization.
Okay?
There are lots of these problems that exist in your companies.
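Just to sanity-check those numbers, here's a quick computation you can run yourself; the figure of roughly 10^80 atoms in the observable universe is the standard estimate.

```python
import math

# How many ways can I allocate n people to n jobs? n factorial.
print(math.factorial(5))              # 120 ways
print(math.factorial(15))             # 1307674368000: over a trillion
print(math.factorial(60) > 10**80)    # True: more ways than atoms in the universe
print(len(str(math.factorial(500))))  # 1135: a number over a thousand digits long
```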
Okay.
So, there's a better definition of AI, which comes from the definition of intelligence,
which is goal-directed adaptive behavior.
It's a beautiful definition.
Goal-directed in the sense of trying to route our vehicles to maximize deliveries or allocate
our staff to maximize utilization or spend our marketing money to maximize return.
You have to have a goal.
It's usually a complex objective function.
Behavior is how quickly I can answer that question.
Now we've just discovered that if you use the wrong algorithm or if you use people,
you're not going to solve these problems well.
If you use the wrong algorithm, it will literally take longer than the age of the universe.
If you choose the right algorithm, it can take milliseconds or seconds.
So, algorithms are a differentiator.
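To make that concrete, here's a small illustration (mine, not the speaker's code): the staff-to-jobs problem he describes is the classic assignment problem, and SciPy's Hungarian-style solver handles a 500-by-500 instance in well under a second despite the astronomically large search space. The random cost matrix is a stand-in for real suitability scores.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# 500 people x 500 jobs: a search space of roughly 10^1134 allocations, yet
# the assignment problem has a polynomial-time algorithm.
rng = np.random.default_rng(0)
cost = rng.random((500, 500))             # cost[i, j]: cost of person i doing job j

rows, cols = linear_sum_assignment(cost)  # optimal allocation, minimising total cost
print("optimal total cost:", cost[rows, cols].sum())
```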
But the key word in this definition is the word adaptive.
What you want to do is build systems that make decisions, learn about whether those
decisions are good or bad, adapt themselves so next time they make better decisions.
If I'm being totally honest with you all, you don't really see adaptive systems in production.
Most systems in production are automation.
They don't typically learn.
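For a feel of what "adaptive" means here, a minimal sketch (mine, with invented success rates): an epsilon-greedy loop that makes a decision, observes whether it was good, and updates itself so the next decision is better.

```python
import random

# Three possible actions with unknown success rates (invented for illustration).
true_rates = {"A": 0.3, "B": 0.5, "C": 0.7}
counts = {a: 0 for a in true_rates}
values = {a: 0.0 for a in true_rates}   # running estimate of each action's value

for t in range(10_000):
    if random.random() < 0.1:                       # occasionally explore
        action = random.choice(list(true_rates))
    else:                                           # otherwise exploit best estimate
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < true_rates[action] else 0.0
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # update estimate

print(values)  # estimates converge towards the true rates, and C dominates
```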
Do you remember a few years ago, Microsoft launched a bot on Twitter?
Lots of teenagers decided to tease that Twitter bot and it became sexist and racist very quickly.
And that's what can happen when you put adaptive systems in production: they adapt themselves in ways you didn't intend.
So, instead of looking at AI through definitions or technologies,
I look at AI through a different lens.
Before I give you that different view, I want to give you a history lesson.
So, this is AI in the 60s and 70s.
It's Socrates.
Socrates is famous for inspiring the Socratic method.
If I say to you, Socrates is a man, and all men are mortal,
I can infer that Socrates is mortal.
And AI in the 60s and 70s was writing down lots of things that we know about the world
to try to infer new knowledge.
And it didn't really scale.
It didn't really work.
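As a toy illustration of that era's approach (my sketch, not any real system from the period): knowledge written down as facts and rules, with new knowledge inferred by forward chaining.

```python
# Facts and a rule: Socrates is a man; all men are mortal.
facts = {("man", "socrates")}
rules = [(("man", "?x"), ("mortal", "?x"))]    # premise -> conclusion

changed = True
while changed:                                 # forward-chain until nothing new
    changed = False
    for (p_pred, _), (c_pred, _) in rules:
        for (f_pred, f_arg) in list(facts):
            if f_pred == p_pred and (c_pred, f_arg) not in facts:
                facts.add((c_pred, f_arg))     # infer new knowledge
                changed = True

print(("mortal", "socrates") in facts)         # True: Socrates is mortal
```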
In the 80s and 90s, a new type of AI came along that's modeled on how brains work.
This is the brain of a bumblebee.
My PhD 20 years ago was trying to model the brain of a bumblebee in a machine.
Bumblebee brains have a million brain cells.
Their brains can fit on the end of a needle.
Bumblebees do amazing things.
They navigate 3D worlds and they recognize objects.
They talk to each other.
They don't handle windows very well.
But ultimately, they're very smart creatures.
And the question was, can you model a million neurons in a machine 20 years ago?
You couldn't.
Now we can model billions of neurons.
And we currently call these brains large language models.
These large language models are really good at knowing things about the world.
They're really good at telling you what they know about the world through text,
through imagery.
They are not good at making predictions.
They are definitely not good at making complex decisions.
So these brains know things; use machine learning to extract insights; use optimisation to
make decisions; and then build systems that can safely adapt themselves in production.
That's really the true paradigm of AI as far as I'm concerned.
So as I said, if I held what we do in industry to these concepts,
one could argue, controversially, that nobody's doing AI.
And that's not very helpful, because over the past 10 or 20 years,
new advances in algorithms, plus compute, plus data, have allowed us to do some amazing things.
So I look at AI not through definitions, but through applications.
And I believe there are six categories of applications of AI
that can be applied to pretty much any use case across a business.
First category is task automation.
Now this gets a bad reputation in the AI community.
It's using very simple algorithms, if-then-else statements, macros, RPA,
to replace repetitive mundane tasks that human beings are doing
with essentially something very simple.
The fact is, if you apply task automation in the right way,
you drive a massive amount of value in your business.
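As a minimal sketch of what that looks like (the categories and keywords are hypothetical):

```python
# Task automation in its simplest form: if-then-else rules replacing a
# repetitive human triage step.
def route_ticket(subject: str) -> str:
    s = subject.lower()
    if "invoice" in s or "refund" in s:
        return "billing"
    elif "password" in s or "login" in s:
        return "it-support"
    elif "cancel" in s:
        return "retention"
    else:
        return "general"

print(route_ticket("Can't login to my account"))  # it-support
```

Give it the same subject line tomorrow and it returns the same queue: automation, not intelligence, but still valuable.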
Content generation is something we're all excited about.
Large language models allow everybody to create generic content.
Imagery, text, soon video, soon sound.
The battleground is not creating generic content.
The battleground is creating brand-specific, production-grade, differentiated content.
That's the battleground.
That last mile is extremely hard.
One of the things we're doing in WPP is we create what we call brand brains.
So you can take a large language model.
You can train that large language model on the identity of a brand,
the tone of voice, the style guide, all of the assets.
And when you're engaging with those models,
it's now producing essentially brand-aligned,
almost production-grade content, which is very exciting.
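WPP's actual brand brains presumably involve fine-tuning and retrieval over brand assets; as the lightest-weight version of the same idea, here's a hedged sketch that conditions a general model through a system prompt. Every field and the call_llm stub are hypothetical.

```python
# Hypothetical brand guidelines distilled into a conditioning prompt.
brand = {
    "tone_of_voice": "warm, plain-spoken, never sarcastic",
    "style_rules": ["short sentences", "no jargon", "British spelling"],
    "banned_phrases": ["game-changing", "synergy"],
}

system_prompt = (
    "You write copy for one brand only.\n"
    f"Tone of voice: {brand['tone_of_voice']}\n"
    f"Style rules: {'; '.join(brand['style_rules'])}\n"
    f"Never use: {', '.join(brand['banned_phrases'])}"
)

def call_llm(system: str, user: str) -> str:
    raise NotImplementedError  # stand-in for any chat-completion API

# call_llm(system_prompt, "Draft a headline for the spring campaign")
```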
What's exciting for me is not the ability to create content, it's this.
So a few years ago, I'd be talking about how we can use AI to represent humans.
So essentially, we replace human beings by things that look and behave
exactly like a human.
But what's really exciting for me is that we can build large language models
that represent how people perceive content.
This is really important.
If I show you an ad or a policy or a promotional material or an experience,
historically, I've never known what goes on in your head.
You have context and nostalgia and all sorts of stuff that goes on in there.
And historically, what we'd have to do is ask people,
and people are not very good at reporting on what goes on in their minds and bodies.
Now, I think for the first time ever, we can build brains,
what I call audience brains, to recreate how people perceive content.
And we can use that to then create better content
or to be able to predict more accurately, activation, clicks, likes, whatever.
One of the things that we're doing at WPP is not just building brains on target audiences,
but we build brains that try to represent every corner of society.
So cultures, minority groups, political parties, newspapers, food compliance claims,
so that when we're creating content, we want to show it to all of what I call
a council of responsible AIs, it's a terrible name,
to see if we're offending anybody, if we're causing any harm.
And actually, I think we can use AI to help us hold AI to account.
Insight extraction is what we called AI before generative AI.
It's machine learning, it's data science.
You already know my opinion, extracting insights from data,
giving them to human beings doesn't typically lead to better decisions.
What's really powerful about machine learning is not its ability to predict the world,
it's its ability to explain the world.
And so, in the ad world now, if I show you an ad with a black cat,
I can predict the clicks and the likes that I'm going to get from the ad,
but what machine learning can tell me is that Daniel,
if you change that from a black cat to a ginger cat,
you're going to get more clicks and likes because that audience
watched Garfield when they were young, right?
And no human being would have really been able to identify that complex correlation,
but that's what machine learning is really good at, explaining the world.
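Here's a hedged sketch of that kind of explanation (the data and feature names are invented): fit a model, then ask which feature actually drives the prediction using scikit-learn's permutation importance.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for ad features: column 0 ("cat_is_ginger") drives clicks,
# column 1 ("background_hue") is noise.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2000, 2)).astype(float)
y = (X[:, 0] + rng.normal(0, 0.3, size=2000) > 0.5).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["cat_is_ginger", "background_hue"], result.importances_mean):
    print(f"{name}: {imp:.3f}")   # the ginger-cat feature dominates
```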
Decision making, as I've already explained, is a different field in computer science,
and I would argue that if you apply these types of techniques,
optimization algorithms, alongside task automation,
you actually get one of the biggest bangs for your buck.
And I guess in my world, the marketing world, once I predict activation of my content,
I need to push that content down lots of different marketing channels
subject to budgets and timing to maximize the return on my investment,
and that's one of these large-scale optimization problems.
They appear in many disguises across your organization.
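To show what one of these problems looks like in miniature (my toy numbers, not a real media model): allocate a marketing budget across channels to maximise predicted return, written as a linear program.

```python
from scipy.optimize import linprog

returns = [1.8, 1.4, 1.1]        # predicted return per pound: search, social, display
budget = 100_000
caps = [(0, 60_000), (0, 50_000), (0, 40_000)]   # per-channel spend limits

# linprog minimises, so negate the returns to maximise them.
res = linprog(c=[-r for r in returns],
              A_ub=[[1, 1, 1]], b_ub=[budget],
              bounds=caps, method="highs")

print(res.x)     # optimal spend per channel: [60000, 40000, 0]
print(-res.fun)  # maximised predicted return: 164000
```

Real versions add diminishing returns, timing, and thousands of line items, which is where the deep optimisation expertise comes in.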
The final category is human augmentation.
So a few years ago, I'd be talking about how we can use exoskeletons and cybernetics
to make ourselves faster, better, stronger.
One of the things that we're doing with one of the biggest brands in the world,
I can't tell you who, is for each one of their employees,
we build a large language model,
and we train that large language model on their digital footprint,
their email, their calendar, their feedback,
and we ask that digital twin, that digital representation of the employee,
if I put you on this project, will you work well?
If I put you on this team, will you thrive?
And that might sound creepy, but I promise you it's being embraced by those employees
because they feel like their digital representation represents them better
than five numbers in a database that's meant to determine what their skills are.
So these are the six categories.
I'd love for you to challenge them,
but they also allow you to navigate this complex world of AI safety
and ethics and governance and all that kind of stuff.
The types of questions you need to ask yourself when implementing task automation
are very different from those when building digital twins of employees,
where you can identify secret lovers
and people who are going to leave the company before they themselves know they're going to leave.
Anyway, so we're now entering the world of AI risk.
There's lots of hoo-ha nonsense about AI risk.
As far as I'm concerned, there are kind of broadly three categories of risk.
The first category is what I call micro-risks:
implementing these technologies in production safely in your organizations.
And I believe that there are really three core questions you need to ask yourself.
The first question is, is the intent good?
And it's the intent that needs to get scrutinized from the ethics perspective.
People confuse ethics and AI.
I would argue there's no such thing as AI ethics,
so forgive me if you rebranded yourselves as AI ethicists,
but ethics is a study of right and wrong,
and it's intent that needs to get scrutinized.
AIs don't have intent, human beings have intent.
Once you've deployed something in production, there are two questions to ask,
and these are two safety problems.
One is: if my algorithms are opaque,
but they are making decisions that have a material impact on people's lives,
then you need to make sure that they are not opaque.
So you need to make sure that they are explainable,
and therefore they are transparent, auditable, and governable.
So I, on principle, try to make all of our algorithms explainable,
not just to tick boxes and adhere to regulation,
but because explainable algorithms actually help you understand the world
in awesome ways.
The third question is: if I deploy an AI in production
and if it goes very, very well, what harm can it cause?
AI can significantly move the needle
in solving the problems across an organization,
and sometimes you move the needle so much that it causes harm elsewhere,
and that's an engineering problem.
Have you thought about the failure points of your systems?
Intent is ethics; deploying these solutions in production is safety.
So that's the first category, micro risks.
The second category is malicious risks:
mitigating the risk of bad actors creating pathogens,
misinformation, and all that kind of stuff.
And then the third category of risks I'll talk about in a moment,
which are called macro risks.
So we're using AI to essentially solve problems across our supply chain.
We're going on our digital transformation,
making our organizations more efficient, more effective,
and what ultimately organizations are doing,
either consciously or unconsciously,
is trying to create a digital simulation of themselves.
So if I run a marketing campaign right now for a customer,
can I project across that supply chain?
Will my suppliers default on their supply?
Will I have enough space in my warehouses?
Do I have enough delivery drivers?
Do I have enough people in the stores
to fulfill that promise to the customer?
And you can't really at the moment ask those questions
and project them through your organization,
but I think what we're moving towards is the ability to do that,
so that you can essentially adapt to a changing world.
I think there are three digital twins.
One is of your business model,
the flow of goods and services across your supply chain.
The second is of your workforce,
that liquid layer of resource that you can allocate on top.
And the third is of your back office processes, like hiring and firing,
onboarding, and expenses.
Lots of processes are efficient, but they are not effective.
And AI is enabling us to rethink how we do these back office processes.
We're going to get a bit philosophical now.
So I don't know if you know who said this quote at the top,
the nation that leads in AI will be the ruler of the world.
Do you know who said that?
Putin, Vladimir Putin recently.
So I think we need to acknowledge that these technologies
are not just going to have the most profound impact on our business,
they're going to have the most profound impact on society.
By the way, Vladimir Putin was actually advocating
that AI should be made available to everybody.
Actually, that's what he was saying.
And I'm sure you've all heard the word singularity.
You're all LinkedIn AI philosophers.
And so singularity comes from physics.
It's the point in time that we can't see beyond.
And it was adopted by the AI community
to refer to the technological singularity,
which is where we build a brain a million times smarter than us.
I think there are actually multiple points in time
that we can't see beyond.
And I've tried to capture them using the PESTLE framework,
which you might have come across.
So just very quickly, the political singularity is a post-truth world,
a world where we no longer know what is true.
AI misinformation bots
and deep fakes have not only challenged our political foundations,
and may continue to challenge them,
but they are now starting to challenge the fabric of our reality.
We can now create deep fakes, or clones, of our children
and of our work colleagues, which I know are being used to attack people.
So I actually know people who are now putting in safe words
to make sure they're not talking to AIs.
The environmental singularity you're all familiar with.
So AI is increasing consumption.
And as far as I'm concerned,
consumption gives people access to goods and services
that typically enrich their lives.
It is putting pressure on our planetary boundaries,
but I believe if we apply these technologies in the right way,
and it's a big if,
then we can significantly reduce the amount of carbon
that we use to create everything we're creating.
And the reason I say that is this:
we built Tesco's last-mile delivery solution.
We built solutions that allocate consultants to projects.
In all of the projects that we do,
we typically reduce the amount of carbon by 20-25%.
And if you can then start to co-optimize across supply chains,
make them efficient across supply chains,
you can get another 20-25% carbon reduction.
We can halve the amount of energy that we need to run
everything we're running on the planet.
The social singularity, often referred to as the Methuselarity,
is not my expertise,
but there are scientists that believe
there are people alive today that won't have to die.
AI is rapidly advancing medicine.
It's enabling us to monitor our bodies and keep them in repair.
And a bit like a car, if you stay on top of the damage,
that car will never, ever break down.
And I don't know what the world will look like
when we realize there are people amongst us
that won't have to die.
The technological singularity is when we become
the second most intelligent species on this planet.
My community, I guess, felt this wasn't going to happen
for another 30 or 40 years.
We now think it might happen in the next 10, 20 years.
And it could be the most glorious thing that happens to us
or our biggest existential threat.
My advice to people is that when it comes, look busy,
be nice to each other,
and hopefully it will bugger off
to a more interesting dimension.
That's what I hope will happen, and that it doesn't take our sun with it.
So the legal singularity is when surveillance becomes ubiquitous.
So we know that these technologies are phenomenal
at understanding people,
potentially even manipulating people
to get them to do things that they shouldn't be doing.
We need to make sure that we mitigate the ability
of a small group of organizations or governments
to use these technologies
to accumulate a mass amount of wealth and power.
The final singularity, the economic singularity, is my favorite.
It was coined by a very good friend of mine
called Calum Chace.
He's written extensively on the subject.
I highly recommend his book, Surviving AI.
This is about job losses.
So I think for the past 15 years,
we've been applying AI,
freeing people up from tasks they shouldn't be doing,
getting them to do more interesting, impactful,
purposeful things.
And I think that over the next 10 years,
we're gonna see a Cambrian explosion of innovations,
new opportunities, yes, jobs will be displaced,
yes, you'll be disrupted,
but AI is like a new energy source
that will allow humanity to grow to the next level.
I think beyond 10 years,
nobody knows what they're talking about.
I think that there is a concern
that when we have the ability to free up whole jobs,
which we might if we achieve AGI in the next decade,
then we free up lots of jobs,
our society won't be able to rebalance fast enough,
and that will create social unrest.
That's a concern.
There is an alternative view, which is controversial,
which is that we should be accelerating as fast as possible
towards this singularity.
So bear with me.
But if we can apply our smarts in the right way
and remove the friction from the creation
and dissemination of food, healthcare,
education, transport, energy,
we can bring the cost of those goods down to zero.
I'm not talking about taxing rich people
and giving it to poor people.
I'm talking about using our intelligence
to create a world of abundance.
Now, people say to me, Daniel,
what would I do if I didn't have a job?
I know lots of people who don't have jobs.
They're not sitting at home bored and depressed.
They use their time and their assets
to try and contribute to humanity.
And I'm gonna ask you a question.
What would you do if you didn't have a job?
If everything was abundant and you didn't have to do a job,
what would you do?
Yeah, which is what?
So somebody said communism.
I get that a lot.
I get that a lot.
It's not communism.
Sorry?
Coding.
Some people say they'll increase their golf handicap.
Maybe they'll travel, indulge in their hobbies.
If I keep pushing people, they say the same thing,
which is they wanna do something
that contributes positively to humanity.
And I think we all have an innate desire to want to do that.
And I think that if we apply AI in the right way,
we can unlock many people from economic constraints,
including the half of the population of the planet
that is living off a dollar a day,
and free them up to go and contribute to humanity
in a new way.
So I'm just gonna close by saying,
it's not good enough for organizations
just to have a strong, profitable business.
You need to have a purpose.
If you don't have a strong purpose,
you're not gonna attract talent like you
and you're not gonna attract customers.
The purpose of WPP is to use the power of creativity
to make a better world.
I think a better world is a world
where everybody is free to do what they want
until somebody tells me differently.
And actually, I think it's enterprise, not communism,
that will solve this problem.
If you think about the purpose of your companies,
the purpose of the companies that you consume from,
if AI can help those companies grow,
if they can genuinely help those companies
achieve their purpose,
it's the collective purpose of enterprise
that will make the world better for all of us.
So on that note, I'm gonna stop.
Okay, thank you very much.
Thank you.