The Future Of Skills In A World Full Of AI

I'm going to try and talk to you a little bit about a slightly more theoretical topic,
which is the future of skills.
It's where I spend a lot of my time.
I'm going to try and go through two things.
One, why it is important that we think about how everything happening in AI is changing
the skill landscape: what is actually valuable in us as humans,
and what can we still do as more and more of these things automate away the capabilities
we used to take for granted.
And then second, what can we do to stay ahead?
Knowing how this landscape is changing, what can we do today to keep an edge
and stay valuable in the world of work?
So, I mean, all of you are here, so you're interested at least somewhat in what is happening.
I imagine you might be somewhat aware of what has happened around the world from a workforce perspective as well.
So, IBM earlier this year announced that they were putting a pause on hiring for more than 7,500 jobs
because they thought AI could do those jobs instead.
In May, CNN reported that about 4,000 jobs were automated that month alone.
And CNN, in June, actually for a very short time did fully automated publishing.
So, generative AI was used to both find and publish the story, until their staff basically revolted
and they paused the project for a little bit.
All of that to say: this stuff is actually real, right now.
The World Economic Forum predicts that about 42% of business tasks will be fully automated by 2027.
We're not talking like a long time, we're three years and a month away.
It's not far. McKinsey goes even further; they revised their report in June this year.
They looked at the situation and said that, based on everything we know of knowledge work and knowledge jobs today,
60 to 70% of the time that people currently spend on those tasks can be entirely automated
with the technology as it stands today.
So, zero technological improvement here.
We're not talking about what might happen because the technology evolves.
We're purely talking about the technology that exists today as it spreads through society, as it spreads through business
and we start to leverage it to automate as we get Alex spending more time creating more agents to automate more tasks.
So, who am I to talk about this?
So, I'm CEO at Mindstone; we talked about that before.
I also co-founded SuperAwesome,
which became one of the biggest kids' technology companies in the world.
Actually, that's the only other time I came to Helsinki before, because we did a lot of business with gaming companies.
It was all about child safety. SuperAwesome was acquired by Epic Games, the creators of Fortnite.
I'm also a venture partner and limited partner with Emerge Education,
one of the most active early-stage education venture funds in Europe.
And I run these meetups, which now suddenly involve thousands of people all over the world.
So, as I spend all my time on this (I forgot to say that Mindstone is all about upskilling and reskilling,
so this is literally what I spend my time on every day), I try to figure out:
What can we try and look towards to give us an indication of what the effects of this automation might be?
And I think that we can learn a lot about what is about to happen by looking at what happened to software engineering in the last 15 years or so.
I'm a software engineer by background, so at least some of this comes from proper experience.
About 15 or 20 years ago, your ability to code would get you a job.
Just the fact that you could understand and write Java, C, C++, even PHP at the time, would get you a job.
Just the knowledge of the language was enough.
But today, on Node.js alone, which is one of the more popular execution environments for engineers, you've got more than 2 million different libraries.
And a library for software engineers is kind of like a Lego block.
You don't have to reinvent the wheel every time.
You can take one library, combine it with another library, pipe some data in at one end, figure out what you get out of it, put that into a third, and so on.
And so as an engineer, more and more of your job is basically gluing together different libraries to find a great solution to the problem you're trying to solve.
And your actual ability to code, even the time that you're spending writing new code has gone down dramatically.
And so the key differentiator is the ability to solve a problem.
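To make the "gluing libraries together" idea concrete, here is a toy sketch (in Python rather than Node.js, purely for illustration): almost no novel code, just data piped from one standard library into the next. The CSV fields and the summary shape are my own assumptions.

```python
import csv
import io
import json
import statistics

def summarize_scores(raw_csv: str) -> str:
    # Library 1: csv parses the raw text into rows.
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    # Library 2: statistics aggregates the numbers.
    mean = statistics.mean(float(r["score"]) for r in rows)
    # Library 3: json serializes the result for whatever system comes next.
    return json.dumps({"count": len(rows), "mean_score": mean})

print(summarize_scores("name,score\nada,90\nlin,70\n"))
# -> {"count": 2, "mean_score": 80.0}
```

The point is that the engineer wrote almost nothing new here; the value is in choosing the pieces and deciding how the data flows between them.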
Now what I'm arguing is that the same thing is happening through AI, but in a whole bunch of other industries: we're automating more and more vertical tasks, such as creating a marketing strategy, creating a curriculum, or crafting a legal argument.
All of these are vertical tasks that AI is chipping away at, little by little, and our job is going to be more about stringing those different AIs together into a solution to a problem as it evolves.
So you might be, I mean, everyone here is familiar with ChatGPT, you might be familiar with Jarvis, you might not be familiar with Harvey.
Harvey is already a full generative-AI legal service that exists today, where you can get help creating a contract, drafting a legal argument, evaluating a contract, and so on.
That exists today.
Or digitalfirst.ai, which was actually on the website two weeks ago, which was very interesting.
You can put in your website and it spits out a full-on marketing plan, purely based on the website.
It looks at everything you're doing from a positioning perspective, then creates your personas, your marketing channels, your entire execution strategy, and everything is done in three minutes.
And so I'm not talking about things that might be done in ten years, I'm talking about things that are getting automated right now.
And so, if software engineering has lived in this world already, where you have all of these libraries available, where your job is to try and string them together into a solution to a problem you're trying to solve,
what do the people that hire software engineers today look for when they're hiring?
I thought that might give us an indication as to what might happen in the rest of the industry, right?
So I was CTO at SuperAwesome,
and I'm part of one of the biggest CTO groups in Europe.
And I simply asked this question, as CTOs, when you're hiring, what are the key skills that you're looking for when you're hiring engineers?
And the top three things that came forward were the ability to collaborate, the ability to learn, and the ability to solve a problem.
Those were the top three.
Now, you will notice that the ability to code doesn't come in the top three; it was a very distant fourth on the list.
Now, if you're still not convinced and you think your ability to code as an engineer is definitely going to keep differentiating you:
Scott Guthrie from Microsoft disclosed in March this year that over 40% of newly committed code on GitHub was unmodified, AI-generated code.
Over 40% of all new code.
And that was in March this year.
I don't know what the updated stat is at this point, but I can only imagine it went one way.
It definitely didn't go back.
So then there was another study.
Pearson teamed up with Google who did a study on what were the skills that might be the most valuable in 2030.
And the same things came out, the ability to collaborate, the ability to learn, and the ability to own and solve a problem.
And so I think something is coming together here in terms of what are the things that we can focus on to try and stay relevant as all of the other things around us are getting automated.
So what can we try and what can we do?
What can we do to build these skills?
And here is another thing that comes to mind.
And if you haven't read it yet, I love it.
Now, I know that I'm a geek about learning, but if you're interested in learning, it's an absolutely great book: Range, by David Epstein.
It's basically about different environments, how we learn, and why some things are harder to learn than others.
And the core principle of the book, the thesis of the book, hovers around two different types of learning environments, a kind learning environment and a wicked learning environment.
Now, a kind learning environment is one where the feedback loop is fast, is rapid.
You make a decision, you quickly realize whether you were right or wrong, and the feedback you get is accurate enough to let you adjust what you're doing, try again, and get more feedback.
So you're quickly learning whatever happens.
And the wicked learning environment is the opposite.
So you might make a mistake and not know you made a mistake; maybe you got it right, but you're also not told that you got it right.
And even when you get to the result, you're not necessarily sure whether it was your fault or your success.
A good example of a kind environment, from a job or activity perspective, would be golf.
You hit a golf ball, it goes in the right direction, it goes in the wrong direction.
You get some idea of what you did wrong, you hit it on the left side, on the right side or whatever, and you instantly get to hit it again and try again.
You can just do it over and over again and that feedback loop is extremely fast.
On the wicked side: early-stage investing.
A perfect example of a wicked environment.
You make a decision, 10 years later there might be an outcome and you have no clue if it was your decision that was good.
Was it the market conditions that got it there? What was your actual involvement in any of the things that happened?
It's extremely hard to learn when you're doing early-stage investing, because that feedback loop is slow and extremely ambiguous.
In terms of skills, a good example of a kind environment is programming.
You write a piece of code, you execute it, it works, it doesn't work.
Now, I would say it's a kind environment today; 20 or 25 years ago it was actually more of a wicked environment, because the code just didn't run and you had no idea why.
You get more and more feedback now: when it doesn't run, a whole bunch of machinery tells you where the problem might be, so you're getting a better feedback loop.
In terms of skills in a wicked environment, take the three we just talked about:
Your ability to collaborate, your ability to problem solve and your ability to learn.
The feedback loop around these skills is actually fairly complex and very hard to quantify.
Even the definition of what great learning is, or what great collaboration is, isn't something we agree on well enough to get a great feedback loop going.
What I find interesting, though, is that AI can help wicked learning environments act and feel closer to kind learning environments.
On the one hand you get action, result, feedback, action, result, feedback, the normal loop that you go through.
The part where AI can really help us accelerate this loop is that you can use AI to extrapolate different scenarios.
Imagine a decision, figure out what might happen, get five different scenarios and basically run through the scenarios before you even make a decision.
You can contextualize different results, figure out what the second- and third-order effects might be, and it can provide some indicative benchmarking.
It can actually try and draw from everything that has happened in history to try and say, well are there similar situations that happened before?
Now, a simulated result is obviously not the same as the actual result.
But we're able to simulate results in a way that will allow us to learn from it at a much accelerated rate.
Given that our decisions are far from perfect, there is a lot to learn purely from simulations of things we might not have thought about before, and it gets us to that feedback loop faster.
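As a sketch of what that simulation step could look like in practice: a small helper that turns a decision into a scenario-extrapolation prompt for whatever chat model you use. The wording and the default of five scenarios are my own illustrative choices, not a fixed recipe.

```python
def scenario_prompt(decision: str, n_scenarios: int = 5) -> str:
    # Ask the model to pre-run the feedback loop: several plausible outcomes,
    # their second- and third-order effects, and a historical benchmark,
    # all before the real decision is made.
    return (
        f"I am considering the following decision:\n{decision}\n\n"
        f"Extrapolate {n_scenarios} distinct scenarios for how this could play out. "
        "For each one, list the likely second- and third-order effects, "
        "and name one similar situation from history I could benchmark against."
    )

prompt = scenario_prompt("Launch the new curriculum product in Q3")
# `prompt` would then be sent to a chat model such as ChatGPT; the value is in
# forcing the simulation step before you commit.
```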
And then, from a feedback perspective, AI basically removes the emotional load around the feedback that's given, which matters specifically for these types of skills: collaboration, problem solving, and learning.
It's always there, you can ask as many questions as you want, get as detailed as you want without feeling bad about the questions that you're asking.
When I'm sitting with an expert on a particular topic, after 10 questions, if I start to feel stupid, there's a limit to how many questions I'm willing to ask either because I'm wasting their time or because I feel like shit, I really should have gotten it at this point.
And so you start to back away. With AI, this is not a problem.
So you really are able to get to this feedback loop at a much more detailed level,
and you can simulate feedback in a way that we weren't able to before.
All of this to say that it's not the same, but I think that with AI we can get these environments, the wicked environments to act and feel more like the kind environments by applying these different simulations.
So an example here is the investment scenario.
So now, when I am evaluating an investment, I literally run it through ChatGPT, at least to do some brainstorming.
I just ask: okay, these are some investment criteria that are part of the deal I'm looking at at the moment;
give me some ideas of other companies in history that had a similar context or a similar business model, maybe in an entirely different industry, that would help me evaluate the deal in front of me right now.
Will that actually lead to a better decision or not? That's a question.
Will it help me learn as I'm thinking through my own models? Absolutely.
So this is something that simply wasn't possible before: a very wicked environment getting closer to a kind one.
I think with all of this coming through, we are genuinely at a once in a lifetime tipping point here.
We went from a world of total information scarcity, where it was hard to find information, even just 25 years ago, to a world of total information overload.
We now have the opposite problem from a learning perspective: it used to be about access;
now we're trying to figure out what's right and what's not, with information thrown at us from every single angle.
But somehow the institutions that we have that are built up to try and help us learn are still the exact same.
They're based on a broadcast model where you're going somewhere because that is somehow the only way to access the information.
And it's no longer really working. The pace of change, I mean we're talking two weeks that these agents came out and I'm already feeling outdated now.
What is happening? The pace of change is accelerating to a point where the time any particular skill stays relevant is becoming smaller and smaller and smaller.
So the shorter the lifespan of any particular skill, the lower the value of the particular certificate that goes with it.
By the time you finish your degree, the skills you were taught at the start are no longer relevant by the time you join the workforce,
at least when the degree is focused on the particular skills it's trying to teach you for the workforce.
Now from an employer perspective everyone is already realizing this.
Across the board, different employers have started dropping the requirement for degrees when you apply for their jobs.
There is a whole explosion around micro-credentials and they're making them smaller and smaller and smaller and smaller.
But I would argue that this is merely a plaster on a problem that needs a much more fundamental solution because literally the entire paradigm shifted.
We're no longer in an information-scarce world; we're in an information-overloaded world.
So how do you think about learning, and specifically in this case about what the industry calls signaling:
how do you showcase what you've learned in a way that allows other people to recognize what you're actually capable of doing?
And so again, I looked at how does this work in the engineering world?
Lo and behold, it's the only industry that has a slightly different way of looking at credentialing.
So GitHub and Stack Overflow are two platforms that stand out.
GitHub is kind of like a library where engineers can check in code,
other engineers can look at the code, evaluate it, potentially even build on top of it.
And Stack Overflow is a question and answer forum for engineers and great questions, great answers get rewarded.
That builds your profile.
A good GitHub or Stack Overflow profile today will get you access to more job interviews than a degree from a top-tier university.
It's a portfolio-type approach, where suddenly you're evaluated on what you're actually capable of doing,
not on a piece of paper you got from a university one way or another.
Now, this is what we're trying to take, from a Mindstone perspective, into every single other industry.
So what would a portfolio type approach from a skill perspective look like everywhere else?
And we're not the only ones; there will be many others, because I'm convinced this is the approach we're heading towards.
But the way we're looking at it is basically what if you could quantify everything you're learning from articles, blog posts, videos, tweet threads, anything,
into some kind of profile, some kind of portfolio, a universal skill profile that you can submit alongside your next job application.
In our case, we literally took the GitHub commit log as inspiration.
You can see the little graph, for those who are familiar with it, showing how much you're actually learning on a daily basis.
Are you regular or irregular? Are you learning on the weekends? Do you do this twice a year, or every day?
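A toy version of that learning commit log might look like the sketch below. This is my own illustration, not Mindstone's actual system, and the field names are assumptions: one entry per learning event (an article read, a video watched), bucketed by day to measure regularity.

```python
from collections import Counter
from datetime import date

def learning_profile(events: list) -> dict:
    # Count learning events per calendar day.
    per_day = Counter(events)
    return {
        "total_events": len(events),
        "active_days": len(per_day),
        "busiest_day": per_day.most_common(1)[0][0].isoformat() if per_day else None,
    }

sample = [date(2024, 1, 1), date(2024, 1, 1), date(2024, 1, 3)]
print(learning_profile(sample))
# -> {'total_events': 3, 'active_days': 2, 'busiest_day': '2024-01-01'}
```

An employer looking at this kind of profile sees the pattern of learning over time, the same way a GitHub contribution graph shows the pattern of coding.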
And from an employer perspective, the same thing. How can you actually set learning objectives, but then measure the learning wherever it happens,
rather than sending someone on a course that is probably already three months outdated by the time they started it?
So, again, this is an extremely timely problem: 42% of business tasks will be automated by 2027,
and 60 to 70% of the tasks executed today are automatable with technology as it stands right now.
I think this is a live or die type adaptation from a workforce skills perspective that we are going to have to live through in the next few years.
And I hope that I was able to show you something on this, and I will stop with that. Thank you very much.
And I will take questions, but I will not let you suffer any more.
Actually, I've got one more slide to go through; I almost forgot. It's crazy, I always do this.
So, for once I actually talked a little bit about Mindstone, just because it happens to be in the presentation.
We have a QR code here. If you scan it, then one, these talks get uploaded to the platform,
and you'll actually get evidence points for all the talks, for everything you learned at the event here today.
The system analyzes which skills were being talked about and what you learned, and it gets credited to your skill profile.
Two, by doing so, you will also join the global AI Mindspace we have.
Every talk from every event we do around the world is always uploaded to this Mindspace, so you will get access to those as they get published.
You will also be made aware whenever we have new events, of which we are now hopefully going to be able to do monthly here in Helsinki.
And we have one more tomorrow, by the way. Crazy.
Tomorrow morning, we have another one. If you want more of this stuff, it's different talks.
Tomorrow morning, at the Slush venue.
So, on that note, let's go grab some pizza, and thank you very much for coming out tonight.