Future of Work in the Age of AI: What to expect in the next years

Introduction: The Range of AI Use Cases

One of the things I find so fascinating, and I don't know if you feel the same, it's the variety of use cases. There is something for everyone, simple use cases, which you can implement straight away. Even helping you with your newsletters and others, even simple AI workflows.

And then you have more complex ones like this, and then you have very complex ones. So it's like kind of there's something for everyone.

one.

The Future of Work in 2026: AI Shifts from Tool to Co‑Worker

We're shifting topics now for the last presentation which I'm going to do and we are looking at kind of what the future of work brings and I share a couple of things of our research which we are working on so together with two business partners I'm working on the next book on this topic and so I'll share a few of these things which I also share in executive workshops

keynotes and any things like that. So happy to kind of keep that short. So why this matters in 2026?

1What we see is AI shifting from a tool to a co -worker and we saw some of that on the agent side. AI agents are actually entering the workflows and take on roles.

So this is really shifting slowly. In some organizations they are far away, some are a little bit more advanced. Leaders are

being asked to navigate both people and this creates a new leadership agenda and an opportunity and i think we all need to learn that you know depending on your level what you're doing you might not lead a lot of kind of people but you're going to lead ai in the future so this is how the situation will is going to look um going forward and so um so this is actually not a person here

Well, it is a person, but it's not you. So you are looking at this situation.

What Leaders Will Be Managing: Humans, AI Agents, and Physical AI

So in the future, you will have three different types or entities you are going to lead. There are humans. So how many of you are leading some responsibility, team responsibility? I think it's probably a third.

But the others actually can, of course, put yourself into the situation because you are being led by someone. So you have the people, leading the people. This is kind of how we know.

This is not going to change. And then we have these two other things on the side. One are AI agents. This is software.

This is what we saw from Sarah before as an example. So this is kind of, and they become, depending on the level of autonomy, they are acting. And this is a separate conversation to have. So this is one.

but then what we're going to have and this will explode over the coming years is the physical the embodiment of AI agents that's physical AI this wasn't possible before but when you take a robot and you add the robot an LLM then the robot becomes intelligent I'm simplifying but then the robot can see

can read can do anything and everything like you can do yeah so therefore you You put the intelligence of an LLM, define it intelligence or not, into the robot, and then the robot can do the same thing like an agent, just with hands. That's kind of what we're having.

So we have these three entities coming up, and pretty quickly in organizations. So therefore, kind of the leader is really shifting.

It's shifting from a conduct kind of more to an orchestrator. so you have to orchestrate this environment this new environment going

forward I hope that's not the pizza kind of who's calling so a couple of

Governance, Autonomy, and Risk: The Hard Questions

questions and I'll leave that have a look because this topic is super complex as you can imagine so I want to give you kind of a taste of that you know what is

actually included in the discussions yeah so you have things like autonomy how much autonomy do you want to give the agents is that full autonomy who Who wants that? Probably nobody.

This is kind of where OpenClaw is heading towards. But even that's not full autonomy.

Responsibility in an organization who is going to be responsible. Most organizations just say, well, it's the CIO, it's tech. That's totally wrong. You can't be kind of more wrong than that.

It has to be at the CEO, but even then the question is who is leading an AI agent? Is that the chief HR officer? Or is there a new chief agent officer? So it's kind of, it's an interesting, complex conversation.

Change management. Risks. Of course, there's lots of risks.

There is still technical complexity. It's not trivial setting that up.

Phishing is one of them. So one of the discussions on OpenClaw is, do you allow, when you put, actually, you give this agent an email. And then the question is do you allow receiving or sending or both? Who is for sending so that the bot can send?

Okay a few. Who is for receiving and can read kind of its own emails? And the others both or nothing?

The much bigger risk, and I hope you agree, is on the incoming email. because actually the LLMs can write very good outgoing emails you can't do a lot wrong there yeah of course some things can go wrong but in principle yeah the outgoing is not a big problem the incoming is much more problem which we saw before because you in an email if you have

this kind of phishing you can't control it because the email comes in and it comes in so you have to think about that so it's really interesting all of that that's technical complexity so you see kind of there's a number of topics here just as an idea what we are going to see

going forward so in the future you basically have a situation where you have to lead people you have to lead AI agents or agentic systems if they are bigger and the systems in between so thinking how they are connected how the new workflows look like how the new processes look like you really need to

kind of rethink that again very early stages yeah only a few companies start working and thinking around that this is more kind of impromptu at the moment you know they just make something up but it's not really strategically thinking that through so we came up end of december we sat

A Practical Model: The ORBIT Framework for Hybrid Leadership

together in preparation of the book and we came up with a framework i'd briefly share that so you can take that basically with you as an initial thought and i always look forward of course to feedback we call it the orbit framework for hybrid leadership yeah so it's what we call

leading people and leading ai agents is hybrid leadership because you have to do both and this is a complete new area of working and the orbit is the orientation yeah set purpose intent then it's response responsibilities and accountabilities super important you know otherwise that's governance then you have the bridging you have a lot of things you have to bridge then you have to

actually integrate that into the organization and you need then you need the traction and measure the performance so that's our orbit framework which we are further developing and bringing basically into into that book and if you are interested of course I can share in the future a bit more on that once it's out but this helps to go through the different steps so if you take that and And you apply that in the organization. It helps you think things through.

What are relevant. What is that made of by the graph? Who knows? Actually, it says here. Above Nova Business School where I did the presentation.

This is amazing. I mean, this is kind of a separate conversation. But Notebook LM is just kind of mind -blowing, I think. And it's really in its own category in terms of application.

So it has some disadvantages as well, but this comes out of here with all the logic and just with a little bit of input and a prompt. Quite impressive to do something like that. So that's kind of the Orbit framework.

Key Takeaways: Hybrid Leadership as a Core Capability

A couple of key takeaways. It's really a collaborator and not just a tool anymore. I hope we all get that step by step. step.

What is also interesting, the leaders of tomorrow require soft skills and AI skills, very diverse skill set, which each leader and pretty much all of us have to integrate. You know, that's the so -called dual skill set.

You know, so we have to kind of this human machine orchestration and hybrid leadership is really that roadmap going forward. So I I hope that kind of gives you a little bit of an idea.

We say hybrid leadership is not optional. It's coming. So it's clearly coming.

You have to prepare whatever your role is. If you're a leader, you have to embrace that and think that through.

If you are in teams, I would encourage you to trigger that to your management. Say, Quan, have you thought about that? How is that looking like?

How do we address that in our own organization? I think that's super, super important.

With that, I would close kind of this presentation. Any one question again? I don't get more time than the others, of course.

Q&A: Change Management and CEO Ownership

In regards to change management, like legacy companies. Yeah, great question.

Clearly ownership by the CEO. And I see that with companies where that is the case, I'm training hundreds of their executives and managers in one organization. organization in face -to -face over two days. Because the CEO says AI is super, super important.

So what I would do, very practical, AI is an item on every single meeting. Even if you only spend 30 seconds on it and say no news, which is unlikely, by the way. You agree? I hear.

But you have that on the agenda because you have to bring that into the organization. You have to breathe that step -by -step it's a long journey still most people out there are happy that they're using chat GPT better than three months ago that's normal yeah

that's totally normal across all organizations all levels and this has to change so the CEO has to own take ownership and saying this is top priority this is one of my three top priorities otherwise you're failing I I clearly you're going to fail and then through the organization and that's the

challenge through the organization you have to mandate that AI is on every single meeting agenda that's what I would do if I were a kind of CEO of a larger organization yeah because it takes time because you will see a lot of

resistance in the organization so that doesn't work it's hallucination it's It's not safe. It's like all this BS, which is not true. But that's what I would do.

Okay? Cool. Great question. Thank you very much.

With that, if you want to kind of...

Thank you.

Finished reading?