Why Everything You Were Taught About Leading Is Now Incomplete

Spring 2026

The conventional narrative is that artificial intelligence automates tasks, which frees leaders to focus on more strategic work. That framing is simple, comforting, and almost entirely wrong.

What AI has actually changed is what it means to be a leader. For the first time in history, you are no longer just leading people. You are leading a new type of workforce – a team of both human and digital resources, each with fundamentally different strengths, weaknesses, failure modes, and governance requirements.

Your people need motivation, support, and challenge. Your digital resources need governance, prompt architecture, and operating parameters. Both need leadership—but not the same kind. Conflating the two is how organisations end up either ignoring AI entirely or deploying it senselessly.

This is not a technology adoption challenge. It is a new type of organisational design problem—and most leaders are approaching it with frameworks built for a world that no longer exists.

Psychological Safety Is Now A Strategic Advantage

Innovating with AI begins with permission. The leader’s first job is to create the cultural conditions where experimentation is encouraged, failure is absorbed safely, and the organisation can debate the consequences of new tools openly.

Most organisations are making one of two mistakes: mandating adoption without providing the necessary support, or permitting it passively without building the psychological safety people need to experiment successfully.

The right approach is structured experimentation – invest in tools, share resources and training, create space for people to learn, try, fail, and share. But also invest in the conversations most organisations avoid: what are the second-order consequences? What are the ethical boundaries? What happens when the tool gets it wrong, and the human doesn’t catch it?

A culture that can hold those debates openly is a culture that can adopt AI at pace without losing its judgement. A culture that cannot is a culture that will either stall or crash.

None of this is easy, and it would be dishonest to pretend otherwise. Most leaders right now are operating in genuine ambiguity – expected to have a view on AI strategy while still figuring out what the tools actually do.

That uncertainty is not a leadership failure. Refusing to acknowledge it is.

Role-Modelling or Irrelevance

If leadership is not visibly using AI, talking openly about how they use it, and sharing both the wins and the failures, then every message about adoption rings hollow. This is the equivalent of the CEO who champions a data-driven culture while making every significant decision by instinct—people see through it instantly.

But this goes further than just “showing your working.” Leaders who engage visibly with AI are, over time, redefining what leadership competence looks like. When a CEO shares how they use AI to stress-test their own strategic assumptions, they are not just modelling behaviour — they are shifting the standard.

Leadership in this context stops being about having the answers and starts being about demonstrating the quality of the questions.

Share what you are using. Show where it accelerates your thinking. Be honest about where it falls short.

The leader who demonstrates intelligent, critical engagement with AI gives the entire organisation permission to do the same.

When AI Generates Data Nobody Was Trained to Handle

Of all the governance challenges AI presents, this is the least discussed and the most consequential.

Every AI tool introduced generates new data, and new data generates new responsibilities that most businesses have not resourced, governed, or even identified.

Consider AI note-takers—tools that transcribe meetings, summarise discussions, and increasingly analyse sentiment. These are already widespread, and are already generating a data trail that will inevitably feed into HR processes: performance reviews, disciplinary proceedings, restructuring decisions.

The problem is not the technology. It is that the people who will access and act on that data have never been trained to interpret it. They do not understand that an AI-generated summary is a probabilistic interpretation, not an objective record.

As a leader, if you are introducing a tool, you are introducing a consequence. Your job is to see the consequence before it arrives – not after it has caused a problem.

They Don’t Need Motivation. They Need Governance.

Your leadership portfolio has expanded to include digital resources that behave nothing like people. They do not need motivation, but they do need governance.

Models do not fatigue, but they do hallucinate. They can process information at a speed that dwarfs any human team, but they cannot distinguish between a decision that is technically correct and one that is organisationally misaligned.

Bad digital resource management rarely looks dramatic. It looks like a strategy deck built on AI-synthesised market data that nobody pressure-tested. It looks like a customer communication drafted by a model that was never briefed on tone. It looks like a junior team member who stopped engaging with a subject matter expert because the AI was faster – and whose judgement quietly atrophied as a result.

Managing human performance is built on psychology, emotional intelligence, and trust. Managing digital performance is built on prompt accuracy, boundary-setting, and validation. The failure to recognise this distinction is already producing a generation of leaders who ignore the digital side of their workforce entirely – and, sadly, all too many ignore the human side too.

The integration of human and digital capability is itself a new leadership competence – not an extension of an old one.

You Will Be Challenged More Often, More Credibly, and With Less Warning

The information advantage that sustained hierarchical leadership for a generation is collapsing. When an early-career professional with a well-configured AI platform can synthesise a market landscape in minutes that previously required a strategy team and three weeks, the structural advantage disappears almost overnight.

The response is not to restrict access—that leads to resentment and stagnation—but to raise the quality of leadership to a standard that welcomes informed scrutiny.

This is a threat to mediocre leadership that relied on informational asymmetry. It is not a threat to genuinely inspirational leadership.

Invest in the capabilities no AI replicates: judgement under genuine ambiguity, emotional regulation under sustained pressure, the ability to explain decisions transparently, and the willingness to change course when the evidence demands it.

The Leadership Bar Has Been Raised – And That Is an Extraordinary Opportunity

The leaders who will define the next decade are the ones who remain genuinely open-minded about how their own practices must change – who treat AI not as a threat to their authority but as an intelligence multiplier that raises the standard of every decision they make.

I say all of this from inside the work, not above it. At Oxygen, we are living this transition in real time – building a billion-pound natural capital portfolio while simultaneously developing the next generation of leaders through dedicated, psychologically grounded programmes that treat leadership as a trainable discipline, not a personality trait.

We do this because we have learned, through experience, that leadership development grounded in performance psychology is not a culture initiative but an operational one. In a world where AI amplifies every other capability, the quality of human leadership is the binding constraint on organisational performance.

AI has not made leadership harder. It has made pretending to lead impossible.