Leadership Lab

Navio Kwok, PhD, is a leadership adviser, specializing in organizational psychology, at leadership advisory firm Russell Reynolds Associates. Aleka MacLellan, PhD, is an associate director at Russell Reynolds Associates. She specializes in offering leadership succession advisory services to boards, executives, investors and multigenerational families.

When it comes to artificial intelligence, the most common messages that organizational leaders convey are:

  1. Humans who use AI will replace humans who don’t
  2. AI will augment jobs
  3. AI must be adopted with a crawl-walk-run strategy
  4. Strategic silence prevents panic

With each message, leaders are committed to positive outcomes as they usher in AI.

Sometimes, a leader's actions are obviously harmful. But other times, even behaviours with the best intentions can have unintended consequences, such as creating more angst around job displacement.

Here is a framework that describes leaders along two psychological characteristics, explaining the reasons behind their AI messages, the unanticipated costs and what leaders should do instead.


This framework describes leaders along two psychological characteristics and provides examples of their AI messaging. Navio Kwok / Russell Reynolds Associates / Supplied

The first characteristic describes leaders by their focus on opportunities versus risks. The former are driven by maximizing gains and eager to seize new possibilities, while the latter are driven by minimizing losses and are critically attuned to threats.

The second characteristic describes leaders by their focus on tasks to be completed compared with people completing the tasks. The former prioritize efficiency and productivity, while the latter prioritize team cohesion and worker well-being.


Combining these characteristics results in four types of leaders, with two points worth noting. First, they all perform essential leadership functions, so none are inherently better than the others. Second, leaders will engage in all four functions, but will also naturally gravitate toward one type.

Opportunities/task-oriented leaders

The message: These leaders are most likely to convey the message that humans with AI will replace humans without AI. Excited by AI's potential to improve productivity (as nearly three in four global leaders are, according to Russell Reynolds Associates research), they encourage their teams to adopt AI skills.

The consequences: However, emphasizing AI skill adoption paradoxically creates more angst around job displacement. Although leader support is critical to whether employees embrace AI, those who use more AI at work are less willing to admit that they do so for important tasks and more worried that doing so makes them look replaceable.

The advice: Help employees understand that human judgment is still indispensable by highlighting success stories where AI has supported or enhanced your and other employees’ decision-making and productivity.

Opportunities/relationship-oriented leaders

The message: These leaders are most likely to convey the message that AI will augment rather than replace jobs. Oriented to the opportunities of AI while aware of its effect on the people involved, these leaders provide reassurance by emphasizing the net positives that AI will bring.

The consequences: Highlighting job augmentation creates two problems. First, it sets unrealistic expectations about the immediate benefits. Before AI can augment roles, it must first automate tasks, which requires additional human oversight. This means employees will actually be busier in the short term, dealing with new and unfamiliar responsibilities and upskilling within the AI ecosystem of tools. Second, suggesting the possibility of augmentation without providing specific guidance on what it entails still leaves employees uncertain about the future of their roles.

The advice: In the short term, be transparent about the additional workload and learning curve. Upskilling requires capacity, so employees must be given permission to say no to requests – something that you must model. In the long term, co-create expectations with employees about what their augmented roles will look like.

Risks/task-oriented leaders

The message: These leaders are most likely to convey the message that AI must be adopted with a crawl-walk-run strategy. Cautious and structured, these leaders prefer taking a gradual approach and managing missteps as they implement AI.

The consequences: While leaders are still thoughtfully crawling, employees are already jogging. Whether leaders like it or not, employees are already using AI in their own ways. Failing to meet employees where they are risks inconsistent and unregulated AI use, leading to more risk, not less.

The advice: Swiftly establish guiding principles around responsible AI use, which may already exist in the form of shared values or commitments. Then, fast-follow with granular policies that can be implemented iteratively.

Risks/relationship-oriented leaders

The message: These leaders are most likely to remain strategically silent about AI, resulting in extensive under-communication. Oriented to AI risks and their effect on people, these leaders prefer maintaining stability and avoiding panic around job displacements. Staying silent on AI is presumed to prevent unnecessary fear.

The consequences: Such avoidance causes even more uncertainty and anxiety. In ambiguous situations, such as how AI could affect jobs, a lack of communication and information creates fertile ground for rumours. And when changes do inevitably occur, this silence ultimately breeds greater cynicism toward change.

The advice: The value of communicating about AI outweighs the discomfort you may have about addressing the elephant in the room. In fact, leaders are 10 times more likely to be criticized for under-communicating than over-communicating. If you feel you are over-communicating, you’re probably doing just the right amount.


The truth is that a full transition to AI is not imminent, which means we have ample time to prepare. Yet, the paradox of time is that the more of it we have, the less of it we use for important but non-urgent things, like preparing for the effects of AI on jobs. More on this next week.

This column is part of Globe Careers’ Leadership Lab series, where executives and experts share their views and advice about the world of work. Find all Leadership Lab stories at tgam.ca/leadershiplab and guidelines for how to contribute to the column here.
