
The OD Practitioner’s Case for Strategic Inefficiency

Where AI belongs in organisational development practice, and where being deliberately slow is the whole point.

Arthoven Ng
Managing Director & Lead Trainer, Overpowered · MA Professional Education
20 April 2026 · 14 min read

The Short Version

  • AI can now absorb the long, tedious parts of OD work — scheduling interviews, transcribing them, clustering themes, drafting the report. That part of the job is changing fast.
  • What AI cannot yet do is the relational work that makes change actually stick: contracting with a sponsor, reading a room, feeding back hard findings, holding the conversation that turns a diagnosis into a decision.
  • Sort OD activity into three zones: (1) what AI can fully own (large-scale surveys, sentiment analysis across thousands of responses), (2) what AI and humans split — AI captures what was said, humans read how it was said and how the room moved — and (3) what only humans should hold.
  • The time AI saves you belongs to the slow human work. Never hand it back to the client as a faster timeline.

We built a tool called PULSE to solve one of the most painful parts of organisational development work: the months it takes to schedule and complete one-to-one interviews with a client’s top ten leaders. PULSE has an OD orchestrator inside it, primed with the case context, the working hypothesis, and the chosen diagnostic framework. It spawns and equips AI interviewers to do the grunt work asynchronously. What used to take twelve weeks could finish in days.
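For readers who want the mechanics, the orchestration pattern looks roughly like the sketch below. This is a hypothetical illustration, not PULSE’s actual code: every name is invented, and the interview step is stubbed so the example runs on its own.

```python
# Hypothetical sketch of the orchestration pattern described above.
# Not PULSE's internals: all names are illustrative, and the interview
# itself is stubbed so the example stays self-contained and runnable.
import asyncio
from dataclasses import dataclass


@dataclass
class CaseBrief:
    """Context the orchestrator primes every AI interviewer with."""
    client: str
    working_hypothesis: str
    diagnostic_framework: str  # e.g. "Burke-Litwin"


async def run_interview(brief: CaseBrief, leader: str) -> dict:
    # A real system would drive an LLM-backed interview session here.
    await asyncio.sleep(0)  # stand-in for the asynchronous interview
    return {
        "leader": leader,
        "transcript": f"[{brief.diagnostic_framework} interview with {leader}]",
    }


async def orchestrate(brief: CaseBrief, leaders: list[str]) -> list[dict]:
    # All ten interviews run concurrently instead of being scheduled
    # one by one across executive calendars over twelve weeks.
    return await asyncio.gather(*(run_interview(brief, l) for l in leaders))


brief = CaseBrief("Acme Pte Ltd", "siloed decision-making", "Burke-Litwin")
results = asyncio.run(orchestrate(brief, [f"leader_{i}" for i in range(10)]))
print(len(results), "interviews captured")
```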

It worked. And then it didn’t.

The interviews came back fast and clean. The transcripts were searchable, the themes auto-clustered, the report drafted itself. But when I read the data against what I would have heard sitting across the table from those same leaders, something was missing. People do not talk to AI the way they talk to a human in a room. The data points were there. The texture was not. The sighs, the long pauses before a sensitive answer, the moment a leader’s voice changes when they mention a peer they do not trust. None of it was in the transcript.

That gap is the entire argument of this article. AI is not a threat to OD practice. It is a forcing function. It will take over the parts of our work that should have been automated a decade ago, and in doing so it will expose, with uncomfortable clarity, where our actual value lives.

The reflexive response from most practitioners is to ask how we can use AI to do more, faster. That is the wrong question. The right question is where AI lets us be deliberately slower, and what we should be slower about.

I call this strategic inefficiency: deliberately holding pace on the phases of OD work where the relationship is the mechanism of change, even when the tooling would let you go faster.

The conventional frame is wrong

Walk into any HR tech conference in 2026 and the pitch is the same. AI compresses the diagnostic cycle. AI surfaces themes in minutes instead of weeks. AI generates the recommendations, the presentation, the change roadmap. The implicit promise is that AI makes OD work faster, cheaper, and more scalable.

This framing is correct for part of OD work. It is dangerously wrong for the rest.

A 2025 Korn Ferry study tested whether a large language model could interpret group-level assessment data and generate client-ready reports. The model handled simple charts and summaries well. It struggled with complex visuals, nuanced insights, and tailoring to the organisation’s actual context. The authors concluded that AI should augment, not replace, the human consultant in assessment interpretation. That conclusion matches what we found with PULSE, and what most experienced practitioners will recognise from their own work.

Meanwhile, conversation-analytic research on facilitation shows that the micro-moves in a room (turn-taking, how a question is framed, when a facilitator chooses to stay silent) are what determine whether participants engage, resist, or reframe an issue. These are not preferences. They are the mechanism through which change happens. They are also, currently, beyond what AI can pick up at any reasonable cost.

The implication: AI’s speed serves some aspects of OD work. For the rest, the work can only be done through slowness.

Three zones for AI in OD practice

The most useful way I have found to think about AI in OD is to sort the work into three zones. Each zone has a different default for how AI should be used.

Zone 1: AI-100%, where speed and scale are the point

Some OD activities are bottlenecked by volume. Organisation-wide health surveys with 2,000 respondents. Open-text engagement comments at scale. Sentiment tracking across a transformation programme. These are the places where AI is not a nice-to-have. It is the only sensible answer.

McKinsey’s Organisational Health Index now draws on more than eight million survey responses across 2,500 organisations, using analytics to identify health archetypes and the management practices most associated with above-median financial performance. No human team can pattern-match across that volume. Platforms like Culture Amp fold AI comment summaries and recommendations into continuous listening, and Perceptyx research on AI’s cultural impact shows that leadership-driven AI adoption is associated with measurably higher employee engagement at scale. The AI-enhanced workplace culture analytics market is projected to grow from USD 4.22 billion in 2026 to USD 9.71 billion by 2030 for a reason.
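Mechanically, the Zone 1 pattern is simple enough to sketch. The example below clusters open-text comments with TF-IDF and k-means from scikit-learn; production platforms use embedding models and far larger inputs, but the division of labour is the same. The comments and cluster count are invented.

```python
# Minimal sketch of Zone 1 work: cluster open-text survey comments into
# candidate themes. TF-IDF + k-means is a deliberately simple stand-in
# for the embedding pipelines real platforms use. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

comments = [
    "My manager never has time for one-to-ones",
    "No time for coaching conversations this quarter",
    "Unclear who owns decisions after the restructure",
    "Decision rights are ambiguous between departments",
    "Workload leaves no room for development chats",
    "Nobody knows who signs off cross-team work",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(f"Candidate theme {cluster}:")
    for comment, label in zip(comments, labels):
        if label == cluster:
            print("  -", comment)

# The clusters are a signal, not a diagnosis: deciding whether "no time
# for coaching" is a skills problem or a workload problem stays human.
```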

But even in Zone 1, the data is never self-interpreting. A director of staff capability at an institute of higher learning in Singapore, running surveys across a workforce of 1,400, put it plainly:

“It may not be a skills issue. It’s that they don’t have time. Or it may be a very introverted leader: ‘If can, don’t meet, don’t meet.’ There are many different factors.”

Her point was that a survey flagging low coaching frequency looks like a training problem until a human sits with it and realises the supervisors know the GROW model perfectly well; they just don’t have thirty minutes in the week to use it. AI sends the signal. The practitioner decides what the signal actually means.

There is a second-order problem too, and it is the one most Zone 1 dashboards quietly ignore. A talent management lead at a Singapore statutory board described how her staff behave when they know feedback is being collected continuously:

“People feel very stressed: ‘I’m being assessed all the time.’ There must be a line drawn for employees to feel psychologically safe. People don’t rate bad because they don’t want after-action reviews.”

In her experience, that fear systematically warps the data. If the practitioner is not sitting inside the instrument and reading the politics around it, Zone 1 will hand back a confidently wrong answer at scale.

So: design the instrument, frame the questions, sense-check the output. Delegate the processing. Never confuse the processed output with the diagnosis.

Zone 2: AI-assisted, where AI captures content and humans capture dynamic

This is where most of the interesting work sits, and where most practitioners get the split wrong.

Take a diagnostic focus group with a leadership team. AI is genuinely good at capturing what was said. Reliable transcription, theme tagging, and summary writing are solved problems. What AI cannot reliably capture is how it was said: the tone, the speed, the intensity. Speaker identification across overlapping voices is still patchy. And the dynamic of the room (how others react when the loud person speaks, what happens to the energy when a sensitive topic surfaces, the micro-expression that signals a leader is performing rather than disclosing) is, for now, beyond what is realistically deployable.

Those micro-signals are not incidental. They are often the earliest indication that the stated issue is not the real issue — that the team is avoiding a topic, protecting a peer, or rehearsing a sanitised version of events for the consultant in the room. A trained facilitator picks this up the instant it happens and adjusts the next question accordingly. An AI transcript picks it up never.

The practical operating model is this. Let AI capture the content. One facilitator leads the group, and another watches for the dynamic, names the key interactions, and writes them into the report alongside the AI-generated themes. Both data streams matter. Neither alone is sufficient.

This split applies to most live, small-group OD work: leadership team diagnostics, design sessions, conflict surfacing, dialogic interventions. The AI handles the content layer. The human handles the relational layer. The report integrates both.
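One concrete way to hold that split is to treat the AI themes and the facilitator’s observations as two timestamped streams and interleave them in the report. The sketch below is an illustration under that assumption; every field name is invented.

```python
# Illustrative sketch of the Zone 2 split: the AI stream carries what was
# said, the human stream carries how the room moved, and the report
# interleaves both by time. Every field name here is an assumption.
from dataclasses import dataclass


@dataclass
class AITheme:
    minute: int
    theme: str        # machine-tagged content: what was said


@dataclass
class RoomObservation:
    minute: int
    note: str         # facilitator-captured dynamic: how it was said


def integrate(themes: list[AITheme], notes: list[RoomObservation]) -> list[str]:
    merged = [(t.minute, f"[AI] {t.theme}") for t in themes]
    merged += [(n.minute, f"[HUMAN] {n.note}") for n in notes]
    return [entry for _, entry in sorted(merged)]


report = integrate(
    [AITheme(12, "Team cites unclear decision rights"),
     AITheme(31, "Repeated references to 'the reorg'")],
    [RoomObservation(12, "Energy dropped; two leaders broke eye contact"),
     RoomObservation(31, "COO answered for the room; others deferred")],
)
print("\n".join(report))
```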

Zone 3: Human-only, where slowness is the practice

There is a third zone where AI should not lead at all. Contracting with a sponsor. Sense-making with a leadership team about what the data actually means for them. Feeding back hard findings without breaking the relationship. The conversation that turns a diagnosis into a decision.

These phases look inefficient. They are supposed to. They are where psychological safety is built, where shared meaning is negotiated, and where the political reality of the change gets surfaced. Compressing them is not a productivity win. It is a silent way of guaranteeing the change will not stick.

A chief inculcator at a Singapore OD consultancy, working across roughly twenty client organisations, described the characteristic failure mode this way:

“When I ask leaders to describe what AI adoption looks like for them, not technically, just describe it, nobody can answer.”

Her diagnosis was blunt. Adoption conversations are dominated by the technical side, while the question of how to help people process the change gets skipped. The tool lands in a vacuum, and six months later the client is confused about why nothing shifted. That vacuum is exactly the Zone 3 work that got compressed out of the engagement.

I will come back to why slowness here is not a soft preference but an evidence-backed design choice.

The implication: Match each OD activity to the zone it belongs in before deciding where AI goes. Zone confusion is where good tools create bad diagnosis.

Why the human-only zone resists automation

Edgar Schein’s concept of humble inquiry reframes leadership as the discipline of asking genuine questions rather than telling. The point is not the questions themselves. The point is the relational signal: I am willing to not know, in front of you, so that you can tell me something I would otherwise never hear. That signal is what makes upward communication possible and what surfaces early warnings about risk. It depends on subtle status negotiation, emotion reading, and context-sensitive self-disclosure that an AI cannot authentically reciprocate, because the AI has nothing at stake.

Appreciative Inquiry, as Cooperrider himself has reflected later in his career, works because it reshapes meaning-making and relationships, not because of its interview protocol. The protocol is replicable. The conditions under which it produces change are not.

Lewin’s unfreeze-change-refreeze model is similar. The “unfreeze” phase requires building a compelling case for change, surfacing concerns, and establishing psychological safety for experimentation. These are conversations that human leaders must hold. Tools support them. Tools do not own them.

There is also a trust problem that gets less attention than it deserves. A 2024 review of trust in AI and a 2024 framework on calibrating worker trust in intelligent automated systems converge on the same point: workers easily over-trust or under-trust AI, and both extremes degrade performance. Studies of chatbots in the workplace identify emotional, cognitive, and organisational dimensions of trust, and find that perceived organisational backing strongly shapes whether employees feel safe relying on automated feedback. In OD work specifically, where the data is often sensitive and the stakes are political, employees do not talk to an AI the way they would to a trusted facilitator. The signal you receive is not the signal that exists.

One caveat worth acknowledging. AI’s impact on OD is not only direct. Through job re-design and the automation of knowledge work, AI is pushing organisations towards leaner workforces — smaller teams, wider spans of control, more ambiguous role definitions, and displaced workers who need processing. The landscape the OD practitioner has to navigate is shifting under our feet. The relational work I am describing does not go away in that world. If anything, it gets harder and more necessary, at exactly the moment the client’s appetite for slow work is thinnest.

The implication: The relational mechanism through which change actually lands cannot be outsourced. Remove the human, and the signal the diagnosis relies on disappears with them.

The strategic case for being slow

So far I have been pointing out what AI cannot yet do well in OD. The stronger case is different. Even in the places where AI could compress a phase of the work, sometimes it should not.

Outside OD, several literatures have converged on the idea of deliberate inefficiency. A 2024 essay on innovation argues that successful innovation emerges from deliberate inefficiency through systematic experimentation and slack. Productivity writers describe slack time as a critical source of new ideas rather than wasted capacity. Cal Newport’s Slow Productivity argues that knowledge workers should do fewer things, work at a more natural pace, and obsess over quality.

One finding from decision science sharpens this for OD. Time pressure degrades complex decision quality, and the effect is not neutralised by giving people better tools. An experiment on clinicians using literature search tools found that with generous time, access to search improved correct answers by about 32 percent. Under high time pressure, the improvement fell to 6 percent. The tool did not compensate for the rushed thinking around it. Apply the same dynamic to an OD practitioner using AI outputs under a compressed sponsor deadline, and you get the same pattern: a hurried practitioner with AI can still produce a worse diagnosis than a slower practitioner with less tooling.

If AI compresses the diagnostic cycle from twelve weeks to one, but the sponsor now expects the leadership-alignment conversation in three days instead of three weeks, the gain is illusory. The practitioner is forced to rush exactly the System 2 work that complex change demands. The diagnosis arrives faster and lands worse.

There is also a quieter point about the shape of returns. Volume of data follows a decreasing-returns curve. The first fifty data points carry the most signal. Each additional respondent saying the same thing adds less. What does not have decreasing returns is the severity and intensity of what is being said, and that almost never comes through in words alone. Past a certain point, adding ten more interviews matters less than spending a half day with the leadership team interpreting what the first fifty already told you, and how it was told.
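A toy simulation shows the shape of that curve. Assume, purely for illustration, that the organisation has a fixed pool of underlying issues and that each interview surfaces a random handful of them:

```python
# Toy model of diminishing returns on interview volume. The numbers are
# invented; only the saturation pattern is the point.
import random

random.seed(0)
ISSUE_POOL = 25             # distinct issues assumed to exist in the org
PER_INTERVIEW = 6           # issues a typical interview surfaces

seen: set[int] = set()
for n in range(1, 51):
    seen.update(random.sample(range(ISSUE_POOL), PER_INTERVIEW))
    if n in (5, 10, 20, 50):
        print(f"after {n:2d} interviews: {len(seen)}/{ISSUE_POOL} issues surfaced")

# Typical run: most of the pool surfaces within the first ten interviews;
# interviews 20 through 50 add almost nothing new. Severity and intensity,
# by contrast, do not saturate, and they live outside the transcript.
```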

The practical implication: AI lets you collect the first fifty in a week instead of two months. Use the time you saved to do the slow human work that AI cannot do. Do not give it back to the sponsor.

The implication: Speed without protected System 2 time produces a faster, worse diagnosis. The slowness is not a softer option; it is where the quality comes from.

What this means in practice

Three practical shifts follow from this operating model.

1. Re-design contracting around the right pace, not the maximum pace.

When a client asks for the diagnosis report in three days, the answer is not “yes, AI can do it.” The answer is a structured conversation about which parts genuinely take three days (the data side) and which parts take two weeks (the human side), and why compressing the second half wastes the first.

The quantified case for this split, from our own practice: on diagnostic engagements with six or more interviewees, PULSE has reduced the data-collection window from roughly twelve weeks to under one week, a cycle compression of around 90 percent. None of that saving is returned to the client as a faster overall timeline. It is redeployed into the leadership-alignment and sense-making conversations that used to get squeezed at the back end of engagements. The total engagement length is the same. This is what it means to be strategically inefficient. The ratio of data work to human work has inverted. That inversion, not the speed-up, is what clients tell us changed the quality of the outcome.

2. Treat AI-generated reports as drafts, never as artefacts.

AI can produce a credible v1 of a diagnostic report. A trained OD practitioner still has to vet it against everything the AI could not see. My mentor put it cleanly when reviewing one of our early PULSE outputs:

“A few factors that enrich the client experience from such reports are: comparative analysis (how does this compare to other businesses their size or in their sector, and results versus self-score), and the balance between anonymity and specificity. There is a very high degree of disclosure in this report. That can be damaging and invoke sensitivities between the MD and his employees. You can still make it valuable by showing results up to a level where individual findings cannot be sourced back to an individual.”

Another senior OD practitioner I respect described a different layer of vetting he runs on every draft: a pass for “trigger words” — phrases or findings that might provoke a strong enough reaction in the client to close them off from the interventions being proposed, regardless of how accurate the underlying observation is. He also mentioned, almost as an aside, that his own reports are noticeably more vague and less detailed than the one PULSE produced, and he does that on purpose.

For a long time I thought that vagueness was a legacy habit — the way reports used to be written when they had to stand on their own without the facilitator next to them. It is not. It has a name in the organisational communication literature: strategic ambiguity. Eric Eisenberg introduced the term in 1984, and the case he made then still holds: in organisations, clarity is not always the right standard. Eisenberg’s foundational paper and the systematic review of forty years of follow-up work converge on three functions that map almost exactly onto what a good OD report has to do:

  • Unified diversity. A strategically vague finding lets a leadership team with divergent interpretations still agree it applies to them. A precise finding forces them to reject it or accept someone else’s framing of it. Vagueness buys consensus-to-act without manufacturing false agreement on cause.
  • Facilitating change. Precise language closes off options. Ambiguous language keeps the interpretive work alive in the room, which is exactly where you want it for a Zone 3 conversation. If the report has already decided, the dialogue cannot.
  • Preserving the role of the practitioner. The report alone does not move the client. The practitioner-plus-report together does. Maximum precision in the artefact quietly transfers authority to the artefact, and the conversation around it shrinks.

A 2023 ethics review draws the bright line cleanly: strategic ambiguity is legitimate when the motive is protective (protecting the relationship, the political space, and the client’s capacity to act), and illegitimate when the motive is deceptive (hiding a weak finding, avoiding accountability). Both the “trigger word” pass and the deliberate vagueness sit firmly on the protective side. They are engineered to answer the same question: does this help the client accept the report and take the action to build the organisation after it? A report that is technically correct but gets rejected in the room has done nothing for the organisation.

Both sets of judgements (what comparison to draw, where to draw the anonymity line, which words are trigger words for this specific client, how much to leave strategically vague) are nuanced calls about audience, politics, and second-order consequences. They are exactly the kind of work that should never be delegated. AI gets you to v1. The practitioner earns the report.
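Parts of that vetting can be staged mechanically before the human pass, so the practitioner’s attention goes where the judgement is. The sketch below applies an anonymity floor and a trigger-word scan to draft findings; the threshold, the word list, and all the data are invented assumptions, and neither check replaces the calls described above.

```python
# Illustrative pre-checks on an AI-drafted report: suppress findings
# traceable to too few respondents (an anonymity floor), and flag
# candidate trigger words for human review. Thresholds and word list
# are assumptions, set per client; the judgement calls stay human.
from dataclasses import dataclass

MIN_GROUP_SIZE = 5                       # anonymity floor for this engagement
TRIGGER_WORDS = {"incompetent", "toxic", "favouritism"}  # client-specific


@dataclass
class Finding:
    text: str
    respondents: int                     # how many people this traces back to


def vet(findings: list[Finding]) -> None:
    for f in findings:
        if f.respondents < MIN_GROUP_SIZE:
            print(f"SUPPRESS (n={f.respondents}): {f.text}")
        elif hits := TRIGGER_WORDS & set(f.text.lower().split()):
            print(f"REVIEW ({', '.join(hits)}): {f.text}")
        else:
            print(f"KEEP: {f.text}")


vet([
    Finding("The MD's direct reports describe him as toxic", 2),
    Finding("Middle managers report favouritism in promotions", 11),
    Finding("Coaching conversations rarely happen under time pressure", 30),
])
```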

3. Reserve the System 2 phases as protected time.

Contracting, sense-making with the sponsor, leadership team feedback, the design of the intervention itself. These are the phases where practitioners should be deliberately slow, where AI sits in the background as a reference, and where the calendar should refuse to compress. An APAC leadership coach I spoke to noted that the conversations clients are starting to ask for have changed in character:

“Before it was enough to ask ‘When do you want your next promotion?’ Now the conversations need to go deeper: ‘Where do you find meaning? What else could you do?’”

That is not a conversation AI can credibly hold. It is the conversation practitioners should be optimising to have more of. A longitudinal study of team dialogue sessions in a corporate setting found that teams engaging in structured dialogues over two years showed statistically significant improvements in employee engagement compared with control groups. Two years. Not two weeks.

I hold that specific finding loosely. The study predates the current generation of AI, and some of the slow work it describes may yet compress as tools improve. But for the conversations this section is about — meaning, purpose, identity at work, how a leader sits with the person they are becoming — the duration is still the intervention.

The implication: Redesign contracts around the right pace, treat AI outputs as drafts, and protect the slow phases as non-negotiable time. Those three shifts are where the operating model lives.

The honest counter-argument

The strongest pushback against this position is that the technology will close the gap. Multimodal models will read the room. Real-time speaker identification and emotion inference will mature. At some point, AI will see what the facilitator sees.

Some of that is probably true on a long enough timeline. None of it is true today, and I would not stake a client engagement on the timeline. The other counter-argument, that practitioners are protecting their own jobs, deserves a direct answer. The operating model in this article will reduce billable hours on the data side. That is fine. The work that remains is harder, more valuable, and harder to compete on. If your value as a practitioner depended on being the fastest transcriber in the room, AI was always going to find you.

The risk in this moment is not that AI replaces OD work. It is that practitioners, under pressure to look modern and efficient, hand over the slow phases as well as the fast ones, and quietly hollow out the practice from the inside.

Bet on the relational gap, not on the transcription gap. The technology may close the second. The first is where the practice lives.

Closing

PULSE still runs. We use it on every diagnostic engagement that has more than six interviewees. It saves us weeks. We do not give those weeks back to the client as a discount or a faster timeline. We spend them on the parts of the work that AI cannot do: sitting with the leadership team, interpreting what the data means for them specifically, designing the intervention that fits their politics and their appetite, holding the conversation that turns a finding into a decision.

That, increasingly, is what an OD practitioner is for. The fast parts of the work will keep getting faster. The slow parts, if we defend them, will keep being where the change actually happens.

Strategic inefficiency is not a softer way of saying “human in the loop.” It is a design principle. Some phases of OD work should run as fast as the technology allows. Other phases should run as slowly as the relationship requires. Knowing the difference, and holding the line on it, is the work.


Written by

Arthoven Ng

Managing Director & Lead Trainer, Overpowered

Master of Arts in Professional Education

Arthoven builds AI training programmes that stick. He has trained teams at SIM, Ninja Van, finexis, CGC Malaysia, and House on the Hill Montessori. His AI³ methodology combines human development, AI tool-building, and intrapreneurial execution.


Want to bring PULSE into your next OD engagement?

We build bespoke AI diagnostic tools and train practitioners to use them well. Start with a conversation.

Get in touch
