The Middle Manager AI Problem: Caught Between the Mandate and the Red Tape
Middle managers are expected to lead AI adoption but rarely get the tools to do it. Here is the structural gap and what actually moves the needle.


The Short Version
- Middle managers are handed AI mandates without the tools, budget, or builder access to deliver on them.
- 78% of APAC employees use AI weekly, but only 1% of leaders consider their organisation mature in AI deployment.
- Large organisations move slowest because governance without a builder pathway becomes a bottleneck wearing a compliance badge.
- Shadow AI goes underground when tools are banned without safe alternatives. The risk you are trying to avoid is the risk you create.
- The fix: stop teaching prompt engineering in isolation, build governed sandboxes, and equip the middle before you mandate the middle.

I was preparing for a keynote at a large organisation. The brief was simple: talk to their learning team and learning ambassadors about AI adoption. Mixed seniority, strong mandate from the top.
Before the session, I asked the learning team one question: have you yourselves used AI to automate your processes, or built your own customised tools?
Silence.
Not a single person on the team responsible for championing AI adoption had actually adopted AI themselves. They were being asked to lead a charge they had never been on.
This is the middle manager AI problem. It is neither a lack of ambition nor a lack of awareness. It’s a structural gap between the mandate they have been given and the tools, budget, and support they need to deliver on it.
The Gap Everyone Measures Wrong
Most AI adoption conversations focus on the wrong gap. They measure how many employees have access to AI tools, or how many departments have run a pilot. The numbers look impressive. A BCG survey of over 4,500 employees across Asia-Pacific found that 78% of APAC employees use AI at least weekly. In Singapore, 82% of employers already use AI in hiring, onboarding, or training.
Sounds like AI adoption is handled. It is not. I repeat. It is NOT.
The real gap is between access and capability. Between having a Copilot license and knowing how to redesign your department’s workflow around it. Between attending a prompt engineering workshop and actually building a tool that solves a real problem.
A 2026 report from Absorb Software found that while 61% of organisations have adopted or are testing AI in their L&D strategies, only 11% of HR and L&D leaders feel “extremely confident” in their future skills-building strategy. McKinsey’s 2025 State of AI survey puts it even more starkly: over 90% of companies plan to increase AI investment, but only 1% of leaders consider their organisation “mature” in AI deployment.
Read that again. Ninety percent are spending more. One percent believe it is working.
The people trapped in between those two numbers? Middle managers.
The Translation Layer That No One Equipped
Senior leadership sets the AI mandate. Frontline staff get sent to workshops. But the middle layer of department heads, L&D managers, HR managers, and team leads is stuck doing the translation work. They must figure out what tools to adopt, how to train their teams, how to measure progress, and how to show results upward. All while still doing their day job.
KPMG describes middle managers as translators who connect top-level AI decisions to day-to-day workflows. A Fortune article on AI adoption in Asia-Pacific calls them the “missing link,” arguing they face pressure from above to deliver initiatives they may not fully understand while simultaneously managing their teams’ fears about job security.
From my own conversations with talent development leaders across Singapore and the region, this pattern is consistent. A talent development leader at a global semiconductor manufacturer told me:
Right now, people use AI like a Google search. It needs to be integrated into business systems.
An L&D lead at a global medtech manufacturer with 600 employees and a 2-person training team described the challenge differently. Her technical workforce is “very fearful of being made redundant,” and she is expected to manage that change for the entire organisation.
A learning and talent management manager at a private healthcare group captured the resourcing reality: she is a solo practitioner responsible for 700 employees. One person. Seven hundred people. No AI tools beyond basic productivity software.
These are not people resisting AI. These are people who have been handed an AI mandate without the tools, budget, or builder access to act on it.
Two Worlds of AI Adoption
Here is where the picture gets interesting. While large organisations debate governance frameworks and procurement cycles, a different kind of AI adoption is happening elsewhere. And it is moving faster.
World 1: The Enterprise Governance Loop
In most large organisations, the path to AI adoption looks like this: the C-suite issues a directive. IT evaluates tools. Legal reviews data policies. Procurement runs vendor assessments. InfoSec conducts risk analysis. By the time a tool is approved, months have passed. And what gets approved is typically the safest, most restrictive option: consumer-level AI embedded in existing productivity suites.
The caution is not irrational. A regional training manager at a major data centre company described AI security as:
A huge, huge issue that stops implementation. No one has been able to persuade security to open it.
And the risks are real. Some AI tools claim intellectual property rights over outputs created on their platforms. Data entered into certain large language models can be routed through servers in jurisdictions with weaker data protection standards, creating genuine regulatory exposure. For organisations handling sensitive client data, financial records, or health information, these are compliance obligations, not hypothetical concerns.
The result? A 2025 enterprise AI report from Menlo Ventures found that 76% of enterprise AI use cases are now purchased rather than built internally. Most employees interact with pre-built AI capabilities. Builder access (the ability to configure agents, automate workflows, or create department-specific tools) is restricted to IT teams or a small group of designated “citizen developers.”
The middle manager with the AI mandate? They get a chatbot and a prompt engineering workshop.
World 2: The Leaner Organisation That Just Builds
Meanwhile, smaller and leaner organisations are taking a different path. Not because they are smarter. Because they have fewer layers between a decision and its execution.
A founder of a change management consultancy told me she had attended multiple AI events and training sessions. At each one, she asked the same question: “What are my next steps?” Every time, the trainers changed the topic. They were good at talking about the next cool thing. Not at connecting it to her actual business.
When she found a programme that mapped AI directly to her business workflows, with clear end goals and concrete next steps, she moved. Within weeks, she was building an integrated CRM, an AI-powered follow-up system, and a client-facing chatbot. Not with a development team. With the right tools and the right guidance.
This is not an isolated case. I work with solopreneurs and small businesses on AI transformation, and the pattern repeats: the decision-maker, the user, and the builder are often the same person. There is no procurement cycle. No six-month vendor assessment. No waiting for IT to whitelist a tool. The founder identifies a workflow bottleneck, picks a tool, builds the solution, and iterates. All within days.
The irony is hard to miss. Organisations with the most resources are moving slowest. Organisations with the least are building the most.
The Cost of Governance Without a Builder Pathway
Let me be clear: I am not arguing against governance. Governance is necessary. What I am arguing is that governance without a builder pathway is just a bottleneck wearing a compliance badge.
Here is what happens when organisations lock down AI tools without providing a safe, structured way for people to build:
Shadow AI goes underground. Employees who are more AI-savvy do not stop using AI just because it is not whitelisted. They use personal accounts. They paste company data into consumer tools. They find workarounds. Microsoft and LinkedIn’s 2024 Work Trend Index for Asia-Pacific found that employees are rapidly adopting AI tools, often bringing their own into the workplace amid limited formal organisational support. And because these tools are not whitelisted, the organisation has not provided governance training on how to use them safely. What is worse than a data leak? A data leak you do not know is happening.
The capability gap widens. Every month an organisation spends in the approval loop is a month its people are not learning to build. A learning and development lead at a major F&B chain told me her IDP process alone consumes so much time that she has no capacity for her core responsibilities. AI could solve this. But she does not have access to the tools that would let her build a solution.
Competitors move ahead. In every interview I conduct with talent development leaders, I ask: “If your organisation does not adopt AI meaningfully within the next two years, what might happen?” The answers are consistent. The company loses its competitive edge. Employees jump ship. Competitors offer lower prices and win market share. Everyone knows the risk. Few are acting fast enough.
Deloitte’s 2026 Global Human Capital Trends confirms this pattern at scale: 59% of organisations are taking a tech-focused approach to AI, and those organisations are 1.6 times more likely to fall short of their return expectations than those taking a human-centric approach. Buying the tool is not the hard part. Building the human capability to use it is.
The Case for Caution (And Why It Still Falls Short)
Now, here is the strongest argument against what I have just said.
Large organisations operate at a different scale with different stakes. An SME founder who pastes a client brief into Claude is risking one relationship. A bank that does the same could be violating regulations across multiple jurisdictions. The consequences are not symmetrical.
And consumer AI tools can deliver meaningful results without builder access. Microsoft’s customer stories document organisations like Bupa APAC achieving significant productivity gains through M365 Copilot alone, without widespread builder access. AstraZeneca certified 12,000 employees on consumer-grade AI tools and reported 93% positive impact. The builder gap is not universally critical.
I accept all of this. And here is why it still falls short.
Caution is a strategy. Paralysis is not. The problem in most organisations I work with is not that they are being careful. It is that caution has become the default state, with no pathway out. There is no phased plan to move from consumer access to builder access. There is no governed sandbox where teams can experiment safely. There is no timeline.
Of the dozens of talent development leaders I have spoken to across Singapore, only one described an organisation with a structured AI governance task force, a tool whitelisting process with clear risk tiers, an AI tool subsidy for employees, and a dedicated internal community for building. One. A head of talent development at a fintech company described it:
Part of the reason this governance task force was set up is because there were a lot of ‘cowboys’ in the company doing whatever they wanted with AI.
They did not solve the problem by banning AI. They solved it by creating a structured pathway: employees fill in a form describing the business problem and data sources, the governance team assesses the risk tier, and they recommend how to proceed. InfoSec evaluates and makes recommendations rather than blocking everything. Everyone else? Still debating.
What Actually Moves the Needle
If you are a middle manager reading this and recognising your own situation, here is what the evidence and my experience suggest actually helps.
Stop teaching prompt engineering in isolation
Most AI training programmes teach people to write better prompts. “Act as a practitioner with ten years of experience in this field, consider this context, then answer this question.” That is surface-level. AI creates real value when you map your department’s workflow first. Where does data flow? Where is it stored? How do systems talk to each other? What manual steps could be automated? The prompt is the last step, not the first.
Build a governed sandbox, not a blanket ban
The fintech case above is the model. Create an environment where people can experiment with builder-level tools under clear governance. Classify tools by risk tier. Let InfoSec be an enabler, not just a gatekeeper. As the same talent development leader put it:
A good infosec person evaluates the risk level and makes recommendations. They’re not there to block people from experimenting.
Microsoft’s own Copilot Studio governance guidelines recommend exactly this: dedicated environments with data loss prevention policies where “makers” can build agents safely. The frameworks exist. Most organisations have not implemented them.
Equip the middle before you mandate the middle
This is the core of it. If you want middle managers to champion AI adoption, they need to have used AI themselves. Not attended a webinar about it. Not watched a demo. Built something with it. Our Adaptive Leadership and Change Leadership programmes are designed for exactly this challenge: equipping the people in the middle with both the leadership capability and the AI builder access to move. When I ask senior leaders to describe what good AI adoption looks like in their organisation, most cannot answer the question. If the people setting the mandate cannot articulate the outcome, how can the people executing it?
The training that changes behaviour is not a one-day workshop. McKinsey’s 2025 AI workplace report found that nearly half of employees want more formal AI training and see it as the primary lever for adoption. But the training they want is not generic awareness. It is structured, embedded in real workflows, and gives them agency to build.
Reframe the urgency honestly
A lot of leaders claim AI adoption is a priority, but it keeps getting displaced by “more urgent” work. Ask the harder question. If your organisation does not adopt AI meaningfully in the next two years, what happens to your competitive position? What happens to your best employees when they see faster-moving organisations offering better tools? What happens to your costs when competitors automate what you are still doing manually?
The urgency is not theoretical. The window is narrowing.
The Real Risk
I think about that learning team often. Talented, committed people. Not a single one had been given the tools or the space to do the thing they were being asked to champion. They were not the problem. The system around them was.
The governance paradox is real. Organisations that lock down AI to manage risk are, in many cases, creating the very risk they are trying to avoid: shadow usage without safeguards, a workforce that falls behind, and a middle layer that burns out trying to translate a mandate they were never equipped to deliver.
Governance and builder access are not opposites. The organisations that will come out ahead are the ones that build both. A clear risk framework and a safe space to experiment. Compliance guardrails and the budget for their people to learn by building.
Because of the nature of my work, I know many people in this position. Remarkable people. I feel their challenge. That is why I am writing this. They want to move. People want to move.
But can they?
Your middle managers are not the bottleneck. They are the lever. Equip them.
See how the AI³ methodology gives middle managers the tools, frameworks, and builder access to lead AI adoption from within.
Explore AI³
Equip your middle managers with AI capability and leadership skills designed for the transition ahead.
Browse Programmes
Written by
Arthoven Ng
Managing Director & Lead Trainer, Overpowered
Master of Arts in Professional Education
Arthoven builds AI training programmes that stick. He has trained teams at SIM, Ninja Van, finexis, CGC Malaysia, and House on the Hill Montessori. His AI³ methodology combines human development, AI tool-building, and intrapreneurial execution.
LinkedIn →