Most Organisations Think They’re Integrating AI. They’re Just Chatting.
A four-level framework for understanding where your organisation really stands on AI integration.


The Short Version
- 61% of leaders believe AI is fully implemented. Only 36% of employees agree.
- Most organisations are at Level 1: individual chatbot use with no strategy.
- Real integration means AI is woven into processes, decisions, and value creation.
- The four levels: Individual Use, Departmental Tools, Connected Enterprise, Strategic Differentiator.
- The first step is not buying better tools. It is being honest about where you actually stand.
Here’s a number that should make every Head of L&D, HR Director, and CEO pause: 61% of leaders believe AI is fully implemented across their organisation. Only 36% of employees agree.
That gap didn’t surprise me. Over the past year, I’ve sat across the table from 15 talent development leaders across industries: logistics, semiconductors, healthcare, financial services, F&B, government, non-profits. I asked each of them the same question: how is your organisation really using AI?
The honest answer, in almost every case? People are using ChatGPT to write emails.
That’s not integration. That’s chatting.
The Problem with "Adoption"
Everyone measures adoption. McKinsey’s 2025 survey reports 88% of organisations use AI in at least one function. IBM’s Global AI Adoption Index puts enterprise AI usage at about 42%. The numbers sound impressive.
But what do they actually mean?
When I spoke with a senior talent development leader at a global semiconductor manufacturer, she put it simply: people are using AI like a Google search. A training manager at a data centre operator described the same pattern, then added that security blacklists are a "huge, huge issue" blocking even basic tool experimentation. At a government statutory board, Copilot had just been issued, and usage was mostly personal productivity. Refining emails.
This is the gap between adoption and integration. Adoption means people are using AI. Integration means AI is woven into how work actually gets done, how departments connect, how value is created. Every major framework, from MIT CISR to Gartner to Deloitte, tries to measure this progression. But most of them share the same flaw: they’re technology-centric, they oversimplify non-linear paths, and they rely on self-assessment, which makes them susceptible to the very perception bias they’re trying to measure.
BCG’s responsible AI maturity study found that 55% of organisations are less advanced than they believe. EY research shows 52% of executives rate their company 4 out of 5 on AI maturity, but underlying capabilities lag far behind. The Multiverse data is perhaps the most telling: 45% of firms classified analytically as "AI Beginners" still claim full implementation.
I think the overestimation problem isn’t just optimism. People rate themselves highly because they can’t see what advanced looks like. When your only reference point is a chatbox, using it daily feels "mature".
The Four Levels of AI Business Integration
I built this framework from published research and from what I see on the ground. It measures integration depth, not adoption breadth. Adoption is inevitable. The real question is how deeply AI is woven into your processes, your decision-making, and your value creation.
Level 1: Individual AI Use (The Chatbot Phase)
This is where the vast majority of organisations sit today. Individuals use ChatGPT, Copilot, or Gemini for personal productivity. Drafting emails, summarising documents, brainstorming. Jobs stay fundamentally the same. People just do existing tasks faster.
There’s no organisational strategy or coordination. Often, no one in leadership even knows who’s using what. Shadow IT, where employees adopt tools without IT’s knowledge or approval, is the norm.
"A lot of people know AI. But they don’t really KNOW AI. They’re attracted by the flashy things but don’t really know how it can help."
– VP of Talent Management, top-three global logistics operator
An L&D lead at a tourism statutory board confirmed the same pattern: most staff were at "personal chat productivity level" with no real agentic workflow in place. The risk at Level 1 isn’t that people are using AI. It’s that they’re using it without guardrails. Sensitive data enters public tools. One incident and IT shuts it all down.
Level 2: Departmental Custom AI Tools (The Builder Phase)
This is where work changes. Instead of using a general chatbot, departments build their own tools. Sales builds an AI-enhanced CRM. Operations builds AI-driven checklists and project management tools. Finance automates anomaly detection. HR builds screening tools or training content generators. (We build exactly these kinds of systems for businesses that want to see what Level 2 looks like in practice.)
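To make Level 2 concrete, here is a minimal sketch of the kind of bespoke tool a finance team might build: a few lines that flag unusual transaction amounts. Everything here is illustrative; the function name, thresholds, and approach (a median-based "modified z-score") are my assumptions, not a description of any client's actual system.

```python
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Flag amounts whose modified z-score exceeds `threshold`.

    Uses the median absolute deviation (MAD), which is robust to the
    very outliers we are trying to find, unlike a plain mean/stdev score.
    """
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # all values (nearly) identical: nothing to flag
    return [a for a in amounts if 0.6745 * abs(a - med) / mad > threshold]

# A batch of routine payments plus one outlier: only the outlier is flagged.
print(flag_anomalies([100, 102, 98, 101, 99, 5000]))
```

The point is not the ten lines of code. It is that the finance analyst who builds and maintains this now owns a tool, its thresholds, and its mistakes, which is exactly the job redesign Level 2 implies.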
The critical distinction: these are bespoke tools, built by the people who use them. Not purchased off the shelf from a vendor.
Why does that matter? Because it’s a fundamental job redesign. You’re no longer "doing the work." You’re maintaining the tool that does the work and being responsible for its output. When it hallucinates, you fix it. When processes change, you update it. That’s a different job, and it demands a different set of skills.
In my conversations, I saw very few organisations approaching Level 2. An L&D professional at a global medical technology manufacturer had deployed a flight risk prediction model using data patterns, and a financial advisory firm was using AI-generated video for training content. These were isolated examples. Most organisations haven’t even imagined what Level 2 looks like, let alone attempted it.
A regional people development lead at a global testing and certification firm described the vision clearly: identify AI champions from each team, send them to learn how to build tools, then have them bring those tools back to their departments. That’s the Level 1 to Level 2 pathway. But she was describing a 2026 aspiration, not a 2025 reality.
Level 3: Connected Enterprise AI (The Integration Phase)
This is where most organisations say they want to be, but almost none are here.
Level 3 means departmental AI systems talk to each other. For example, when a deal closes in the Sales system, it automatically triggers the Operations system to pull project specs and resource allocation. When inventory drops, Finance is alerted and Procurement initiates supplier outreach. Reporting across functions is automated. Alerts and escalation triggers fire across systems.
Here’s a key insight: the path to Level 3 runs through Level 2. Vendor tools typically won’t build custom integrations for one client’s internal systems. But when tools are built in-house, connecting them can be as straightforward as prompting AI to build the integration layer. Custom-built capability at Level 2 unlocks Level 3 in ways that purchased software cannot.
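As a sketch of what "departmental systems talking to each other" can mean in practice, here is a toy event bus: a deal closing in a Sales tool automatically queues work in an Operations tool. The event names, payload fields, and class are invented for illustration; real Level 3 work would sit on top of whatever systems your departments actually built at Level 2.

```python
# Illustrative only: a minimal publish/subscribe event bus connecting
# two hypothetical departmental tools.

class EventBus:
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event, handler):
        """Register a handler to run whenever `event` is published."""
        self.handlers.setdefault(event, []).append(handler)

    def publish(self, event, payload):
        """Deliver `payload` to every handler subscribed to `event`."""
        for handler in self.handlers.get(event, []):
            handler(payload)

bus = EventBus()

# The Operations tool subscribes to the Sales event it cares about.
project_queue = []
bus.subscribe("deal.closed", lambda deal: project_queue.append(
    {"client": deal["client"], "specs_needed": True}))

# The Sales tool publishes when a deal closes; Operations reacts automatically.
bus.publish("deal.closed", {"client": "Acme", "value": 50_000})
print(project_queue)
```

In a real organisation the "bus" might be webhooks, a shared database, or a workflow platform, but the shape is the same: one department's output becomes another department's trigger, with no human copying data between systems.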
None of the 15 leaders I interviewed described anything close to Level 3. No one had connected cross-departmental AI systems. This aligns with the published research: BCG finds that companies scale only 19% of their AI use cases on average, and MIT’s work on AI scaling shows most enterprises never progress beyond "islands of experimentation."
Level 4: AI as Strategic Differentiator (The Ecosystem Phase)
At Level 4, AI isn’t just embedded in operations. It IS the competitive advantage. The organisation’s unique combination of domain expertise, proprietary data, and AI systems creates value that competitors cannot easily replicate.
The moat isn’t the tool. Tools can be copied. The moat is the ecosystem: proprietary data generated by the integrated system, domain expertise that validates and improves AI output, client relationships that feed the system, and speed of iteration that compounds over time.
I’ll use my own company as an honest example. Overpowered has built AI agents that can replicate end-to-end training design, from research through programme flow to slide planning. Could someone copy the agent? Technically, yes. But they wouldn’t have the accumulated delivery data, the client-specific insights, the domain expertise to know when output is wrong, or the iteration speed from running it on real engagements weekly. The tool is the accelerator. The ecosystem is the moat.
MIT SMR highlights Michelin as a rare Level 4 example, having scaled more than 200 AI use cases from predictive maintenance to quality inspection, reportedly generating over €50 million in annual ROI. The common thread across Level 4 organisations: AI isn’t a department. It’s the business model.
Why Most Organisations Are Stuck at Level 1
Three patterns emerged from my conversations.
First, the ceiling of imagination problem. If you’ve never seen what Level 2 or Level 3 looks like, you can’t envision getting there. A training lead at one of Singapore’s largest F&B groups described management using AI "just because it’s meta" rather than with strategic intent. An APAC L&D lead at a logistics company observed leaders "throwing AI jargon around" while non-technical staff don’t touch AI automation at all.
Second, structural blockages. Security teams that block AI tools entirely. IT departments whose KPI, as one talent development leader at a semiconductor manufacturer put it, "is 100% safety rather than enabling meaningful use." And why would they risk it? Their performance is measured on keeping the firm secure, not on enabling AI experimentation. There’s no reward for taking that risk. A training manager at a data centre operator called security blacklists a "huge, huge issue." These organisations haven’t found the balance between protection and enablement.
Third, the sharing tension. Organisations that figure out how to integrate AI well have a disincentive to share. They’ve built competitive advantage. Why help competitors catch up? But if the ecosystem doesn’t share knowledge, collective progress slows. This creates an invisible drag on the entire market’s integration maturity.
What It Takes to Move Up
Moving from Level 1 to Level 2 requires three things: awareness that AI can do more than chat, at least one department willing to experiment with building custom tools, and an AI policy simple enough that experimentation doesn’t get killed by IT security. This is where structured AI training programmes make a difference: not prompt engineering workshops, but programmes that teach people to build.
Moving from Level 2 to Level 3 is harder. It requires senior leadership mandate to break down silos, willingness to share data across department boundaries, and technical capability to connect systems. The research backs this up: a global survey of 2,525 decision-makers found that 99% encountered challenges implementing AI, and 91% faced issues across all three domains: technological, organisational, and cultural.
At every level, governance must grow in parallel. At Level 1, it’s simply: don’t get AI banned by leaking data. At Level 2: who’s responsible when the tool is wrong? At Level 3: who owns the data flowing between systems, and what happens when one system feeds bad data to another?
You don’t need to become a governance expert. But you need to own the practical question: what breaks if this goes wrong, and who fixes it?
Where Training Fits (And Where It Falls Short)
Most AI training programmes stop at Level 1. They teach people to use chatbots, to be prompt engineers. Necessary, but not sufficient.
At Overpowered, our AI³ programme was designed to go further. AI¹ (Appreciative Inquiry) uses a strengths-based approach to help people discover where AI fits in their specific context. AI² (Artificial Intelligence) teaches them to build custom tools with no code. AI³ (Applied Intrapreneurship) pushes them to think about cross-functional value creation and new revenue.
That maps to Level 1 through Level 2, with the mindset for Level 3. At Overpowered, we’re already exploring Level 3 ourselves. We’ve connected our CommonGround CRM with agentic workflows in Claude, so our outbound, lead tracking, and follow-ups talk to each other. But I’ll be honest: the technology still has room to improve. There are still errors and bugs. It will only get better over time. And that’s actually the opportunity: organisations that start building at Level 1 and Level 2 now will be ready when Level 3 becomes easier to implement as the technology matures.
Deloitte’s 2026 State of AI report confirms the gap: insufficient worker skills are now cited as the biggest barrier to integrating AI into existing workflows. But the upskilling programmes most companies invest in only teach prompt engineering, how to use chatbots better, which is Level 1. Very few are training employees to build custom AI tools (Level 2), and almost none are redesigning jobs so that the employee becomes responsible for the tool’s output, not just their own work. That’s the real skills gap: not "how to prompt" but "how to build, maintain, and be accountable for AI systems."
The next generation of AI training needs to go beyond tool usage. It needs to teach organisations how to connect what they’ve built.
Honest Limitations, and a Starting Point
This framework has limitations I want to name openly. It’s strongest on the people, process, and organisational development dimensions. Governance is essential but not my primary domain of expertise. Industry variation means the levels aren’t always strictly sequential. And the framework would benefit from broader cross-industry validation beyond my current sample of 15 leaders in Singapore and the region.
But here’s what I’m confident about: most organisations are at Level 1. They think they’re further along because they can’t see the ceiling. The first step isn’t buying better AI tools. It’s being honest about where you actually stand.
If your AI strategy stops at the chatbox, you’re not integrating AI. You’re just chatting with it.
What are the four levels of AI business integration?
Level 1: Individual AI Use (chatbot phase), Level 2: Departmental Custom AI Tools (builder phase), Level 3: Connected Enterprise AI (integration phase), Level 4: AI as Strategic Differentiator (ecosystem phase). Most organisations are stuck at Level 1.
Why do most organisations overestimate their AI maturity?
Research shows 55% of organisations are less advanced than they believe. People rate themselves highly because they cannot see what advanced looks like. When your only reference point is a chatbox, using it daily feels mature.
See how the AI³ methodology moves teams from Level 1 to Level 2.
Explore AI³
Want to see what Level 2 looks like in practice? We build custom CRMs, automated workflows, and AI tools for businesses.
See What We Build
Written by
Arthoven Ng
Managing Director & Lead Trainer, Overpowered
Master of Arts in Professional Education
Arthoven builds AI training programmes that stick. He has trained teams at SIM, Ninja Van, finexis, CGC Malaysia, and House on the Hill Montessori. His AI³ methodology combines human development, AI tool-building, and intrapreneurial execution.
LinkedIn →

Want to move your team beyond Level 1?
We build AI systems and train teams to run them. Start with a conversation.
Explore AI³