
AI Promised to Lighten the Load. It Didn’t.

Why AI burnout is a leadership and work design failure, not a technology problem.

Arthoven Ng
Managing Director & Lead Trainer, Overpowered
15 March 2026 · 10 min read

The Short Version

  • 77% of employees using AI say it added to their workload, not reduced it.
  • AI didn’t create a new type of burnout. It amplified the existing one through technostress.
  • Organisations absorb AI time savings into higher throughput expectations, not fewer working hours.
  • Workers face a double bind: use AI, but don’t look like you used AI.
  • AI burnout is a work design failure. The fix is explicit workload reduction, task redesign, and honest conversations about AI optics.

Here’s what no one warned us about: 77% of employees using AI say it has added to their workload, not reduced it.

That number comes from the Upwork Research Institute’s 2024 study of 2,500 C-suite executives, full-time employees, and freelancers globally. The same study found that 96% of C-suite leaders expect AI to boost productivity. Meanwhile, 47% of employees using AI say they have no idea how to achieve the productivity gains their employers expect.

Read that again. Nearly all executives believe AI will make their people more productive. Nearly half of the people actually using AI don’t know how to deliver on that expectation.

AI Burnout Is Classic Burnout, Multiplied

Burnout isn’t new. Emotional exhaustion, cynicism, reduced efficacy. That’s the established model, and it was already widespread before generative AI entered the workplace. Gallup’s 2025 State of the Global Workplace reports global employee engagement falling to 21%, with 41-44% of workers reporting significant daily stress.

AI didn’t create a new type of burnout. It amplified the existing one. Researchers call it "technostress," and it comes with its own sub-categories: techno-overload (too many digital demands), techno-complexity (difficulty understanding the tools), techno-insecurity (fear that AI makes you obsolete). A 2025 Frontiers study on generative AI technostress identified additional AI-specific stressors: regulatory ambiguity, data protection concerns, doubts about output reliability, and the constant shift toward monitoring and oversight work.

That last one is worth pausing on. When AI enters a workflow, the nature of work changes. You’re no longer just doing the work. You’re prompting, reviewing, validating, and editing AI-generated output on top of doing the work. That’s a fundamentally different cognitive demand.

And it’s stacking on top of a workforce that was already stretched thin.

More Tasks, Faster. Not Fewer Tasks.

The promise of AI was straightforward: save time, do less grunt work, focus on what matters. The reality is the opposite.

Microsoft’s 2024 Work Trend Index surveyed 31,000 people across 31 countries. 75% of knowledge workers report using AI at work. But 68% say they struggle with the pace and volume of work, and 46% report feeling burned out despite AI uptake. The 2025 update reinforced the pattern: 53% of leaders say output must increase, while 80% of employees report they’re already at capacity.

The data tells a clear story. AI accelerates individual tasks. Organisations absorb those gains into higher throughput expectations, not fewer working hours. A Harvard Business Review study on "AI brain fry" found that productivity gains plateau after employees juggle three or more concurrent AI tools. After that threshold, decision fatigue and error rates climb. 14% of AI users reported mental fatigue specifically from managing multiple AI systems, and those experiencing brain fry reported making 39% more major mistakes.

This is what I see on the ground. I run a two-person consultancy with AI agents handling research, outbound sales, content creation, programme design. Every agent needs orchestration: decisions, approvals, context switches. The hours haven’t changed. The cognitive load has multiplied four or five times over. Previously, a productive day meant getting two or three things done well. Now, the standard for what counts as “productive” has inflated beyond recognition.

A talent development leader at one of Singapore’s largest F&B chains captured it perfectly:

"Even if AI turns my eight-hour workday into one hour, the people who don’t know what to do with the remaining seven will worry about being replaced. And the people who do find things to do? They’ll just multiply their own responsibilities."
(Talent development leader, major F&B chain, Singapore)

Either way, no one rests.

The Expectation Trap

There’s a particular tension that the research captures but undersells. It happens when senior leaders understand AI’s potential. They pick it up. They see how fast it can move. And then they expect everyone else to match that pace.

But most employees aren’t like their bosses. They’re not innate experimenters. They don’t have time carved out to play with tools. They’re tied to their BAUs, their day-to-day responsibilities. Where exactly are they supposed to find the space to learn, experiment, and get fluent?

And then comes the second layer. The boss throws a question at an employee. The employee uses AI, gets a decent answer, and sends it back. The boss reads it and thinks: if all you’re going to do is copy and paste what AI gives you, then what do I hire you for?

So now the employee is stuck in a double bind. Use AI, but don’t look like you used AI. Be faster, but also be original. The expectation is that after AI generates content, the employee reads, validates, checks, internalizes, and adds their own judgment before presenting anything. That’s the right expectation. But AI generates at speeds far beyond what humans can read and process. The verification burden is the hidden cost that nobody budgets for.

In my conversations with talent development leaders, a lead at a global semiconductor manufacturer described senior leadership driving AI productivity mandates without a clear strategy for how staff should actually adopt them. A VP of talent management at a top-three global logistics operator saw the same gap:

"A lot of people know AI. But they don’t really KNOW AI. They’re attracted by the flashy things but don’t understand how it can actually help."
(VP of Talent Management, top-three global logistics operator)

When leadership doesn’t deeply understand AI but still demands productivity from it, the pressure flows downhill with no guidance attached.

The Uncanny Middle: Use AI, But Hide It

A 2025 study published in the Proceedings of the National Academy of Sciences (PNAS) tested this tension in controlled experiments with over 4,400 participants. The findings: people who use AI tools at work anticipate negative evaluations. And those fears are justified. Observers rate AI-using workers as less competent and less motivated, even when the AI-assisted output is objectively better. These penalties extend to hiring and promotion decisions.

A 2025 WalkMe survey found that 49% of employees admit to hiding their AI use to avoid judgment, a phenomenon described as "AI shame." 53% of C-suite leaders conceal their AI habits despite being the most frequent users. Gen Z workers are even more affected, with 62% hiding their use. And only 7.5% of employees have received extensive AI training.

Here’s a small example that makes the point. I used to love using em dashes in my writing. They’re a perfectly legitimate tool for pulling ideas together. Now? Leave one in and people assume AI wrote it. So I edit them out. I’m censoring my own writing style to avoid the appearance of AI involvement.

The net effect is that workers aren’t just managing their actual tasks. They’re also managing "AI optics": deciding when to use AI, how much to reveal, and how to edit outputs so they don’t trigger suspicion. That’s an entirely new layer of cognitive "work" that didn’t exist two years ago.

None of the leaders I interviewed raised this tension explicitly. That’s telling. It suggests the “uncanny middle” is largely invisible to leadership. It’s happening at the individual contributor level, in silence, adding stress that no manager sees or measures.

AI Didn’t Create More Money

There’s a deeper structural issue that most AI burnout discussions skip entirely.

AI multiplies output. It does not multiply the economy. Just because your team can now produce four times the deliverables doesn’t mean revenue quadrupled. If your competitor adopted AI too, they’re producing at the same pace, competing on the same clients. But until meaningful competition forces prices down, the efficiency gains flow upward to margins and leadership, not downward to the workers absorbing the load. And companies don’t need to increase pay either. When jobs are scarce, it’s an employer’s market. The leverage sits with the people who own the tools, not the people operating them.

So no, companies aren’t going to triple your pay because you tripled your output with AI. They’re going to use the efficiency to stay competitive. And the employee absorbs the increased cognitive load without a corresponding increase in compensation.

This creates a perverse cycle. Workers who embrace AI and take on more responsibility don’t get proportionally rewarded. Workers who resist AI risk falling behind. And the ones who find AI reduces their workload to almost nothing face an existential question: if my job can be done in one hour, why does the company need me at all?

The 2026 ManpowerGroup Global Talent Barometer found that 72% of Singapore workers report recent burnout, while 58% fear AI-driven automation could replace their jobs within two years. 39% anticipate possible job loss within six months. This creates what the report calls a sense of being "trapped": burned out, but too insecure to leave.

I’ve heard this on the ground too. About half of the employees I’ve encountered, including blue-collar workers who’ve seen videos of AI with robotics handling construction and cooking, are concerned. Not necessarily because they’ve experienced AI at work, but because the narrative of replacement is everywhere. This fear-induced stress is real, even when it’s not directly work-related.

The Time Savings Illusion

Some will point to data showing AI does reduce workload in specific contexts. And they’re right. AI scribing tools in healthcare have demonstrated genuine burnout reduction. A randomised clinical trial of ambient AI scribes across 238 physicians found significant reductions in documentation time and improvements in burnout-related metrics. When AI is deployed explicitly to remove drudgery, with workload adjustments to match, it works.

But here’s the catch. In knowledge work, that’s not what’s happening. A lead talent development professional at a global semiconductor manufacturer described senior programmers who used to spend 30 minutes on a task now spending 5. What happens with the saved 25 minutes? They get redeployed into higher-level work. Not rest. Not reflection. More output.

At a pre-IPO fintech in Southeast Asia, a head of talent development described how vibe coding lets staff prototype products rapidly. But the speed doesn’t translate to shorter days. It translates to more ambitious projects, more iterations, more scope.

In Singapore, a 2024 UiPath survey found that 60% of workers use generative AI at work, the highest rate globally alongside Hong Kong. Among those workers, 62% say it saves them time, and over 40% report saving at least 10 hours per week. That sounds like relief. But I’d argue these workers are early-stage adopters, primarily using AI at Level 1: summarising documents, drafting emails, basic research. Once they discover what AI can actually build, once they start creating tools and taking on expanded responsibilities, those "saved" hours fill right back up.

I wrote about this in my previous article on AI integration levels. Most people in Singapore are at Level 1. They’re chatting with AI, not building with it. The time savings are real at Level 1. They evaporate at Level 2 and above.

So What Actually Helps?

If the problem is work design, then the solution is also work design. Not more tools. Not more training on how to prompt. Not another AI champion programme that upskills one person per team and leaves the other four behind.

The research points to a few conditions under which AI actually reduces burnout rather than amplifying it.

First, explicit workload reduction. When AI takes over a task, the old task needs to be formally removed from the person’s plate. Not absorbed into higher throughput targets. Actually removed. The healthcare scribe trials worked because the documentation burden was subtracted from the physician’s workload, not replaced with more patient volume.

Second, task redesign, not task addition. If an employee builds an AI tool that automates part of their role, the job description needs to change. They’re now responsible for maintaining and improving the tool, not for doing the old work AND managing the tool. Most organisations haven’t even started thinking about this.

Third, honest conversations about AI optics. If leadership expects people to use AI but the culture penalises visible AI use, you’ve created a contradiction that your employees pay for in stress. Be explicit about where AI use is welcome, expected, and celebrated. Don’t make people manage the appearance of effort alongside the effort itself.

Fourth, governance that enables, not just restricts. A head of talent development at a Southeast Asian fintech created an AI governance task force with senior leadership, infosec, and documentation leads to move from “cowboys” doing whatever they wanted to structured, risk-tiered AI adoption. That’s the balance: not blocking AI, but channelling it with clear guardrails so people know what they can and can’t do. Remove the ambiguity, and you remove a significant source of stress. (Our AI Governance programme helps organisations build exactly this kind of structured adoption framework.)

The Uncomfortable Question

If your people are burning out after you gave them AI, the problem isn’t the technology. It’s how you deployed it.

Did you reduce their workload, or just raise the bar? Did you redesign their roles, or just add AI to the pile? Did you give them clarity on what’s expected, or leave them guessing? Did you measure the cognitive cost, or only the output gains?

AI promised to lighten the load. For most organisations, it hasn’t. Not because AI can’t, but because leadership chose to use it as a throughput multiplier instead of a workload reducer.

The tool works. The design around it doesn’t. That’s on us.

What is AI burnout and why is it increasing?

AI burnout is the amplification of existing workplace burnout through AI-related technostress. It includes techno-overload (too many digital demands), techno-complexity (difficulty understanding AI tools), and techno-insecurity (fear of being replaced). 77% of employees using AI say it has added to their workload, not reduced it.

Why hasn’t AI reduced workloads as promised?

AI accelerates individual tasks, but organisations absorb those gains into higher throughput expectations rather than reducing working hours. Employees also face hidden cognitive costs: prompting, reviewing, validating, and editing AI output on top of their existing work. The time savings are real at basic usage levels but evaporate as AI use deepens.

See how the AI³ methodology redesigns work around AI, not just adds AI to the pile.

Explore AI³

Need to build AI governance that enables your teams, not blocks them?

AI Governance Programme

Written by

Arthoven Ng

Managing Director & Lead Trainer, Overpowered

Master of Arts in Professional Education

Arthoven builds AI training programmes that stick. He has trained teams at SIM, Ninja Van, finexis, CGC Malaysia, and House on the Hill Montessori. His AI³ methodology combines human development, AI tool-building, and intrapreneurial execution.

LinkedIn →


© 2026 Overpowered Pte. Ltd. All rights reserved.