Personal Synthesis · March 2026

Strategic
Equilibrium
2026

AI-Enabled Organisations — what four authoritative sources are collectively describing about the gap between AI capability and the human accountability infrastructure needed to deploy it at scale.

Synthesised by Elaine Mao · Senior Pursuit Strategist · Singapore · March 2026


01 — McKinsey: The Organisational Reality

Three forces.
Nine shifts. One pressure point.

The second edition of McKinsey's State of Organizations, published 19 February 2026, draws on a survey of more than 10,000 senior executives across 15 countries and 16 industries. McKinsey SoO 2026 The headline: sustained performance and value creation now outrank short-term resilience as the primary leadership goal. Three tectonic forces are reshaping how organisations operate, lead, and grow.

86%
feel unprepared to embed AI into daily operations
McKinsey SoO 2026
75%
are struggling to build and sustain a high-performance culture
McKinsey SoO 2026
Two-thirds
of leaders say their organisations are overly complex and inefficient
McKinsey SoO 2026
40%
name redefining process flows as the single biggest unlock in the next 1–2 years
McKinsey SoO 2026
84%
plan to expand shared-services centres within 1–2 years
McKinsey SoO 2026
47%
cite limited career progression as the biggest barrier to high-performance culture
McKinsey SoO 2026

McKinsey maps the three forces across nine shifts — organised by where they primarily act: on the organisation as a system, on teams and how work moves, and on individuals and how they lead. McKinsey SoO 2026

Force 1 — Technology Infusion
1
Unlocking the AI-enabled organisation
Technical + organisational transformation together — reimagine how work gets done, not just which tools are used
2
Humans and AI agents working together
Moving toward a co-intelligent workforce where humans and agents collaborate — human-over-the-loop as the design standard
3
From structure to flow
Familiar productivity plays are hitting diminishing returns. The frontier is in how work moves — redesigning workflows, reducing handoffs, clarifying decision rights
Force 2 — Economic & Geopolitical Disruption
4
Finding value in geopolitical uncertainty
Building resilient, flexible structures designed to adapt swiftly without sacrificing sustainability
5
From structure to shared services
84% of organisations plan to expand shared-services centres within 1–2 years — the question is whether those centres are redesigned for AI-first workflows or simply scaled with more people
6
Focusing on the core: doing the right thing with more intensity
Identify the performance moves that deliver outsize impact; select a few areas to genuinely excel; allocate budget and talent accordingly. Only 30% of organisations reallocate resources enterprise-wide.
Force 3 — Workforce Transformation
7
Aiming higher with a new performance edge
Unleash human capital by focusing on both people and performance. Fewer than 25% of organisations achieve sustained impact. This requires culture, management practices, and employee health — not just ambition. Only 20% believe non-financial rewards can drive performance.
8
Sharpening the focus on diversity and inclusion
Four in five organisations are maintaining or expanding D&I efforts. DEI as a strategic performance driver — not compliance. ~50% of organisations that scaled back D&I expect to bring efforts back within 1–2 years.
9
Reinventing leadership: leading from the inside out
Leaders must take an "inside out" approach — leading oneself and leading others are now intertwined. AI puts greater emphasis on the human aspects of leadership. Reflective leaders are measurably more adaptive: 30% of reflective leaders believe their organisations can quickly adapt to change, vs only 17% of non-reflective leaders.
1

Technology infusion — AI, automation, and data analytics

Organisations are being asked to reimagine how work gets done, redefine end-to-end processes, and rethink traditional structures. McKinsey SoO 2026 McKinsey describes the shift as structure to flow: familiar productivity plays — restructuring, delayering, cost-cutting — are hitting diminishing returns. The bigger opportunity lies in how work moves across the enterprise. Simplify first. Then automate where it makes sense.

The scale of unreadiness is striking: 86% feel unprepared to embed AI into daily operations, yet 43% cite productivity as their top 2026 priority, with 61% feeling high pressure to deliver. McKinsey SoO 2026 This is not a capability gap. It is an architecture gap.

2

Economic disruption and geopolitical uncertainty

Leaders in high-pressure organisations are less likely to report employee willingness to meet greater demands (43%) compared with leaders in lower-pressure environments (50%). McKinsey SoO 2026 Productivity pressure without system change produces fragility, not performance. The research consistently points to the same diagnosis: the constraint is organisational design, not individual capability.

3

Workforce transformation and evolving expectations

Shifting demographics and new tech-driven working models require organisations to transcend traditional structures, redefine leadership, and refocus on performance. McKinsey SoO 2026 The top barriers to high-performance culture are limited career progression (47%), lack of targeted incentives (43%), and disengaged employees (38%) — not technology readiness. The human system is the constraint.

McKinsey organises where the shifts land

The nine shifts act across three levels simultaneously — not all shifts hit the same place in the organisation. McKinsey SoO 2026

Organisation
How the enterprise is redesigned
  • AI-enabled org (Shift 1)
  • From structure to flow (Shift 3)
  • Geopolitical resilience (Shift 4)
  • Shared services expansion (Shift 5)
  • Focusing on the core (Shift 6)
Team
How work and collaboration change
  • Humans + AI agents (Shift 2)
  • New performance edge (Shift 7)
  • D&I as performance driver (Shift 8)
Individual
How roles and leadership evolve
  • Leading from the inside out (Shift 9)
  • Leading self + C-suite peers + networks simultaneously
  • Reflective leaders 30% vs 17% on adaptability

"In an uncertain world, sustained performance and value creation are the priority, ahead of short-term gains."

McKinsey State of Organizations 2026 — central thesis

What McKinsey says gets it right

Three formulas from the data — each one a specific answer to a specific version of "why aren't we making progress?" McKinsey SoO 2026

Simplify → then automate

The AI Adoption Formula

Adding AI to a complex, poorly designed process does not improve it — it amplifies the complexity. 40% of leaders name process flow redesign as the single biggest productivity unlock. The sequence matters: simplify first, automate where it genuinely makes sense.

People outcomes + Performance outcomes = 4×

The Sustained Performance Formula

Organisations giving equal weight to people and performance outcomes are four times more likely to sustain top-tier financial results for nine out of ten years. As Rolls-Royce CPO Sarah Armstrong states directly: "If you want to change the performance management of the business, you've got to change the whole system — not just one piece."

Pressure − system redesign = fragility

The Pressure Formula

Leaders in high-pressure organisations are measurably less likely to report employee willingness to meet greater demands (43% vs 50% in lower-pressure environments). Employees show more reduced commitment (23% vs 14%). Productivity pressure without system redesign does not unlock performance — it depletes it.

What Singapore is actually funding

Delivered by PM Lawrence Wong on 12 February 2026, Budget 2026 translates the macro AI imperative into specific, funded mechanisms. SG Budget 2026 These are enacted policies with implementation timelines — not aspirational statements.

Enterprise

Champions of AI Programme

Tailored support for companies committing to comprehensive, organisation-wide AI transformation. Participating companies receive customised enterprise transformation guidance and workforce training, and are expected to set benchmarks for their sectors. SG Budget 2026

Enterprise

Productivity Solutions Grant — AI Expanded

The PSG now covers a wider range of digital and AI-enabled solutions for firms of all sizes. Every company, regardless of scale, can access tools to work smarter and compete more effectively. SG Budget 2026

Enterprise

Enterprise Innovation Scheme — AI Uplift

For YA 2027 and 2028, businesses can claim 400% tax deductions on up to S$50,000 of qualifying AI expenditures per year. SG Budget 2026 This materially changes the economics of AI adoption for investment-minded firms.
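As a back-of-envelope illustration of why this changes the economics: assuming Singapore's 17% headline corporate tax rate and full utilisation of the deduction, the arithmetic looks like this. The function and constants are my own sketch, not from the Budget, and this is not tax advice — actual outcomes depend on eligibility and profitability.

```python
# Illustrative sketch of the EIS AI uplift economics (not tax advice).
# Assumes the 17% headline corporate tax rate and full utilisation of
# the deduction; eligibility conditions and caps are simplified.

CORPORATE_TAX_RATE = 0.17    # headline rate, assumed fully applicable
DEDUCTION_MULTIPLIER = 4.0   # 400% deduction
QUALIFYING_CAP = 50_000      # S$ cap on qualifying AI spend per year

def effective_cost(spend_sgd: float) -> float:
    """Post-tax cost of AI spend under the 400% deduction (illustrative)."""
    qualifying = min(spend_sgd, QUALIFYING_CAP)
    deduction = qualifying * DEDUCTION_MULTIPLIER
    tax_saving = deduction * CORPORATE_TAX_RATE
    return spend_sgd - tax_saving

# S$50,000 of qualifying spend: S$200,000 deduction, S$34,000 tax saving
print(effective_cost(50_000))  # → 16000.0
```

On these assumptions, S$50,000 of qualifying spend carries an effective post-tax cost of S$16,000 — roughly a two-thirds reduction.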

Workforce

WSG + SSG Merger

Workforce Singapore and SkillsFuture Singapore merge into a single statutory board. Training, career guidance, and job matching under one agency. SG Budget 2026 Course relevance will increasingly be measured by employment outcomes, not just completion.

Individual

Free Premium AI Tools — 6 Months

Singaporeans completing selected AI training courses receive six months of free access to premium AI tools. SG Budget 2026 This is a hands-on fluency mandate. The message: use the tools, build real judgment in practice.

Ecosystem

S$37B Research, Innovation & Enterprise 2030

Singapore's long-term investment anchored in AI, advanced semiconductors, decarbonisation, and quantum computing. SG Budget 2026 A new AI Park at One-North brings founders, researchers, and practitioners together for collaboration and commercialisation.

"We must aim higher, move faster, and be prepared to take calculated risk."

PM Lawrence Wong — Budget Statement, Parliament, 12 February 2026
03 — Singapore: The Governance Architecture

Capability is moving faster
than accountability design

Published 22 January 2026, IMDA's Model AI Governance Framework for Agentic AI addresses the next phase of AI adoption: not generative AI, but AI that plans, decides, and acts across multiple steps on behalf of humans. IMDA MGF Agentic AI 2026 The framework's premise is both simple and urgent — capability without governance architecture creates systemic risk, not just operational errors.

What is Agentic AI, and why does it change the governance challenge?

Unlike generative AI that responds to prompts, agentic AI systems can plan across multiple steps, use tools, access external systems, and act with varying degrees of autonomy to complete objectives on behalf of humans. IMDA MGF 2026 Coding assistants, customer service agents, and enterprise productivity workflows are already active in workplaces.

The risk profile changes significantly at scale. A generative AI error is contained. An agentic AI error can cascade — a hallucinated inventory figure from one agent could trigger downstream agents to reorder incorrectly across an entire supply chain. IMDA MGF 2026 The governance challenge is not preventing individual errors. It is designing systems where errors are bounded, traceable, and correctable.
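One way to read "bounded, traceable, correctable" concretely: a guardrail that clamps an agent-proposed action to a human-set limit and logs the correction for later review. This is a hypothetical sketch — the inventory scenario comes from the framework's cascade example, but the function, bound, and logging are my own illustration, not IMDA's.

```python
# Hypothetical guardrail sketch in the spirit of "bounded, traceable,
# correctable". The SKU, bound, and function names are illustrative.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrail")

MAX_REORDER_UNITS = 1_000  # hard bound, set by humans at the planning stage

def bounded_reorder(sku: str, requested_units: int) -> int:
    """Clamp an agent-proposed reorder to a human-set bound and log it."""
    approved = min(requested_units, MAX_REORDER_UNITS)
    if approved != requested_units:
        # Traceable: the correction is logged for human review;
        # correctable: the original request is preserved in the record.
        log.warning("Clamped %s reorder from %d to %d units",
                    sku, requested_units, approved)
    return approved

# A hallucinated figure of 250,000 units is bounded, not propagated downstream
print(bounded_reorder("SKU-42", 250_000))  # → 1000
```

The design point is that the bound is decided before deployment, at the planning stage — the agent never gets to negotiate it at runtime.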

IMDA identifies four organisational responses required for responsible deployment:

1
Assess & Bound Risks Upfront
Understand scope, reversibility, and autonomy of agent actions before deployment. Design limits at the planning stage. IMDA MGF
2
Make Humans Meaningfully Accountable
Define checkpoints requiring human approval, especially for high-stakes or irreversible actions. Regularly audit oversight to ensure it remains substantive. IMDA MGF
3
Implement Technical Controls
Test agents for safety, policy adherence, and tool use before deployment. Gradually roll out with continuous monitoring. IMDA MGF
4
Enable End-User Responsibility
Ensure users understand the agent's range of actions and their own responsibilities. Train employees to manage human-agent interactions effectively. IMDA MGF

The critical design challenge IMDA names is automation bias: as agents become more capable, humans are more likely to approve their outputs without genuine scrutiny. IMDA MGF 2026 "Human-in-the-loop" as a governance principle only works if the human's review is substantive, not ceremonial. This requires designing checkpoints thoughtfully — not just ensuring a human is present, but ensuring the human's judgment is actually engaged.

"A balance needs to be struck — continuous human oversight over all agent workflows becomes impractical at scale."

IMDA Model AI Governance Framework for Agentic AI, Version 1.0, January 2026

What responsible human-AI collaboration looks like when designed intentionally

AI Singapore's 100 Experiments programme has engaged over 260 companies and started over 50 co-development projects across healthcare, finance, manufacturing, and government. AISG Use Cases Vol.2 Two cases demonstrate how governance principles translate into real organisational decisions.

Manufacturing · IBM

IBM Manufacturing — Shifting from human-in-the-loop to human-over-the-loop

IBM's Quality Assurance engineers initially reviewed all AI predictions regardless of risk level. Working with AISG, they evolved to a human-over-the-loop approach: engineers only review batches flagged as high-risk by the model, making the final call on whether to release for sale. AISG Use Cases Vol.2

Outcome: Assessment time fell from around 30 minutes per batch to a matter of minutes. The model achieved 85% prediction accuracy, exceeding the 80% target specification.
Healthcare · RenalTeam

RenalTeam — Preserving human judgment for irreversible decisions

For dialysis patients, false predictions carry direct health consequences. AISG and RenalTeam jointly maintained a human-in-the-loop model: nurses use AI prediction as a support tool for a second opinion, but retain the final hospitalisation decision. AISG Use Cases Vol.2

Design principle: reversibility of the action determines the level of human oversight required. Higher stakes = tighter human control, not looser.
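The principle can be sketched as a small decision rule. The action types, risk scores, and threshold below are my own illustration of the pattern the two cases share, not values drawn from the AISG report:

```python
# Illustrative sketch: oversight level chosen by reversibility and risk.
# Names and thresholds are hypothetical, not from the AISG cases.
from dataclasses import dataclass
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "human approves every action"    # RenalTeam pattern
    OVER_THE_LOOP = "human reviews flagged cases"  # IBM pattern
    AUTONOMOUS = "agent acts within guardrails"

@dataclass
class Action:
    description: str
    reversible: bool         # can the action be undone cheaply?
    model_risk_score: float  # 0.0–1.0, as flagged by the model

def required_oversight(action: Action, risk_threshold: float = 0.7) -> Oversight:
    """Higher stakes mean tighter human control, not looser."""
    if not action.reversible:
        return Oversight.IN_THE_LOOP       # irreversible: human decides
    if action.model_risk_score >= risk_threshold:
        return Oversight.OVER_THE_LOOP     # reversible but flagged: review
    return Oversight.AUTONOMOUS            # low-stakes, reversible: proceed

print(required_oversight(
    Action("hospitalisation decision", reversible=False, model_risk_score=0.2)))
# → Oversight.IN_THE_LOOP
```

The point of writing it down this way is that the boundary becomes an explicit, reviewable artefact rather than an assumption — which is what makes it possible to revisit as capability and trust develop.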

The governance insight both cases share

Neither IBM nor RenalTeam simply deployed AI and observed. Both actively designed the boundary between what AI decides and what humans decide — and revisited that boundary as capability and trust developed over time. AISG Use Cases Vol.2 That design work is ongoing, not one-time. It does not appear in most AI transformation roadmaps. It shows up in how specific workflows are structured, reviewed, and adjusted.

Four sources.
One gap.

Read separately, these are four distinct documents from four different vantage points — global management research, national fiscal policy, regulatory governance, and practitioner case studies. Read together, they are all pointing at the same structural challenge.

McKinsey names it as an organisational architecture problem. 86% of organisations feel unprepared. The problem is not that AI tools are unavailable — it is that process flows, decision rights, and governance structures have not been redesigned to accommodate them. "Structure to flow" is a measurable productivity frontier, and two-thirds of leaders already know they are too complex to reach it. McKinsey SoO 2026
Singapore Budget 2026 names it as an economic urgency and funds the supply side. The Champions of AI programme, PSG expansion, WSG+SSG merger, and the 400% EIS tax deduction are all directed at the same gap: organisations and individuals capable of AI adoption who have not yet built the systems, fluency, and governance structures to sustain it. SG Budget 2026
IMDA names it as a governance design problem. "Meaningful human accountability" cannot be assumed — it must be deliberately designed into agentic workflows. The risk is not AI acting autonomously. The risk is humans approving AI outputs without genuine scrutiny, and organisations discovering this only after the cascade. IMDA MGF 2026
The AISG use cases show what getting this right looks like in practice. IBM and RenalTeam actively designed the human-AI boundary — deciding which decisions needed human-in-the-loop, which warranted human-over-the-loop, and how those boundaries shift as capability and trust develop. That design work is ongoing. And someone had to own it. AISG Use Cases Vol.2

"The gap is not between AI capability and business appetite. It is between AI capability and the human accountability infrastructure needed to deploy it responsibly at scale."

Synthesis observation — E. Mao, March 2026

05 — Decoded at Four Levels

The same shift.
Four different realities.

Research lives in abstractions. People live in specifics. This is my translation of what these four sources are collectively describing — into the questions that actually matter on Monday morning.

Industry Level

The competitive moat is shifting from capability to accountability architecture

Industries that win in the next cycle will not be those with the most sophisticated AI tools. They will be those who have redesigned how work moves through their organisation and built deliberate human accountability into agentic workflows. McKinsey's "structure to flow" shift and IMDA's governance framework describe the same competitive frontier from different angles. McKinsey SoO 2026 IMDA MGF

Budget 2026's Champions of AI programme is Singapore's signal to industry: transformation is the standard now. Organisations still running legacy approval layers on top of new AI capabilities — without redesigning those layers — are building technical debt at the process level. SG Budget 2026

Organisation Level

86% unready is a structural signal, not a training problem

When 86% of organisations report they are not prepared to embed AI into daily operations, McKinsey SoO 2026 the instinct is to declare a skills gap. The combined data suggests otherwise. Two-thirds of leaders know they are too complex. 40% name process flow redesign as the biggest unlock. IMDA confirms: the governance challenge is organisational design, not individual competence. IMDA MGF 2026

Shared services centres, due to expand in 84% of organisations, McKinsey SoO 2026 will become either the bottleneck or the breakthrough — depending on whether they are redesigned for agentic workflows or merely expanded with more people managing the same processes.

Community Level

Singapore is building infrastructure for a specific kind of professional

The WSG+SSG merger, the AI learning pathway redesign, the free premium tool access, and the Champions of AI programme are not separate announcements. They form an architecture designed to produce a professional who can do more than use AI tools — one who can navigate the human and organisational systems around those tools. SG Budget 2026

The AISG use cases reinforce why this matters: IBM and RenalTeam's outcomes depended not just on the quality of the AI model, but on professionals who could define the right human-AI boundary and adapt it as technology and trust evolved. AISG Use Cases Vol.2 Budget 2026 is funding the supply side of that capacity. Organisations need to build the demand side.

Individual Level

The shift from creator to editor to orchestrator is real — and mostly unacknowledged

McKinsey captures the productivity pressure. IMDA captures the governance requirement. Neither fully names what individuals experience inside this transition: the identity disorientation of moving from someone who makes to someone who directs, reviews, and takes accountability for outputs they did not generate. McKinsey SoO 2026

This is IMDA's automation bias risk at the individual level: IMDA MGF 2026 the tendency to approve AI outputs without genuine scrutiny — not from laziness, but from uncertainty about where human judgment is actually required. That uncertainty is not resolved by training alone. It is resolved by clearer organisational design, and by cultures that treat human judgment as the point — not the bottleneck.

What I observe from inside the change

In bid rooms, workshops, lectures, partner conversations, and enterprise and SME events across Singapore, people are discussing AI adoption, collaborating on it, and in some cases transforming their organisations around it. And here is what I consistently observe.

McKinsey describes 86% of organisations as unprepared to embed AI into daily operations. McKinsey SoO 2026 IMDA names "meaningful human accountability" as the design requirement most organisations have not yet built. IMDA MGF What these numbers do not fully capture is how this manifests in practice: as a very specific kind of paralysis. Teams that are capable of using AI tools, but uncertain about when their judgment is needed, when to pause the agent, and who is ultimately accountable for the outcome.

The organisations moving fastest are not necessarily the ones with the best AI tools. They are the ones where someone — often not the most senior person in the room — has taken on the work of translating between AI capability and organisational process. Not as a policy function. As a daily, practical, often invisible practice.

IBM evolved their human-AI boundary from in-the-loop to over-the-loop as capability and trust developed. AISG Use Cases Vol.2 That evolution did not happen automatically. Someone had to design it, advocate for it, and hold accountability for it. Budget 2026 is funding the infrastructure for more of these people to exist. SG Budget 2026 The question is whether organisations are designing the roles and processes to put them to work.

Elaine Mao · Senior Pursuit Strategist · Singapore · March 2026
07 — The Human Experience

What this shift feels like from the inside.

The four sources describe the structural gap. What they cannot describe is the lived experience of navigating it in a real organisation, in a real role, in real time. This section names that experience — the friction, the flow, and the identity shift that sits between them.

The Friction

Fitting jet engines onto horse-drawn carriages.

The friction is rarely about AI capability. It is about the gap between what AI can do and what the organisational architecture will permit. In bid and pursuit work, AI can synthesise a competitive landscape in minutes, draft a qualification rationale, stress-test a commercial assumption. But if the approval chain still requires three layers of sign-off designed for a world where humans did all of this manually — the speed gain evaporates, and creates a new kind of cognitive drag.

  • Boundaries between what AI decides and what humans decide are unclear or assumed, not designed
  • Legacy approval processes built for human-speed workflows now create bottlenecks at AI speed
  • The anxiety of moving from creator to editor — doing real work, but feeling like less
The Flow

When the E layer works.

When the boundary between AI and human work is designed deliberately — not assumed — something shifts. AI handles breadth: scanning, pattern-matching, first-draft synthesis. The human handles depth: the strategic judgment call, the client relationship read, the ethical decision a model cannot make. That is not a smaller role. It is a higher one. The mental bandwidth freed from baseline data gathering goes directly into the quality of thinking that actually moves outcomes.

  • Clarity about when human judgment is required — and when it is not
  • More time for the strategic and relational work that cannot be delegated
  • AI outputs that have been genuinely reviewed, not ceremonially rubber-stamped
The Shift Nobody Names

The move from creator to orchestrator is not a technical transition. It is an identity one.

This is a personal observation — not drawn directly from the four sources, but informed by what they collectively describe and tested against my own experience. McKinsey names this at the organisational level (86% unprepared, McKinsey SoO 2026), IMDA names it at the governance level (automation bias, IMDA MGF 2026). Neither fully names what it feels like at the individual level.

There is a particular disorientation in building something you did not generate. In being the person who shaped the question, reviewed the output, made the call — but did not write every line, draft every section, or build every slide yourself. The work is still yours. The accountability is still yours. But the doing has changed shape. For people whose professional identity was built on the quality of what they could make — on being the person who created — that change is not small. And it is rarely named.

This is where most AI transition conversations stop too soon. They name the productivity gain without naming the identity cost. They describe the destination without acknowledging the disorientation of the journey. Getting this right means organisations need to design not just for capability transfer, but for the human experience of that transfer — the intermediate state where the old role no longer fits and the new one has not yet been fully recognised or valued.

That is the gap I am most interested in. Not as a researcher. As someone navigating it — and watching others navigate it — every week.

Elaine Mao · Personal observation · Singapore · March 2026
08 — A Working Hypothesis

The CLEAR Cycle — mapping the gap

Based on patterns I have observed across these four sources and my own professional context, I have been mapping a practitioner's hypothesis I am calling the CLEAR Cycle. It is not a finished model. It is shared here as an invitation — to examine, test, and improve together with people who are closer to specific problems than any single framework can reach.

An Operating Model for AI-Enabled Organisations

Five layers. A loop that builds capability over time. The E layer — Enable — is where McKinsey's "structure to flow" challenge lives, where IMDA's "meaningful human accountability" must be operationalised, and where the IBM and RenalTeam cases show the real design work happening. It is also the layer most organisations have not yet designed for.

Layer · Owner · What it does
C — Context · Human · Define the question, success criteria, risk appetite, and political landscape. This is IMDA's "assess and bound risks upfront" at the human level — what are we actually trying to decide, and what is the cost of being wrong?
L — Learning · AI · Process signals, surface patterns, generate options at scale. This is where AI does what humans cannot — breadth, speed, consistency. McKinsey's agentic AI expansion and shared-services transformation live here.
E — Enable · Human + AI · The gap layer. Translate AI intelligence into something the organisation can navigate and act on. This is IMDA's "meaningful human accountability" in practice — designed checkpoints, defined decision rights, substantive human review. Neither fully human nor fully automated. The work IBM and RenalTeam were actually doing.
A — Act · AI · Execute autonomously within human-defined guardrails. Agents, automation, scaled operations. IMDA's governance framework defines what responsible autonomous action looks like here — bounded, traceable, correctable.
R — Review · Human · Go / No-Go gate. Accountability. Outcome ownership. R feeds back into L — outcomes recalibrate the intelligence layer and adjust trust thresholds over time. IBM's evolution from in-the-loop to over-the-loop happened through exactly this feedback mechanism.

How the loop works — as a continuous process

The CLEAR Cycle is not a one-time sequence. It is a compounding loop. Each pass builds on the last: the Review layer does not just close the cycle — it feeds new intelligence back into the Learning layer, recalibrating what AI processes next time and adjusting trust thresholds based on what human review found. Over time, the Enable layer becomes better designed, more clearly owned, and more resilient as the organisation builds shared experience of where the boundaries actually sit.

IBM's evolution from human-in-the-loop to human-over-the-loop happened through exactly this mechanism. Not by policy decision — by accumulated passes through the loop, where each Review built enough trust to shift the boundary. The cycle makes that evolution deliberate rather than accidental.
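As a sketch, the compounding loop might look like this. The class, thresholds, and trust-adjustment rule are my own illustration of the mechanism described above — a practitioner's toy model, not an artefact from any of the four sources:

```python
# Toy sketch of the CLEAR loop: review outcomes feed back into the trust
# threshold, shifting the human-AI boundary over accumulated passes.
# All names and numbers are illustrative.

class ClearCycle:
    def __init__(self, trust_threshold: float = 0.9):
        self.trust_threshold = trust_threshold  # adjusted by Review over time
        self.history: list[bool] = []           # outcomes of past passes

    def run_pass(self, context: str, ai_accuracy: float) -> str:
        # C — Context (human): the question and risk appetite (the argument)
        # L — Learning (AI): breadth — here summarised as a measured accuracy
        # E — Enable (human + AI): the designed checkpoint
        needs_human_review = ai_accuracy < self.trust_threshold
        # A — Act (AI): execute within guardrails (omitted in this sketch)
        # R — Review (human): outcome ownership, fed back into L
        passed = ai_accuracy >= 0.8  # e.g. IBM's 80% target specification
        self.history.append(passed)
        if passed and len(self.history) >= 3 and all(self.history[-3:]):
            # Three successful passes in a row shift the boundary toward
            # over-the-loop by lowering the review threshold slightly.
            self.trust_threshold = max(0.7, round(self.trust_threshold - 0.05, 2))
        return "human review" if needs_human_review else "auto-proceed"
```

On this toy model, an agent running at 85% accuracy is reviewed on every early pass; after three consecutive successful reviews the threshold drops and later passes proceed without review — trust shifts the boundary deliberately, through accumulated evidence, rather than by a one-off policy decision.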

C — Context → L — Learning → E — Enable → A — Act → R — Review → back to L

The CLEAR Cycle is my read — drawn from these four sources and tested against my own observations across enterprise and SME contexts in Singapore. It is not a direct output from any of these reports; it is a practitioner's inference from reading them together. If it maps to something real in your organisation — or doesn't — I'd genuinely like to hear where it lands.

All claims in this report are drawn directly from these four sources

1
McKinsey State of Organizations 2026
Krivkovich, Klingler, Maor, Guggenberger et al. · McKinsey & Company · Published 19 February 2026 · 74 pages · Survey of 10,000+ senior executives, 15 countries, 16 industries
mckinsey.com/capabilities/people-and-organizational-performance
2
Singapore Budget 2026
PM Lawrence Wong · Ministry of Finance, Singapore · Budget Statement, Parliament, 12 February 2026 · EnterpriseSG programme details confirmed via enterprisesg.gov.sg and singaporebudget.gov.sg
singaporebudget.gov.sg · enterprisesg.gov.sg/campaigns/budget-2026
3
Model AI Governance Framework for Agentic AI
Infocomm Media Development Authority (IMDA), Singapore · Version 1.0 · Published 22 January 2026 · Living document open for feedback and case study contributions
imda.gov.sg — Model AI Governance Framework for Agentic AI (PDF)
4
AI Singapore × Model AI Governance Framework — Use Cases Vol. 2
AI Singapore · Published in partnership with IMDA · Featuring IBM Manufacturing Solutions, RenalTeam, Sompo Holdings Asia, VersaFleet, Google, Microsoft, TAIGER, and others
file.go.gov.sg/ai-gov-use-cases-2.pdf

"Which layer is currently the weakest in your organisation — and who is actually accountable for it?"

Not asking for the polished answer. The honest one is more useful. That is where the real work tends to be.

Share your thinking on LinkedIn