AI-Enabled Organisations — what four authoritative sources are collectively describing about the gap between AI capability and the human accountability infrastructure needed to deploy it at scale.
The second edition of McKinsey's State of Organizations, published 19 February 2026, draws on more than 10,000 senior executives across 15 countries and 16 industries (McKinsey SoO 2026). The headline: sustained performance and value creation now outrank short-term resilience as the primary leadership goal. Three tectonic forces are reshaping how organisations operate, lead, and grow.
McKinsey maps the three forces across nine shifts — organised by where they primarily act: on the organisation as a system, on teams and how work moves, and on individuals and how they lead (McKinsey SoO 2026).
Organisations are being asked to reimagine how work gets done, redefine end-to-end processes, and rethink traditional structures (McKinsey SoO 2026). McKinsey describes the shift as "structure to flow": familiar productivity plays — restructuring, delayering, cost-cutting — are hitting diminishing returns. The bigger opportunity lies in how work moves across the enterprise. Simplify first. Then automate where it makes sense.
The scale of unreadiness is striking: 86% of organisations feel unprepared to embed AI into daily operations, yet 43% cite productivity as their top 2026 priority and 61% report high pressure to deliver (McKinsey SoO 2026). This is not a capability gap. It is an architecture gap.
Leaders in high-pressure organisations are less likely to report employee willingness to meet greater demands (43%) compared with leaders in lower-pressure environments (50%) (McKinsey SoO 2026). Productivity pressure without system change produces fragility, not performance. The research consistently points to the same diagnosis: the constraint is organisational design, not individual capability.
Shifting demographics and new tech-driven working models require organisations to transcend traditional structures, redefine leadership, and refocus on performance (McKinsey SoO 2026). The top barriers to high-performance culture are limited career progression (47%), lack of targeted incentives (43%), and disengaged employees (38%) — not technology readiness. The human system is the constraint.
McKinsey organises where the shifts land
The nine shifts act across three levels simultaneously — not all shifts hit the same place in the organisation (McKinsey SoO 2026).
"In an uncertain world, sustained performance and value creation are the priority, ahead of short-term gains."
McKinsey State of Organizations 2026 — central thesis

What McKinsey says gets it right
Three formulas from the data — each one a specific answer to a specific version of "why aren't we making progress" (McKinsey SoO 2026).
Adding AI to a complex, poorly designed process does not improve it — it amplifies the complexity. 40% of leaders name process flow redesign as the single biggest productivity unlock. The sequence matters: simplify first, automate where it genuinely makes sense.
Organisations giving equal weight to people and performance outcomes are four times more likely to sustain top-tier financial results for nine out of ten years. As Rolls-Royce CPO Sarah Armstrong states directly: "If you want to change the performance management of the business, you've got to change the whole system — not just one piece."
Leaders in high-pressure organisations are measurably less likely to report employee willingness to meet greater demands (43% vs 50% in lower-pressure environments). Employees show more reduced commitment (23% vs 14%). Productivity pressure without system redesign does not unlock performance — it depletes it.
Delivered by PM Lawrence Wong on 12 February 2026, Budget 2026 translates the macro AI imperative into specific, funded mechanisms (SG Budget 2026). These are enacted policy with implementation timelines — not aspirational statements.
Tailored support for companies committing to comprehensive, organisation-wide AI transformation. Participating companies receive customised enterprise transformation guidance and workforce training, and are expected to set benchmarks for their sectors (SG Budget 2026).
The PSG now covers a wider range of digital and AI-enabled solutions for firms of all sizes. Every company, regardless of scale, can access tools to work smarter and compete more effectively (SG Budget 2026).
For YA 2027 and 2028, businesses can claim 400% tax deductions on up to S$50,000 of qualifying AI expenditure per year (SG Budget 2026). This materially changes the economics of AI adoption for investment-minded firms.
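To make the arithmetic concrete, here is a rough sketch of what the deduction could be worth, assuming Singapore's 17% headline corporate tax rate and enough taxable income to absorb the full deduction. The function and figures are illustrative, not taken from the Budget documents:

```python
def ai_tax_saving(qualifying_spend: float,
                  cap: float = 50_000,          # annual cap per SG Budget 2026
                  deduction_rate: float = 4.0,  # 400% deduction
                  corporate_tax_rate: float = 0.17) -> float:
    """Rough tax saving from the 400% deduction on qualifying AI spend.

    Illustrative only: ignores other reliefs, rebates, and carry-forwards.
    """
    deductible = deduction_rate * min(qualifying_spend, cap)
    return deductible * corporate_tax_rate

# A firm spending S$50,000: deduction = 4 x 50,000 = S$200,000,
# worth roughly S$34,000 in tax saved at a 17% rate.
print(ai_tax_saving(50_000))
```

In other words, a capped S$50,000 spend effectively comes back as roughly two-thirds of its cost in tax saved, which is why the deduction materially changes the adoption calculus.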
Workforce Singapore and SkillsFuture Singapore merge into a single statutory board, bringing training, career guidance, and job matching under one agency (SG Budget 2026). Course relevance will increasingly be measured by employment outcomes, not just completion.
Singaporeans completing selected AI training courses receive six months of free access to premium AI tools (SG Budget 2026). This is a hands-on fluency mandate. The message: use the tools, build real judgment in practice.
Singapore's long-term investment is anchored in AI, advanced semiconductors, decarbonisation, and quantum computing (SG Budget 2026). A new AI Park at One-North brings founders, researchers, and practitioners together for collaboration and commercialisation.
"We must aim higher, move faster, and be prepared to take calculated risk."
PM Lawrence Wong — Budget Statement, Parliament, 12 February 2026

Published 22 January 2026, IMDA's Model AI Governance Framework for Agentic AI addresses the next phase of AI adoption: not generative AI, but AI that plans, decides, and acts across multiple steps on behalf of humans (IMDA MGF Agentic AI 2026). The framework's premise is both simple and urgent — capability without governance architecture creates systemic risk, not just operational errors.
Unlike generative AI that responds to prompts, agentic AI systems can plan across multiple steps, use tools, access external systems, and act with varying degrees of autonomy to complete objectives on behalf of humans (IMDA MGF 2026). Coding assistants, customer service agents, and enterprise productivity workflows are already active in workplaces.
The risk profile changes significantly at scale. A generative AI error is contained. An agentic AI error can cascade — a hallucinated inventory figure from one agent could trigger downstream agents to reorder incorrectly across an entire supply chain (IMDA MGF 2026). The governance challenge is not preventing individual errors. It is designing systems where errors are bounded, traceable, and correctable.
IMDA identifies four organisational responses required for responsible deployment.
The critical design challenge IMDA names is automation bias: as agents become more capable, humans are more likely to approve their outputs without genuine scrutiny (IMDA MGF 2026). "Human-in-the-loop" as a governance principle only works if the human's review is substantive, not ceremonial. This requires designing checkpoints thoughtfully — not just ensuring a human is present, but ensuring the human's judgment is actually engaged.
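One concrete way to test whether a checkpoint is substantive rather than ceremonial is to seed the review stream with known-flawed "canary" outputs and measure how many get approved. This is a sketch of my own, not a mechanism from the IMDA framework; the function name and rates are illustrative:

```python
import random
from typing import Callable, List, Tuple

def review_with_canaries(outputs: List[str],
                         reviewer: Callable[[str], bool],
                         flawed_canaries: List[str],
                         canary_rate: float = 0.1,
                         seed: int = 0) -> Tuple[List[str], int]:
    """Mix known-flawed canaries into a reviewer's queue.

    Every approved canary is direct evidence of automation bias:
    the reviewer signed off on a known flaw without genuine scrutiny.
    """
    rng = random.Random(seed)
    approved, missed = [], 0
    for output in outputs:
        if flawed_canaries and rng.random() < canary_rate:
            if reviewer(rng.choice(flawed_canaries)):
                missed += 1                # rubber-stamped a known flaw
        if reviewer(output):
            approved.append(output)
    return approved, missed
```

A rising `missed` count over time is an early, measurable signal that a human-in-the-loop checkpoint has drifted into ceremony.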
"A balance needs to be struck — continuous human oversight over all agent workflows becomes impractical at scale."
IMDA Model AI Governance Framework for Agentic AI, Version 1.0, January 2026

AI Singapore's 100 Experiments programme has engaged over 260 companies and started over 50 co-development projects across healthcare, finance, manufacturing, and government (AISG Use Cases Vol.2). Two cases demonstrate how governance principles translate into real organisational decisions.
IBM's Quality Assurance engineers initially reviewed all AI predictions regardless of risk level. Working with AISG, they evolved to a human-over-the-loop approach: engineers only review batches flagged as high-risk by the model, making the final call on whether to release for sale (AISG Use Cases Vol.2).
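The over-the-loop pattern can be sketched as a simple risk-tiered router. The threshold value and batch structure here are my assumptions for illustration, not IBM's actual pipeline:

```python
from typing import List, Tuple

def route_batches(batches: List[Tuple[str, float]],
                  risk_threshold: float = 0.7) -> Tuple[List[str], List[str]]:
    """Human-over-the-loop routing: auto-release low-risk batches,
    queue high-risk ones for an engineer's final call.

    `risk_threshold` is a human-set policy value, and the lever that
    moves as trust in the model develops.
    """
    auto_released, human_review = [], []
    for batch_id, risk_score in batches:
        if risk_score >= risk_threshold:
            human_review.append(batch_id)      # engineer makes the call
        else:
            auto_released.append(batch_id)     # released without review
    return auto_released, human_review

released, queued = route_batches([("B1", 0.12), ("B2", 0.91), ("B3", 0.40)])
# B1 and B3 auto-release; B2 waits for an engineer's decision.
```

The design point is that the human's attention is concentrated where the model itself says it is least certain, which is what keeps the remaining reviews substantive.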
For dialysis patients, false predictions carry direct health consequences. AISG and RenalTeam jointly maintained a human-in-the-loop model: nurses use the AI's prediction as a supporting second opinion, but retain the final hospitalisation decision (AISG Use Cases Vol.2).
Neither IBM nor RenalTeam simply deployed AI and observed. Both actively designed the boundary between what AI decides and what humans decide — and revisited that boundary as capability and trust developed over time (AISG Use Cases Vol.2). That design work is ongoing, not one-time. It does not appear in most AI transformation roadmaps. It shows up in how specific workflows are structured, reviewed, and adjusted.
Read separately, these are four distinct documents from four different vantage points — global management research, national fiscal policy, regulatory governance, and practitioner case studies. Read together, they are all pointing at the same structural challenge.
"The gap is not between AI capability and business appetite. It is between AI capability and the human accountability infrastructure needed to deploy it responsibly at scale."
Synthesis observation — E. Mao, March 2026
Research lives in abstractions. People live in specifics. This is my translation of what these four sources are collectively describing — into the questions that actually matter on Monday morning.
Industries that win in the next cycle will not be those with the most sophisticated AI tools. They will be those that have redesigned how work moves through their organisation and built deliberate human accountability into agentic workflows. McKinsey's "structure to flow" shift and IMDA's governance framework describe the same competitive frontier from different angles (McKinsey SoO 2026; IMDA MGF).
Budget 2026's Champions of AI programme is Singapore's signal to industry: transformation is the standard now. Organisations still running legacy approval layers on top of new AI capabilities — without redesigning those layers — are building technical debt at the process level (SG Budget 2026).
When 86% of organisations report they are not prepared to embed AI into daily operations (McKinsey SoO 2026), the instinct is to declare a skills gap. The combined data suggests otherwise. Two-thirds of leaders know their organisations are too complex. 40% name process flow redesign as the biggest unlock. IMDA confirms the pattern: the governance challenge is organisational design, not individual competence (IMDA MGF 2026).
Shared services centres, due to expand in 84% of organisations (McKinsey SoO 2026), will become either the bottleneck or the breakthrough — depending on whether they are redesigned for agentic workflows or merely expanded with more people managing the same processes.
The WSG+SSG merger, the AI learning pathway redesign, the free premium tool access, and the Champions of AI programme are not separate announcements. They form an architecture designed to produce a professional who can do more than use AI tools — one who can navigate the human and organisational systems around those tools (SG Budget 2026).
The AISG use cases reinforce why this matters: IBM and RenalTeam's outcomes depended not just on the quality of the AI model, but on professionals who could define the right human-AI boundary and adapt it as technology and trust evolved (AISG Use Cases Vol.2). Budget 2026 is funding the supply side of that capacity. Organisations need to build the demand side.
McKinsey captures the productivity pressure (McKinsey SoO 2026). IMDA captures the governance requirement. Neither fully names what individuals experience inside this transition: the identity disorientation of moving from someone who makes to someone who directs, reviews, and takes accountability for outputs they did not generate.
This is IMDA's automation bias risk at the individual level (IMDA MGF 2026): the tendency to approve AI outputs without genuine scrutiny — not from laziness, but from uncertainty about where human judgment is actually required. That uncertainty is not resolved by training alone. It is resolved by clearer organisational design, and by cultures that treat human judgment as the point — not the bottleneck.
In bid rooms, workshops, lectures, partner conversations, enterprise and SME events across Singapore — people are discussing, collaborating on, and in some cases transforming around AI adoption. And here is what I consistently observe.
McKinsey describes 86% of organisations as unprepared to embed AI into daily operations (McKinsey SoO 2026). IMDA names "meaningful human accountability" as the design requirement most organisations have not yet built (IMDA MGF). What these numbers do not fully capture is how this manifests in practice: as a very specific kind of paralysis. Teams that are capable of using AI tools, but uncertain about when their judgment is needed, when to pause the agent, and who is ultimately accountable for the outcome.
The organisations moving fastest are not necessarily the ones with the best AI tools. They are the ones where someone — often not the most senior person in the room — has taken on the work of translating between AI capability and organisational process. Not as a policy function. As a daily, practical, often invisible practice.
IBM evolved their human-AI boundary from in-the-loop to over-the-loop as capability and trust developed (AISG Use Cases Vol.2). That evolution did not happen automatically. Someone had to design it, advocate for it, and hold accountability for it. Budget 2026 is funding the infrastructure for more of these people to exist (SG Budget 2026). The question is whether organisations are designing the roles and processes to put them to work.
The four sources describe the structural gap. What they cannot describe is the lived experience of navigating it in a real organisation, in a real role, in real time. This section names that experience — the friction, the flow, and the identity shift that sits between them.
The friction is rarely about AI capability. It is about the gap between what AI can do and what the organisational architecture will permit. In bid and pursuit work, AI can synthesise a competitive landscape in minutes, draft a qualification rationale, stress-test a commercial assumption. But if the approval chain still requires three layers of sign-off designed for a world where humans did all of this manually — the speed gain evaporates, and creates a new kind of cognitive drag.
When the boundary between AI and human work is designed deliberately — not assumed — something shifts. AI handles breadth: scanning, pattern-matching, first-draft synthesis. The human handles depth: the strategic judgment call, the client relationship read, the ethical decision a model cannot make. That is not a smaller role. It is a higher one. The mental bandwidth freed from baseline data gathering goes directly into the quality of thinking that actually moves outcomes.
This is a personal observation — not drawn directly from the four sources, but informed by what they collectively describe and tested against my own experience. McKinsey names this at the organisational level (86% unprepared, McKinsey SoO 2026), IMDA names it at the governance level (automation bias, IMDA MGF 2026). Neither fully names what it feels like at the individual level.
There is a particular disorientation in building something you did not generate. In being the person who shaped the question, reviewed the output, made the call — but did not write every line, draft every section, or build every slide yourself. The work is still yours. The accountability is still yours. But the doing has changed shape. For people whose professional identity was built on the quality of what they could make — on being the person who created — that change is not small. And it is rarely named.
This is where most AI transition conversations stop too soon. They name the productivity gain without naming the identity cost. They describe the destination without acknowledging the disorientation of the journey. Getting this right means organisations need to design not just for capability transfer, but for the human experience of that transfer — the intermediate state where the old role no longer fits and the new one has not yet been fully recognised or valued.
That is the gap I am most interested in. Not as a researcher. As someone navigating it — and watching others navigate it — every week.
Based on patterns I have observed across these four sources and my own professional context, I have been mapping a practitioner's hypothesis I am calling the CLEAR Cycle. It is not a finished model. It is shared here as an invitation — to examine, test, and improve together with people who are closer to specific problems than any single framework can reach.
Five layers. A loop that builds capability over time. The E layer — Enable — is where McKinsey's "structure to flow" challenge lives, where IMDA's "meaningful human accountability" must be operationalised, and where the IBM and RenalTeam cases show the real design work happening. It is also the layer most organisations have not yet designed for.
| Layer | Name | Owner | What it does |
|---|---|---|---|
| C | Context | Human | Define the question, success criteria, risk appetite, and political landscape. This is IMDA's "assess and bound risks upfront" at the human level — what are we actually trying to decide, and what is the cost of being wrong? |
| L | Learning | AI | Process signals, surface patterns, generate options at scale. This is where AI does what humans cannot — breadth, speed, consistency. McKinsey's agentic AI expansion and shared-services transformation live here. |
| E | Enable | Human + AI | The gap layer. Translate AI intelligence into something the organisation can navigate and act on. This is IMDA's "meaningful human accountability" in practice — designed checkpoints, defined decision rights, substantive human review. Neither fully human nor fully automated. The work IBM and RenalTeam were actually doing. |
| A | Act | AI | Execute autonomously within human-defined guardrails. Agents, automation, scaled operations. IMDA's governance framework defines what responsible autonomous action looks like here — bounded, traceable, correctable. |
| R | Review | Human | Go / No-Go gate. Accountability. Outcome ownership. R feeds back into L — outcomes recalibrate the intelligence layer and adjust trust thresholds over time. IBM's evolution from in-the-loop to over-the-loop happened through exactly this feedback mechanism. |
The CLEAR Cycle is not a one-time sequence. It is a compounding loop. Each pass builds on the last: the Review layer does not just close the cycle — it feeds new intelligence back into the Learning layer, recalibrating what AI processes next time and adjusting trust thresholds based on what human review found. Over time, the Enable layer becomes better designed, more clearly owned, and more resilient as the organisation builds shared experience of where the boundaries actually sit.
IBM's evolution from human-in-the-loop to human-over-the-loop happened through exactly this mechanism. Not by policy decision — by accumulated passes through the loop, where each Review built enough trust to shift the boundary. The cycle makes that evolution deliberate rather than accidental.
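To make the loop concrete, here is a minimal sketch of one CLEAR pass, with Review feeding a trust threshold back into the next cycle. It is entirely my own illustrative rendering of the hypothesis: the callables, the trust variable, and the adjustment sizes are assumptions, not a prescribed implementation.

```python
def clear_pass(context, learn, enable, act, review, trust=0.5):
    """Run one Context -> Learning -> Enable -> Act -> Review cycle.

    `trust` gates how autonomously the Act layer may operate; the
    Review outcome nudges it up or down, so each pass recalibrates
    the next (the loop's compounding mechanism).
    """
    question = context()               # C: human frames the decision
    options = learn(question)          # L: AI surfaces patterns and options
    plan = enable(options, trust)      # E: human + AI set the boundary
    result = act(plan, trust)          # A: AI executes within guardrails
    approved = review(result)          # R: human Go/No-Go, owns the outcome
    # Feedback: approvals raise trust gradually; rejections lower it faster.
    trust = min(1.0, trust + 0.1) if approved else max(0.0, trust - 0.2)
    return approved, trust
```

Over repeated passes, a rising trust value is what turns an in-the-loop checkpoint into an over-the-loop one deliberately rather than by drift.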
The CLEAR Cycle is my read — drawn from these four sources and tested against my own observations across enterprise and SME contexts in Singapore. It is not a direct output from any of these reports; it is a practitioner's inference from reading them together. If it maps to something real in your organisation — or doesn't — I'd genuinely like to hear where it lands.
A weekly letter from Elaine Mao — AI strategy, pursuit excellence, and leading with intention through a Singapore lens. Written for people who already know their business well, and want to know better.
"Which layer is currently the weakest in your organisation — and who is actually accountable for it?"
Not asking for the polished answer. The honest one is more useful. That is where the real work tends to be.
Share your thinking on LinkedIn