<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>California Management Review</title>
    <description>California Management Review is a premier professional management journal for practitioners published at UC Berkeley Haas School of Business.</description>
    <link>http://localhost:4000/</link>
    <atom:link href="http://localhost:4000/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Fri, 17 Apr 2026 10:15:37 -0700</pubDate>
    <lastBuildDate>Fri, 17 Apr 2026 10:15:37 -0700</lastBuildDate>
    <generator>Jekyll v4.3.3</generator>
    
      <item>
        <title>AI in M&amp;A: Why Faster Deals Mean More Pressure on Senior Judgment</title>
        <description>&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;: Artificial intelligence has become a routine feature of mergers and acquisitions, widely deployed for target screening, valuation support, and due diligence. While industry commentary often frames this diffusion as the automation of dealmaking, its practical consequences for how M&amp;amp;A work is performed remain poorly understood. Drawing on interviews with senior M&amp;amp;A practitioners and industry evidence, this article examines how AI reshapes dealmaking by reweighting effort, time pressure, and risk across the deal lifecycle. We show that AI delivers large efficiency gains (40-45%) in the analytical work at the front end of deals, substantially compressing preparation time. In contrast, we find that the impact on judgment-, governance-, and leadership-intensive work in negotiation and post-merger integration is limited. As a result, AI is not eliminating complexity in M&amp;amp;A, but it is relocating it: analytical delays disappear while uncertainty, accountability, and execution challenges remain. The net effect is faster movement from opportunity identification to commitment, increasing the exposure of senior judgment and integration capacity. We argue that the central managerial challenge is therefore not AI adoption, but redesigning M&amp;amp;A processes, staffing models, governance, and integration capability so that analytical acceleration does not outpace decision quality and execution capacity.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Artificial intelligence (AI) is routinely used in mergers and acquisitions (M&amp;amp;A). Investment banks, consulting firms, and law firms use it for target screening, valuation support, and due diligence, to the point that, as one expert told us, it “is part of the plumbing.”&lt;/p&gt;

&lt;p&gt;This rapid adoption has triggered a familiar narrative. AI is increasingly presented as a general-purpose technology that will “have a greater impact on deal execution than any technology in recent memory,” and surveys suggest that &lt;strong&gt;64% of C-suite executives therefore expect AI to “&lt;em&gt;revolutionize&lt;/em&gt;” mergers and acquisitions in the coming years&lt;/strong&gt;&lt;sup&gt;1&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Yet this narrative glosses over a more uncomfortable question: does AI merely make M&amp;amp;A faster, or does it improve the parts of the process that determine success?&lt;/p&gt;

&lt;p&gt;The central argument of this article is that AI reshapes M&amp;amp;A not by eliminating complexity, but by relocating it. AI compresses analytical preparation time while leaving judgment, authority, and execution capacity largely unchanged. Decisions arrive sooner, with fewer natural pauses for deliberation. The consequence is not simpler decision-making, but more time-pressured decision-making, in which errors surface faster and are harder to contain. And decades of research suggest that these sorts of effects do not improve merger and acquisition performance. After all, acquisitions rarely fail because organizations lack information or analytical sophistication; they fail because managers don’t manage people well.&lt;/p&gt;

&lt;p&gt;To make this shift visible, we examine how deal work is performed in mergers and acquisitions in practice. Distinguishing between analytical-, judgment-, and leadership-intensive work, we show that AI delivers substantial efficiency gains in the front end of the deal, where analytical and document-heavy tasks dominate, while having a far more limited impact in the back end, where negotiation, governance, and post-merger integration depend on human judgment and leadership. Because it is in these later stages that value is ultimately realized, the core sources of M&amp;amp;A risk remain stubbornly resistant to automation.&lt;/p&gt;

&lt;p&gt;This redistribution of effort within the process has important consequences. For advisory firms, AI undermines junior-heavy business models and exposes senior judgment more directly. For corporate acquirers, it expands opportunity flow without expanding execution capacity. In both cases, the challenge is therefore no longer whether AI can accelerate M&amp;amp;A, but whether organizations can adapt to absorb that speed responsibly.&lt;/p&gt;

&lt;p&gt;Managers therefore face a strategic choice: to use AI as a tool to do the same things faster, or to redesign how M&amp;amp;A work is organized, governed, and led. The first path risks amplifying existing weaknesses. The second requires recognizing where human judgment remains indispensable and deliberately protecting, developing, and deploying it.&lt;/p&gt;

&lt;h2 id=&quot;how-deal-work-actually-gets-done&quot;&gt;How Deal Work Actually Gets Done&lt;/h2&gt;

&lt;p&gt;To understand the role of AI in M&amp;amp;A, it is useful to understand the typical M&amp;amp;A stages: from strategy, through screening, valuation, diligence, negotiation, and approval, to integration (see Figure 1). To understand where AI can help, however, it is more important to understand the type of work involved within and across these stages.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/blog/2026-04-mccarthy-fig1.png&quot; style=&quot;box-shadow:none;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 1.&lt;/strong&gt; The M&amp;amp;A Process&lt;/p&gt;

&lt;p&gt;For example, in the front end of the deal – which encompasses the screening, valuation, and due diligence stages – the dominant work is analytical in nature. These stages require large volumes of data to be collected, processed, structured, and compared in order to identify potential targets, assess strategic and financial fit, build valuation models, and review extensive documentation. The work is repetitive, data-heavy, and scale-dependent, and has historically been carried out by large, junior-heavy teams within advisory firms and corporate development functions. It is precisely these characteristics that make front-end deal work highly susceptible to automation and augmentation through AI.&lt;/p&gt;

&lt;p&gt;By contrast, in the back end of the deal – which encompasses negotiation, governance approval, and post-merger integration – the dominant work is judgment- and leadership-intensive. These stages hinge on interpreting incomplete and contested information, resolving trade-offs among competing objectives, aligning stakeholders with divergent interests, and exercising authority under uncertainty. The work depends on contextual understanding, credibility, and accountability rather than on speed, scale, or consistency. And the decisions cannot be automated without stripping them of responsibility. As a result, back-end deal work remains concentrated among senior managers, partners, board members, and integration leaders, and it is far less susceptible to standardization or automation.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/blog/2026-04-mccarthy-table1.png&quot; style=&quot;box-shadow:none;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table 1.&lt;/strong&gt; AI tools per task&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Table 1&lt;/em&gt; summarizes the main categories of work involved in mergers and acquisitions, provides illustrative examples of AI-enabled tools referenced in the literature and in our interviews, and describes how these tools are used in practice.&lt;/p&gt;

&lt;h2 id=&quot;how-ai-reweights-effort-and-risk&quot;&gt;How AI Reweights Effort and Risk&lt;/h2&gt;

&lt;p&gt;To gauge the magnitude of the gains associated with AI, and to locate where in the deal process those gains arise, we draw on industry estimates and interviews with senior practitioners. Consistent with the work-based distinction introduced in the previous section, a clear and recurring pattern emerged across all sources.&lt;/p&gt;

&lt;h4 id=&quot;front-end-acceleration-less-effort-more-momentum&quot;&gt;Front-End Acceleration: Less Effort, More Momentum&lt;/h4&gt;

&lt;p&gt;We find consistent evidence that AI is materially reshaping target screening. Deal teams are using AI-augmented data platforms to scan large universes of potential targets, to automate long-list creation, to flag adverse signals, and to surface sector patterns. Interviewees consistently reported that, because of this, screening cycles are being compressed from weeks to days and even hours. And industry estimates suggest total efficiency improvements in screening of roughly 30–40 percent&lt;sup&gt;2&lt;/sup&gt;. As one expert put it, “&lt;em&gt;screening used to be constrained by how many people we had; now it’s constrained by how quickly we’re willing to decide what matters&lt;/em&gt;.”&lt;/p&gt;

&lt;p&gt;Valuation follows a similar pattern: AI is not replacing it, but it is accelerating the mechanics that support it. Interviewees described using AI-enabled features embedded in platforms that allow them to refresh models, run sensitivity analyses, and reconcile assumptions in near real time. Because of this, industry estimates suggest that 30–50 percent less time is being spent on modelling and scenario analysis&lt;sup&gt;3&lt;/sup&gt;. The benefit, as one manager noted, is “&lt;em&gt;we’re not deciding less, we’re deciding more often, because the model is almost immediately available and always ready&lt;/em&gt;.”&lt;/p&gt;

&lt;p&gt;The most pronounced effects of AI, however, appear in due diligence. Here, AI-enabled document analytics tools are being widely used to process large volumes of documents and flag anomalies. The gains are significant: legal and financial diligence teams, for example, report that AI offers efficiency gains of 40–70 percent, depending on deal complexity and data quality&lt;sup&gt;4&lt;/sup&gt;. As one expert observed, “&lt;em&gt;diligence isn’t about being clever: it’s about not missing things. And AI is very good at not missing things&lt;/em&gt;.”&lt;/p&gt;

&lt;p&gt;Taken together, these gains substantially compress front-end processes. Aggregating task-level estimates across screening, valuation support, and diligence suggests that front-end labour input in a typical mid-market deal can be reduced by approximately 40–45 percent&lt;sup&gt;5&lt;/sup&gt;.&lt;/p&gt;

&lt;h4 id=&quot;back-end-inertia-where-ai-largely-stops&quot;&gt;Back-End Inertia: Where AI Largely Stops&lt;/h4&gt;

&lt;p&gt;By contrast, AI’s influence weakens sharply in the later phases of the deal lifecycle. In negotiation, for example, we hear that AI is being used to summarize information, draft briefs, and rehearse scenarios, but the bargaining itself – where most of the work is – remains interpersonal and adaptive. As one expert put it: “&lt;em&gt;AI helps you walk into the room better prepared, but it doesn’t do the talking for you&lt;/em&gt;.” There is little evidence, therefore, that AI meaningfully reduces the time or the complexity of negotiation in the deal process.&lt;/p&gt;

&lt;p&gt;The same pattern holds for governance and approval. AI is being used to support consistency checks and materials preparation, but investment committees and boards are not willing to delegate accountability for capital allocation decisions. Deliberation remains human. As one interviewee put it, “&lt;em&gt;no algorithm is going to raise its hand, take responsibility, and explain to shareholders why this deal went wrong&lt;/em&gt;.”&lt;/p&gt;

&lt;p&gt;Post-merger integration is, however, where AI’s limits are most visible. Research suggests that AI analytics and dashboards are being used to improve visibility into KPIs and milestones, but integration success depends on leadership, coordination, and the management of organizational frictions. Interviewees reported modest efficiency gains in reporting and tracking, typically in the range of 20–40 percent&lt;sup&gt;6&lt;/sup&gt;, but little reduction in the time or effort required for cultural alignment, conflict resolution, or change management. As one partner put it, “&lt;em&gt;the spreadsheets get better, but the people problems don’t go away&lt;/em&gt;.”&lt;/p&gt;

&lt;h4 id=&quot;the-net-effect&quot;&gt;The Net Effect&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;Table 2&lt;/em&gt; provides illustrative estimates of how AI-enabled tools affect labor input across the stages of the M&amp;amp;A process. It reports approximate full-time equivalent (FTE) months associated with key activities in a typical mid-market transaction and indicative ranges of efficiency gains where AI is currently deployed. The numbers are based on industry estimates. They show that AI-enabled efficiency gains in the front end range from &lt;strong&gt;30–70%, while in the back end the effects are negligible&lt;/strong&gt;. Aggregated across these stages, the total gain is roughly &lt;strong&gt;25–30%&lt;/strong&gt;, equivalent to more than a year of junior-level effort in a typical transaction.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/blog/2026-04-mccarthy-table2.png&quot; style=&quot;box-shadow:none;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table 2.&lt;/strong&gt; Estimated AI-Enabled Improvements&lt;/p&gt;
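&lt;p&gt;The aggregation behind a figure like 25–30 percent can be sketched as an FTE-weighted average of stage-level gains. The stage weights and gain rates below are purely illustrative assumptions, not the values in Table 2; they only show why large front-end gains dilute to a modest total when back-end effort is largely untouched.&lt;/p&gt;

```python
# Hypothetical FTE-month inputs and AI efficiency gains per stage
# (illustrative only; not the figures reported in Table 2).
stages = {
    # stage:       (fte_months, ai_gain)
    "screening":   (6, 0.35),
    "valuation":   (8, 0.40),
    "diligence":   (14, 0.55),   # front end: large gains
    "negotiation": (6, 0.05),
    "governance":  (4, 0.05),
    "integration": (10, 0.05),   # back end: negligible gains
}

# Weighted aggregation: months saved relative to total labor input.
saved = sum(fte * gain for fte, gain in stages.values())
total = sum(fte for fte, _ in stages.values())
print(f"saved {saved:.1f} of {total} FTE-months ({saved/total:.0%})")
# prints: saved 14.0 of 48 FTE-months (29%)
```

&lt;p&gt;Under these assumptions, roughly 14 of 48 FTE-months disappear: more than a year of junior-level effort, yet still only about 29 percent of total labor input, because the judgment-heavy back end contributes most of the remaining hours.&lt;/p&gt;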

&lt;h4 id=&quot;the-new-bottleneck&quot;&gt;The New Bottleneck&lt;/h4&gt;

&lt;p&gt;The uneven impact of AI across the deal lifecycle produces a structural shift in where risk resides. Historically, M&amp;amp;A processes were slowed by analytical work. Data gathering, modelling, and diligence absorbed time and effort, creating natural pauses between opportunity identification and commitment. Those pauses provided space for informal deliberation, escalation, and second thoughts. AI removes many of them.&lt;/p&gt;

&lt;p&gt;As analytical preparation accelerates, moments of commitment arrive sooner. Interviewees repeatedly emphasized that decisions are now reached faster, often with less informal discussion along the way. Judgment becomes more exposed, not because it is replaced, but because it must be exercised under tighter time constraints. Errors surface earlier and propagate more quickly. As one dealmaker put it, “&lt;em&gt;AI doesn’t change what can go wrong, but it does make it happen faster.&lt;/em&gt;” The central risk, therefore, is not automation of judgment, but compression of the time available to exercise it.&lt;/p&gt;

&lt;p&gt;This shift also reweights the importance of post-merger integration. As the distance between opportunity identification and execution shrinks, organizations place greater strain on senior leadership bandwidth and change-management capacity. Integration leaders are asked to absorb decisions that arrive faster, with less upstream digestion. As one manager told us, “&lt;em&gt;The bottleneck has moved in the last few years. We now spend less time analysing and much more time dealing with integration issues&lt;/em&gt;.” Seen in this light, AI does not simplify M&amp;amp;A. It redistributes where effort, attention, and failure are most likely to occur.&lt;/p&gt;

&lt;h2 id=&quot;redesigning-ma-for-ai-world&quot;&gt;Redesigning M&amp;amp;A for an AI World&lt;/h2&gt;

&lt;p&gt;If AI primarily accelerates analytical work while leaving judgment and leadership intact, then the central managerial challenge is not adoption but redesign. Organizations that layer AI onto existing M&amp;amp;A processes risk moving faster without becoming better. Capturing value requires deliberate changes to how deals are staffed, governed, and integrated.&lt;/p&gt;

&lt;h4 id=&quot;redesigning-acquisition-processes-slowing-down-the-right-moments&quot;&gt;Redesigning Acquisition Processes: Slowing Down the Right Moments&lt;/h4&gt;

&lt;p&gt;The first redesign challenge concerns process architecture. AI removes friction from screening, modelling, and diligence, but it does not reduce uncertainty or resolve trade-offs. Managers must therefore resist the temptation to let analytical speed dictate decision tempo. In practice, this means reintroducing deliberate pauses at key decision points. Investment committees, boards, and executive teams should not treat faster preparation as a signal to compress deliberation. On the contrary, as analytical bottlenecks disappear, organizations need clearer escalation rules, stronger decision protocols, and explicit checkpoints where assumptions are challenged and alternatives are surfaced. The goal is not to slow deals down arbitrarily, but to slow down the moments that matter. AI makes it easier to arrive at a recommendation. But, as of now, it does not make it easier to decide well.&lt;/p&gt;

&lt;h4 id=&quot;redesigning-deal-teams-moving-from-more-leverage-to-more-judgment&quot;&gt;Redesigning Deal Teams: Moving from More Leverage to More Judgment&lt;/h4&gt;

&lt;p&gt;AI also forces a rethink of how deal teams are staffed. Traditional M&amp;amp;A teams and advisory models were built around leverage. Large numbers of junior professionals processed information under the supervision of a small number of seniors. AI substitutes directly for much of that processing capacity. As a result, deal teams become leaner and more senior. This shift increases the leverage of experienced judgment but also concentrates risk. Fewer people see the full picture. Fewer opportunities exist for informal error detection. Managers must therefore be intentional about how responsibility is distributed and how dissent is surfaced. For advisors, this challenges the economic logic of junior-heavy pyramids and time-based billing models. For corporate acquirers, it implies a move toward smaller corporate development teams with deeper strategic and integration expertise. In both cases, the value proposition shifts from capacity to judgment.&lt;/p&gt;

&lt;h4 id=&quot;redesigning-talent-pipelines-protecting-apprenticeship-under-automation&quot;&gt;Redesigning Talent Pipelines: Protecting Apprenticeship under Automation&lt;/h4&gt;

&lt;p&gt;A less visible but more consequential issue concerns talent development. Junior analytical work has historically served as an apprenticeship mechanism through which future senior dealmakers learned how transactions unfold. As AI absorbs much of this work, that pipeline thins. Organizations that fail to address this risk may find themselves with experienced decision-makers today but insufficiently trained ones tomorrow. Protecting apprenticeship does not require preserving inefficient processes, but it does require creating alternative learning paths. Shadowing senior decision-makers, rotating talent through integration roles, and exposing juniors to judgment-intensive tasks earlier become more important as traditional analytical entry points disappear. This challenge applies equally to advisory firms and corporate acquirers. AI may reduce the need for junior labour, but it does not reduce the need for experienced judgment. That judgment must still be developed.&lt;/p&gt;

&lt;h4 id=&quot;redesigning-governance-aligning-speed-with-accountability&quot;&gt;Redesigning Governance: Aligning Speed with Accountability&lt;/h4&gt;

&lt;p&gt;As AI accelerates front-end activity, governance structures come under strain. Faster deal cycles increase the frequency with which major capital allocation decisions must be made. Without clear accountability and sufficient cognitive capacity, organizations risk diffusing responsibility while amplifying exposure. Managers should therefore treat AI adoption as a governance issue, not just a technology investment. This includes clarifying who owns decisions, how escalation works, and how accountability is maintained when analytical preparation is automated. AI can support governance by improving transparency and consistency, but it cannot replace it. Well-designed governance absorbs speed without becoming brittle. Poorly designed governance amplifies it into failure.&lt;/p&gt;

&lt;h4 id=&quot;redesigning-integration-capability-shifting-attention-to-the-back-end&quot;&gt;Redesigning Integration Capability: Shifting Attention to the Back End&lt;/h4&gt;

&lt;p&gt;Finally, AI’s uneven impact makes post-merger integration relatively more important. As the distance between opportunity identification and execution shrinks, organizations place greater strain on integration leaders, line managers, and operating units. Yet integration capability is often underdeveloped relative to front-end deal expertise. Managers should resist the impulse to invest disproportionately in screening and diligence tools while neglecting the leadership and coordination required after closing. If anything, faster deal cycles increase the need for integration discipline, not reduce it. Organizations that treat integration as an afterthought will experience AI as a force multiplier for failure. Those that invest deliberately in integration capability are more likely to benefit from acceleration upstream.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;The evidence suggests, therefore, that artificial intelligence will revolutionize M&amp;amp;A. But the nature of that revolution is not the one most managers anticipate.&lt;/p&gt;

&lt;p&gt;By sharply accelerating analytical preparation, AI removes delay rather than uncertainty. Screening, modelling, and diligence now consume far less time and labour, while judgment, accountability, and integration capacity remain largely unchanged. What disappears is the buffer that analytical work once provided. Decisions arrive sooner, with fewer natural pauses for deliberation, challenge, and coordination. In this sense, AI does not simplify dealmaking. It compresses it. This shift redistributes both effort and risk. Junior analytical work declines, deal teams flatten, and advisory models built on leverage come under strain. At the same time, senior decision-makers become more exposed. Commitments must be made earlier, under tighter time pressure, and with less organizational slack to absorb error. As one senior dealmaker put it, “&lt;em&gt;AI takes weeks out of the process, not risk&lt;/em&gt;.”&lt;/p&gt;

&lt;p&gt;The managerial implication is therefore not one of enthusiasm or resistance, but of design. Organizations that treat AI as a way to move faster through existing M&amp;amp;A processes risk amplifying their weakest points sooner and doing worse deals because of it. Those that recognize how AI reweights judgment, governance, and integration demands can use acceleration to their advantage, but only if they deliberately redesign decision rights, escalation protocols, and integration capability to absorb it. Seen this way, AI does revolutionize M&amp;amp;A. Not by automating the hard parts, but by exposing them.&lt;/p&gt;

&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;See Accenture, Reinventing M&amp;amp;A with Generative AI, Accenture Strategy report (2024), based on a survey of C-suite executives with responsibility for mergers and acquisitions.&lt;/li&gt;
  &lt;li&gt;Industry estimates of screening efficiency gains in the range of 30–40% are reported in consulting and practitioner studies examining AI-enabled target identification and filtering. See Bain &amp;amp; Company, Global M&amp;amp;A Report 2024 (2024); Deloitte, Generative AI in M&amp;amp;A (2025); McKinsey &amp;amp; Company, Gen AI in M&amp;amp;A: From Theory to Practice (2024).&lt;/li&gt;
  &lt;li&gt;Estimates suggesting reductions of approximately 30–50% in time spent on financial modelling and scenario analysis are drawn from practitioner analyses. See Deloitte, Generative AI in M&amp;amp;A (2025); Brynjolfsson, Rock, and Syverson, “The Productivity J-Curve,” American Economic Journal: Macroeconomics 13, no. 1 (2021): 333–372.&lt;/li&gt;
  &lt;li&gt;Reported reductions in document review effort of 40–70% during due diligence reflect widespread use of AI-enabled contract analytics and anomaly detection tools in legal and financial diligence. See EY, How AI Will Impact Due Diligence in M&amp;amp;A Transactions (2023); Deloitte, State of AI in the Enterprise, 5th ed. (2024); Harvard Center on the Legal Profession, The Impact of Artificial Intelligence on Law Firms’ Business Models (2023).&lt;/li&gt;
  &lt;li&gt;Aggregate estimates suggesting overall front-end labour reductions of approximately 40–45% are derived by combining task-level efficiency gains reported for screening, valuation, and due diligence in mid-market transactions. These figures should be interpreted as order-of-magnitude estimates rather than precise forecasts. See Bain &amp;amp; Company, Global M&amp;amp;A Report 2024 (2024); Deloitte, Generative AI in M&amp;amp;A (2025); McKinsey &amp;amp; Company, Gen AI in M&amp;amp;A: From Theory to Practice (2024).&lt;/li&gt;
  &lt;li&gt;Estimates suggesting modest efficiency gains of approximately 20–40% in post-merger integration reporting and tracking reflect the use of AI-enabled dashboards and analytics for monitoring KPIs, milestones, and integration progress, rather than improvements in integration outcomes themselves. See Deloitte, State of AI in the Enterprise, 5th ed. (2024); Bain &amp;amp; Company, Global M&amp;amp;A Report 2024 (2024). Evidence that integration performance continues to depend primarily on leadership, coordination, and the management of organizational frictions is consistent with prior research on post-merger integration.&lt;/li&gt;
&lt;/ol&gt;
</description>
        <pubDate>Fri, 17 Apr 2026 05:18:00 -0700</pubDate>
        <link>http://localhost:4000/2026/04/ai-in-m-a-why-faster-deals-mean-more-pressure-on-senior-judgment/</link>
        <guid isPermaLink="true">http://localhost:4000/2026/04/ai-in-m-a-why-faster-deals-mean-more-pressure-on-senior-judgment/</guid>
        
        <category>Artificial intelligence</category>
        
        <category>Mergers &amp; acquisitions</category>
        
        <category>Corporate governance</category>
        
        <category>Due diligence</category>
        
        <category>Consulting</category>
        
        
        <category>Decision-Making</category>
        
        <category>Leadership</category>
        
      </item>
    
      <item>
        <title>From Rate Cards to Outcomes: Consulting&apos;s Fourth Transformation</title>
        <description>&lt;p&gt;&lt;strong&gt;The Argument:&lt;/strong&gt; Professional services firms have survived three technology-driven transformations: ERP implementation (1990s), web/mobile enablement (2000s), and SaaS/cloud platforms (2010s). Each wave changed what clients bought, how they paid, and what they received — but left the fundamental consulting model intact. Clients still purchased human expertise, measured in hours or FTEs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI breaks this pattern.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The fourth transformation inverts the services model entirely. Rather than software enabling consultants to work faster, expertise itself becomes software. I call this &lt;strong&gt;‘Service as a Software’&lt;/strong&gt; — the encoding of domain judgment into autonomous systems that deliver outcomes directly, with humans supervising rather than executing.&lt;/p&gt;

&lt;p&gt;This article argues that the critical capability for this era is not AI engineering but Expertise Architecture: the systematic methodology for capturing domain judgment and encoding it into machine-executable reasoning. Firms that master this capability will capture disproportionate value. Those that treat AI as merely another accelerator for existing labor models will find themselves disrupted by focused entrants who start without legacy economics to protect.&lt;/p&gt;

&lt;h2 id=&quot;the-four-era-framework&quot;&gt;The Four-Era Framework&lt;/h2&gt;

&lt;p&gt;Each technology wave transformed consulting economics in predictable ways:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/blog/2026-03-pabba-table1.png&quot; style=&quot;box-shadow:none;&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;the-incumbent-response-two-paths-same-gap&quot;&gt;The Incumbent Response: Two Paths, Same Gap&lt;/h2&gt;

&lt;p&gt;The industry’s largest players recognize the shift — but their responses reveal an unresolved strategic tension.&lt;/p&gt;

&lt;p&gt;On January 22, 2026, McKinsey and AWS launched the Amazon McKinsey Group (AMG), a joint venture explicitly designed around outcome-based pricing. The structure is notable: rather than billing for consultant hours, AMG ties fees to measurable transformation results on engagements exceeding $1 billion. This is a structural bet that the traditional labor model cannot survive Era 4. McKinsey is not adding AI to consulting; it is repositioning consulting around AI-enabled delivery.&lt;/p&gt;

&lt;p&gt;Yet even this bold move exposes the gap. AMG still depends on McKinsey consultants to interpret client context, design transformation roadmaps, and validate AI-generated recommendations. The ‘expertise layer’ remains human. The joint venture changes who bears performance risk, but it does not fundamentally change how expertise gets delivered. McKinsey has restructured the economics without yet encoding the judgment.&lt;/p&gt;

&lt;p&gt;Contrast this with Accenture’s approach. The firm announced $3 billion in AI investments and has built impressive technical capabilities — AI factories, proprietary tools, thousands of trained practitioners. But the underlying delivery model remains intact: consultants use AI to work faster, clients still pay for FTEs, and value is measured in hours saved rather than outcomes achieved. This is Era 3 optimization, not Era 4 transformation. AI augments the labor model; it does not invert it.&lt;/p&gt;

&lt;p&gt;Deloitte, EY, and others occupy similar positions — significant AI investment, genuine technical capability, but strategic ambiguity about whether AI is a tool for consultants or a replacement for consulting. The ambiguity is rational: these firms generate billions in labor revenue. Protecting that revenue while simultaneously enabling autonomous delivery creates organizational tension that no amount of investment resolves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The pattern across incumbents is consistent: recognition without resolution.&lt;/strong&gt; They see the shift. They are investing heavily. But none has cracked the core problem — systematically encoding the domain expertise that makes their consultants valuable. Until they do, they remain vulnerable to focused entrants who start without legacy economics to protect.&lt;/p&gt;

&lt;h2 id=&quot;where-service-as-a-software-already-works&quot;&gt;Where Service as a Software Already Works&lt;/h2&gt;

&lt;p&gt;While incumbents navigate strategic tension, a new category of company demonstrates that ‘Service as a Software’ is not theoretical — it is already operating.&lt;/p&gt;

&lt;p&gt;Vertical AI companies target functional domains where decision patterns are bounded, judgment is repeatable, and outcomes are measurable. Rather than building general-purpose AI tools, they encode domain-specific expertise into autonomous systems that deliver results directly. The model works because these companies start with encoded expertise as their core asset, not as an enhancement to labor.&lt;/p&gt;

&lt;p&gt;Consider the pattern: A procurement AI platform doesn’t just surface contract anomalies for humans to review. It encodes the judgment that procurement professionals apply — what constitutes a meaningful price variance, which suppliers warrant scrutiny based on risk profile, when to escalate versus auto-approve. The system doesn’t accelerate human work; it executes the work with human oversight.&lt;/p&gt;

&lt;p&gt;In my own experience building an AI-native FinOps platform, I’ve observed how this plays out operationally. FinOps — the practice of managing cloud and technology spend — involves hundreds of decisions daily: Is this cost spike an anomaly or expected? Should this workload be rightsized? Does this variance warrant executive attention?&lt;/p&gt;

&lt;p&gt;Traditional approaches surface data and expect humans to decide. Our approach encodes the decision logic itself. We built what we call PRISM — a methodology that decomposes FinOps into five decision domains (Proactive monitoring, Resource optimization, Infrastructure management, Spend economics, and Management governance), each with explicit thresholds, escalation rules, and context-dependent reasoning. The AI doesn’t just detect a 15% spend increase; it knows that 15% in production environments during quarter-end is expected, while 15% in development environments on weekends warrants immediate investigation.&lt;/p&gt;
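&lt;p&gt;PRISM’s actual rule set is proprietary, so purely as an illustration, here is a minimal sketch of how such context-dependent thresholds and routing might be encoded. The environment names, percentages, and calendar rules below are hypothetical, not the real PRISM logic:&lt;/p&gt;

```python
from dataclasses import dataclass

# Hypothetical sketch of context-dependent spend rules; every category
# and threshold here is illustrative, not an actual PRISM value.

@dataclass
class SpendEvent:
    environment: str      # e.g. "production" or "development"
    pct_increase: float   # spend increase vs. baseline, in percent
    is_quarter_end: bool
    is_weekend: bool

def classify(event: SpendEvent) -> str:
    """Route a spend change to 'expected', 'investigate', or 'escalate'."""
    if event.environment == "production" and event.is_quarter_end:
        # Quarter-end production spikes are routine up to a higher ceiling.
        return "escalate" if event.pct_increase > 20 else "expected"
    if event.environment == "development" and event.is_weekend:
        # Development spend should be near-idle on weekends.
        return "investigate" if event.pct_increase >= 10 else "expected"
    # Default threshold outside special contexts.
    return "investigate" if event.pct_increase > 15 else "expected"
```

&lt;p&gt;The specific numbers do not matter; what matters is that the decision logic, not just the detection, lives in the system, so the same 15% increase routes differently depending on context.&lt;/p&gt;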

&lt;p&gt;&lt;strong&gt;The result: decisions that previously required analyst review now execute autonomously within defined risk parameters.&lt;/strong&gt; Humans supervise exceptions rather than processing routine judgments. Time-to-action compresses from days to minutes. And critically, the value delivered is measurable in outcomes — cost avoided, efficiency gained — not hours worked.&lt;/p&gt;

&lt;p&gt;This pattern — bounded domain, encoded judgment, autonomous execution with human oversight — defines where ‘Service as a Software’ already works. The question for incumbents is whether they can replicate it before these focused players expand.&lt;/p&gt;

&lt;h2 id=&quot;where-it-fails-the-encoding-gap&quot;&gt;Where It Fails: The Encoding Gap&lt;/h2&gt;

&lt;p&gt;For every successful vertical AI deployment, there are dozens that stall — not because the AI doesn’t work, but because the expertise was never encoded.&lt;/p&gt;

&lt;p&gt;Early in our product development, we learned this lesson directly. We deployed anomaly detection to flag unexpected cost variances in cloud infrastructure. The model performed well technically — it identified patterns humans missed, processed data at scale, and surfaced potential issues in real time. By any AI benchmark, it succeeded.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But in practice, it failed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The system flagged everything that deviated from baseline: a 12% increase in compute spend, a new storage allocation, a spike in data transfer costs. Within the first week, it generated over 200 alerts. Finance teams, already stretched thin, couldn’t process them. They had no way to distinguish signal from noise because the system didn’t encode what practitioners know — that a 12% increase during product launch is expected, that new storage allocations tied to approved projects aren’t anomalies, that data transfer spikes during backup windows are routine.&lt;/p&gt;

&lt;p&gt;Without encoded thresholds, context rules, and escalation logic, the AI created more work, not less. Alert fatigue set in within two weeks. Teams began ignoring notifications. By month two, the anomaly detection was effectively shelfware — technically operational, practically abandoned. We had automated data processing but not decision-making.&lt;/p&gt;

&lt;p&gt;This pattern repeats across enterprises. A legal team deploys AI to review contracts; without encoded risk tiers, lawyers still review 100% of outputs. A customer service team launches an AI assistant; without encoded resolution paths, 60% of queries escalate to humans. A finance team automates expense auditing; without context rules, 80% of flags are false positives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The failure mode is consistent:&lt;/strong&gt; organizations deploy AI capability without encoding the judgment that makes capability useful. They automate the process layer — data ingestion, pattern recognition, alert generation — while leaving the expertise layer untouched. The result is augmentation at best, shelfware at worst.&lt;/p&gt;

&lt;h2 id=&quot;why-it-fails-the-missing-layer&quot;&gt;Why It Fails: The Missing Layer&lt;/h2&gt;

&lt;p&gt;The failures above share a common root cause. Enterprises have invested decades in process frameworks — APQC taxonomies, BPMN models, SIPOC documentation. These capture workflow: what activities happen, in what sequence, with which roles. They do not capture judgment.&lt;/p&gt;

&lt;p&gt;Consider a common process step: ‘Review budget variance.’ A process model shows this as an activity box connected to a role. But what AI needs to automate this step is entirely different: When is a variance significant? 5%? 10%? Does it depend on the cost center? The time of year? The trend direction? These are judgment calls that exist in practitioners’ heads but not in any process documentation.&lt;/p&gt;
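&lt;p&gt;To make the gap concrete, the judgment behind ‘Review budget variance’ would have to be made machine-executable along these lines. This is a hedged sketch: the thresholds, cost-center names, and seasonality rule are invented for illustration, not drawn from any real framework:&lt;/p&gt;

```python
# Hypothetical encoding of the 'Review budget variance' judgment;
# all thresholds, names, and rules below are illustrative only.

SIGNIFICANCE_THRESHOLDS = {
    "research": 0.10,    # research budgets tolerate more variance
    "marketing": 0.07,
    "default": 0.05,
}

def review_variance(cost_center: str, variance: float,
                    month: int, trend_rising: bool) -> str:
    """Route a budget variance (a fraction) to 'ignore', 'review', or 'escalate'."""
    threshold = SIGNIFICANCE_THRESHOLDS.get(
        cost_center, SIGNIFICANCE_THRESHOLDS["default"])
    if month == 12:
        threshold *= 1.5  # year-end spending surges are partly expected
    if abs(variance) >= threshold:
        # Significant overruns still trending upward escalate;
        # everything else significant goes to a human reviewer.
        return "escalate" if (variance > 0 and trend_rising) else "review"
    return "ignore"
```

&lt;p&gt;Nothing here is sophisticated AI; the point is that until questions like these have explicit answers, no model can act on them.&lt;/p&gt;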

&lt;p&gt;&lt;strong&gt;This is the critical gap: AI can automate process. AI cannot automate judgment without encoding.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I call this missing layer Expertise Architecture — the systematic encoding of domain judgment into machine-executable form:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/blog/2026-03-pabba-table2.png&quot; style=&quot;box-shadow:none;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The strategic implication is clear:&lt;/strong&gt; Process classification and process models are commoditized — everyone has them. The scarce, ownable capability is the methodology for building the Expertise Architecture layer. Firms that build this layer first will define the category.&lt;/p&gt;

&lt;h2 id=&quot;what-to-do-guidance-for-three-audiences&quot;&gt;What To Do: Guidance for Three Audiences&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;For consulting firm leaders:&lt;/strong&gt; Current IP libraries are necessary but insufficient for Era 4. The strategic question shifts from ‘how do we deploy AI tools’ to ‘how do we encode our expertise before others do.’ Protecting existing labor revenue will delay transformation; focused entrants face no such constraint. The McKinsey-AWS model shows one path — restructure economics around outcomes. But without encoded expertise, outcome-based pricing shifts risk without changing capability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For enterprise buyers:&lt;/strong&gt; Evaluate vendors on expertise encoding methodology, not AI capability alone. Ask: ‘How do you capture and validate domain expertise in your AI systems?’ Demand transparency on human-AI task allocation. If a vendor’s AI still requires your team to review every output, you’re buying augmentation, not transformation. Expect and negotiate for outcome-based pricing as the standard — and verify the vendor has encoded enough judgment to deliver on it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For new entrants:&lt;/strong&gt; Functional verticals (FinOps, procurement, revenue operations) offer clearer encoding targets than industry verticals. The bounded nature of these domains — repeatable decisions, measurable outcomes, defined thresholds — makes expertise encoding tractable. First-mover advantage accrues to firms that establish trust and governance frameworks. The window for category definition is open but will close within 3-5 years as incumbents resolve their strategic ambiguity.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;The consulting industry has survived three technology transformations by adapting its delivery model while preserving its fundamental economics: clients pay for human expertise. AI breaks this pattern because it enables expertise itself to become software.&lt;/p&gt;

&lt;p&gt;The winners of this transformation will not be determined by AI capability — that is rapidly commoditizing. They will be determined by who solves the expertise encoding problem first. The firms that build Expertise Architecture — the systematic methodology for converting domain judgment into machine-executable reasoning — will capture disproportionate value. Those that treat AI as another tool for consultants will find themselves disrupted by focused players who started without legacy economics to protect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The shift from rate cards to outcomes is not a pricing change. It is an architectural change.&lt;/strong&gt; And the architecture that matters most is not technical — it is the encoding of human expertise into systems that can act on it.&lt;/p&gt;
</description>
        <pubDate>Tue, 14 Apr 2026 05:16:00 -0700</pubDate>
        <link>http://localhost:4000/2026/04/from-rate-cards-to-outcomes-consultings-fourth-transformation/</link>
        <guid isPermaLink="true">http://localhost:4000/2026/04/from-rate-cards-to-outcomes-consultings-fourth-transformation/</guid>
        
        <category>Artificial intelligence</category>
        
        <category>Business models</category>
        
        <category>Service innovation</category>
        
        <category>Services</category>
        
        <category>Digital transformation</category>
        
        <category>Automation</category>
        
        <category>Technology management</category>
        
        <category>Organizational change</category>
        
        
        
      </item>
    
      <item>
        <title>Diversity Matters: Overcoming the Friction of Different Functional Backgrounds</title>
        <description>&lt;p&gt;Imagine a Monday morning at a global corporation’s strategy meeting. The Chief Marketing Officer is pitching a bold new creative campaign. The Head of Engineering counters with product feasibility concerns. The CFO, focused on risk, winces at the projected costs. There is tension in the air — a classic case of functional diversity at work. Different backgrounds and areas of expertise are colliding. In a small company, this same mix of perspectives might spark an innovative pivot on the spot. But in a mega-corporation, it often leads to crossed wires and frustration. Why? Can leaders harness these differences for better performance, rather than letting them hinder progress? We explore these questions, drawing on our new research&lt;sup&gt;1&lt;/sup&gt;.&lt;/p&gt;

&lt;h2 id=&quot;key-takeaways-for-busy-executives&quot;&gt;Key Takeaways for Busy Executives&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Not All Diversity Is Visible:&lt;/strong&gt; Functional diversity captures the variety of occupations held by members in a team (finance, marketing, operations, etc.). While functional diversity is less visible than other forms of diversity such as race or gender, it can be a powerful driver of better decisions and innovation. Different functional backgrounds reflect distinct experiences and mental models, which enlarges a team’s collective skill set and perspective.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Big Benefits for Small Firms:&lt;/strong&gt; New evidence shows that functional diversity in the top management team has a strong positive impact on performance in smaller organizations. In startups or mid-sized firms, a mix of expertise may help spot opportunities and problems faster, boosting agility and results.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Large Firms Face a Diversity Paradox:&lt;/strong&gt; In giant companies, simply having diverse expertise on the top management team won’t automatically yield results. Complex hierarchies, siloed divisions, and communication breakdowns can smother the potential gains of diversity.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The Integration Fix — Experience Matters:&lt;/strong&gt; A long-tenured, well-knit top management team can turn functional diversity from friction to fuel. When executives have years of shared experience, they develop trust, a common language, and better ways to integrate their different perspectives.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Rethinking the Business Case:&lt;/strong&gt; High-profile reports claiming diversity causes higher profits (like McKinsey’s oft-cited studies) were correlational and have been critiqued for overstating conclusions. Our research — covering 4,500 firms and 32,000 executives over nearly two decades — used careful controls to better isolate causation. Top management team functional diversity can drive organizational performance, especially under the right conditions (smaller organizations or larger organizations with high team integration).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;the-overlooked-side-of-diversity-why-functional-backgrounds-matter&quot;&gt;The Overlooked Side of Diversity: Why Functional Backgrounds Matter&lt;/h2&gt;

&lt;p&gt;“Diversity” often refers to demographics like gender, ethnicity, or nationality. These matter, but another powerful form is functional (or occupational) diversity: differences in professional backgrounds, such as a sales veteran, supply chain specialist, finance expert, and tech specialist. Each brings distinct mental models, vocabularies, and problem-solving approaches shaped by their training and career paths — creating a true “diversity of thought.”&lt;/p&gt;
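&lt;p&gt;Studies of team composition commonly quantify this kind of diversity with Blau’s heterogeneity index (one minus the sum of squared category shares); the operationalization in any given study may differ, but as a quick worked illustration:&lt;/p&gt;

```python
from collections import Counter

def blau_index(backgrounds: list[str]) -> float:
    """Blau's index: 1 minus the sum of squared category shares.
    Equals 0 when everyone shares one background and approaches 1 as
    the team spreads evenly across more functions."""
    n = len(backgrounds)
    counts = Counter(backgrounds)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

homogeneous = ["finance"] * 5
mixed = ["finance", "marketing", "operations", "engineering", "finance"]
print(blau_index(homogeneous))        # 0.0
print(round(blau_index(mixed), 2))    # 0.72
```

&lt;p&gt;A five-person team drawn from four functions scores far higher than one staffed entirely from finance, which is the intuition behind measuring ‘diversity of thought’ at the top.&lt;/p&gt;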

&lt;p&gt;Functional diversity is a double-edged sword. Positively, teams with varied expertise approach problems from multiple angles, generating more ideas and preventing groupthink. A marketing executive proposes a bold customer-focused innovation, an operations leader refines it for efficiency, and a finance expert ensures profitability — yielding better overall decisions.&lt;/p&gt;

&lt;p&gt;However, knowledge differences create friction. Functions develop their own subcultures, with unique languages, priorities, and even stereotypes (e.g., storied tensions between finance vs. marketing, or R&amp;amp;D vs. sales). Studies have long shown mixed results on whether functionally diverse teams outperform homogeneous ones. The Categorization-Elaboration Model highlights why: while diverse groups can benefit from bringing together different perspectives and information, they also face social “us versus them” divisions that can make collaboration and integration difficult. In sum, unlocking diversity’s benefits requires effective cross-functional communication and collaboration, which is not automatic and harder as organizations grow larger.&lt;/p&gt;

&lt;h4 id=&quot;a-quick-example--the-big-pharmaceutical-dilemma&quot;&gt;A Quick Example — The Big Pharmaceutical Dilemma &lt;/h4&gt;

&lt;p&gt;A drug development team: excited chemists, cautious clinicians, story-focused marketers, and compliance-wary regulators. Each perspective is essential. Good collaboration requires respecting expertise and finding alignment to build a better, safer, more-marketable drug. Poor communication (more common in big pharmaceutical organizations) will stall or sink the project. Functional diversity is only an asset when teams know how to harness it.&lt;/p&gt;

&lt;h2 id=&quot;small-company-big-advantage-why-diversity-shines-in-lean-organizations&quot;&gt;Small Company, Big Advantage: Why Diversity Shines in Lean Organizations&lt;/h2&gt;

&lt;p&gt;So, when does functional diversity pay off the most? In our recent large-scale study of thousands of firms, we found that top-management team functional diversity led to higher profits and growth, but that this effect was limited to smaller and mid-sized companies. Little to no direct benefit was observed in the largest firms&lt;sup&gt;2&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Why might that be the case? Smaller organizations are inherently more nimble. They often have less bureaucracy and more face-to-face interaction, which makes it easier for a diverse team to actually combine their knowledge. There’s seldom a maze of divisions to navigate or a complex chain of command for approvals. An engineer, a designer, and a customer service expert all on a founding team can be like rocket fuel — each one sees different threats and opportunities, which they quickly act on together.&lt;/p&gt;

&lt;p&gt;Team research suggests people in smaller teams also tend to wear multiple hats, fostering appreciation of each other’s challenges. A startup COO might handle HR one day and supply chain the next; with these blurred boundaries, teammates develop a shared context faster — the marketing person understands a few engineering constraints, the engineer appreciates the marketing angle, and so on. Information thus flows with less friction, and the team can leverage every member’s expertise. As “members may need to fill multiple roles and have expertise in multiple areas,”&lt;sup&gt;3&lt;/sup&gt; functional diversity can be pragmatically managed in day-to-day interactions.&lt;/p&gt;

&lt;p&gt;Consider a small hypothetical tech startup. The sales-savvy CEO, code-focused CTO, ex-designer Product head, and banking-trained CFO argue frequently in weekly meetings — but resolve issues in real time, face-to-face. When planning a new feature, user-centric ideas get reality-checked by technical feasibility and budget constraints, leading to creative, balanced solutions (e.g., phased rollouts) in a single afternoon. In large companies, the same cross-functional negotiation often drags on for months via emails and global meetings, if it happens at all.&lt;/p&gt;

&lt;h2 id=&quot;the-diversity-paradox-in-mega-corporations-when-good-mixes-go-awry&quot;&gt;The Diversity Paradox in Mega-Corporations: When Good Mixes Go Awry&lt;/h2&gt;

&lt;p&gt;If functional diversity is so great, why don’t we see clear performance gains in big corporations? The answer lies in what we might call the diversity paradox of large organizations: the conditions that emerge with scale — size, complexity, and hierarchy — stifle the very benefits diversity could provide. As firms grow, they fragment into specialized departments and geographic units. Silos harden, communication turns formal and filtered, and office politics or turf wars take hold. In this environment, a diverse executive team often struggles to share unique insights and reach decisions.&lt;/p&gt;

&lt;p&gt;Consequently, larger companies spawn features that intensify social categorization, as leaders may identify more with their division than with the enterprise overall, breeding rivalries and silo thinking instead of cross-functional synergy. Leadership styles also shift toward directive, top-down approaches; with tens of thousands of employees, participatory dialogue — the kind that lets diverse ideas surface and integrate — gives way to command-and-control methods that limit open exchange. Scale itself creates communication barriers as information filters through layers of management and distant divisions, raising the odds messages will be lost, mistranslated, or unshared. Turf defense may stifle any energy for collaboration. Finally, sheer complexity exacts a toll: Fortune 100 companies juggle countless products, markets, and stakeholder demands. Competing sub-organizational goals erode shared objectives and organizational loyalty. In large, complex companies, the coordination burden drowns out any productive creative friction across diverse executives, making unified collaboration difficult and erasing expected performance gains.&lt;/p&gt;

&lt;p&gt;Of note, recent research reported that 78 percent of senior leaders consider breakdowns in cross-unit collaboration a major problem, yet few feel effective at solving it&lt;sup&gt;4&lt;/sup&gt;. Legacy structures in large firms make dismantling silos difficult, despite enthusiasm for cross-functional collaboration. Thus, large firms frequently fail to capture the diversity gains of smaller ones: diversity on paper, but not effective in practice.&lt;/p&gt;

&lt;h2 id=&quot;turning-friction-into-fuel-how-long-tenured-teams-unlock-diversitys-value&quot;&gt;Turning Friction into Fuel: How Long-Tenured Teams Unlock Diversity’s Value&lt;/h2&gt;

&lt;p&gt;Are big companies doomed to miss out on the upside of diverse teams? Not at all. Some large organizations do manage to consistently harness cross-functional expertise — the key differentiator is &lt;strong&gt;integration&lt;/strong&gt;. Our research on top management teams shows that firms succeeded in leveraging functional diversity when management teams had high levels of shared experience. In our study of 4,500 organizations, when a top management team had long tenure — at least seven years together — the negative effects that diversity had in large firms disappeared.&lt;/p&gt;

&lt;p&gt;A well-integrated top management team is not about everyone thinking alike; it’s about having strong trust, communication, and mutual understanding despite their different backgrounds. Building that kind of cohesion takes time and leadership. Executives who have been through battles together can learn how each other thinks and “bridge semantic gaps” between their functional languages. Studies show shared experience provides time for interpersonal trust and psychological safety to develop, enhancing information exchange and integrating diverse knowledge. This is the secret sauce turning diversity into performance gold: team members feel safe to speak up and have the trust to truly listen to one another, making differences a source of strength rather than division.&lt;/p&gt;

&lt;h2 id=&quot;case-in-point-fords-one-team-revolution&quot;&gt;Case in Point: Ford’s “One Team” Revolution&lt;/h2&gt;

&lt;p&gt;Ford’s dramatic turnaround under CEO Alan Mulally (2006-2014) shows how to turn diverse leadership into real advantage&lt;sup&gt;5&lt;/sup&gt;. The company was a siloed disaster — “warring factions” nearly bankrupted it. Mulally assembled a functionally diverse executive team but didn’t stop there: he created weekly “Business Plan Review” meetings where leaders openly shared progress and issues using color-coded charts. Initially, everyone reported only green (“all good”). Mulally publicly praised the first honest red report and rallied the team to help solve it, sparking psychological safety. Silos eroded as executives began collaborating across functions. The payoff: Ford avoided the 2008 bailout, became the only profitable Detroit automaker during the recession, and reversed massive losses. This cultural shift “from toxically competitive to collaborative”&lt;sup&gt;6&lt;/sup&gt; earned big dividends for the company, showing that even in a huge organization, leaders can foster integration so that diversity delivers on its promise.&lt;/p&gt;

&lt;p&gt;Our research finding about long-tenured teams aligns with stories like Ford’s. If your top management team hasn’t had time to gel, all the diversity in the world might not help — it could even hinder, as members struggle to understand each other. Given time (or deliberate team-building efforts), diverse teams become far greater than the sum of their parts. Critically, stability in a leadership team can amplify the value of diversity, undercutting the advice to reshuffle executives or bring in “fresh blood” frequently. Fresh perspectives are valuable, but don’t underestimate the power of a team that has learned to play well together.&lt;/p&gt;

&lt;h2 id=&quot;revisiting-the-business-case-correlation-causation-and-the-new-evidence&quot;&gt;Revisiting the Business Case: Correlation, Causation, and the New Evidence&lt;/h2&gt;

&lt;p&gt;It’s worth considering what the broader evidence tells us about diversity’s impact on performance, beyond individual stories. In the mid-2010s, influential consulting reports — especially McKinsey’s “Why Diversity Matters” series — suggested companies with greater gender/ethnic diversity in leadership tended toward stronger financial results. The 2015 report noted, in smaller print, that these findings were correlational and do not imply causation, while still highlighting a potential link to success&lt;sup&gt;7&lt;/sup&gt;. One limitation is that the studies measured diversity at the end of a period and linked it to earlier financial performance — which could mean successful companies were simply better positioned to attract and promote diverse leaders, rather than diversity directly driving the gains.&lt;/p&gt;

&lt;p&gt;Recent rigorous studies have poured cold water on the simplistic “diversity = higher profit” narrative when it comes to demographic diversity in large firms. For example, a 2024 academic study of all S&amp;amp;P 500 companies (McKinsey’s focus) found no clear link between executive team racial/gender diversity and future financial performance. The authors concluded that the oft-touted business case for demographic diversity is overstated, and they questioned McKinsey’s results, suggesting that McKinsey likely got the direction of causality wrong&lt;sup&gt;8&lt;/sup&gt;. In short, earlier claims that simply diversifying a leadership team will automatically boost your bottom line are not backed by solid evidence. It’s more complicated.&lt;/p&gt;

&lt;p&gt;Our research differs from McKinsey’s correlational studies (and their subsequent debunking) by examining functional diversity across a large sample and long time period. We tracked performance changes as team compositions evolved and applied methods to address reverse causality and confounders. This allowed us to isolate functional diversity’s true effects: it can improve firm performance, but context is key. In smaller firms, it provides a direct boost; in larger ones, it requires conditions like team integration and tenure to yield benefits. This nuanced view affirms diversity’s value while steering clear of one-size-fits-all claims.&lt;/p&gt;

&lt;p&gt;Executives need to pursue diversity with both optimism and realism. Assembling varied experts or demographics alone guarantees nothing — results depend on how the team is managed and the organizational context in which it operates.&lt;/p&gt;

&lt;h2 id=&quot;making-diversity-work-leadership-lessons-for-harnessing-differences&quot;&gt;Making Diversity Work: Leadership Lessons for Harnessing Differences&lt;/h2&gt;

&lt;p&gt;How can leaders of organizations — big or small — leverage the power of functional diversity while avoiding its pitfalls? A few actionable lessons emerge:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Foster a “One Team” Culture:&lt;/strong&gt; Follow Mulally’s Ford example—break silos with regular forums where leaders explain issues in plain language. Rotate chairs or use facilitators to ensure no single function dominates. The goal is to instill an ethos that &lt;em&gt;we win or lose together&lt;/em&gt;, not in isolation. When every executive feels responsible for collective problems, not just their silo, diverse thinking converges into unified action.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Invest in Integration Mechanisms:&lt;/strong&gt; Don’t leave cohesion to chance. Use cross-functional projects, offsites, co-location, and mixed org structures (e.g., embedding analysts across teams) to create shared experiences and daily exchange. These mechanisms build trust and mutual understanding, simulating the close-knit feel of a smaller firm and encouraging daily knowledge exchange across specialties. As research suggests, familiarity breeds collaboration: when people know each other well, they communicate more freely and productively.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Mind Your Team’s Tenure Balance and consider how the CEO fits in:&lt;/strong&gt; In large companies, frequent reshuffles can disrupt cohesion — stability helps diverse teams gel and develop. When adding new blood, use mentoring or overlap periods. Also, pay attention to leadership development: consider cultivating CEOs who have broad functional experiences in their career (so-called “generalist” CEOs). We found a CEO with a broad background can somewhat substitute for team diversity — perhaps due to a bridging ability to speak everyone’s language. When you don’t want (or can’t form) a heterogeneous team, a cross-functional polymath at the helm can be an alternative way to have multiple perspectives at the top.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Measure and Adapt:&lt;/strong&gt; Finally, treat the impact of diversity as a hypothesis to continually test and refine in your own company. Maybe you’ve improved gender or functional diversity in your leadership team — track how it correlates with outcomes over time, and gather feedback on team dynamics. If results aren’t what you hoped, dig into why: Do people feel included or is the diverse team just “for show”? Are there communication clogs you can clear? By treating this as a learning journey, you avoid the extremes of blind faith or cynical dismissal. Instead, you’ll incrementally discover what mix of talents and dynamics truly drives performance in your context.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;conclusion-beyond-buzzwords-to-better-performance&quot;&gt;Conclusion: Beyond Buzzwords to Better Performance&lt;/h2&gt;

&lt;p&gt;“Diversity” should not just be a box to check or a slogan to trot out at shareholder meetings. It’s a capability — the capability of an organization to think differently within itself, to have constructive debate, and to approach challenges from multiple angles. As we’ve seen, that capability can create tremendous value, but it flourishes under the right conditions: a culture of integration, a size that allows voices to be heard, and leadership that actively cultivates cohesion and trust.&lt;/p&gt;

&lt;p&gt;For a small enterprise, the lesson is clear: embrace functional diversity early. Your small size is an advantage — you can meld your all-stars into a tight-knit, cross-disciplinary unit that outthinks and outmaneuvers bigger rivals. For a large corporation, the task is more delicate: don’t assume that diversity automatically yields dividends. Be intentional in breaking silos and forging a one-team mentality in your upper echelons. It may take time and persistence (old habits die hard, as Ford’s story shows), but the payoff is a leadership team that can actually capitalize on the wealth of knowledge it possesses.&lt;/p&gt;

&lt;p&gt;In the end, the debate about whether diversity matters for performance is settling into a more mature phase. It’s not &lt;em&gt;if&lt;/em&gt; it matters — it’s &lt;em&gt;when&lt;/em&gt; and &lt;em&gt;how&lt;/em&gt; it matters. The newest evidence suggests that different backgrounds do matter — they can be the catalyst for superior performance, but only when combined with unity of purpose and effort. For businesses of all sizes, the mandate is not just to have diversity, but to enable it. The companies that figure this out will enjoy more innovative strategies, more robust decisions, and yes, likely better financial results over the long haul. Those that don’t will continue to wonder why “diversity programs” didn’t magically make a difference.&lt;/p&gt;

&lt;p&gt;In the words of an old proverb, “If you want to go fast, go alone. If you want to go far, go together.” We might add a modern corollary: If you want to go further, go together with people who aren’t just like you — and take the time to truly come together. That’s the savvy way to leverage diversity for performance, turning what could be friction into the engine of future success.&lt;/p&gt;

&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;Frances Fabian et al., “&lt;a href=&quot;https://doi.org/10.1002/job.70025&quot;&gt;When Does Top Management Team Diversity Matter in Large Organizations?&lt;/a&gt;” &lt;em&gt;Journal of Organizational Behavior&lt;/em&gt;, accepted August 24, 2025.&lt;/li&gt;
  &lt;li&gt;Because we study public firms, even the small firms in our sample have at least a few million dollars’ worth of assets.&lt;/li&gt;
  &lt;li&gt;Roni Reiter-Palmon et al., “&lt;a href=&quot;https://doi.org/10.3389/fpsyg.2021.530291&quot;&gt;Teams in small organizations: Conceptual, methodological, and practical considerations&lt;/a&gt;,” &lt;em&gt;Frontiers in Psychology&lt;/em&gt; 12 (2021): 530291.&lt;/li&gt;
  &lt;li&gt;Sharon Ceurvorst et al., “&lt;a href=&quot;https://hbr.org/2024/06/why-cross-functional-collaboration-stalls-and-how-to-fix-it&quot;&gt;Why Cross-Functional Collaboration Stalls, and How to Fix It&lt;/a&gt;,” &lt;em&gt;Harvard Business Review&lt;/em&gt;, June 24, 2024.&lt;/li&gt;
  &lt;li&gt;Ernest Gundling, “&lt;a href=&quot;https://cases.haas.berkeley.edu/2018/01/ford/&quot;&gt;Disruption in Detroit: Ford, Silicon Valley, and Beyond&lt;/a&gt;,” &lt;em&gt;California Management Review Case&lt;/em&gt;, January 1, 2018.&lt;/li&gt;
  &lt;li&gt;Tom Relihan, “&lt;a href=&quot;https://mitsloan.mit.edu/ideas-made-to-matter/fixing-a-toxic-work-culture-breaking-down-barriers&quot;&gt;Fixing a toxic work culture: Breaking down barriers&lt;/a&gt;,” &lt;em&gt;MIT Sloan Management Review&lt;/em&gt;, May 29, 2019.&lt;/li&gt;
  &lt;li&gt;Vivian Hunt et al., “&lt;a href=&quot;https://www.mckinsey.com/insights/organization/~/media/2497d4ae4b534ee89d929cc6e3aea485.ashx&quot;&gt;Diversity Matters&lt;/a&gt;,” &lt;em&gt;McKinsey &amp;amp; Company&lt;/em&gt;, February 2, 2015.&lt;/li&gt;
  &lt;li&gt;Jeremiah Green and John Hand, “&lt;a href=&quot;https://econjwatch.org/articles/mckinsey-s-diversity-matters-delivers-wins-results-revisited&quot;&gt;McKinsey’s Diversity Matters/Delivers/Wins Results Revisited&lt;/a&gt;,” &lt;em&gt;Econ Journal Watch&lt;/em&gt; 21, no. 1 (2024): 5–34.&lt;/li&gt;
&lt;/ol&gt;
</description>
        <pubDate>Wed, 08 Apr 2026 05:10:00 -0700</pubDate>
        <link>http://localhost:4000/2026/04/diversity-matters-overcoming-the-friction-of-different-functional-backgrounds/</link>
        <guid isPermaLink="true">http://localhost:4000/2026/04/diversity-matters-overcoming-the-friction-of-different-functional-backgrounds/</guid>
        
        <category>Diversity</category>
        
        <category>Teams</category>
        
        <category>Performance</category>
        
        <category>Managers</category>
        
        <category>Management performance</category>
        
        <category>Functional diversity</category>
        
        
        <category>[Leadership]</category>
        
        <category>[Teams &amp; Collaboration]</category>
        
        <category>[Talent Management]</category>
        
        <category>[Human Resource Management]</category>
        
        <category>[Demographics]</category>
        
      </item>
    
      <item>
        <title>Making Organizational Culture Great: Moving Beyond Popular Beliefs</title>
        <description>&lt;p&gt;This article is adapted from &lt;em&gt;Making Organizational Culture Great: Moving Beyond Popular Beliefs&lt;/em&gt; published by Columbia Business School Publishing (c) 2026 Jennifer A. Chatman and Glenn R. Carroll. Used by arrangement with the Publisher. All rights reserved.&lt;/p&gt;

&lt;h2 id=&quot;popular-beliefs-about-culture&quot;&gt;Popular Beliefs About Culture&lt;/h2&gt;

&lt;p&gt;Culture baffles even the most experienced managers. Even those who take on the challenge of leveraging their organization’s culture for strategic success often feel mystified, uneasy, or skeptical.&lt;/p&gt;

&lt;p&gt;There are plenty of good reasons for this discomfort. First, culture is not a tangible phenomenon that you can readily see. Second, managers typically receive little or no training in creating or managing culture, unlike their training on tasks like manipulating a spreadsheet or reporting financial outcomes. And few managers are social scientists.&lt;/p&gt;

&lt;p&gt;Yet the stakes—using culture to accelerate your organization’s success or, conversely, letting cultural inertia doom your organization—are high. Most people—especially executives and other top leaders—believe that culture matters enormously for how an organization operates and performs, both in the short and long term.&lt;/p&gt;

&lt;p&gt;Consider the findings of prominent management consulting firms. For example, a 2021 survey of 3,243 executives in forty-two countries by consulting firm PwC found that “81 percent of respondents who strongly believe their organization was able to adapt during the 12 months before our survey was conducted also say their culture has been a source of competitive advantage.” Similarly, Deloitte’s survey of 1,308 adults and executives in 2012 found that “94 percent of executives and 88 percent of employees believe a distinct workplace culture is important to business success.” A study by Korn Ferry found that “91 percent of executives agree that improving corporate culture would increase their organization’s value” and “80 percent of executives ranked culture among the five most important factors driving valuation.” Likewise, Heidrick &amp;amp; Struggles’ 2021 survey of 500 CEOs at companies with a minimum of $2.5 billion in annual revenue found that “82 percent of CEOs . . . surveyed said they had focused on culture as a key priority over the past three years.”&lt;/p&gt;

&lt;p&gt;Academic researchers report similar findings. John Graham and colleagues surveyed 1,348 CFOs and other finance executives around 2020. They found that “91 percent of executives consider corporate culture to be ‘important’ or ‘very important’ at their firm.” Similarly, Glenn Carroll and Lara Yang surveyed 1,926 managers and nonmanagers in the United States about cultural beliefs, perceptions, and experiences. They found that about half the respondents reacted positively to the statement, “In general, culture is more important to organizational performance than strategy or operating model.”&lt;/p&gt;

&lt;p&gt;Despite the professed importance of culture, a Gallup poll found that only 21 percent of employees report feeling connected to their company’s culture.&lt;/p&gt;

&lt;h2 id=&quot;our-approach&quot;&gt;Our Approach&lt;/h2&gt;

&lt;p&gt;We wrote this book to help managers develop and manage culture so they can improve their organization’s performance.&lt;/p&gt;

&lt;p&gt;We do so by helping to sort out what’s what with respect to culture, to consider several of the most salient popular beliefs about culture, and to offer our evaluations as professional social scientists, one of us (Chatman) a psychologist and the other (Carroll) a sociologist. We have been researching organizational culture for decades.&lt;/p&gt;

&lt;p&gt;Our main goal in writing this book is to offer guidance on how to manage organizational culture effectively to those who are responsible for leading and directing organizations and the teams of people within them, including divisions, departments, and other units. We recognize at the outset the difficulty of trying to define culture. Academic definitions often extend broadly to include symbols, behaviors, norms, values, and language. For example, pioneering culture researcher Ed Schein defined organizational culture as “the pattern of basic assumptions which a given group has invented, discovered or developed in learning to cope with its problems of external adaptation and internal integration, which have worked well enough to be considered valid, and therefore to be taught to new members as the correct way to perceive, think and feel in relation to those problems. It is the assumptions which lie behind values and which determine the behavior patterns and the visible artifacts such as architecture, office layout, dress codes, and so on.” Yet we view it as counter-productive for managers to worry about definitional debates. We suggest using a simpler, more straightforward definition of culture as “a system of shared values that define what is important, and norms that define appropriate attitudes and behaviors for organizational members.”&lt;/p&gt;

&lt;p&gt;Culture can also be hard to identify “in the wild.” To see culture in action can be like trying to spot camouflaged animals in the jungle. Adding to the challenge, culturally relevant behaviors can be ambiguous, frequently spawning multiple interpretations.&lt;/p&gt;

&lt;p&gt;And members of an organization’s culture can claim to have a certain culture, but the reality of that culture can be quite different from what people say it is.&lt;/p&gt;

&lt;p&gt;Mainly, we aim to offer guidance on how to manage organizational culture—ranging from crafting a culture that helps an organization execute on its strategy to ensuring that the culture adapts over time. We do so in a particular way—by sorting out what is true about culture, what’s not true, and what appears ambiguous or unresolved. We believe that this approach will enable managers to prioritize what really matters, to understand what is consequential, and to know what to ignore and leave behind.&lt;/p&gt;

&lt;p&gt;To illustrate our approach, consider the issue of measuring culture quantitatively. Many managers wonder whether measuring culture is a good idea, and if so, how and when they would do so. Others wonder whether their hiring processes should evaluate a person’s fit to the culture, and if so, how they can avoid bias and discrimination in the process. And, of course, managers wonder about culture’s impact on organizational performance and how they can ensure that culture helps rather than hinders people trying to accomplish organizational goals. These questions often challenge managers as well as social scientists. But they become even harder to answer, if not impossible, without a solid understanding of the behavioral realities of culture, which requires looking beyond the popular beliefs to find what’s true about culture.&lt;/p&gt;

&lt;h2 id=&quot;strong-culture-organizations&quot;&gt;Strong Culture Organizations&lt;/h2&gt;

&lt;p&gt;We pay particular attention to a culture’s strength, a widely used social science term that is sometimes misunderstood.&lt;/p&gt;

&lt;p&gt;Specifically, social scientists define a &lt;em&gt;strong culture organization&lt;/em&gt; by two pronounced features. First, its members hold a high consensus around the appropriate norms, values, and beliefs of the organization. In other words, people agree about “the right thing to do at this organization.” Second, members display a high intensity of commitment to those norms, values, and beliefs, such that people will act on their own to ensure that others comply. Imagine being taught on the assembly line the “right” way to do things by your fellow worker or being scolded by your peer when violating a normative expectation of timely attendance at meetings. In both cases, the targeted employee is being instructed and sanctioned by a peer rather than a boss. This self-managed aspect of strong culture organizations is part of their appeal, and systematic research (reviewed in chapter 7) shows that strong culture organizations indeed require fewer managers to operate effectively—operationally, they are simply more efficient.&lt;/p&gt;

&lt;p&gt;Note that by this definition, a strong culture organization does not depend on any specific norms or practices (often called “cultural content”) to make it strong—all that’s required is high agreement and high intensity. Another way of saying this is that cultural strength is independent of cultural content. Accordingly, you can find examples of strong culture organizations with virtually any cultural content. Indeed, in this book we will review examples of strong culture organizations engaged in manufacturing, service delivery, research, terrorism, religion (including cults), policing, military activities, and more. We will see strong culture organizations that are large and small, old and newly founded, across a variety of industries and operating in many different countries.&lt;/p&gt;

&lt;p&gt;For example, among commonly recognized strong culture organizations are the following well-known organizations:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Southwest Airlines, with its breezy culture of fun and teamwork&lt;/li&gt;
  &lt;li&gt;The Unification Church, aka the “Moonies,” a religious cult that draws people in and won’t let them go easily&lt;/li&gt;
  &lt;li&gt;Nordstrom, the Seattle-based department store known for exceptional customer service&lt;/li&gt;
  &lt;li&gt;Goldman Sachs, the long-successful investment banking firm that drives performance through information sharing&lt;/li&gt;
  &lt;li&gt;Navy SEALs, Green Berets, Special Weapons and Tactics (SWAT) teams, military-like special forces, who perform highly specialized and immensely difficult tasks for national defense&lt;/li&gt;
  &lt;li&gt;Amazon, the internet-based retailer and cloud service company that seeks to provide consumers with anything they might want to buy online&lt;/li&gt;
  &lt;li&gt;Netflix, the video-streaming service whose culture has enabled it to transform its business model from sending DVDs in the mail to now producing its own entertainment content&lt;/li&gt;
  &lt;li&gt;Google, the information technology company built on internet search that selects people based on sheer curiosity&lt;/li&gt;
  &lt;li&gt;Uber, the ride-hailing platform that dominates most US urban markets&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;five-popular-beliefs&quot;&gt;Five Popular Beliefs&lt;/h2&gt;

&lt;p&gt;We organized this book around five common popular beliefs about culture. We focus on these specific beliefs because they come up most frequently in our consulting activities and classroom teaching. The beliefs capture many of the key challenges that managers face in using culture to improve organizational performance and to sustain performance at high levels.&lt;/p&gt;

&lt;p&gt;The discomfort that many managers feel about culture often leads to misdirected efforts to learn more about culture or to engage consultants and other experts. While we applaud any attempt to gain more knowledge, we think that the biggest challenge that leaders face in managing culture is not intellectual but behavioral. Once you learn to ignore the munificent and often imprecise babble about culture—the underlying forces involved in building and maintaining a strong culture hold little mystery, at least to social scientists.&lt;/p&gt;

&lt;p&gt;Cultures get built and sustained through a variety of well-studied and well-known social and psychological processes. For the most part, social scientists do not question or debate the ways that these processes operate; they reside in the scientific canon as accepted facts. Most important, these processes are not mysterious or technical: You can readily learn and remember them. For example, the relevant managerial levers include culturally selective hiring, intense early socialization, aligning compensation and other incentives, and communicating expectations throughout the organization. The payoff can be substantial because understanding and implementing these processes will enable you to enhance your organization’s well-being and performance.&lt;/p&gt;

&lt;p&gt;By contrast, what is difficult—exceptionally so in our estimation—is the ability to act and behave &lt;em&gt;consistently&lt;/em&gt; in ways that advance your goals as a manager in using these processes. Acting consistently day in and day out, meeting after meeting, activity after activity, in the presence of many different people holding many different positions and playing many different roles, requires self-discipline, deliberateness, and personal presence. Jack Welch, the highly successful long-term CEO at GE, famously said that good leaders had to be “relentlessly boring.” Welch was not advocating that a leader be boring when speaking or acting (he passionately believed the opposite) but he appreciated that if an intelligent and engaged leader repeats the same message consistently hundreds, perhaps thousands, of times, then it will likely get boring to the leader himself. Welch was warning against being harmfully inventive by modifying the message to make it interesting to the leader.&lt;/p&gt;

&lt;p&gt;A second difficulty lies in ensuring a &lt;em&gt;comprehensive&lt;/em&gt; approach to managing culture. That is, leading through culture involves using a variety of managerial levers that affect a variety of processes. While there may be some absolute no-no’s, there is no magic bullet, no single way to build and sustain an organization’s culture. You must attend to several or many levers and processes at once if you want to manage the culture effectively. It’s not just about incentives, training, or culturally selective hiring—it’s about orchestrating a wide range of the levers at your command. The challenge rests with juggling many balls to the same end, some of which may be easy for you and some which you will find hard.&lt;/p&gt;

&lt;p&gt;Finally, offering a &lt;em&gt;coherent&lt;/em&gt; narrative about these consistent and comprehensive practices ensures that members of your organization understand without ambiguity why you wish to cultivate a particular culture, with specific behavioral norms. What do various groups in the organization—the executive team, managers, individual contributors, and others—think you and they will gain by following this particular culture with these values and norms? The narrative contains both formal scripted chapters as well as informal spontaneous ones. Cultural coherence provides the logic for coordination across organizations, something essential for getting big things done.&lt;/p&gt;

&lt;p&gt;So, in our view, success in managing culture does not require you to become a rocket scientist—defining and figuring out difficult unsolved problems. Instead, the challenge involves performing on point. Perhaps an orthopedic surgeon represents a better metaphor—a knowledgeable professional who executes time and time again in a consistent, comprehensive, and coherent way. Managers need to behave consistently, comprehensively, and coherently in scientifically known ways to get the results that they hope for in managing organizational culture. Our goal in writing this book is to demystify culture, to offer clarity about the known and proven ways of leading and managing an effective culture.&lt;/p&gt;

&lt;p&gt;The book will demonstrate that managing through culture typically differs from conventional management in numerous ways. For example, leading through culture involves culturally selective hiring for fit rather than just focusing on skills. New hires are socialized to the culture, and motivational messages aim to inspire instead of offering higher pay. Leaders are often treated like peers, information is shared widely, and peers are involved in supervision as much as bosses. Rules in strong culture organizations also tend to be general and generative rather than specific and detailed. Strong culture organizations typically manage through social control—peer pressure or normative sanctioning—rather than heavy doses of formal control, consisting of rules, policies, and defined procedures.&lt;/p&gt;

&lt;div class=&quot;aside&quot;&gt;
&lt;div class=&quot;row&quot;&gt;
&lt;div class=&quot;col-md-3&quot;&gt;
  &lt;a href=&quot;https://cup.columbia.edu/book/making-organizational-culture-great/9780231221368/&quot;&gt;&lt;img src=&quot;/assets/images/blog/2026-04-chatman-organizational-culture-cover.jpg&quot; /&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;div class=&quot;col-md-9&quot;&gt;
  &lt;h5&gt;Editorial Note&lt;/h5&gt;
  &lt;p&gt;This post is adapted from a chapter of the book &quot;&lt;a href=&quot;https://cup.columbia.edu/book/making-organizational-culture-great/9780231221368/&quot;&gt;Making Organizational Culture Great: Moving Beyond Popular Beliefs&lt;/a&gt;&quot; by Jennifer A. Chatman and Glenn R. Carroll (c) April 2026 Columbia Business School Publishing. Used by arrangement with the Publisher. All rights reserved.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;

</description>
        <pubDate>Thu, 02 Apr 2026 08:33:00 -0700</pubDate>
        <link>http://localhost:4000/2026/04/making-organizational-culture-great-moving-beyond-popular-beliefs/</link>
        <guid isPermaLink="true">http://localhost:4000/2026/04/making-organizational-culture-great-moving-beyond-popular-beliefs/</guid>
        
        <category>Cultural fit</category>
        
        <category>Culture</category>
        
        <category>Leadership</category>
        
        <category>Leadership teams</category>
        
        <category>Leading teams</category>
        
        
        <category>[Culture]</category>
        
        <category>[Leadership]</category>
        
        <category>[Teams &amp; Collaboration]</category>
        
        <category>[Communication]</category>
        
        <category>[Organizational Behavior]</category>
        
      </item>
    
      <item>
        <title>When the State Rewires Logistics: A Framework for Automation Strategy in Infrastructure-Shifting Environments</title>
        <description>&lt;p&gt;Managers deciding where and how to automate supply chains typically anchor their analysis on internal metrics: labour costs, throughput targets, return on investment. Yet in many emerging economies, a parallel transformation is unfolding that makes those static calculations obsolete. Governments are not simply fixing potholes or adding ports; they are fundamentally rewiring logistics infrastructure, integrating previously disconnected modes, and exposing real-time data through digital platforms. India’s PM Gati Shakti a GIS-based coordination system linking 16 ministries to plan railways, roads, ports, inland waterways and logistics parks as one multimodal network illustrates this shift at scale. Brazil’s PAC infrastructure programme and Indonesia’s logistics modernization efforts follow similar logics. The managerial question is no longer “should we automate?” but “how do we design automation strategy when the external logistics system is not fixed but changing under our feet?”&lt;/p&gt;

&lt;p&gt;This article offers a framework for aligning firm-level automation with state-led infrastructure transformation. Drawing on India’s logistics automation market, projected to grow from USD 1.88 billion in 2024 to over USD 8 billion by 2033, alongside estimates that 80% of warehouses will adopt some automation by 2030, the framework identifies when automation amplifies infrastructure gains and when it becomes stranded investment. The core insight: automation returns depend less on “how much technology” and more on &lt;strong&gt;timing, location and complementarity with external policy execution&lt;/strong&gt;. Managers who treat automation roadmaps as independent of infrastructure maps risk deploying expensive assets in precisely the wrong places at the wrong moments.&lt;/p&gt;

&lt;h2 id=&quot;the-puzzle-why-similar-automation-investments-pay-off-differently&quot;&gt;The Puzzle: Why Similar Automation Investments Pay Off Differently&lt;/h2&gt;

&lt;p&gt;Consider two warehouses in India, each investing roughly USD 2 million in semi-automated sorting, put-to-light systems and warehouse management software. The first sits in an industrial estate 60 kilometres from the nearest rail link, relying on road freight through congested corridors. Power is unstable; broadband patchy. The operator cannot access real-time data on vessel berthing, train movements or port congestion because the facility predates government digital platforms. When customer contracts shift or volumes drop, the sorter becomes a fixed-cost burden the firm cannot easily redeploy.&lt;/p&gt;

&lt;p&gt;The second warehouse is located inside a Multi-Modal Logistics Park (MMLP) co-designed with rail sidings, highway access and dedicated power. The operator plugs warehouse management and transportation systems directly into India’s Unified Logistics Interface Platform (ULIP), which exposes over 1,800 data fields from 41 government systems via APIs: vessel schedules, rail rake visibility, customs documentation, and e-way bills. When disruptions hit, the firm can reroute shipments across rail, road or coastal modes because the infrastructure and data to do so exist in real time. The automation investment here amplifies gains from better connectivity, lower dwell times and modal flexibility.&lt;/p&gt;

&lt;p&gt;Both firms “automated.” Only one captured the complementary value from infrastructure transformation. This is not an India-specific problem; it is a structural challenge wherever states are rewiring logistics at the same time firms are automating operations.&lt;/p&gt;

&lt;h2 id=&quot;framework-the-automationinfrastructure-alignment-matrix&quot;&gt;Framework: The Automation–Infrastructure Alignment Matrix&lt;/h2&gt;

&lt;p&gt;To navigate this environment, managers need a simple but robust heuristic. The &lt;strong&gt;Automation–Infrastructure Alignment Matrix&lt;/strong&gt; maps automation intensity against logistics infrastructure quality and policy alignment.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/blog/2026-03-manzoor-fig1.png&quot; style=&quot;box-shadow:none;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 1:&lt;/strong&gt; Automation–Infrastructure Alignment Matrix. The vertical axis represents automation intensity (low to high); the horizontal axis reflects infrastructure and policy alignment (low to high). The four quadrants capture distinct risk–return profiles and strategic choices for managers navigating infrastructure transformation.&lt;/p&gt;
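
&lt;p&gt;The two-axis heuristic in Figure 1 can be expressed as a short decision rule. The sketch below is illustrative only: the 0–10 scales, the cutoff values, and the function name are our assumptions for demonstration, not part of the framework itself.&lt;/p&gt;

```python
# Illustrative sketch of the Automation-Infrastructure Alignment Matrix.
# The 0-10 scales and cutoffs are assumptions, not values from the framework.

def classify_quadrant(automation_intensity: float, alignment: float) -> str:
    """Map a facility's two scores to one of the four quadrants in Figure 1."""
    if 4 <= automation_intensity < 7 and 4 <= alignment < 7:
        return "Q3: Cautious Optimisation"         # medium-medium
    hi_auto = automation_intensity >= 7
    hi_align = alignment >= 7
    if hi_auto and hi_align:
        return "Q4: Policy-Leveraged Automation"   # strategic target zone
    if hi_auto:
        return "Q1: Stranded Automation"           # capex ahead of infrastructure
    if hi_align:
        return "Q2: Latent Potential"              # infrastructure ahead of capex
    return "Low-low: defer major automation capex"

# A highly automated warehouse far from corridors and data platforms
print(classify_quadrant(automation_intensity=9, alignment=2))
# → Q1: Stranded Automation
```

&lt;p&gt;The scores themselves would come from whatever internal scorecard a firm maintains for facility capability and location quality; the value of the rule is forcing the two axes to be assessed together rather than automation intensity alone.&lt;/p&gt;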

&lt;h4 id=&quot;quadrant-1-stranded-automation-high-automation-low-alignment&quot;&gt;Quadrant 1: Stranded Automation (High Automation, Low Alignment)&lt;/h4&gt;

&lt;p&gt;Firms here have deployed sophisticated automation in locations poorly served by infrastructure or excluded from policy coordination. Hardware spend exceeds 50% of total automation investment in India’s logistics market, and much of it sits in precisely this quadrant. A 3PL operating a highly automated warehouse off-corridor faces long, variable lead times that automation cannot compress because external bottlenecks dominate; limited modal choice, forcing reliance on congested road freight; and no access to real-time government logistics data, so planning systems operate with stale information.&lt;/p&gt;

&lt;p&gt;The economic risk is acute in volatile markets. When customer contracts shift, as is common in India’s fragmented 3PL sector where multi-year contracts are rare, firms cannot easily redeploy fixed automation assets. What looked like “strategic” investment becomes a sunk cost eating margin.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managerial implication:&lt;/strong&gt; Avoid front-loading automation in locations where infrastructure quality lags. Treat such investments as &lt;strong&gt;options&lt;/strong&gt;, not commitments: pilot with modular, subscription-priced solutions until infrastructure clarity improves.&lt;/p&gt;

&lt;h4 id=&quot;quadrant-2-latent-potential-low-automation-high-alignment&quot;&gt;Quadrant 2: Latent Potential (Low Automation, High Alignment)&lt;/h4&gt;

&lt;p&gt;Facilities here sit in well-connected locations (Gati Shakti corridors, designated MMLPs, export-oriented industrial clusters) but have not yet automated meaningfully. This is the highest-return space for new automation investment because external enablers already exist. Roughly 35% of India’s logistics automation spend is concentrated in Western India’s logistics hubs, but many smaller operators in those same hubs remain manual. They benefit from better roads, multimodal access and faster customs clearance, but they leave productivity gains on the table by not mechanising internal flows or digitising planning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managerial implication:&lt;/strong&gt; Prioritise these locations for rapid automation scale-up. IRR calculations anchored only on internal labour and error reduction understate true returns, because improved corridor speed, lower dwell times and modal flexibility compound automation gains.&lt;/p&gt;

&lt;h4 id=&quot;quadrant-3-cautious-optimisation-mediummedium&quot;&gt;Quadrant 3: Cautious Optimisation (Medium–Medium)&lt;/h4&gt;

&lt;p&gt;Most mid-sized firms cluster here: incremental automation in moderately connected locations. Operators adopt warehouse management systems, basic mechanisation and some analytics, but avoid big robotics bets. This is rational risk management in uncertain environments, but it also means firms are not positioned to exploit infrastructure breakthroughs when they arrive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managerial implication:&lt;/strong&gt; Build &lt;strong&gt;node-specific automation roadmaps&lt;/strong&gt; tied to infrastructure timelines. When a corridor upgrade, MMLP commissioning or port expansion is confirmed, pre-position modular automation capacity to scale quickly once external bottlenecks clear.&lt;/p&gt;

&lt;h4 id=&quot;quadrant-4-policy-leveraged-automation-high-automation-high-alignment&quot;&gt;Quadrant 4: Policy-Leveraged Automation (High Automation, High Alignment)&lt;/h4&gt;

&lt;p&gt;This is the strategic target zone. Firms here combine high automation intensity with strong infrastructure and policy alignment. They operate in or near MMLPs, plug into government digital platforms like ULIP and the Logistics Data Bank (which tracks 100% of India’s containerised export-import cargo via RFID), and co-invest in workforce skilling aligned with government training modules. Automation here acts as a force multiplier for public infrastructure, not a substitute.&lt;/p&gt;

&lt;p&gt;India’s e-commerce and export-focused FMCG sectors increasingly occupy this quadrant. With logistics costs estimated to have dropped from 13–14% of GDP historically to a 7.8–8.9% band recently, driven by better infrastructure coordination, automation in well-connected nodes delivers compounding returns: faster internal flows meet faster external corridors, and digital integration reduces planning blind spots.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managerial implication:&lt;/strong&gt; Anchor major automation capex to this quadrant. Design investments as &lt;strong&gt;complements to policy execution&lt;/strong&gt;, not independent bets. Sequence automation to follow infrastructure completion, not precede it. See Figure 2 for managerial implications.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/blog/2026-03-manzoor-table1.png&quot; style=&quot;box-shadow:none;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table 1:&lt;/strong&gt; Risk–Return Profiles Across the Four Quadrants&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/blog/2026-03-manzoor-fig2.png&quot; style=&quot;box-shadow:none;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 2:&lt;/strong&gt; Managerial Implications of the Automation–Infrastructure Alignment Matrix&lt;/p&gt;

&lt;h2 id=&quot;three-complementarities-that-determine-automation-roi&quot;&gt;Three Complementarities That Determine Automation ROI&lt;/h2&gt;

&lt;p&gt;Beneath the matrix sits a deeper structural logic. Automation returns depend on complementarity with three external assets the firm does not control:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Complementarity:&lt;/strong&gt; Physical connectivity quality (multimodal links, corridor speeds, terminal throughput). PM Gati Shakti’s core promise is to move India’s logistics infrastructure from fragmented, single-mode planning to integrated, multimodal design. Firms automating yard management, gate systems or control towers inside MMLPs capture gains that isolated warehouses cannot, because trucks, trains and ships actually move faster and more reliably through those nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Complementarity:&lt;/strong&gt; Access to real-time, system-level logistics data. India’s ULIP connects 41 government systems and exposes vessel berthing, rail schedules, port congestion and customs workflows via APIs. When a firm’s WMS or TMS integrates with ULIP, automated planning engines operate on current, accurate data rather than guesswork. Firms off the ULIP grid automate with one hand tied behind their backs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Human Complementarity:&lt;/strong&gt; Availability of supervisors, technicians and planners who can interpret automated systems and manage exceptions. India’s National Logistics Policy explicitly targets workforce skilling through platforms like iGOT and logistics training in higher education. Warehouses in Gati Shakti-linked districts that co-invest in training retain talent and exploit automation more fully. Firms that automate without skilling face high attrition, manual overrides and brittle operations when disruptions hit.&lt;/p&gt;

&lt;p&gt;Managers should audit automation investments against these three dimensions. A robotics project scoring high on network and data complementarity but low on human complementarity will underperform; so will one that ticks the human box but sits in a poorly connected location with no ULIP access.&lt;/p&gt;

&lt;h2 id=&quot;propositions&quot;&gt;Propositions&lt;/h2&gt;

&lt;p&gt;From the framework and complementarities, three testable propositions emerge:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proposition 1:&lt;/strong&gt; &lt;em&gt;The return on warehouse automation investment is significantly higher in locations with high infrastructure and policy alignment (proximity to multimodal hubs, access to government digital platforms) than in otherwise similar locations with low alignment.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proposition 2:&lt;/strong&gt; &lt;em&gt;Automation projects that exhibit strong complementarity across network, data and human dimensions achieve greater operational resilience and lower stranded-asset risk than projects that score high on only one or two dimensions.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proposition 3:&lt;/strong&gt; &lt;em&gt;In policy-active environments, firms that sequence automation to follow infrastructure completion (option-based strategy) outperform firms that front-load automation commitments in advance of infrastructure clarity.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;These propositions offer clear hypotheses for future empirical work and immediate guidance for managers evaluating automation portfolios.&lt;/p&gt;

&lt;h2 id=&quot;managerial-playbook-four-non-obvious-moves&quot;&gt;Managerial Playbook: Four Non-Obvious Moves&lt;/h2&gt;

&lt;h4 id=&quot;move-1-map-policy-execution-timelines-before-finalising-automation-roadmaps&quot;&gt;Move 1: Map policy execution timelines before finalising automation roadmaps&lt;/h4&gt;

&lt;p&gt;Obtain corridor completion schedules, MMLP commissioning dates and digital platform rollout plans. Sequence automation to follow infrastructure, not lead it. In India, firms can access PM Gati Shakti’s spatial data layers through the national portal; similar platforms exist or are emerging in other infrastructure-active economies.&lt;/p&gt;

&lt;h4 id=&quot;move-2-treat-automation-as-real-options-on-policy-delivery&quot;&gt;Move 2: Treat automation as real options on policy delivery&lt;/h4&gt;

&lt;p&gt;In high-uncertainty environments, deploy modular, subscription-priced automation (cloud WMS, robotics-as-a-service) that can scale quickly when infrastructure clarity improves. Avoid large, fixed robotics investments in locations where policy execution risk is high.&lt;/p&gt;

&lt;h4 id=&quot;move-3-build-a-policy-radar-function-in-supply-chain-teams&quot;&gt;Move 3: Build a “policy radar” function in supply chain teams&lt;/h4&gt;

&lt;p&gt;Designate staff to track infrastructure announcements, budget allocations and digital platform rollouts. Front-load pilot automation in locations where the state is over-investing. In India, the western region accounted for over 35% of logistics automation spend precisely because Gati Shakti and earlier programmes concentrated multimodal investments there.&lt;/p&gt;

&lt;h4 id=&quot;move-4-co-invest-in-complementary-workforce-development&quot;&gt;Move 4: Co-invest in complementary workforce development&lt;/h4&gt;

&lt;p&gt;Do not automate in isolation. Partner with government skilling programmes, vocational institutes and logistics academies to ensure supervisors and technicians can exploit automation. Firms that upgrade roles (from manual pickers to robot operators, from paper-based planners to control-tower coordinators) retain talent and sustain automation gains.&lt;/p&gt;

&lt;h2 id=&quot;implications-beyond-india&quot;&gt;Implications Beyond India&lt;/h2&gt;

&lt;p&gt;The framework generalises. Any context where states are simultaneously upgrading hard infrastructure, integrating modes and exposing digital logistics data creates the conditions for policy-leveraged automation. Brazil’s logistics investment corridors, Indonesia’s logistics reform agenda and parts of Southeast Asia’s ASEAN connectivity push all fit this pattern. The managerial challenge is identical: how to time, locate and design automation so it amplifies rather than ignores or contradicts what the state is building.&lt;/p&gt;

&lt;p&gt;The alternative, treating automation as a purely internal, firm-level decision, produces the stranded investments, brittle systems and disappointed returns that characterise Quadrant 1. Managers who ignore the policy map deploy robots in the wrong places, at the wrong times, for the wrong reasons. Those who align automation roadmaps with infrastructure transformation capture compounding gains that static ROI models cannot see.&lt;/p&gt;

&lt;p&gt;India’s logistics automation market, racing from USD 1.88 billion to over USD 8 billion this decade, offers a live laboratory for this dynamic. The lesson is not “automate more” or “automate less.” It is: automate deliberately, with the grain of policy, in locations and at moments where external complementarities are strongest. That is how managers turn automation from a cost centre into a strategic lever, and how they avoid the expensive mistakes visible across India’s warehouses, 3PLs and manufacturing clusters today.&lt;/p&gt;
</description>
        <pubDate>Tue, 31 Mar 2026 05:05:00 -0700</pubDate>
        <link>http://localhost:4000/2026/03/when-the-state-rewires-logistics-a-framework-for-automation-strategy-in-infrastructure-shifting-environments/</link>
        <guid isPermaLink="true">http://localhost:4000/2026/03/when-the-state-rewires-logistics-a-framework-for-automation-strategy-in-infrastructure-shifting-environments/</guid>
        
        <category>Alternative investment rules</category>
        
        <category>Automation</category>
        
        <category>Logistics</category>
        
        <category>Government policy</category>
        
        <category>Supply chain strategies</category>
        
        
        <category>Supply Chain Management</category>
        
        <category>Government</category>
        
        <category>Regulation</category>
        
      </item>
    
      <item>
        <title>Silver Economy Influencers: Unlocking the Untapped Potential of Mature Content Creators</title>
        <description>
</description>
        <pubDate>Sun, 29 Mar 2026 02:00:00 -0700</pubDate>
        <link>http://localhost:4000/2026/03/68-2-silver-economy-influencers-unlocking-the-untapped-potential-of-mature-content-creators/</link>
        <guid isPermaLink="true">http://localhost:4000/2026/03/68-2-silver-economy-influencers-unlocking-the-untapped-potential-of-mature-content-creators/</guid>
        
        
        <category>Digital Marketing</category>
        
        <category>Marketing Strategy</category>
        
        <category>Social Media</category>
        
        <category>Sales</category>
        
        <category>Diversity, Equity, &amp; Inclusion</category>
        
      </item>
    
      <item>
        <title>Optimizing AI Value: What Managers Must Do Differently</title>
        <description>&lt;p&gt;AI adoption across enterprises is moving faster than most leaders expected. Many organizations are rolling out internal GenAI apps/tools with access to multiple models like chatbots, agents, workflows, and automation tools across their business functions. On paper, it looks like real progress. In practice, the value story is far less convincing. Despite widespread adoption, many CFOs and senior leaders are still struggling to see meaningful returns. A Gartner study reported that only &lt;strong&gt;7% of CFOs say they are seeing high ROI from AI in finance&lt;/strong&gt;, even as usage continues to grow.&lt;sup&gt;1&lt;/sup&gt; CIO Dive painted a similar picture: &lt;strong&gt;nearly 80% of AI projects fail to deliver on their original promise,&lt;/strong&gt; and &lt;strong&gt;42% are shut down before they ever reach full production.&lt;/strong&gt;&lt;sup&gt;2&lt;/sup&gt; For something that has attracted so much investment and attention, that’s a sobering reality.&lt;/p&gt;

&lt;p&gt;Most organizations today can be described as &lt;strong&gt;AI-enabled&lt;/strong&gt;. They have the tools and access to powerful technological models. Many have extensive enterprise data. However, few utilize an &lt;strong&gt;AI-advantaged&lt;/strong&gt; approach where AI is applied in a consistent and repeatable way to make better decisions, move faster, and outperform competitors. The gap between having access to AI and benefiting from it keeps getting wider. As a result, an important leadership question has fundamentally changed. It’s no longer about whether an organization can deploy AI. The real question is harder: &lt;strong&gt;can we use AI in a way that creates lasting competitive advantage?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This scenario is frustrating for many companies, especially since the technology itself keeps getting better. Large language models are constantly setting new performance records. However, value creation remains elusive. This suggests that the problem is not the technology, but rather the way organizations choose to apply it. These are &lt;strong&gt;leadership and operating model flaws&lt;/strong&gt;, not failures of technological model selection. While tools can be easily acquired, value cannot. To find value, companies must deliberately design and embed a planned system into decision-making processes and operations.&lt;/p&gt;

&lt;p&gt;AI maturity in organizations will not be defined by more pilots, experiments, or isolated success stories. It will be defined by whether organizations can deploy a central enterprise AI platform that scales across divisions, connects to meaningful enterprise data, and keeps humans in control. That control will not be found “in the loop” rubber-stamping decisions, but in ownership of intent, constraints, and outcomes. Ultimately, AI success will be achieved with something very human: &lt;strong&gt;the ability to guide powerful technology toward outcomes that matter, safely and at scale.&lt;/strong&gt;&lt;/p&gt;

&lt;h2 id=&quot;challenges-in-ai-value-creation&quot;&gt;Challenges in AI Value Creation&lt;/h2&gt;

&lt;p&gt;Five key reasons explain why AI value creation has remained elusive for many organizations, and why they need to deliberately navigate these challenges if they want AI to deliver real value.&lt;/p&gt;

&lt;h4 id=&quot;solving-the-wrong-problem&quot;&gt;Solving the Wrong Problem&lt;/h4&gt;

&lt;p&gt;One of the biggest reasons AI fails to deliver value is that teams focus on building impressive solutions instead of solving the right business problem at scale. AI is often applied to automate tasks or generate insights without first asking whether those activities matter to business outcomes. When the problem is poorly defined, even a high-performing AI system will struggle to create meaningful value. Organizations need to start with the business problem. It is essential to define the value to be realized and assess it against total cost of ownership.&lt;/p&gt;

&lt;h4 id=&quot;fragmented-ai-efforts&quot;&gt;Fragmented AI Efforts&lt;/h4&gt;

&lt;p&gt;Many organizations roll out internal GenAI apps and tools in silos. Implementing the same use case across different functions often results in multiple tools without shared standards, governance, or a consistent approach. Fragmented efforts reduce visibility of use cases across the organization, increase duplication, and drive up costs. A Forrester Consulting study commissioned by Tines found that 88% of IT leaders say AI adoption remains difficult to scale without orchestration, as disconnected systems, data, and teams dilute value.&lt;sup&gt;3&lt;/sup&gt; Organizations need to establish a central enterprise AI platform as a hub, with the flexibility to connect multiple tools and platforms as spokes.&lt;/p&gt;

&lt;h4 id=&quot;inability-to-scale&quot;&gt;Inability to Scale&lt;/h4&gt;

&lt;p&gt;In many organizations, AI pilots and experiments generate early excitement but never evolve into enterprise-scale capabilities. This often happens because initiatives start with the technology rather than a clearly defined business problem, and even when a problem is identified, the solution is not integrated into core enterprise workflows. As a result, promising pilots remain isolated experiments. An MIT NANDA report titled “GenAI Divide: State of AI in Business 2025” found that despite $30–40 billion in enterprise investment into GenAI, 95% of organizations are getting zero return.&lt;sup&gt;4&lt;/sup&gt; Organizations need to stop treating AI as an experiment. It makes strategic sense to start with a production-grade minimum viable product designed to solve a key business problem at scale from day one.&lt;/p&gt;

&lt;h4 id=&quot;reactive-approach-to-risk-and-governance&quot;&gt;Reactive Approach to Risk and Governance&lt;/h4&gt;

&lt;p&gt;Many organizations address risk only after an AI failure or compliance issue occurs, rather than building a responsible AI approach from the ground up. Risk controls, ethics, and accountability are often added reactively, once problems surface. An EY survey found that almost every company has experienced financial losses from AI-related risks, and that organizations with proactive governance and responsible AI practices see fewer incidents and lower impact when issues arise.&lt;sup&gt;5&lt;/sup&gt; Organizations need to create responsible AI policies and guidelines and embed them directly into platforms, workflows, and everyday practices.&lt;/p&gt;

&lt;h4 id=&quot;human-in-the-loop-becomes-a-bottleneck&quot;&gt;Human-in-the-Loop Becomes a Bottleneck&lt;/h4&gt;

&lt;p&gt;While human-in-the-loop sounds like a safe approach to implementing responsible AI practice, it often becomes a barrier to both scale and value. When humans are asked to review AI outputs, decisions slow down and bottlenecks form because humans cannot keep up with the speed at which AI operates. Over time, these reviews turn into routine approvals, reducing the quality of oversight and creating a false sense of control. More importantly, when human effort is spent approving outputs instead of shaping inputs, outcomes, and constraints, organizations struggle to translate AI capabilities into real business value. As AI systems become more autonomous, especially with agentic AI, putting a human in every loop simply does not scale and blurs accountability. Organizations need to move from human-in-the-loop to human-in-control. Human-in-control enables faster decision making, lower risk, and higher ROI, and it is essential for scaling agentic AI while keeping the feedback loop tight.&lt;/p&gt;

&lt;h2 id=&quot;opportunities-for-value-optimization&quot;&gt;Opportunities for Value Optimization&lt;/h2&gt;

&lt;p&gt;To unlock real value, organizations need to move from a human-in-the-loop design pattern to a human-in-control operating model. In this model, humans are no longer positioned as reviewers of AI outputs, but as designers in control. Humans provide the inputs and define the goals, constraints, and success criteria, while AI agents determine the most efficient paths to achieve business outcomes and create value.&lt;/p&gt;

&lt;p&gt;A central enterprise AI platform is foundational to this shift. Without a shared platform, it would be nearly impossible to apply consistent guardrails, governance, or oversight at scale. The platform becomes the locus of control where data access, policies, models, and agents are orchestrated in a unified way.&lt;/p&gt;

&lt;p&gt;This operating model moves control upstream. Instead of approving individual AI outputs, organizations focus on architecting guardrails with clear rules, boundaries, and escalation paths that guide how AI behaves. This allows AI to operate at speed to solve real business problems, while keeping humans accountable for outcomes, not just approvals.&lt;/p&gt;

&lt;p&gt;There are three important strategic considerations to enhance value creation:&lt;/p&gt;

&lt;h4 id=&quot;strategy-1-build-the-right-talent-mix&quot;&gt;Strategy 1: Build the right talent mix&lt;/h4&gt;

&lt;p&gt;Organizations need teams that balance AI &lt;strong&gt;strategy (Head)&lt;/strong&gt;, &lt;strong&gt;responsible AI and ethics (Heart)&lt;/strong&gt;, and &lt;strong&gt;hands-on AI practitioners (Hands)&lt;/strong&gt;. Hiring only strategists creates vision without execution. Hiring only technologists creates solutions without trust. Value emerges when organizations intentionally build teams that combine all three.&lt;/p&gt;

&lt;h4 id=&quot;strategy-2-monitor-and-measure-critical-goals&quot;&gt;Strategy 2: Monitor and measure critical goals&lt;/h4&gt;

&lt;p&gt;For AI to deliver real value, managers need to monitor and manage it intentionally. Identifying and tracking a manageable set of clear signals is usually enough to keep a company’s AI agenda focused on real outcomes. Managers need to ask important questions such as:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Are we solving the right business problem?&lt;/li&gt;
  &lt;li&gt;What business value do we expect if this problem is solved?&lt;/li&gt;
  &lt;li&gt;Is AI helping us make better and faster decisions?&lt;/li&gt;
  &lt;li&gt;Are AI systems staying within defined rules and boundaries?&lt;/li&gt;
  &lt;li&gt;Is ownership for outcomes clearly defined?&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;strategy-3-prepare-to-pivot&quot;&gt;Strategy 3: Prepare to pivot&lt;/h4&gt;

&lt;p&gt;Given that AI continues to evolve, and organizations are learning along the way, there may be a need to change the AI agenda and direction altogether. Companies need to anticipate this reality and be prepared to swiftly change course in order to find optimal value.&lt;/p&gt;

&lt;p&gt;In essence, contemporary managers need to think and act differently in the quest for value creation. They need to be prepared to steer away from the conventional and embrace the unconventional. Managers need to be adept at assessing project progress and implementing solutions in new and creative ways. Keen operational understanding and the ability to make prompt refinements are critical. When managers focus on solving the right problem and track decision impact, speed, control, trust, and adoption, AI value tends to follow.&lt;/p&gt;

&lt;h2 id=&quot;paradigm-and-management-shift&quot;&gt;Paradigm and Management Shift&lt;/h2&gt;

&lt;p&gt;AI value does not fail because models are weak. It fails when organizations deploy AI without clear intent, shared platforms, and accountability for outcomes. For managers, realizing AI value is less about choosing tools and more about how AI is applied, governed, and measured. The seven recommended approaches below can help move the needle on value creation:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Start with the problem, not the technology.&lt;/li&gt;
  &lt;li&gt;Push for a centralized enterprise AI platform.&lt;/li&gt;
  &lt;li&gt;Move from approvals to guardrails.&lt;/li&gt;
  &lt;li&gt;Adopt Human-in-Control.&lt;/li&gt;
  &lt;li&gt;Embed responsible AI by design.&lt;/li&gt;
  &lt;li&gt;Measure what matters, not just accuracy.&lt;/li&gt;
  &lt;li&gt;Build a balanced Center of Excellence team with AI &lt;strong&gt;strategy, Responsible AI,&lt;/strong&gt; and AI &lt;strong&gt;practitioners&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Many firms have found AI value optimization to be elusive. While the right strategic approaches may vary across companies, and catching up with AI progression can be difficult, companies will be well served by rethinking their management methodologies and prioritizing the creation of impactful value.&lt;/p&gt;

&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;Matthew Kiel, “&lt;a href=&quot;https://www.gartner.com/en/articles/how-cfos-can-maximize-roi-from-ai&quot;&gt;Steps CFOs Can Take to Maximize ROI from AI Initiatives&lt;/a&gt;,” &lt;em&gt;Gartner&lt;/em&gt;, accessed March 14, 2025.&lt;/li&gt;
  &lt;li&gt;Lindsey Wilkinson, “&lt;a href=&quot;https://www.ciodive.com/news/AI-project-fail-data-SPGlobal/742590&quot;&gt;AI Project Failure Rates Are on the Rise&lt;/a&gt;,” &lt;em&gt;CIO Dive&lt;/em&gt;, accessed March 14, 2025.&lt;/li&gt;
  &lt;li&gt;Forrester, “&lt;a href=&quot;https://www.tines.com/access/whitepaper/forrester-it-ai-orchestration-2025&quot;&gt;Unlocking AI’s Full Value: How IT Orchestrates Secure, Scalable Innovation&lt;/a&gt;,” &lt;em&gt;Tines&lt;/em&gt;, August 2025.&lt;/li&gt;
  &lt;li&gt;MIT NANDA, “&lt;a href=&quot;https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf&quot;&gt;GenAI Divide: State of AI in Business 2025&lt;/a&gt;,” mlq.ai, accessed July 2025.&lt;/li&gt;
  &lt;li&gt;Joe Depa, “&lt;a href=&quot;https://www.ey.com/en_gl/insights/ai/how-can-responsible-ai-bridge-the-gap-between-investment-and-impact&quot;&gt;How Responsible AI Translates Investment into Impact&lt;/a&gt;,” &lt;em&gt;EY&lt;/em&gt;, accessed October 8, 2025.&lt;/li&gt;
  &lt;li&gt;Jessica Apotheker et al., “&lt;a href=&quot;https://www.bcg.com/publications/2025/are-you-generating-value-from-ai-the-widening-gap&quot;&gt;The Widening AI Value Gap&lt;/a&gt;,” &lt;em&gt;Boston Consulting Group&lt;/em&gt;, accessed January 17, 2025.&lt;/li&gt;
  &lt;li&gt;Alex Singla et al., “&lt;a href=&quot;https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai&quot;&gt;The State of AI in 2025: Agents, Innovation and Transformation&lt;/a&gt;,” &lt;em&gt;McKinsey &amp;amp; Company&lt;/em&gt;, accessed January 17, 2026.&lt;/li&gt;
  &lt;li&gt;PwC, “&lt;a href=&quot;https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html&quot;&gt;2026 AI Business Prediction&lt;/a&gt;,” &lt;em&gt;PwC&lt;/em&gt;, accessed January 17, 2026.&lt;/li&gt;
&lt;/ol&gt;
</description>
        <pubDate>Thu, 26 Mar 2026 04:56:00 -0700</pubDate>
        <link>http://localhost:4000/2026/03/optimizing-ai-value-what-managers-must-do-differently/</link>
        <guid isPermaLink="true">http://localhost:4000/2026/03/optimizing-ai-value-what-managers-must-do-differently/</guid>
        
        <category>Artificial intelligence</category>
        
        <category>Innovation</category>
        
        <category>Digital transformation</category>
        
        <category>Leadership</category>
        
        <category>Learning</category>
        
        
        <category>Talent Management</category>
        
      </item>
    
      <item>
        <title>The AI Automation Trap: Transform from Optimizing Activities to Allocating Capital</title>
        <description>&lt;p&gt;As organizations scale artificial intelligence to increase speed and efficiency, many are unintentionally creating an unmanaged &lt;em&gt;Algorithmic CMO&lt;/em&gt;: a system that relentlessly optimizes tactical performance metrics while quietly eroding long-term customer equity and enterprise value. The problem is not that AI is ineffective, but that it is governed under the wrong paradigm.&lt;/p&gt;

&lt;p&gt;This tension is amplified by the scale of investment now flowing into marketing AI. Global spending exceeded $20 billion in 2024 and continues to grow at a double-digit rate.&lt;sup&gt;2&lt;/sup&gt; Yet deployment is advancing faster than managerial oversight. Fewer than half of marketing teams report systematically measuring the return on AI investments, and formal governance programs remain unevenly established.&lt;sup&gt;3&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Most marketing AI systems are treated as activity optimizers, even though in practice they now function as capital allocators.&lt;sup&gt;4&lt;/sup&gt; These systems repeatedly determine how customer attention, trust, and marketing investment are deployed over time. These decisions shape not only near-term performance but also the durability of customer relationships and the trajectory of long-term value creation.&lt;/p&gt;

&lt;p&gt;Using recent platform changes by Google and Apple as a natural stress test for email marketing, we can observe how optimization-driven AI can accelerate value depletion once customer friction is reduced, and why governance frameworks designed for executional automation are no longer sufficient for autonomous decision systems.&lt;/p&gt;

&lt;h2 id=&quot;when-optimization-outruns-governance&quot;&gt;When Optimization Outruns Governance&lt;/h2&gt;

&lt;p&gt;As AI systems gain partial autonomy over marketing decisions, firms must reframe how those decisions are governed. The automation trap does not arise from automation itself, but from embedding narrow optimization objectives into systems that now operate continuously and at scale. Many of the risks associated with AI-driven marketing remain difficult to observe because customer friction delays feedback. When friction is high (for example, when unsubscribing required multiple steps), the effects of aggressive optimization accumulate slowly and are often absorbed without immediate consequences. When friction is reduced, those same effects surface rapidly, exposing misalignment between what systems are optimizing for and what organizations are trying to preserve.&lt;/p&gt;

&lt;p&gt;Between 2024 and 2025, platform changes such as one-click unsubscribes, centralized subscription management, and AI-generated inbox summaries materially reduced the cost of disengagement in email marketing. These changes did not alter how marketing-related AI systems made decisions. They altered how quickly customers could respond to those decisions.&lt;/p&gt;

&lt;p&gt;The results were immediate for high-frequency senders. Industry benchmarks confirm the pattern: average unsubscribe rates nearly tripled between 2024 and 2025, from 0.08 percent to 0.22 percent, following one-click unsubscribe enforcement.&lt;sup&gt;5&lt;/sup&gt; Some high-volume senders experienced unsubscribe spikes nearly twice their historical average within weeks of Gmail’s Subscription Center rollout in mid-2025.&lt;sup&gt;6&lt;/sup&gt; This shift was not driven by failures in AI systems. The systems were optimizing for activity as designed, but the assumptions embedded in that optimization no longer held once customers could exit instantly.&lt;/p&gt;

&lt;h2 id=&quot;how-engagement-signals-became-misleading&quot;&gt;How Engagement Signals Became Misleading&lt;/h2&gt;

&lt;p&gt;The underlying mechanism was behavioral rather than technical. Engagement-optimized systems interpreted late-stage customer interaction (opens, clicks, brief reactivation) as signals of rising purchase intent. These signals triggered escalation in contact frequency. From the AI’s perspective, customers who continued to engage appeared to offer increasing marginal returns.&lt;/p&gt;

&lt;p&gt;From the customer’s perspective, the outreach intensity crossed individual tolerance thresholds. Customer response was nonlinear. At lower exposure levels, unsubscribe behavior remained relatively stable. Once frequency exceeded tolerance limits, exit rates increased sharply.&lt;/p&gt;

&lt;p&gt;This pattern was most pronounced among long-tenured customers, who had accumulated greater exposure over time. &lt;em&gt;Loyalty did not act as a buffer&lt;/em&gt;. Instead, long-tenured customers exhibited the greatest sensitivity to frequency increases and were often the first to exit. The removal of friction did not create this failure. It revealed it.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/blog/2026-03-kunchala-fig1.png&quot; style=&quot;box-shadow:none;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 1.&lt;/strong&gt; The Efficiency Illusion: When Short-Term Lift Masks Long-Term Erosion.&lt;/p&gt;

&lt;p&gt;Figure 1 illustrates why the problem was not immediately visible to marketing teams. Engagement and response metrics continued to perform well in executive dashboards even as cumulative exposure pushed customers toward exit. The divergence between reported performance and underlying asset health emerged across successive optimization cycles rather than within individual campaigns, creating the illusion of efficiency.&lt;/p&gt;

&lt;h2 id=&quot;why-managerial-responses-fell-short&quot;&gt;Why Managerial Responses Fell Short&lt;/h2&gt;

&lt;p&gt;Faced with rising unsubscribe rates, most marketing leaders treated the issue as a channel performance problem. Typical responses included content refinement, subject-line testing, churn propensity modeling, and budget reallocation. These interventions addressed symptoms rather than causes.&lt;/p&gt;

&lt;p&gt;The core failure was not insufficient prediction accuracy, but the absence of decision constraints. While predictive models identified commercial risk, AI systems remained free to act on the same engagement signals that were creating the risk. As long as engagement was treated as a universal indicator of demand, mitigation efforts only delayed the inevitable exit without altering the underlying decision logic.&lt;/p&gt;

&lt;p&gt;This logic holds under high-friction conditions, where disengagement is costly and delayed. Once friction is removed, the logic reverses. &lt;em&gt;Engagement increasingly reflects tolerance rather than intent, and escalation accelerates exit rather than conversion&lt;/em&gt;.&lt;/p&gt;

&lt;h2 id=&quot;where-the-automation-trap-enters-the-operating-model&quot;&gt;Where the Automation Trap Enters the Operating Model&lt;/h2&gt;

&lt;p&gt;The automation trap does not emerge from flawed algorithms or poor intent. It enters through the everyday operating structures that govern modern marketing organizations. Most firms already believe in long-term customer value at a strategic level. The failure occurs because that belief is not translated into how decisions are automated, reviewed, and rewarded.&lt;/p&gt;

&lt;p&gt;Most marketing dashboards are built to monitor flows rather than stocks. Engagement rates, conversion lift, and short-term return provide immediate feedback on campaign performance, but they offer no visibility into cumulative exposure or tolerance erosion. As a result, AI systems can continue to escalate contact intensity while executive dashboards signal success. Groupon’s trajectory illustrates this pattern: their aggressive email campaigns initially drove impressive conversion metrics, but “voucher fatigue” and message overload eventually turned customers away at scale, contributing to years of subscriber losses and a stock price decline of over 85 percent from its IPO high.&lt;sup&gt;7&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;This creates a structural blind spot. By the time churn or unsubscribe rates register as a concern, the underlying depletion has already occurred. Customer exit is not a leading indicator; it is the final realization of a long sequence of decisions that appeared locally optimal. Because dashboards aggregate performance across time and customers, they conceal the nonlinear dynamics that occur at the individual level when tolerance thresholds are crossed.&lt;/p&gt;

&lt;p&gt;Most organizations assume that existing analytics frameworks will catch these risks. In practice, they rarely do. &lt;em&gt;Marketing mix models&lt;/em&gt; detect saturation effects at the channel level, but they operate on historical aggregates and are recalibrated infrequently. They are not designed to intervene in real-time escalation decisions driven by AI systems. &lt;em&gt;Attribution models&lt;/em&gt; reward incremental response, even when that response reflects tolerance rather than demand. &lt;em&gt;Churn or propensity models&lt;/em&gt; may identify customers at risk, but they typically function as descriptive overlays rather than as binding constraints on automated action. The result is a governance gap: predictive insight exists, but decision authority remains embedded within execution systems that are rewarded for short-term lift. Risk is identified, but not prevented.&lt;/p&gt;

&lt;p&gt;Incentives further entrench the problem. Marketing teams are evaluated on near-term revenue or efficiency, and AI systems inherit those targets. No single team owns cumulative tolerance or attention depletion, so those outcomes never appear in quarterly objectives. Each optimization cycle improves local metrics, yet collectively these decisions exhaust customer trust faster than it can be rebuilt.&lt;/p&gt;

&lt;h2 id=&quot;automation-without-ownership&quot;&gt;Automation Without Ownership&lt;/h2&gt;

&lt;p&gt;The defining issue is not automation itself, but automation without ownership of long-term costs. When AI systems gain autonomy over contact frequency, spend allocation, and engagement escalation, they implicitly make value judgments about how customer assets should be used. If leadership does not explicitly set those rules, optimization defaults will set them instead.&lt;/p&gt;

&lt;p&gt;This dynamic reflects a broader pattern. Marketing leaders are investing aggressively in AI to automate personalization, optimize campaigns, and scale content, believing they are modernizing marketing operations. According to a 2025 Gartner survey, 65 percent of CMOs say advances in AI will transform their role, and McKinsey’s Global AI Survey found that marketing and sales functions report the highest revenue impact from AI adoption.&lt;sup&gt;8&lt;/sup&gt; In practice, they are embedding a governance problem. AI systems do exactly what they are designed to do. When instructed to maximize engagement, response, or short-term return, they become highly effective at extracting value from customers. Extraction, however, is not the same as value creation.&lt;/p&gt;

&lt;p&gt;As AI becomes more effective at local optimization, the risk of undermining long-term customer assets increases. Incentive structures that were once moderated by organizational friction are now embedded directly into systems that act continuously and at scale. When customer friction is reduced, the consequences of this misalignment surface abruptly.&lt;/p&gt;

&lt;h2 id=&quot;from-philosophy-to-mechanism-closing-the-gap&quot;&gt;From Philosophy to Mechanism: Closing the Gap&lt;/h2&gt;

&lt;p&gt;The idea that customers should be managed as financial assets rather than activity targets is not new.&lt;sup&gt;9&lt;/sup&gt; Marketing philosophy has largely embraced this view, yet marketing technology has lagged.&lt;/p&gt;

&lt;p&gt;Organizations have deployed AI systems optimized for rapid, short-cycle gains, while expecting outcomes that require long-horizon asset stewardship. Even if leadership believes customers are assets, they have deployed autonomous agents that treat attention as a renewable resource. The frictionless environment of 2025 has simply exposed the mechanistic gap between strategic intent and operational reality.&lt;/p&gt;

&lt;p&gt;The gap is not a modeling issue; it is a categorization error. Marketing AI systems are still governed as executional tools, even though they now make repeated decisions that allocate scarce customer resources over time. They control contact frequency, allocate promotional spend, prioritize customer segments, and determine when to intensify or withdraw engagement. These capabilities are embedded in frequency-optimizing email engines, next-best-action models within CRM platforms, automated bidding and budget allocation tools, and lifecycle orchestration systems that dynamically adjust outreach across channels. Each of these decisions draws down a finite stock of customer attention and trust while shaping future revenue potential. In effect, these systems allocate customer capital over time, even though they are rarely governed as such.&lt;/p&gt;

&lt;h2 id=&quot;governing-ai-as-a-capital-allocation-system&quot;&gt;Governing AI as a Capital Allocation System&lt;/h2&gt;

&lt;p&gt;To bridge this gap, leadership must shift from activity optimization to asset stewardship. Table 1 contrasts these two governance paradigms. Under activity optimization, AI is evaluated as a campaign execution engine, rewarded for improving short-term flows such as clicks and conversions. Under asset stewardship, AI functions as a fiduciary mechanism responsible for preserving and compounding customer equity over time.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/blog/2026-03-kunchala-table1.png&quot; style=&quot;box-shadow:none;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table 1.&lt;/strong&gt; The Shift from Optimization to Stewardship&lt;/p&gt;

&lt;p&gt;Recognizing AI as a capital allocation system clarifies the core governance challenge: not where performance can be increased, but where automation should be constrained.&lt;/p&gt;

&lt;h2 id=&quot;governing-decisions-with-a-response-tolerance-lens&quot;&gt;Governing Decisions with a Response-Tolerance Lens&lt;/h2&gt;

&lt;p&gt;Effective governance requires explicit constraints on where and how automation is allowed to act. A response-tolerance lens helps leaders distinguish where engagement compounds value from where it accelerates depletion. Because AI systems now continuously allocate scarce marketing capital (customer attention, message opportunities, and budget) across the portfolio, the framework directly informs where those resources should flow. High-tolerance, high-response segments justify sustained investment; low-tolerance segments require capital restraint to preserve relationship durability. Without this lens, AI defaults to short-term optimization, misallocating capital toward segments that convert quickly but churn faster, systematically depleting the customer assets the organization depends on for long-term growth.&lt;/p&gt;

&lt;p&gt;Consider four archetypes. Customers with high response and high tolerance compound value under sustained engagement. Those with high response but low tolerance, often the most commercially attractive in the short term, convert readily but exit quickly under pressure. They are precisely the segment AI systems escalate toward, and precisely the segment most damaged by that escalation. Customers with low response but high tolerance hold latent value that optimization logic ignores. And those with neither tolerance nor response are depleted by continued contact with no offsetting return.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/blog/2026-03-kunchala-fig2.png&quot; style=&quot;box-shadow:none;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 2.&lt;/strong&gt; A Response-Tolerance Lens for Governing AI Decisions&lt;/p&gt;
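&lt;p&gt;As a minimal sketch, the four archetypes can be read as a routing rule that constrains automated escalation. The labels and rule text below are illustrative paraphrases of the segments described above, not a calibrated scoring model.&lt;/p&gt;

```python
# Illustrative sketch of the response-tolerance lens as a governance
# routing rule. The archetype labels paraphrase the four segments in
# the text; the 'high'/'low' inputs stand in for a real scoring model.
def automation_constraint(response, tolerance):
    """Map a (response, tolerance) archetype to an automation constraint."""
    rules = {
        ("high", "high"): "sustain engagement: value compounds",
        ("high", "low"): "cap contact frequency: escalation accelerates exit",
        ("low", "high"): "invest patiently: optimization ignores latent value",
        ("low", "low"): "withdraw contact: depletion with no offsetting return",
    }
    return rules[(response, tolerance)]

# The segment AI systems escalate toward is precisely the one constrained here:
print(automation_constraint("high", "low"))
```

&lt;p&gt;The point of the rule table is governance, not prediction: the high-response, low-tolerance cell is where automated escalation is most profitable in the short term and most damaging over time.&lt;/p&gt;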

&lt;p&gt;The purpose of this lens is not to improve prediction accuracy, but to determine where decision rights must be constrained based on long-term cost. Models may identify risk, but governance determines whether systems are allowed to act on it.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Recent changes by Google and Apple have provided unexpected clarity: when customers can disengage instantly, short-term performance signals no longer reliably indicate long-term value. Systems optimized for engagement can appear efficient while accelerating customer exit.&lt;/p&gt;

&lt;p&gt;The lesson extends beyond email. As AI systems take on larger roles in customer decisioning, the managerial challenge shifts from improving optimization to governing its consequences. The path forward is not to abandon AI, but to govern it as what it has become: a capital allocation system that shapes long-term customer equity with every decision it makes at the customer level.&lt;/p&gt;

&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;Author’s analysis of an anonymized dataset derived from professional practice, comprising approximately 8 million unique retail user interactions (2024–2025).&lt;/li&gt;
  &lt;li&gt;Grand View Research, &lt;a href=&quot;https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-marketing-market-report&quot;&gt;Artificial Intelligence in Marketing Market (2025–2030)&lt;em&gt;: Size, Share &amp;amp; Trends Analysis Report&lt;/em&gt;&lt;/a&gt;, October 2024.&lt;/li&gt;
  &lt;li&gt;Jasper, &lt;a href=&quot;https://www.jasper.ai/state-of-ai-marketing-2025&quot;&gt;The State of AI in Marketing 2025&lt;/a&gt; (2025).&lt;/li&gt;
  &lt;li&gt;V. Kumar, &lt;em&gt;Valuing Customer Engagement: Strategies to Measure and Maximize Profitability&lt;/em&gt;, 2nd ed. (Palgrave Macmillan, 2024).&lt;/li&gt;
  &lt;li&gt;Duncan Elder, “&lt;a href=&quot;https://www.mailerlite.com/blog/compare-your-email-performance-metrics-industry-benchmarks&quot;&gt;Email Marketing Benchmarks 2025&lt;/a&gt;,” &lt;em&gt;MailerLite&lt;/em&gt;, December 2025; &lt;a href=&quot;https://www.campaignmonitor.com/resources/guides/email-marketing-benchmarks/&quot;&gt;Campaign Monitor&lt;/a&gt;, &lt;em&gt;Email Marketing Benchmarks 2024&lt;/em&gt;.&lt;/li&gt;
  &lt;li&gt;Michael Wright, “&lt;a href=&quot;https://www.salesforce.com/blog/email-unsubscribe-rates/&quot;&gt;Why Email Unsubscribe Rates Are on the Rise&lt;/a&gt;,” Salesforce Blog, September 2025.&lt;/li&gt;
  &lt;li&gt;Natasha Frost, “‘&lt;a href=&quot;https://www.modernretail.co/retailers/the-proposition-has-changed-how-groupon-fell-from-grace/&quot;&gt;The Proposition Has Changed’: How Groupon Fell from Grace&lt;/a&gt;,” &lt;em&gt;Modern Retail&lt;/em&gt;, July 9, 2020; Michael Morisy, “Groupon FTC Complaints Allege Never-Ending Spam Emails,” &lt;em&gt;MuckRock&lt;/em&gt;, February 26, 2013, &lt;a href=&quot;https://www.muckrock.com/news/archives/2013/feb/26/groupon/&quot;&gt;https://www.muckrock.com/news/archives/2013/feb/26/groupon/&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Gartner, “&lt;a href=&quot;https://www.gartner.com/en/newsroom/press-releases/2024-11-17-gartner-survey-finds-65-percent-of-cmos-say-advances-in-ai-will-dramatically-change-their-role-in-the-next-two-years&quot;&gt;Gartner Survey Finds 65% of CMOs Say Advances in AI Will Dramatically Change Their Role in the Next Two Years&lt;/a&gt;” (press release, November 17, 2025); McKinsey &amp;amp; Company, “&lt;a href=&quot;https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai&quot;&gt;The State of AI in 2025: Agents, Innovation, and Transformation&lt;/a&gt;,” November 2025.&lt;/li&gt;
  &lt;li&gt;V. Kumar, “A Theory of Customer Valuation: Concepts, Metrics, Strategy, and Implementation,” &lt;em&gt;Journal of Marketing&lt;/em&gt; 82, no. 1 (2018): 1–19.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4 id=&quot;selected-references&quot;&gt;Selected References&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;Fader, Peter. &lt;em&gt;Customer Centricity: Focus on the Right Customers for Strategic Advantage&lt;/em&gt;. Wharton Digital Press, 2012.&lt;/li&gt;
  &lt;li&gt;Gupta, Sunil, and Donald R. Lehmann. &lt;em&gt;Managing Customers as Investments: The Strategic Value of Customers in the Long Run&lt;/em&gt;. Wharton School Publishing, 2005.&lt;/li&gt;
  &lt;li&gt;Rust, Roland T., Valarie A. Zeithaml, and Katherine N. Lemon. &lt;em&gt;Driving Customer Equity: How Customer Lifetime Value Is Reshaping Corporate Strategy&lt;/em&gt;. The Free Press, 2000.&lt;/li&gt;
  &lt;li&gt;Brynjolfsson, Erik, and Michael D. Smith. “Frictionless Commerce? A Comparison of Internet and Conventional Retailers.” &lt;em&gt;Management Science&lt;/em&gt; 46, no. 4 (2000): 563–585.&lt;/li&gt;
&lt;/ul&gt;
</description>
        <pubDate>Tue, 24 Mar 2026 01:39:00 -0700</pubDate>
        <link>http://localhost:4000/2026/03/the-ai-automation-trap-transform-from-optimizing-activities-to-allocating-capital/</link>
        <guid isPermaLink="true">http://localhost:4000/2026/03/the-ai-automation-trap-transform-from-optimizing-activities-to-allocating-capital/</guid>
        
        <category>Artificial intelligence</category>
        
        <category>Governance</category>
        
        <category>Marketing</category>
        
        <category>Strategy</category>
        
        <category>Customer lifetime value</category>
        
        
        <category>[Artificial Intelligence]</category>
        
        <category>[Strategy]</category>
        
        <category>[Marketing Strategy]</category>
        
        <category>[Corporate Governance]</category>
        
        <category>[Customer Relationships]</category>
        
      </item>
    
      <item>
        <title>Advancing Circularity: The Dynamic Capabilities to Drive Transformative Change</title>
        <description>
</description>
        <pubDate>Mon, 23 Mar 2026 02:00:00 -0700</pubDate>
        <link>http://localhost:4000/2026/03/68-2-advancing-circularity-the-dynamic-capabilities-to-drive-transformative-change/</link>
        <guid isPermaLink="true">http://localhost:4000/2026/03/68-2-advancing-circularity-the-dynamic-capabilities-to-drive-transformative-change/</guid>
        
        
        <category>[Circular Economy]</category>
        
        <category>[Capabilities]</category>
        
        <category>[Business Models]</category>
        
        <category>[Sustainability]</category>
        
        <category>[Planning &amp; Forecasting]</category>
        
      </item>
    
      <item>
        <title>Governing the Agentic Enterprise: A New Operating Model for Autonomous AI at Scale</title>
        <description>&lt;blockquote&gt;
  &lt;p&gt;As organizations deploy increasingly autonomous artificial intelligence systems, many are discovering that existing governance and operating models are ill-suited to software that can independently perceive, decide, and act. While recent advances in generative AI have focused on model capability, the more consequential challenge for enterprises lies in governing systems that function as organizational actors rather than decision-support tools. This article argues that autonomous AI represents an institutional shift, not merely a technological one.&lt;/p&gt;

  &lt;p&gt;To address this challenge, the article &lt;strong&gt;proposes&lt;/strong&gt; the &lt;strong&gt;Agentic Operating Model (AOM)&lt;/strong&gt;, a &lt;strong&gt;conceptual&lt;/strong&gt; and &lt;strong&gt;illustrative&lt;/strong&gt; governance framework that specifies the structural conditions required to operate autonomous agents responsibly at enterprise scale. The AOM comprises four interdependent layers (cognitive specialization, coordination architecture, real-time control, and organizational governance) that together constrain autonomy while preserving its benefits. Drawing on illustrative enterprise vignettes, the article demonstrates how failures in agentic systems typically arise from misalignment across these layers rather than from deficiencies in model performance.&lt;/p&gt;

  &lt;p&gt;The article contributes a practical and conceptual foundation for leaders seeking to scale autonomous AI without sacrificing accountability, resilience, or trust. By reframing agentic AI as an operating-model problem, it offers senior executives a systematic approach to governing autonomy as a durable source of competitive advantage.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;the-governance-failure-nobody-planned-for&quot;&gt;The Governance Failure Nobody Planned For&lt;/h2&gt;

&lt;p&gt;Over the past decade, organizations have invested heavily in artificial intelligence to improve efficiency, insight, and decision-making. Early deployments focused on prediction engines, recommendation systems, and, more recently, generative AI tools that assist employees with writing, coding, and analysis. These systems were largely framed as &lt;em&gt;tools&lt;/em&gt;: powerful, but ultimately subordinate to human judgment. In recent years, that framing quietly broke down.&lt;/p&gt;

&lt;p&gt;Enterprises are now deploying AI systems that do not merely assist work but &lt;em&gt;perform it&lt;/em&gt;. Autonomous agents monitor markets, negotiate with vendors, route logistics, approve transactions, remediate IT incidents, and coordinate with other software agents often without human intervention. These systems perceive their environment, reason over goals, take actions through enterprise systems, and collaborate with other agents to achieve outcomes. In organizational terms, they have crossed a threshold from tools to actors.&lt;/p&gt;

&lt;p&gt;This shift has exposed a managerial blind spot. Most firms govern autonomous AI using control mechanisms designed for deterministic software or transactional analytics models. Security teams frequently prioritize &lt;strong&gt;infrastructure perimeters&lt;/strong&gt;, which agents can bypass by using legitimate credentials to execute unintended, non-deterministic actions. Similarly, while some firms treat compliance as a core value, many risk functions still rely on static checklists, and IT departments frequently treat agents as standard applications. These approaches are increasingly inadequate for software that independently perceives, decides, and acts at machine speed while exercising delegated decision rights. Because these systems have transitioned from tools to actors, their failures resemble &lt;strong&gt;unpredictable organizational breakdowns&lt;/strong&gt; rather than traditional software bugs, and they are significantly more difficult to explain or remediate using standard technical logic.&lt;/p&gt;

&lt;p&gt;Recent incidents across several sectors illustrate that when autonomous agents operate at machine speed, failures resemble organizational breakdowns rather than simple software bugs. In 2024, the &lt;strong&gt;Moffatt v. Air Canada&lt;/strong&gt; case established a critical legal precedent: organizations are liable for promises made by their autonomous agents, even when those actions contradict internal policy. Furthermore, the &lt;strong&gt;DPD “Rogue” Chatbot incident&lt;/strong&gt;&lt;sup&gt;8&lt;/sup&gt; demonstrated how a lack of real-time behavioral monitoring allows agents to deviate into unintended actions, such as criticizing their own firm, once a system update alters their reasoning boundaries.&lt;/p&gt;

&lt;p&gt;More technically concerning is the rise of &lt;strong&gt;indirect prompt injection&lt;/strong&gt;, such as the &lt;strong&gt;“EchoLeak” vulnerability&lt;/strong&gt;.&lt;sup&gt;9&lt;/sup&gt; In this scenario, malicious instructions embedded in external data sources were used to manipulate an agent’s legitimate credentials to exfiltrate internal data, bypassing traditional perimeter defenses. These cases show that governance is no longer a post-deployment checklist but a requirement for real-time control.&lt;/p&gt;

&lt;p&gt;The core challenge, therefore, is no longer how to make AI systems more intelligent. It is how to &lt;em&gt;govern&lt;/em&gt; software that can independently decide and act at scale. This article argues that autonomous AI requires a fundamentally new operating model, one that treats agents as organizational actors embedded within explicit structures of coordination, control, and accountability.&lt;/p&gt;

&lt;p&gt;To address this gap, the article makes three contributions. First, it clarifies what distinguishes agentic AI from earlier generations of automation and generative tools, emphasizing why autonomy changes the nature of managerial responsibility. Second, it introduces the &lt;strong&gt;Agentic Operating Model (AOM)&lt;/strong&gt;, a layered framework that explains how enterprises can design, deploy, and govern autonomous agents at scale. Third, it examines the implications of this model for senior leaders, reframing AI governance as a source of operational resilience rather than a constraint on innovation.&lt;/p&gt;

&lt;h2 id=&quot;from-tools-to-actors-what-makes-ai-agentic&quot;&gt;From Tools to Actors: What Makes AI Agentic&lt;/h2&gt;

&lt;p&gt;For much of its history in organizations, artificial intelligence has been framed as a form of decision support. Predictive models scored risks, recommendation systems suggested options, and generative tools produced drafts for human review. Even when outputs were sophisticated, responsibility remained firmly with human decision-makers. The introduction of autonomous agents disrupts this arrangement.&lt;/p&gt;

&lt;p&gt;Agentic AI differs from prior systems along three dimensions: autonomy, persistence, and delegation. Autonomous agents do not merely respond to prompts; they initiate actions based on environmental signals. Persistence allows agents to operate continuously over time, learning from feedback and adapting behavior without repeated human instruction. Delegation grants agents formal authority to act on behalf of the organization, including access to systems of record and the ability to commit resources.&lt;/p&gt;

&lt;p&gt;Together, these characteristics transform AI systems into organizational actors. Like human employees, agents operate within defined roles, pursue assigned objectives, and interact with others to complete work. Also like humans, their behavior is non-deterministic and context-sensitive. This combination complicates oversight, as managers cannot rely on exhaustive rules or static testing to anticipate all possible actions.&lt;/p&gt;

&lt;p&gt;The shift from tools to actors also alters accountability. When a spreadsheet produces an error, responsibility lies with the analyst who used it. When an autonomous agent approves a transaction or reroutes a shipment, responsibility is often ambiguous. Was the failure caused by the model, the data, the configuration, or the delegation decision itself? Without an explicit operating model, organizations struggle to answer these questions consistently.&lt;/p&gt;

&lt;p&gt;Recognizing agents as actors clarifies why governance must move beyond technical safeguards. Just as firms establish policies, reporting structures, and controls for human workers, they must design institutional arrangements for digital ones. The Agentic Operating Model introduced in the next section provides a foundation for this shift by embedding autonomy within explicit layers of coordination, control, and governance.&lt;/p&gt;

&lt;h2 id=&quot;the-agentic-operating-model-aom&quot;&gt;The Agentic Operating Model (AOM)&lt;/h2&gt;

&lt;p&gt;As organizations experiment with autonomous agents, many encounter the same pattern: individual agents perform well in isolation, yet the overall system behaves unpredictably when deployed at scale. This gap reflects a mismatch between the complexity of agentic systems and the operating models used to manage them. Traditional IT operating models assume deterministic behavior, centralized control, and clearly bounded applications. Agentic systems violate all three assumptions.&lt;/p&gt;

&lt;p&gt;To address this challenge, this article proposes the &lt;strong&gt;Agentic Operating Model (AOM)&lt;/strong&gt;, a governance-centric framework that specifies the minimum structural components required to operate autonomous AI responsibly and effectively at enterprise scale. The AOM consists of four interdependent layers: the Cognitive Layer, the Coordination Layer, the Control Layer, and the Governance Layer. Each layer addresses a distinct managerial problem, and failure in any one undermines the stability of the entire system.&lt;/p&gt;

&lt;h4 id=&quot;the-cognitive-layer-specialized-intelligence&quot;&gt;The Cognitive Layer: Specialized Intelligence&lt;/h4&gt;

&lt;p&gt;The Cognitive Layer defines how intelligence is instantiated within the organization. Rather than relying on a single, general-purpose model, agentic enterprises increasingly deploy multiple specialized models embedded within autonomous agents. These models are optimized for specific domains, tasks, and performance constraints.&lt;/p&gt;

&lt;p&gt;This specialization is not merely a technical optimization; it is a governance choice. Smaller and domain-specific models are easier to evaluate, constrain, and audit than monolithic systems trained on broad, opaque data sources. They reduce hallucination risk in regulated domains and enable clearer alignment between an agent’s capabilities and its delegated responsibilities. In the AOM, intelligence is deliberately fragmented to make accountability tractable. While this approach requires greater investment in domain-specific training and model management compared to monolithic systems, it avoids the ‘generality risk’ where broad decision authority leads to unpredictable outcomes. For most enterprises, the tradeoff of higher initial complexity is offset by the ability to precisely evaluate, constrain, and audit agents in regulated or high-risk domains.&lt;/p&gt;

&lt;p&gt;Organizations that neglect this layer often conflate autonomy with generality, embedding broad decision authority into overly capable models. The result is agents whose behavior is difficult to predict and even harder to justify after the fact. By contrast, firms that design cognitive specialization as an operating principle create agents whose scope of action is intelligible by design.&lt;/p&gt;

&lt;h4 id=&quot;the-coordination-layer-from-hierarchies-to-swarms&quot;&gt;The Coordination Layer: From Hierarchies to Swarms&lt;/h4&gt;

&lt;p&gt;The Coordination Layer governs how agents interact with one another to accomplish complex tasks. While early systems relied on centralized “Hub-and-Spoke” orchestration, modern enterprises are shifting toward &lt;strong&gt;Swarm Intelligence&lt;/strong&gt;, where agents operate via decentralized local rules and shared goals without a single point of failure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-World Applications of Agentic Coordination&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As agentic deployments mature, the transition to decentralized collaboration has moved from theory to core enterprise operations:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Multi-Agent Insurance Swarms:&lt;/strong&gt; In the insurance sector, firms have moved beyond manual task-routing to “collaborative multi-agent teams.” A single claim may be processed by a swarm of seven specialized agents (including Planner, Coverage, and Fraud agents) communicating through a shared environment to verify policies and weather data simultaneously. This shift has enabled leaders like &lt;strong&gt;Lemonade&lt;/strong&gt; to process approximately one-third of claims autonomously, with its “AI Jim” agent achieving settlements in as little as three seconds.&lt;sup&gt;6&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Autonomous Logistics “Orchestras”:&lt;/strong&gt; Global logistics leaders like &lt;strong&gt;Maersk&lt;/strong&gt; and &lt;strong&gt;Unilever&lt;/strong&gt; are using agentic meshes to respond to real-time disruptions. Maersk’s “Project Autosub” deployed autonomous vessel agents that coordinate route optimization and port scheduling without human intervention, achieving a &lt;strong&gt;23% reduction in fuel consumption&lt;/strong&gt;. Similarly, &lt;strong&gt;Unilever&lt;/strong&gt; uses reactive swarms to autonomously negotiate with carriers and reorganize warehouse logistics during shipping delays.&lt;sup&gt;1,2,5&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Decentralized Financial Consensus:&lt;/strong&gt; High-frequency trading and fraud detection at firms like &lt;strong&gt;J.P. Morgan&lt;/strong&gt; and &lt;strong&gt;Goldman Sachs&lt;/strong&gt; now utilize multi-agent systems (MAS) where agents analyze market signals in parallel. These systems employ “consensus mechanisms” to prevent rogue actions, requiring multiple agents to agree on high-risk capital commitments before execution.&lt;sup&gt;4,7&lt;/sup&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Managerial “Orchestration Gap”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This shift introduces what I term the &lt;strong&gt;“Orchestration Gap”&lt;/strong&gt;: a mismatch where decentralized software outpaces centralized human management. In the Agentic Operating Model (AOM), coordination is treated as an explicit design decision rather than an emergent property.&lt;/p&gt;

&lt;p&gt;Leaders must evolve from task supervisors to &lt;strong&gt;“Switchboard Operators,”&lt;/strong&gt; defining the ethical boundaries and goals for the entire mesh rather than specific workflows. Importantly, when no single agent is “in charge,” governance mechanisms must be embedded within the coordination protocol itself. In practice, this takes the form of &lt;strong&gt;programmable constraints&lt;/strong&gt;. For example, in financial agent swarms, a coordination protocol might include a &lt;strong&gt;consensus mechanism&lt;/strong&gt; that physically prevents any single agent from executing a transaction unless other independent agents (e.g., Risk, Compliance, and Audit agents) sign off on the telemetry. Here, accountability is not a manual review process but a hardcoded requirement for system execution.&lt;/p&gt;
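&lt;p&gt;A consensus constraint of this kind can be sketched in a few lines. The reviewer-agent names and the required sign-off set below follow the hypothetical financial-swarm example above; they are illustrative assumptions, not a reference implementation.&lt;/p&gt;

```python
# Illustrative sketch of a consensus mechanism embedded in a
# coordination protocol. The reviewer-agent names follow the
# hypothetical financial-swarm example and are assumptions.
REQUIRED_SIGNOFFS = frozenset({"risk", "compliance", "audit"})

def may_execute(signoffs):
    """A transaction executes only when every independent reviewer agent
    has signed off; accountability is a precondition of execution."""
    return REQUIRED_SIGNOFFS.issubset(signoffs)

print(may_execute({"risk", "compliance", "audit"}))  # all reviewers signed off
print(may_execute({"risk", "compliance"}))           # audit missing: blocked
```

&lt;p&gt;The design choice is that the check is a hard precondition of execution, not an after-the-fact review: a missing sign-off does not flag the transaction, it prevents it.&lt;/p&gt;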

&lt;h4 id=&quot;the-control-layer-constraining-autonomous-action&quot;&gt;The Control Layer: Constraining Autonomous Action&lt;/h4&gt;

&lt;p&gt;The Control Layer defines how agent behavior is bounded in real time. Traditional controls such as role-based access and static permissions are insufficient when agents generate novel actions in dynamic environments. Agentic systems require adaptive controls that respond to context, confidence, and risk.&lt;/p&gt;

&lt;p&gt;Key mechanisms in this layer include confidence thresholds, behavioral baselines, and guardrail agents that monitor inputs and outputs. For example, a ‘Guardrail Agent’ can be implemented as a lightweight model that intercepts a primary agent’s output before it reaches a system of record. If a Procurement Agent initiates a $50,000 vendor payment exceeding its $10,000 ‘behavioral baseline’, the Guardrail Agent triggers a ‘Confidence Threshold’ check. If the agent’s internal reasoning score is below 95%, the action is physically blocked and escalated for Human-on-the-Loop review. Rather than approving every action, these controls intervene selectively when uncertainty or potential impact exceeds predefined limits. This approach supports a shift from Human-in-the-Loop oversight to Human-on-the-Loop supervision, where humans set boundaries and intervene only when necessary.&lt;/p&gt;
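&lt;p&gt;As a minimal sketch, the escalation rule in the example above can be expressed as a single check. The dollar thresholds and the 95% confidence cutoff are the hypothetical values from the text, not recommended settings.&lt;/p&gt;

```python
# Illustrative sketch of the Guardrail Agent check described above.
# Thresholds mirror the hypothetical example in the text ($10,000
# behavioral baseline, 95% confidence); a real system would load
# these from policy configuration.
from dataclasses import dataclass
from operator import gt, lt  # functional comparison helpers

BEHAVIORAL_BASELINE = 10_000    # largest routine payment for this agent
CONFIDENCE_THRESHOLD = 0.95     # minimum reasoning score to auto-approve

@dataclass
class ProposedAction:
    amount: float       # dollar value of the proposed payment
    confidence: float   # agent's internal reasoning score, 0 to 1

def guardrail_decision(action):
    """Return 'execute', or 'escalate' for Human-on-the-Loop review."""
    exceeds_baseline = gt(action.amount, BEHAVIORAL_BASELINE)
    low_confidence = lt(action.confidence, CONFIDENCE_THRESHOLD)
    if exceeds_baseline and low_confidence:
        return "escalate"   # block the action and route it to a human
    return "execute"

# A $50,000 payment with a 90% reasoning score is blocked and escalated:
print(guardrail_decision(ProposedAction(amount=50_000, confidence=0.90)))
```

&lt;p&gt;Note that the human is invoked only when both conditions hold; routine actions pass through without review, which is what distinguishes Human-on-the-Loop supervision from Human-in-the-Loop approval.&lt;/p&gt;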

&lt;p&gt;Organizations that underinvest in the Control Layer often rely on informal supervision or post hoc audits. Such approaches fail at scale, allowing small errors to propagate rapidly across interconnected agents. Effective control does not eliminate autonomy; it makes autonomy survivable.&lt;/p&gt;

&lt;h4 id=&quot;the-governance-layer-accountability-and-legitimacy&quot;&gt;The Governance Layer: Accountability and Legitimacy&lt;/h4&gt;

&lt;p&gt;The Governance Layer anchors the AOM by assigning accountability for agentic behavior and aligning it with organizational and regulatory expectations. This layer encompasses policies, standards, and decision rights that define who is responsible for an agent’s actions throughout its lifecycle.&lt;/p&gt;

&lt;p&gt;Frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework provide structural guidance, but governance is not achieved through compliance alone. In the AOM, each agent is associated with a clear business owner, a defined risk profile, and documented decision boundaries. Outputs are traceable to specific model versions, configurations, and prompts, enabling post hoc explanation and audit.&lt;/p&gt;
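&lt;p&gt;A minimal sketch of such a lifecycle record, assuming a simple in-memory registry (the field names, identifiers, and model version string are illustrative, not a standard schema):&lt;/p&gt;

```python
# Illustrative governance registry: each agent carries an accountable owner
# and risk profile, and each output is traceable to a model version,
# configuration, and prompt for post hoc audit. All names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRecord:
    agent_id: str
    business_owner: str      # accountable human owner
    risk_profile: str        # e.g., "high", "medium", "low"
    decision_boundary: str   # documented scope of delegated authority

@dataclass(frozen=True)
class OutputTrace:
    agent_id: str
    model_version: str       # exact model the output came from
    config_id: str           # configuration snapshot identifier
    prompt_hash: str         # hash of the prompt, for audit

registry = {}

def register(record):
    registry[record.agent_id] = record

def owner_of(trace):
    """Resolve any traced output back to its accountable business owner."""
    return registry[trace.agent_id].business_owner

register(AgentRecord("proc-01", "VP Procurement", "high", "POs under $10k"))
trace = OutputTrace("proc-01", "model-v2026-01", "cfg-17", "ab12cd")
print(owner_of(trace))  # VP Procurement
```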

&lt;p&gt;Without this layer, autonomous agents become organizational orphans capable of acting but owned by no one. Such systems may deliver short-term efficiency gains while accumulating long-term operational and reputational risk. Effective governance restores legitimacy by ensuring that autonomy is always coupled with responsibility.&lt;/p&gt;

&lt;h4 id=&quot;why-the-layers-must-work-together&quot;&gt;Why the Layers Must Work Together&lt;/h4&gt;

&lt;p&gt;The four layers of the Agentic Operating Model are mutually reinforcing. Specialized intelligence simplifies control. Coordination choices determine governance complexity. Controls operationalize governance principles. Governance, in turn, constrains how intelligence is deployed. Organizations that address these layers in isolation often experience brittle systems that fail under stress, a pattern observed when technical autonomy is granted without corresponding control thresholds or accountability structures. As the enterprise vignettes below illustrate, failures typically stem from this misalignment across layers rather than from deficiencies in model performance itself.&lt;/p&gt;

&lt;p&gt;The AOM reframes autonomous AI as an institutional design problem rather than a technology project. By making operating assumptions explicit, it allows leaders to reason systematically about how autonomy is granted, constrained, and supervised. In doing so, it provides a foundation for scaling agentic AI without surrendering managerial oversight.&lt;/p&gt;

&lt;h2 id=&quot;governing-autonomous-actors-from-human-in-the-loop-to-human-on-the-loop&quot;&gt;Governing Autonomous Actors: From Human-in-the-Loop to Human-on-the-Loop&lt;/h2&gt;

&lt;p&gt;A central tension in agentic systems is the relationship between autonomy and oversight. Early governance approaches emphasized &lt;strong&gt;Human-in-the-Loop (HITL)&lt;/strong&gt; controls, requiring manual human approval for critical actions. While effective in low-volume settings, HITL becomes a bottleneck as enterprises execute thousands or millions of agentic actions per hour. Consequently, organizations are shifting toward &lt;strong&gt;Human-on-the-Loop (HOTL)&lt;/strong&gt; supervision, where humans define objectives, constraints, and escalation thresholds, while agents operate independently within those boundaries.&lt;/p&gt;

&lt;h4 id=&quot;from-reactive-audits-to-proactive-controls&quot;&gt;From Reactive Audits to Proactive Controls&lt;/h4&gt;

&lt;p&gt;Effective HOTL governance depends on &lt;strong&gt;proactive controls&lt;/strong&gt; rather than reactive audits. Because agents can be manipulated through indirect prompt injection or enter unintended feedback loops at machine speed, organizations can no longer afford to wait for a “post-mortem” log review.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;“Safe-Action” Pipelines &amp;amp; Infrastructure:&lt;/strong&gt; To prevent the “Unbounded Agent” failure mode, enterprises are adopting &lt;strong&gt;Safe-Action Pipelines&lt;/strong&gt;. This reflects the move toward HOTL supervision, in which high-risk actions of the kind seen in the &lt;strong&gt;DPD “Rogue” Chatbot&lt;/strong&gt; incident or the &lt;strong&gt;Moffatt v. Air Canada&lt;/strong&gt; hallucinations are blocked at the system level if they exceed predefined “blast radius” or confidence thresholds.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Digital Provenance and Accountability:&lt;/strong&gt; Traceability is ensured through digital provenance mechanisms, allowing organizations to reconstruct how and why a particular outcome occurred. This shift is supported by &lt;strong&gt;ISO/IEC 42001&lt;/strong&gt; and the &lt;strong&gt;NIST AI Risk Management Framework&lt;/strong&gt;, which mandate continuous monitoring and lifecycle responsibility rather than point-in-time compliance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;institutionalizing-authority&quot;&gt;Institutionalizing Authority&lt;/h4&gt;

&lt;p&gt;Crucially, governance must be embedded by design rather than layered on after deployment. Assigning each agent a clear business owner, risk classification, and escalation protocol ensures that accountability remains intact even as autonomy increases. By reframing oversight as &lt;strong&gt;supervision rather than approval&lt;/strong&gt;, the HOTL model reconciles the speed of the “Agentic Mesh” with the necessity of managerial control. It transforms governance from a bureaucratic bottleneck into a prerequisite for responsible innovation at scale.&lt;/p&gt;

&lt;h2 id=&quot;enterprise-vignettes-how-agentic-systems-succeed-and-fail&quot;&gt;Enterprise Vignettes: How Agentic Systems Succeed and Fail&lt;/h2&gt;

&lt;p&gt;The managerial challenges of agentic AI become most visible when autonomous systems interact with real organizational constraints. The following vignettes are illustrative composites drawn from common enterprise patterns and real-world failure modes. While the narratives are synthesized to highlight structural misalignments, they are grounded in documented historical incidents such as the &lt;strong&gt;Moffatt v. Air Canada [3]&lt;/strong&gt; legal precedent regarding non-deterministic agent promises and the &lt;strong&gt;DPD ‘Rogue’ Chatbot [8]&lt;/strong&gt; incident where a lack of real-time monitoring allowed an agent to deviate from firm policy. These cases serve as the empirical basis for the ‘Unbounded Agent’ and ‘Compliant Failure’ modes described below.&lt;/p&gt;

&lt;h4 id=&quot;vignette-1-the-unbounded-agent&quot;&gt;Vignette 1: The Unbounded Agent&lt;/h4&gt;

&lt;p&gt;An enterprise deploys an autonomous operations agent tasked with resolving routine service disruptions. The agent has broad system access and a general-purpose language model to interpret logs and remediation options. Initially, performance improves dramatically. Over time, however, the agent begins executing increasingly complex interventions, including configuration changes that exceed its original mandate.&lt;/p&gt;

&lt;p&gt;When a major outage occurs, post-incident review reveals that no clear decision boundary constrained the agent’s authority. The Cognitive Layer favored generality over specialization, while the Control Layer relied on static permissions rather than dynamic thresholds. Although the agent behaved “correctly” according to its internal logic, the organization lacked a governance mechanism to prevent scope creep. Under the AOM, tighter cognitive specialization and explicit control thresholds would have limited escalation while preserving autonomy for routine tasks.&lt;/p&gt;

&lt;h4 id=&quot;vignette-2-the-invisible-swarm&quot;&gt;Vignette 2: The Invisible Swarm&lt;/h4&gt;

&lt;p&gt;A second organization experiments with decentralized agent collaboration to improve internal coordination &lt;strong&gt;of complex operational workflows&lt;/strong&gt;. Unlike human coordination (e.g., manual meeting scheduling or routing sub-tasks via email), this ‘swarm’ involves multiple agents that monitor real-time data streams, update shared state, and trigger cross-system action such as &lt;strong&gt;autonomously synchronizing inventory levels with procurement orders and shipping schedules&lt;/strong&gt; without centralized orchestration. While resilient to individual agent downtime, the architecture generates unexpected outcomes when agents respond to partial or outdated information.&lt;/p&gt;

&lt;p&gt;In one instance, several agents independently initiate compensating actions in response to the same signal, amplifying rather than resolving the issue. Investigation is hindered by the absence of a clear ownership model. No single team claims responsibility for the collective behavior of the swarm.&lt;/p&gt;

&lt;p&gt;This vignette highlights the importance of the Coordination and Governance layers working in tandem. While decentralized collaboration increases robustness by removing single points of failure, it requires &lt;strong&gt;Conflict Resolution Protocols&lt;/strong&gt; to manage contradictory agent actions. The AOM emphasizes that ‘resilience through redundancy’ must be functionally paired with &lt;strong&gt;Auditable Ownership&lt;/strong&gt;. In practice, this means every autonomous action within the swarm must be linked to a specific business owner and risk profile. Without this linkage, the speed of the mesh creates ‘organizational orphans’ where no human team is responsible for the collective outcome.&lt;/p&gt;
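&lt;p&gt;One minimal form of such a conflict-resolution protocol is a claim on shared state: before acting on a signal, an agent must claim it, and only the first claimant proceeds. The sketch below uses hypothetical identifiers:&lt;/p&gt;

```python
# Illustrative claim-based conflict resolution for a decentralized swarm:
# the shared claims map doubles as an audit trail linking each signal to
# the agent that acted on it. Identifiers are hypothetical.
claims = {}

def try_claim(signal_id, agent_id):
    """First agent to claim a signal wins; later claimants stand down."""
    owner = claims.setdefault(signal_id, agent_id)
    return owner == agent_id

print(try_claim("inventory-alert-42", "agent-A"))  # True
print(try_claim("inventory-alert-42", "agent-B"))  # False
```

&lt;p&gt;Because every acted-on signal is recorded against exactly one agent, the same structure supports auditable ownership: investigators can always answer which agent responded to which signal.&lt;/p&gt;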

&lt;h4 id=&quot;vignette-3-the-compliant-failure&quot;&gt;Vignette 3: The Compliant Failure&lt;/h4&gt;

&lt;p&gt;A third enterprise invests heavily in formal AI governance, documenting policies, approvals, and compliance artifacts aligned with external standards. Autonomous agents are certified prior to deployment and reviewed periodically. Despite this rigor, the organization experiences repeated near-misses involving inappropriate agent actions.&lt;/p&gt;

&lt;p&gt;The issue lies not in the absence of governance but in its implementation. Oversight focuses on pre-deployment checklists rather than real-time supervision. Agents operate without behavioral monitoring, and escalation protocols are rarely triggered because thresholds are poorly calibrated. Governance exists on paper but not in operation.&lt;/p&gt;

&lt;p&gt;This scenario illustrates why governance must be embedded within the Control Layer rather than treated as an external audit function. The AOM reframes compliance as a continuous activity that shapes how agents behave in practice, not merely how they are approved.&lt;/p&gt;

&lt;h2 id=&quot;implications-for-senior-leaders&quot;&gt;Implications for Senior Leaders&lt;/h2&gt;

&lt;p&gt;The adoption of agentic AI has implications that extend beyond technology management. By redistributing decision-making authority from humans to autonomous systems, agentic enterprises reshape roles, responsibilities, and risk across the organization.&lt;/p&gt;

&lt;p&gt;For chief executives, the primary concern is operational resilience. Autonomous agents can sense and respond to change faster than human teams, but they also introduce new dependency risks. Leaders must ensure that critical processes remain intelligible and recoverable when agentic systems fail. The Agentic Operating Model provides a way to balance speed with stability by making autonomy an explicit design choice rather than an implicit byproduct of capability.&lt;/p&gt;

&lt;p&gt;Chief financial officers face a different challenge: the emergence of variable cognitive costs. Unlike traditional software licenses, the cost of agentic AI scales with usage, interaction frequency, and model complexity. Without disciplined operating assumptions, enterprises risk deploying agents that are economically irrational even when technically effective. By aligning cognitive specialization with delegated authority, the AOM supports more transparent cost governance &lt;strong&gt;by enabling granular cost attribution&lt;/strong&gt;. When agents are specialized for specific tasks, leaders can move from opaque ‘API-call bundles’ to a &lt;strong&gt;unit-cost model&lt;/strong&gt;, where the expense of a specific model (e.g., a high-reasoning model for fraud) is directly mapped to the business value of its delegated domain. Transparency arises from the ability to see exactly which business functions are driving cognitive spend.&lt;/p&gt;
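&lt;p&gt;The unit-cost idea can be sketched as a simple roll-up of usage by delegated domain; the rates, model names, and log entries below are illustrative assumptions, not real pricing:&lt;/p&gt;

```python
# Illustrative unit-cost attribution: each agent call is logged with its
# delegated business domain, so spend rolls up per business function
# rather than as one opaque API bill. All figures are hypothetical.
RATES_PER_1K_TOKENS = {"high_reasoning": 0.06, "lightweight": 0.002}

usage_log = [
    {"domain": "fraud_review", "model": "high_reasoning", "tokens": 120_000},
    {"domain": "order_status", "model": "lightweight", "tokens": 900_000},
]

def cost_by_domain(log):
    """Roll cognitive spend up to the business function that drove it."""
    totals = {}
    for call in log:
        cost = call["tokens"] / 1000 * RATES_PER_1K_TOKENS[call["model"]]
        totals[call["domain"]] = totals.get(call["domain"], 0.0) + cost
    return totals

print(cost_by_domain(usage_log))
```

&lt;p&gt;In this toy example, fraud review costs four times as much as order status despite consuming far fewer tokens, which is precisely the kind of economically relevant signal that opaque API bundles hide.&lt;/p&gt;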

&lt;p&gt;For chief information officers, integration and proliferation are central concerns. Autonomous agents interact continuously with systems of record, often at volumes that exceed legacy design assumptions. At the same time, the ease of deploying agents encourages experimentation outside formal IT channels. An explicit operating model helps CIOs provide secure, scalable pathways for innovation while reducing the risks associated with unmanaged agent sprawl.&lt;/p&gt;

&lt;p&gt;Across roles, the common implication is that governance is no longer a constraint on innovation but a prerequisite for sustaining it. Enterprises that invest in operating models rather than ad hoc controls are better positioned to scale autonomy without sacrificing trust.&lt;/p&gt;

&lt;h2 id=&quot;conclusion-from-intelligent-systems-to-institutional-design&quot;&gt;Conclusion: From Intelligent Systems to Institutional Design&lt;/h2&gt;

&lt;p&gt;The rise of autonomous AI marks a transition from intelligent tools to digital actors embedded within organizations. This shift challenges long-standing assumptions about control, accountability, and managerial oversight. As autonomy increases, technical excellence alone is insufficient. What determines success is the quality of the institutional structures surrounding agentic systems.&lt;/p&gt;

&lt;p&gt;The Agentic Operating Model reframes autonomous AI as an organizational design problem. By articulating the cognitive, coordination, control, and governance layers required to operate agents responsibly, it offers leaders a systematic way to reason about autonomy at scale. Rather than asking whether agents are capable, the model asks whether they are governable.&lt;/p&gt;

&lt;p&gt;Firms that address this question proactively can harness agentic AI as a durable source of advantage. Those that do not may find themselves managing systems that act decisively yet remain fundamentally unaccountable. In an era where software increasingly performs work once reserved for humans, the future of competitive advantage lies not in intelligence alone, but in the institutions that shape how intelligence is exercised.&lt;/p&gt;

&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;Sushree Swagatika Pati, “&lt;a href=&quot;https://kanerika.com/blogs/agentic-ai-in-supply-chain/&quot;&gt;Agentic AI in Supply Chain 2025: Autonomous Decision Making&lt;/a&gt;,” May 14, 2025.&lt;/li&gt;
  &lt;li&gt;Eva Richardson, “&lt;a href=&quot;https://ean-network.com/maersk-launches-ai-powered-vessel-routing-platform-to-cut-emissions-and-improve-efficiency/&quot;&gt;Maersk Launches AI-Powered Vessel Routing Platform to Cut Emissions&lt;/a&gt;,” EAN Network, April 18, 2025.&lt;/li&gt;
  &lt;li&gt;Barry B. Sookman, “&lt;a href=&quot;https://www.mccarthy.ca/en/insights/blogs/techlex/moffatt-v-air-canada-misrepresentation-ai-chatbot&quot;&gt;Bereavement Fares and Chatbot Liability&lt;/a&gt;,” &lt;em&gt;Moffatt v. Air Canada&lt;/em&gt;, February 19, 2024.&lt;/li&gt;
  &lt;li&gt;Gizel Gomes, “&lt;a href=&quot;https://ctomagazine.com/jp-morgan-chase-accelerates-ai-adoption/&quot;&gt;AI in Banking: JP Morgan Leads the AI Sphere&lt;/a&gt;,” September 3, 2024.&lt;/li&gt;
  &lt;li&gt;Graham Sommer et al., “&lt;a href=&quot;https://meet.aeratechnology.com/aerahub-2025-london-ungated-how-unilever-is-envisioning-the-autonomous-supply-chain-with-agentic-ai&quot;&gt;How Unilever is envisioning the Autonomous Supply Chain with Agentic AI&lt;/a&gt;.”&lt;/li&gt;
  &lt;li&gt;Ancil Mohamed, “&lt;a href=&quot;https://www.devoteam.com/expert-view/innovation-in-insurance/&quot;&gt;Generative AI in Insurance: Lemonade Case Study&lt;/a&gt;.”&lt;/li&gt;
  &lt;li&gt;Fei Xiong et al., “&lt;a href=&quot;https://arxiv.org/abs/2509.09995&quot;&gt;QuantAgent: Price-Driven Multi-Agent LLMs for High-Frequency Trading&lt;/a&gt;,” September 27, 2025.&lt;/li&gt;
  &lt;li&gt;Tom Gerken, “&lt;a href=&quot;https://www.bbc.com/news/technology-68025677&quot;&gt;DPD error caused chatbot to swear at customer&lt;/a&gt;,” January 19, 2024.&lt;/li&gt;
  &lt;li&gt;Lexi Croisdale, “&lt;a href=&quot;https://www.varonis.com/blog/echoleak&quot;&gt;EchoLeak in Microsoft Copilot: What it Means for AI Security&lt;/a&gt;,” June 12, 2025.&lt;/li&gt;
&lt;/ol&gt;
</description>
        <pubDate>Fri, 20 Mar 2026 02:47:00 -0700</pubDate>
        <link>http://localhost:4000/2026/03/governing-the-agentic-enterprise-a-new-operating-model-for-autonomous-ai-at-scale/</link>
        <guid isPermaLink="true">http://localhost:4000/2026/03/governing-the-agentic-enterprise-a-new-operating-model-for-autonomous-ai-at-scale/</guid>
        
        <category>Agents</category>
        
        <category>AI companions</category>
        
        <category>Computer software</category>
        
        <category>Governance</category>
        
        <category>Information technology</category>
        
        
        <category>Agentic AI</category>
        
        <category>Artificial Intelligence</category>
        
        <category>Digital Platforms</category>
        
        <category>Information Technology</category>
        
        <category>Risk Management</category>
        
      </item>
    
  </channel>
</rss>
