California Management Review
California Management Review is a premier professional management journal for practitioners published at UC Berkeley Haas School of Business.
Pavan Kunchala, V. Kumar, and Ali Hasan Butt
Image Credit | Author
As organizations scale artificial intelligence to increase speed and efficiency, many are unintentionally creating an unmanaged Algorithmic CMO: a system that relentlessly optimizes tactical performance metrics while quietly eroding long-term customer equity and enterprise value. The problem is not that AI is ineffective, but that it is governed under the wrong paradigm.
Oliver Gassmann and Joakim Wincent, “The Non-Human Enterprise: How AI Agents Reshape Organizations,” California Management Review Insights, October 22, 2025.
V. Kumar et al., “Understanding the Role of Artificial Intelligence in Personalized Engagement Marketing,” California Management Review 61, no. 4 (Summer 2019): 135–55.
This tension is amplified by the scale of investment now flowing into marketing AI. Global spending exceeded $20 billion in 2024 and continues to grow at a double-digit rate.2 Yet deployment is advancing faster than managerial oversight. Fewer than half of marketing teams report systematically measuring the return on AI investments, and formal governance programs remain unevenly established.3
Most marketing AI systems are treated as activity optimizers, even though in practice they now function as capital allocators.4 These systems repeatedly determine how customer attention, trust, and marketing investment are deployed over time, and those decisions shape not only near-term performance but also the durability of customer relationships and the trajectory of long-term value creation.
Using recent platform changes by Google and Apple as a natural stress test for email marketing, we can observe how optimization-driven AI can accelerate value depletion once customer friction is reduced, and why governance frameworks designed for executional automation are no longer sufficient for autonomous decision systems.
As AI systems gain partial autonomy over marketing decisions, firms must reframe how those decisions are governed. The automation trap does not arise from automation itself, but from embedding narrow optimization objectives into systems that now operate continuously and at scale. Many of the risks associated with AI-driven marketing remain difficult to observe because customer friction delays feedback. When friction is high (for example, when unsubscribing requires multiple steps), the effects of aggressive optimization accumulate slowly and are often absorbed without immediate consequences. When friction is reduced, those same effects surface rapidly, exposing misalignment between what systems are optimizing for and what organizations are trying to preserve.
Between 2024 and 2025, platform changes such as one-click unsubscribes, centralized subscription management, and AI-generated inbox summaries materially reduced the cost of disengagement in email marketing. These changes did not alter how marketing AI systems made decisions. They altered how quickly customers could respond to those decisions.
The results were immediate for high-frequency senders. Industry benchmarks confirm the pattern: average unsubscribe rates more than doubled between 2024 and 2025, from 0.08 percent to 0.22 percent, following one-click unsubscribe enforcement.5 Some high-volume senders experienced unsubscribe spikes nearly twice their historical average within weeks of Gmail’s Subscription Center rollout in mid-2025.6 This shift was not driven by failures in AI systems. The systems were optimizing for activity as designed, but the assumptions embedded in that optimization no longer held once customers could exit instantly.
The underlying mechanism was behavioral rather than technical. Engagement-optimized systems interpreted late-stage customer interaction (opens, clicks, brief reactivation) as signals of rising purchase intent. These signals triggered escalation in contact frequency. From the AI’s perspective, customers who continued to engage appeared to offer increasing marginal returns.
From the customer’s perspective, the outreach intensity crossed individual tolerance thresholds. Customer response was nonlinear. At lower exposure levels, unsubscribe behavior remained relatively stable. Once frequency exceeded tolerance limits, exit rates increased sharply.
This pattern was most pronounced among long-tenured customers, who had accumulated greater exposure over time. Loyalty did not act as a buffer. Instead, long-tenured customers exhibited the greatest sensitivity to frequency increases and were often the first to exit. The removal of friction did not create this failure. It revealed it.
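For readers who want to see the nonlinearity concretely, the tolerance-threshold dynamic can be expressed as a stylized exit curve. This is an illustration only; the threshold, steepness, and rate parameters below are hypothetical, not estimates drawn from the benchmarks cited above.

```python
import math

def unsubscribe_prob(weekly_contacts, tolerance=5.0, steepness=2.0,
                     base=0.001, ceiling=0.15):
    """Stylized logistic exit curve: exit risk stays near the base rate
    below the tolerance threshold, then rises sharply once contact
    frequency exceeds it. All parameters are hypothetical."""
    logistic = 1.0 / (1.0 + math.exp(-steepness * (weekly_contacts - tolerance)))
    return base + (ceiling - base) * logistic

for freq in [2, 4, 5, 6, 8]:
    print(freq, round(unsubscribe_prob(freq), 4))
```

The point of the curve is that each incremental contact looks nearly free until the threshold is crossed, which is why dashboards tracking average response miss the risk building at the individual level.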

Figure 1. The Efficiency Illusion: When Short-Term Lift Masks Long-Term Erosion.
Figure 1 illustrates why the problem was not immediately visible to marketing teams. Engagement and response metrics continued to perform well in executive dashboards even as cumulative exposure pushed customers toward exit. The divergence between reported performance and underlying asset health emerged across successive optimization cycles rather than within individual campaigns, creating the illusion of efficiency.
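The divergence between reported flows and underlying stocks can be made concrete with a stylized simulation, again with hypothetical parameters: the dashboard metric holds steady cycle after cycle while a latent attention stock is drawn down, and exits jump only after a floor is crossed.

```python
def simulate_cycles(n_cycles=10, stock=100.0, draw=12.0, floor=40.0):
    """Stylized efficiency illusion: per-cycle response looks healthy
    while a latent attention stock is depleted; the exit rate spikes
    only once the stock falls to the floor. Parameters are hypothetical."""
    history = []
    for cycle in range(1, n_cycles + 1):
        response_rate = 0.05 if stock > 0 else 0.0      # dashboard metric stays flat
        exit_rate = 0.001 if stock > floor else 0.05    # nonlinear, delayed exit
        history.append((cycle, round(stock, 1), response_rate, exit_rate))
        stock = max(0.0, stock - draw)                  # each campaign draws down the stock
    return history
```

In this toy run, the response rate a dashboard would report is identical in cycle one and cycle six, even though the exit rate has already jumped fiftyfold, which is the illusion Figure 1 depicts.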
Faced with rising unsubscribe rates, most marketing leaders treated the issue as a channel performance problem. Typical responses included content refinement, subject-line testing, churn propensity modeling, and budget reallocation. These interventions addressed symptoms rather than causes.
The core failure was not insufficient prediction accuracy, but the absence of decision constraints. While predictive models identified commercial risk, AI systems remained free to act on the same engagement signals that were creating the risk. As long as engagement was treated as a universal indicator of demand, mitigation efforts only delayed the inevitable exit without altering the underlying decision logic.
This logic holds under high-friction conditions, where disengagement is costly and delayed. Once friction is removed, the logic reverses. Engagement increasingly reflects tolerance rather than intent, and escalation accelerates exit rather than conversion.
The automation trap does not emerge from flawed algorithms or poor intent. It enters through the everyday operating structures that govern modern marketing organizations. Most firms already believe in long-term customer value at a strategic level. The failure occurs because that belief is not translated into how decisions are automated, reviewed, and rewarded.
Most marketing dashboards are built to monitor flows rather than stocks. Engagement rates, conversion lift, and short-term return provide immediate feedback on campaign performance, but they offer no visibility into cumulative exposure or tolerance erosion. As a result, AI systems can continue to escalate contact intensity while executive dashboards signal success. Groupon’s trajectory illustrates this pattern: its aggressive email campaigns initially drove impressive conversion metrics, but “voucher fatigue” and message overload eventually turned customers away at scale, contributing to years of subscriber losses and a stock price decline of over 85 percent from its IPO high.7
This creates a structural blind spot. By the time churn or unsubscribe rates register as a concern, the underlying depletion has already occurred. Customer exit is not a leading indicator; it is the final realization of a long sequence of decisions that appeared locally optimal. Because dashboards aggregate performance across time and customers, they conceal the nonlinear dynamics that occur at the individual level when tolerance thresholds are crossed.
Most organizations assume that existing analytics frameworks will catch these risks. In practice, they rarely do. Marketing mix models detect saturation effects at the channel level, but they operate on historical aggregates and are recalibrated infrequently. They are not designed to intervene in real-time escalation decisions driven by AI systems. Attribution models reward incremental response, even when that response reflects tolerance rather than demand. Churn or propensity models may identify customers at risk, but they typically function as descriptive overlays rather than as binding constraints on automated action. The result is a governance gap: predictive insight exists, but decision authority remains embedded within execution systems that are rewarded for short-term lift. Risk is identified, but not prevented.
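The difference between a descriptive overlay and a binding constraint can be sketched in a few lines. The function names and thresholds below are hypothetical; the point is structural: the churn-risk score gates the escalation decision rather than merely annotating it.

```python
def next_contact_frequency(current_freq, engagement_lift, churn_risk,
                           risk_ceiling=0.3, max_freq=7):
    """Hypothetical governance guardrail: engagement-driven escalation is
    permitted only while predicted churn risk stays below a binding ceiling.
    Above the ceiling, the system must de-escalate, not merely flag risk."""
    proposed = min(max_freq, current_freq + (1 if engagement_lift > 0 else 0))
    if churn_risk >= risk_ceiling:
        # Binding constraint: reduce contact pressure regardless of engagement.
        return max(1, current_freq - 1)
    return proposed
```

Without the gate, the same churn score would sit in a report while the optimizer escalates anyway, which is exactly the governance gap described above.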
Incentives further entrench the problem. Marketing teams are evaluated on near-term revenue or efficiency, and AI systems inherit those targets. No single team owns cumulative tolerance or attention depletion, so those outcomes never appear in quarterly objectives. Each optimization cycle improves local metrics, yet collectively these decisions exhaust customer trust faster than it can be rebuilt.
The defining issue is not automation itself, but automation without ownership of long-term costs. When AI systems gain autonomy over contact frequency, spend allocation, and engagement escalation, they implicitly make value judgments about how customer assets should be used. If leadership does not explicitly set those rules, optimization defaults will set them instead.
This dynamic reflects a broader pattern. Marketing leaders are investing aggressively in AI to automate personalization, optimize campaigns, and scale content, believing they are modernizing marketing operations. According to a 2025 Gartner survey, 65 percent of CMOs say advances in AI will transform their role, and McKinsey’s Global AI Survey found that marketing and sales functions report the highest revenue impact from AI adoption.8 In practice, they are embedding a governance problem. AI systems do exactly what they are designed to do. When instructed to maximize engagement, response, or short-term return, they become highly effective at extracting value from customers. Extraction, however, is not the same as value creation.
As AI becomes more effective at local optimization, the risk of undermining long-term customer assets increases. Incentive structures that were once moderated by organizational friction are now embedded directly into systems that act continuously and at scale. When customer friction is reduced, the consequences of this misalignment surface abruptly.
The idea that customers should be managed as financial assets rather than activity targets is not new.9 Marketing philosophy has largely embraced this view, yet marketing technology has lagged.
Organizations have deployed AI systems optimized for rapid, short-cycle gains, while expecting outcomes that require long-horizon asset stewardship. Even if leadership believes customers are assets, they have deployed autonomous agents that treat attention as a renewable resource. The frictionless environment of 2025 has simply exposed the mechanistic gap between strategic intent and operational reality.
The gap is not a modeling issue; it is a categorization error. Marketing AI systems are still governed as executional tools, even though they now make repeated decisions that allocate scarce customer resources over time. They control contact frequency, allocate promotional spend, prioritize customer segments, and determine when to intensify or withdraw engagement. These capabilities are embedded in frequency-optimizing email engines, next-best-action models within CRM platforms, automated bidding and budget allocation tools, and lifecycle orchestration systems that dynamically adjust outreach across channels. Each of these decisions draws down a finite stock of customer attention and trust while shaping future revenue potential. In effect, these systems allocate customer capital over time, even though they are rarely governed as such.
To bridge this gap, leadership must shift from activity optimization to asset stewardship. Table 1 contrasts these two governance paradigms. Under activity optimization, AI is evaluated as a campaign execution engine, rewarded for improving short-term flows such as clicks and conversions. Under asset stewardship, AI functions as a fiduciary mechanism responsible for preserving and compounding customer equity over time.

Table 1. The Shift from Optimization to Stewardship
Recognizing AI as a capital allocation system clarifies the core governance challenge: not where performance can be increased, but where automation should be constrained.
Effective governance requires explicit constraints on where and how automation is allowed to act. A response-tolerance lens helps leaders distinguish where engagement compounds value from where it accelerates depletion. Because AI systems now continuously allocate scarce marketing capital (customer attention, message opportunities, and budget) across the portfolio, the framework directly informs where those resources should flow. High-tolerance, high-response segments justify sustained investment; low-tolerance segments require capital restraint to preserve relationship durability. Without this lens, AI defaults to short-term optimization, misallocating capital toward segments that convert quickly but churn faster, systematically depleting the customer assets the organization depends on for long-term growth.
Consider four archetypes. Customers with high response and high tolerance compound value under sustained engagement. Those with high response but low tolerance, often the most commercially attractive in the short term, convert readily but exit quickly under pressure. They are precisely the segment AI systems escalate toward, and precisely the segment most damaged by that escalation. Customers with low response but high tolerance hold latent value that optimization logic ignores. And those with neither tolerance nor response are depleted by continued contact with no offsetting return.
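The four archetypes map naturally to differentiated contact policies. The sketch below is a minimal illustration; the scores, threshold, and policy labels are our shorthand, not constructs from the article or any vendor platform.

```python
def archetype(response, tolerance, threshold=0.5):
    """Map a customer's response and tolerance scores (each in [0, 1])
    to one of the four governance archetypes. Threshold is hypothetical."""
    hi_r, hi_t = response >= threshold, tolerance >= threshold
    if hi_r and hi_t:
        return "compound"    # sustained engagement compounds value
    if hi_r and not hi_t:
        return "constrain"   # converts readily but exits under pressure
    if not hi_r and hi_t:
        return "cultivate"   # latent value that optimization logic ignores
    return "conserve"        # further contact depletes without return
```

The governance implication is that the "constrain" cell, the one optimization logic escalates toward, is where decision rights most need explicit limits.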

Figure 2. A Response-Tolerance Lens for Governing AI Decisions
The purpose of this lens is not to improve prediction accuracy, but to determine where decision rights must be constrained based on long-term cost. Models may identify risk, but governance determines whether systems are allowed to act on it.
Recent changes by Google and Apple have provided unexpected clarity: when customers can disengage instantly, short-term performance signals no longer reliably indicate long-term value. Systems optimized for engagement can appear efficient while accelerating customer exit.
The lesson extends beyond email. As AI systems take on larger roles in customer decisioning, the managerial challenge shifts from improving optimization to governing its consequences. The path forward is not to abandon AI, but to govern it as what it has become: a capital allocation system that shapes long-term customer equity with every decision it makes at the customer level.