The “Jevons Paradox” states that as technological advances improve the efficiency with which a resource is used, its overall consumption may paradoxically increase rather than decrease. This phenomenon is not rooted in a logical contradiction, but rather in an implicit belief: that improving the efficiency of a resource should naturally lead to a reduction in its use.
In practice, when a resource becomes more efficient to use, it also becomes more accessible, more powerful, and more profitable, so its value proposition increases. It becomes more attractive than less optimized alternatives, which stimulates demand and, in turn, increases total consumption. Jevons’s original example was coal: more efficient steam engines made coal-powered work cheaper, and total coal consumption rose rather than fell.
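The mechanism can be made explicit with a stylized formalization (a sketch using standard rebound-effect notation, added here for illustration rather than drawn from Jevons’s text): let ε denote efficiency, the service delivered per unit of resource, and D(ε) the demand for that service, which rises as efficiency lowers its effective cost. Total resource consumption is then:

```latex
% Total resource consumption: service demand divided by efficiency.
C(\varepsilon) = \frac{D(\varepsilon)}{\varepsilon},
\qquad
% Taking logs: consumption falls with efficiency only if demand grows
% more slowly than efficiency itself (elasticity \eta below 1).
\frac{d \ln C}{d \ln \varepsilon}
  = \underbrace{\frac{d \ln D}{d \ln \varepsilon}}_{\eta} - 1 .
```

When the demand elasticity η exceeds 1, consumption rises with efficiency: halving the resource needed per unit of service while demand grows 2.5-fold raises total consumption by 25 percent. In Jevons’s case this backfire condition was met; whether it is met for AI-mediated work depends on the organizational dynamics discussed below.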
Similarly, the belief that making task execution more efficient with AI will necessarily reduce non-productivity within organizations rests on a mirror-image illusion.
Five factors underpin this blind spot:
1. It assumes that every task performed by AI results exclusively in a unit productivity gain, and that the sum of these tasks necessarily leads to a net productivity gain.
This is built on the illusion that productivity can be understood as a simple sum of unit gains, independently of the behavioral, contextual, or systemic dynamics these gains trigger. It hypothesizes that each task automated by AI is a net substitute for a human task, and that this substitution necessarily leads to improved organizational or economic performance. But this view overlooks composition effects, cognitive externalities, and negative feedback loops tied to increasing AI usage.
In fact, the more efficient AI becomes, the more frequent, widespread, even systematic its use becomes. This triggers a cognitive rebound effect: what was occasionally helpful becomes structurally overused.
As AI takes over functions like writing, summarizing, formulating responses, or proposing ideas, users tend to withdraw effort from these tasks or even devalue them. Efficiency becomes an incentive to abandon active skills, not a lever to expand their scope. Empirical evidence supports this pattern, showing that increased AI use is associated with cognitive offloading and reduced critical thinking1.
This shift causes a slow but deep erosion of human faculties associated with productive effort, such as analyzing problems, confronting ideas in teams, forming judgments without algorithmic assistance, producing autonomous reasoning, or simply facing the discomfort of doubt. Evidence from automation contexts illustrates this clearly: studies demonstrate that users often over-rely on automated suggestions, even when they are incorrect, a cognitive phenomenon widely known as automation bias2.
By substituting for intermediate cognitive processes, AI desensitizes organizations to critical effort, diminishing their ability to generate value where AI cannot act effectively: in zones of ambiguity, uncertainty, or interpretive conflict.
Furthermore, the growing role of AI in internal processes tends to condition human judgment to the logical structure of algorithmic assistance. When humans work with algorithmic systems, their decision thresholds and reasoning patterns progressively align with the structure and framing of the algorithm’s recommendations3. In this way, AI becomes an implicit reasoning model. Similarly, prolonged reliance on automation fosters both complacency and bias, subtly shifting cognitive effort from independent analysis to confirming the machine’s output4. AI does not just respond to our queries; it shapes our expectations, our pace, and our tolerance thresholds for slowness or complexity. This fosters an invisible form of dependency, in which individuals become less willing, then less able, to produce certain forms of thinking or interaction without AI support.
2. It also assumes that none of the tasks AI performs are themselves unproductive.
This overlooks the fact that increased AI efficiency does not guarantee that the tasks performed are useful, necessary to productivity, or net contributors to value, and that some may be inherently counterproductive. In other words, it presumes that every task performed by AI deserved to be done in the first place.
This assumption confuses execution capacity with strategic or organizational necessity. The term heteromation describes the extraction of economic value from low-cost or free labor in computer-mediated networks5. In a symmetrical development, we may now be entering a post-heteromation era in which AI systems generate pseudo-work, outputs that mimic productive activity and appear low-cost or “free” within AI-mediated networks, yet are strategically irrelevant or even counterproductive.
As AI becomes more efficient, the range of tasks it can deliver at high speed expands dramatically, increasing the likelihood of their rapid and large-scale reproducibility, including tasks with no real utility, no measurable impact, or even those detrimental to collective functioning. The low marginal cost of using AI leads to an inflation of solicitations (queries, automations, content generation) that, while seemingly productive in isolation, overwhelm attention flows, validation circuits, or collaboration channels.
An AI can thus generate redundant summaries, useless texts, reports no one reads, contextless simulations, or digests of documents with no strategic purpose. The result is activity without purpose: apparent productivity, but zero functional effectiveness. Doing is mistaken for advancing, producing for contributing, automating for improving.
Moreover, by systematizing the automatic generation of unproductive tasks under the guise of volume, performance, or responsiveness, AI sustains an illusion of value: we believe we’re saving time, but we end up spending time reviewing, sorting, filtering, validating things that arguably never needed to be generated. This leads to a form of cognitive triage being outsourced to human teams, who become the filters for automated overproduction. The relationship between automation and cognitive load is inverted: AI doesn’t free up time; it relocates its constraint elsewhere. This dynamic reflects a form of technostress, in which technology shifts rather than reduces work, adding to mental load through the constant monitoring, validation, and filtering that digital outputs require6.
This phenomenon remains structurally invisible in classical productivity metrics. Since unproductive AI-generated tasks aren’t necessarily costly, they aren’t perceived as losses. Yet they consume invisible resources: attention, strategic clarity, cognitive bandwidth, collective motivation. Their cost is diffuse but cumulative.
3. Moreover, it assumes that there is no task whose productivity, once past a certain threshold, becomes counterproductive.
This would imply that the productivity of any task is intrinsically positive, and that optimizing it even to an extreme can only improve its effectiveness and contribute to organizational performance. This belief rests on an atomized, local view of work, where each task is evaluated independently of its interdependencies with others.
And this view ignores the systemic nature of work in organizations: no task exists in isolation, and the performance of one task only makes sense in its harmonious integration into a collective and coordinated process.
A task can thus become counterproductive if its productivity exceeds what the rest of the system can absorb or process. This occurs especially when AI enables certain actions to be performed instantly or at massive scale, in contexts where downstream actors (teams, partners, ecosystems) are misaligned in rhythm, analytical capacity, bandwidth, or strategic priorities. This causes saturation, bottlenecks, broken workflows, even full desynchronization of collective processes.
This functional asynchrony is particularly striking in complex organizations, where each task is linked to others, to departments, or to processes. An AI capable of generating ten reports in one minute may seem highly productive at its point of action but can also disorganize the entire downstream chain: reading overload, unclear prioritization, planning conflicts, managerial frustration, diluted attention.
To illustrate, a 2024 Upwork Research Institute survey7 found a stark gap between leadership expectations and employee experience: while 96% of executives anticipated productivity gains from AI, 77% of employees said it had increased their workload. Thirty-nine percent now spend more time reviewing or moderating AI-generated content, 71% report burnout, and 65% feel heightened pressure from productivity expectations.
Local performance becomes a negative externality for the whole, and value created at one point is canceled or even reversed elsewhere.
This imbalance is not just logistical or operational. It is also cognitive and relational. When a tool moves faster than the collective culture in which it is embedded, it desynchronizes interaction norms, time anchors, and shared thresholds of acceptability. The ecosystem becomes incoherent: some accelerate while others resist; some produce without context while others absorb without direction. The result is a global loss of productive coherence, where hyper-efficiency in one area degrades the flow and clarity of the whole.
4. Furthermore, it assumes that productivity gains from AI are immediately accessible without profound transformation of organizational settings and underlying norms.

This belief overlooks a temporal and structural reality: many of AI’s potential productivity gains are not simply “waiting to be harvested” by inserting the technology into existing workflows. When new digital tools are introduced into organizations with entrenched legacy systems, those systems often absorb the technology without altering the underlying processes8. Rather than serving as a catalyst for reimagining workflows, the technology becomes grafted onto existing routines, which neutralizes much of its transformative potential and leaves performance gains largely unrealized.
These gains are, in fact, unreachable until two fundamental shifts occur. The first is what I call the “Technological Stockholm Syndrome”9: a paradoxical attachment that arises after the initial resistance to change inherent in any adoption of disruptive tools.
The second is that organizations must rethink, reformat, and in some cases entirely redesign the processes of work, so that AI is not an appendage grafted onto legacy systems but a fully integrated actor within a rebalanced operational ecosystem.
Beneath this lies a deeper, less visible barrier: the embedded architecture of work. Every organization operates within a dense mesh of principles, standards, and implicit conventions, some written into procedures, most embedded in culture and tacit knowledge. Many of these structural assumptions go unquestioned precisely because they have “always been there,” and they silently define what is considered socially acceptable, possible, or even conceivable. Yet these invisible frames shape the logic of task sequencing, decision-making authority, validation loops, and quality thresholds. Without questioning and re-engineering these foundational stances, AI is forced to operate within constraints designed for a pre-AI reality.

This is why the most significant productivity leaps are often deferred gains: they require the deliberate dismantling and reconstruction of workflows, a redistribution of roles between humans and machines, and sometimes even a redefinition of the very purpose of certain tasks. The paradox is that these transformations are often resisted because they force organizations to confront uncomfortable truths: that some of their long-standing processes are artifacts of outdated constraints; that “best practices” may no longer be best; and that productivity is not just about acceleration, but about realigning the structure of work with the new capabilities available.

Until these shifts occur, AI’s potential remains trapped in a holding pattern, visible in theory but inaccessible in practice, because the system in which it operates has not yet been adapted to let those gains materialize.
5. Finally, it assumes that productivity is an exclusive, mechanistic relationship between tangible inputs and output.

This reduces the complexity of human work to an exchange of labor, capital, and materials for outputs, while overlooking the intricate interplay of intangible inputs that drive sustainable value creation. The mechanistic view rests on the assumption that optimizing tangible resources, by automating tasks, replacing labor with machines, or improving efficiency in material use, will inevitably lead to a linear increase in productivity, regardless of the human qualities and context required to achieve meaningful results. Yet, by reducing productivity to an equation of tangible input versus output, this perspective ignores the profound role of intangible resources, such as critical thinking, judgment, leadership, emotional intelligence, cultural sensitivity, passion, and love, in driving long-term, sustainable outcomes.
Meta-analytic evidence consistently shows a positive association between emotional intelligence and job performance across diverse roles, with the largest effects observed in emotionally demanding occupations such as customer service, sales, and leadership, where managing emotions is central to effectiveness; this pattern is confirmed by integrative meta-analyses reporting positive links between all three streams of emotional intelligence research and job performance10, 11.
In fact, the neglect of intangible inputs often leads to a narrow view of work that fails to account for how the human aspects of productivity foster performance, innovation, adaptability, and, therefore, long-term value creation.
For example, contrary to what we read here and there, translators’ jobs may not be replaced by AI where translation requires nuanced judgment, cultural understanding, and creative decision-making, intangible factors that AI cannot fully replicate.
Compelling evidence indicates that machine translation has not reached parity with human performance12. These studies show that when evaluations control for factors such as translationese, evaluator expertise, and inter-sentential context, in particular by using texts originally written in the source language rather than translations, the gap becomes clear. Professional translators were consistently better at distinguishing between human and machine outputs, and human translations exhibited superior fluency, fidelity, and coherence. These findings underline that nuanced judgment, cultural sensitivity, and context-aware decision-making remain well beyond the reach of current AI.
Despite advancements in AI’s social-emotional capabilities, empathy remains distinctly human in the eyes of recipients. In a comprehensive investigation with over 6,000 participants across nine experiments, identical empathic responses generated by large language models were consistently rated as more supportive, emotionally resonant, and caring when participants believed they were crafted by humans rather than AI13. Moreover, even subtle suspicion that the human-labeled response might have been aided by AI notably diminished its perceived empathy and support. These results underscore a profound insight: AI may simulate empathy, but human-attributed empathy retains psychological authenticity and emotional impact that AI cannot match, reinforcing the irreplaceability of emotional connection in human systems.
The flawed mechanistic view of productivity, which assumes a simple, exclusive causal relationship between tangible inputs and output, fails to acknowledge the profound interdependence between tangible and intangible inputs in creating lasting value.
Together or separately, these five factors have the power to neutralize or jeopardize the promise of reducing non-productivity attributed to AI and, by extension, to cancel out the expected gains.
Efficiency is local, functional, operational. It becomes a strategic illusion when detached from human, social, or organizational purpose.
Productivity, on the other hand, is systemic, collective, purpose-driven.
Confusing the two is mistaking speed for direction, and execution for value.
Therefore, addressing these five blind spots calls for deliberate, targeted action. The following recommendations translate each factor into concrete steps executives can take to unleash productivity gains and ensure their AI transformation programs deliver on their promise.
1. Embed deliberate “human-in-the-loop” safeguards by ensuring that critical decision points always require human review and rationale, especially in contexts of uncertainty or ambiguity. This sets the stage for continuous skill development, reinforced through targeted training programs. In parallel, AI deployments should be evaluated with expanded KPIs that track not only task-level efficiency but also indicators of human capacity retention, such as decision-making quality, cross-team knowledge exchange, and resilience in novel situations, so that gains in speed do not come at the expense of the very competencies that sustain long-term performance.
2. Benchmark AI outputs against an explicit definition of done, then enforce and continuously monitor it, applying a strategic relevance filter before deploying AI for any task. This means defining clear criteria for what constitutes a valuable output in the organization’s context and ensuring that AI is applied only to activities that meet these criteria. In addition, productivity metrics should be broadened to capture the hidden costs of validation, filtering, and cognitive triage, so that performance evaluations account for both visible outputs and the invisible resource drain AI may impose, thereby continuously challenging the assumed productivity ROI.
3. Model AI integration as a networked process, mapping interdependencies before deployment to anticipate where speed mismatches might occur. This means stress-testing workflows under AI-accelerated conditions, aligning upstream and downstream capacities, defining absorption thresholds (the maximum pace at which different teams, systems, or partners can process new outputs without loss of quality or clarity), and redesigning all processes involved accordingly. Measuring performance at the ecosystem level, rather than at isolated task nodes, ensures that gains in one area do not quietly erode value elsewhere.
4. Pair AI adoption with a structured “process deconstruction audit” before deployment. This means systematically mapping workflows, identifying legacy bottlenecks, and questioning implicit norms that limit AI’s transformative scope. To do so, cross-functional design labs should be established to prototype AI-enabled workflows from scratch, rather than forcing AI into pre-existing molds. Finally, adoption metrics should shift from ‘time saved per task’ to ‘systemic performance uplift,’ ensuring that productivity is measured in terms of collective, end-to-end impact, not just local acceleration.
5. Weight roles not only by task type but by the intangible human capacities they require (e.g., empathy, contextual judgment, cultural fluency), and design performance metrics to explicitly track and reward these capacities, ensuring they remain cultivated alongside efficiency gains. Symmetrically, protect high-intangible-value workflows from full automation, instead using AI in augmentation mode to free humans from low-value tasks so they can invest more in uniquely human contributions.
True progress lies not in what AI is able to do, but in what humans deliberately choose not to delegate, not to abandon, and to continue practicing.
This precise dissociation between efficiency and purpose, between execution and discernment, is what makes artificial intelligence structurally incomplete.
AI, in its current form, mimics certain human cognitive functions: it reformulates, responds, predicts, optimizes, but it is guided by no internal integrity.
Yet this mimicry alone is not sufficient to ensure value in human systems.
It is the ability to link intelligence to coherence, purpose, and responsibility that does.
This is why the shift in investment and research, from Artificial Intelligence toward Artificial Integrity, is necessary14.
As long as AI is designed as an amplifier of efficiency without a systemic social compass (let alone the equally vital ethical and moral one), it will contribute to productivity to a certain extent, while simultaneously sustaining the illusion of productivity without ensuring its real, systemic and positive effects.
Real progress will not lie in making AI ever more powerful at mimicking human cognitive intelligence, but in designing systems capable of functional integrity, of knowing why they act, in what context, with what limits, and for what collective ends.