California Management Review
Kaarthikeyan Subramaniam and J. Mark Munoz
AI adoption across enterprises is moving faster than most leaders expected. Many organizations are rolling out internal GenAI applications such as chatbots, agents, workflows, and automation tools, often with access to multiple models, across their business functions. On paper, it looks like real progress. In practice, the value story is far less convincing. Despite widespread adoption, many CFOs and senior leaders are still struggling to see meaningful returns. A Gartner study reported that only 7% of CFOs say they are seeing high ROI from AI in finance, even as usage continues to grow.1 CIO Dive painted a similar picture: nearly 80% of AI projects fail to deliver on their original promise, and 42% are shut down before they ever reach full production.2 For something that has attracted so much investment and attention, that’s a sobering reality.
Most organizations today can be described as AI-enabled. They have the tools and access to powerful models. Many have extensive enterprise data. However, few take an AI-advantaged approach, in which AI is applied in a consistent and repeatable way to make better decisions, move faster, and outperform competitors. The gap between having access to AI and benefiting from it keeps getting wider. As a result, an important leadership question has fundamentally changed. It is no longer about whether an organization can deploy AI. The real question is harder: can we use AI in a way that creates lasting competitive advantage?
This scenario is frustrating to many companies, especially since the technology itself keeps getting better. Large language models are constantly setting new performance records. However, value creation remains elusive. This suggests that the problem is not the technology, but rather the way organizations choose to apply it. The flaws lie in leadership and operating models, not in technological model selection. While tools can be easily acquired, value cannot. To find value, companies must deliberately design and embed a planned system into decision-making processes and operations.
AI maturity in organizations will not be defined by more pilots, experiments, or isolated success stories. It will be defined by whether organizations can deploy a central enterprise AI platform that scales across divisions, connects to meaningful enterprise data, and keeps humans in control. That human role will not be found “in the loop,” rubber-stamping decisions, but rather in control of intent, constraints, and outcomes. Ultimately, AI success will be achieved with something very human: the ability to guide powerful technology toward outcomes that matter, safely and at scale.
Five key reasons explain why AI value creation has remained elusive for many organizations, and why they need to navigate these challenges deliberately if they want AI to deliver real value.
One of the biggest reasons AI fails to deliver value is that teams focus on building impressive solutions instead of solving the right business problem at scale. AI is often applied to automate tasks or generate insights without first asking whether those activities matter to business outcomes. When the problem is poorly defined, even a high-performing AI system will struggle to create meaningful value. Organizations need to start with the business problem. It is essential to define the value to be realized and assess it against total cost of ownership.
Many organizations roll out internal GenAI apps and tools in silos. Implementing the same use case across different functions often results in multiple tools without shared standards, governance, or a consistent approach. Fragmented efforts reduce visibility of use cases across the organization, increase duplication, and drive up costs. A commissioned study conducted by Forrester Consulting on behalf of Tines found that 88% of IT leaders say AI adoption remains difficult to scale without orchestration, as disconnected systems, data, and teams dilute value.3 Organizations need to establish a central enterprise AI platform as a hub, with the flexibility to connect multiple tools and platforms as spokes.
In many organizations, AI pilots and experiments generate early excitement but never evolve into enterprise-scale capabilities. This often happens because initiatives start with the technology rather than a clearly defined business problem, and even when a problem is identified, the solution is not integrated into core enterprise workflows. As a result, promising pilots remain isolated experiments. An MIT NANDA report titled “GenAI Divide: State of AI in Business 2025” found that despite $30–40 billion in enterprise investment into GenAI, 95% of organizations are getting zero return.4 Organizations need to stop treating AI as an experiment. It makes strategic sense to start with a production-grade minimum viable product designed to solve a key business problem at scale from day one.
Many organizations address risk only after an AI failure or compliance issue occurs, rather than building a responsible AI approach from the ground up. Risk controls, ethics, and accountability are often added reactively, once problems surface. An EY survey found that almost every company has experienced financial losses from AI-related risks, and that organizations with proactive governance and responsible AI practices see fewer incidents and lower impact when issues arise.5 Organizations need to create responsible AI policies and guidelines and embed them directly into platforms, workflows, and everyday practices.
While human-in-the-loop sounds like a safe approach for implementing responsible AI practices, it often becomes a barrier to both scale and value. When humans are asked to review AI outputs, decisions slow down and bottlenecks form because humans cannot keep up with the speed at which AI operates. Over time, these reviews turn into routine approvals, reducing the quality of oversight and creating a false sense of control. More importantly, when human effort is spent approving outputs instead of shaping inputs, outcomes, and constraints, organizations struggle to translate AI capabilities into real business value. As AI systems become more autonomous, especially with agentic AI, putting a human in every loop simply does not scale and blurs accountability. Organizations need to move from human-in-the-loop to human-in-control. Human-in-control enables faster decision making, lower risk, and higher ROI. In addition, human-in-control is essential for agentic AI to scale and would tighten the loop.
To unlock real value, organizations need to move from a human-in-the-loop design pattern to a human-in-control operating model. In this model, humans are no longer positioned as reviewers of AI outputs, but as designers with control. Humans provide the inputs and define the goals, constraints, and success criteria, while AI agents determine the most efficient paths to achieve business outcomes and create value.
A central enterprise AI platform is foundational to this shift. Without a shared platform, it would be nearly impossible to apply consistent guardrails, governance, or oversight at scale. The platform becomes the locus of control where data access, policies, models, and agents are orchestrated in a unified way.
This operating model moves control upstream. Instead of approving individual AI outputs, organizations focus on architecting guardrails with clear rules, boundaries, and escalation paths that guide how AI behaves. This allows AI to operate at speed to solve real business problems, while keeping humans accountable for outcomes, not just approvals.
There are three important strategic considerations to enhance value creation:
Organizations need teams that balance AI strategy (Head), responsible AI and ethics (Heart), and hands-on AI practitioners (Hands). Hiring only strategists creates vision without execution. Hiring only technologists creates solutions without trust. Value emerges when organizations intentionally build teams that combine all three.
For AI to deliver real value, managers need to monitor and manage it intentionally. Identifying and tracking a manageable set of clear signals is usually enough to keep a company’s AI agenda focused on real outcomes. Managers need to ask important questions about how AI is affecting decision impact, speed, control, trust, and adoption.
Given that AI continues to evolve, and organizations are learning along the way, there may be a need to change the AI agenda and direction altogether. Companies need to anticipate this reality and be prepared to swiftly change course in order to find optimal value.
In essence, contemporary managers need to think and act differently in the quest for value creation. They need to be prepared to steer away from the conventional and embrace the unconventional. Managers need to be adept at assessing project progress and implementing solutions in new and creative ways. Keen operational understanding and the ability to undertake prompt refinements is critical. When managers focus on solving the right problem and track decision impact, speed, control, trust, and adoption, AI value tends to follow.
AI value does not fail because models are weak. It fails when organizations deploy AI without clear intent, shared platforms, and accountability for outcomes. For managers, realizing AI value is less about choosing tools and more about how AI is applied, governed, and measured. The seven recommended approaches below can help move the needle on value creation:
Many firms have found AI value optimization to be elusive. While the right strategic approaches may vary across companies, and catching up with AI progression can be difficult, companies will be well served by rethinking their management methodologies and prioritizing the creation of impactful value.