California Management Review
by Brian R. Spisak and Gary Marcus
In the buildup to the New Year, AI prognosticators made their predictions about how 2025 would unfold. Many painted a world where GenAI would fundamentally transform organizations overnight. But the reality by year’s end may well look very different.
If you work with leaders, you’ve probably felt the ripple effects of these grandiose claims. Influencers on stages and social media have been painting GenAI as a magic wand that will dissolve hierarchies, reinvent processes, and deliver innovation at an unprecedented scale. So far that hasn’t happened. And it probably won’t happen soon. In all likelihood, AI isn’t going to restructure your organization, solve all your problems, or deliver instant transformation in 2025.
Silicon Valley loves hype, but leaders need to focus on what’s real, what’s actionable, and what will actually deliver value in the messy complexity of their organization. This article outlines the facts leaders actually need to know about GenAI, its potential, and how to approach AI transformation pragmatically.
Bold claims and well-polished demos from AI influencers are frequent – but they’re also irresponsible. The hype has created unrealistic expectations about what this technology can do. Leaders have been told that GenAI and large language models (LLMs) will be the centerpiece of an “AI-native organization,” reinventing business models and unlocking limitless growth. But few if any companies have actually seen that happen. A Q1 2025 Cisco survey of over 2,500 CEOs worldwide, for example, found that “97% plan to adopt and incorporate AI … into their business. But only 2% feel ready for AI.”
The problem is that enthusiastic narratives gloss over the realities of AI adoption, downplay the barriers, and oversimplify the complexities, pushing leaders to leap into the tech without understanding the costs, challenges, or limitations. When these initiatives inevitably fall short, it erodes trust in AI as a tool and damages the credibility of those advocating for its responsible adoption. It’s time to refocus on the realities leaders must navigate to make AI work for their organizations.
Dispelling the overblown notions surrounding GenAI requires a clear look at the facts.
According to a large Boston Consulting Group (BCG) survey of 1,000 C-suite leaders and senior executives published in Q4 2024, the vast majority of organizations (74%) are making little to no progress with AI initiatives, resulting in a failure to scale or generate meaningful value. At best, many organizations are stuck in the proof-of-concept phase. Leaders present light-touch AI initiatives to the board, investors, and other stakeholders to appear committed to transformation, but lack a plan to build a meaningful relationship with AI – often because the current tech isn’t mature enough to deliver on unrealistic expectations.
Even among organizations deemed “AI leaders,” BCG found that only 4% have developed cutting-edge AI capabilities that generate consistent value. The remaining 22% are just starting to realize some gains after years of investment. And here’s the kicker: this survey covered all types of AI, not just GenAI. That means returns on GenAI are likely even worse. If only 4% of organizations consistently see value from AI of any kind, it’s reasonable to estimate that the share seeing returns on GenAI specifically is approaching zero – something Microsoft’s CEO, Satya Nadella, recently highlighted.
Also, this pattern isn’t unique to BCG’s data. Deloitte uncovered similar findings in its Q3 2024 report on the “State of Generative AI in the Enterprise.” Of the nearly 3,000 senior leaders surveyed, the majority (68%) reported moving less than a third of their GenAI experiments into production, citing issues with reliability, security, and privacy. And in government, Deloitte reported that no business function had achieved more than 3% GenAI adoption. Similarly, a Q3 2024 McKinsey report found that just 11% of the 1,000 companies surveyed had adopted GenAI at scale. In other words, the pace of adoption remains far below the lofty promises made by AI influencers.
This data-driven reality contrasts sharply with hyped-up narratives. GenAI for most organizations is, at best, an experiment, far from achieving the enterprise-wide, “game-changing” status predicted for 2025. Years of employee training, mindset shifts, cultural evolution, process modifications, strategic adjustments, and technological improvements will be required before GenAI fundamentally changes how we work.
Leaders looking to implement GenAI face two interconnected challenges: executing a practical plan and ensuring that plan aligns with their organization’s values. Companies like IBM and BMW offer clear examples of how to approach these challenges in a structured and meaningful way.
IBM’s approach focuses on the “how” of implementation. The following eight steps ensure their AI initiatives are practical and scalable:
These steps provide a structured way to move from planning to action, but execution is only part of the story. BMW complements the technical “how” with a clear focus on the “why.” The car manufacturer’s leadership team developed five principles to guide AI use and ensure alignment with their values by focusing on:
As BMW emphasizes, AI transformation is never an end in itself. Instead, it should enhance employee well-being, alleviate burdens, and deliver clear benefits. This focus on “duty of care” grounds AI adoption in a commitment to people, not just efficiency.
To responsibly explore GenAI, leaders can combine the structured steps of IBM with the guiding principles of BMW. Together, they offer a roadmap that balances execution with purpose. Start with measurable goals, assess your data and technology, and scale incrementally, but always with an eye on how the technology supports employees, builds trust, and aligns with your organization’s values.
In the real world, companies like IBM and BMW must navigate volatility, uncertainty, complexity, and ambiguity. As we’ve shown, most organizations are limiting GenAI to specific domains and experimenting with proofs of concept. Why? Partly due to corporate friction, but also because GenAI itself has limits. Yes, it’s an impressive tool, excelling at natural language processing, content generation, and (to a degree) summarization, but it’s no silver bullet for the messiness of reality.
GenAI struggles with reliability. It can hallucinate false information, reinforce biases, and produce outputs that are inappropriate. The outcomes can range from the absurd – like Google’s chatbot recommending glue as a pizza ingredient – to the dangerous, such as an AI mistakenly doubling a medication dosage in a hospital discharge summary.
While companies like OpenAI are investing heavily to advance LLMs like those powering ChatGPT, leading researchers acknowledge that hallucinations “can’t be stopped.” In fact, as these models evolve, the concern is that they will become better at generating highly detailed yet flawed responses, making them more convincing and harder for trainers, testers, and users to detect. These limitations have led experts to agree that, in its current state, this technology remains unsuitable for high-stakes decisions without significant human oversight.
Another popular narrative is that GenAI will dissolve hierarchies and usher in a golden age of fluid, project-based organizations. According to this logic, middle management will disappear, and small teams will seamlessly collaborate with AI to drive unprecedented innovation.
Maybe someday, but no time soon. Most organizations, for better or worse, rely on hierarchies for stability, accountability, and scalability. These entrenched structures exist because they work, particularly in complex environments where operations span geographies, products, and services.
That doesn’t mean AI won’t reshape roles or processes. It will – some middle-management roles may indeed disappear in the short term. But deep AI transformation will likely be incremental, not revolutionary. Middle management, for instance, won’t vanish entirely. It will shrink and evolve over time into roles focused on human-AI coordination, leveraging a mixture of employees and AI agents to streamline workflows, improve decision-making, and boost performance. Throughout this evolution, leaders should treat AI as a tool for augmenting existing structures, not dismantling them, to ensure a smooth and safe transition into the future.
Innovative leaders need to take this measured approach because the hard reality is that AI transformation is expensive, complex, and slow. Major barriers include:
These aren’t trivial obstacles. They’re why transformation, especially in large enterprises, is measured in years, not months. Leaders must approach AI with patience and a willingness to address these challenges head-on with pragmatism and precision.
The pioneers who succeed in integrating AI into their organizations will be the ones who focus on deliberate, actionable strategies. They will prioritize substance over spectacle and make decisions grounded in their organization’s unique needs and realities. Success will come from identifying practical opportunities where AI can create real, measurable value. Here’s how to do that:
Leaders should not be seduced by the hype. They need clarity, focus, and actionable insights. GenAI and other AI technologies hold immense promise, but they won’t magically solve all organizational challenges. Transformation requires patience, investment, and a deep understanding of both the opportunities and the limitations of AI.
The best organizations won’t be the ones swept up by the latest buzzwords. They’ll be the ones with leaders who separate signal from noise, address realities, and guide their teams through the hard but necessary work of AI adoption.