CMR INSIGHTS

 

4 Widespread GenAI Assumptions to Question

by Oguz A. Acar and Pedro Amorim


Questioning common assumptions helps leaders adopt GenAI with greater clarity and precision.

Generative AI is everywhere. Scroll through consultancy reports or tech blogs, and you’ll see a familiar narrative: AI will solve every business problem, it is growing exponentially, and if you are not “AI-first” with proprietary models and data you are already falling behind.

It sounds convincing—until you look at the numbers. While 72% of companies are adopting GenAI and investing millions in GenAI initiatives, only about one in four report achieving tangible value. This stark contrast suggests that AI hype has outpaced its impact.

In our conversations with executives, we keep hearing the same frustrations. The GenAI playbooks they have been handed are big on promises but short on sound reasoning and results. These playbooks not only fail to inspire confidence but often lead to costly missteps, inefficiencies, and confusion instead of transformation.

So what should leaders do? It is time to hit pause and ask the right questions: What advice is actionable, and what is just noise? How can we cut through the AI hype and focus on what actually works?

The truth is, GenAI holds immense potential—but realizing it does not come from blind adoption of AI recommendations. Leaders must scrutinize AI’s promises carefully and separate fact from fiction. Drawing on research, real-world implementations, and our experiences with organizations, we identify four pervasive GenAI assumptions that oversimplify, mislead, and demand closer examination.

Assumption 1: The Optimization Illusion—Why GenAI Isn’t Always Better Than Traditional Tools

Claim: “Generative AI can optimize business processes more effectively than established methods”

Reality Check: Despite its impressive capabilities, GenAI often falls short of traditional optimization methods in tasks requiring precision, reliability, and stability.

Take supply chain optimization as an example. Many consulting firms promote GenAI for inventory management, but traditional methods like mathematical programming and machine learning forecasting models still deliver more dependable results. Supply chains are complex systems with many constraints, and they demand predictable, stable solutions—something current generative models struggle to deliver.
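
To make the contrast concrete, the sketch below shows the kind of traditional mathematical programming that still underpins inventory decisions: a small multi-period production-and-inventory plan expressed as a linear program and solved with an off-the-shelf solver (SciPy). This is a minimal illustration, not an implementation from any of the cases above; the demand, cost, and capacity figures are hypothetical placeholders.

```python
# Minimal multi-period production/inventory plan as a linear program.
# All demand, cost, and capacity numbers are hypothetical placeholders.
import numpy as np
from scipy.optimize import linprog

demand = np.array([120, 150, 90])   # units demanded per period (assumed)
unit_cost, hold_cost = 4.0, 0.5     # production and holding cost per unit (assumed)
capacity = 140                      # maximum units producible per period (assumed)

T = len(demand)
# Decision vector x = [p_1..p_T, i_1..i_T]: production, then end-of-period inventory.
c = np.concatenate([np.full(T, unit_cost), np.full(T, hold_cost)])

# Inventory balance: i_t - i_{t-1} - p_t = -d_t, with starting inventory i_0 = 0.
A_eq = np.zeros((T, 2 * T))
for t in range(T):
    A_eq[t, t] = -1.0              # -p_t
    A_eq[t, T + t] = 1.0           # +i_t
    if t > 0:
        A_eq[t, T + t - 1] = -1.0  # -i_{t-1}
b_eq = -demand

bounds = [(0, capacity)] * T + [(0, None)] * T  # capped production, non-negative inventory

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("optimal cost:", round(res.fun, 2))
print("production plan:", res.x[:T].round(1))
```

Unlike a generative model, the solver either returns an optimal plan for the stated constraints or reports that no feasible plan exists—exactly the kind of predictability supply chains demand.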

Advice on using GenAI for pricing decisions offers another case in point. GenAI has made strides in mathematical reasoning, but industries with sophisticated pricing needs still favor established algorithms that have been refined over many years. For instance, airlines, pioneers of dynamic pricing, continue to rely on proven optimization models that deliver precision and reliability at scale.

Or consider warehouse design. GenAI promises innovative layouts but often falls short when faced with real-world constraints. A recent study revealed that AI-generated warehouse designs required substantial modifications to meet practical requirements. In contrast, while far less flashy, traditional discrete event simulation—a method for modeling uncertain processes over time—is better equipped to handle the interplay of space, workflows, and safety constraints.
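
For readers unfamiliar with the technique, the sketch below shows discrete event simulation in miniature, using the open-source simpy library: orders arrive at random and queue for a limited number of picking stations, and the model tracks the resulting waits. The arrival rate, picking time, and station count are hypothetical placeholders, but the same mechanics scale up to evaluating full warehouse layouts under uncertainty.

```python
# Minimal discrete event simulation of a picking area (simpy).
# Arrival rates, service times, and station counts are hypothetical placeholders.
import random
import simpy

RANDOM_SEED, SIM_HOURS = 42, 8
PICK_STATIONS = 3          # assumed number of picking stations
MEAN_ARRIVAL_MIN = 2.0     # assumed mean minutes between order arrivals
MEAN_PICK_MIN = 5.0        # assumed mean minutes to pick one order

waits = []

def order(env, stations):
    """A single order: queue for a free station, then get picked."""
    arrived = env.now
    with stations.request() as slot:
        yield slot                              # wait until a station frees up
        waits.append(env.now - arrived)         # record the queueing delay
        yield env.timeout(random.expovariate(1.0 / MEAN_PICK_MIN))

def order_source(env, stations):
    """Generate a stream of randomly arriving orders."""
    while True:
        yield env.timeout(random.expovariate(1.0 / MEAN_ARRIVAL_MIN))
        env.process(order(env, stations))

random.seed(RANDOM_SEED)
env = simpy.Environment()
stations = simpy.Resource(env, capacity=PICK_STATIONS)
env.process(order_source(env, stations))
env.run(until=SIM_HOURS * 60)   # simulate one 8-hour shift, in minutes

print(f"orders picked: {len(waits)}, avg wait: {sum(waits) / len(waits):.1f} min")
```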

Takeaway: GenAI is a powerful complement, not a replacement for established optimization methods. It is great for generating creative alternatives, exploring possibilities, or enhancing natural language interfaces. But for business-critical decisions where precision and reliability are non-negotiable, pairing GenAI with established optimization techniques is the smarter move. Success lies in understanding where each approach contributes most effectively.

Assumption 2: The Growth Promise—Why Betting on Exponential AI Progress Is Risky

Claim: “Generative AI will grow exponentially”

Reality Check: While AI progress has been remarkable, the assumption of inevitable, exponential growth in the short term is far less certain than it sounds. Significant technical and practical challenges make AI’s trajectory unpredictable.

First, the rapid progress we saw with large language models (LLMs) leading up to GPT-4 has slowed. The scaling hypothesis—the belief that bigger models trained on more data will keep delivering improvements—is increasingly under scrutiny. Experts like Ilya Sutskever, one of the co-founders of OpenAI, highlight a fundamental limitation: “There’s only one internet.” In other words, we’re running out of data to train these models. Some are turning to synthetic data to fill the gap, but that comes with its own limitations. Research shows that training AI on its own outputs can lead to model collapse—where the performance of subsequent models irreversibly degrades.

To be fair, there are promising signs of progress in other directions. OpenAI’s o1 model suggests that the next frontier for AI development might lie in scaling inference—not just model size. Yet, even with significant breakthroughs, the road ahead faces real obstacles and uncertainties.

Take computational power. Despite advances in algorithms and hardware, the supply of compute has not kept pace with the escalating demands of training ever-larger models. The cost of training a frontier model now runs north of a hundred million dollars, and scaling up further is not guaranteed to be sustainable. Then there is the environmental impact. The energy demands and carbon footprint of large-scale AI training could act as natural constraints. Developers are exploring alternative energy sources such as nuclear power, but their viability remains unclear for now.

Takeaway: Betting everything on exponential AI growth is a risky strategy. While AI’s potential is undeniable, its trajectory is anything but predictable. The smartest organizations are those that focus on delivering value today with tools that work now while staying agile enough to capitalize on future breakthroughs. Build a balanced portfolio: experiment, learn, and invest where AI creates real impact—but hedge your bets against the hype. Success comes from being ready for multiple scenarios—whether progress comes fast, slow, or somewhere in between.

Assumption 3: The AI-First Fallacy—Why Strategy Should Lead Technology, Not the Other Way Around

Claim: “AI-first strategy is needed to stay competitive”

Reality Check: Blanket AI-first strategies—putting AI at the center of all strategic decisions—often create more problems than they solve by ignoring critical organizational priorities, practical realities, and essential human and ethical considerations.

The first issue is deploying AI where it doesn’t belong. Not every problem needs an AI-powered solution. Optimization challenges, for example, are often better served by traditional tools.

Second, there is system readiness—or the lack of it. A recent survey by Cisco illustrates this issue: only 13% of companies globally feel ready to leverage AI, with infrastructure noted as a particularly concerning area. Rushing into AI initiatives without a solid foundation will likely lead to operational headaches, costly retrofitting, and stalled projects.

Then there’s the human factor, which often gets ignored. Aggressive GenAI deployment can erode the very qualities organizations rely on for success—motivation, collaboration, and engagement. For example, research shows that algorithmic management systems undermine employee prosocial behavior, making colleagues feel less like collaborators and more like objects. When technology is prioritized over people, employees check out, resist change, and adoption falters.

And finally, there is the ethical and regulatory landscape. AI-first strategies can put organizations on the wrong side of fairness, accountability, and compliance—a risk that is becoming harder to ignore as regulations work to catch up to technology.

Takeaway: AI is a powerful technology, but it is not a strategy. The real value of AI comes from how it supports your organizational goals. Instead of forcing AI into every decision, leaders need to take a different approach: let AI enhance human capabilities, solve real problems, and align with strategic priorities. Success comes not from treating AI as the answer to everything, but from knowing where—and how—it can create the most value.

Assumption 4: The Data Misconception—Why You Don’t Need Proprietary Data and Models to Unlock GenAI Value

Claim: “To leverage GenAI, you need your own proprietary data and model”

Reality Check: High-quality data is undoubtedly valuable, but proprietary data and models aren’t always a necessity for leveraging the power of GenAI. Believing otherwise risks discouraging organizations—especially those with limited resources—from tapping into GenAI’s potential. In many cases, techniques like prompt engineering and retrieval-augmented generation (RAG)—an approach that retrieves relevant external information and integrates it into the generative process—can significantly improve performance without relying on large, custom datasets or models. These approaches are often more agile, cost-effective, and strategically sensible than building or fine-tuning bespoke models, which require vast amounts of data and ongoing resources.
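
As a concrete illustration, here is a minimal RAG sketch: a handful of internal documents, a simple retriever, and a prompt that places the retrieved context in front of the question before it is sent to a general-purpose model. The documents and the retrieval method (TF-IDF) are deliberately simple placeholders; a production system would typically use an embedding-based vector store and whichever LLM API the organization already has access to.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The documents below are hypothetical placeholders for a company knowledge base.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Refunds are processed within 14 days of receiving the returned item.",
    "Warehouse orders placed before 2pm ship the same business day.",
    "Enterprise contracts are reviewed annually every January.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (TF-IDF + cosine similarity)."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(documents)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def build_prompt(query: str) -> str:
    """Stuff the retrieved context into a prompt for a general-purpose model."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("How long do refunds take?"))
# The resulting prompt would then be sent to any general-purpose LLM API.
```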

Consider Bloomberg’s custom LLM, BloombergGPT, specifically designed for the financial industry. The decision to build a custom model seemed forward-thinking at the time, yet a study revealed that GPT-4 outperformed it on core financial tasks. This highlights the risk of investing in custom solutions in a fast-moving AI landscape—companies risk being saddled with models that quickly become obsolete as frontier AI models advance.

Takeaway: While data remains important, the key determinant of success with GenAI lies in the strategic application of existing assets. Organizations should focus on leveraging pre-existing AI capabilities to deliver immediate value and creatively applying general-purpose models to address business challenges. It is essential to remain flexible and avoid overcommitting to custom solutions that may quickly become outdated as AI technology continues to evolve.

Practical Recommendations for Managers

GenAI offers tremendous potential, but successful adoption requires a thoughtful and disciplined approach. Leaders should consider the following steps to create value with GenAI:

  1. Invest in AI Skills: Equip teams with the skills to critically evaluate AI solutions and the regulatory landscape, so they can separate hype from reality. Workshops, training programs, and cross-functional AI teams can help build a culture of informed decision-making.
  2. Combine Generative and Analytical AI: Leverage GenAI to complement—not replace—existing analytical techniques. Hybrid approaches that blend creativity with precision often deliver the most robust, sustainable results.
  3. Adopt a Risk-Adjusted Mindset: Balance bold experimentation with rigorous evaluation when tackling high-stakes decisions. Act decisively when opportunities align with clear evidence or high-reward potential, but observe and assess when risks are high or uncertain. Focus resources on initiatives that demonstrate measurable value or long-term strategic impact.
  4. Take a Problem-Centric Approach: Focus on identifying the organization’s most pressing challenges before exploring AI solutions. Avoid the temptation to retrofit problems to fit AI capabilities; instead, evaluate where GenAI can add meaningful value.
  5. Start Small, Scale Thoughtfully: Begin with pilot projects and proof-of-concept initiatives to test assumptions and learn while using readily available models or small, strategically curated datasets. Use these small-scale efforts as a foundation for smarter, more confident large-scale implementations.  

GenAI is undeniably powerful, but its true value lies in thoughtful, strategic application—not in chasing hype or following the latest AI playbook. The widespread assumptions we have examined—about optimization, growth, strategy, and data—illustrate a key point: AI is a tool, not a magic wand. Leaders who can distinguish hype from actionable insight will not only avoid costly missteps but thrive in an AI-driven world.



Oguz A. Acar
Oguz A. Acar is Professor of Marketing & Innovation and Head of Generative AI at King’s Business School, King’s College London, as well as a research affiliate at Harvard University’s Laboratory for Innovation Science. His current research is at the nexus of GenAI, organizations, and education.
Pedro Amorim
Pedro Amorim is a Professor at the University of Porto. He is also a Partner at LTPlabs, a consultancy that applies advanced analytical methods to help organizations make better complex decisions. He is co-author of the book 'The Analytics Sandwich: Bringing People and Artificial Intelligence Together to Unlock Business Value.'
