INSIGHT

 

Strategy

Beyond the Big Data Mindset: An Executive’s Guide to Cultivating AI as Talent

Amy Wenxuan Ding and Shibo Li

Image Credit | Who is Danny

Cultivating AI as talent is a prerequisite for unlocking its real, procedural value

Enterprise GenAI initiatives are failing to deliver ROI, creating a “GenAI Value Paradox.” This is not a technological problem but a strategic one, rooted in a persistent “Big Data mindset.” This legacy mindset leads managers to confuse historical records (“receipts”) with the procedural know-how (“recipes”) that drives value, and to mistake pattern recognition for the necessary process acquisition. To solve this, we propose a fundamental shift in philosophy: leaders must cultivate AI as talent, not just implement it as a tool. We translate this into the actionable, three-step AI Apprenticeship framework, offering a new playbook to build real capabilities.

Related Articles

George S. Day and Paul J. H. Schoemaker, “Adapting to Fast-Changing Markets and Technologies,” California Management Review 58, no. 4 (2016): 59–77.


The Big Idea

Enterprises are investing tens of billions of dollars in generative AI (GenAI), expecting it to revolutionize how they do business. Yet, a staggering number of these initiatives are failing to deliver a measurable impact on profit and loss. A recent MIT report revealed a shocking truth: despite enterprise spending on GenAI soaring toward $40 billion, 95% of integrated pilots are failing to produce any measurable return1. Stuck as “science projects,” they are quietly rejected by the business units they were meant to transform. For executives, this widespread failure constitutes a crisis—the “GenAI Value Paradox,” defined as the profound disconnect between massive GenAI investment and the lack of tangible business value. How can a technology demonstrating remarkable capability in personal applications fail so spectacularly at the enterprise level? While common culprits like poor data quality, talent shortages, and misaligned strategy are contributing factors2,3, they are merely symptoms of a deeper, more fundamental error in thinking.

The problem isn’t the technology—it’s the data we feed it and the way we teach it. For decades, we have been training our systems to analyze the digital exhaust of our businesses: transaction logs, customer clicks, and performance reports. We have become experts at showing our AI the receipts of past activity. But GenAI is different. It is a “second mind” that can understand human communication, generate new content, and reason about process—capabilities previously unique to human beings. We cannot unlock its potential by keeping it siloed in data mining and analysis tasks. We must bring it into the core of the business.

To do this, leaders must make a profound mental shift: they must stop treating GenAI as a software tool to be implemented and start treating it as talent to be cultivated. This philosophy is the foundation for a new, practical framework we call AI Apprenticeship. To build this framework, we draw on decades of foundational, peer-reviewed management research into organizational knowledge and managerial cognition. It is a new managerial logic that shifts the focus from curating historical records to capturing expert know-how, enabling organizations to build unique, AI-driven capabilities that competitors cannot easily replicate.

The Cognitive Trap: Why a Big Data Mindset Fails in the GenAI Era

For the past two decades, a specific way of thinking has dominated corporate IT and strategy: the Big Data mindset. This cognitive frame is a classic example of what management scholars C.K. Prahalad and Richard Bettis identified as a “dominant logic”4. A dominant logic is the mental map and set of shared assumptions that managers use to make sense of their business. Forged in past success, this “Big Data mindset” was born from the spectacular wins of analytical AI at consumer tech giants. Google learned to master advertising by analyzing trillions of clicks. Netflix built its empire by finding patterns in viewing histories. Amazon optimized its supply chain by analyzing records of every package movement.

This mindset, forged in success, is built on three core beliefs that are now dangerously obsolete in the GenAI age. First, it assumes data is primarily a historical record, where value comes from amassing huge volumes of data about past events or human behaviors. Second, it posits that learning is synonymous with pattern recognition, where the goal is simply to find statistical correlations in those records. Finally, it operates on the belief that the bigger the data, the better the insight, and that competitive advantage comes from having more records than anyone else.

This logic was—and still is—incredibly powerful for analytical tasks. The problem, as research on dominant logic predicts, is that managers are now misapplying this same cognitive playbook to a new technology (GenAI) that operates under a different set of rules. This leads directly to a value-destroying trap, which manifests as two fundamental misalignments.

Misalignment 1: Confusing Records With Recipes

The first failure is a fundamental misunderstanding of what “data” means. Guided by the Big Data mindset, we have spent years amassing petabytes of transaction logs, supply chain movements, and customer clicks. This is “data-as-a-record”: a log of what happened, who did it, and when. Think of the media industry’s recommendation engines, which analyze viewing histories to capture viewers’ attention; the precision of digital advertising, which targets based on user clicks or customer sentiment; the logistics of supply chain management and the sharing economy, which optimize routes based on how goods travel; or healthcare management systems, which streamline billing based on patient records. This record-based data fuels AI’s success in analyzing past behavior because its core task is to find patterns in human records.

But high-value enterprise work—manufacturing, underwriting, complex debugging—is procedural. These core processes rely on “data-as-know-how,” the specific craft and technique that creates value. This know-how, what strategy research calls “tacit knowledge” embedded in organizational routines5,6,7, is rarely found in data lakes focused on explicit records. Consider a senior engineer’s design task. The system records that she finished (the “record”), but captures nothing of how—her experience, trade-offs, and methodology (the “know-how”). When she retires, that crucial knowledge vanishes. An AI trained only on records cannot perform her job because the essential procedural data is missing. Focusing solely on historical records starves AI of its most vital input. Firms feed it digital junk food and wonder why it lacks the capability for substantive work.

Misalignment 2: Mistaking Pattern Recognition for Process Acquisition

The second failure lies in a flawed understanding of “learning.” The Big Data mindset views learning as passively finding patterns in historical records, which is sufficient for prediction tasks. But the core processes that run a company are not prediction problems; they are procedural execution problems.

An airline, for example, can use a predictive model to classify which passengers are likely to miss their connection, but it cannot execute the complex operational process of rebooking them during disruptions. That requires executing a series of steps, applying situational rules, and making trade-offs—a process that is not documented in any historical log8. While GenAI can boost productivity on discrete tasks like drafting emails, these are isolated assists, not the end-to-end automation driving real transformation. If firms want AI to execute workflows, its learning must move beyond recognizing patterns (like triangles) to acquiring process (like Euclidean geometry). True learning involves acquiring method. The current approach mistakenly treats AI as a passive analyst when it needs to be treated as an active apprentice.

The AI Apprenticeship Framework: Three Steps to Building Real Capability

To break free from the Big Data mindset and start creating real value, leaders need a new mental model. This new model is rooted in the philosophy of cultivating AI as talent. We call the practical application of this philosophy the AI Apprenticeship framework. This framework isn’t a technical architecture; it’s a new managerial logic for shifting your organization’s focus from receipts to recipes. It consists of three iterative steps.

Step 1: Define the Curriculum

This first step is the strategic planning phase of talent management. Instead of asking, “What data do we have?” you must ask, “What is the job we are hiring this AI for, and what is the curriculum it must learn?” The goal is to define the “job” for the AI and curate the expert knowledge—the “recipes”—it needs to learn. This begins with a “Know-How Audit,” a modern application of the principles of organizational knowledge creation9.

Consider the case of a large (anonymized) insurer, “Global Assurance Inc.” (GAI). GAI was plagued by high variance in underwriting its complex commercial policies. New underwriters took years to become effective, and the firm’s best, most experienced underwriters were a constant bottleneck. Their initial GenAI project, which followed the Big Data mindset, involved feeding the AI 30 years of digitized policy records. The AI failed: it could find correlations (e.g., policies in a certain industry had high claims) but it could not replicate the judgment of a senior underwriter.

Switching to an AI Apprenticeship model, GAI launched a “Know-How Audit.” They identified their top 5% of underwriters and, using workflow capture tools, recorded them processing dozens of complex applications. This was paired with structured interviews where the experts “thought aloud,” explaining why they were flagging certain items. The output of this audit was not a dataset of records, but a “dynamic playbook” of the experts’ tacit knowledge. This playbook consisted of hundreds of heuristics (e.g., “If the applicant’s industry is X and revenue is Y, then check for this specific hidden liability Z, unless they have this mitigating certification A”). This playbook became the official curriculum for training their AI apprentice.
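
A "dynamic playbook" of this kind is, in essence, machine-readable expert heuristics. As a minimal sketch, assuming entirely hypothetical rule names, fields, and thresholds (GAI's actual system is not described at this level of detail), each heuristic can be captured as a trigger condition, a required check, and a waiver:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Heuristic:
    """One expert rule from the playbook: a trigger condition, a required
    check, and a mitigating condition that waives the check."""
    rule_id: str
    applies: Callable[[dict], bool]          # when does this rule fire?
    action: str                              # what must be checked
    unless: Callable[[dict], bool] = lambda app: False  # waiver condition

# Hypothetical heuristic mirroring the article's pattern: "If the industry is X
# and revenue is over Y, check hidden liability Z, unless the applicant holds
# mitigating certification A."
playbook = [
    Heuristic(
        rule_id="UW-042",
        applies=lambda app: app["industry"] == "chemicals" and app["revenue"] > 50e6,
        action="check environmental-remediation liability",
        unless=lambda app: "ISO14001" in app.get("certifications", []),
    ),
]

def required_checks(application: dict) -> list[str]:
    """Return the checks the playbook mandates for a given application."""
    return [h.action for h in playbook
            if h.applies(application) and not h.unless(application)]

print(required_checks({"industry": "chemicals", "revenue": 80e6, "certifications": []}))
# → ['check environmental-remediation liability']
```

The point of the structure is that each rule is auditable and individually editable, so experts can refine the curriculum rule by rule rather than retraining from scratch.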

This approach requires leaders to ask a series of pointed, strategic questions. They must first identify which processes truly differentiate their firm from its competitors. Following this, they must determine who the top 5% of performers are in those specific processes and, critically, what those individuals do differently. Finally, they must solve the organizational challenge of how to make capturing this expert workflow a seamless part of the expert’s job, rather than a distraction from it.

Step 2: Cultivate AI as Talent

This step reframes AI training from a passive, technical task to an active, pedagogical one. You are not just fine-tuning a model; you are developing a new member of your workforce. The goal is to develop the AI from a novice with raw potential into a skilled contributor capable of reliably executing a core process. This mirrors the talent development process for a human apprentice. It involves a robust “human-in-the-loop” methodology where your experts act as mentors. They guide the AI through the process, correct its mistakes in real time, and provide rich feedback on how it performs the steps, not just on the final outcome.

For example, a senior underwriter reviewing an AI’s work wouldn’t just tell the AI its final risk assessment is wrong. She would use a feedback tool to show the AI which three data points it overlooked, explain the reasoning behind her own, more nuanced assessment (“In this industry, we discount stated assets by 20% due to volatility”), and force the AI to re-run the process with this new heuristic. This is active teaching, not passive labeling.

This pedagogical approach raises critical questions for management. Leaders must first evaluate their internal incentive structures: are experts truly incentivized to be good teachers for the AI, and is time formally allocated for this mentorship? Concurrently, they must assess their technical infrastructure, asking whether the firm has the right feedback tools to allow this kind of rich, procedural correction. Finally, they must challenge their existing metrics and determine how to measure learning based on procedural fidelity (the AI’s ability to follow the right steps) rather than just on final outcome accuracy.
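
The mentoring loop can be sketched in a few lines of code. Everything here is an illustrative assumption (the function names, the toy scoring, the 0.05 penalty), not a real underwriting system; the point is that the expert's correction is expressed as a procedural rule the AI re-runs, not as an outcome label:

```python
def assess(application: dict, heuristics: list) -> float:
    """Toy risk score: start from a base rate, then apply each expert-taught
    heuristic in order -- procedural learning, not outcome labeling."""
    score = application["base_risk"]
    for heuristic in heuristics:
        score = heuristic(application, score)
    return round(score, 3)

def discount_volatile_assets(app: dict, score: float) -> float:
    """Mentor's correction: 'In this industry, we discount stated assets by
    20% due to volatility' -- a thinner asset cushion means higher risk."""
    if app["industry"] == "mining":
        return score + 0.05   # assumed penalty for the discounted asset base
    return score

case = {"industry": "mining", "base_risk": 0.30, "stated_assets": 10e6}
first_pass = assess(case, heuristics=[])                           # novice AI
second_pass = assess(case, heuristics=[discount_volatile_assets])  # after mentoring
print(first_pass, second_pass)  # → 0.3 0.35
```

Each round of expert feedback adds or revises a heuristic, and the apprentice's next pass is measurably different in procedure, not just in score.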

Step 3: Embed and Grow AI Talent

The final step is to move the AI from training into a productive role, complete with a path for ongoing growth and development, just like any other employee. This stage is analogous to a human employee’s career path and performance management. The AI is deployed to handle the high-volume, standard-procedure cases (the “80%”), freeing up human experts to focus on the most complex edge cases and on continuous improvement of the core process. The expert’s role evolves from a “doer” to a “fleet manager” of AI agents: supervising the AIs, handling exceptions, and, most importantly, using their expertise to update and improve the underlying recipes.

This step must include a feedback loop. As the AI works, it generates new performance data, which is then used to refine the curated know-how and improve its own skills. The AI doesn’t just perform its job; it helps the organization get better at the job.

This integration demands that leaders answer several organizational design questions. They must first map out how the introduction of this AI capability changes the roles and responsibilities of the human team members. They must also ensure that success is measured appropriately, asking whether the AI’s contribution is being judged by its total impact on the business process, not just by simplistic time savings. Finally, leaders must ask a crucial, dynamic question: is our AI helping us update and improve our core “recipes” over time, creating a true learning loop?
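
The division of labor in this step amounts to a routing rule plus an escalation log. The sketch below is a hypothetical illustration (the field names and the 0.9 threshold are assumptions): standard cases go to the AI apprentice, edge cases go to the expert, and every escalation is logged so the expert's decision can feed back into the playbook:

```python
escalation_log: list[int] = []   # edge cases fed back to experts as new curriculum

def route(case: dict, confidence_threshold: float = 0.9) -> str:
    """Send standard-procedure cases to the AI apprentice; escalate edge cases
    to the human expert acting as 'fleet manager'."""
    if case["ai_confidence"] >= confidence_threshold and not case["novel_pattern"]:
        return "AI"
    escalation_log.append(case["id"])  # the expert's decision later updates the recipes
    return "expert"

cases = [
    {"id": 1, "ai_confidence": 0.97, "novel_pattern": False},  # the standard "80%"
    {"id": 2, "ai_confidence": 0.55, "novel_pattern": True},   # complex edge case
]
assignments = {c["id"]: route(c) for c in cases}
print(assignments, escalation_log)  # → {1: 'AI', 2: 'expert'} [2]
```

The escalation log is the learning loop in miniature: each escalated case is raw material for a new or revised recipe.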

Answering the Skeptic: Why Apprenticeship Is a True Investment

At this point, many executives will raise a valid objection: “This sounds slow and expensive. The AI race is happening now. We need to move fast and break things, not run a slow, pedagogical training program.” This view, while understandable, represents a false economy and is the very reason 95% of pilots are failing. The “fast” approach—dumping historical records into a model and hoping for magic—is a fast path to a dead end. It produces a flashy demo that cannot perform reliably in the real world. The AI Apprenticeship model is not a cost; it is an investment in building a durable, proprietary asset. The asset is not the AI model itself (which will be a commodity), but the firm’s unique, curated, machine-readable “dynamic playbook” of its core know-how. This asset is incredibly difficult for competitors to replicate. This approach is not about moving fast; it’s about building something that lasts.

The Leadership Challenge: Putting AI Apprenticeship into Practice

Successfully cultivating AI as talent is ultimately a leadership challenge. Adopting this framework requires more than a new project management methodology; it requires a fundamental shift in leadership focus and a willingness to make different kinds of investments.

The first priority is to rethink metrics. The old metrics of the Big Data mindset will lead leaders astray. They must stop measuring AI projects by the size of the dataset or abstract accuracy scores and start measuring them by their impact on end-to-end business process metrics that the C-suite actually cares about: reduction in process cycle time, decrease in first-time-not-right errors, and increase in straight-through processing rates.

Concurrently, leaders must create new roles. The Big Data mindset created the data scientist; the AI Apprenticeship mindset will create the “AI Trainer” or “Knowledge Shepherd.” This is not a machine learning engineer; it is a senior domain expert (your best underwriter, your most experienced engineer, your savviest marketer) who is formally tasked with and rewarded for mentoring the firm’s AI agents.

To make this role successful, it must have real authority. This means the AI Trainer should not report to the CIO or the IT department. Instead, they should be embedded within the business unit and report directly to the process owner (e.g., the Chief Underwriter or the VP of Engineering). This ensures the business logic of the training is prioritized over the technology. Compensation must also be realigned: this cannot be a “side of the desk” project. These individuals should be on a formal “Teaching Excellence” track, with their bonuses and performance reviews tied directly to the measurable performance of the AI talent they are cultivating, such as the AI’s error rate and its impact on the business unit’s KPIs.

This strategic shift also has implications for investment. The bulk of AI spending today must be rebalanced from raw compute power toward new tools for process mining, workflow capture, and sophisticated human-in-the-loop feedback systems. Finally, this approach must be rolled out strategically. Leaders should not attempt a massive, big-bang AI transformation. Instead, they should start small, identifying one or two vital processes where this model can prove its value, and use that success to shift the organization’s mindset.
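
The process metrics named above (cycle time, first-time-not-right errors, straight-through processing) are straightforward to compute from a case log. This is a minimal sketch under assumed, hypothetical field names, meant only to show that the new scorecard is an operational measurement, not a model benchmark:

```python
def process_metrics(case_log: list[dict]) -> dict:
    """Score a process on end-to-end metrics executives care about,
    rather than on dataset size or model accuracy."""
    n = len(case_log)
    return {
        "avg_cycle_time_days": sum(c["cycle_days"] for c in case_log) / n,
        "first_time_not_right_rate": sum(c["rework"] for c in case_log) / n,
        "straight_through_rate": sum(c["no_human_touch"] for c in case_log) / n,
    }

# Hypothetical before/after snapshots of the same business process.
before = [{"cycle_days": 10, "rework": 1, "no_human_touch": 0},
          {"cycle_days": 8,  "rework": 1, "no_human_touch": 0}]
after  = [{"cycle_days": 4,  "rework": 0, "no_human_touch": 1},
          {"cycle_days": 6,  "rework": 0, "no_human_touch": 0}]

print(process_metrics(before))
print(process_metrics(after))
```

Comparing the two snapshots, rather than reporting a model's accuracy score, is what ties the AI's contribution to the P&L.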

The GenAI revolution is not about creating better analysts of historical records. It is about creating a new generation of digital partners that can learn to execute the core work of your business. This requires a profound shift in mindset: firms must start treating AI as talent to be cultivated. To do that, you must stop treating them like passive students to be shown receipts and start treating them like apprentices to be taught the recipe. That is the only path to building real, durable, and defensible value in the age of GenAI.

References

  1. MIT Media Lab (Project NANDA), “The GenAI Divide: State of AI in Business,” 2025, Project NANDA
  2. McKinsey & Company, “The State of AI in Early 2024: Gen AI Adoption Spikes and Starts to Generate Value,” Survey, May 30, 2024.
  3. Gartner, “Gartner Predicts 80% of Enterprises Will Have GenAI-Enabled Applications by 2026,” press release, October 11, 2023.
  4. Coimbatore K. Prahalad and Richard A. Bettis, “The Dominant Logic: A New Linkage between Diversity and Performance,” Strategic Management Journal 7, no. 6 (1986): 485–501.
  5. Robert M. Grant, “Toward a Knowledge‐Based Theory of the Firm,” Strategic Management Journal 17, no. S2 (1996): 109–22.
  6. Michael Polanyi, The Tacit Dimension (Chicago: University of Chicago Press, 1966), 18.
  7. Richard R. Nelson and Sidney G. Winter, An Evolutionary Theory of Economic Change (Cambridge, MA: Harvard University Press, 1985), 75.
  8. Fabrizio Dell’Acqua et al., “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality,” Harvard Business School Technology & Operations Mgt. Unit Working Paper 24-013, September 2023, 5.
  9. Ikujirō Nonaka and Hirotaka Takeuchi, “The Knowledge-Creating Company,” Harvard Business Review 85, no. 7/8 (2007): 162.
Keywords
  • Artificial intelligence
  • Organization
  • Process innovation
  • Strategic alignment
  • Strategic management


Amy Wenxuan Ding
Amy Wenxuan Ding is a Professor of AI and Business Analytics at emlyon business school. She received her PhD from Carnegie Mellon University. She conducts research at the forefront of generative AI, scientific discovery, and AI applications in business, innovation, marketing, and healthcare. She has published in many top-tier academic journals.
Shibo Li
Shibo Li is the John R. Gibbs Professor of Marketing at the Kelley School of Business, Indiana University. He obtained his Ph.D. in Industrial Administration (Marketing) from Carnegie Mellon University. His research interests include consumer dynamics and digital marketing. His research has appeared in many top-tier academic journals.




California Management Review

Published at Berkeley Haas for more than sixty years, California Management Review seeks to share knowledge that challenges convention and shows a better way of doing business.
