
When AI Joins the Product Team, Will Leadership Still Drive Innovation?

Shivam Srivastava


Leadership still drives innovation, not AI

CEOs and CFOs today hear that AI can turbocharge product development. Indeed, 65% of leading organizations now use generative AI – nearly double last year’s rate – and analysts project the AI market to expand by roughly 30–40% annually through 2030 (with the enterprise product-development segment alone growing at about a 35% CAGR).1 Those figures translate into real gains on the ground: recent studies show AI-assisted teams developing ideas about 13–16% faster than before and generating higher-quality, more novel solutions. One P&G field experiment found AI-enabled teams were three times more likely to produce top-10% ideas than teams without AI.2 But the key insight for senior leaders is this: AI is an accelerant and amplifier of talent, not a replacement for strategic decision-making. Machines can search vast data, propose thousands of design variants, and automate routine tests – yet they do not choose ambitions or bear consequences. It is human executives who define what success looks like and who guide those accelerated innovation processes.


AI-Powered Innovation Across Industries

Across sectors, companies are retooling their front-end innovation with AI. For example, Nestlé and IBM built a generative language model to explore sustainable food-packaging materials. By mining scientific literature and patent data, the tool proposes novel high-barrier films and papers that traditional R&D might never find through experiment alone.3 This “AI exploration engine” greatly widens the range of options early on, suggesting materials that meet product needs at the right cost and recyclability. (Nestlé’s CTO notes it could optimize the development of sustainable packaging across categories.) In consumer-packaged goods, Procter & Gamble’s recent AI hackathon provides a clear data point. Cross-functional P&G teams using an internal GPT-4 chatbot solved real product challenges faster and with better ideas: AI-augmented teams not only produced higher-quality solutions but on average shaved roughly 15% off ideation cycle time.2 Strikingly, individuals working alone with AI performed about as well as two-person human teams without AI, suggesting the technology broadens who can contribute. And crucially, P&G found that AI works best with people: “If you want to be in that top 10% of performers, a full human team plus AI seems like the recipe for success,” one researcher observed.

Other case studies reinforce the pattern. Mondelēz International (maker of Oreo) is deploying a generative tool for marketing content that is expected to cut production costs by roughly 30–50%.4 Barry Callebaut, the world’s largest chocolate maker, has partnered with NotCo AI to embed machine learning into recipe development. In that project, AI will generate and evaluate novel chocolate formulations (varying ingredients and constraints) to accelerate R&D.5 The goal is to shorten development cycles and improve hit rates: as Barry Callebaut explains, testing AI-driven formulation processes promises “unprecedented speed and precision” in creating new recipes. Across these examples, the lesson is consistent: AI expands the team’s creative capacity (by surfacing patterns and alternatives humans could not feasibly enumerate), but it does not decide which of those alternatives to pursue. That remains a leadership choice.

Leadership, Strategy, and Governance

In practice, the primary innovation constraint has proven to be leadership, not technology. Many organizations are still “experimenting with AI” but fail to translate pilots into profits. Industry surveys consistently show that only about 10–15% of companies achieve measurable business impact from AI,6 and the majority of AI initiatives stall: nearly 85% never make it to full production. What separates the leaders from the laggards is clear executive ownership and discipline.7 High-performing firms treat AI not as a tinkering project but as a strategic capability with a designated C-level sponsor, clear objectives, and accountability. In other words, product innovation must be explicitly tied to AI strategy and funding.

Where leadership falters, AI efforts tend to be fragmented and underfunded – pilots remain siloed, and learnings aren’t institutionalized. By contrast, top teams set up AI projects with well-defined outcomes (for example, a percentage reduction in development time or defect rates). They then manage these as staged investments: define the business problem first, ensure training data is fit for purpose, run a controlled pilot, measure actual lift against a known baseline, and then decide whether to scale. Leaders also impose simple, structured criteria for go/no-go decisions. For instance: Does the data exist in usable form? Can the AI output integrate into the product workflow? Are there security, ethical, or regulatory red lines? And critically: what is the size of the economic prize (efficiency gains or revenue growth)? By scoring AI initiatives on data readiness, integration risk, compliance, and expected value, leaders force early clarity on what must be true for success.
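A scoring discipline of this kind can be made concrete in a simple spreadsheet or script. The sketch below is purely illustrative – the criteria names come from the text, but the weights, the 1–5 rating scale, and the threshold are hypothetical assumptions, not prescriptions from any of the companies discussed:

```python
# Illustrative go/no-go scoring rubric for AI initiatives.
# Weights, rating scale (1-5), and threshold are hypothetical examples.

CRITERIA = {
    "data_readiness": 0.3,   # Does the data exist in usable form?
    "integration_fit": 0.2,  # Can output plug into the product workflow?
    "compliance": 0.2,       # Clear of security/ethical/regulatory red lines?
    "economic_value": 0.3,   # Size of the efficiency or revenue prize
}

def score_initiative(ratings: dict, threshold: float = 3.5):
    """Weight each 1-5 rating; return the score and a go/no-go call."""
    total = sum(weight * ratings[name] for name, weight in CRITERIA.items())
    return total, ("go" if total >= threshold else "no-go")

# Example: strong data and value, weaker integration story.
score, decision = score_initiative(
    {"data_readiness": 5, "integration_fit": 2,
     "compliance": 4, "economic_value": 5}
)
```

The point of such a rubric is not the arithmetic but the forced conversation: each rating must be defended with evidence before a pilot earns further funding.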

  • Treat AI projects as staged bets. Don’t jump to full deployment until a pilot proves out. First define the strategic goal and success metric, then work your way through data validation, model testing, and incremental rollouts.
  • Enforce cross-functional ownership. Assign a single executive or product leader to each AI-driven program, with clear responsibility for outcomes. Ensure collaboration among R&D, IT, and business teams from the start, so pilots aren’t trapped in narrow silos.
  • Embed human review and guardrails. Always build humans into the loop. For example, Mondelēz’s AI tool for ads was explicitly constrained by rules – no manipulative language or off-brand content – and every output is reviewed by humans. In this way, leadership steers the AI output to align with company values and brand safety.

Hybrid Teams and Human Accountability

The most effective innovation teams are hybrids: people working with AI. AI augments, but does not replace, human judgment and creativity. In the P&G study, employees reported that AI collaboration boosted their enthusiasm and confidence, not their anxiety. Teams that view AI as a “cybernetic teammate” can dramatically widen who contributes ideas. As one researcher put it, more parts of the company can now bring forward great ideas: the classic divide between domain experts and everyone else becomes less relevant once AI handles the technical detail.

That said, accountability remains human. AI can automate experimentation, but it cannot (and should not) automate responsibility for product goals or ethics. Leaders’ roles shift toward defining intent and outcomes. They must articulate the vision (“what problem are we solving?”) and the value criteria (“what results matter?”) that guide AI’s search. They also set escalation rules: at what point will we pause and reevaluate? These are value judgments, not equations to solve. For example, Mondelēz’s policy documents explicitly prohibit AI-generated content that highlights unhealthy eating or uses stereotypes – decisions only human stewards can make.

In short, AI changes where work happens, not who is ultimately responsible. When an AI system proposes a new design or marketing concept, human teams must interpret, refine, or reject it. Leaders must ensure success metrics stay customer-centric, not just proxy efficiencies. They also need to develop new skills: managers now need “decision literacy” to question probabilistic outputs. Good managers will ask AIs for counterfactual scenarios (“What if we tweak this assumption?”), demand evidence for strong claims, and weigh multiple signals when they conflict. Over time, these changes compound: teams become faster and more aligned, not by passively accepting AI, but by making better-informed choices with fewer iterations.

Moving Forward: Principles for AI-Driven Innovation

For senior leaders ready to act today, the task is to adapt their processes to the AI era. Traditional stage-gate reviews and annual planning are often too slow for AI’s rapid iteration. Instead, successful companies push decision authority closer to the work while establishing a few clear guardrails at the top. Product executives might codify only critical “non-negotiables” – for instance, customer experience outcomes, compliance requirements, data privacy, and ethical boundaries – and then allow AI-augmented teams to iterate aggressively within those bounds. This preserves speed without sacrificing oversight.

Leaders also need to invest in people as much as technology. The hardest gaps are often organizational, not technical. Training should focus on how AI changes decision-making. For example, one action item is to teach managers to see AI as a collaborator: teams that completed brief hands-on workshops on prompting and verifying AI made much better use of the tools. Senior managers, likewise, should regularly review AI pilot results to learn its failure modes and biases. Reward systems can shift to value “speed to insight” and successful human-AI collaboration, not just process adherence.

In practice, firms report that over time they converge on structured flexibility. As one practitioner put it, “Intelligence is becoming abundant; judgment remains scarce.” Competitive advantage increasingly hinges on leadership clarity and accountability. Some CEOs summarize it this way: AI can explore and propose alternatives at machine scale, but only humans choose the ambition and accept the risk. The companies that win will be those that redesign leadership to operate at the speed of AI – making ambition, ethics, and trade-offs explicit, even as algorithms cover ever more ground.

Ultimately, when AI joins the product team, human leadership still drives the innovation roadmap. Machine intelligence multiplies the search space and execution power, but it is leaders who ask the right questions, set the destination, and own the outcomes. In a governed AI-powered R&D process, innovation becomes a dialogue: human intent shapes the pathway, and AI brings rapid iteration and new ideas. CEOs and CFOs must recognize that their role evolves but does not diminish. In fact, it becomes more critical than ever – ensuring that AI-generated capability translates into real, strategic impact.

  1. People Matters, “AI Adoption Spikes as Companies Start Seeing Real Value from Gen AI,” People Matters, June 4, 2024, updated June 10, 2024.
  2. Fabrizio Dell’Acqua et al., “The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise,” Harvard Business School Working Paper No. 25-043, 2025.
  3. Nestlé, “Nestlé and IBM Leverage AI and Deep Tech to Unlock New Packaging Innovations,” Nestlé, July 3, 2025.
  4. Jessica DiNapoli, “Oreo-Maker Mondelez to Use New Generative AI Tool to Slash Marketing Costs,” Reuters, October 25, 2025.
  5. Barry Callebaut, “Barry Callebaut Partners with NotCo AI to Unlock Next-Level Chocolate Innovation,” press release, November 18, 2025.
  6. Thomas Calvi, “Business and AI: Only 10% of Companies Would Perceive a Very Significant Financial Impact According to a Study,” Actuia, October 21, 2020.
  7. Kim Billeter et al., “Can AI Advance Toward Value if Workforce Tensions Linger?” EY Insight, November 10, 2025.
Keywords
  • Artificial intelligence
  • Hybrid organizations
  • Hybrid structure
  • Leadership potential
  • Product innovation


Shivam Srivastava
Shivam Srivastava is a Research Associate at the Centre for Business Innovation (CBI), Indian School of Business, Hyderabad. He holds an MA from the University of Delhi. His research focuses on AI-driven product innovation, decision-making, and organizational design. His interests include marketing, innovation, defense and aviation ecosystems, international relations, and applied AI.




California Management Review

Published at Berkeley Haas for more than sixty years, California Management Review seeks to share knowledge that challenges convention and shows a better way of doing business.
