Adrian Wolfberg
Leaders are discovering that AI’s biggest challenges are not technical but conceptual. The question isn’t whether an algorithm can analyze, predict, or generate; it’s whether leaders can frame the problem well enough to decide how human and machine intelligence should work together. The organizations that stumble tend to treat AI as a single capability to deploy rather than as a partner whose value depends on context. The key insight is simple: the type of problem you face should determine the form of collaboration you choose. When that match is off, efficiency erodes, trust collapses, and innovation stalls. Framing, not technology, becomes the decisive leadership act.
Not all problems are created equal. Some are familiar and structured. Others are novel, entangled, or deeply value-laden. The Hybrid Diagnostic Cube, shown in Figure 1, organizes these variations along three dimensions: familiarity (how well understood the problem is), complexity (how many moving parts interact), and wickedness (how much the problem involves conflicting values or unclear success criteria).
Figure 1: Hybrid Diagnostic Cube with Types of Hybrid Problems

For example, a logistics manager who optimizes truck routes deals with a familiar and complex problem, one that is technically challenging but measurable. A hospital allocating scarce ICU beds faces a complex and wicked one, requiring trade-offs among fairness, urgency, and uncertainty. A national-security analyst assessing a potential climate-driven migration crisis confronts a novel and wicked problem, one that resists prediction altogether. Recognizing what kind of problem you are solving is the first step in determining what kind of intelligence (human, machine, or both) should lead.
In practice, human–AI collaboration occurs through four modes: (1) Automated Execution, where AI performs well-defined, repeatable tasks (e.g., processing claims); (2) Machine-Augmented Decision-Making, where AI amplifies human cognition by surfacing patterns or alternatives (e.g., predictive maintenance or drug discovery); (3) Human-in-the-Loop Collaboration, where humans guide, correct, or constrain machine outputs (e.g., algorithmic credit scoring with oversight); and (4) Expert Judgment, where humans lead, using AI sparingly for sense-making in ambiguous or ethical domains (e.g., crisis management or policy deliberation). These modes form a choreography, a dance of cognition, in which leadership determines who leads, who adapts, and when to switch partners. Failures of AI adoption often occur when organizations pick the wrong dance for the song they’re hearing: over-automating what still requires judgment or under-automating what could benefit from computational insight.
When the Hybrid Diagnostic Cube is mapped to these collaboration modes, a practical framework for alignment emerges. Table 1 below captures the relationship between problem types and the optimal mode of collaboration, showing how different combinations of familiarity, complexity, and wickedness determine which human–AI collaboration mode fits best.
Table 1: Framework for Matching Hybrid Problem to Optimal Human-AI Mode

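To make the matching concrete, here is a minimal sketch, in Python, of how a Table 1-style mapping could be encoded. Everything in it is an illustrative assumption: the 0-to-1 ratings, the 0.5 cut-offs, and the match_mode rules are not part of the published framework, which treats diagnosis as a leadership judgment rather than a formula.

# A minimal sketch of Table 1-style matching. All thresholds and ratings
# are illustrative assumptions, not the article's published logic.
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    """The four human-AI collaboration modes described above."""
    AUTOMATED_EXECUTION = "Automated Execution"
    MACHINE_AUGMENTED = "Machine-Augmented Decision-Making"
    HUMAN_IN_THE_LOOP = "Human-in-the-Loop Collaboration"
    EXPERT_JUDGMENT = "Expert Judgment"


@dataclass
class Problem:
    """A problem located on the cube's three dimensions, each rated 0 to 1."""
    familiarity: float  # how well understood the problem is
    complexity: float   # how many moving parts interact
    wickedness: float   # how much values conflict or success criteria blur


def match_mode(p: Problem) -> Mode:
    """Map a cube position to a collaboration mode (0.5 cut-offs assumed)."""
    if p.wickedness >= 0.5:
        # Value-laden problems: humans lead; AI assists with sense-making.
        return Mode.HUMAN_IN_THE_LOOP if p.familiarity >= 0.5 else Mode.EXPERT_JUDGMENT
    if p.familiarity >= 0.5 and p.complexity < 0.5:
        # Familiar, structured, repeatable work suits automation.
        return Mode.AUTOMATED_EXECUTION
    # Familiar-but-complex (or novel-but-tractable) problems benefit from
    # AI amplifying human cognition.
    return Mode.MACHINE_AUGMENTED


# The three examples from the text, with invented ratings:
examples = {
    "Truck routing": Problem(familiarity=0.9, complexity=0.8, wickedness=0.1),
    "ICU bed allocation": Problem(familiarity=0.6, complexity=0.9, wickedness=0.8),
    "Climate-migration assessment": Problem(familiarity=0.2, complexity=0.7, wickedness=0.9),
}
for name, prob in examples.items():
    print(f"{name}: {match_mode(prob).value}")

Run as a script, this prints Machine-Augmented Decision-Making for the routing problem, Human-in-the-Loop Collaboration for ICU allocation, and Expert Judgment for the migration assessment, echoing the intuition that wickedness pulls leadership toward humans.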
The framework helps leaders answer a simple question: Who should be in charge: the human, the machine, or both? But problems don’t stay still. As conditions evolve, problems move across the cube; Figure 2 provides examples of this movement. What begins as novel and complex may become familiar and structured through learning and experience. Effective leaders periodically reassess contextual fit and shift collaboration modes accordingly. For example, when an AI tool for climate-risk assessment matures from experimental to routine use, the organization should move from a Human-in-the-Loop posture to Machine-Augmented Decision-Making. This continual reframing transforms AI integration from a one-time implementation into a living leadership discipline.
Figure 2: Problem Movement and Reframing Hybrid Problems

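The same sketch can express problem movement: re-rate the problem as it matures, and the recommended mode shifts. The ratings below are invented to mirror the climate-risk example, continuing the code above.

# Continuing the sketch: the climate-risk tool re-rated as it matures from
# experimental to routine use (ratings invented for illustration).
climate_risk_pilot = Problem(familiarity=0.6, complexity=0.7, wickedness=0.6)
climate_risk_mature = Problem(familiarity=0.9, complexity=0.7, wickedness=0.3)

print(match_mode(climate_risk_pilot).value)   # Human-in-the-Loop Collaboration
print(match_mode(climate_risk_mature).value)  # Machine-Augmented Decision-Making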
Adapting collaboration modes is not just a technical adjustment; it’s a social transformation. Effective leaders treat reframing as change management. Three practices help. One, surface cognitive resistance early: analysts and engineers defend their existing frames, so encourage debate about what kind of problem is being solved before introducing tools. Two, translate between epistemic communities: scientists, data specialists, and decision-makers speak different knowledge languages, and framing becomes a translation act that builds shared understanding. Three, treat reframing as learning, not loss: when AI is framed as an extension of human expertise rather than its replacement, adoption accelerates and morale improves. In short, change management in AI integration is less about governance documents and more about reframing conversations, a shift from compliance to learning.
Framing is the leadership discipline of the AI era. It determines whether technology amplifies human judgment or distorts it. Before implementing an AI system to solve a problem, the essential question is: What kind of problem are we solving, and how might it evolve? Misframed problems lead to mismatched collaboration: algorithms making value judgments or humans bogged down in pattern recognition. Getting the fit right and then adjusting it as the problem moves turns AI from a technical instrument into a cognitive partner. The leaders who master this reflex will not merely automate intelligence; they will learn to dance with it.