Anindo Bhattacharjee, Graham Ward, and Raul Villamarin Rodriguez
Recently, we reconnected with old friends over coffee after more than 15 years apart. All of them now hold senior management positions in large corporations: one heads the Credit Advisory unit of a multinational bank, another works in IT consulting. Naturally, our conversation drifted toward the pressing challenges of contemporary organizational life. Again and again, the subject of artificial intelligence (AI) surfaced.
They spoke of shrinking workforces, shifting expectations, impatient clients, and, above all, a growing fear of redundancy. The spouse of one friend admitted candidly: “I just hope my partner can keep his job until the age of 50.” These were not idle anxieties. For many executives, AI is no longer a distant technological development; it is a pervasive presence shaping their daily work, their relationships with clients, and their career prospects.
This anxiety is echoed by leading thinkers. Geoffrey Hinton, the 2024 Nobel Laureate in Physics often described as the “Godfather of AI,” has warned that AI may already be “smarter than us.” He predicts a future of large-scale job losses, profound social dislocation, and even “digital immortality.”1 While such predictions may seem speculative, they underscore a broader truth: AI is not simply a technical tool but a force altering human cognition, decision-making, and attention itself. From the fairness dilemmas of hiring and admissions to the subtle ways algorithms shape our perceptions of truth and bias, the stakes are both practical and deeply ethical.2 These developments also carry psychological dimensions that leaders must recognize and address, and to which we will return.
However, it would be wrong to reduce AI to a story of impending crisis. The technology is clearly enabling remarkable advances. In healthcare, it accelerates drug discovery and enhances patient safety. In manufacturing, it delivers precision, reliability, and efficiency. In customer service, it allows more tailored experiences, reducing information overload while sharpening relevance. AI is a double-edged sword: the same tools that generate insecurity can also extend human capability and create value. How, then, should leaders respond?
The challenge for leaders is not whether to use AI, for that ship has sailed, but how to use it responsibly. To that end, we propose a “4C framework” for thinking about the responsible deployment of AI within organizations: clarity, communication, crisis readiness, and collaboration. Each element is not only technical but also psychological, addressing the human responses that will shape adoption, trust, and effectiveness.
When Satya Nadella repositioned Microsoft’s purpose around “empowering every person and organization on the planet to achieve more,” AI projects, such as Copilot in Office, were explicitly tied to that mission.3 Employees and customers saw these as productivity enhancers rather than existential threats, which boosted adoption. AI should thus be introduced in ways that enhance clarity of purpose, rather than sowing confusion. Leaders must ask: Does this deployment align stakeholders around our shared purpose beyond narrow profit motives?
From a psychological perspective, clarity addresses the human need for meaning and coherence in the face of change. As Harvard’s Ron Heifetz notes, ambiguity often triggers anxiety, defensiveness, or resistance. Conversely, when AI initiatives are framed as tools that support organizational values and collective goals, individuals are more likely to feel secure, engaged, and willing to experiment. By contrast, consider Amazon’s failed AI recruitment tool. Not only did it lack a clear purpose beyond efficiency; it also, embarrassingly, replicated historical gender bias and had to be scrapped in 2018.4 Instead of clarifying strategy, it created reputational damage and reinforced mistrust.
The myths surrounding AI, that it exists solely to drive efficiency and cut jobs, or that its processes are opaque and unaccountable, fuel mistrust. Proactive communication is therefore essential. Transparency about how data is collected, analyzed, and applied not only protects reputations but also safeguards the psychological contract between organizations and their people. At Unilever, employees are actively involved in AI-driven sustainability projects, which are framed as tools to reduce waste and energy consumption, a core part of the company’s mission.5 This transparent communication and involvement has reinforced employee pride and customer trust.
At a deeper level, however, communication is about trust and psychological safety. If employees feel excluded from conversations about AI, they may experience what psychologists call a loss of agency, a state that fosters cynicism and disengagement. The backlash against Clearview AI exemplifies what happens when communication fails: its facial recognition technology was used quietly, without public consent, and the result was lawsuits, bans in several jurisdictions, and a profound loss of trust.6 Open channels of dialogue reduce fear, enable people to voice concerns without penalty, and create a sense of shared authorship in technological change. AI then becomes less a threat imposed from above and more a shared resource co-designed with those it affects.
Crisis is the third test. Many will recall that during the early months of COVID-19, firms such as Pfizer used AI simulations to accelerate vaccine development.7 Their leadership framed AI as a decision-support tool that complemented human expertise, creating confidence under extreme uncertainty. AI has since proved its value in other crisis contexts, such as predictive analytics in disaster response. But the psychological dimension here is equally critical. Crises evoke heightened stress, a narrowing of attention, and often a reliance on habitual rather than creative responses.
AI can act as a stabilizing force, widening the decision-making horizon under pressure. Yet its effectiveness depends on whether humans trust its recommendations. Over-reliance may breed complacency, while mistrust can paralyze decision-making. Boeing’s reliance on automated flight control systems without adequately preparing or informing pilots illustrates the danger of misplaced trust in automation. The tragic crashes of the 737 MAX revealed how poor integration of human and machine decision-making in crises can lead to catastrophic outcomes.8
Leaders must therefore cultivate what Scharowski et al., in a recent paper, describe as “calibrated trust”: neither blind faith in the machine nor reflexive skepticism, but a balanced confidence grounded in transparency and accountability.9 This trust must be modeled by leaders who demonstrate how human judgment and AI complement rather than displace each other.
Finally, responsible AI requires reimagining collaboration. Too often, technology is seen as a replacement for human creativity. In reality, AI opens up new possibilities for co-creation—from generative AI tools supporting design and innovation to hybrid systems where human intuition and machine intelligence work in tandem. At Porsche, design teams use generative AI to develop new vehicle concepts, but always in tandem with human designers who make the final calls. Employees describe the system as a “creative partner,” not a rival.10
The psychology of collaboration with AI is subtle. For some, AI can be experienced as an empowering partner that expands imagination. For others, it provokes status anxiety, raising fears of irrelevance or obsolescence. IBM’s 2015 Watson Health initiative promised to replace significant elements of clinical diagnosis. Physicians resisted, patients distrusted it, and ultimately the venture was broken up and sold off at a loss.11 Framing AI as a substitute rather than a collaborator undermined the project.
Leaders must therefore cultivate what organizational psychologists describe as collective efficacy: the belief that a group can achieve more together than apart. By framing AI not as a competitor but as a collaborator, organizations can create a culture of curiosity, resilience, and shared achievement.
AI is not just a technical revolution; it is a profound psychological and cultural shift. Organizations that treat it solely as an efficiency tool risk alienating their people and eroding trust. Those that engage with its human dimensions, by fostering clarity, communication, crisis readiness, and collaboration, will not only harness its power but also cultivate the resilience and adaptability that this uncertain future demands.