Shefali Patil, Tinglong Dai, and Christopher G. Myers
Since 9/11, the U.S. government has spent more than $7 trillion on private military contractors (PMCs), outsourcing core military functions at an unprecedented scale.1 At the height of the Afghanistan war, these contractors outnumbered American troops by a staggering three to one.2 Initially embraced as a pragmatic move to boost efficiency, control costs, and offset a shrinking military, this outsourcing soon spiraled into a costly and reputationally damaging quagmire—marked by legal battles, financial waste, and scandals that tarnished America’s global image.2,3
High-profile disasters—including the Nisour Square massacre, abuses involving human trafficking, and the notorious Abu Ghraib torture scandal—exposed the profound dangers of delegating critical, sensitive roles to external entities. What started as a short-term solution devolved into a structural dependency.2 As the Pentagon became increasingly reliant on PMCs, oversight diminished, accountability blurred, and extraction became nearly impossible.3
Today, business leaders risk falling into a similar trap—not through armies, but through artificial intelligence (AI) vendors. From enterprise giants like Microsoft and Google to fast-growing firms like Anthropic and Cohere, AI vendors now offer far more than simple software. They are reshaping how organizations make decisions, allocate resources, and manage risk. When AI systems begin to drive essential business processes—from hiring and supply chain forecasting to performance management and clinical decision-making—companies are not just outsourcing routine tasks. They are outsourcing judgment, oversight, and accountability.
The parallels to PMCs are striking:
Together, these dynamics create a new kind of organizational entrapment—where AI suppliers become “too embedded to regulate effectively,” even as the risks mount. But while AI developers have clear incentives to expand their influence, the core problem lies deeper, within the organizations themselves.
Organizations unknowingly accelerate this entrenchment, not out of malice or negligence, but through subtle, recurring psychological patterns that shape how leaders perceive and interact with their suppliers. These psychological forces, often operating unnoticed yet powerfully, can pull companies into dependencies they struggle to recognize, let alone reverse.
We call this progression the “Supplier Entrenchment Spiral”—a three-phase psychological cascade through which organizations gradually lose control over critical decisions and embed vendors deeper into their operations. What begins as a practical reliance on external expertise morphs into an enduring structural dependency, often without leaders recognizing the shift. The spiral explains not just why supplier entrenchment happens, but why it’s so hard to reverse once it takes root.
This “halo effect” mirrors how PMCs were once seen as “neutral” policy executors due to their lack of national allegiance.2 Just as military leaders conferred exaggerated competence on PMCs, today’s executives often assume that AI vendors are not only technically capable, but also ethically aligned with organizational values.
This dynamic echoes how the Pentagon distanced itself from PMC misconduct by labeling contractors as “independent actors.” Today’s business leaders can similarly deflect accountability—even when AI systems are operating on parameters their own teams may have helped configure.
Worryingly, the less people understand a technology, the more likely they are to trust and adopt it,13 deepening a cycle of blind reliance and deflected responsibility.
Over time, once-unthinkable practices become standard. PMCs evolved from being controversial stopgaps to permanent fixtures of military strategy—an embodiment of privatized defense.1,2,3 In much the same way, delegating high-stakes decisions to AI—whether for hiring, medical triage, or loan approvals—can quickly become business as usual.
Unchecked, this normalization fosters a culture of passive reliance. AI vendors gain increasing autonomy, and internal decision-makers stop asking hard questions—not necessarily out of complacency, but because they no longer see oversight as their role.
Once suppliers gain enough control, they rarely give it up. PMCs didn’t just support military operations—they reshaped them. AI vendors are now doing the same to corporate decision-making. They refine their models using proprietary data (and the black-box nature of these models makes it nearly impossible to hold vendors accountable for training or retraining on restricted data), shape industry standards, and make themselves increasingly difficult to replace.
This is no longer just a question of outsourcing decisions. It’s about vendors embedding themselves into the fabric of organizational decision-making. Like any entrenched system, they develop a kind of survival instinct—reinforcing their presence, replicating their influence, and making it increasingly unthinkable to operate without them.
To avoid repeating the PMC playbook, business leaders must act early—by identifying the psychological mechanisms behind entrenchment, confronting misplaced trust, and rebuilding internal capacity for oversight and accountability.
The Supplier Entrenchment Spiral does not unfold all at once—it creeps in over time, often going unnoticed until reliance on external vendors becomes deeply embedded in how the organization functions. Here’s a diagnostic tool that leaders can use to identify early warning signs at each phase of the spiral. The more signs that apply, the more likely it is that your organization is drifting toward irreversible reliance on AI vendors.

This tool isn’t just about spotting risks—it’s about adopting a particular mindset. Entrenchment often stems less from technical contracts and more from powerful, often invisible, psychological forces. Leaders who identify these patterns early can take proactive steps to reassert control before they find themselves locked into external systems they can no longer audit, override, or replace.
The Pentagon did not fully interrogate its growing reliance on PMCs until it was too late. Business leaders have an opportunity to do better—by examining how AI tools are reshaping not just workflows, but the cognitive and cultural fabric of their organizations.
A real-world case from healthcare shows how easily the spiral can take hold—and how quickly critical oversight can fade.
In recent years, hundreds of U.S. hospitals have adopted the Epic Sepsis Model (ESM), a proprietary AI tool developed by Epic Systems to help clinicians detect sepsis—a fast-moving, life-threatening condition where early intervention is critical.15 The tool was marketed as a predictive alert system, integrated into electronic health records (EHRs) and designed to support—not replace—clinical judgment. But as ESM became embedded in daily workflows, its influence evolved. In practice, the system began shaping clinical decisions more directly than intended.
Clinicians, already burdened by high patient volumes, faced mounting alert fatigue—a well-documented phenomenon in healthcare where constant notifications dull responsiveness to genuine risk. One study found that ESM triggered alerts for 18% of hospitalized patients but failed to identify 67% of sepsis cases.15 During the COVID-19 pandemic, sepsis alerts increased by 43% across 24 hospitals.16 This surge signaled that the model was casting a wider net, likely over-flagging patients and further eroding clinicians’ ability to separate true alarms from false ones.
With more frequent but less informative alerts, overwhelmed providers found it even harder to discern meaningful signals from background noise; many simply assumed the AI was correct. Few understood how the algorithm worked or what data it relied on, but its technical opacity and seamless integration into the EHR gave it an aura of authority. In hindsight, this trust appears to have been misplaced. Independent research found that the tool’s real-world discrimination (an area under the curve of 0.63) was significantly lower than what Epic had reported in its internal documentation (0.76-0.83).16
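A rough back-of-the-envelope calculation shows how these figures compound. The sketch below is illustrative only: it assumes a sepsis prevalence of about 7% among hospitalized patients (our assumption for illustration, not a figure reported above) and combines it with the reported 18% alert rate and 67% miss rate to estimate how often an alert actually corresponded to sepsis.

```python
# Illustrative sketch only; the 7% prevalence is an assumed figure, not a reported one.
prevalence = 0.07        # assumed share of hospitalized patients who develop sepsis
alert_rate = 0.18        # share of patients the model flagged (reported above)
sensitivity = 1 - 0.67   # share of true sepsis cases the model caught (reported above)

true_alerts = prevalence * sensitivity      # flagged patients who truly had sepsis
false_alerts = alert_rate - true_alerts     # flagged patients who did not
ppv = true_alerts / alert_rate              # chance that any given alert is a real case

print(f"True alerts per 100 patients:  {true_alerts * 100:.1f}")
print(f"False alerts per 100 patients: {false_alerts * 100:.1f}")
print(f"Positive predictive value:     {ppv:.0%}")
```

Under these assumptions, only about one alert in eight would correspond to a real sepsis case, which helps explain why clinicians learned to discount the system’s warnings.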
Over time, that trust hardened into accountability offloading. When the model failed to flag deteriorating cases, many hospitals blamed the system, not their growing reliance on it. Such challenges of assigning responsibility when AI systems fail, particularly in areas like sepsis treatment, are well documented.17 The vendor’s role in training the model essentially blurs accountability—leaving hospitals in a precarious position, deeply reliant on a system they no longer fully understand, especially when lives are at stake.
In some settings, the ESM became so normalized that staff restructured workflows around its outputs—without formally redefining roles or oversight protocols. For example, some studies have found that Epic’s warning systems are associated with faster antibiotic administration,18 despite research showing the system’s high false alarm rate.15 Over time, the belief that “the AI will catch it” became culturally embedded.
What began as a well-intentioned augmentation quietly became a hidden liability. The AI system was never designed to be the ultimate decision-maker, but in practice, it became one—through a gradual process of superhumanization, moral disengagement, and normalization. And, when it failed, hospitals found themselves in a familiar but dangerous position: reliant on a vendor they could no longer question—and unable to assign responsibility when outcomes went wrong.
This dynamic isn’t limited to sepsis. Similar patterns are emerging across healthcare. In radiology, for example, AI tools designed to assist image interpretation are prompting concerning signs of overreliance. One study found that when AI-generated diagnoses were incorrect, physicians’ accuracy dropped dramatically—from 85.3-92.8% to 23.6-26.1%.19 Similar warning signs are emerging with regard to mental health risk scoring: these algorithms are being deployed despite limited transparency and severe risks of bias.20
The Pentagon learned too late that military contracting, initially seen as efficient and pragmatic, had quietly become indispensable—and ultimately destructive. Its legacy: costly legal battles, damaged credibility, and strategic paralysis. Today’s business leaders face a similar tipping point in the making. They must act urgently to ensure their reliance on AI vendors doesn’t become another cautionary tale.
Our diagnostic framework offers an early-warning system. Reversing the spiral will ultimately require leaders to re-engineer not just their systems, but their own assumptions—about control, accountability, and the role of human judgment in an increasingly automated world. The time to reclaim oversight is now—before it’s permanently outsourced.
Generative AI tools were used to refine the manuscript for clarity, grammar, and readability. Additionally, these tools were utilized to assist with citation formatting.