John Rares Almasan, Sastry Durvasula, Sreekar Krishna, and Raghu T. Santanam
In today’s AI-accelerated economy, leaders must overhaul their operating models, not just pilot shiny tools. That starts with defining a future state matched to industry realities and plotting the milestones to reach it—shifting from impulse-driven experiments to a policy-driven agenda. At TIAA, we’ve implemented an AI framework that has driven significant progress in our operational transformation. To reach a future state where AI enhances our mission to provide lifetime income solutions, we have taken a phased approach: starting with AI integration into contact center operations, we moved to Generative AI (GenAI) in asset management research and, most recently, to AI-augmented empathetic customer communications, especially for those experiencing cognitive decline. This article unpacks the framework guiding that journey.
As a leading provider of secure retirements and outcome-focused investment solutions to millions of people and thousands of institutions, TIAA (Teachers Insurance and Annuity Association of America) manages over $1.4 trillion in assets (as of 12/31/2024) across retirement, wealth, and asset management businesses. Our mission—securing lifetime income for millions—now intersects with soaring demands for hyper-personalized advice and mounting regulatory scrutiny. Delivering bespoke guidance at scale while maintaining uncompromising compliance is our central paradox. Anchoring our AI strategy to that mission helps resolve the tension—and offers lessons to any executive who must balance growth, personalization, and risk control in equal measure.
AI can reshape institutions like TIAA, but only if leaders redesign work across three intelligence dimensions: spatial (optimizing hybrid workflows for legacy paper-based clients and digital-first users), cognitive (analyzing actuarial data for TIAA’s flagship Traditional Annuity product), and emotional (building trust during retirement planning consultations). At TIAA, this blend of intelligence is a practical focus. AI-driven self-service in our contact center has cut call volume by nearly 30 percent, freeing agents for richer conversations. In asset management, GenAI “research assistants” reduce portfolio-research time while surfacing deeper insights. For complaint handling, an AI trained on years of correspondence drafts empathetic, personalized replies for specialists to polish—relieving bandwidth constraints and enhancing customer care.
By deconstructing core workflows, we’re forging an operating model that harnesses AI to amplify human ingenuity. Although rooted in TIAA’s experience navigating legacy systems and new intelligence, the playbook is relevant well beyond the financial services industry. The framework gives a blueprint for scaling AI without losing the human touch. By spotlighting the spatial, cognitive, and emotional dimensions of work, it prescribes intentional moves to disrupt the status-quo equilibrium.
The foundation for the skill-level operating model is the recognition that all productive work follows a simple hierarchy—roles, tasks and processes—but each task draws on three intelligence dimensions. A surgeon, for example, blends spatial precision in the operating room, cognitive acuity in diagnosis, and emotional intelligence when coaching patients and mentoring trainees. A barista’s day likewise spans spatial assembly, cognitive recipe recall, and emotional service. AI is already absorbing the routine portions of these tasks: robots handle repetitive coffee prep, while robotic-assisted surgery reshapes surgical teams and outcomes. At mission-driven institutions such as TIAA, roles that secure retirement—financial advisors, compliance experts, actuarial teams—face the same multidimensional demands, making a skill-level operating model essential.
Many repetitive tasks that were once manual are now automated. AI tools pull keywords from SEC filings and prepare first-pass messages, giving advisors more time to tailor annuities and to guide fiduciaries through investment choices. The shift to AI is allowing advisors to focus on high-value, emotionally nuanced work. Algorithms now prescreen incoming documents in the workflow, easing the spatial load by flagging data gaps or eligibility issues. AI-generated summaries of each client’s financial plan lighten the cognitive burden by spotlighting shortfalls common in nonprofit retirement accounts. Sentiment analysis surfaces aspirations—traveling the world, buying a second home—so advisors can design payout schedules that protect long-term income.
A similar change is underway in annuity administration. ML models track saving patterns and suggest dynamic adjustments, such as raising variable annuity exposure for clients who plan to launch a business after retirement. Advisors then weave in personal goals—funding a child’s wedding or relocating to a tax-friendly state. By automating routine paperwork, AI frees professionals to turn generic retirement plans into purpose-driven roadmaps that mirror each client’s ambitions.
Defining workplace intelligence is notoriously tricky because human abilities are multidimensional. We focus on the facets that matter most for professional work and operations. O*NET’s research-based taxonomy lists more than 50 abilities across four groups—cognitive, psychomotor, physical, and sensory.1 To make the implications of AI easier to grasp, we distill this breadth into three intelligence dimensions that show up in every workflow:
Cognitive (C) – thinking, reasoning, and problem solving: It encompasses fluid intelligence (the ability to think abstractly, reason, and solve problems) and crystallized intelligence (the ability to apply knowledge from experience and memory).
Spatial (S) – interacting with physical or digital space: Although spatial ability is a key requirement for many organizational tasks, it is often overlooked. High spatial ability is central to roles from pilots and surgeons to plumbers and game designers.
Emotional and Social (E) – perceiving and managing emotions in oneself and others.2 Social problem solving requires self-awareness, social awareness, empathy, relationship management, and social adaptability.
Figure 1 maps these three dimensions onto the tasks that underpin an organization’s operating model, highlighting how spatial, cognitive, and emotional intelligence work together, not in silos.
Examples from TIAA show how AI supports each dimension and is shifting work across the operating model while keeping human strengths at the center.
Spatial. AI tools now prescreen handwritten beneficiary forms and paper policy agreements, converting them into searchable files and autofilling IRS compliance forms. Streamlining these document flows lets advisors reclaim time for human judgment—such as resolving estate-planning discrepancies.
Cognitive. ML models digest market data, longevity tables, and tax codes to price annuities and forecast risk (a simplified illustration of this kind of pricing calculation appears after these examples). A coastal insurer, for instance, can refine products for clients exposed to climate-related volatility, while advisors translate the insights into disaster-resilient portfolios. Teams now have more time to focus on designing hybrid annuities for gig-economy workers who lack traditional plans.
Emotional and Social. Chatbots now handle many routine questions (“How do I update my beneficiary?”), leaving advisors to manage high-stakes conversations during market swings. Sentiment analysis parses emails and calls, flagging moments such as a client delaying retirement to fund graduate school. Advisors can then propose liquidity-friendly payout structures that honor both immediate needs and long-term security.
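To make the cognitive example above concrete, the sketch below shows the kind of calculation such models automate: the actuarial present value of a life annuity, in which each year’s payment is weighted by the probability the annuitant is still alive and then discounted. This is a minimal Python illustration; the payment, survival probabilities, and discount rate are invented for the example and do not reflect TIAA figures or models.

```python
def annuity_present_value(annual_payment: float,
                          survival_probs: list[float],
                          discount_rate: float) -> float:
    """Actuarial present value of a life annuity: each year's payment is
    weighted by the probability of survival to that year, then discounted."""
    return sum(
        annual_payment * p_alive / (1 + discount_rate) ** (t + 1)
        for t, p_alive in enumerate(survival_probs)
    )

# Illustrative inputs only: a 20-year horizon with slowly declining survival odds.
survival = [0.99 ** (t + 1) for t in range(20)]
print(round(annuity_present_value(30_000, survival, 0.04), 2))
```

Production pricing models layer full mortality tables, expenses, and market assumptions on top of this basic structure; the point here is simply to show where the cognitive load sits that ML now absorbs.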
Our experience shows that task demands along the three intelligence dimensions are fluid. As an operating model matures, work often moves from Advanced (human-centric) to Defined (human + AI) and eventually to Simple (fully automated). AI already manages Simple chores such as document review and validation. Defined work—say, portfolio rebalancing—mixes algorithmic precision with advisor oversight. Advanced tasks remain human led: orchestrating a multi-generational wealth transition that funds grandchildren’s education, launching a family sustainability initiative, and preserving a charitable legacy still relies on empathy, ethical judgment, and creative problem solving no algorithm can match.
Almost every task taps all three intelligence dimensions, but at different intensities. The minimum levels in each dimension serve as the baseline in Figure 1. Checking an account balance would be “purely cognitive” because its spatial and emotional requirements are negligible.
Figure 1: The three intelligence dimensions in organizational tasks

Most tasks begin in the Advanced zone—high human effort in all three intelligence dimensions. As the operating model matures they can shift to Defined (shared with AI) and ideally to Simple (largely automated). Simple tasks scale best. TIAA’s contact-center self-service, which cut call volume by nearly 30 percent, shows how automation lowers task intensity. Yet not every activity can—or should—be automated; conversations that rely on deep empathy still belong to people.
An operating model crowded with Advanced tasks will always struggle to scale. Our project teams therefore push work toward the “origin” of Figure 1, where cognitive, spatial, and emotional demands are lower and automation is practical. Tasks that remain far from that origin—complex surgeries, for instance—signal permanent human involvement. The goal is not blanket automation but smart re-engineering: reduce intensity where possible so employees can apply uniquely human insight to the work that matters most.
Let’s take a typical hiring process as an example to illustrate the intelligence-equilibrium concept (Figure 2). The figure identifies the dimensionality of tasks within the process (along the spatial, cognitive, and emotional dimensions) and their intensity (Simple, Defined, or Advanced).
The Scaling and Technological Forces shaping the Equilibrium: Achieving process scale requires that firms identify execution hurdles, fill gaps in operational knowledge, and optimize around outcomes and costs. Screening résumés once required a recruiter’s judgment (Advanced). Introducing a structured scorecard moved it to Defined. Adding an AI résumé-ranking tool makes the activity Simple, freeing recruiters to coach hiring managers on hard-to-judge culture fit. Each downward shift lowers cost per hire and shortens time-to-fill. The longer a task stays in the Advanced zone, the harder it is for an organization to scale that process. As a task becomes repeatable through systematized processes, it moves to the Defined state, where there is flexibility and predictability in who performs it and how it is performed while maintaining operational quality. Finally, as tasks become routine and automation is enabled, they arrive in the Simple zone. This downward pressure is an important driver of scale, because scale is achieved when more tasks can be automated. We call this the Scaling force.
At the opposite end, technology is constantly pushing the boundary of system capabilities, reducing the human involvement that operations require. Video-interview analytics, for example, now flag skill cues recruiters once spotted by hand, nudging that step into the Simple zone. Background checks that tap real-time data sources do the same. This upward push from the Technological force lets firms redeploy human effort to the few hiring moments—negotiating offers, selling mission—that still demand high emotional and cognitive skill.
Effective operating-model design balances these opposing pressures, lowering task intensity where feasible while preserving the human touch candidates value most. We consider the equilibrium between the scaling and technological forces to be the point of stability an organization attains in harnessing AI and human ability. This equilibrium is never constant; it shifts dynamically as external and internal factors disrupt it.
Figure 2: Sample hiring process illustrating the intelligence equilibrium

Leaders must judge carefully which tasks are truly ready for AI. Over-promising builds a mountain of misplaced trust: when an algorithm assigned to an Advanced step stumbles—through bias, hallucination, or opacity—confidence collapses and legal or ethical risk rises. At the opposite extreme lies the valley of disappointment: automation that fails to deliver even on clearly simple, repetitive work.
AI also multiplies hand-offs. Classic process design sought to minimize human-to-human transfers; now teams must add human-AI and AI-AI coordination. Mismanaged links can prove costly. Hiring workflows will soon feature multiple algorithmic hand-offs—video-interview analytics feeding offer-generation agents, for example. Organizations must audit every transition, ensure fallback controls, and train both people and models to keep the process safe, productive, and trusted.
Clear recommendations emerge from our experience with the new operating model we propose in this article. We believe these recommendations are broadly applicable across a variety of industry and operations contexts.
Recommendation 1: Start the disruption in the C-suite and boardroom. Intelligence-equilibrium shifts succeed only when company directors and top executives act first. To continue creating value with AI, leaders will need to update decision rights, operating models, and accountability structures—and tackle culture as seriously as strategy. Begin by naming a target operating model that fits your mission. At TIAA we tied our vision—delivering lifetime income—to concrete AI initiatives, such as agentic fraud-prevention tools that protect older adults with cognitive decline. Clear, relatable framing lowers employee resistance. The result is work that is both more productive and more rewarding.
Recommendation 2: Focus AI on the “Defined” zone. Opportunities for AI value creation usually sit in the Defined band of a process—steps that are repeatable, documented, and technology enabled. Leaders facing any business scenario—start-up, turnaround, rapid growth, realignment, or steady-state—should tag tasks to the appropriate AI enablement level.
At TIAA we began with Defined work such as inbound-call handling and portfolio research that already had clear coordination points, making it easy to embed generative agents and build guardrails.
Recommendation 3: Build a task-intensity inventory and manage it actively. Teams should know, not guess, where each task sits on the Cognitive-Spatial-Emotional map and how intense those skills are. Start by cataloging every high-volume workflow and tagging each task with its primary skill dimension(s) (C, S, E) and current intensity (Advanced, Defined, Simple). This inventory lets teams target the right improvement path and exposes the coordination points where AI agents can slot in.
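As one way to operationalize this recommendation, the sketch below shows what a minimal task-intensity inventory could look like in code. The task names, tags, and the simple composite score are illustrative assumptions, not TIAA’s actual catalog or scoring method.

```python
from dataclasses import dataclass
from enum import IntEnum

class Intensity(IntEnum):
    """The three intensity levels of the framework: Simple < Defined < Advanced."""
    SIMPLE = 1
    DEFINED = 2
    ADVANCED = 3

@dataclass
class Task:
    """One task in the inventory, tagged on the three intelligence dimensions."""
    name: str
    cognitive: Intensity
    spatial: Intensity
    emotional: Intensity

    def overall_intensity(self) -> int:
        """Crude composite score: a proxy for distance from the 'origin' of Figure 1.
        Lower scores suggest a more practical automation candidate."""
        return int(self.cognitive + self.spatial + self.emotional)

# Illustrative entries only; not TIAA's actual workflow catalog.
inventory = [
    Task("Document intake and validation", Intensity.SIMPLE, Intensity.SIMPLE, Intensity.SIMPLE),
    Task("Portfolio rebalancing", Intensity.DEFINED, Intensity.SIMPLE, Intensity.SIMPLE),
    Task("Multi-generational wealth-transfer planning", Intensity.ADVANCED, Intensity.SIMPLE, Intensity.ADVANCED),
]

# Tasks closest to the origin are the most practical automation candidates.
for task in sorted(inventory, key=Task.overall_intensity):
    print(f"{task.overall_intensity():>2}  {task.name}")
```

Sorting by the composite score surfaces the tasks that sit closest to the origin of Figure 1 and are the strongest candidates for automation, while high scorers flag work that should remain human led.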
We began with high-volume processes. In marketing, for example, we split social-media production into human-led message refinement and AI-driven drafting, scheduling, and optimization. The result: faster content cycles and higher engagement. Similar gains emerged wherever teams could ground the debate over which tasks deserved human ingenuity and which should be handed to machines in a real inventory rather than abstract assumptions.
Recommendation 4: Do not overlook spatial design in AI initiatives. Leaders often ignore how physical or digital layout influences automation. Yet small spatial changes can unlock big gains. In the US, the postal service uses handwriting recognition to speed up mail sorting, but misreads still slow delivery. Japan and the Netherlands have attempted to solve that bottleneck upstream: every domestic address begins with a rich postal code that pinpoints city, block, and even apartment, letting algorithms sort almost error-free. Our recommendation is to reflect first on the constraints and implications of the spatial aspects of your operations before introducing any AI capabilities for automation or augmentation. Amazon grew its mobile-robot fleet from roughly 10,000 in 2013 to 750,000 in 2023 by marrying AI vision models with purpose-built layouts.3 Robotic arms, guided by chat-style interfaces, now pick faster and with fewer errors than traditional methods—proof that spatial re-engineering plus AI yields step-change performance.
Recommendation 5: Redefine roles and routines so employees thrive. AI projects must treat a better employee experience as a central goal. When employees spend their cognitive energy on work they would rather not be doing, they cannot focus on the tasks in the process that matter most. AI should make work richer, not just faster. In our teams, GenAI already fills forms and drafts reports pulled from scattered documents, removing drudgery and freeing people for higher-impact tasks. Leaders can amplify that lift by reshaping roles around three simple rules:
Recommendation 6: Leverage AI to tame complex customer workflows. A century of growth has layered TIAA’s customer workflows with complexity, not least because customers span very different age groups and generations: tech-savvy retirees and clients experiencing cognitive decline require very different fraud-prevention paths today. AI agents now flag early fraud signals—tech-support scams, government-impersonation calls, romance swindles—while also detecting signs of cognitive decline and prompting human agents to involve trusted contacts. This dual support both thwarts bad actors and enables empathetic service.
The lesson travels beyond the financial industry. Mapping each step’s cognitive, spatial, and emotional load reveals why the same task varies in difficulty across products, markets, and customer segments. AI’s strength is synthesizing that information complexity, letting organizations tailor safeguards and guidance to each customer profile without overwhelming staff bandwidth.
Recommendation 7: Stress test every AI chain. When multiple tasks in a process use chained AI models, organizations unwittingly introduce new risks into the process. These risks should be acknowledged and mitigation strategies implemented. Chained AI models are inherently non-linear and sensitive to random variations, so strong checkpoints along the chain are essential. Moreover, process teams should adopt a cybersecurity mindset: generate use cases that could “break” the chain (red team), actively identify and implement human and technical defense mechanisms that flag malfunctions within the AI chain (blue team), and ensure the two teams communicate and identify improvement opportunities together (purple team). For example, GenAI systems can analyze earnings-call transcripts, assessing management sentiment and extracting key insights through objective analysis of market trends. However, analysts need to stress test the output by infusing their insights on market dynamics, sociopolitical factors, and low-probability events that can disrupt predictions in either direction. Overall, analysts need to take on more “red team” responsibilities as they begin to embrace GenAI in their analyses.
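As a sketch of what strong checkpoints in an AI chain might look like, the code below wraps each stage of a chained pipeline with a validation check and routes failures to human review instead of passing suspect output downstream. The stage functions are placeholders, not real model APIs or TIAA systems; a production version would add logging, red-team test cases, and richer fallback paths.

```python
from typing import Any, Callable

# Each stage pairs a model call with a checkpoint that validates its output
# before the next AI hand-off.
Stage = tuple[str, Callable[[Any], Any], Callable[[Any], bool]]

def run_chain(item: Any, stages: list[Stage]) -> Any:
    """Run chained AI stages, stopping at the first failed checkpoint."""
    for name, model, checkpoint in stages:
        item = model(item)
        if not checkpoint(item):
            # Blue-team control: surface the failure rather than propagating it.
            raise RuntimeError(f"Checkpoint failed after stage '{name}'; route to human review")
    return item

# Placeholder stand-ins for real models: transcript summarization followed by
# sentiment scoring, with a simple sanity check at each hand-off.
def summarize(transcript: str) -> str:
    return transcript[:200]          # stand-in for a GenAI summarization call

def score_sentiment(summary: str) -> float:
    return 0.4                       # stand-in for a second model's sentiment score

stages: list[Stage] = [
    ("summarize", summarize, lambda text: len(text) > 0),
    ("sentiment", score_sentiment, lambda score: 0.0 <= score <= 1.0),
]

print(run_chain("Q3 earnings call transcript ...", stages))
```

The checkpoints here are only the technical half of the story; the red-team and purple-team practices described above determine which failure cases those checkpoints actually test for.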
GenAI offers the greatest step-change since the dawn of computing. Realizing that promise, however, requires more than proof-of-concept tinkering; it demands a rebuilt operating model. Leaders must map work at the skill—not role—level and deliberately rebalance every human–machine hand-off. Our Intelligence-Equilibrium framework supplies the playbook: diagnose workflows for scale, isolate the skills that truly matter, and recompose teams so people and AI agents amplify one another.
Boardroom slogans about “AI transformation” mask the micro-decisions that lead to value creation—where to embed a model, how to retrain staff, which guardrails to add. Executives who adopt a scientific, experiment-driven mindset will capture value faster, cut risk, and, importantly, make work more engaging. The AI revolution is under way; its winners will be those who reimagine their operating model with skill-level precision and give employees clear ownership of the work that matters.