CMR INSIGHTS

 

AI-Driven Does Not Equal Knowledge-Driven Workers

by Hamilton Mann


Image Credit | Google DeepMind

The future of work will not be determined by how well we can delegate tasks to machines, but by how effectively we can integrate AI as a partner that complements human strengths.

The narrative around AI deployment for productivity, especially since the advent of Generative AI, often centers on whether this capability will replace and displace jobs. That framing, however, obscures another phenomenon that may be equally, if not more, pronounced: the way people counterbalance the perceived competition introduced by AI systems presented as co-pilots in their workplaces.

Related CMR Articles

Vegard Kolbjørnsrud, “Designing the Intelligent Organization: Six Principles for Human-AI Collaboration,” California Management Review, 66/2.


One way this counterbalancing shows up is in complaining more loudly about manual work. In the corporate world, "manual work" has always referred to tasks and endeavors considered demeaning for what Peter Drucker called knowledge workers.

As technology advances in capability, there is a growing willingness to delegate tasks that humans dislike or devalue, labeling them "manual" because they are considered tedious.

This increasing disdain for “manual” tasks serves as a red-alert signal for managers and executives to push for automation. This reaction is almost Pavlovian: no manual task is perceived as worthwhile, especially when teams complain about it. The assumption is that manual tasks waste time, reduce EBIT, demotivate teams, and cause inefficiencies. 

Calling for "manual tasks" to be handed over to AI serves as a defense strategy: a powerful mechanism for workers to shield themselves from the fear of being replaced or displaced. Ironically, this mindset often becomes one of the primary causes of AI project failure, leading to substandard performance.

Why? Because not all tasks labeled as “manual” are exclusively manual.

Even though some tasks involve manual actions, they invariably require cognitive processes such as analysis and decision-making, which precede the physical act. These essential mental efforts often go unnoticed, even by the individuals performing the tasks, due to the reductive and stigmatized perception of the term "manual."

Take, for example, a worker tasked with handling requests from website visitors, flowing into a CRM system, and dispatching them to appropriate business entities. On the surface, this might seem like a manual task that could be easily delegated to AI. However, there’s much more at play. 

Beyond the clicks and mechanical actions, this process depends on a deeper understanding of the requests. 

Each request is written differently by different visitors with varying styles and levels of clarity, requiring nuanced interpretation and judgment. 

While some fields might help structure the categorization, unstructured information, especially in comment fields, holds the essence of what determines the correct redirection. Decoding this in-text information demands significant cognitive effort.

Workers must use their judgment to determine the best department to handle a request. This requires analyzing ambiguous inputs, balancing competing priorities, and applying domain-specific knowledge.

Furthermore, there is often more than one possible redirection, and the worker’s experience plays a critical role in determining the most relevant option. This judgment relies on assumptions with the highest likelihood of success, going beyond what is explicitly written in the text. This may require prioritization based on subtle factors.

Lastly, workers often rely on implicit knowledge, such as understanding the company’s organizational structure, historical precedents, or nuances in communication.

Tasks like these are not just “manual” tasks. 

Even though they may seem tedious, they are fundamentally "knowledge tasks." They cannot be easily delegated to AI, because unwritten information and subtle reasoning drive decisions that significantly affect overall process performance.

Even if AI were trained to perform this task, several limitations would arise:

AI excels at repetitive, well-defined tasks, but its effectiveness declines when faced with high variability and ambiguity unless it is trained extensively on similar cases. It would need vast amounts of high-quality, annotated data to learn patterns and make accurate decisions. Producing that data is costly and time-consuming, and it requires recurring, not temporary, human "manual" work to regularly audit and improve the system.

Because AI cannot fully replicate the subtle, situational reasoning humans use, it may require human intervention to handle uncommon or unexpected requests properly, and in some cases even more time to correct mistakes. This can prompt workers to overthink tasks they normally perform without conscious reflection, thanks to practice and experience, leading to a phenomenon known as paralysis by analysis and thereby increasing rather than eliminating human cost.

Misrouting a request can cause inefficiencies or customer dissatisfaction, raising the stakes for AI accuracy. Errors then tend to be framed as a technical responsibility to be fixed, under the umbrella of "the machine does not work well," instead of being acknowledged as a knowledge stake for which people need to be accountable.

AI can assist, assuming the volume of requests justifies its implementation. However, humans would still need to review and finalize AI’s work to validate it, interpret ambiguous cases, and apply the contextual judgment necessary to ensure relevancy and adaptability in complex or nuanced scenarios.
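This review-and-finalize pattern can be sketched as a simple human-in-the-loop gate. The confidence score and threshold below are illustrative assumptions; the point is that the AI's proposal is accepted only when it is confident, and otherwise deferred to a person who applies contextual judgment.

```python
# Hypothetical human-in-the-loop gate: the AI proposes a routing with a
# confidence score; a human finalizes low-confidence cases.
# The threshold value is an illustrative assumption.

CONFIDENCE_THRESHOLD = 0.85

def finalize(ai_department: str, ai_confidence: float, human_review) -> str:
    """Accept the AI proposal only when confidence is high; otherwise
    defer to a human reviewer (any callable taking the AI's suggestion)."""
    if ai_confidence >= CONFIDENCE_THRESHOLD:
        return ai_department
    return human_review(ai_department)

# Usage: a reviewer overrides an uncertain suggestion.
decision = finalize("sales", 0.42, human_review=lambda dept: "support")
```

In this sketch the human remains the final authority on ambiguous cases, which is the accountability the article argues cannot be designed away.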

Mischaracterizing knowledge-based tasks as “manual” can lead to poorly designed AI implementations.

Three guideposts that organizations should keep in mind:

  1. Balance cost-saving goals with knowledge preservation: Executives should recognize that training AI systems often requires human input, audits, and adjustments, and therefore cost. The aim is to avoid the trap of pursuing automation solely for task optimization, as it may create hidden, long-term expenses for the organization that outweigh the visible short-term benefits.

  2. Reframe “manual” tasks as “knowledge-driven” tasks: Managers should take the time to assess the true nature of tasks labeled as “manual” to understand the cognitive effort behind these processes and recognize their value to the organization. Rather than dismissing them outright, they should identify the reasoning, judgment, and expertise involved.

  3. Invest in training and awareness: Workers should be educated about the capabilities and limitations of AI tools. Training should emphasize the importance of critical thinking and decision-making in conjunction with AI assistance. It can help alleviate fears of displacement while equipping teams with the skills to leverage AI effectively in their roles.

Organizations that succeed in this new era will be those that foster a culture of collaboration between humans and machines, choosing, when relevant, among the four co-intelligence modes of what I term Artificial Integrity: Marginal Mode (where only limited AI and human intelligence contributions are required; think of it as "less is more"), AI-First Mode (where AI takes precedence over human intelligence), Human-First Mode (where human intelligence takes precedence over AI), and Fusion Mode (where a synergy between human intelligence and AI is required).

Ultimately, the key to navigating these transitions lies in leadership. Leaders who can look beyond cost-cutting automation and recognize the profound value of human intelligence paired with AI will keep human ingenuity at the heart of a technology-driven future.



Hamilton Mann
Hamilton Mann is Group VP of Digital at Thales and a lecturer at INSEAD and HEC Paris. He is a globally recognized expert in AI for Good and was inducted into the Thinkers50 Radar as one of the Top 30 most prominent rising business thinkers. Mann is the author of "Artificial Integrity" (Wiley).

California Management Review

Berkeley-Haas's Premier Management Journal

Published at Berkeley Haas for more than sixty years, California Management Review seeks to share knowledge that challenges convention and shows a better way of doing business.
