CMR INSIGHTS

 

How to Build an AI-Prepared Workforce

by Brian R. Spisak


Image Credit | Marina Molgulska

Proactive steps are essential to prepare workers for AI's evolution from today's capabilities to its superhuman future.

The evolution of AI is typically seen as a binary phenomenon, oscillating between amazement at its current capabilities and speculation about its superhuman future. In other words, few are considering the steps between emerging AI and superhuman AI. Even the “godfathers of AI” tend to get caught up in black-and-white debates about progress. Unfortunately, this binary perspective hinders preparedness: if the only envisioned next step is the distant superhuman level, there seems little cause for immediate concern or action. The reality, however, is that we’re on the cusp of AI evolving to a level that will have serious implications for everyone from frontline employees to C-suite leaders.

Related CMR Articles

Kolbjørnsrud, V. (2023). Designing the Intelligent Organization: Six Principles for Human-AI Collaboration. California Management Review, 66(2).


Scientists are already creating virtual software companies in which AI agents assume roles ranging from CEO and CTO to programmers and testers. And a rapid surge in the “generality” of AI performance, an area where companies such as Google DeepMind are making significant strides, will soon turn these early examples of replacing managers and knowledge workers with intelligent agents into mainstream business practice.

Remember, what seemed like science fiction just over a year ago—such as engaging in lifelike conversations with AI—is now commonplace. It’s therefore important not to underestimate the rapid advancement of this technology and its impact on everyone, from junior staff to those at the top. 

Imagine a situation where AI outperforms just half of all skilled workers before the labor force is able to sufficiently reskill and upskill. The resulting spike in unemployment while humanity adapts could cause significant economic and social upheaval.

To maintain resilience in this fast-paced, dynamic environment, proactively evolving with AI is a must. Leaders and their teams need to rapidly increase their AI literacy, which will empower them to navigate this changing landscape effectively.

The Evolution of AI 

The first step in increasing literacy, and consequently resilience, is understanding the broad stages of AI evolution. IBM and others identify four phases: Reactive Machines, Limited Memory (where we are now), Theory of Mind, and Self-Aware. To put this in the context of management, here’s a hypothetical example of how each type of AI would address the emotional state of “Emily,” who is displaying clear signs of burnout (a brief code sketch of the taxonomy follows below):

  1. Reactive Machines (Type I): A Type I AI follows predefined rules and recommends standard information about burnout and interventions. It lacks the ability to learn from or adapt to Emily’s specific emotional state.
  2. Limited Memory (Type II): A Type II AI relies on historical data to generate personalized interventions. While it can simulate empathy by recognizing patterns in data and adjusting responses, it lacks a deep understanding of Emily’s emotional state.
  3. Theory of Mind (Type III): In contrast, a hypothetical Type III AI, equipped with a Theory of Mind, goes beyond recognizing patterns. It actively understands and responds to Emily’s emotional state in a compassionate, human-like way, demonstrating a more nuanced comprehension of her feelings and intentions.
  4. Self-Aware (Type IV): A Type IV AI, which is purely speculative and involves self-awareness, could potentially understand its own limitations and autonomously evolve to improve its interactions. It would actively seek feedback from Emily and proactively learn new interventions to manage her burnout and improve her emotional state.

Although this summary simplifies the complexities of AI development – for example, the lines between Types II and III will blur, and the exploration of Type IV remains highly speculative – it highlights that AI’s evolution is not a binary shift from a helpful tool to a superintelligence outperforming everyone. Rather, it’s a set of near-term steps between these two endpoints of development, each of which will alter the nature of work in distinct ways.
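To make these distinctions concrete, the sketch below encodes the four types as a simple data structure and maps each to the kind of burnout response described above. It is a minimal, purely illustrative Python example: the type names follow the IBM taxonomy cited earlier, while the recommend_intervention function and its one-line summaries are hypothetical stand-ins rather than a real system.

```python
from enum import Enum


class AIType(Enum):
    """The four broad stages of AI evolution discussed above."""
    REACTIVE_MACHINES = 1  # Type I: rule-based, no learning
    LIMITED_MEMORY = 2     # Type II: learns from historical data (today's systems)
    THEORY_OF_MIND = 3     # Type III: hypothetical; models emotions and intentions
    SELF_AWARE = 4         # Type IV: purely speculative; aware of its own limitations


def recommend_intervention(ai_type: AIType, employee: str = "Emily") -> str:
    """Summarize the burnout response each stage of AI could offer (illustrative only)."""
    if ai_type is AIType.REACTIVE_MACHINES:
        return f"Send {employee} standard, predefined burnout resources."
    if ai_type is AIType.LIMITED_MEMORY:
        return f"Personalize suggestions for {employee} based on historical patterns."
    if ai_type is AIType.THEORY_OF_MIND:
        return f"Read and respond to {employee}'s current emotional state in a human-like way."
    return f"Seek feedback from {employee} and autonomously learn new interventions."


if __name__ == "__main__":
    for stage in AIType:
        print(f"{stage.name}: {recommend_intervention(stage)}")
```

Running the sketch prints one line per stage, underscoring that each step up the taxonomy changes what the system can do for Emily, not merely how well it does it.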

The Dawn of Artificial General Intelligence

To further clarify AI’s evolution and boost society’s AI literacy, DeepMind recently published a framework for classifying the levels of artificial general intelligence (AGI) – a form of AI that, like human intelligence, can understand, learn, and apply knowledge across a wide range of tasks. The levels in the framework are based on performance, from Level 0 (no AI) to Level 5 (“Superhuman”), where the AI outperforms 100% of humans. It also considers generality, differentiating between narrow intelligence (focused on specific tasks) and general intelligence (capable of a wide range of tasks and learning new skills).

In a narrow sense, the DeepMind framework indicates that superhuman intelligence, where the AI outperforms all humans, is already a reality in specific tasks such as mastering games like chess, shogi, and go, along with predicting protein structures — a formidable and critical challenge in biology, chemistry, and medicine. At the time of writing, however, the framework places the general intelligence of AI systems in its early stages. State-of-the-art AI is currently at Level 1 (“Emerging”) for generality, where it’s “equal to or somewhat better than an unskilled human.” Notable examples at this level include ChatGPT, Bard, and Llama 2. The next step in the framework is Level 2, “Competent” – an AI that outperforms “at least [the] 50th percentile of skilled adults.”
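To see the framework’s two dimensions side by side, here is a small, illustrative Python sketch that records the performance levels named above and tags example systems with a generality label. It includes only the levels mentioned in this article (the full DeepMind paper defines additional intermediate levels), and the AISystem record and example placements simply restate the descriptions above rather than any official scoring.

```python
from dataclasses import dataclass

# Performance levels named in this article, drawn from DeepMind's AGI framework.
# The full framework defines additional levels between "Competent" and
# "Superhuman"; they are omitted here for brevity.
PERFORMANCE_LEVELS = {
    0: "No AI",
    1: "Emerging (equal to or somewhat better than an unskilled human)",
    2: "Competent (at least 50th percentile of skilled adults)",
    5: "Superhuman (outperforms 100% of humans)",
}


@dataclass
class AISystem:
    """Illustrative record pairing the framework's two axes for a given system."""
    name: str
    performance_level: int  # position on the performance axis (0-5)
    generality: str         # "narrow" (task-specific) or "general" (wide-ranging)


# Examples described above: narrow superhuman systems versus emerging general ones.
examples = [
    AISystem("Game-playing AI (chess, shogi, go)", 5, "narrow"),
    AISystem("Protein structure prediction", 5, "narrow"),
    AISystem("ChatGPT, Bard, Llama 2", 1, "general"),
]

for system in examples:
    label = PERFORMANCE_LEVELS[system.performance_level]
    print(f"{system.name}: Level {system.performance_level}, {label}, {system.generality}")
```

Even this toy view makes the near-term stakes clear: the next step is a move along the generality axis from Level 1 to Level 2, not a jump straight to Level 5.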

AGI’s Impact on Leaders and Their Employees

Some suggest that OpenAI’s secretive Q* (pronounced “Q-Star”) project and DeepMind’s published work on cultural transmission in artificially intelligent agents mark technological breakthroughs propelling their AI toward the Competent level of artificial general intelligence. Such an advancement is a real near-term possibility given the rapid progression of AI. Consider, for example, OpenAI’s meteoric pace of development: ChatGPT’s debut in November 2022, followed by the release of GPT-4 in March 2023, GPT-4V with visual capabilities in September 2023, and the introduction of GPT-4 Turbo in November 2023.

As the pulse of innovation quickens, the prospect of AI outpacing human performance and regulatory frameworks becomes more tangible. The capability of these AI systems to surpass at least half of society in what OpenAI refers to as “economically valuable work” presents not just an opportunity but a formidable challenge. The bottom line is that the door to a new era in AI capabilities is not merely ajar; it’s poised to swing wide open.

In the face of this impending technological shift, preparedness is key. Beyond the confines of Big Tech, individuals, organizations, and industries alike must heed the knock at the door and take proactive measures to navigate the uncharted waters of AI’s evolution. 

Preparing for AI’s Evolution

As society advances toward the dawn of AGI, the question arises: What can people and businesses do to boost their resilience and secure their place in this future? Central to this effort is AI literacy and readiness. We’ll withstand abrupt leaps in AI capabilities by increasing our understanding of both the technology we use and how to build people-centric systems dedicated to the responsible use of AI. It’s about developing our skills and business practices to meet AI where it’s going to be in the months and years to come, not simply learning how to leverage its current level of functionality. To promote the goals of AI literacy and readiness, here are several key steps to ensure preparedness:

Steps for Becoming an AI-Prepared Leader

Education

  • Invest in AI Training: Allocate resources for continuous training programs that enhance everyone’s understanding of AI, promoting a leadership team and a workforce equipped to navigate the opportunities and challenges of AI.

Policies

  • Prioritize Ethical AI Policies: Develop and implement robust ethical AI policies within the organization, emphasizing transparency, fairness, and accountability in AI adoption. 

Governance

  • Embrace Responsible Adoption Practices: Advocate for and use responsible AI adoption practices that emphasize thorough testing, risk assessment, employee reskilling and upskilling, and adherence to safety guidelines throughout the AI transformation process.

Collaboration and Communication

  • Encourage Cross-Functional Collaboration: Foster collaboration between technical teams and non-technical stakeholders to ensure a holistic understanding of AI implications and ethical considerations at all levels of the organization.

Diversity, Equity, and Inclusion

  • Promote Diversity, Equity, and Inclusion in AI Adoption: Encourage diverse perspectives and inclusive practices in teams tasked with choosing AI-powered tools, recognizing the importance of varied viewpoints in mitigating biases and ensuring equity in AI outcomes.

Industry-Wide Commitments

  • Build Industry Alliances for Responsible AI: Collaborate with leaders from other organizations within the industry to establish alliances that collectively demand responsible AI products and services from suppliers.

Steps for Becoming an AI-Prepared Employee

Continuous Learning

  • Stay Informed on Developments: To prepare for change, employees should regularly update their knowledge on AI innovations and the potential impact of these advancements.

Skill Development

  • Participate in Reskilling and Upskilling Programs: Actively engage in reskilling and upskilling programs to adapt to evolving AI technologies, ensuring the ability to work alongside intelligent agents and ultimately minimizing the risk of job displacement.

Ethics Training

  • Participate in AI Ethics Training: Attend workshops or training sessions focused on AI ethics to recognize and address ethical challenges associated with unsafe AI technologies at work.

Reporting and Accountability

  • Report Ethical Concerns: Report ethical concerns related to AI-powered tools to promote a culture based on transparency and accountability.

Championing Safety

  • Advocate for Safe Practices: Employees should actively promote safe AI practices within their teams in order to build a culture that prioritizes responsible AI adoption over unchecked speed.

Advocacy Groups

  • Form Industry-Wide Employee Advocacy Groups: Collaborate with employees across organizations to establish groups focused on learning about and advocating for safe AI practices that improve work conditions and safeguard against mass employee displacement. These communities can serve as platforms for sharing knowledge, discussing concerns, and collectively promoting responsible AI development within their industry.

By taking these steps, leaders and employees can contribute to the responsible use of AI, striking a balance between the rapid progress of technology and the need for sound business practices to ensure a safe and equitable AI transformation. In doing so, leaders and their employees alike take an active role in protecting their organizations and themselves from the potential pitfalls of AI development. Ultimately, through collaborative efforts and a commitment to ongoing education, we can collectively build a mature and secure AI-powered future.



Brian R. Spisak
Brian R. Spisak, PhD, is an independent consultant focusing on digital transformation and workforce management in healthcare. He’s also a research associate at the National Preparedness Leadership Initiative (Harvard T.H. Chan School of Public Health, Harvard University), a faculty member at the American College of Healthcare Executives, and the author of the best-selling book ‘Computational Leadership: Connecting Behavioral Science and Technology to Optimize Decision-Making and Increase Profits’ (Wiley, 2023).
