California Management Review
by Brian R. Spisak
Image Credit | Marina Molgulska
The evolution of AI is typically seen as a binary phenomenon oscillating between amazement at its current capabilities and speculation about its superhuman future. In other words, few are considering the steps from emerging AI to superhuman AI. Even the “godfathers of AI” tend to get caught up in black and white debates about progress. Unfortunately, this binary perspective hinders preparedness. If the only envisioned next step is the distant superhuman level, there seems little cause for concern and action. However, the reality is that we’re on the cusp of AI evolving to a level that will have serious implications for everyone from frontline employees to C-suite leaders.
Scientists are already creating virtual software companies where AI agents assume roles ranging from CEO and CTO to programmers and testers. And a rapid surge in the “generality” of AI performance, which companies such as Google DeepMind are making big strides in, will soon turn these emerging examples of replacing managers and knowledge workers with intelligent agents into a mainstream business practice.
Remember, what seemed like science fiction just over a year ago—such as engaging in lifelike conversations with AI—is now commonplace. It’s therefore important not to underestimate the rapid advancement of this technology and its impact on everyone, from junior staff to those at the top.
Imagine a situation where AI outperforms just half of all skilled workers before the labor force is able to sufficiently reskill and upskill. The resulting spike in unemployment while humanity adapts could cause significant economic and social upheaval.
To maintain resilience in this fast-paced, dynamic environment, proactively evolving with AI is a must. Leaders and their teams need to rapidly increase their AI literacy, which will empower them to navigate this changing landscape effectively.
The first step in increasing literacy, and consequently resilience, is understanding the broad stages of AI evolution. IBM and others identify four phases: Reactive Machines, Limited Memory (where we are now), Theory of Mind, and Self-Aware. To put this in the context of management, here’s a hypothetical example of how each type of AI would address the emotional state of “Emily,” who is displaying clear signs of burnout:
Reactive Machines (Type I): A Type I AI has no memory and responds only to present inputs. It could offer Emily generic wellness resources in the moment, but it could not recognize the pattern of behavior that signals burnout.
Limited Memory (Type II): A Type II AI relies on historical data to generate personalized interventions. While it can simulate empathy by recognizing patterns in data and adjusting responses, it lacks a deep understanding of Emily’s emotional state.
Theory of Mind (Type III): A Type III AI would model Emily’s beliefs, emotions, and intentions, allowing it to grasp why she is burning out and tailor its support accordingly. This capability is still under development.
Self-Aware (Type IV): A Type IV AI, possessing awareness of its own internal states, could in principle relate to Emily’s experience rather than merely model it. This level remains highly speculative.
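To make the contrast concrete, the four types can be sketched as a toy dispatcher. Everything here is hypothetical and illustrative: the function name, the input signals, and the responses are invented for this burnout scenario, not drawn from any real HR or AI system.

```python
# Hypothetical sketch: how each IBM-style AI type might respond to signals
# of employee burnout. Types and behaviors are illustrative only.

def respond_to_burnout(ai_type: str, history: list) -> str:
    if ai_type == "reactive":
        # Type I: no memory; reacts only to the present input.
        return "Generic wellness tip based only on today's input"
    if ai_type == "limited_memory":
        # Type II: pattern-matches historical data to personalize a response.
        if history.count("missed deadline") >= 2:
            return "Flag burnout risk and suggest workload rebalancing"
        return "No intervention; too few historical signals"
    if ai_type == "theory_of_mind":
        # Type III (still under development): models emotions and intentions.
        return "Infer underlying stressors and tailor support"
    # Type IV: self-aware AI remains highly speculative.
    return "Speculative: relate to the experience from its own awareness"

signals = ["missed deadline", "late reply", "missed deadline"]
print(respond_to_burnout("limited_memory", signals))
```

The point of the sketch is the qualitative jump between branches: Type II only correlates signals, while Type III would reason about why those signals occur.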
Although this summary simplifies the complexities of AI development – for example, the lines between Types II and III will blur, and the exploration of Type IV remains highly speculative – it highlights the fact that AI’s evolution is not a binary shift from a helpful tool to a superintelligence outperforming everyone. Rather, it’s a set of near-term steps in between these two endpoints of development that will alter the nature of work in distinct ways.
To further clarify AI’s evolution, and boost society’s AI literacy, DeepMind recently published a framework for classifying the levels of artificial general intelligence (AGI) – a form of AI that, like human intelligence, can understand, learn, and apply knowledge across a wide range of tasks. The levels in their framework are based on performance from Level 0 (no AI) to Level 5 (“Superhuman”), where the AI outperforms 100% of humans. It also considers generality, differentiating between narrow intelligence (focused on specific tasks) versus general intelligence (capable of a wide range of tasks and learning new skills).
In a narrow sense, the DeepMind framework indicates that superhuman intelligence, where the AI outperforms all humans, is already a reality in specific tasks such as mastering games like chess, shogi, and Go, along with predicting protein structures — a formidable and critical challenge in biology, chemistry, and medicine. However, at the time of writing, the framework indicates that the general intelligence of AI systems is in its early stages. State-of-the-art AI is currently at Level 1 (“Emerging”) for generality, where it’s “equal to or somewhat better than an unskilled human.” Notable examples at this level include AIs like ChatGPT, Bard, and Llama 2. The next step in their framework is “Competent” – that is, AI that outperforms “at least [the] 50th percentile of skilled adults.”
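The performance axis of the framework can be pictured as a simple threshold table. The sketch below is a toy illustration, not DeepMind’s own rubric code; the intermediate “Expert” and “Virtuoso” labels come from the published framework, while the function and its signature are invented here.

```python
# Toy sketch of the performance axis in DeepMind's "Levels of AGI" framework.
# Thresholds are percentiles of skilled adults the AI outperforms.
LEVELS = [
    (100, "Superhuman"),  # Level 5: outperforms 100% of humans
    (99,  "Virtuoso"),    # Level 4: at least 99th percentile of skilled adults
    (90,  "Expert"),      # Level 3: at least 90th percentile of skilled adults
    (50,  "Competent"),   # Level 2: at least 50th percentile of skilled adults
    (0,   "Emerging"),    # Level 1: equal to or better than an unskilled human
]

def classify(percentile: float, generality: str = "narrow") -> str:
    """Map a skilled-adult percentile to a level label on one generality axis."""
    for threshold, label in LEVELS:
        if percentile >= threshold:
            return f"{label} ({generality})"
    return f"No AI ({generality})"  # Level 0

# Frontier chatbots sit at Emerging on the *general* axis, while systems like
# AlphaZero or AlphaFold are Superhuman on a *narrow* task.
print(classify(10, "general"))
print(classify(100, "narrow"))
```

The two axes are the key design point: the same system can occupy a high level on a narrow task and a low level in generality, which is exactly the gap the article argues will close next.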
Some suggest that OpenAI’s secretive Q* (pronounced “Q-Star”) project and DeepMind’s published work on cultural transmission in artificially intelligent agents mark technological breakthroughs, propelling their AI toward the Competent level of artificial general intelligence. This imminent advancement is a real possibility given the rapid progression of AI. For example, consider OpenAI’s meteoric developments starting with ChatGPT’s debut in November 2022, followed by the release of GPT-4 in March 2023, GPT-4V with visual capabilities in September 2023, and the introduction of GPT-4 Turbo in November 2023.
As the pulse of innovation quickens, the prospect of AI outpacing human performance and regulatory frameworks becomes more tangible. The capability of these AI systems to surpass at least half of society in what OpenAI refers to as “economically valuable work” presents not just an opportunity but a formidable challenge. The bottom line is that the door to a new era in AI capabilities is not merely ajar, it’s poised to swing wide open.
In the face of this impending technological shift, preparedness is key. Beyond the confines of Big Tech, individuals, organizations, and industries alike must heed the knock at the door and take proactive measures to navigate the uncharted waters of AI’s evolution.
As society advances toward the dawn of AGI, the question arises: What can people and businesses do to boost their resilience and secure their place in this future? Central to this effort is AI literacy and readiness. We’ll withstand abrupt leaps in AI capabilities by increasing our understanding of the tech we use and how to build people-centric systems dedicated to the responsible use of AI. It’s about developing our skills and business practices to meet AI where it’s going to be in the months and years to come, not simply learning how to leverage its current level of functionality. To promote the goals of AI literacy and readiness, here are several key points to ensure preparedness:
Education
Policies
Governance
Collaboration and Communication
Diversity, Equity, and Inclusion
Industry-Wide Commitments
Continuous Learning
Skill Development
Ethics Training
Reporting and Accountability
Championing Safety
Advocacy Groups
By taking these steps, leaders and employees can contribute to the responsible use of AI, striking a balance between the rapid progress of technology and the need for sound business practices to ensure a safe and equitable AI transformation. In doing so, they take an active role in protecting their organizations and themselves from the potential pitfalls of AI development. Ultimately, through collaborative efforts and a commitment to ongoing education, we can collectively build a mature and secure AI-powered future.