Generative AI holds promise for organizational agility, but managers must be aware of three key traps.
Introduction
Agility is a necessity for organizations facing change and uncertainty. Generative AI, a powerful subset of artificial intelligence, has emerged as a transformative force. By automating creative work such as drafting text, images, and code, it can catalyze innovation, inform strategizing and decision-making, and enhance operational efficiency, all of which are key components of organizational agility.
Despite this potential, the rapid adoption of generative AI carries pitfalls that can inadvertently hinder the very agility organizations seek to enhance. Managers face the challenge of integrating these powerful technologies while avoiding traps that lead to ethical quandaries, overreliance, and unmanageable complexity and unpredictability.
This short article argues that the true value of generative AI lies not only in its ability to accelerate processes but also in its responsible use, ensuring that agility is coupled with ethical integrity and careful reflection.
Three Traps for Organizational Agility
Trap 1: Rapid deployment circumventing ethical concerns
The speed at which generative AI can be deployed is alluring and promises immediate gains. However, rapid deployment often bypasses ethical deliberation. This oversight can lead to privacy violations, such as non-compliance with data protection regulations, unintended biases in recruitment algorithms, and a lack of accountability in automated decision-making (a minimal bias check is sketched after the list below). Here are three solutions that managers can keep in mind for this trap:
- Ethical AI frameworks: Organizations should consider adopting a comprehensive, robust ethical AI framework. This includes conducting impact assessments for high-risk AI systems, ensuring data governance measures are in place, and embedding mechanisms for human oversight. For instance, adopting a principle-based approach akin to the Asilomar AI Principles can help ensure AI development aligns with human values and ethical standards.
- Ethical review boards: Establishing an ethical review board can be invaluable. This board should include cross-functional experts from both inside and outside the organization, including ethicists, sociologists, and consumer advocates. The board can review and approve AI projects, similar to the approach taken by the Ethics & Society team at Google DeepMind.
- Stakeholder engagement: Developing a continuous stakeholder engagement process is crucial. This process should involve transparent communication with customers, regular consultations with advocacy groups, and collaborations with academic institutions to assess the impact of AI. Organizations like the Algorithmic Justice League offer interesting models for how to engage with a broad community to ensure AI systems are developed responsibly.
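To make the bias concern concrete, here is a minimal sketch of a fairness audit for a hypothetical recruitment model's decisions, using the four-fifths rule from US employment-selection guidance as a simple disparate-impact test. The data, column names, and function names are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: demographic-parity audit of a hypothetical
# recruitment model's decisions. All names and data are illustrative.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes (e.g., interview offers) per group."""
    return df.groupby(group_col)[outcome_col].mean()

def passes_four_fifths_rule(rates: pd.Series) -> bool:
    """Flag disparate impact if any group's rate is below 80% of the highest."""
    return (rates.min() / rates.max()) >= 0.8

# Example: audit a batch of automated decisions before wider rollout.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1, 0, 0, 1, 1, 1, 1, 1],
})
rates = selection_rates(decisions, "group", "selected")
print(rates)
if not passes_four_fifths_rule(rates):
    print("Potential disparate impact: escalate to the ethical review board.")
```

A check like this is cheap enough to run on every model update, which makes it compatible with rapid deployment rather than an obstacle to it.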
Trap 2: Overreliance on AI for ideation and decision-making
Generative AI’s capacity to ideate and facilitate decision-making offers rich potential: it can process and synthesize information at a scale no human team can match. However, its overuse risks overshadowing human expertise and can create feedback loops that recycle existing ideas instead of generating truly innovative concepts. This is apparent in the ‘echo chambers’ created by content recommendation algorithms, which often perpetuate existing consumer behaviors rather than fostering new interests. Below are three possible solutions for this trap:
- Hybrid decision-making models: To combat overreliance on AI, organizations can implement a hybrid decision-making model in which AI-generated options are evaluated alongside human judgment. For instance, AI could generate several business strategies, which are then reviewed and refined by a team of human experts, ensuring that decisions are made with a full understanding of the business context and moral implications (a minimal sketch of such a workflow follows this list). This mirrors the ‘centaur’ approach in chess, where human-AI teams have outperformed both computers and humans playing alone.
- Continuous training and education: A program of continuous learning and development should be instituted to keep employees up to date with the latest AI advancements and their ethical implications. This should include not only technical training but also workshops on critical thinking and ethics in AI, similar to Microsoft’s AI Business School, which was designed to educate leaders on responsible AI integration.
- Diverse teams for AI oversight: Creating diverse teams for AI development and deployment involves assembling groups from different backgrounds and disciplines. This can help challenge assumptions and prevent organizational inertia. It is also vital to include voices that are often underrepresented in tech to ensure that a variety of viewpoints is represented. A prime example of this is the Partnership on AI, which brings together organizations, civil society, and academics to share best practices and ensure responsible AI development.
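As a rough illustration of the hybrid model described above, the sketch below separates AI generation from human approval so that no AI-generated option ships without explicit expert sign-off. Every name here is hypothetical; generate_candidates stands in for whatever generative model an organization actually uses.

```python
# Minimal sketch of a hybrid (human-in-the-loop) decision workflow.
# All names are illustrative; `generate_candidates` is a placeholder
# for a real generative-AI call.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    proposal: str
    human_notes: list[str] = field(default_factory=list)
    approved: bool = False

def generate_candidates(prompt: str, n: int = 3) -> list[Candidate]:
    # Placeholder: in practice, call a generative model here.
    return [Candidate(proposal=f"{prompt} -- option {i + 1}") for i in range(n)]

def human_review(candidate: Candidate, reviewer: str, note: str, approve: bool) -> None:
    """Record expert judgment; nothing ships without explicit approval."""
    candidate.human_notes.append(f"{reviewer}: {note}")
    candidate.approved = approve

# AI proposes; humans dispose.
options = generate_candidates("Enter the mid-market segment")
human_review(options[0], "strategy lead", "Aligns with growth goals", approve=True)
shipped = [c for c in options if c.approved]
print(f"{len(shipped)} of {len(options)} AI-generated options approved by humans.")
```

The design choice that matters is the default: approved stays False until a named human reviewer changes it, which keeps accountability with people rather than with the model.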
Trap 3: Complexity and unpredictability of AI
Generative AI systems can become enigmatic, even to their creators. This complexity can result in systems that are difficult to predict or control, leading to unexpected outcomes. For example, in autonomous vehicles, unpredictable AI behavior can have dangerous consequences, emphasizing the need for systems whose logic can be understood and anticipated. For this trap, the following three solutions are suggested:
- Simplicity and transparency: Where possible, opting for simpler AI models that are easier to understand and manage is wise. Decision trees can often be used in place of neural networks for certain tasks, providing a clear rationale for each decision (see the first sketch after this list). This approach is seen in industries like healthcare, where transparent AI systems are crucial for diagnosis and treatment and any decision made by AI needs to be explainable to patients.
- Explainable AI (XAI): Investing in XAI involves not just the adoption of tools, but also a commitment to research and development in this space. For example, DARPA’s XAI program aims to create a suite of machine learning techniques that produce more explainable models, while still maintaining a high level of learning performance. Organizations should seek to contribute to and apply such findings to ensure their AI systems can be understood by all stakeholders.
- Robust testing and monitoring: A comprehensive strategy for testing and monitoring AI systems must be developed to detect, report, and respond to unexpected behaviors. This includes implementing AI auditing, akin to financial auditing practices, to regularly assess AI systems (the second sketch after this list illustrates the pattern). Organizations like Salesforce have developed AI monitoring systems that continuously evaluate AI decisions, ensuring they are consistent with expected outcomes and ethical guidelines.
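To illustrate the simplicity-and-transparency point, here is a minimal scikit-learn sketch: a shallow decision tree whose complete decision logic can be printed and audited line by line. The Iris dataset merely stands in for real business data, and the depth limit is an arbitrary choice for the illustration.

```python
# Minimal sketch: a shallow decision tree whose full decision logic
# is human-readable, unlike a typical neural network's.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Every prediction path can be printed, audited, and explained.
print(export_text(tree, feature_names=list(data.feature_names)))
```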
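And to illustrate ongoing monitoring, the sketch below shows one generic pattern (not any vendor's actual system): log each automated decision into a rolling window and alert when the approval rate drifts from an audited baseline. The baseline, tolerance, and window size are illustrative assumptions an organization would set during its own audits.

```python
# Minimal sketch of drift monitoring for automated decisions.
# Baseline, tolerance, and window size are illustrative assumptions.
from collections import deque

BASELINE_APPROVAL_RATE = 0.60  # established during an initial audit
TOLERANCE = 0.15               # drift beyond this triggers human review
recent = deque(maxlen=500)     # rolling window of recent decisions

def record_decision(approved: bool) -> None:
    """Log one automated decision into the rolling window."""
    recent.append(approved)

def check_drift() -> None:
    """Run periodically: compare the window to the audited baseline."""
    rate = sum(recent) / len(recent)
    if abs(rate - BASELINE_APPROVAL_RATE) > TOLERANCE:
        # In production: notify the on-call team and pause automation.
        print(f"Drift detected: approval rate {rate:.0%} vs baseline {BASELINE_APPROVAL_RATE:.0%}")

# Simulate a stream in which the model starts approving almost everything.
for outcome in [True] * 120 + [False] * 30:
    record_decision(outcome)
check_drift()
```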
By considering and expanding upon these solutions, organizations can be better prepared to integrate generative AI responsibly without sacrificing the promise of agility. Each solution requires a commitment not only to the implementation of policies and procedures but also to the ongoing education and engagement of all stakeholders involved.