California Management Review
by Anand Kumar and Amit Kumar Kashyap
Generative AI (GAI) has transformed business by automating content creation, driving personalized marketing through messages and content tailored to individual customer data and preferences, enhancing engagement and conversion rates, improving customer service through chatbots, and supporting decision-making by surfacing contextually relevant information. Existing legal frameworks and judicial precedents around the globe were designed for a world in which AI had a negligible impact on society and business, and they may struggle to govern this rapidly evolving technology decisively. Generative AI outputs, including images and information, have already produced legal and ethical conflicts in existing jurisdictions. In June 2023, Mr. Mark Walters, a radio host in the United States, filed a lawsuit against OpenAI, alleging that ChatGPT had defamed him (Vincent, 2023). ChatGPT had stated that “Mr. Walters had been accused of defrauding and embezzling funds from a non-profit organization.” The information was false. Such dissemination of false information, termed “hallucination,” is a serious concern with GAI (Bengesi et al., 2024). Data aggregators and consent managers have policies to inform users about the potential use of the personal information collected from them, but GAI introduces a new layer of complexity: once a GAI model is trained on a dataset, it cannot truly “unlearn” the information it has absorbed. This raises a critical question: how can individuals exercise control over their personal information when it is woven into the very fabric of a powerful AI model?
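To make the “unlearning” problem concrete, here is a minimal sketch with an invented toy dataset and model (not any particular GAI system): once a record has shaped trained parameters, deleting the raw record from storage does not remove its imprint; only retraining from scratch without it changes the weights.

```python
# Minimal sketch: deleting a training record does not remove its
# influence from already-trained model weights. The data and the
# "model" here are hypothetical and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy training set; imagine row 0 encodes one person's sensitive data.
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=100)

# "Train" a linear model by least squares.
w_trained, *_ = np.linalg.lstsq(X, y, rcond=None)

# Deleting the person's record from storage afterwards...
X_deleted, y_deleted = X[1:], y[1:]

# ...leaves w_trained untouched. Retraining without the record yields
# different weights, showing the record's imprint persists in w_trained.
w_retrained, *_ = np.linalg.lstsq(X_deleted, y_deleted, rcond=None)
print("weights still reflect the deleted record:",
      not np.allclose(w_trained, w_retrained))
```

For large generative models, retraining at this scale is prohibitively expensive, which is precisely why the right to erasure is so hard to honor once data has entered training.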
Internet governance has long wrestled with the contentious issue of “intermediary” liability for hosted content, and where GAI tools fit in this framework is contested. Some argue they should be treated as intermediaries, since they are used much like a search engine, even though they do not host links to third-party websites. Others contend these platforms function primarily as conduits for user prompts: because modifications to prompts directly influence the generated output, the resulting content is equated to third-party speech, potentially reducing liability. This ambiguity in classifying GAI tools, whether as intermediaries, conduits, or active creators, will complicate courts’ ability to assign liability, particularly in cases involving user reposts. This paper explores the significant convergence of law and technology, specifically the revolutionary influence of GAI on jurisprudence.
The benefits of intellectual property (IP) rights depend on firm strategy, the competitive landscape, and the rapidly changing contours of intellectual property law (Fisher & Oberholzer-Gee, 2013). A copyright owner can take legal action against anyone who infringes his or her work, with remedies such as injunctions and damages. However, who is responsible for copyright infringement by AI tools remains to be determined; as argued above, the classification question, intermediary, conduit, or active creator, will complicate courts’ ability to assign liability. ChatGPT’s usage policy was updated on January 10, 2024, and its ‘Terms of Use’ attempt to assign liability for any illegal output to the user, but the enforceability of such terms is uncertain.
Business jurisprudence of GAI refers to the legal and regulatory frameworks governing the deployment, use, and implications of highly advanced AI systems in business contexts. This area of law addresses various aspects, including liability, intellectual property, ethical considerations, and regulatory compliance. These areas include:
AI-Created Works: Determining the ownership and copyright of works created by GAI systems. In November 2023, the Beijing Internet Court issued a pivotal ruling in a copyright infringement case involving an AI-generated image. The decision tackled two essential questions: (1) whether works generated by AI can be protected by copyright and (2) if so, who owns the copyright (Tan et al., 2024). The copyrightability of AI-generated works will continue to be assessed on the unique facts of each case. Further, the United States Copyright Office clarified in March 2023 that copyright protection applies to AI-assisted works only when human beings exercise creative control. In August 2023, the US District Court affirmed this position in Thaler v. Perlmutter, further establishing that copyright eligibility is contingent upon human authorship.
Patents: Addressing whether innovations generated by GAI can be patented and who holds the patent rights. The United Kingdom Supreme Court held that an AI system cannot be named as an inventor, leaving AI-generated inventions unprotectable there. In contrast, the Bundesgerichtshof (Federal Court of Justice) in Germany ruled that AI-generated inventions are protectable and that a natural person can be named the inventor even if the invention was created using AI (Abbott, 2024). AI is advancing aggressively into invention and discovery: Google’s AI laboratory DeepMind announced in November 2023 that its tools had identified 2.2 million new crystal structures. The requirement of a human inventor under patent law, however, creates legal obstacles, as cases such as Thaler v. Commissioner of Patents illustrate. Clear legal guidance is needed, since AI complicates establishing who owns a patent and calls into question how an invention’s novelty and non-obviousness should be assessed.
Bias and Fairness: Ensuring that GAI systems are designed and implemented to avoid bias and promote fairness. Although fairness and bias are closely related, they differ significantly (Ferrara, 2023). Bias refers to a systematic and consistent deviation of an algorithm’s output from the true value, or from what would be expected absent bias. Fairness in AI, conversely, means the absence of discrimination or favouritism towards any individual or group based on protected characteristics such as race, gender, age, or religion; one way to operationalize it is to audit group-level outcome rates, as sketched below.
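The following minimal sketch (all predictions and group labels are synthetic) computes the demographic parity gap, one common fairness metric among many; a nonzero gap does not prove discrimination, but flags a disparity worth auditing.

```python
# Illustrative sketch: measuring one common fairness notion,
# "demographic parity", on hypothetical model outputs.
import numpy as np

# Hypothetical binary predictions (1 = approved) and a protected
# attribute (0/1, e.g. two demographic groups). Purely synthetic.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

rate_g0 = preds[group == 0].mean()   # approval rate, group 0
rate_g1 = preds[group == 1].mean()   # approval rate, group 1

# Demographic parity difference: 0 means equal approval rates;
# large gaps flag potential disparate impact worth investigating.
print("approval rates:", rate_g0, rate_g1)
print("parity gap:", abs(rate_g0 - rate_g1))
```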
Transparency & Explainability: Businesses must be transparent about GAI usage in their operations, especially in decision-making processes that affect individuals. “Transparency” in AI systems encompasses several related concepts, including the clarity of the algorithms, openness in discussing AI-influenced decisions, and the ability to challenge or explain these decisions (Andrada et al., 2023). Because AI’s decision-making process is often opaque, it can be difficult to determine which factors contributed to a decision or how it was reached; this is the “black box” characteristic of AI. In high-stakes matters such as loan approvals or employment decisions, this lack of openness puts fairness seriously to the test.
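One widely used probe for black-box systems is permutation importance, sketched below on an invented loan-style scorer (the features, decision function, and data are all hypothetical): shuffling one feature at a time and watching accuracy fall reveals which inputs actually drive the opaque decision, without opening the box.

```python
# Illustrative sketch of one explainability technique: permutation
# importance, which probes a black-box scorer by querying it only.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))            # e.g. income, debt, age (hypothetical)
true_w = np.array([2.0, -1.5, 0.0])      # "age" is actually irrelevant here
y = (X @ true_w > 0).astype(int)         # hypothetical loan decisions

def black_box(X):                         # opaque scorer we can only query
    return (X @ true_w > 0).astype(int)

baseline = (black_box(X) == y).mean()
for j, name in enumerate(["income", "debt", "age"]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's information
    drop = baseline - (black_box(Xp) == y).mean()
    print(f"{name}: accuracy drop {drop:.2f}")  # big drop = influential feature
```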
Accountability: Establishing responsibility for AI systems presents a complex array of issues. One of the most significant challenges is clarifying the ambiguous boundary of accountability between humans and machines. Consider autonomous vehicles: in the event of an accident, who is held accountable? The manufacturer who built the car, the developer who wrote the code, or the passenger occupying the driver’s seat? This grey zone ignites debate over the limits of human supervision and the onset of AI autonomy.
Jurisdiction: The conventional borders of jurisdiction have become more porous due to the worldwide nature of the internet and the frictionless movement of information. AI technology makes these complications worse, necessitating new ways of thinking about jurisdictional issues. Legal proceedings involving AI often span several countries, producing disputes over territoriality, nationality, and the place of impact or origin. Operating AI systems across borders makes defining the proper jurisdiction far more problematic, making it harder to maintain legal norms and resolve disputes efficiently at a global scale.
Data Privacy: AI’s reliance on massive datasets for training and operation raises significant privacy problems, particularly when personal data is used without express consent, which may violate privacy laws. High-profile incidents, including data breaches involving AI systems, underscore the seriousness of these problems. Strict compliance with data privacy legislation, modern data anonymization technologies, and openness about data collection, storage, and usage are crucial to resolving these issues; a minimal illustration of one such safeguard appears below.
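As a sketch of one safeguard, the following pseudonymizes records before they enter an AI pipeline (the field names and generalization rules are invented for illustration; pseudonymization reduces, but does not eliminate, re-identification risk):

```python
# Illustrative sketch of pseudonymization before data enters an AI
# pipeline: direct identifiers are replaced with salted hashes and
# quasi-identifiers are generalized. All fields are hypothetical.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # kept separate from the dataset

def pseudonymize(record):
    out = dict(record)
    # Replace the direct identifier with an irreversible token.
    out["user_id"] = hashlib.sha256(
        SALT + record["user_id"].encode()).hexdigest()[:12]
    # Generalize quasi-identifiers that could re-identify someone.
    out["age"] = f"{(record['age'] // 10) * 10}s"   # 37 -> "30s"
    out["zip"] = record["zip"][:3] + "XX"           # coarse location only
    return out

print(pseudonymize({"user_id": "alice@example.com", "age": 37, "zip": "94710"}))
```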
Data Protection: Ensuring GAI systems comply with data protection laws such as the GDPR, which govern the handling of personal data. Privacy, on this view, is the control over information about oneself that people expect in ordinary interpersonal interactions (Lee, 2020).
Safety Standards: Implementing and adhering to safety standards specific to AI technologies to prevent harm and ensure reliability. A security interest arises when others use information gathered about a person to damage that person’s interests (Lee, 2020).
Workplace Automation: Managing the legal implications of replacing human labour with AI systems, including job displacement and worker rights. Amazon’s warehouse and delivery workers bore the brunt of skyrocketing demand for delivered goods, with constant surveillance and productivity-tracking software pushing the pace of work to an alarming rate and putting workers’ health at risk (Bernhardt et al., 2023).
Discrimination: Preventing discriminatory practices in hiring, firing, and workplace management through AI (Andrieux et al., 2024). The CVs used to train Amazon’s recruiting algorithm over the previous ten years came mostly from men (Blackham, 2024). The algorithm had “learned” that men were to be preferred: it awarded more stars for masculine language in a CV and deducted stars for anyone who had attended a women’s college. The algorithm had been taught to discriminate, copying human bias; the sketch below shows how easily skewed training data produces this effect.
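In the toy sketch that follows (the mini-corpus and outcomes are invented, not Amazon’s data), a naive word-frequency scorer trained on skewed historical decisions ends up penalizing the word “women’s”, reproducing the mechanism described above:

```python
# Illustrative sketch of how a screening model can "learn" bias from
# skewed historical data. The mini-corpus and labels are invented.
from collections import Counter

# Hypothetical historical decisions: mostly "male-coded" CVs were hired (1).
cvs = [("captain chess club", 1), ("executed project", 1),
       ("led chess team", 1), ("women's college graduate", 0),
       ("women's chess club", 0), ("managed project", 1)]

hired, rejected = Counter(), Counter()
for text, label in cvs:
    (hired if label else rejected).update(text.split())

# A naive scorer weights each word by hire-vs-reject frequency.
def word_score(word):
    return hired[word] - rejected[word]

# "women's" never co-occurred with a hire, so it scores negatively:
# the bias in the data becomes bias in the model.
print('score for "women\'s":', word_score("women's"))
print('score for "chess":', word_score("chess"))
```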
Legislation: Governments are enacting laws specifically tailored to address AI-related issues, such as the European Union’s AI Act.
Standards and Guidelines: International organizations and industry bodies are developing standards and guidelines for AI governance.
Case Law: Courts interpret existing laws in the context of AI-related disputes, gradually shaping the legal landscape.
Ethical Frameworks: Ethical principles and frameworks, such as those developed by the IEEE and other organizations, guide AI’s responsible development and use.
Several countries and regions have implemented or are developing laws and regulations for AI use in business. These regulations ensure that AI technologies are deployed responsibly, ethically, and in compliance with existing laws. Here are a few examples:
General Data Protection Regulation (GDPR): The EU’s GDPR impacts businesses using AI by regulating data privacy and protection and requiring transparency in how AI systems process and use personal data (Lin, 2024). Businesses must ensure that AI algorithms do not infringe individuals’ privacy and data protection rights.
Artificial Intelligence Act: Once fully in force, the EU’s AI Act will impose obligations on businesses deploying AI systems, especially high-risk applications (Neuwirth, 2023). It includes requirements for risk management, data governance, transparency, and human oversight, affecting sectors such as finance, healthcare, and transportation.
Algorithmic Accountability Act: This proposed US legislation would require businesses to conduct impact assessments of automated decision-making systems to identify and mitigate risks related to bias, discrimination, and privacy (Mökander et al., 2022). It targets companies using AI to make significant decisions affecting consumers.
California Consumer Privacy Act (CCPA): The CCPA impacts businesses using AI by regulating how they collect, use, and share the personal data of California residents (Baik, 2020). It mandates transparency and gives consumers rights to access and delete their data and to opt out of data sales, affecting AI-driven business models; a minimal sketch of handling such requests follows.
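The sketch below routes CCPA-style consumer requests (the data store, field names, and request types are hypothetical simplifications of what a real compliance system would require):

```python
# Illustrative sketch of routing CCPA-style consumer requests:
# access, delete, and opt out of sale. Store layout is invented.
store = {"alice": {"data": {"email": "a@x.com"}, "sold": True}}

def handle_request(user, kind):
    if user not in store:
        return "no records held"
    if kind == "access":                  # right to know
        return store[user]["data"]
    if kind == "delete":                  # right to delete
        del store[user]
        return "records deleted"
    if kind == "opt_out":                 # right to opt out of sale
        store[user]["sold"] = False
        return "opted out of data sales"
    return "unknown request"

print(handle_request("alice", "access"))
print(handle_request("alice", "opt_out"))
print(handle_request("alice", "delete"))
```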
Personal Information Protection Law (PIPL): Effective November 2021, China’s PIPL regulates businesses’ handling of personal data, requiring explicit consent and specifying conditions for data processing (Zhang & Zha, 2023). Businesses using AI must ensure compliance with data protection and cybersecurity standards.
Provisions on the Governance of the Algorithm Recommendations of Internet Information Services: Effective March 2022, these Chinese regulations require businesses to ensure transparency and user control over algorithmic recommendations, addressing concerns about data security, manipulation, and ethical use (Yang & Yao, 2022).
Act on the Protection of Personal Information (APPI): Japan’s APPI requires businesses using AI to handle personal data carefully, ensuring data privacy and protection (Orito & Murata, 2008). It mandates obtaining consent for data processing and implements strict guidelines for data security.
These acts, regulations, and frameworks highlight the importance of ethical AI deployment in business, addressing data privacy, transparency, fairness, accountability, and discrimination. Businesses must navigate these legal landscapes to ensure compliance and build trust in AI-driven processes and products.
Currently, no legislation in India explicitly addresses generative AI. However, India has put forth important programs and standards to encourage the ethical development and application of AI. The National AI Strategy (#AIFORALL), unveiled in 2018 by NITI Aayog, prioritizes areas including healthcare, education, and transportation. In 2021, NITI Aayog put forth the “Principles for Responsible AI,” which address ethical issues including accountability, transparency, and fairness in AI. The Digital Personal Data Protection Act, 2023 and the IT Rules, 2021 were enacted to address privacy problems relating to AI and digital media. Furthermore, the Draft National Data Governance Framework Policy aims to increase data availability and improve data administration to boost AI research.
The business jurisprudence of GAI is an evolving field that seeks to balance the innovative potential of advanced AI systems with the need to protect individuals, businesses, and society from associated risks. It involves a complex interplay of legal principles, regulatory frameworks, and ethical considerations, requiring ongoing adaptation as AI technologies advance. There is as yet little consensus among the existing laws of different nations; an international committee is therefore needed to observe and mitigate the legal challenges of this evolving technology. Jurisdictions can borrow from these existing laws and adapt them to their own business practices and national contexts. Patnaik (2024) suggests several steps. First, learning can be enhanced by granting temporary liability immunity to GAI platforms, resembling a sandbox model; this allows responsible development while facilitating the data collection crucial to identifying and addressing legal challenges, thereby informing future legislative frameworks. Second, addressing data rights and responsibilities in GAI training necessitates a revamped approach to data acquisition: developers must prioritize legal compliance, including proper licensing and fair compensation for the IP used in training models, with potential solutions including revenue-sharing models or licensing agreements negotiated with data owners (a toy illustration of the revenue-sharing idea appears below). Third, navigating licensing complexities for GAI means overcoming the challenges inherent in decentralized web data. Unlike the music industry, GAI lacks centralized copyright societies; one potential solution is to establish centralized platforms analogous to stock photo repositories such as Getty Images, which could streamline data licensing, improve developers’ access to necessary datasets, and mitigate historical biases and discriminatory data practices.
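As a toy illustration of the revenue-sharing idea (the owners, token counts, and royalty pool are all invented), payouts could be made pro rata to each owner’s share of the licensed training data:

```python
# Illustrative sketch of pro-rata revenue sharing for licensed
# training data. All figures and names are hypothetical.
licensed_tokens = {"owner_a": 1_200_000, "owner_b": 300_000, "owner_c": 500_000}
royalty_pool = 10_000.00  # e.g. a fixed share of model revenue

total = sum(licensed_tokens.values())
payouts = {owner: royalty_pool * n / total
           for owner, n in licensed_tokens.items()}

for owner, amount in payouts.items():
    print(f"{owner}: ${amount:,.2f}")  # share proportional to contribution
```

Real schemes would need to weigh data quality and influence on model outputs, not just volume, but even this simple arithmetic shows the kind of accounting a centralized licensing platform would enable.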
Organizations developing GAI can share and protect their products through copyleft licensing, which allows unrestricted use and adaptation under the same license, or through open patents, which allow free use provided improvements are licensed back to the original patent holder (Berthon et al., 2015). The legal framework for GAI remains unclear and requires comprehensive reassessment within existing digital jurisprudence. A holistic governmental strategy, involving international bodies such as the International Court of Justice and informed interpretations by constitutional courts, is essential. This approach aims to maximize the advantages of this transformative technology while ensuring robust protection of individual rights and mitigation of potential harms.
Andrada, G., Clowes, R. W., & Smart, P. R. (2023). Varieties of transparency: Exploring agency within AI systems. AI & SOCIETY, 38(4), 1321–1331. https://doi.org/10.1007/s00146-021-01326-6
Andrieux, P., Johnson, R. D., Sarabadani, J., & Van Slyke, C. (2024). Ethical considerations of generative AI-enabled human resource management. Organizational Dynamics, 53(1), 101032. https://doi.org/10.1016/j.orgdyn.2024.101032
Baik, J. S. (2020). Data Privacy Against Innovation or Against Discrimination?: The Case of the California Consumer Privacy Act (CCPA) (SSRN Scholarly Paper 3624850). https://papers.ssrn.com/abstract=3624850
Bengesi, S., El-Sayed, H., Sarker, M. K., Houkpati, Y., Irungu, J., & Oladunni, T. (2024). Advancements in Generative AI: A Comprehensive Review of GANs, GPT, Autoencoders, Diffusion Model, and Transformers. IEEE Access, 12, 69812–69837. https://doi.org/10.1109/ACCESS.2024.3397775
Bernhardt, A., Kresge, L., & Suleiman, R. (2023). The Data-Driven Workplace and the Case for Worker Technology Rights. ILR Review, 76(1), 3–29. https://doi.org/10.1177/00197939221131558
Berthon, P., Pitt, L., Kietzmann, J., & McCarthy, I. P. (2015). CGIP: Managing Consumer-Generated Intellectual Property. California Management Review, 57(4), 43–62. https://doi.org/10.1525/cmr.2015.57.4.43
Blackham, A. (2024, June 14). When AI gets it wrong, workers suffer. Pursuit, University of Melbourne. https://pursuit.unimelb.edu.au/articles/when-ai-gets-it-wrong-workers-suffer
Ferrara, E. (2023). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. arXiv preprint arXiv:2304.07683.
Fisher, W. W., & Oberholzer-Gee, F. (2013). Strategic Management of Intellectual Property: An Integrated Approach. California Management Review, 55(4), 157–183. https://doi.org/10.1525/cmr.2013.55.4.157
German court allows patents for AI-generated inventions. (n.d.). University of Surrey. https://www.surrey.ac.uk/news/german-court-allows-patents-ai-generated-inventions
Lee, R. S. T. (2020). AI Ethics, Security and Privacy. In R. S. T. Lee (Ed.), Artificial Intelligence in Daily Life (pp. 369–384). Springer. https://doi.org/10.1007/978-981-15-7695-9_14
Lin, Y. (2024). More Than an Enforcement Problem: The General Data Protection Regulation, Legal Fragmentation, and Transnational Data Governance. Columbia Journal of Transnational Law, 62(1).
Mökander, J., Juneja, P., Watson, D. S., & Floridi, L. (2022). The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: What can they learn from each other? Minds and Machines, 32(4), 751–758. https://doi.org/10.1007/s11023-022-09612-y
Neuwirth, R. J. (2023). Prohibited artificial intelligence practices in the proposed EU artificial intelligence act (AIA). Computer Law & Security Review, 48, 105798. https://doi.org/10.1016/j.clsr.2023.105798
Orito, Y., & Murata, K. (2008). Socio‐cultural analysis of personal information leakage in Japan. Journal of Information, Communication and Ethics in Society, 6(2), 161–171. https://doi.org/10.1108/14779960810888365
Patnaik, A. (2024, July 3). Digital jurisprudence in India, in an AI era. The Hindu. https://www.thehindu.com/opinion/op-ed/digital-jurisprudence-in-india-in-an-ai-era/article68360073.ece
Tan, L., Lau, J., & Wong, H. (2024, April 23). China: A landmark court ruling on copyright protection for AI-generated works. Lexology. https://www.lexology.com/library/detail.aspx?g=866417bd-9c6f-4686-9bed-c1e41e452c08
Vincent, J. (2023, June 9). OpenAI sued for defamation after ChatGPT fabricates legal accusations against radio host. The Verge. https://www.theverge.com/2023/6/9/23755057/openai-chatgpt-false-information-defamation-lawsuit
Yang, F., & Yao, Y. (2022). A new regulatory framework for algorithm-powered recommendation services in China. Nature Machine Intelligence, 4(10), 802–803. https://doi.org/10.1038/s42256-022-00546-9
Zhang, Z., & Zha, Y. (2023). Systematic construction of lawfulness of processing employees’ personal information under China’s personal information protection law. Computer Law & Security Review, 50, 105853. https://doi.org/10.1016/j.clsr.2023.105853