CMR INSIGHTS


Dilemmas of ChatGPT in Content Creation Industry

by Mark Esposito, Terence Tse, and Tahereh Saheb



We explore dilemmas of ChatGPT in the content creation industry.

The release of ChatGPT in November 2022 elicited mixed reactions.1 While many were captivated by its exceptional ability to create texts autonomously, several raised ethical concerns, particularly in education, writing, and content creation.2 But the reception has not been uniformly hostile: several research papers have now been written with ChatGPT listed as a co-author.3 Despite its promising features in content creation, particularly text creation for journalism, reporting, or marketing, this emerging technology is fraught with ethical and legal challenges, which further widen the governance gap surrounding new technologies that unsettle conventional societal norms. Moreover, these technologies, in this case AI-generated language models, are not yet business-ready: companies still have to find ways to better monetize the technology and reduce the unintended consequences of AI-created content. In this short piece, we aim to highlight three obstacles that companies may have to overcome to fully reap the benefits of ChatGPT.

Related CMR Articles

“A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence” by Michael Haenlein & Andreas Kaplan. (Vol. 61/4) 2019.


Intellectual Property and Inaccurate Information

The first and most pressing issue regarding AI-generated journalism is intellectual property4 and the complexities associated with fraud, authentication, and the generation of disinformation.5 Generative AI is a large and complex language model that processes past texts and uses statistical techniques to predict the most likely word sequence. This content creation loop involves a number of players: data collectors who build the databases for algorithm training, programmers and developers who design the algorithms, third parties who sell the tools, the AI algorithm that writes the content, and the business that uses and publishes it. As automated as the process may appear to a casual user, a large set of deliberate choices lies behind such an open application.
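The statistical prediction step described above can be illustrated with a toy sketch. This is a drastic simplification: real models learn from billions of documents rather than a handful of words, and the corpus and function names below are hypothetical.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the vast text archives real models train on.
corpus = ("the report was published the report was retracted "
          "the story was published").split()

# Count bigrams: how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent continuation of `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(most_likely_next("was"))  # "published" follows "was" twice, "retracted" once
```

The sketch also hints at why outputs can be inaccurate: the model simply echoes whatever sequence was most frequent in its training data, with no notion of truth or ownership.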

What gets lost in this mix is ownership and its consequences, such as legal liability and accountability. Even the most advanced AI algorithms still exhibit uncontrollable and erroneous behavior,6 producing outputs that are questionable or inaccurate. Businesses should clearly determine ownership and legal liability in cases where AI-created content defames someone or disseminates factually inaccurate information. Such frameworks are needed, yet remain largely underdeveloped.

Accountability and Responsibility of Content

When AI is employed to help humans create content, or as a recommender tool, the lines between authorship and legal responsibility become increasingly blurred. It is indeed difficult to define which domains ChatGPT is augmenting and which it is threatening. Businesses must therefore devise strategies that clearly delineate human and machine intelligence and determine, where the two coexist in content creation, how much humans should be held accountable and how much AI stakeholders should be held legally responsible. Legal frameworks and regulatory sandboxes must be established as early as the beta versions are launched, so that policy models can properly accompany the technology's entry into the market.

Content Creation

Although generative AI is still in its early days, it has the potential to revolutionize the marketing industry.7 Automated marketing is an emerging buzzword that is catching businesses' attention. In the context of creating content for marketing purposes, the concern is that generative AI may produce generic content that ignores the specific context and features of a customer.

Imagine a company with highly micro-segmented customers who have very specific needs and preferences, such as "black female homosexual Christian students from a low-income background". The question is whether generative AI is capable of creating marketing content for such a micro-segmented customer. The situation is complicated by the fact that AI can only learn from existing content created by humans and, most likely, from their biases. Some customers may have unexpressed needs and preferences on which AI cannot be trained. Some customer groups may be excluded from training datasets, intentionally or unintentionally, resulting in an unrealistic picture of customers. Such an insufficient and incomplete picture may skew an AI model's output, and generic, skewed, or incomplete marketing content that ignores the specifics of a customer's context may cause frustration and churn. To increase customer loyalty and generate leads, the word sequences produced by generative AI must connect emotionally and psychologically with customers for marketing campaigns to be effective. Incorporating human creativity and intelligence into marketing content will reduce the disconnect between marketers, customers, and the brands they work for.

One of the best ways of achieving these marketing goals is real-time, up-to-date access to complete, integrated, unbiased, high-quality datasets8 that align marketing content with business strategies and the features of micro-segmented customers. But generative AI needs a massive amount of reliable, high-quality data, which many businesses may lack. Indeed, in markets where customer and societal characteristics change rapidly, the algorithm must be continuously retrained. Businesses will likely have to prioritize reliable and unbiased data sources to make the best use of generative AI in content creation. They will also have to consider the diverse features and characteristics of their customers, to avoid frustration and build trust.
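A pragmatic first step toward the data hygiene described above is a simple representation audit of a customer dataset before training. The segment labels, record layout, and 5% threshold below are hypothetical illustrations, not a prescribed method:

```python
from collections import Counter

def underrepresented_segments(records, key, min_share=0.05):
    """Flag customer segments whose share of the dataset falls below a
    threshold -- a rough signal that a model trained on this data may
    caricature or ignore those customers."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return sorted(seg for seg, n in counts.items() if n / total < min_share)

# Hypothetical customer records, each tagged with a micro-segment label.
data = ([{"segment": "urban_students"}] * 90
        + [{"segment": "rural_retirees"}] * 3
        + [{"segment": "first_time_buyers"}] * 7)

print(underrepresented_segments(data, "segment"))  # ['rural_retirees']
```

An audit like this does not remove bias, but it at least surfaces which micro-segments the model has too little evidence about before their absence skews the generated content.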

Looking at these challenges, ChatGPT may have fired up our imagination. For it to become a truly useful business tool, however, much harder work is required than any algorithm can currently take on; we may be scratching the surface of artificial general intelligence, but for now that remains no more than a preliminary sensation. The journey may have only just begun.

References

  1. Kelly, S. M. (2023, January 23). Microsoft confirms it’s investing billions in the creator of ChatGPT. CNN Business. CNN. https://www.cnn.com/2023/01/23/tech/microsoft-invests-chatgpt-openai/index.html

  2. Krügel, S., Ostermaier, A., & Uhl, M. (2023). The moral authority of ChatGPT. ArXiv:2301.07098 [Cs]. https://arxiv.org/abs/2301.07098

  3. Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: many scientists disapprove. Nature. https://doi.org/10.1038/d41586-023-00107-z

  4. Chesterman, S. (2023, January 11). AI-Generated Content is Taking over the World. But Who Owns it? SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4321596

  5. Could ChatGPT Become A Monster Misinformation Superspreader? (n.d.). NewsGuard. Retrieved January 24, 2023, from https://www.newsguardtech.com/misinformation-monitor/jan-2023/

  6. Jin, L., Tan, F., & Jiang, S. (2020). Generative Adversarial Network Technologies and Applications in Computer Vision. Computational Intelligence and Neuroscience, 2020, 1–17. https://doi.org/10.1155/2020/1459107

  7. Fares, O. H. (n.d.). ChatGPT could be a game-changer for marketers, but it won’t replace humans any time soon. The Conversation. Retrieved January 24, 2023, from https://theconversation.com/chatgpt-could-be-a-game-changer-for-marketers-but-it-wont-replace-humans-any-time-soon-198053

  8. O’Neill, S. (n.d.). What Are The Dangers of Poor Quality Generative AI Content? LXA. Retrieved January 24, 2023, from https://www.lxahub.com/stories/what-are-the-dangers-of-poor-quality-generative-ai-content



Mark Esposito
Mark Esposito is Professor at Hult Int'l Business School and Harvard University’s Division of Continuing Education and works in public policy at the Mohammed Bin Rashid School of Government. He directs the Hult Futures Impact Lab. He co-founded Nexus FrontierTech and the Circular Economy Alliance. He has written over 150 articles and edited/authored 13 books. His next book, "The Great Remobilization", will be published by MIT Press in the course of 2023.
Terence Tse
Terence Tse is Professor of Finance at Hult International Business School and a co-founder and executive director of Nexus FrontierTech, an AI company. He is also a co-founder of Excellere, a think tank with the goal of helping people explore and release their potential through new technologies. He has worked with more than thirty corporate clients and intergovernmental organisations in advisory and training capacities. He has written over 110 articles and three books, including The AI Republic: Building the Nexus Between Humans and Intelligent Automation (2019). His next book, The Great Remobilization, will be published by MIT Press in the course of 2023.
Tahereh Saheb
Tahereh Sonia Saheb is a research fellow at Hult Int’l Business School's Future Readiness Lab. She is the author of over 22 papers on the adoption of digital technologies and their ethical implications. She has over seven years of experience as a digital consultant and strategist at various banks and organizations. She also established and founded the first DBA/MBA program in digital banking.

California Management Review

Berkeley-Haas's Premier Management Journal

Published at Berkeley Haas for more than sixty years, California Management Review seeks to share knowledge that challenges convention and shows a better way of doing business.
