As generative AI erases the visible difference between real and fabricated content, authenticity becomes a strategic imperative. This article explains why perceived authenticity depends on three factors operating together and offers guidance for organizations navigating this new reality.
OpenAI recently launched Sora 2, an app that generates photorealistic video from text prompts. Within a week, Hollywood talent agencies called it exploitation. The Motion Picture Association objected to its copyright policies. Then came the flood of videos featuring copyrighted characters in unauthorized scenes. Despite being invite-only, the app hit one million downloads in five days, faster than ChatGPT’s launch.1 Social media is now overwhelmed with synthetic clips so convincing that Zelda Williams, daughter of Robin Williams, publicly asked strangers to stop sending her AI-generated videos of her deceased father.2
This is not an isolated technology story. It represents the acceleration of something larger happening across business and wider society. We have entered what might be called a Perceived Era, a time when people judge reality not by absolute truth but by a configuration of signals and cues that make something feel real.
Consider the implications for executive communications. That CEO video announcing the merger… is it actually him, or a deepfake created by activists trying to tank your stock price? The customer testimonial on your landing page… did a real person write that, or was it generated in seconds?
Every communication now carries an invisible question mark.
This matters to C-level leaders because trust is no longer a given. Trust must be actively constructed, signal by signal, across every touchpoint. Consumer trust is already showing strain. Thales’ 2025 Digital Trust Index pointed to a universal decline in trust in digital services compared to the previous year, with not one sector reaching above 50% approval when consumers were asked which they trusted with their personal data.3 Brand reputation, meanwhile, takes years to build but can collapse in days when authenticity falters. In WTW’s 2024-25 global survey, 86 percent of senior executives reported having a formal process to assess and manage reputational risks, reflecting the steep cost of trust erosion in an era where authenticity can be questioned by a single viral incident.4 In sum, when doubt creeps into stakeholder relationships, decisions slow, service escalations rise, and partners hesitate.
So the question every leader must answer is this. How can organizations maintain perceived authenticity when AI can simulate personas, content, and entire events so convincingly?
The instinct is to look for a technology solution. Better detection tools. Stronger watermarks. More sophisticated verification systems. But that misses the point entirely. This is a trust problem, not a technology problem. And trust problems get solved by understanding how audiences now assess authenticity, then systematically addressing those signals across your most important communications.
Authenticity is not a new discussion point. Philosophers have debated it for centuries. Social media forced brands to rethink it a decade ago.
Now AI has fundamentally changed the game.
Our analysis of nearly 5,000 authenticity-related publications across multiple fields reveals a paradigm shift after 2020. Before that year, authenticity research had concentrated primarily on isolated dimensions. Researchers examined whether content aligned with factual truth. Whether leaders projected genuine sincerity. Whether brands remained consistent with their stated values. Whether influencers maintained the same persona backstage as they presented publicly. The focus centered entirely on questions of provenance and the performance of authenticity.
After 2020, the conversation changed completely.
Authenticity is now analyzed as a perceived quality, constructed by multiple overlapping factors rather than a single truth signal. Communication science has emerged as the hub discipline linking previously separate conversations on authenticity while connecting insights from psychology, media studies, marketing, and technology research. This indicates that authenticity is being fundamentally rethought at the intersection of these fields.
Why the shift? Because any single signal can now be faked.
A deepfake can reproduce a CEO’s voice with perfect accuracy. AI can generate customer reviews that pass every traditional authenticity check. Synthetic video can show events that never happened, with lighting and physics that look entirely correct.
When one signal can be spoofed, audiences instinctively look for coherence across multiple signals. They ask layered questions. Does the content match what else I know about this company? Are they being transparent about how they created this? Do I trust the source based on their track record?
This brings us to what our research identified as the most powerful combination of authenticity cues. We call it the Layer Coherence Triad. Three factors consistently work together in driving perceptions of authenticity across contexts.
Information Credibility: Does the content itself seem accurate and reliable? This is about facts, data, and verifiable sources. Can someone check your work?
Disclosure Transparency: Are we upfront about AI involvement or synthetic elements? This is about clearly communicating methods and intentions in language people actually understand.
Reputation Trust: Does the source or messenger have a trusted track record? This draws on brand history, third-party endorsements, certifications, and consistent voice across channels.
Here is what makes this triad powerful. Each element alone provides some assurance. But together they create something multiplicative rather than merely additive.
Transparent disclosure without credible content feels hollow, like admitting you are guessing. Credible content without reputation backing raises immediate skepticism. Why should I trust you specifically? Reputation without transparency looks defensive, as though you are hiding something.
When all three align, they create coherence across layers of judgment. The content checks out. The messenger is honest about methods. The source has earned trust over time. This configuration is rare in practice, appearing in fewer than 9% of cases we analyzed, but when it does appear, it delivers positive authenticity outcomes 82% of the time. That is the highest success rate of any combination we identified.
This is the insight that should reshape how leaders think about authentic communication in the AI age. Authenticity is no longer about perfecting one signal. It is about orchestrating three signals in harmony across your most critical messages.

Figure 1: Perceived Authenticity Configuration Model
The following examples show how AI-mediated authenticity plays out under pressure, illustrating our framework.
Social media giants are on the front lines of the authenticity battle. Every day, millions of images and videos flow through their platforms. How do you help users distinguish real from synthetic when the visual quality of AI-generated content now rivals professional photography?
Meta faced this challenge head-on in 2024. The company introduced a labeling system for AI-generated or AI-altered content on Facebook, Instagram, and Threads. Initially, they used a straightforward tag called “Made with AI.” The label appeared on any content that had been touched by generative AI tools, even minor edits like background changes or filter applications.
The response was swift and instructive. Content creators were surprised, sometimes frustrated, by how broadly the label appeared. A photographer who used AI to remove a distracting element from an otherwise authentic photo received the same “Made with AI” label as a fully synthetic image. Users found it confusing. Did “made with AI” mean entirely fabricated, or just lightly edited?
The transparency was there. But it was too blunt to preserve credibility.
Meta adapted. Within months, they refined the approach based on user feedback. The label changed to “AI Info.” Less declarative, more inviting. More importantly, it became clickable, offering context about how AI was used in creating the content.5 This nuance made all the difference. Users could now understand the degree of AI involvement rather than receiving a binary flag.
What does this tell us about authenticity management? Meta’s experience demonstrates that transparency about AI’s role must be handled with care. Too little disclosure breeds suspicion and eventual backlash. Too much, or disclosure that lacks nuance, can cause confusion and dilute trust in the very system meant to protect it. When calibrated well, though, transparency becomes a powerful trust-builder.
The case shows all three authenticity cues working together. Meta enhanced Information Credibility by alerting users to potentially manipulated content, enabling more informed judgments. They practiced Disclosure Transparency through visible labeling. And they drew on Reputation Trust as Meta’s policy signaled that the platform was proactively governing authenticity rather than ignoring the problem.
The lesson for leaders beyond social media? If you deploy AI in content creation or customer communication, consider clear labeling and user education as essential parts of maintaining authenticity. Get the disclosure right, not just present but comprehensible, or it can backfire.
Now let us move outside Big Tech to see how a traditional company experimented with AI in customer-facing roles.
Vodafone Germany ran a digital marketing campaign on TikTok featuring a fully AI-generated virtual influencer. In the video, a photorealistic young woman who does not exist in reality promotes Vodafone’s internet speed and a cashback offer. The production quality was high. The spokesperson looked human. Well, almost.
Alert viewers noticed something off. Facial features seemed inconsistent across frames. A mole would appear, then disappear, then reappear in a slightly different location. These subtle glitches betrayed the virtual nature of the presenter.
Comments filled with questions. Why not use a real person? Is this entire ad fake? What is Vodafone trying to hide?
Vodafone faced a choice at that moment, one that many brands will confront as AI-generated content becomes more common. They could stay silent, deflect, or acknowledge openly. They chose transparency. The company publicly confirmed that the ad featured an AI-generated presenter, framing it as an intentional experiment.6 A spokesperson explained that they were “trying out different styles, as AI is now such a big part of everyday life… people are experimenting with it in advertising too.”
This response turned a potential authenticity crisis into something else. It became a conversation about creative experimentation.
Vodafone had precedent for this approach. In late 2024, they released a short film advertisement that was openly marketed as “100% AI-produced, without a single real pixel.” By being upfront about the synthetic nature of their creative work, they positioned themselves as experimenters testing new frontiers rather than deceivers trying to fool audiences.
The Vodafone case highlights the Reputation Trust dimension of authenticity. A consumer brand must gauge how much AI-generated communication its audience will accept, and how transparency factors into that acceptance. It demonstrates that in sectors like marketing and advertising, perceived authenticity can be maintained by proactively telling audiences when AI is used, especially as part of a brand’s experimental image.
But there is tension here worth acknowledging. Research from The Influencer Marketing Factory found that only 15% of consumers highly trust AI influencers, while surveys indicate that nearly half of consumers are less likely to trust content from a virtual influencer compared to a human one.7 The novelty creates skepticism. Vodafone’s transparent communication helped manage that skepticism, but it did not eliminate it entirely.
This illustrates the double-edged nature of synthetic media in brand building. AI influencers offer efficiency and control. They also risk alienating consumers who feel unsettled by the absence of human realness.
Together, these two examples, Meta’s content labeling and Vodafone’s AI influencer, show how the framework of credibility, transparency, and reputation plays out in practice across very different organizational contexts. Yet both companies learned the same lesson. Authenticity in the AI age requires managing multiple signals simultaneously and being deliberate about how those signals interact to shape audience perceptions.
Safeguarding authenticity requires systematic improvement across three dimensions: credibility, transparency, and reputation.
Credibility: Implement verification protocols for high-stakes communications such as earnings calls, product launches, and regulatory filings. Assign clear ownership for accuracy. Consider content authenticity technologies like the C2PA standard, which creates verifiable metadata showing how content was made and modified.
Begin with your highest-traffic content. Establish monthly audits with clear accountability. Ask: Can someone verify what you claim? If not, why are you claiming it?
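To make the idea of verifiable provenance metadata concrete, here is a minimal sketch in Python. It illustrates the pattern behind standards like C2PA (hash the content, record how it was made, sign the record) but is not the C2PA specification itself; the HMAC key and field names are illustrative assumptions, and a real deployment would use a C2PA implementation with certificate-based signatures.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"org-held-secret"  # illustration only; C2PA uses X.509 certificates

def provenance_record(asset_bytes: bytes, creator: str, tools: list[str]) -> dict:
    """Build a signed record of how an asset was made and modified."""
    record = {
        "sha256": hashlib.sha256(asset_bytes).hexdigest(),  # binds record to exact content
        "creator": creator,
        "tools": tools,  # e.g., ["camera", "generative-fill"]
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(asset_bytes: bytes, record: dict) -> bool:
    """Check that the asset matches the record and the signature is intact."""
    if hashlib.sha256(asset_bytes).hexdigest() != record["sha256"]:
        return False  # content was altered after the record was issued
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

The design point is that the record is bound to the exact bytes of the asset, so any later modification is detectable, and the signature ties the claim to an accountable owner.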
Transparency: Develop clear guidelines answering: When do we disclose AI involvement? What language do we use? Where does disclosure appear? Test your disclosure language with real users, aiming for at least 80% comprehension. Disclosure must be comprehensible and contextual, not buried in footnotes. Position transparency positively, as honesty rather than an admission of liability.
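The 80% comprehension target can be enforced as a simple scripted gate on user-testing results. A minimal sketch, assuming each response is coded as whether the tester correctly explained the label; the function name and threshold default are our own.

```python
def passes_comprehension_gate(responses: list[bool], threshold: float = 0.80) -> bool:
    """Return True if the share of testers who correctly interpreted
    the disclosure label meets the comprehension threshold."""
    if not responses:
        raise ValueError("no test responses collected")
    rate = sum(responses) / len(responses)
    return rate >= threshold

# Example: 17 of 20 testers correctly explained what the "AI Info" label means.
results = [True] * 17 + [False] * 3
print(passes_comprehension_gate(results))  # 0.85 >= 0.80 -> True
```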
Reputation: Publicize third-party endorsements, certifications, or independent audits of your AI systems. Ensure AI-generated content aligns with your brand voice; inconsistency erodes trust quickly. Establish brand-fit review gates asking: Does this sound like us? Would we say this if a human had written it?
Create a clear public statement on your website outlining when you use AI, how you verify outputs, and what safeguards exist.
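Such a statement can also be published in machine-readable form alongside the prose version, so partners and auditors can consume it programmatically. A sketch under stated assumptions: the field names, example organization, and URL below are hypothetical, not an established standard.

```python
import json

# Hypothetical structure for a public AI-use statement; field names are
# illustrative, not an established schema.
ai_use_statement = {
    "organization": "Example Corp",
    "last_updated": "2025-06-01",
    "ai_use_cases": [
        {"area": "customer support", "ai_role": "drafts replies, human-reviewed"},
        {"area": "marketing imagery", "ai_role": "generative edits, labeled in-content"},
    ],
    "verification": "all AI-assisted public content passes a named-owner accuracy review",
    "safeguards": ["disclosure labels", "monthly audits", "incident response plan"],
}

# Serve this as JSON at a well-known URL (e.g., example.com/ai-transparency.json).
print(json.dumps(ai_use_statement, indent=2))
```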
Build digital literacy among stakeholders through primers on recognizing AI content and sessions on AI and trust. This signals your organization takes authenticity seriously.
Develop crisis response plans for authenticity incidents. Pre-write holding statements, establish approval chains, and run tabletop exercises. When crises occur, respond across all three dimensions simultaneously: issue corrections, communicate transparently, and leverage your reputation.
These dimensions work together: a message strong in all three will feel authentically you, even when AI-assisted.

Figure 2: Authenticity Management Matrix
We are not going back to a simpler time when authenticity was easy to judge. AI-generated content and interactions will become more prevalent, not less. The perception problem will intensify, not fade.
This makes authenticity a C-suite issue rather than a niche concern for communications teams alone. Those organizations that get it right, the ones that embrace the new authenticity framework of credibility, transparency, and reputation working in concert, can turn trust into a genuine competitive advantage.
In markets where customers have abundant choices, trust becomes the deciding factor. In talent wars, authenticity in how you communicate culture and values attracts better candidates. In partnerships, reputation for straight dealing opens doors that no pitch deck can.
Conversely, those who ignore the authenticity imperative risk significant brand damage. One viral authenticity crisis can undo years of marketing investment. A pattern of opacity around AI use can trigger regulatory scrutiny, customer exodus, or internal morale collapse.
The downside is steep.
But here is the optimistic view, and it is genuinely optimistic rather than forced positivity. By viewing authenticity not as a lost cause but as a quality that can be actively managed through credibility, transparency, and reputation, organizations can thrive even amid the “synthetic reality” revolution that AI enables. The tools exist. The frameworks are clear. What remains is commitment and execution.
This requires leadership. It requires acknowledging that AI changes the rules of authentic communication. It requires investing in verification systems, disclosure policies, and reputation management in ways that might feel unfamiliar or even uncomfortable at first. Most importantly, it requires recognizing that in a world where anything can be faked by AI, the consistent cultivation of truth signals becomes your most valuable asset.
Authenticity is not a compliance checklist. It is a relationship. Relationships require ongoing investment, honest communication, and actions that align with words over time.
That has always been true. AI simply makes it more explicit and more urgent.
In the end, genuine leadership in the AI age will be measured not by technological sophistication but by how steadfastly we cultivate the signals of truth. That is the strategic imperative. Guard it accordingly.
About the Research

This article draws on a macro-analysis of 4,951 publications on authenticity across disciplines and detailed coding of 128 high-impact communication studies. The research identifies communication scholarship as the bridge field linking authenticity conversations after 2020 and documents the shift toward multi-cue assessment, with audiences now deploying an average of 3.9 authenticity signals per judgment. The Layer Coherence Triad of credibility, transparency, and reputation appears in fewer than 9% of cases but achieves positive authenticity outcomes 82% of the time when present. Full materials, codebooks, and analysis are archived on the Open Science Framework at https://osf.io/vce37/?view_only=c01972742cf04571a355431c88c28d3e