AI Advertising and the Authenticity Paradox


This essay is part of the series: World Creativity and Innovation Day 2026: Sparks and Shields


The rapid evolution of artificial intelligence (AI) has pushed advertising into a new era in which this frontier technology is no longer confined to internal analytics and automation operations, but is producing creative outputs, generating human images, voices, and other visuals in the place of actors, voice-over artists, or set designers. Compared to time-, money- and people-intensive traditional advertising, brands can now operate with unprecedented efficiency and scale by using generative AI (gen-AI) – a type of artificial intelligence with the capacity to generate text, images, video, audio, and other media. However, producing captivating yet affordable campaigns comes with a trade-off: maintaining authenticity as a brand and a trusting relationship with the consumer remains essential. This presents a fundamental tension, whereby increased reliance on synthetic content risks eroding the perceptions of credibility and authenticity on which effective advertising depends; the more artificial advertising becomes, the more likely consumers are to seek “real”, authentic content.

This article explores the perspectives of both companies and consumers in relation to AI use, assessing its impact on how consumers interpret and respond to advertising. This requires weighing the benefits of scale and efficiency against potential drawbacks related to consumer trust and unethical use of human-made creative products.

The Rise of AI Advertising: Frictionless, Cheap, and Accessible

The advertising industry pre-AI could not have been considered a totally “honest” one, presenting only the most idealised qualities of the product or company, and digitally altering images of products and people. However, it is only now, with the rise of gen-AI, that the human creativity necessary to think up a campaign, film it, or act in it has the potential to be so significantly reduced. The proliferation of gen-AI tools such as OpenAI’s Sora and Meta’s competitor application Vibes has made it all the easier for users to create still images, video, and audio, bringing enormous advantages to businesses, which can now advertise far more cheaply while still maintaining “unprecedented realism”.

Aside from cost, researchers have also highlighted that gen-AI advertising can reduce the risk of reputational damage to brands. AI-generated characters that have come to populate social media – known as “virtual influencers” (VIs) – are predominantly created and managed by third-party marketing agencies, AI startups, or brands themselves, meaning that they are unlikely to stray off-message or engage in controversial behaviour in the same way as human influencers or celebrities. Use of the latter has resulted in customer backlash when their actions or personal views do not align with the company’s image or the values of its consumer base. In the highly publicised case of Kanye West, the artist’s antisemitic comments led sportswear giant Adidas to drop him, costing the company 600 million euros in lost revenue in the three months following the split. By comparison, VIs represent a highly controllable advertising asset, allowing brands to maintain consistency in messaging and minimise reputational risk.

AI-generated virtual influencers Imma, Rozzy, and Miquela


Source: Virtual Humans

From a company’s perspective, AI advertising offers operational efficiency and enables fast, lower-cost production and reduced potential for PR blowback. However, while these advantages make AI an increasingly attractive tool for brands, they also raise a fundamental question about whether content that is optimised for performance and control can still achieve the emotional resonance required to build genuine connections with audiences.

The Authenticity Gap: Human Touch vs Artificial Perfection

Marshall McLuhan’s often-cited theory on how media shapes the perception and interpretation of content, “the medium is the message”, is particularly relevant to the increasing use of synthetic content in advertising. In the case of gen-AI, the artificiality of the medium itself is likely to shape how audiences interpret a brand’s messaging. For brands and industries for which modernity and innovation are core selling points, obvious use of AI can be an asset. An example of this would be Nike’s recent “Never Done Evolving” campaign, which used AI-generated visuals to depict a tennis match featuring a young Serena Williams, reinforcing the brand’s message that its products are defined by high-tech innovation. However, for companies whose positioning does not rely so heavily on a futuristic image, AI content – even the most realistic – may fail to generate genuine emotional engagement or even damage consumer confidence.


Studies consistently show that while audiences may admire the visual polish and innovation of AI-generated content, they often perceive it as “too perfect”, which can feel artificial or unsettling, as well as lacking emotional relatability. Available data suggest that human influencers continue to outperform in emotionally driven sectors because they embody lived experience and authentic human behaviour that AI struggles to replicate convincingly. In the same way that celebrities who are paid to endorse a brand are not always perceived as reliable spokespeople, AI-generated influencers or characters in advertisements are similarly not perceived as trustworthy. This is because, being virtual, they cannot have had a real user experience.

Figure 1: Comparison of Engagement between Virtual and Human Influencers


Source: The Impact of AI-Generated Advertising and Virtual Influencers on Consumer Perception and Brand Authenticity

Studies show that consumers – particularly younger generations – are still able to detect AI-generated imagery, but the rapid rate of technological advancement means that the gap between perceptible artificiality and complete realism may soon close. Even as this distinction becomes harder to detect, however, consumers’ awareness of gen-AI usage in advertising has increased, alongside concerns that imperceptible AI advertising may manipulate buying behaviour. This has prompted debate around whether disclosure is necessary, as simply knowing that a piece of content is AI-generated affects perceptions of credibility and willingness to engage.

Ethical Considerations for Brands, Consumers, and Creatives

While disclosure of AI-generated content can sometimes reduce perceived credibility, a lack of transparency risks far greater backlash, fostering suspicion and eroding trust if the use of AI is later revealed. Regulatory frameworks are beginning to reflect consumer demand for transparency, with new laws worldwide requiring disclosure of AI-generated content. An example of this is the EU’s AI Act, which mandates the use of watermarks or metadata indicating AI-generated content, with practical guidance for businesses using the technology provided by a European Commission Code of Practice. Under this guidance, AI-generated content in adverts must not “misrepresent products or create impressions that consumers would not form if they knew the content was AI-generated,” explicitly incorporating gen-AI transparency requirements into well-established accountability standards for traditional advertising content.


Creative and advertising industry experts have raised concerns about the implications of generative AI for both intellectual property and the future of human creativity. Among the industry’s most prominent concerns is the unauthorised use of original, human-made creative content in gen-AI model training, which raises intellectual property and legal risks. There is also the broader concern that increasingly sophisticated AI-generated outputs may devalue human creativity over time. Seemingly in response, some companies have begun experimenting with counter-positioning strategies – such as clothing brand Aerie’s promise not to feature AI bodies in its campaigns – to signal authenticity and build trust with consumers in an increasingly synthetic advertising landscape. While this may be an effective short-term differentiation strategy, Emma De La Fosse of communications firm Edelman UK suggests that we are unlikely to see a wholesale rejection of AI in the creative industries, and that creatives instead “need to lean in and make sure they are the ones wielding the tool”. The question, then, is not whether AI has a role, but how to use it without diluting the credibility that makes advertising effective.

Conclusion: Toward a Transparent Future

The future of advertising is unlikely to be a binary choice between human creativity and AI. Instead, it will be defined by how effectively brands assess whether their identity and customer base align with – and are likely to respond to – gen-AI content versus traditional authenticity. AI offers unmatched efficiency, scalability, and creative potential, but without trust and transparency when declaring AI use, these advantages risk diminishing returns.

Policy will likely continue to evolve alongside the pace of development in gen-AI tools. Businesses have already pushed back against blanket “Made with AI” labelling policies in favour of more nuanced approaches that allow for clarity without undermining creative intent, instead adopting systems such as “AI info” icons that can be expanded to show how and to what extent AI was used in the creation of the content. This reflects a broader recognition that the impact of disclosure depends on how it is communicated and interpreted by audiences. As industry voices increasingly argue, the real opportunity lies in combining AI’s capabilities with human creativity, emotional intelligence, and recognition of consumers’ desire for transparent and honest communication with the businesses they engage with.


Elizabeth Heyes is Junior Fellow, Technology, ORF Middle East.

The views expressed above belong to the author(s).
