By Christophe Lachnitt, Founder and CEO of Croisens, Consultant specializing in optimizing strategy, operating model, and generative artificial intelligence adoption within communications organizations

The most immediate and obvious opportunity offered by generative artificial intelligence to communications professionals lies in content production. This benefit is spectacular — and well known. But focusing exclusively on these production capabilities would be a mistake. Doing so risks obscuring its equally significant impact on content consumption. When we communicators fixate solely on generative artificial intelligence’s (Gen AI’s) power to generate content, we fall into an operational mindset that turns our gaze inward. We ignore our audiences — when empathy toward them should always be our guiding priority. To truly understand the implications of Gen AI, we must ask: How does it reshape the way people consume content — not just the way we produce it?

One of the most troubling developments in this realm is the falsification of human nature, made possible by generative artificial intelligence's ability to create fake humans. Whereas AI agents differ from assistants by acting autonomously, fake humans are defined by their fabricated identities.

These false identities come in two forms:

  • Some artificial intelligence avatars are the digital twins of existing people. This practice is particularly popular on social networks (e.g. Facebook, Instagram, Snapchat, TikTok), which allow us to create our virtual double. These are half-fake humans, in the sense that they falsely embody a real person.
  • There are also more and more fully fake humans, which are entirely fabricated humans invented and animated by Gen AI. We’re already familiar with virtual influencers, some of whom command large followings and generate significant income. In this regard, YouTube analyzed activity from 300 virtual content creators on its platform and found they racked up over 15 billion views in 2024. That same year, virtual influencer Aitana Lopez (@fit_aitana on Instagram) earned an average of $10,000 per month. Unfortunately, we’re increasingly going to encounter fully fake humans with far less benign intentions.

In both cases, these human counterfeits distort our ontic identity — our essential sense of being.

Worryingly, fake humans can be nearly indistinguishable from real ones. Consider a joint study¹ by Google DeepMind and Stanford University. Researchers interviewed 1,000 volunteers from diverse demographics (age, gender, ethnicity, education, political beliefs) using a model based on OpenAI’s GPT-4o. Following two-hour interviews, the model created a digital twin of each person. Both the humans and their AI twins then independently completed personality tests, surveys, and logic games. The AI twins mirrored their human counterparts’ responses in 85% of cases.

Now imagine what AI could replicate based on longer interviews. This potential is already being commercialized. For example, the American company Brox² uses extended interviews to create digital twins of human consumers, and provides brands with synthetic focus groups: faster, cheaper, and just as insightful as traditional ones. Brox can tell a client whether a very specific demographic would tolerate a price increase or embrace a new product. Similarly, the U.S. agency rehabAI developed a tool called Stress Tester³ that allows brands to evaluate creative concepts using groups of fully fake humans to gauge target audience reactions. Consumer goods giant Colgate also uses digital twins to test product ideas, allowing the company to iterate more quickly during development. While these AI-generated profiles speed up early testing, Colgate will still conduct feedback trials with human participants before launch. At the same time, Colgate is collaborating with Market Logic to build a Gen AI platform that mines its decades of proprietary consumer research to surface deeper insights into market trends.

But digital twins are only the visible tip of the ethical iceberg formed by the creation of fake humans. The submerged mass is far larger, and often far less ethical. It concerns the online manipulation of real humans by fake humans, within a framework of false intimacy, to induce them to buy a product, vote for a candidate, commit to a cause, or divulge secret information.

In fact, the ability to manufacture fake humans may bring about the most profound communication revolution since the invention of writing over 5,300 years ago.

It will trigger two major, unstoppable dynamics for brands…

This article was published in the Autumn-Winter 2025 issue of CODI, EACD’s magazine shaped by experts in the field and dedicated entirely to communication professionals. In a world where everything moves fast, CODI creates a moment to step back and explore key issues in real depth. Twice a year, it gives communicators the perspective they need beyond the noise of the news cycle.