
The Average Machine: How AI Makes Every Brand Sound the Same

When every brand runs on the same models, trained on the same data, content converges toward the same center of gravity.

AI tools raise the quality of marketing content. They also make it harder to tell apart.


As early as 2023, Harvard Business School and BCG found that AI access improved output quality by 40% in a study of 758 consultants, but it also narrowed the spread of ideas across the group. Higher quality, less variety, in the same experiment.


The buried finding from that study is the one that matters most for brand strategy. When researchers assessed the diversity of each consultant's thinking, the AI group had converged. More polish, more groupthink.


AI improved the work that falls inside its capability zone and degraded what falls outside it. For content marketers, mapping that boundary is now a strategic necessity to determine where their efforts are best spent.


This piece maps what the emerging evidence actually shows and what content marketers need to do to stay distinctive as AI raises the floor for everyone.


The Content Generation Frontier


The BCG researchers named this concept the jagged frontier: an irregular line separating the tasks AI handles well from the tasks it handles poorly.


What makes it jagged is that the line doesn't follow intuitive patterns about task difficulty. Some seemingly complex tasks sit inside the frontier. Some seemingly simple ones sit outside it.


For content generation, the frontier maps onto a distinction between human and AI capabilities that most marketing teams haven't made explicit.


Currently, AI is good at execution. Drafting structure from a brief, polishing prose, reformatting arguments for different channels, summarizing research, scaling production. These are tasks where AI reliably improves speed and output quality. Teams that use AI for execution work faster and produce cleaner output.


Where AI is less effective is concept origination: holding a genuine point of view. This is the writer's fingerprint. Contrarian arguments that cut against category consensus. Brand-specific conviction drawn from real customer insight or proprietary data. The take that distinguishes one brand from every other brand in the category.


These are the tasks where AI produces output that looks like origination but isn't. It mimics the form of a strong opinion without the substance of one. The output passes review but it just doesn't stand out.


Creating content with AI holds tremendous potential for marketers. But using AI for origination tasks, the ones that sit outside the frontier, risks accepting passable output in place of genuine conviction.


Models Aren't Built for Memorable Content


Origination sits outside the frontier because of how LLMs are built.


After training on enormous amounts of text data, models like ChatGPT and Claude are further refined using a technique called Reinforcement Learning from Human Feedback (RLHF). Human raters evaluate different outputs and select the ones they prefer. The model learns to produce more of what gets approved.


This sounds sensible, and in many ways it is. But researchers have pointed out the selection pressure this creates. A 2026 synthesis paper published in Trends in Cognitive Sciences explains that RLHF has been repeatedly shown to reduce stylistic and expressive variability, because what human raters tend to approve is clear, agreeable and conventionally structured content.


Edge cases, unusual arguments and idiosyncratic voices get filtered out in favor of outputs that feel safe and professional. The model isn't being trained to be creative but to be acceptable.
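That selection pressure is easy to see in a toy simulation. This is a sketch only, nothing like the real RLHF pipeline: the `rater_score` and `best_of_n` functions are invented for illustration, with "style" collapsed to a single number where 0 means maximally conventional.

```python
import random

random.seed(1)

def rater_score(style: float) -> float:
    """Toy rater: prefers conventional style, scoring highest at 0."""
    return -abs(style)

def best_of_n(n: int) -> float:
    """Generate n candidate 'styles' and keep the one the rater approves
    most -- a crude stand-in for RLHF-style selection pressure."""
    candidates = [random.gauss(0.0, 1.0) for _ in range(n)]
    return max(candidates, key=rater_score)

def spread(xs: list[float]) -> float:
    """Root-mean-square distance from the conventional center (0)."""
    return (sum(x * x for x in xs) / len(xs)) ** 0.5

before = [random.gauss(0.0, 1.0) for _ in range(2000)]  # unfiltered outputs
after = [best_of_n(8) for _ in range(2000)]             # rater-selected outputs

print(f"spread before selection: {spread(before):.2f}")  # close to 1.0
print(f"spread after  selection: {spread(after):.2f}")   # far smaller
```

The surviving outputs cluster tightly around the conventional center: nothing in the loop rewards distinctiveness, so distinctiveness disappears.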


The result is what linguists call "homogenization." When researchers study texts polished or generated by LLMs, including Reddit posts, academic abstracts, personal essays and news articles, they consistently find that the outputs converge in writing complexity and style.


Author characteristics that normally predict writing style, including personality, age, political leaning and professional background, become harder to detect. The models are averaging across all those voices into something that belongs to none of them.
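A rough way to see what "converging in style" means in practice is to compare average pairwise similarity of bag-of-words vectors across two small corpora. The example texts below are made up and real studies use far richer linguistic features, but the direction of the effect is the same: averaged copy scores high, distinct voices score low.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def mean_pairwise_similarity(texts: list[str]) -> float:
    """Average similarity across all pairs: higher means more homogeneous."""
    vecs = [Counter(t.lower().split()) for t in texts]
    pairs = list(combinations(vecs, 2))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

distinct = [
    "our kitchen burned the onions tonight and we liked it",
    "tasting menu update: regret first, dessert after",
    "no specials today because the chef is sulking",
]
averaged = [
    "discover our delicious seasonal menu crafted with passion",
    "discover our delightful seasonal dishes crafted with care",
    "discover our delicious seasonal menu made with passion",
]

print(mean_pairwise_similarity(distinct))   # low
print(mean_pairwise_similarity(averaged))   # high
```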


This is why origination sits outside the frontier by design. A model optimized for approval will never produce a take that most raters would reject. Brand voice, by contrast, is competitive precisely because it's specific, opinionated and sometimes uncomfortable.


The exact qualities that get smoothed away by RLHF are the ones that make content memorable, shareable and trust-building.


When Engagement Numbers Don't Lie


The cognitive science research explains the mechanism. A 2025 working paper by Liu, Wang and Yang studied AI's impact on marketing content using a natural experiment no researcher could have designed: in April 2023, Italy banned ChatGPT for a month over data privacy concerns. Restaurants in Milan, the treatment group, suddenly lost access to the tool.


Researchers compared their Instagram marketing content before, during and after the ban against control restaurants in other countries.


During the ban, Milanese restaurants' content became measurably more diverse, less similar in vocabulary, sentence structure, semantics and tone. When ChatGPT returned, the homogenization returned with it.


And here's the number that matters for marketing: the ban period was associated with approximately a 3.5% increase in average like counts, despite posts being shorter and less frequent.


Less AI and less polish produced more engagement.


The researchers are careful to note this is one industry and one short time window. But the mechanism it illustrates is consistent with the frontier model: when AI handles origination tasks at scale, audiences notice. Content that reads like it was written by a real person, even imperfectly, creates a different kind of connection than content optimized for acceptability.


When everyone's content sounds similarly professional, yours has no signal left. It simply becomes indistinguishable.


A Whole Industry Outside the Frontier


The cost compounds at category level. There's a concept from the homogenization research called the "diversity growth rate." It measures how much new conceptual territory each additional piece of content adds to a corpus.


Researchers studying 2,200 college admissions essays found that each additional human-written essay contributed meaningfully more new ideas than each additional GPT-4 essay, and this gap widened as the corpus grew. The more AI content you add, the less new thinking you get.
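The diversity growth rate idea can be sketched the same way: score each new piece by how far it sits from its closest match in the corpus so far. The `novelty_curve` function and the example texts are illustrative only, not the paper's actual method.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def novelty_curve(texts: list[str]) -> list[float]:
    """For each text after the first, 1 minus its similarity to its closest
    match in the corpus so far. A curve stuck near 0 means each new piece
    adds almost no new territory."""
    vecs = [Counter(t.lower().split()) for t in texts]
    return [1 - max(cosine(vecs[i], v) for v in vecs[:i])
            for i in range(1, len(vecs))]

templated = [
    "ai will transform your marketing strategy this year",
    "ai will transform your sales strategy this year",
    "ai will transform your content strategy this year",
]
varied = [
    "pricing pages lie and customers know it",
    "nobody reads your whitepaper past page two",
    "churn starts in onboarding, not at renewal",
]

print(novelty_curve(templated))  # small values: little new territory per post
print(novelty_curve(varied))     # values close to 1: each post opens new ground
```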


Now scale that out to an industry. Imagine every SaaS company using the same LLM to write their thought leadership. Every B2B brand drafting their email sequences with the same tools and default prompts. Every agency offering "AI-assisted content" built on the same model family.


What you get is ideological compression: a narrowing of the range of perspectives, framings and arguments being made across an industry. Individual pieces may be well-written. But the total intellectual diversity of the conversation contracts. The provocative argument never gets made. The counterintuitive take doesn't survive the drafting process. The genuine disagreement is softened into a balanced perspective that offends no one and moves no one.


A 36-participant study published at the ACM Conference on Creativity & Cognition found something especially telling: people who used ChatGPT for ideation not only produced fewer semantically distinct ideas than those using other tools, they also felt less responsible for the ideas they generated.


When your content team outsources origination to AI, they don't just get similar output. They lose skin in the game. The content stops being an expression of genuine conviction and becomes a plausible arrangement of expected things to say.


Readers feel this, even when they can't name it.


Model Collapse and the Poisoned Well


The frontier problem has a second, longer time horizon. What happens when AI-generated content becomes the training data for the next generation of models?


In 2024, researchers at the University of Oxford published a paper in Nature showing that training AI models on AI-generated content causes what they called "model collapse." The mechanism is statistical: AI models don't just replicate the most common patterns in their training data, they amplify them.


When you train a new model on content generated by a previous model, the rare, unusual and low-frequency ideas, the ones that create genuine diversity, disappear first. The models increasingly reflect and optimize for the center of the distribution at the expense of the edges.


Early model collapse is subtle: overall performance metrics may even improve, because the model gets better at producing the most common types of output. But the diversity of what it can produce quietly shrinks. Late model collapse is visible: the model starts generating incoherent, repetitive or contextually wrong outputs.
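The tail-loss mechanism can be demonstrated with a toy resampling loop standing in for "training on your own outputs." Everything here is invented for illustration: ideas are just integers with a long-tailed frequency distribution, and each "generation" samples only from the previous one, so a rare idea that misses one round is gone forever.

```python
import random

random.seed(0)

def next_generation(outputs: list[int], n: int) -> list[int]:
    """'Train' the next model on the previous model's outputs by resampling.
    An idea absent from the sample can never reappear downstream."""
    return [random.choice(outputs) for _ in range(n)]

# A long-tailed population of 300 distinct 'ideas': idea 0 is everywhere,
# idea 299 appears exactly once.
population = [idea for idea in range(300) for _ in range(300 - idea)]

generations = [population]
for _ in range(10):
    generations.append(next_generation(generations[-1], n=500))

diversity = [len(set(g)) for g in generations]
print(diversity)  # the count of surviving distinct ideas only ever shrinks
```

Note that per-generation output can look perfectly healthy, because the common ideas are reproduced faithfully; it's the edges of the distribution that quietly vanish.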


This matters for marketing in two ways.


First, the models you're using are already trained on a web that contains enormous amounts of AI-generated content from their predecessor models. The averaging process has been compounding for several cycles.


Second, the content you're publishing now will become part of the training data for future models, reinforcing the same patterns you're currently drawing on. The more the industry converges on AI-generated content as the norm, the more the norm contracts.


The researchers put it starkly: data about genuine human thought and voice will become both more valuable and scarcer.


Moving Back Inside the Frontier


A research paper by Ghods and Liu (preprint, 2024) made a pointed observation: LLM homogenization largely disappears when the model is given meaningful human context. 


When researchers provided LLMs with the beginning of a human author's creative trajectory, even a few sentences of distinctive voice and direction, the model's outputs became as diverse as human-generated alternatives.


The problem, the authors suggest, may be less about the models themselves and more about how they're being used. Empty prompts produce averaged outputs. Rich prompts, grounded in a specific point of view, produce something else.


The practical implication is concrete. Don't open a blank prompt and ask AI to write your brand's thought leadership on a topic. Do the origination work first: the specific observation drawn from your own data, the counterintuitive take your team debated last quarter, the thing your customers keep telling you that the industry hasn't acknowledged yet.


Then hand AI those raw materials to structure, extend or polish your thinking.


The Ceiling Remains Human


The AI homogenization research isn't an argument against using AI in content marketing. It's an argument about where AI sits on the content generation frontier. 


Evidence across cognitive science, linguistics, organizational behavior and marketing points in one direction: AI is inside the frontier on execution and outside it on origination. Which translates to: every brand in your category using the same tools is getting the same execution.


The ceiling remains a human job: the original insight, the memorable argument, the voice that actually sounds like someone. The brands that stand out in a content landscape saturated with AI-generated adequacy will be the ones who take this insight and run with it.



Agentic Foundry: AI For Real-World Results


Learn how agentic AI boosts productivity, speeds decisions and drives growth

— while always keeping you in the loop.



 
 
