You're staring at your Monday morning dashboards. Creative A shows 2.3% CTR on Facebook, Creative B shows 1.7%. On Google, it's the opposite - Creative B leads with 2.1% CTR.
Which creative is actually better?
If you're like most media managers, you'll make budget allocation decisions worth tens of thousands of pounds based on these numbers. But what if both dashboards are lying to you?
What if the creative with "worse" performance is actually superior for the 85% of your audience who matter most?
This isn't about platform differences or measurement discrepancies. It's about a fundamental attribution error that's quietly wasting millions in ad spend across the industry - and it's happening in your campaigns right now.
(Inspired by Peter Buckley's LinkedIn post on the MWC - the Minority Who Click)
"One of the reasons we're building Maaten is because we kept seeing brilliant creative campaigns labeled as failures. The problem wasn't the creative - it was the measurement. When you can't separate creative effectiveness from audience click-propensity, you're making million-pound decisions based on platform bias, not business reality."
— Neil Pursey, Co-founder, Maaten
When Creative A outperforms Creative B, you're not seeing creative effectiveness. You're seeing the combined effect of creative quality, the click-propensity of the audience each platform chose to serve, and the delivery bias of the algorithm doing the choosing.
Most attribution systems can't separate these variables. They show you the combined result and call it "creative performance."
Here's the problem: You're optimising for the wrong people.
According to Global Web Index, only 15% of people clicked on an ad last month. Yet WARC data shows that 56% of global ad spend uses click-based optimisation.
Do the math: We're spending more than half our budgets optimising for the Minority Who Click on everything.
The Minority Who Click aren't your typical customers. Research shows they're:
When you optimise for CTR, platform algorithms systematically favour users who click frequently. This creates the wrong feedback loop:
The result? Your "winning" creative might be the one that appeals to serial clickers, not actual buyers.
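To see how the loop compounds, here's a minimal Python sketch built on entirely hypothetical numbers: a small click-prone MWC segment, a click-averse majority, and a delivery rule that reallocates the next round's impressions toward whoever generated the clicks. Measured CTR climbs every round while sales per impression fall.

```python
# A sketch of the CTR feedback loop. All segment shares and rates below are
# hypothetical assumptions chosen for illustration, not measured figures.

MWC      = {"share": 0.15, "ctr": 0.06, "purchase_per_click": 0.01}
MAJORITY = {"share": 0.85, "ctr": 0.01, "purchase_per_click": 0.10}

mix = {"mwc": MWC["share"], "majority": MAJORITY["share"]}  # impression share

for round_no in range(4):
    ctr = mix["mwc"] * MWC["ctr"] + mix["majority"] * MAJORITY["ctr"]
    sales_per_impression = (
        mix["mwc"] * MWC["ctr"] * MWC["purchase_per_click"]
        + mix["majority"] * MAJORITY["ctr"] * MAJORITY["purchase_per_click"]
    )
    print(f"round {round_no}: MWC share {mix['mwc']:.0%}, "
          f"CTR {ctr:.2%}, sales per impression {sales_per_impression:.3%}")

    # The 'optimisation' step: next round's impressions follow the clicks,
    # which is exactly how delivery drifts toward the Minority Who Click.
    clicks_mwc = mix["mwc"] * MWC["ctr"]
    clicks_majority = mix["majority"] * MAJORITY["ctr"]
    total_clicks = clicks_mwc + clicks_majority
    mix = {"mwc": clicks_mwc / total_clicks,
           "majority": clicks_majority / total_clicks}
```

Run it and the dashboard metric improves every round while the commercial one degrades. The exact figures don't matter; the direction of the drift does.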
Let's say you're testing two creatives:
Creative A: Professional product shot, benefit-focused copy
Creative B: Lifestyle imagery, brand positioning
Reality check: Creative A appeals to MWC (older, click-prone users). Creative B appeals to your actual target audience (younger, click-averse but high-value buyers).
The platform algorithm, optimising for CTR, increasingly shows Creative A to easy-to-click audiences while Creative B reaches genuine prospects who don't click but do convert.
This bias explains why creative performance rankings vary wildly across platforms:
You're not seeing creative effectiveness. You're seeing platform-specific audience bias.
Performance Symptoms:
Operational Symptoms:
Strategic Symptoms:
Consider a £100,000 monthly budget split 70/30 based on CTR performance:
If the MWC creative drives 2x the clicks but half the customer lifetime value, you're:
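Whatever the precise downstream costs, the arithmetic is easy to sketch in Python. The 2x-clicks and half-the-CLV figures come from the scenario above; the cost per click and click-to-customer rates are hypothetical assumptions added so the numbers resolve.

```python
# A worked sketch of the £100,000 split. '2x clicks per pound' and 'half the
# customer lifetime value' come from the scenario above; CPC and click-to-
# customer rates are hypothetical assumptions for illustration only.

BUDGET = 100_000  # £ per month

creatives = {
    #                cost/click  click->customer  lifetime value per customer (£)
    "mwc_winner":  {"cpc": 1.0, "cvr": 0.01, "clv": 100},  # 2x clicks/£, half CLV
    "real_winner": {"cpc": 2.0, "cvr": 0.04, "clv": 200},
}

def total_value(split):
    """Customer lifetime value generated by a {creative: budget share} split."""
    value = 0.0
    for name, share in split.items():
        c = creatives[name]
        clicks = BUDGET * share / c["cpc"]
        value += clicks * c["cvr"] * c["clv"]
    return value

ctr_led   = {"mwc_winner": 0.70, "real_winner": 0.30}  # split by dashboard CTR
value_led = {"mwc_winner": 0.30, "real_winner": 0.70}  # split by customer value

print(f"CTR-led 70/30 split:   £{total_value(ctr_led):,.0f} in lifetime value")
print(f"Value-led 30/70 split: £{total_value(value_led):,.0f} in lifetime value")
```

Under these assumptions, the CTR-led split leaves roughly £120,000 of monthly lifetime value on the table. Your own CPCs and conversion rates will give different absolutes, but the same shape.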
While you optimise for easy clicks, competitors who understand true creative attribution are:
Over time, MWC optimisation pushes creative strategy toward:
Most A/B tests change two things simultaneously: the creative itself and the audience mix the algorithm delivers each variant to.
You can't isolate creative impact when audience composition varies between test cells.
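To make that concrete, here's a small Python illustration with hypothetical numbers: Creative B earns the higher CTR in both audience segments, yet the blended dashboard figure crowns Creative A, purely because the algorithm delivered far more of A's impressions to the click-prone minority (Simpson's paradox applied to ad delivery).

```python
# Hypothetical numbers: Creative B beats Creative A within EVERY segment,
# but the aggregate CTR reverses the ranking because each creative reached
# a different audience mix.

segment_ctr = {                     # assumed click-through rate per segment
    "A": {"mwc": 0.050, "majority": 0.008},
    "B": {"mwc": 0.060, "majority": 0.010},   # better in both segments
}
impression_mix = {                  # assumed share of each creative's delivery
    "A": {"mwc": 0.40, "majority": 0.60},
    "B": {"mwc": 0.10, "majority": 0.90},
}

for creative in ("A", "B"):
    blended = sum(impression_mix[creative][seg] * segment_ctr[creative][seg]
                  for seg in ("mwc", "majority"))
    print(f"Creative {creative}: blended CTR {blended:.2%}")
# Creative A: 2.48%, Creative B: 1.50% -- the dashboard crowns the wrong ad.
```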
Each platform's algorithm has different MWC populations:
Cross-platform creative insights become meaningless when each platform delivers different audience compositions.
Traditional creative testing only measures people who engage. The 85% who see your ad but don't click are invisible in your data - even though they might be your most valuable prospects.
CTR-based optimisation prioritises immediate clicks over:
Answer these five questions about your current creative testing:
Scoring:
The issue isn't that click-based metrics are wrong. It's that they're incomplete. CTR tells you about attention and engagement, but not about:
The fix requires separating creative effectiveness from audience effects by:
This isn't about abandoning performance marketing. It's about measuring performance accurately so you can optimise for customers who matter, not just customers who click.
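One way to picture what separating creative effectiveness from audience effects can look like (a sketch under assumptions, not a prescribed method): compare creatives segment by segment, then re-weight each creative's per-segment results to the same reference audience mix, so audience composition can no longer decide the winner. The segment CTRs below reuse the hypothetical numbers from the earlier sketch.

```python
# A sketch of audience-adjusted comparison: hold the audience mix constant and
# ask what CTR each creative would post against the SAME reference audience.
# Segment CTRs are the same hypothetical figures used in the earlier sketch.

REFERENCE_MIX = {"mwc": 0.15, "majority": 0.85}   # your audience, not the platform's delivery

segment_ctr = {
    "A": {"mwc": 0.050, "majority": 0.008},
    "B": {"mwc": 0.060, "majority": 0.010},
}

def audience_adjusted_ctr(creative):
    """Re-weight per-segment CTRs to the reference audience mix."""
    return sum(REFERENCE_MIX[seg] * segment_ctr[creative][seg]
               for seg in REFERENCE_MIX)

for creative in ("A", "B"):
    print(f"Creative {creative}: audience-adjusted CTR {audience_adjusted_ctr(creative):.2%}")
# With the audience held constant, Creative B wins: 1.75% vs 1.43%.
```

The same re-weighting idea extends to conversion and value metrics, not just CTR.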
This creative attribution problem is solvable, but it requires a different approach to testing and measurement. The solution involves three key changes:
Understanding the problem is the first step. The next step is implementing controlled creative testing that separates creative effectiveness from audience bias.
In our next article, we'll walk through the step-by-step framework for fixing creative attribution in your campaigns - including specific platform settings, measurement approaches, and common implementation mistakes to avoid.
Ready to audit your current creative attribution? Start by answering the five questions above and identifying which campaigns show the strongest symptoms of MWC bias. Those campaigns are your best candidates for implementing controlled creative testing.
The 85% of your audience who don't click are still seeing your ads. The question is: are you creating content that resonates with them, or just with the minority who click on everything?