At some point in most QBRs, someone asks why the numbers don't add up. Meta says it drove 150 conversions. Google says 120. TikTok claims 80. Actual revenue reflects 200 transactions. The room goes quiet. Someone blames the tracking.
The tracking is not broken. This is how platform attribution is designed to work. Meta's default attribution window credits any purchase made within seven days of a click or one day of a view. Google's data-driven model distributes credit across Google touchpoints — and only Google touchpoints. TikTok runs its own clock entirely. None of these platforms can see what the others are doing. All of them declare victory anyway.
The scale of this is documented. Meta reports 26% more conversions on average than third-party analytics tools, driven by modeled conversions and view-through attribution. Google Ads over-attributes by 15–20% when Enhanced Conversions or Consent Mode V2 applies modeled data. A systematic analysis across 792 marketing mix models found that platforms over-report their own performance by 1.2x to 2.3x on average — with extreme cases exceeding 4x.
The strategic consequence is more serious than a reporting discrepancy. Budget decisions made on platform-reported ROAS systematically over-invest in channels that are loudest about their own contribution and under-invest in the channels doing the quieter work of creating demand. Over time, spend concentrates at the bottom of the funnel, top-of-funnel programs get cut for underperforming, and growth slows — while every individual dashboard continues to look healthy.
Building a neutral view is not a measurement team problem. It is a strategic architecture decision — one that determines whether marketing spend is being evaluated against business outcomes or against platform-authored accounts of their own contribution.
Why Platform Attribution Cannot Be Its Own Judge
The structural problem with platform attribution is not inaccuracy. It is conflict of interest.
Cross-channel attribution is supposed to solve exactly this — pinpointing which channels are pulling their weight across the full customer journey, not just the ones that happened to fire a pixel last. But that only works when the attribution layer sits outside the platforms being measured. When Meta is both the channel running your ads and the system telling you how well those ads performed, you are not getting measurement. You are getting a platform-authored account of its own contribution.
The evidence is consistent. Google's attribution models are optimized to give Google channels more credit than they may deserve — because Google benefits commercially from overstating the value of Google properties. Meta's system credits a sale to Meta even when the user only viewed an ad without clicking, as long as the conversion happens within the attribution window. Both approaches are technically defensible within each platform's own rules. Neither produces an unbiased picture of what actually drove revenue.
Platform dashboards were built to justify continued ad spend, not to surface portfolio-level problems.
The mechanics of inflation are well documented. When a user clicks a Meta ad on Monday, searches for the brand on Google on Thursday, and converts on Friday, both platforms count the full conversion. Meta counts it because the purchase falls within the seven-day click window. Google counts it because a search ad was clicked before conversion. One purchase. Two full-credit claims. TikTok may have served the video that started the whole journey — and gets nothing.
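To make the double counting concrete, here is a back-of-the-envelope sketch using the figures from the opening QBR example. The variable names are illustrative, not a real reporting schema; the point is only that summing platform-reported conversions measures claims, not transactions.

```python
# Illustrative only: the numbers come from the opening QBR example.
platform_claims = {"meta": 150, "google": 120, "tiktok": 80}
actual_transactions = 200  # from the order system, not from any ad platform

claimed_total = sum(platform_claims.values())             # 350 claimed conversions
over_claim_factor = claimed_total / actual_transactions   # 1.75x

print(f"Platforms claim {claimed_total} conversions against "
      f"{actual_transactions} real transactions ({over_claim_factor:.2f}x).")
```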
The problem is also accelerating. On January 12, 2026, Meta permanently removed its 7-day view and 28-day view attribution windows from the Ads Insights API — a change announced in October 2025 that most advertisers missed. Industry analysis puts the conversion drop at 15–30% for accounts that relied on those longer windows. Then in March 2026, Meta reclassified what counts as a click: likes, shares, and saves no longer trigger the 7-day click window. Only link clicks do. If your Meta numbers look worse than Q4 2025 without an obvious performance reason, you are likely looking at a measurement shift, not a channel decline.
Platform attribution also has a structural blind spot: it can only assign credit to the touchpoints it can see. Meta's pixel does not know the user received an email that morning. Google's tag does not know the user watched a TikTok ad three days earlier on a different device. Each platform fills the gaps with assumptions that favor its own contribution, because the alternative — reporting a lower conversion count — is not commercially useful to them.
Over time, this produces a predictable misallocation. Budgets shift toward harvesting existing demand rather than creating new demand, because the channels that build awareness are invisible to the models that claim credit for conversion. Brand marketing can account for up to 60% of long-term sales growth, yet these effects rarely appear in any attribution dashboard.
The Three Layers of a Neutral Measurement System
A neutral measurement system is not a single tool swap. It is three complementary approaches — each answering a different question — that together produce a picture of performance no individual platform can distort.
Layer 1: Marketing Efficiency Ratio — The Platform-Agnostic Floor
Before building any attribution model, establish a baseline that exists entirely outside any platform's reporting. Marketing Efficiency Ratio (MER) — total revenue divided by total ad spend across all channels — gives you a single number that no attribution window can inflate and no platform controls.
MER does not tell you which channel drove what. That is precisely the point. Unlike ROAS, which is platform-reported and attribution-dependent, MER includes all revenue regardless of how or where the conversion was attributed. It is the ground truth your channel-level data gets tested against, not the other way around.
When blended MER holds steady but platform-reported ROAS climbs, platforms are claiming more credit for the same business outcomes. When MER drops while individual dashboards look healthy, something your measurement system cannot see is breaking down. That divergence is a signal. The platforms will not surface it for you.
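As a sketch of how this check can run in practice, the snippet below computes blended MER from order-system revenue and compares it with the revenue the platforms collectively claim through their reported ROAS. It assumes you can pull spend and reported ROAS per channel; the field names and figures are illustrative, not a specific connector schema.

```python
from dataclasses import dataclass

@dataclass
class ChannelReport:
    spend: float           # total ad spend for the period
    reported_roas: float   # ROAS as the platform itself reports it

def blended_mer(total_revenue: float, channels: dict) -> float:
    """MER: all revenue divided by all ad spend, regardless of attribution."""
    total_spend = sum(c.spend for c in channels.values())
    return total_revenue / total_spend

def claim_ratio(total_revenue: float, channels: dict) -> float:
    """Revenue the platforms collectively claim vs. revenue that exists.
    A ratio well above 1.0 signals overlapping credit claims."""
    claimed = sum(c.spend * c.reported_roas for c in channels.values())
    return claimed / total_revenue

# Illustrative period: revenue comes from the order system, not a platform.
channels = {
    "meta":   ChannelReport(spend=40_000, reported_roas=4.2),
    "google": ChannelReport(spend=35_000, reported_roas=3.8),
    "tiktok": ChannelReport(spend=15_000, reported_roas=2.5),
}
revenue = 250_000

print(f"Blended MER: {blended_mer(revenue, channels):.2f}")
print(f"Platform-claimed vs. actual revenue: {claim_ratio(revenue, channels):.2f}x")
```

Tracked over time, a flat MER alongside a rising claim ratio is exactly the divergence described above.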
This is one of the gaps Prism's Campaign Portfolio is built to close — holding platform ROAS and blended MER in the same view across Meta, Google, and TikTok, so the gap between what platforms claim and what revenue confirms is visible where decisions actually get made, not three weeks later during reconciliation.
Layer 2: Media Mix Modeling — The Channel-Level Strategic View
MER tells you the health of the portfolio. It does not tell you how to allocate within it. That is the role of media mix modeling.
MMM works from aggregated revenue and spend data over time, using statistical modeling to estimate each channel's contribution without relying on user-level tracking. Because it operates entirely outside the platforms, it is structurally immune to attribution window disputes, cross-device gaps, and the commercial incentives that make platform reporting unreliable.
It also captures effects that click-based attribution cannot see at all: the lag between a brand awareness campaign and the branded search volume it generates weeks later, or the revenue contribution of channels that never produce a last click. Organizations using MMM for planning commonly improve marketing efficiency by 10–20% through more accurate budget allocation.
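For readers who want the intuition in code, the sketch below is a deliberately simplified MMM: ordinary least squares on adstock-transformed weekly spend, with synthetic data standing in for a real revenue series. A production model would add saturation curves, seasonality, and uncertainty estimates; the channel names and decay rates here are assumptions for illustration.

```python
import numpy as np

def adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Carry part of each week's effect into later weeks, so the model can
    see lagged impact that click-based attribution never captures."""
    out = np.zeros(len(spend))
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

# Synthetic weekly data for illustration: spend per channel plus a revenue
# series with a known baseline, so the recovered coefficients are checkable.
rng = np.random.default_rng(0)
weeks = 104
spend = {
    "meta":   rng.uniform(20_000, 60_000, weeks),
    "google": rng.uniform(15_000, 50_000, weeks),
    "tiktok": rng.uniform(5_000, 20_000, weeks),
}
decays = {"meta": 0.3, "google": 0.1, "tiktok": 0.5}
revenue = (
    100_000                                       # baseline (non-media) demand
    + 1.8 * adstock(spend["meta"], decays["meta"])
    + 2.4 * adstock(spend["google"], decays["google"])
    + 1.1 * adstock(spend["tiktok"], decays["tiktok"])
    + rng.normal(0, 15_000, weeks)                # noise
)

# Fit: intercept plus adstocked spend per channel. In practice the decay
# rates are estimated (e.g. via grid search), not assumed.
X = np.column_stack(
    [np.ones(weeks)] + [adstock(spend[ch], decays[ch]) for ch in spend]
)
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)

for ch, beta in zip(spend, coef[1:]):
    print(f"{ch}: estimated revenue per adstocked spend dollar = {beta:.2f}")
```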
The limitation is latency. MMM is a strategic instrument, not a day-to-day optimization tool. It produces directional guidance on budget allocation — which channels to scale, which to stress-test, where concentration creates risk. For in-channel decisions, you still need platform-level data. You just hold it with appropriate skepticism about what it actually represents. Reconciling the discrepancies between Google Ads and GA4 — before MMM even enters the picture — is a useful first diagnostic step.
Layer 3: Incrementality Testing — The Causal Validator
Incrementality testing answers the question that neither platform attribution nor MMM can answer cleanly: would this revenue have happened without this spend? By holding out a portion of audience or geography from ad exposure and comparing outcomes, it isolates the causal contribution of a channel — not the correlational one that shows up in platform dashboards.
The distinction matters more than most attribution conversations acknowledge. Attribution tracks which channels and campaigns were present when a conversion occurred. Incrementality tests whether those channels actually caused the conversion — or whether the customer would have converted regardless. The difference between those two questions is the difference between a budget built on evidence and one built on platform-authored narrative.
At the enterprise level, the value is not running incrementality tests on every campaign. It is using them selectively to calibrate the other two layers. If MMM says a channel is contributing X to revenue and incrementality returns 0.6X, you now have a correction factor to apply across future planning without re-testing every dollar. A/B testing, geo-testing, and holdout groups are all viable approaches depending on channel mix and available sample size.
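As a sketch of that calibration step, the snippet below turns a geo holdout into an incremental share and applies it to an MMM estimate, echoing the 0.6X example above. The group sizes, conversion counts, and dollar figures are made up for illustration; a real test would add significance and power checks.

```python
def incremental_share(test_conversions: int, test_size: int,
                      control_conversions: int, control_size: int) -> float:
    """Share of exposed-group conversions that would not have happened
    without the ads: (test rate - control rate) / test rate."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    return (test_rate - control_rate) / test_rate

# Illustrative geo holdout: exposed regions vs. held-out regions.
share = incremental_share(
    test_conversions=1_200, test_size=400_000,
    control_conversions=460, control_size=380_000,
)

# Calibration: scale the MMM-estimated contribution by the measured
# incremental share before it feeds the next planning cycle.
mmm_estimate = 2_000_000
calibrated = mmm_estimate * share

print(f"Incremental share: {share:.0%}")
print(f"MMM estimate ${mmm_estimate:,.0f} -> calibrated ${calibrated:,.0f}")
```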
The gap between platform-reported and true incremental performance varies significantly by advertiser and cannot be borrowed from a benchmark. Meta's incremental ROAS across advertisers spans from under 1x to over 4x — meaning the only way to know where your business sits is to measure it directly.
The three layers together — MER as the floor, MMM for strategic allocation, incrementality for causal validation — form a measurement architecture that does not depend on any single platform's account of itself. Three out of four enterprise marketers say their current measurement approaches are not delivering the accuracy or trust they need. The gap is not in data availability. It is in architecture.
Building the Unified Portfolio View — and Where Prism Fits
The three-layer architecture solves the strategic measurement problem. The operational problem is different: you need a place where cross-platform performance lives that is not inside any of the platforms.
Not a Meta dashboard with Google data imported as an afterthought. Not a Google Analytics view that attributes everything it can to the last Google-adjacent click. A genuinely unified view that holds campaign data across Meta, Google, and TikTok in the same frame, measured against the same business outcomes, with no single platform controlling the narrative. Running spend across multiple platforms without that unified view means optimizing each channel in isolation — which is precisely the condition that lets platform-reported numbers go unchallenged.
The questions a unified portfolio view makes answerable are simply invisible from inside any individual platform. Which channel mix produced the highest blended MER last quarter — and where does that diverge from platform-reported ROAS? Where is budget concentration creating single-platform dependency risk? Which campaigns are generating overlapping attribution claims for the same conversions? Where is spend generating real incremental demand versus harvesting intent that already existed?
Prism's Campaign Portfolio addresses this directly. By pulling Meta, Google, and TikTok campaign data into a single unified view — with cross-platform ROAS comparison running across all three simultaneously — it makes visible what platform-siloed reporting structurally cannot show: how channels interact, where they overlap, and which ones are actually moving the MER number. The Cross-Platform Analysis layer surfaces these interactions without requiring a manual data pull from each platform.
Scheduled Workflows let teams automate cross-platform performance reviews rather than waiting for someone to manually notice a divergence. Brand Knowledge ensures the analysis is applied consistently against the same business context each time, not recalibrated from scratch per report. And because Prism supports full Meta action execution — with Google and TikTok coming in Q2–Q3 2026 — the gap between insight and allocation decision can close within the same workflow.
The underlying principle: platform dashboards were not built to measure marketing's contribution to revenue. They were built to justify continued spend on that platform. A unified portfolio view inverts that relationship — every channel's claimed contribution has to survive contact with revenue reality before it influences how budget moves.
How CMOs Build the Accountability Framework
The goal is not perfect attribution. Perfect attribution does not exist, and the pursuit of it is how marketing organizations end up with six measurement tools, three competing reports, and no agreement on which number to take into the board meeting.
The goal is a measurement framework where no individual platform's self-reported numbers can move a budget unilaterally — where every channel's claimed contribution has to be corroborated by actual revenue before it changes allocation. That accountability structure is a leadership decision before it is an analytics one.
In practice, this means establishing clear ownership of each measurement layer. MER is a finance-adjacent metric — it belongs on the same dashboard as revenue and margin, not buried in the media team's weekly report. When MER and platform ROAS diverge materially, that divergence gets escalated, not explained away. The platforms will always have a reason their number is right. MER does not negotiate.
MMM becomes the basis for annual and quarterly budget allocation decisions — replacing the common pattern of allocating by last year's platform performance plus a growth percentage. Channel-level ROAS targets get set against what MMM says each channel's true contribution is, not against what the platform reports. The channel-level mechanics of setting those targets across Meta, Google, and TikTok are the execution layer beneath this strategic framework — how you operationalize the architecture once the accountability structure is in place.
Incrementality testing gets built into the budget review cadence. Before any channel receives a significant budget increase, the incremental lift case has to be made — not the platform ROAS case. This shifts the burden of proof from the marketing team defending a number the platform gave them to the platform proving its claimed contribution holds up under scrutiny.
The result is an organization where platforms are used for what they are genuinely good at — in-channel optimization, audience signals, creative testing — without being allowed to be the judge of their own contribution to revenue.
The Platforms Will Keep Grading Their Own Homework
Meta's attribution model is not going to start crediting Google for the conversions Google drove. Google's model is not going to surface TikTok's contribution to branded search volume. These are not bugs that will be patched. They are features of how platforms are built and how they are commercially incentivized.
The question for marketing leadership is not whether platform attribution is biased — it is. The question is whether the measurement architecture sitting above it is strong enough that the bias cannot move budget on its own. MER as the floor, MMM for strategic allocation, incrementality for causal validation, and a unified portfolio view that no single platform controls — that is the architecture.
The organizations building durable advantage in media measurement are the ones that decided this was a leadership accountability question before they decided it was a data question. The data problems are solvable. The harder problem is building a culture where platform numbers are treated as inputs to a decision, not the decision itself.
