For about eight months, I've been trying to build a content programme that compounds. I'm going to be upfront about where things stand: it's harder than the guides make it look, some approaches I was confident about haven't worked, and I'm still in the middle of it.
This piece isn't a playbook. It's closer to a progress report: what the data says about why content programmes fail to compound, what I've tried in response, what broke, and where I'm beginning to see patterns that point somewhere useful. I'll also share, honestly, where I think tools like Pixis Visibility might fit into this picture, which means naming both where they seem to be working and where the jury is still out.
What the Data Says About Why Content Programmes Stall
The numbers here are uncomfortable. Over 90% of web pages receive zero organic traffic — and the data consistently points to the same culprits: weak topical authority and broken execution chains, not weak writing.
Only 46% of marketers believe they measure content performance accurately, and only 29% track revenue attribution from content at all. Most content programmes are running without clear feedback loops, which means problems compound silently while output keeps increasing.
The compounding effect, when it works, is real. Sites sustaining cluster publishing for 12+ months see 40% higher organic traffic than comparable single-page strategies. Publishing at least 25 interconnected articles within a tightly connected content cluster is associated with a 40–70% increase in keyword rankings within 3–6 months. The strategy works — when executed consistently. The question is why consistent execution is so difficult to achieve in practice.
What I've Tried — And What Broke
I started where most people start: build a topical map, define clusters, write a pillar page, populate cluster content around it. The map was clean. The logic held. And then execution came apart in three specific places.
Internal linking — messier than expected
The theory is well understood: bidirectional linking between pillar and cluster pages passes PageRank and tells search engines how pages relate. In practice, pages went live and linking happened inconsistently. Writers added links when they remembered. Editors caught some but not all. Previously published pages didn't get updated when new cluster content went live.
Three months in, an audit showed that roughly half the cluster pages had the linking structure they were supposed to have. The topical map showed a well-connected architecture. The actual site showed something more fragmented.
Adding a pre-publish linking checklist helped. It's still manual and still gets skipped sometimes. This remains an open problem.
What to try: Before any cluster page publishes, audit the five most closely related existing pages and confirm bidirectional links are present. It takes about ten minutes and is probably the highest-return structural action in most content programmes. For a deeper look at how internal linking fits into a broader search strategy, this piece on SEO, GEO, and AEO covers how the technical foundations support everything built on top.
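To make that concrete, here's a minimal sketch of what the pre-publish check can look like, assuming you can export each page's outbound internal links from a crawl as a simple mapping. Every name and URL here is illustrative, not any particular crawler's API:

```python
# Minimal pre-publish check for bidirectional internal links.
# Assumes outbound links per page were exported from a crawl
# (e.g. a crawler export massaged into a dict). All URLs and
# names below are illustrative.

def missing_bidirectional_links(new_page, related_pages, outbound_links):
    """Report which related pages are missing a link in either direction.

    new_page       -- URL of the page about to publish
    related_pages  -- the ~5 most closely related existing URLs
    outbound_links -- dict mapping each URL to the set of URLs it links to
    """
    new_page_links = outbound_links.get(new_page, set())
    needs_link_from_new = [p for p in related_pages if p not in new_page_links]
    needs_link_to_new = [p for p in related_pages
                         if new_page not in outbound_links.get(p, set())]
    return needs_link_from_new, needs_link_to_new


outbound_links = {
    "/pillar/content-clusters": {"/cluster/internal-linking"},
    "/cluster/internal-linking": {"/pillar/content-clusters"},
    "/cluster/entity-seo": set(),  # older page, never updated
}
related = ["/pillar/content-clusters", "/cluster/entity-seo"]

outgoing, incoming = missing_bidirectional_links(
    "/cluster/freshness-decay", related, outbound_links)
print("New page still needs links to:", outgoing)
print("Existing pages that need updating to link back:", incoming)
```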
Entity consistency — a briefing problem worth taking seriously
Search engines in 2026 evaluate content at the entity level, not just the keyword level. 76% of marketers say the SERP has shifted to an AI-generated answer layer — and AI systems select sources based on how coherently a site covers a topic, not just how well individual pages are optimised.
What emerged across the cluster was that writers working from different briefs were using slightly different terminology for the same concepts. Not dramatically different — but enough to dilute the semantic signal. A concept defined one way in the pillar page appeared in slightly varied forms across cluster pages.
Adding an entity glossary to briefs — a one-page document defining core terms for the cluster, attached to every brief — made a noticeable difference. It's probably the highest-return change made so far. The challenge is maintaining it as the programme scales. When production accelerates, that maintenance slips.
What to try: Before the first brief in any cluster gets written, spend thirty minutes defining core entities: name them, define them, specify how they should appear across every page. It prevents an expensive editing problem downstream.
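One way to keep the glossary enforceable as production scales is to store it as data and lint every draft against it before editing. A minimal sketch, assuming a hypothetical glossary; the canonical terms and drift variants would be whatever your cluster actually defines:

```python
import re

# Hypothetical entity glossary: canonical term -> variants writers
# tend to drift into. Keeping it machine-readable means drafts can
# be checked automatically instead of relying on editors' memory.
GLOSSARY = {
    "topical authority": ["topic authority", "topical relevance"],
    "content cluster": ["topic cluster", "content hub"],
    "pillar page": ["hub page", "cornerstone page"],
}

def find_entity_drift(draft_text):
    """Flag non-canonical variants in a draft before it reaches editing."""
    hits = []
    for canonical, variants in GLOSSARY.items():
        for variant in variants:
            for match in re.finditer(re.escape(variant), draft_text, re.IGNORECASE):
                hits.append((match.start(), variant, canonical))
    return sorted(hits)

draft = "Building topic authority starts with a well-linked content hub."
for pos, variant, canonical in find_entity_drift(draft):
    print(f"char {pos}: found '{variant}', glossary says '{canonical}'")
```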
Freshness tracking — the part that's hardest to do well
This is where the gap between knowing and doing is most visible. Only 5.7% of all pages reach the top 10 within one year of publication, and pages that do reach it face ongoing decay pressure as SERP formats shift with core updates.
Not all content decays at the same rate. High-competition informational pages targeting head terms lose ground faster than long-tail cluster pages covering specific subtopics with low competition. The approach so far has been reactive: notice a page dropping, update it. What's missing is a proactive tiering system that flags which pages are most at risk before the drop happens.
A spreadsheet-based approach worked for about six weeks and then went unmaintained. A more systematic solution hasn't been found yet.
What to try: Tier content into three buckets — high decay risk (head terms, competitive clusters, review monthly), medium decay risk (mid-tail informational, review quarterly), low decay risk (long-tail, low competition, review every six months). Even an imperfect version of this recovers more performance than most teams expect.
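Even a rough version of that tiering can live in a few lines of code rather than a spreadsheet that drifts out of date. A minimal sketch, using the intervals above; the page list and dates are placeholders for whatever your CMS or analytics export provides:

```python
from datetime import date, timedelta

# Review intervals matching the three tiers described above.
REVIEW_INTERVALS = {
    "high": timedelta(days=30),    # head terms, competitive clusters
    "medium": timedelta(days=90),  # mid-tail informational
    "low": timedelta(days=180),    # long-tail, low competition
}

# Placeholder inventory; in practice this comes from a CMS or crawl export.
pages = [
    {"url": "/pillar/content-clusters", "tier": "high", "last_reviewed": date(2025, 11, 1)},
    {"url": "/cluster/entity-seo", "tier": "medium", "last_reviewed": date(2025, 9, 15)},
    {"url": "/cluster/linking-checklists", "tier": "low", "last_reviewed": date(2025, 6, 1)},
]

def pages_due_for_review(pages, today=None):
    """Return pages whose tier's review interval has elapsed."""
    today = today or date.today()
    return [p for p in pages
            if today - p["last_reviewed"] >= REVIEW_INTERVALS[p["tier"]]]

for page in pages_due_for_review(pages):
    print(f"{page['url']} ({page['tier']} decay risk) is due for a freshness review")
```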
The Patterns Starting to Emerge
Despite the above, something is working: not at the scale intended, but the directional signal is there. Pages where all three conditions were met consistently (internal linking completed at publish, entity consistency enforced at the brief stage, freshness tracked even roughly) are outperforming pages where only one or two were met.
That tracks with what SearchAtlas found across 400+ SEO campaigns: sites focusing on topical authority first see ranking gains up to 3x faster than those prioritising domain authority alone. The compounding effect is real. It just requires all three structural conditions to be present simultaneously, which is where most programmes are inconsistent.
The measurement gap on AI citations is the other pattern worth naming. Only 14% of marketers are currently tracking AI and LLM citation visibility — the fastest growing source of first-touch discovery. AI-driven referrals grew over 300% in 2025, with ChatGPT referrals to websites growing 367% across the year according to Euromonitor. The measurement infrastructure most content teams have — organic traffic, keyword rankings, CTR — doesn't capture any of that.
Understanding how SEO, GEO, and AEO work together is increasingly the foundation of any serious search strategy. This breakdown of how the three disciplines relate is a useful starting point if you're mapping out where each one starts and stops.
Where Pixis Visibility Fits — And Where the Question Is Still Open
The execution problems described above — inconsistent internal linking, entity drift, reactive freshness tracking, no AI citation visibility — are all symptoms of the same underlying issue: the execution chain is fragmented. Brief generation, writing, publishing, interlinking, and measurement happen across separate tools and separate people, with manual handoffs between each stage. At low volume that's manageable. At scale it breaks.
Pixis Visibility is trying to connect that chain. On the SEO side: surface keyword gaps from GSC data, generate briefs, write and publish directly to CMS, track ranking movement from day one. On the GEO side: monitor brand appearance across ChatGPT, Perplexity, and Google AI Overviews — running each canonical prompt multiple times across all three platforms because generative engines don't return the same answer twice — and show exactly where competitors are being cited instead. For a practical breakdown of how the GEO execution workflow actually runs, this guide on getting cited by ChatGPT covers it step by step.
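To show the shape of that repeated-sampling idea (and to be clear, this is not how Pixis Visibility works internally), here's a rough sketch using the OpenAI Python client. The model name, prompt, run count, and brand domain are all assumptions, and substring-matching the answer text is a crude proxy for a real citation check:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND_DOMAIN = "example.com"  # hypothetical brand being tracked
CANONICAL_PROMPTS = [
    "What are the best tools for tracking AI search visibility?",
]
RUNS_PER_PROMPT = 5  # generative engines don't return the same answer twice

for prompt in CANONICAL_PROMPTS:
    mentions = 0
    for _ in range(RUNS_PER_PROMPT):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content or ""
        # Crude citation proxy: does the brand's domain appear in the answer?
        if BRAND_DOMAIN in answer.lower():
            mentions += 1
    print(f"{prompt!r}: brand cited in {mentions}/{RUNS_PER_PROMPT} runs")

# Perplexity exposes an OpenAI-compatible API, so the same loop can be
# pointed at it by constructing the client with a different base_url.
```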
On the pipeline and publishing side, connecting the brief-to-publish sequence solves a real problem: the one I've been approximating manually and repeatedly getting wrong. Whether it fully solves the entity consistency problem at scale, where briefs need to enforce consistent terminology across an entire cluster, is something I'm still watching. That's the hardest condition to automate without degrading quality, and no tool has shown a fully clean answer yet.
The GEO monitoring side is where my interest is highest. The measurement gap is real, the shift in how people discover content is real, and having visibility into AI citations rather than just organic rankings addresses something most content teams are flying blind on. Whether the content Visibility generates performs well enough to actually move citation numbers is the question to come back and answer with real data over the next few months.
Where This Lands
Building a content programme that compounds is harder than the guides make it look. SEO has the strongest compounding effect of all marketing channels when done right — but "done right" requires execution consistency across three structural conditions simultaneously, and most programmes are inconsistent on at least one of them.
The three things most worth paying attention to, based on what's been learned so far: entity glossaries at the brief stage deliver the highest return for the effort required; proactive freshness tiering matters more than most teams act like it does; and the measurement gap on AI citations will only get more expensive to ignore as AI search continues to grow.
This is an ongoing process. A follow-up piece with actual performance data from the Visibility pipeline is planned for later in the year — at which point there will be something more definitive to say about whether the execution loop is closing the way the theory suggests it should.
If you're thinking about the execution layer of your content programme, here's your sign to go book a demo and find out for yourself today!

