Field Notes: How I Use Our Performance Marketing AI for Real Client Work

I am the ‘human in the loop’ people are referring to when they talk about AI-powered marketing workflows.
My job is to advise B2C brands - our customers - on how to get as much value as possible out of our AI models.
Our AI platform handles a lot of the bid and budget optimizations, uncovers new target audiences, and recommends creative variations to test. It can see across channels to unlock insights that would normally stay hidden inside the black-box models on offer from Google and Facebook.
It does all of that almost as a matter of course. For an AI, those are business-as-usual operations.
But somebody needs to ask the out-of-the-box questions, find ways to get the answers, evaluate ad campaign results in the context of business strategy (or even in the context of office politics).
That’s what I do.
Well, me and my partner: our purpose-built LLM for performance marketing. We call it Prism.
Here’s how Prism and I worked together last week.
Example 1: Crawl/Walk/Run Planning for H2
One of my clients recently reached out with a very specific challenge:
They wanted to grow their business from new interest-based audiences on Meta—but they didn’t want to jump in too fast. Their goal was to explore these audiences carefully, without sacrificing their return on ad spend (ROAS).
In short, they wanted a phased plan that would improve both reach and conversions over the short, medium, and long term.
How I Used Prism:
To start, I needed a quick but solid baseline analysis of their current campaigns and performance data. So I turned to Prism, our AI-powered performance marketing assistant, and asked it for help laying the groundwork.
The first step was simply deciding how to ask Prism for help. In this case, it was pretty straightforward:

Prism is designed specifically for media planning and performance marketing. It understands ad platforms like Meta at a deep level. Unlike general-purpose tools like ChatGPT or Claude, it knows how to structure actionable plans around targeting, bidding, creative, and scaling.
What Prism Did:
Prism took the data I provided—campaign performance metrics, audience segments, ROAS numbers—and returned a structured plan.
It identified where we already had strong performance within that audience segment, looked at ways we could expand it, and gave me a crawl/walk/run plan prioritized by effort and impact: interest-based tests to start (Crawl), seasonal DPAs and retargeting next (Walk), and scaling through high-value lookalike audiences over the longer term (Run).
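For the curious, here is a rough sketch of how a plan like that can be laid out and prioritized. The phase names and tactics come from the plan above; the effort and impact scores are invented purely for illustration.

```python
# Illustrative only: a crawl/walk/run plan as data, prioritized by effort vs. impact.
# Phase names and tactics mirror the plan described above; the scores are made up.

plan = [
    {"phase": "Crawl", "tactic": "Interest-based audience tests", "effort": 1, "impact": 2},
    {"phase": "Walk",  "tactic": "Seasonal DPAs + retargeting",   "effort": 2, "impact": 3},
    {"phase": "Run",   "tactic": "High-value lookalike scaling",  "effort": 3, "impact": 4},
]

# One simple heuristic: cheapest wins first (lowest effort relative to impact).
for step in sorted(plan, key=lambda s: s["effort"] / s["impact"]):
    print(f'{step["phase"]:5} | {step["tactic"]} (effort {step["effort"]}, impact {step["impact"]})')
```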

What I Did Next:
This was a fantastic start, but I didn’t just accept everything Prism suggested blindly. I:
- Interpreted the AI’s output carefully, deciding what was relevant, realistic, and timely for the client’s business.
- Asked follow-up questions in Prism to validate certain assumptions and clarify numbers.
- Ultimately, I grouped my recommendations into three clear priorities:
  - Act: Immediately reallocate 15–20% of spend from low-performing markets (like France) to stronger ones (Norway, Italy, Spain) for quick wins (the rough math is sketched after this list).
  - Recommend: Launch a DPA campaign focused on winter sports products, timed ahead of the seasonal peak, to capture demand.
  - Ask: Explore a lookalike audience strategy, but only after initial testing proves its potential, since it would require more resources and longer-term buy-in.
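As promised above, here is the back-of-envelope version of the "Act" reallocation. The spend and ROAS figures are placeholders, not the client’s real numbers, and weighting the freed budget by ROAS is just one reasonable default; in practice the split also depends on how much headroom each market has.

```python
# Rough sketch of the "Act" step: move ~20% of spend out of the weak market
# and spread it across the stronger ones, weighted by ROAS.
# All figures below are placeholders, not client data.

monthly_spend = {"France": 20_000, "Norway": 15_000, "Italy": 18_000, "Spain": 12_000}
roas = {"France": 2.1, "Norway": 6.4, "Italy": 5.8, "Spain": 5.1}

shift_pct = 0.20                                  # top of the 15-20% range
freed = monthly_spend["France"] * shift_pct
monthly_spend["France"] -= freed

# Spread the freed budget across the stronger markets, weighted by ROAS
winners = ["Norway", "Italy", "Spain"]
total_roas = sum(roas[m] for m in winners)
for market in winners:
    monthly_spend[market] += freed * roas[market] / total_roas

print({market: round(spend) for market, spend in monthly_spend.items()})
```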
Prism gave me speed and objectivity. I provided strategy and context.
Together, we moved from “what should we do?” to a concrete roadmap in a few minutes.
Example 2: Creative Analysis – Are Our Static Ads Still Working?
Creative fatigue is one of the most common silent killers in performance marketing. So when a client came to me and said,
“Our static sale creatives feel a little tired. But are they really the problem?”
— I knew we needed more than just a gut check.
How I Used Prism:
I started by selecting four recent ads from their Meta Ads account:
- Two “Full Send Sale” ads featuring dynamic skiing action shots
- Two static best-seller creatives showing models in lifestyle poses
I ran these through Prism’s creative analysis workflow, which evaluates performance based on multiple dimensions: CTR, ROAS, conversion rate, CPC, and more. It also identifies patterns across ad types, regions, and formats.
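For reference, those dimensions are standard ratios computed from the raw ad stats. A quick sketch of the math, using made-up numbers rather than the client’s data:

```python
# The core performance ratios behind a creative analysis like this one.
# Input numbers are illustrative, not pulled from the client's account.

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: clicks per impression."""
    return clicks / impressions

def conversion_rate(conversions: int, clicks: int) -> float:
    """Conversions per click."""
    return conversions / clicks

def cpc(spend: float, clicks: int) -> float:
    """Cost per click."""
    return spend / clicks

def roas(revenue: float, spend: float) -> float:
    """Return on ad spend: revenue generated per unit of spend."""
    return revenue / spend

# One hypothetical "Full Send Sale" ad
print(f"CTR:  {ctr(1_800, 90_000):.2%}")
print(f"CVR:  {conversion_rate(81, 1_800):.2%}")
print(f"CPC:  ${cpc(2_500, 1_800):.2f}")
print(f"ROAS: {roas(81_750, 2_500):.1f}")
```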
What Prism Did:
Prism came back with some very clear signals:
- Ads with strong sale messaging ("UP TO 50% OFF") had a 4.5% conversion rate—more than 2x that of static product-focused ads.
- Action imagery outperformed static poses significantly. The skiing shots had a ROAS of 32.7 in Canada, versus single digits for the lifestyle-focused creatives.
- Geography mattered. The same creative had dramatically different results in different regions—especially Canada vs the US.
In short, the data was clear:
Action > Static.
Urgency messaging > Brand-only messaging.
Localized relevance > Global sameness.
What I Did Next:
Here’s where Prism handed me the baton, and I ran with it:
- I explained to the client that while their static ads weren’t totally broken, their performance was capped—especially during sale cycles.
- I recommended building a 3-part urgency sequence using action footage:
  - Prepare: Gear up for the season
  - Launch: Sale is live
  - Full Send: Final days, get it before it’s gone
- I layered in a creative twist: use regional variants to match local relevance. For instance, “Great North Sale” for Canada (where skiing extends longer) vs “End-of-Season Steals” for Europe.
- I also flagged potential quick wins: swapping out single-font headlines for designs with multiple font sizes and positioning key copy above the fold—another insight Prism surfaced.
The client bought in. We briefed our design team that week and began testing the new sequence in two top-performing regions. Early indicators? Higher engagement, improved CTR, and stronger ROAS right out of the gate.
Example 3: Speed to Insight & Validation on Ad Performance Softness
In this case, I needed to quickly figure out what was causing a drop in campaign performance across Europe.
I didn’t set out with a layered prompt strategy from the start—it unfolded naturally as I explored the issue. With Prism, you don’t necessarily need a highly engineered prompt or a solid plan. You can just start talking to it. Here’s how it happened:
- First, I asked Prism to pull the latest campaign performance for Europe in June 2025.
- Once I had the basics, I realized I needed ROAS too, so I asked for that.
- With the full numbers in front of me, I then asked Prism for direct insights into the performance softness. My exact prompt was:

What Prism Did:
Prism’s response made it immediately clear what was happening:

In essence, Prism acted like an objective analyst, cutting through the noise and highlighting the two most critical factors that needed attention. Normally, our reaction would have been to push harder on conversion campaigns and pull back on awareness spend. Prism pointed out that a factor outside of our control - seasonality - was the real culprit. As a result, the best move we could make was to head off a further drop in future conversions by strengthening the top and middle of our funnel.
What I Did:
This is where I stepped in to apply my own judgment:
- I cross-checked Prism’s findings against past seasonal data and confirmed this wasn’t unexpected.
- I identified some ready-to-go mid-funnel content that we could activate immediately.
- I recommended shifting some budgets toward those mid-funnel campaigns right away, focusing on engagement and audience-building until conversion intent picked back up.
- We also aligned on a timeline to revisit the strategy and return to conversion-heavy ads once seasonality normalized.
This wasn’t a one-and-done query. I worked through it in stages—starting with data gathering, refining the metrics, and then analyzing the root cause.
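If it helps to picture that staged pattern, here is a minimal sketch. `ask_prism` is a hypothetical stand-in for however Prism is actually called; the point is simply that each prompt builds on the conversation so far rather than trying to get everything in one shot.

```python
# Minimal sketch of the staged querying from Example 3: gather data, widen the
# metrics, then ask for the diagnosis. `ask_prism` is a placeholder, not a real API.

def ask_prism(prompt: str, history: list[dict]) -> str:
    """Send a prompt along with the running conversation history."""
    history.append({"role": "user", "content": prompt})
    reply = f"[Prism response to: {prompt}]"      # stub so this sketch runs standalone
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []

# Stage 1: data gathering
ask_prism("Pull campaign performance for Europe, June 2025.", history)

# Stage 2: refine the metrics once the basics are in hand
ask_prism("Add ROAS by market to that view.", history)

# Stage 3: root-cause analysis on the full picture
print(ask_prism("Given these numbers, what explains the performance softness?", history))
```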
Honestly, having Prism’s analysis made it easier to confidently suggest this shift. Without that clear, objective data backing me up, it would’ve felt much harder to recommend pulling back on direct conversion efforts—even if it was the right call.
What These Examples Prove (Again and Again)
At the end of the day, these aren’t just examples of AI tools doing what they’re programmed to do. They’re examples of something far more powerful—what happens when AI works with a human who knows when to zoom in, when to zoom out, and when to ask better questions.
Prism isn’t here to replace decision-making. It’s simply here to provide speed.
The real magic lies in how you use that speed:
- To act earlier,
- To test smarter,
- To shift strategies before the moment passes.
And that’s the difference between simply using AI and actually working alongside it.