Field Notes: How I Use Our Performance Marketing AI for Real Client Work

By William Lewis Eldredge

Director - Customer Success @ Pixis

I am the ‘human in the loop’ people refer to when they talk about AI-powered marketing workflows.

My job is to advise B2C brands - our customers - on how to extract as much value as possible from our AI models.

Our AI platform handles a lot of the bid and budget optimizations, uncovers new target audiences, and recommends creative variations to test. It can see across channels to unlock insights that would normally be hidden in the black-box AI models on offer from Google and Facebook.

It does that work almost as a matter of course. For an AI, those are business-as-usual operations.

But somebody needs to ask the out-of-the-box questions, find ways to get the answers, and evaluate ad campaign results in the context of business strategy (or even office politics).

That’s what I do.

Well, me and my partner: our purpose-built LLM for performance marketing. We call it Prism.

Here’s how Prism and I worked together last week.

Example 1: Crawl/Walk/Run Planning for H2

One of my clients recently reached out with a very specific challenge:

They wanted to grow their business from new interest-based audiences on Meta—but they didn’t want to jump in too fast. Their goal was to explore these audiences carefully, without sacrificing their return on ad spend (ROAS).

In short, they wanted a phased plan that would improve both reach and conversions over the short, medium, and long term.

How I Used Prism:

To start, I needed a quick but solid baseline analysis of their current campaigns and performance data. So, I turned to Prism, our AI-powered performance marketing assistant, and asked it for help in laying the groundwork.

Here’s where the beauty of Prism comes in:
It’s designed specifically for media planning and performance marketing. It understands ad platforms like Meta at a deep level. Unlike general-purpose tools like ChatGPT or Claude, Prism knows how to structure actionable plans around targeting, bidding, creative, and scaling.

What Prism Did:

Prism took the data I provided—campaign performance metrics, audience segments, ROAS numbers—and returned a structured plan.

It generated:

  • Crawl stage (short-term): Initial interest-based tests in known high-converting markets
  • Walk stage (medium-term): Launch Dynamic Product Ads (DPAs) focused on seasonal products, retargeting high-intent visitors
  • Run stage (long-term): Develop lookalike audiences based on high-value outdoor and winter sports shoppers, and scale successful campaigns

Prism also highlighted key differences across markets. For example:

  • Norway stood out with a low cost-per-click (CPC) and high click-through rate (CTR).

  • France and Switzerland showed weaker performance, suggesting these markets weren’t ideal for immediate scaling.
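The kind of screen Prism ran here is easy to sketch outside the platform. Below is a minimal Python version, assuming you have exported per-market spend, click, and impression totals and have pandas installed; all numbers are illustrative placeholders, not the client’s data.

```python
# Minimal sketch of a per-market screen like the one Prism ran.
# The figures below are illustrative placeholders, not real client data.
import pandas as pd

markets = pd.DataFrame({
    "market": ["Norway", "Italy", "Spain", "France", "Switzerland"],
    "spend": [12000, 15000, 11000, 18000, 9000],    # ad spend, EUR
    "clicks": [24000, 25000, 17000, 20000, 9500],
    "impressions": [600000, 760000, 540000, 820000, 420000],
})

markets["cpc"] = markets["spend"] / markets["clicks"]        # cost per click
markets["ctr"] = markets["clicks"] / markets["impressions"]  # click-through rate

# Flag markets that beat the median on both CPC (lower is better)
# and CTR (higher is better) as candidates for immediate scaling.
good_cpc = markets["cpc"] < markets["cpc"].median()
good_ctr = markets["ctr"] > markets["ctr"].median()
markets["scale_candidate"] = good_cpc & good_ctr

print(markets.sort_values("cpc").to_string(index=False))
```

The median-versus-median cutoff is just one reasonable rule; Prism weighs more signals than this, but the shape of the comparison is the same.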

But here’s part of what makes Prism useful: it analyzes only the numbers. It doesn't have the full picture of my past conversations with the client, their internal politics, or their hidden business priorities. In that sense, it acts like an impartial, third-party ad analyst.

What I Did Next:

My role wasn’t just to blindly accept everything Prism suggested. I:

  1. Decided how to ask the right questions in the first place using Prism’s pre-built media prompts, designed for tasks like this.
  2. Interpreted the AI’s output carefully, deciding what was relevant, realistic, and timely for the client’s business.
  3. Asked follow-up questions in Prism to validate certain assumptions and clarify numbers.
  4. Ultimately picked the top three most actionable recommendations:
     
    • Immediately shift 15–20% of spend from low-performing markets (like France) to stronger ones (Norway, Italy, Spain); see the budget sketch below.
    • Launch a focused DPA campaign with winter sports products, timed ahead of the seasonal peak.
    • Build a lookalike audience from high-value outdoor gear buyers, but only after some initial testing.

In practice, I broke this into three categories for the client:

  • Action immediately: Budget reallocations and quick-win audience targeting tweaks.
  • Recommend: A pilot DPA campaign to test creative and product combinations.
  • Ask client about: The longer-term lookalike strategy, since it required deeper buy-in and more resources.
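The arithmetic behind that first “action immediately” item is simple enough to sanity-check by hand. Here is a rough sketch with made-up spend figures; only the split logic matters, and weighting the boost by ROAS instead of splitting evenly would be an equally defensible choice.

```python
# Rough sketch of the budget-shift arithmetic behind recommendation #1.
# Spend figures are made up for illustration; the split logic is the point.
weak_market_spend = {"France": 18000, "Switzerland": 9000}  # monthly EUR
strong_markets = ["Norway", "Italy", "Spain"]
shift_pct = 0.15  # start at the low end of the 15-20% range

# Take 15% off each weak market and pool it...
pool = sum(spend * shift_pct for spend in weak_market_spend.values())

# ...then split the pool evenly across the stronger markets.
per_market_boost = pool / len(strong_markets)

for market, spend in weak_market_spend.items():
    print(f"{market}: reduce by {spend * shift_pct:,.0f}")
for market in strong_markets:
    print(f"{market}: add {per_market_boost:,.0f}")
```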

Prism gave me speed and objectivity. I provided strategy and context.

Together, we moved from “what should we do?” to a concrete roadmap—in a single day.

Example 2: Creative Analysis – Are Our Static Ads Still Working?

Creative fatigue is one of the most common silent killers in performance marketing. So when a client came to me and said,

“Our static sale creatives feel a little tired. But are they really the problem?”

I knew we needed more than just a gut check.

How I Used Prism:

I started by selecting four recent ads from their Meta Ads account:

  • Two “Full Send Sale” ads featuring dynamic skiing action shots
  • Two static best-seller creatives showing models in lifestyle poses

I ran these through Prism’s creative analysis workflow, which evaluates performance based on multiple dimensions: CTR, ROAS, conversion rate, CPC, and more. It also identifies patterns across ad types, regions, and formats.
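I can’t show Prism’s internals, but the shape of that analysis is straightforward: aggregate ad-level results by creative type and region, then compare the core metrics. A minimal pandas sketch with illustrative numbers (not the client’s data):

```python
# Sketch of a creative-performance comparison along the lines of
# Prism's workflow. All numbers are illustrative, not client data.
import pandas as pd

ads = pd.DataFrame({
    "ad": ["full_send_1", "full_send_2", "static_1", "static_2"] * 2,
    "creative_type": ["action", "action", "static", "static"] * 2,
    "region": ["CA"] * 4 + ["US"] * 4,
    "impressions": [90000, 85000, 70000, 75000, 120000, 110000, 95000, 98000],
    "clicks": [4100, 3700, 1900, 2100, 4300, 3900, 2300, 2500],
    "conversions": [180, 160, 45, 50, 140, 130, 55, 60],
    "spend": [2600, 2400, 1500, 1600, 3800, 3500, 2200, 2300],
    "revenue": [85000, 78000, 12000, 13500, 70000, 64000, 15000, 16500],
})

# Roll ad-level rows up to creative type x region, then derive the metrics.
summary = ads.groupby(["creative_type", "region"]).sum(numeric_only=True)
summary["ctr"] = summary["clicks"] / summary["impressions"]
summary["cvr"] = summary["conversions"] / summary["clicks"]
summary["cpc"] = summary["spend"] / summary["clicks"]
summary["roas"] = summary["revenue"] / summary["spend"]

print(summary[["ctr", "cvr", "cpc", "roas"]].round(3))
```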

What Prism Did:

Prism came back with some very clear signals:

  • Ads with strong sale messaging ("UP TO 50% OFF") had a 4.5% conversion rate—more than 2x that of static product-focused ads.
  • Action imagery outperformed static poses significantly. The skiing shots had a ROAS of 32.7 in Canada, versus single digits for the lifestyle-focused creatives.
  • Geography mattered. The same creative had dramatically different results in different regions—especially Canada vs the US.

In short, the data was clear:
Action > Static.
Urgency messaging > Brand-only messaging.
Localized relevance > Global sameness.

What I Did Next:

Here’s where Prism handed me the baton, and I ran with it:

  • I explained to the client that while their static ads weren’t totally broken, their performance was capped—especially during sale cycles.
  • I recommended building a 3-part urgency sequence using action footage:
    • Prepare: Gear up for the season
    • Launch: Sale is live
    • Full Send: Final days—get it before it’s gone
  • I layered in a creative twist: use regional variants to match local relevance. For instance, “Great North Sale” for Canada (where skiing extends longer) vs “End-of-Season Steals” for Europe.
  • I also flagged potential quick wins: swapping out single-font headlines for designs with multiple font sizes and positioning key copy above the fold—another insight Prism surfaced.

The client bought in. We briefed our design team that week and began testing the new sequence in two top-performing regions. Early indicators? Higher engagement, improved CTR, and stronger ROAS right out of the gate.

Example 3: Speed to Insight & Validation on Ad Performance Softness

In this case, I needed to quickly figure out what was causing a drop in campaign performance across Europe.

I didn’t set out with a layered prompt strategy from the start—it unfolded naturally as I explored the issue. Here’s how it happened:

  • First, I asked Prism to pull the latest campaign performance for Europe in June 2025.
  • Once I had the basics, I realized I needed ROAS too, so I asked for that.
  • With the full numbers in front of me, I then asked Prism for direct insights into the performance softness. My exact prompt was:

“Weekly insights – what’s contributing to current softness?”

What came back largely highlighted seasonality, and also flagged mid-funnel opportunities for additional consideration- and awareness-based coverage, given the more limited conversion rates.
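If you wanted to script that staged flow rather than type it into a chat window, it would look something like the sketch below. The `ask` helper is purely hypothetical; Prism has its own interface, so treat this as the shape of the conversation, not an API.

```python
# Hypothetical sketch of the staged prompt flow described above.
# `ask` is a stand-in, not Prism's actual API; wire it to whatever
# assistant interface you use.
def ask(prompt: str) -> None:
    """Placeholder: send `prompt` to the assistant and read the reply."""
    print(f">>> {prompt}")

# Stage 1: pull the raw numbers first.
ask("Pull the latest campaign performance for Europe, June 2025.")

# Stage 2: a metric is missing, so ask for it before interpreting anything.
ask("Add ROAS for the same campaigns and period.")

# Stage 3: only then ask for interpretation, grounded in that data.
ask("Weekly insights - what's contributing to current softness?")
```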

What Prism Did:

Prism’s response made it immediately clear what was happening:

  1. Seasonality: It confirmed that conversions were down partly because of expected seasonal patterns—this was a typical low period for purchases.
     
  2. Mid-Funnel Gaps: It also flagged that we were leaning too heavily on conversion-focused campaigns, while audiences weren’t quite ready to buy. Prism suggested focusing more on awareness and consideration campaigns—such as product videos, testimonials, or category overviews—to keep engagement steady during the lull.

In essence, Prism acted like an objective analyst, cutting through the noise and highlighting the two most critical factors that needed attention.

What I Did:

This is where I stepped in to apply my own judgment:

  • I cross-checked Prism’s findings against past seasonal data (a check sketched after this list) and confirmed the dip was in line with expected seasonal patterns.
  • I identified some ready-to-go mid-funnel content that we could activate immediately.
  • I recommended shifting some budgets toward those mid-funnel campaigns right away, focusing on engagement and audience-building until conversion intent picked back up.
  • We also aligned on a timeline to revisit the strategy and return to conversion-heavy ads once seasonality normalized.
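That cross-check is mechanical enough to sketch: line the soft weeks up against the same weeks in prior years and see whether the dip stays within the historical band. An illustrative version, assuming weekly conversion counts (not real client data):

```python
# Sketch of a year-over-year seasonality cross-check.
# Weekly conversion counts below are illustrative, not client data.
import statistics

# Same four June weeks across three years (2023, 2024, 2025).
june_weeks = {
    2023: [410, 395, 380, 370],
    2024: [430, 410, 390, 385],
    2025: [405, 390, 372, 365],  # the "soft" period in question
}

# Compare each 2025 week against the mean of the prior years' same week.
for week in range(4):
    history = [june_weeks[year][week] for year in (2023, 2024)]
    baseline = statistics.mean(history)
    current = june_weeks[2025][week]
    drift = (current - baseline) / baseline
    print(f"week {week + 1}: {current} vs {baseline:.0f} baseline ({drift:+.1%})")
```

If the drift stays within a few percent of the historical baseline, as it does here, seasonality is the simpler explanation and no alarm is warranted.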

This wasn’t a one-and-done query. I worked through it in stages—starting from data gathering, then refining metrics, then analyzing root causes.

What These Examples Prove (Again and Again)

At the end of the day, these aren’t just examples of AI tools doing what they’re programmed to do. They’re examples of something far more powerful—what happens when AI works with a human who knows when to zoom in, when to zoom out, and when to ask better questions.

Prism isn’t here to replace decision-making. It’s simply here to provide speed.

The real magic lies in how you use that speed:

  • To act earlier,
  • To test smarter,
  • To shift strategies before the moment passes.

And that’s the difference between simply using AI and actually working alongside it.