
50 AI Prompts for Media Planning in 2025

AI prompts for media planning are specific instructions for your AI tools. They detail which data to analyze, which metrics to prioritize, and how to format the output. This turns a vague request into actionable insights.

Most marketers waste time with AI because their prompts are too vague. We'll show you how to write prompts that get it right the first time, so you're not stuck fixing work you could have done faster yourself.

This guide walks through 50 tested prompts for Meta, Google, TikTok, and programmatic campaigns—plus the structure, inputs, and versioning tactics that make them actually save time.

Why great prompts drive better media plans

AI prompts for media planning work best when they include specific details like target audience, campaign goals, budget constraints, and desired output format. Generic requests like "analyze my campaign" force the AI to guess what you care about—ROAS versus reach, last week versus last quarter, Meta versus Google.

When you say "analyze my campaign performance," the AI has to make assumptions about what KPI matters to you. It might assume you care about CTR when you actually care about ROAS. It might also pull the last 30 days when you meant the full campaign flight.

Then you spend 10 minutes fixing what the AI got wrong, at which point you could've just pulled the report yourself.

The difference between a prompt that wastes your time and one that saves it comes down to specificity, context, and structure. You're training the AI to think like your media planner, not a general assistant.

How to structure an AI prompt for paid media

A well-structured prompt eliminates the back-and-forth that eats up your day. The anatomy of an effective media planning prompt includes three core components—objective and KPI, data source and range, output format—that transform vague requests into actionable outputs.

Here's what separates a time-waster from a time-saver:

Bad prompt:

Analyze my campaign performance.

Better prompt:

Review Meta Ads data for March 1–15. Focus on campaigns tagged "Spring Launch." Calculate ROAS, CPA, and CTR for each ad set. Flag any ad sets with CPA above $40 or ROAS below 2.5. Output as a table with columns: campaign name, ad set name, spend, conversions, ROAS, CPA, CTR. Sort by spend descending.

State the objective and KPI

Start every prompt by defining what you're trying to achieve. Are you looking to reduce cost per acquisition? Improve return on ad spend? Expand reach within budget?

Name the primary metric upfront—ROAS, CPA, CPM, conversion rate, or reach. Then add one or two supporting metrics that provide context, like CTR or frequency. This tells the AI what success looks like before it starts calculating anything.

Define the data source and range

Specify the exact file name, date range, and any filters that apply. If you're working with multiple campaigns, define which ones matter by name, tag, budget threshold, or performance tier.

Include brand safety requirements here too. If certain placements, audiences, or creative types are off-limits, state that now rather than fixing it later.

Specify output format

Request a table with defined columns, a CSV export, or a summary with bullet points. If you plan to drop the result into a spreadsheet or BI tool, ask for a clean header row with no extra commentary.

Ask the AI to show its calculations and state any assumptions it made. This turns validation from a guessing game into a quick scan. You can spot errors in seconds instead of minutes.
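
If you want a faster check than scanning, you can recompute the headline metrics yourself. Here's a minimal Python sketch, assuming the AI returned a CSV called ai_output.csv with spend, revenue, conversions, and a stated roas column (all file and column names are hypothetical):

import pandas as pd

# Load the table the AI produced (file and column names are hypothetical)
df = pd.read_csv("ai_output.csv")

# Recompute the headline metrics from the raw columns
df["roas_check"] = df["revenue"] / df["spend"]
df["cpa_check"] = df["spend"] / df["conversions"]

# Flag rows where the AI's stated ROAS drifts from the recomputed value
mismatch = (df["roas"] - df["roas_check"]).abs() > 0.01
print(df.loc[mismatch, ["campaign_name", "roas", "roas_check"]])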

Inputs every prompt needs for reliable results

Reliable outputs depend on consistent inputs. The more you standardize what you feed the AI, the faster you'll spot when something's wrong.

Every media planning prompt benefits from four categories of input:

  • KPIs and efficiency metrics: Your primary KPIs—ROAS, CPA, purchases, leads—tell you if the campaign worked. Your efficiency metrics—CPM, CPC, CTR—tell you why it worked or didn't.
  • Budget and flight dates: Define your spend threshold, pacing requirements, and any seasonal considerations. If you're analyzing mid-flight, mention the total budget and how much you've spent so far.
  • Audience and creative constraints: Describe your target demographics, any creative asset limitations, and platform-specific requirements. If you're working with video, mention length caps. If you're testing static versus carousel, say so.
  • Brand safety guardrails: Establish compliance boundaries, approved language, and content restrictions before the AI generates anything. This prevents outputs that require legal review or brand team rewrites.

Add lines like "do not make claims beyond the data" or "describe correlations only, not causation." Simple guardrails save cleanup time later.

Channel-specific media planning prompts

Different platforms optimize differently. A Meta prompt focused on audience layering won't translate to Google's keyword bidding logic. Here are ready-to-use templates for each major channel.

Meta Ads prompt library

Audience overlap analysis:

Review all active Meta campaigns from the last 30 days. Identify ad sets with audience overlap above 20%. Output a table with columns: campaign name, ad set name, audience definition, overlap percentage, total spend. Recommend consolidation opportunities.

Budget reallocation by ROAS:

Analyze Meta campaigns from March 1–31 with spend above $2,000. Calculate ROAS for each campaign. Suggest a revised budget allocation that shifts 20% of spend from campaigns with ROAS below 2.0 to campaigns with ROAS above 3.5. Show current spend, proposed spend, and expected impact.

Creative fatigue detection:

Review all Meta ads from the last 60 days with spend above $1,000. Flag ads where CTR declined by more than 25% between the first 30 days and the second 30 days. Output a table with ad name, first 30-day CTR, second 30-day CTR, percent change, and total spend.

Placement performance breakdown:

Analyze Meta campaign "Summer Sale" from June 1–30. Break down spend, impressions, CTR, and CPA by placement (Feed, Stories, Reels, Audience Network). Recommend which placements to scale and which to pause based on CPA performance.

Lookalike audience testing:

Compare performance of 1%, 3%, and 5% lookalike audiences for campaign "Q2 Acquisition" from April 1–30. Calculate CPA, ROAS, and conversion rate for each. Recommend which lookalike percentage to scale and why.

Ad set pacing alerts:

Review all Meta ad sets from the last seven days. Flag any ad set that spent more than 150% or less than 50% of its daily budget target. Output a table with ad set name, daily budget, actual daily spend, pacing percentage, and days remaining in flight.

Gender and age performance split:

Analyze Meta campaign "Product Launch" from May 1–31. Break down spend, conversions, and CPA by gender and age group. Identify the top three performing segments by CPA. Recommend budget shifts to prioritize those segments.

Frequency cap analysis:

Review all Meta campaigns from the last 30 days with average frequency above 4. Calculate CTR and CPA for each. Flag campaigns where high frequency correlates with rising CPA. Recommend frequency caps or audience expansion.

Conversion funnel drop-off:

Analyze Meta campaign "Webinar Registration" from March 1–31. Show the conversion funnel from impression to click to landing page view to registration. Calculate drop-off rate at each stage. Identify the stage with the highest drop-off and suggest optimization tactics.

Dynamic creative testing:

Review dynamic creative campaign "Fall Collection" from October 1–31. Break down performance by headline, primary text, and image combination. Identify the top three combinations by CTR and ROAS. Recommend which creative elements to prioritize in future campaigns.

Google and YouTube prompt library

Keyword performance audit:

Analyze Google Ads search campaign "Brand Keywords" from February 1–28. Output a table with keyword, match type, impressions, clicks, CTR, conversions, CPA, and quality score. Flag keywords with CPA above $60 or quality score below 5.

Bid adjustment recommendations:

Review Google Ads campaigns from the last 30 days. Identify ad groups where top-of-page bid estimates exceed current bids by more than 30%. Recommend bid increases for ad groups with ROAS above 4.0 and current impression share below 60%.

Search term mining:

Analyze search term report for campaign "Product Category" from March 1–31. Identify high-performing search terms (CPA below $40, at least 10 conversions) that aren't yet added as exact match keywords. Output a table with search term, impressions, conversions, CPA, and recommended match type.

Negative keyword expansion:

Review search term report for all Google Ads campaigns from the last 60 days. Flag search terms with spend above $100, zero conversions, and CTR below 1%. Recommend adding as negative keywords at campaign or account level.

YouTube video completion rates:

Analyze YouTube campaign "Brand Story" from April 1–30. Break down video completion rates by 25%, 50%, 75%, and 100%. Calculate CPV (cost per view) at each completion threshold. Identify videos with completion rates below 30% and recommend creative adjustments.

Shopping campaign product performance:

Review Google Shopping campaign "Ecommerce Catalog" from May 1–31. Output a table with product ID, product title, impressions, clicks, CTR, conversions, ROAS, and spend. Flag products with spend above $500 and ROAS below 2.0. Recommend bid adjustments or product feed improvements.

Geographic performance split:

Analyze Google Ads campaigns from March 1–31. Break down spend, conversions, and CPA by state or metro area. Identify the top five locations by ROAS and the bottom five by CPA. Recommend location bid adjustments.

Device performance comparison:

Review Google Ads campaigns from the last 30 days. Compare performance across desktop, mobile, and tablet by CTR, conversion rate, and CPA. Recommend device bid adjustments based on CPA performance relative to target.

TikTok and short-form video prompt library

Trending sound analysis:

Identify the top 10 trending sounds on TikTok in the [industry] category from the last 14 days. For each sound, provide view count, engagement rate, and example creator usage. Recommend which sounds align with our brand voice and campaign goals.

Video hook performance:

Analyze TikTok campaign "Product Demo" from March 1–31. Break down performance by the first three seconds of each video. Calculate hook retention rate (percentage of viewers who watched past three seconds). Identify the top three hooks by retention rate and CTR.

Creator collaboration ROI:

Analyze TikTok creator partnership campaign from May 1–31. Break down performance by creator, including views, engagement rate, click-through rate to website, and conversions. Calculate cost per engagement and cost per conversion for each creator. Recommend which partnerships to renew.

Programmatic display prompt library

Viewability optimization:

Analyze programmatic display campaign "Awareness Push" from March 1–31. Break down viewability rate by placement, format, and publisher. Flag placements with viewability below 60%. Recommend blocklists or bid adjustments to improve viewability.

Cross-device attribution:

Analyze programmatic campaign "Retargeting" from April 1–30. Track conversions by device (desktop, mobile, tablet) and time lag from impression to conversion. Identify the most common conversion path. Recommend attribution model adjustments based on actual user behavior.

Analytics and optimization prompts

Performance analysis prompts turn raw data into decisions. Start with recurring analysis tasks that inform budget shifts, creative refreshes, and campaign adjustments.

ROAS and CPA trend analysis

Review all campaigns from the last 90 days with spend above $5,000. Calculate weekly ROAS and CPA trends. Identify campaigns where ROAS declined by more than 20% or CPA increased by more than 30% in the last 30 days compared to the prior 60 days. Output a table with campaign name, 60-day ROAS, 30-day ROAS, percent change, and potential causes.

This prompt surfaces performance declines early, before they drain budget. It separates seasonal fluctuations from real problems by comparing recent performance to a longer baseline.
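If you want to audit the comparison logic behind a prompt like this, the same math is a few lines of pandas. A sketch, assuming a daily export covering the last 90 days with date, campaign, spend, and revenue columns (names are illustrative):

import pandas as pd

# Daily export assumed to cover the last 90 days (hypothetical file/columns)
df = pd.read_csv("daily_performance.csv", parse_dates=["date"])
cutoff = df["date"].max() - pd.Timedelta(days=30)

def roas(frame):
    return frame["revenue"].sum() / frame["spend"].sum()

# Last 30 days vs. the prior 60-day baseline, per campaign
recent = df[df["date"] > cutoff].groupby("campaign").apply(roas)
baseline = df[df["date"] <= cutoff].groupby("campaign").apply(roas)

change = (recent - baseline) / baseline
print(change[change < -0.20])  # ROAS down more than 20% vs. baseline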

Budget reallocation suggestions

Analyze all active campaigns from March 1–31. Identify the top 20% of campaigns by ROAS and the bottom 20% by ROAS. Calculate how much budget is allocated to each group. Recommend a reallocation plan that shifts 25% of budget from bottom performers to top performers. Show current allocation, proposed allocation, and projected impact on overall ROAS.

Reallocation prompts work best when you define clear performance tiers and specify how aggressive you want the shift to be. A 25% reallocation is meaningful but not disruptive.
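The tiering itself is simple quantile math. A sketch of the 25% shift under those assumptions, using a campaign-level summary file with hypothetical column names:

import pandas as pd

df = pd.read_csv("campaign_summary.csv")  # campaign, spend, revenue (hypothetical)
df["roas"] = df["revenue"] / df["spend"]

top = df["roas"] >= df["roas"].quantile(0.80)     # top 20% by ROAS
bottom = df["roas"] <= df["roas"].quantile(0.20)  # bottom 20% by ROAS

# Shift 25% of bottom-tier budget to top performers, pro rata by spend
freed = 0.25 * df.loc[bottom, "spend"].sum()
df["proposed_spend"] = df["spend"].astype(float)
df.loc[bottom, "proposed_spend"] *= 0.75
df.loc[top, "proposed_spend"] += freed * df.loc[top, "spend"] / df.loc[top, "spend"].sum()

print(df[["campaign", "spend", "proposed_spend", "roas"]])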

Pacing and frequency alerts

Review all campaigns from the last seven days. Flag campaigns that are pacing to spend more than 110% or less than 85% of their monthly budget based on current daily spend. Output a table with campaign name, monthly budget, spend to date, days elapsed, projected end-of-month spend, and pacing status.

Pacing alerts catch delivery issues before they become budget problems. Running this prompt weekly gives you time to adjust bids, budgets, or targeting before month-end.
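The underlying projection is one line of arithmetic: spend to date divided by days elapsed, multiplied by days in the month. A sketch with hypothetical file and column names:

import pandas as pd

# Hypothetical summary: campaign, monthly_budget, spend_to_date, days_elapsed
df = pd.read_csv("campaign_budgets.csv")
days_in_month = 30

# Project end-of-month spend from the current daily run rate
df["projected_spend"] = df["spend_to_date"] / df["days_elapsed"] * days_in_month
df["pacing"] = df["projected_spend"] / df["monthly_budget"]

# Flag anything pacing above 110% or below 85% of budget
flagged = df[(df["pacing"] > 1.10) | (df["pacing"] < 0.85)]
print(flagged[["campaign", "monthly_budget", "projected_spend", "pacing"]])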

Creative fatigue detection

Analyze all ads from the last 60 days with spend above $1,000. Compare CTR and conversion rate in the first 30 days versus the second 30 days. Flag ads where CTR declined by more than 20% or conversion rate declined by more than 15%. Output a table with ad name, first 30-day CTR, second 30-day CTR, percent change, and total spend. Recommend creative refresh priority.

Creative fatigue happens gradually, then suddenly. This prompt quantifies the decline so you can refresh assets before performance collapses.
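If you want to spot-check the AI's flags, the window split is easy to reproduce. A compact sketch, assuming a daily ad-level export with date, ad_name, clicks, and impressions columns (hypothetical names):

import pandas as pd

df = pd.read_csv("ad_daily.csv", parse_dates=["date"])  # hypothetical export
mid = df["date"].min() + pd.Timedelta(days=30)

def ctr(frame):
    return frame["clicks"].sum() / frame["impressions"].sum()

# CTR in the first 30-day window vs. the second, per ad
first = df[df["date"] < mid].groupby("ad_name").apply(ctr)
second = df[df["date"] >= mid].groupby("ad_name").apply(ctr)

decline = (second - first) / first
print(decline[decline < -0.20])  # CTR down more than 20% between windows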

Versioning and testing your prompt library

Prompts improve through iteration. The first version rarely nails the output format, filters, or thresholds you actually need.

Treat prompts like any other marketing asset. Give them version numbers and test them on a baseline dataset where you know the correct answer. Compare the AI's output to your manual calculation, and retest if it's off.
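That test can be as lightweight as a diff against a fixture file. A hypothetical sketch: run prompt v1.2 against a dataset you've already calculated by hand, then compare the two tables:

import pandas as pd

# Hypothetical files: the AI's output and your manual baseline calculations
ai_output = pd.read_csv("prompt_v1_2_output.csv")
expected = pd.read_csv("baseline_manual_calcs.csv")

merged = ai_output.merge(expected, on="campaign", suffixes=("_ai", "_manual"))
merged["roas_diff"] = (merged["roas_ai"] - merged["roas_manual"]).abs()

# Anything beyond rounding error means the prompt needs another revision
failures = merged[merged["roas_diff"] > 0.01]
print("PASS" if failures.empty else failures)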

Save, score, iterate

After you get a useful output, ask the AI to consolidate the session into one reusable prompt. Replace specific dates with placeholders like "last 30 days" or "current month." Save it with a version number.
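One simple way to store those placeholders is as a parameterized string template, so each run only swaps the variables. An illustrative sketch (the template text and names are examples, not a required format):

# pacing_alert_v3.py - one entry in a hypothetical versioned prompt library
PACING_ALERT_V3 = """\
Review all {platform} campaigns from the {date_range}.
Flag campaigns pacing above {upper_pct}% or below {lower_pct}% of monthly budget.
Output a table with: campaign name, monthly budget, spend to date,
projected end-of-month spend, and pacing status."""

prompt = PACING_ALERT_V3.format(
    platform="Meta",
    date_range="last 7 days",
    upper_pct=110,
    lower_pct=85,
)
print(prompt)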

Rate each prompt's usefulness on a simple scale—does it save time, does it surface insights you'd miss manually, does it require cleanup? Keep the ones that score high. Retire the ones that don't.

Automate with prompt templates in Pixis

We at Pixis built Prism to eliminate the prompt-crafting step entirely. Instead of writing detailed instructions every time you analyze a campaign, Prism uses context-aware templates that pull live data from your ad accounts.

You select the analysis type—budget reallocation, creative fatigue, pacing alerts—and Prism generates the output automatically. No manual prompts, no CSV uploads, no reformatting. Try Prism today and see how automated prompt workflows accelerate your planning cycle.

Plan, launch, and learn faster with AI-powered media

Proper prompting compresses the time between question and answer. Instead of spending 30 minutes pulling reports and another 20 minutes analyzing them, you get validated insights in under five minutes.

That time savings compounds. Faster analysis means faster optimizations. Faster optimizations mean better performance.

The real leverage comes from building a prompt library that mirrors your workflow. When you have tested, versioned prompts for your most common analysis tasks, you stop starting from scratch every day.

Start with five prompts that cover your weekly reporting needs—ROAS trends, budget pacing, creative performance, audience breakdowns, and placement analysis. Refine them until they're reliable. Then expand to monthly audits, competitive benchmarking, and scenario planning.

Get started with Prism and turn prompts into automated workflows that run on your schedule, with your data, in your format.

FAQs about AI prompts for media planning

Which AI tools are safest for brand data?

Use enterprise-grade AI platforms with data privacy controls, SOC 2 compliance, and brand safety features. General-purpose LLMs like ChatGPT store conversation history by default unless you opt out. Marketing-specific AI tools like Prism process data without retaining it, which reduces compliance risk.

How often should I refresh prompts with new performance data?

Update prompts weekly for active campaigns and monthly for evergreen templates. If you're running high-spend campaigns or testing new channels, refresh daily during the first two weeks to catch delivery issues early.

Can I use the same prompt across different advertising channels?

Channel-specific prompts perform better because each platform has unique metrics, bidding logic, and optimization goals. A Meta prompt focused on frequency and audience overlap won't translate directly to Google's keyword-level bidding. Build separate prompt libraries for each channel, then adapt shared frameworks like ROAS analysis or budget pacing to each platform's data structure.

How do I measure the impact of better prompts on campaign performance?

Track two things: time saved on analysis tasks and improvement in recommendation quality. Time saved is straightforward to measure. Quality is trickier, but you can track how often AI recommendations improve performance when implemented.