Most marketing services don’t fail because the people are incompetent. They fail because the buyer signs a contract while the success definition is still fuzzy, the data is murky, and the “strategy” is basically a repackaged playbook from a different industry.
So you audit before you commit. Not after. Not three months in when you’ve already trained yourself to accept vague reports and “optimizations” that never land.
One-line truth:
You’re not buying ads or SEO. You’re buying a measurement system and decision-making discipline.
Start with goals… but make them hard to wiggle out of
If your goal is “more leads,” you’re about to get absolutely steamrolled by vanity metrics. I’ve seen it happen in companies that should’ve known better.
Try this instead: pick an outcome that maps to revenue reality, then reverse-engineer the funnel with your provider in the room.
A decent goal definition has three parts:
– Business result: pipeline created, CAC target, trial-to-paid lift, retention improvement
– Funnel step: awareness → consideration → conversion (pick the one you actually need)
– Time boundary: “in 90 days” forces honesty; “over time” invites excuses
Now, this won’t apply to everyone, but if you’re early-stage or launching a new offer, you should bias toward learning metrics (cost per qualified click, landing-page conversion rate, message-market fit signals) before you demand hard ROI. Just don’t let a provider hide behind “learning” for six months.
And don’t approve a KPI until you know the baseline. If there’s no baseline, the first 2–4 weeks might need to be pure instrumentation and benchmarking. That’s not wasted time. That’s how you avoid fake progress.

If they can’t explain attribution cleanly, walk.
Yes, even if their creative looks great.
Here’s the thing: attribution is where agencies quietly bury bad performance. If you don’t pin down how credit is assigned, every channel becomes a hero in its own report.
You want explicit answers to questions like:
– Which attribution model are you using (last-click, data-driven, position-based), and why?
– What counts as a conversion: form fill, booked meeting, qualified opportunity, closed-won?
– How do you handle multi-touch journeys and long sales cycles?
If you’re in B2B with long cycles, insisting on “ROAS in 30 days” can be irrational. But accepting “trust us, it’s working” is worse.
One technical note that separates adults from amateurs: ask how they handle incrementality. Do they test lift, holdouts, geo splits, or any other method to prove the marketing caused the outcome? Many won’t. The best will at least have a plan.
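If you want to hear what a credible incrementality answer sounds like, here's a minimal holdout readout sketched in Python. Everything in it is illustrative: the group sizes, conversion counts, and variable names are my placeholders, not any provider's actual method.

```python
# Minimal holdout lift readout (illustrative numbers, not real data).
# Treatment group sees the campaign; holdout group does not.

treated_users = 50_000
treated_conversions = 1_250      # 2.5% conversion rate
holdout_users = 50_000
holdout_conversions = 1_000      # 2.0% baseline conversion rate

treated_rate = treated_conversions / treated_users
holdout_rate = holdout_conversions / holdout_users

# Incremental conversions: what the campaign caused beyond baseline.
incremental = (treated_rate - holdout_rate) * treated_users
relative_lift = (treated_rate - holdout_rate) / holdout_rate

print(f"Treated rate:  {treated_rate:.2%}")
print(f"Holdout rate:  {holdout_rate:.2%}")
print(f"Incremental conversions: {incremental:.0f}")
print(f"Relative lift: {relative_lift:.1%}")
```

A provider who has actually run one of these will volunteer the caveats (sample size, contamination, duration) without being asked. That's the tell.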
The data they should share (and what “transparent” actually means)
Agencies love dashboards. Dashboards are cheap. Audit trails are the real flex.
You want to see:
Data sources and access
– What platforms are used (GA4, Google Ads, Meta, HubSpot/Salesforce, ad server, call tracking)?
– Who owns admin access?
– What happens to access when the contract ends?
Metric definitions
– Exact formulas (not vibes) for CAC, MQL, SQL, ROAS, conversion rate
– Event naming conventions and UTM governance (yes, UTMs matter more than most people admit)
– Refresh cadence: daily, weekly, monthly—and why
Data lineage
Where did this number come from? If the answer is “our internal reporting,” you ask: based on what raw data, pulled when, transformed how?
Look, a provider doesn’t need to expose proprietary scripts line-by-line. But if they won’t show methodology, you’re buying a black box. Black boxes don’t belong in contracts.
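To make "exact formulas, not vibes" concrete, here's a minimal sketch of what pinned-down metric definitions look like. The inputs are invented, and the choices baked into the comments (like whether agency fees count toward CAC) are exactly the kind of thing you should be deciding in writing.

```python
# Illustrative metric definitions. The inputs are placeholders; the point
# is that every term (what counts as "spend", what counts as a "customer")
# is written down, not implied.

ad_spend = 40_000          # media spend for the period
agency_fees = 10_000       # include or exclude? decide it in writing
new_customers = 125
revenue_from_ads = 150_000
clicks = 20_000
conversions = 500

# CAC: decide explicitly whether fees count. Here they do.
cac = (ad_spend + agency_fees) / new_customers

# ROAS: revenue attributed to ads divided by media spend only.
roas = revenue_from_ads / ad_spend

# Conversion rate: conversions per click (define "conversion" first).
conversion_rate = conversions / clicks

print(f"CAC: ${cac:,.2f}")                        # $400.00
print(f"ROAS: {roas:.1f}x")                       # 3.8x
print(f"Conversion rate: {conversion_rate:.2%}")  # 2.50%
```

The numbers don't matter. The explicitness does.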
Quick stat, because it frames the stakes: Gartner has projected that by 2025, 80% of marketers will abandon third-party cookies as targeting/measurement shifts reshape tracking and attribution (Gartner, Future of Marketing research and related cookie-deprecation guidance). That forces more reliance on first-party data, server-side tagging, modeled conversions, and—guess what—clear measurement discipline. If your provider is still stuck in 2018-era tracking assumptions, you’ll feel it.
Benchmarks: stop letting “industry averages” bully you
“Your CTR should be 2%.”
Cool. For what audience? What offer? What creative format? What stage of funnel? What placements? What season?
Industry benchmarks get waved around like a magic wand. In practice, they're often a way to avoid accountability. I'm not anti-benchmark; I'm anti-benchmark-without-context.
When a provider shares benchmarks, press for segment-level comparability:
– same industry and similar price point?
– similar deal size and sales cycle?
– similar monthly spend and channel mix?
– similar geographic targeting and competitiveness?
If they can’t answer those, treat benchmarks as “interesting trivia,” not contract-grade evidence.
Also ask for confidence intervals or sample sizes when they’re quoting performance. Anyone can cherry-pick the best 30-day window from a multi-year account (and plenty do).
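Here's why sample size matters, as a rough sketch using the standard normal-approximation confidence interval. The traffic numbers are made up; the point is that the same headline rate can be solid or meaningless.

```python
import math

# 95% confidence interval for a quoted conversion rate,
# using the normal approximation. Numbers are illustrative.

def conversion_ci(conversions: int, visitors: int, z: float = 1.96):
    p = conversions / visitors
    margin = z * math.sqrt(p * (1 - p) / visitors)
    return p, max(0.0, p - margin), p + margin

# Same 4% rate, very different credibility:
for conversions, visitors in [(8, 200), (800, 20_000)]:
    p, lo, hi = conversion_ci(conversions, visitors)
    print(f"{conversions}/{visitors}: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

Same 4% headline; only one of those belongs in a contract discussion.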
Case studies: you’re not buying their best day
Case studies are marketing. Shocking, I know.
Still useful, though, if you interrogate them properly.
When you review a case study, I want you to do a mental translation exercise: “Could this result happen in my constraints?” Budget, brand awareness, sales team capacity, website conversion rate, product-market fit. Those constraints decide outcomes more than the agency’s slide deck does.
Ask for the messy parts:
– What failed in the first month?
– What assumptions were wrong?
– What didn’t scale?
– What was the client doing internally to make it work (enablement, sales follow-up SLAs, landing page rebuilds)?
A strong provider will admit limitations. A weak one will sell you a fairy tale with screenshots.
Channels and messaging: strategy or channel roulette?
If the plan looks like “we’ll run paid search, paid social, SEO, email, and content,” that’s not a strategy. That’s a services menu.
The audit question is simple: do their tactics align with how your buyers actually behave?
A technical team selling $50k ACV software should not be judged by the same playbook as a $40 DTC product. Yet I still see agencies pitching TikTok because it’s trendy, not because it’s efficient.
Messaging gets even sloppier. Watch for this common mismatch:
Ad promise: “Get results fast.”
Landing page: “We’re a trusted partner with 10 years of experience.”
Follow-up email: “Here’s a 12-step framework.”
That’s three different tones, three different intents, and a confused prospect.
In my experience, the best agencies do two unsexy things consistently:
1) document persona hypotheses and pain points in plain language
2) run structured creative tests tied to funnel stages, not random variations for “engagement”
Short list of what I’d want in writing:
– audience definitions and exclusions
– channel rationale tied to intent
– messaging guardrails (voice, claims, proof points, compliance constraints)
– test plan: what gets tested, how long, what “win” means
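On "how long" and what "win" means: test duration isn't a vibe either. It falls out of your baseline conversion rate and the smallest lift worth detecting. A back-of-envelope sketch using the standard two-proportion approximation; the baseline, lift, and traffic figures are my assumptions for illustration.

```python
import math

# Rough per-variant sample size for detecting a relative lift in
# conversion rate at ~95% confidence / 80% power (normal approximation).
# Illustrative only; a real test plan should state its own assumptions.

def sample_size_per_variant(baseline_rate: float, relative_lift: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84):
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

n = sample_size_per_variant(baseline_rate=0.03, relative_lift=0.20)
daily_visitors_per_variant = 400  # assumption: your traffic, split evenly
days = math.ceil(n / daily_visitors_per_variant)
print(f"~{n:,} visitors per variant, ~{days} days to a readable result")
```

If the plan promises a "winner" after four days on that kind of traffic, the plan is fiction.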
Onboarding should feel like project management, not vibes
If onboarding is “we’ll get started and circle back,” you’re already in trouble.
I like onboarding plans that read like an implementation brief: owners, milestones, dependencies, acceptance criteria. It doesn’t have to be bureaucratic, but it does have to be explicit.
Two-sentence section, because it matters:
If they can’t say what happens in week one, they won’t magically become organized in week eight. Chaos scales.
Things that should be pinned down early (and yes, in writing):
– who owns tracking and tag management changes
– who builds landing pages, who approves copy, who uploads creatives
– how long approvals take on your side (agencies can’t outrun internal bottlenecks)
– what “done” means for each deliverable
Reporting: raw enough to trust, structured enough to act on
I’m opinionated here: monthly reporting is too slow for most paid media and lifecycle programs. Weekly or biweekly is usually the sweet spot, with a monthly strategic wrap.
Also, a dashboard is not a report. A report has interpretation, anomalies, decisions, next actions.
Ask to see a sample report and look for:
– clear KPI definitions
– baseline vs current vs target
– segmentation (by audience, channel, creative, offer)
– explanation of what changed and why
– an “issues and risks” section that isn’t sanitized
And demand anomaly alerts. If spend spikes or conversion rates collapse, you shouldn’t find out in a Friday recap.
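An anomaly alert doesn't need to be fancy. Here's a minimal sketch of a spend-spike check; the spend series, the 2x threshold, and the notification step are all placeholders to adapt.

```python
# Minimal spend-spike alert: compare today's spend to a trailing average.
# Thresholds and the notification step are placeholders to adapt.

from statistics import mean

daily_spend = [1_150, 1_220, 1_180, 1_090, 1_240, 1_200, 3_400]  # last value: today

trailing = daily_spend[:-1]
today = daily_spend[-1]
baseline = mean(trailing)

SPIKE_MULTIPLIER = 2.0  # alert if today is 2x the trailing average

if today > SPIKE_MULTIPLIER * baseline:
    # In practice: post to Slack, email, page someone. Here we just print.
    print(f"ALERT: spend ${today:,} vs trailing avg ${baseline:,.0f}")
```

The tooling is trivial. The discipline of actually wiring it up is part of what you're auditing.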
Guarantees, SLAs, and contract boundaries: where deals go to die
Guarantees are tricky. Some channels are too variable to “guarantee results” honestly. But that doesn’t mean you accept a contract with no teeth.
You can get SLAs around:
– response times
– reporting cadence
– error remediation
– uptime for landing pages they host
– turnaround times for creative iterations
– data access and export rights
Contract boundaries matter just as much. What’s included? What counts as out of scope? How is scope expanded, priced, and approved?
One detail people forget: data ownership and portability. If you can’t walk away with your campaigns, creatives, audiences, naming conventions, and reporting history, you’re locked in (even if the contract says you aren’t).
Pilots: the only “guarantee” I actually trust
A pilot is where talk meets friction: approvals, data quality, speed, decision-making, and how they behave when something breaks.
Keep it tight. Realistic. Measurable.
Pilot rules I like:
– fixed duration (e.g., 30–60 days)
– fixed scope (one product line, one region, one funnel stage)
– explicit success and failure thresholds
– your analytics as the source of truth (they can have dashboards too, but yours wins)
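"Explicit success and failure thresholds" means the verdict can be written as a rule before day one. A toy sketch with placeholder targets:

```python
# Pre-agreed pilot verdict: thresholds are set before the pilot starts,
# so nobody reinterprets them after the fact. Targets are placeholders.

thresholds = {
    "cost_per_qualified_lead": 120.0,   # must be at or below
    "qualified_leads": 40,              # must be at or above
}

results = {"cost_per_qualified_lead": 95.0, "qualified_leads": 46}

passed = (results["cost_per_qualified_lead"] <= thresholds["cost_per_qualified_lead"]
          and results["qualified_leads"] >= thresholds["qualified_leads"])

print("PASS: extend the engagement" if passed else "FAIL: exit per the pilot terms")
```

If they resist writing this down before the pilot starts, that's your answer.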
Track more than performance:
delivery speed, QA discipline, how they explain bad weeks, whether they proactively flag risks, whether communication gets slippery when results dip.
Because that’s the real audit.
Not the proposal. Not the pitch.
The moment you ask for the first verified result—and see what happens next.