Case Study Kit: Measuring Conversion Lift After Applying Account-Level Placement Exclusions


Unknown
2026-02-26

Reproducible kit to measure conversion lift after Google Ads account-level placement exclusions — A/B design, KPIs, SQL & dashboards (2026).

Stop guessing: measure real conversion lift from account-level placement exclusions

Too many teams blindly add placement exclusions in Google Ads and assume performance improves. You need proof: a reproducible way to show the incremental conversions, the change in CPA, and the impact on long-term attribution. This case study kit — updated for 2026's account-level placement exclusions and privacy shifts — gives you KPI definitions, an A/B test design, sample SQL, and dashboard templates so you can measure conversion lift with confidence.

Why this matters in 2026

Google's Jan 15, 2026 update added account-level placement exclusions, letting advertisers block sites and YouTube inventory centrally across Performance Max, Demand Gen, YouTube, and Display. That solves a management problem, but it raises measurement questions: does excluding placements improve conversions, or just shift spend elsewhere?

"Account-level placement exclusions give brands more control without undermining automation." — Google Ads announcement, Jan 2026

Two trends make this kit essential in 2026:

  • Automation-first formats (Performance Max, Demand Gen) increase opacity. You must validate any guardrail with experiment-grade measurement.
  • Privacy-forward measurement and reduced deterministic attribution make lift studies and randomized tests the gold standard for causal measurement.

What you'll get from this kit

  • Clear KPI definitions for conversion lift and ROI
  • A reproducible A/B test design that works with account-level exclusions
  • Sample SQL (BigQuery) to calculate lift and statistical significance
  • Reporting dashboard blueprint (metrics, segments, visualizations)
  • Operational checklist and rollout playbook

Core KPI definitions (use these consistently)

Define and lock these KPIs before you change exclusions. Consistent definitions prevent post-hoc rationalization.

  1. Incremental conversions: Conversions attributable to the exclusion change (treatment) minus conversions in control during the same period.
  2. Conversion Lift (%): (Conversions_treatment_per_user - Conversions_control_per_user) ÷ Conversions_control_per_user × 100.
  3. Cost per incremental conversion (CPIC): (Spend_treatment - Spend_control) ÷ Incremental_conversions.
  4. Incremental ROAS: (Incremental_revenue ÷ (Spend_treatment - Spend_control)). Use modeled revenue or LTV if available.
  5. Attribution alignment window: Standardize on 7/30/90-day conversion windows and report each. In 2026, default to the 30-day window for the primary KPI and the 90-day window for LTV analyses.
  6. Reach & impressions on excluded placements: Track pre-change impressions and spend on the to-be-excluded inventory to understand potential headroom.
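To keep these definitions consistent across reports, they can be captured as small helper functions. A minimal Python sketch of the formulas above (all input numbers are illustrative):

```python
def conversion_lift_pct(conv_per_user_t: float, conv_per_user_c: float) -> float:
    """Conversion Lift (%): relative change in conversions per user."""
    return (conv_per_user_t - conv_per_user_c) / conv_per_user_c * 100

def cpic(spend_t: float, spend_c: float, incremental_conversions: float) -> float:
    """Cost per incremental conversion (CPIC)."""
    return (spend_t - spend_c) / incremental_conversions

def incremental_roas(incremental_revenue: float, spend_t: float, spend_c: float) -> float:
    """Incremental ROAS: incremental revenue over incremental spend."""
    return incremental_revenue / (spend_t - spend_c)

# Illustrative values: 2.16% vs 1.8% conversions per user
print(round(conversion_lift_pct(0.0216, 0.018), 1))
```

Locking these in code, not just in a doc, makes every exclusion rollout report the same numbers the same way.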

Design: A/B test template for placement exclusions

There are two reproducible designs depending on scale and tooling:

Design A — User-level randomization (highest rigor)

Use a randomized user assignment (first-party cookie or signed-in user ID) to create control and treatment groups. Apply the account-level exclusion only for the treatment group programmatically (via server-side logic) or by duplicating account structures and serving to randomized audiences.

  • Sample ratio: 50/50 for maximum power; use a 30/70 split (30% treatment, 70% control) when risk-averse.
  • Duration: Run for at least one full business cycle + enough conversions for statistical power (use sample size calc below).
  • Attribution: Measure conversions by user ID; use first interaction and last-click windows as sensitivity checks.
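A common way to implement the randomized assignment in Design A is a deterministic hash of the first-party ID, so the same user always lands in the same arm across sessions. A sketch under those assumptions (the salt and ID format are hypothetical):

```python
import hashlib

def assign_arm(user_id: str, salt: str = "placement-excl-2026",
               treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to 'treatment' or 'control'.

    Hashing salt + user_id gives a stable, roughly uniform bucket in [0, 1];
    changing the salt re-randomizes the whole population for a new test.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform-ish in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

print(assign_arm("user-123"))  # stable across calls for the same ID
```

The salt doubles as an experiment identifier: one salt per test keeps assignments independent between experiments.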

Design B — Account-level campaign mirror (practical for most advertisers)

Duplicate the account or campaign structure. In the treatment account, apply the account-level placement exclusion list. Drive comparable traffic by sharing budgets or using geo-split/creative parity.

  • Use geo or day-part randomization to assign traffic if user-level randomization isn't possible.
  • Ensure creative, bid strategies, and budgets are mirrored to avoid confounders.

Sample size & power calculator (quick formula)

For proportions (conversion rates), approximate sample size per group:

n = (Z_alpha/2 + Z_beta)^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2

Where p1 = baseline conversion rate, p2 = expected treated conversion rate, Z_alpha/2 = 1.96 (95% CI), Z_beta = 0.84 (80% power). Use a 10–20% minimum detectable lift for sensible experiments.
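The formula translates directly into a quick calculator; a Python sketch using the Z values above:

```python
from math import ceil

def sample_size_per_group(p1: float, p2: float,
                          z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate users needed per group for a two-proportion test.

    p1 = baseline conversion rate, p2 = expected treated conversion rate;
    defaults give 95% confidence and 80% power.
    """
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Baseline CR 1.8%, expecting a 20% relative lift (p2 = 2.16%)
print(sample_size_per_group(0.018, 0.0216))
```

Note how quickly the requirement grows as the minimum detectable lift shrinks, which is why the kit recommends a 10–20% MDE.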

Pre-experiment audit checklist

  • Document the list of placements to be excluded and the pre-period performance (impressions, clicks, spend, conversions).
  • Capture baseline conversion rates and revenue per conversion by campaign type (Performance Max vs Display vs YouTube).
  • Map creative and audience parity between control and treatment.
  • Confirm data pipeline: Google Ads → Google BigQuery (via Ads Data Transfer or API) → Looker Studio/Looker.
  • Define and freeze the attribution windows (7/30/90 days).

Sample BigQuery SQL: measure conversion lift per user

Below is a simplified, reproducible query pattern. It assumes you have a table of impressions/clicks and a conversions table with user_id and event_time. Adjust names to match your schema.

-- Aggregate exposures and conversions by user and treatment
WITH exposures AS (
  SELECT
    user_pseudo_id,
    ANY_VALUE(treatment_flag) AS treatment, -- 1 = excluded at account level in user's experience
    COUNTIF(event_type = 'impression') AS impressions,
    COUNTIF(event_type = 'click') AS clicks
  FROM `project.dataset.ad_events`
  WHERE event_date BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY) AND CURRENT_DATE()
  GROUP BY user_pseudo_id
),
conversions AS (
  SELECT
    user_pseudo_id,
    COUNT(*) AS conversions
  FROM `project.dataset.conversions`
  WHERE event_timestamp BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) AND CURRENT_TIMESTAMP()
  GROUP BY user_pseudo_id
)
SELECT
  e.treatment,
  COUNT(DISTINCT e.user_pseudo_id) AS users,
  SUM(e.impressions) AS total_impressions,
  SUM(e.clicks) AS total_clicks,
  SUM(IFNULL(c.conversions, 0)) AS conversions,
  SAFE_DIVIDE(SUM(IFNULL(c.conversions,0)), COUNT(DISTINCT e.user_pseudo_id)) AS conv_per_user
FROM exposures e
LEFT JOIN conversions c
  USING(user_pseudo_id)
GROUP BY e.treatment;

Take the results and compute conversion lift and confidence intervals with a two-proportion z-test.

Two-proportion z-test SQL (approximate)

-- inputs: n_t, conv_t, n_c, conv_c from previous query
WITH stats AS (
  SELECT
    1 AS id,
    CAST(100000 AS FLOAT64) AS n_t, -- replace with your users in treatment
    CAST(1200 AS FLOAT64) AS conv_t,
    CAST(100000 AS FLOAT64) AS n_c,
    CAST(1100 AS FLOAT64) AS conv_c
)
SELECT
  conv_t / n_t AS p_t,
  conv_c / n_c AS p_c,
  (conv_t / n_t) - (conv_c / n_c) AS diff,
  -- pooled prop
  ((conv_t + conv_c) / (n_t + n_c)) AS p_pool,
  -- standard error
  SQRT( ((conv_t + conv_c) / (n_t + n_c)) * (1 - ((conv_t + conv_c) / (n_t + n_c))) * (1/n_t + 1/n_c) ) AS se,
  -- z and p-value
  ((conv_t / n_t) - (conv_c / n_c)) /
    SQRT( ((conv_t + conv_c) / (n_t + n_c)) * (1 - ((conv_t + conv_c) / (n_t + n_c))) * (1/n_t + 1/n_c) ) AS z_score
FROM stats;

Interpretation: the absolute diff is your effect size; the z-score, compared against ±1.96 for 95% confidence, tells you statistical significance. For small counts or skewed distributions, use exact tests or a bootstrap.
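The SQL stops at the z-score; turning the same placeholder counts into a two-sided p-value takes a few lines of Python:

```python
from math import sqrt, erf

def two_prop_ztest(conv_t: int, n_t: int, conv_c: int, n_c: int):
    """Two-proportion z-test: returns (z_score, two-sided p-value)."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    p_pool = (conv_t + conv_c) / (n_t + n_c)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    p_value = 1 - erf(abs(z) / sqrt(2))  # two-sided, via the normal CDF
    return z, p_value

# Same illustrative counts as the SQL placeholders
z, p = two_prop_ztest(1200, 100_000, 1100, 100_000)
print(round(z, 2), round(p, 3))
```

With these placeholder counts the difference clears the conventional 0.05 threshold, but only narrowly, which is exactly the regime where pre-registered decision rules matter.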

Advanced measurement: model-based uplift and Bayesian approach

When user-level IDs are noisy or conversions are rare, use a Bayesian hierarchical model to estimate uplift with credible intervals. In 2026, Demand Gen expansion and privacy noise make Bayesian approaches practical — they naturally handle shrinkage and sparse data.

Practical tip: run a Beta-Binomial model per segment (device, campaign type). Tools: BigQuery ML for simple models, or export to Vertex AI / Python for more complex hierarchical models.
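As a minimal illustration of the Beta-Binomial idea (a single segment, uniform Beta(1, 1) priors, illustrative counts), posterior sampling gives the probability that treatment beats control:

```python
import random

def prob_treatment_better(conv_t: int, n_t: int, conv_c: int, n_c: int,
                          draws: int = 20_000, seed: int = 7) -> float:
    """Monte Carlo P(rate_treatment > rate_control) under Beta(1,1) priors.

    Posterior for each arm is Beta(1 + conversions, 1 + non-conversions);
    we draw from both and count how often treatment wins.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        p_t = rng.betavariate(1 + conv_t, 1 + n_t - conv_t)
        p_c = rng.betavariate(1 + conv_c, 1 + n_c - conv_c)
        wins += p_t > p_c
    return wins / draws

print(prob_treatment_better(1296, 60_000, 1080, 60_000))
```

The full hierarchical version adds a shared prior across segments (device, campaign type) so sparse segments borrow strength from the rest; this sketch shows only the per-segment core.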

Reporting dashboard blueprint (Looker Studio or Looker, backed by BigQuery)

Build a dashboard with these components:

  • Summary KPI row: Users, Conversions, Conversion rate, Conversion lift %, Incremental conversions, CPIC, Incremental ROAS
  • Time series: conversions and conversion rate by day for control vs treatment (with annotation for exclusion rollout date)
  • Segmented performance: by campaign type (PMax, Demand Gen, Display, YouTube), device, and placement type
  • Placement heatmap: pre-change spend & conversions per placement (to validate which placements were low quality)
  • Attribution windows: toggles for 7/30/90-day windows, and first/last touch comparisons
  • Statistical significance panel: diff, standard error, z-score, p-value, or credible intervals

Visual best practices:

  • Use dual-axis sparingly. Prefer separate small multiples for conversion rate and spend.
  • Always annotate the exclusion application date and any large bid/budget changes.
  • Expose the raw numbers table for auditing (users, conversions, spend by segment).

Interpreting results — decision rules

Set these rules before the experiment:

  1. If conversion lift > 5% and p-value < 0.05 (or 95% credible interval excludes zero), keep exclusions and scale.
  2. If conversion lift ≈ 0 and CPIC increases, rollback — exclusions likely just shifted spend to similar-performing inventory.
  3. If conversion decreases significantly, roll back immediately and analyze dose-response (which placements mattered).
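Encoding these rules makes every experiment readout apply them identically. A sketch (the function name is illustrative, and the ±1 percentage-point band used to operationalize "lift ≈ 0" is an assumption):

```python
def exclusion_decision(lift_pct: float, p_value: float,
                       cpic_increased: bool, significant_drop: bool = False) -> str:
    """Apply the pre-registered decision rules to an experiment readout."""
    if significant_drop:
        # Rule 3: conversions fell significantly
        return "rollback_immediately"
    if lift_pct > 5 and p_value < 0.05:
        # Rule 1: meaningful, significant lift
        return "keep_and_scale"
    if abs(lift_pct) < 1 and cpic_increased:
        # Rule 2: flat lift with rising cost per incremental conversion
        return "rollback"
    return "continue_monitoring"

print(exclusion_decision(lift_pct=20.0, p_value=0.01, cpic_increased=False))
```

Anything that lands in "continue_monitoring" should trigger a longer run or a wider exclusion list, not an ad-hoc judgment call.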

Common pitfalls and how to avoid them

  • Confounding changes: Avoid creative, bidding, or audience changes during the test window.
  • Insufficient sample: Not enough conversions will lead to inconclusive tests — run longer or increase traffic.
  • Attribution mismatch: Confirm conversion windows and attribution settings across control and treatment.
  • Automation reallocation: Automated bidding may reallocate spend; capture bid adjustments and monitor spend drift.

Sample scenario & outcome (reproducible example)

Example: an e-commerce advertiser identified 1,200 placements producing clicks but near-zero conversions. They used Design B (account mirror) with a 50/50 geo split. Results over the test period: 60k users per cell, control CR 1.8%, treatment CR 2.16% after exclusions.

  • Users per cell: 60,000
  • Control conversions: 1,080 (1.8%)
  • Treatment conversions: 1,296 (2.16%)
  • Absolute lift: 0.36pp; relative lift: 20% (2.16 ÷ 1.8 − 1)
  • Incremental conversions: 216; incremental ROAS positive given same spend (after spend drift control)
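The scenario's arithmetic can be reproduced in a few lines, using the same formulas defined earlier in the kit:

```python
# Stated counts from the example scenario
users = 60_000
conv_c, conv_t = 1_080, 1_296
cr_c, cr_t = conv_c / users, conv_t / users

incremental = conv_t - conv_c                # incremental conversions
abs_lift_pp = (cr_t - cr_c) * 100            # absolute lift, percentage points
rel_lift_pct = (cr_t - cr_c) / cr_c * 100    # relative lift, %

print(incremental, round(abs_lift_pp, 2), round(rel_lift_pct, 1))
```

Rerunning this check against your own pre/post exports is a cheap way to catch data-pipeline errors before presenting results.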

Decision: exclude placements account-wide and incrementally widen the list. Continue to monitor for automation reallocation and long-term LTV impact.

Operational playbook: step-by-step

  1. Pre-audit placements and export a list of candidates with impressions, spend, conv rate for last 90 days.
  2. Create the exclusion list in Google Ads (staged: narrow → wide).
  3. Pick experiment design: user-randomized (A) or account mirror (B). Implement test controls.
  4. Set up BigQuery ingestion for Ads & conversion data; validate schema and timestamps.
  5. Run the test for the pre-calculated sample size or minimum 4–6 weeks depending on traffic.
  6. Use provided SQL to compute lift; visualize results in the dashboard and export to stakeholders.
  7. Follow decision rules to keep, tweak, or rollback exclusions.
  8. Iterate: expand exclusion list, re-run tests for new candidate placements.

2026 considerations and future predictions

Late 2025 and early 2026 made one thing clear: control points like account-level exclusions are essential but not sufficient. Expect these trends:

  • Greater automation across formats will drive ad systems to reallocate budget quickly; continuous experimentation becomes standard.
  • Privacy signals and modeled conversions will grow; hybrid lift measurement (randomized + modeling) will be common.
  • Platform guardrails (like Google’s account-level exclusions) will expand; advertisers who pair guardrails with experiments will maintain performance advantages.
Recommended tooling stack

  • Data: Google Ads API → BigQuery (export unsampled data)
  • Modeling: BigQuery ML, Vertex AI, or Python + PyMC for Bayesian uplift
  • Dashboards: Looker Studio for quick reporting; Looker for governed metrics
  • Experimentation: in-house user-randomization or third-party platforms that integrate with ads and first-party IDs

Final checklist before you start

  • Locked KPI definitions and attribution windows
  • Pre-period placement performance export
  • Experiment design selected and sample size computed
  • Data pipeline validated (ads + conversions into BigQuery)
  • Dashboard template created and shared

Quick takeaways

  • Don't guess: validate account-level exclusions with an A/B design.
  • Measure lift, not last-click: use user-level comparisons and appropriate windows.
  • Automate measurement: standardize SQL and dashboards so every exclusion rollout is tested.
  • 2026 priority: hybrid randomized + modeled approaches are best in a privacy-first world.

Call to action

Use this kit to run your first exclusion lift test this quarter. If you want a ready-made BigQuery template, dashboard file, and an experiment review call with a growth analyst, request our case study package and we’ll help you instrument the test end-to-end.
