Prevents wasteful ad spend on non-incremental conversions (potentially saving 20-40% of current Meta budget) and grounds campaign scaling decisions in actual P&L impact rather than platform-inflated ROAS metrics.
Implements 7-day click attribution and MER tracking to eliminate wasteful view-through budget decisions and align reporting with actual P&L.
Business Applications
HIGH Meta Ads attribution setup for TFWW and GnomeGuys (meta_ads): Switch Meta Ads Manager default attribution from '7-day click + 1-day view' to '7-day click only' for optimization decisions; implement CAPI server-side tracking immediately to reduce reliance on 'platform-assisted reporting' (C-tier)
MEDIUM Incrementality testing infrastructure (general): Set up a holdout testing framework for TFWW campaigns: create geo-excluded control groups to measure true incremental lift vs. platform-reported conversions before scaling to $5K+ monthly spend
MEDIUM AIAS dashboard metrics (sales_script): When building attribution tracking in AIAS CRM, weight last-click touchpoints higher than first-click in the opportunity pipeline; do NOT implement view-through attribution for SMS/booking flows
HIGH GnomeGuys pre-order tracking (sales_script): Ignore 'blended' ROAS from Shopify/Triple Whale; report actual paid-acquisition MER (total revenue / total ad spend) using 7-day click windows exclusively to determine whether the $15-25K profit target is viable
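The geo-holdout framework above boils down to one calculation: conversions in ad-exposed geos minus the baseline you'd expect from the excluded control geos. A minimal sketch, with hypothetical conversion counts and populations (the function names and all figures are illustrative, not from the source):

```python
# Sketch of incremental lift from a geo-holdout test (hypothetical numbers).
# Treatment geos see ads; geo-excluded control geos do not. Anything above
# the control baseline is incremental; the rest would have happened anyway.

def incremental_lift(treatment_conv_rate: float, control_conv_rate: float) -> float:
    """Relative lift of the treatment group over the holdout control."""
    if control_conv_rate <= 0:
        raise ValueError("control conversion rate must be > 0")
    return (treatment_conv_rate - control_conv_rate) / control_conv_rate

def incremental_conversions(treatment_convs: int, treatment_pop: int,
                            control_convs: int, control_pop: int) -> float:
    """Conversions attributable to ads: observed minus baseline expectation."""
    baseline_rate = control_convs / control_pop
    return treatment_convs - baseline_rate * treatment_pop

# Hypothetical: 500 conversions in ad-exposed geos (100k people) vs.
# 40 conversions in holdout geos (10k people).
lift = incremental_lift(500 / 100_000, 40 / 10_000)
extra = incremental_conversions(500, 100_000, 40, 10_000)
print(f"lift: {lift:.0%}, incremental conversions: {extra:.0f}")
```

In this hypothetical, the platform might report all 500 conversions, but only ~100 are incremental: exactly the gap between platform-reported ROAS and true lift that the test is meant to expose.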
Nathan Perdriau - appears to be a performance marketing/e-commerce growth expert specializing in Meta advertising measurement and attribution accuracy
Hook: Visual tier list format ranking attribution models with S-tier already populated by incrementality testing icon
- S-Tier: Incrementality testing is the 'gold standard of measurement'
- A-Tier: Seven-day click attribution best for aligning acquisition MER with P&L and Meta reporting
- C-Tier: Platform-assisted reporting because default attribution windows are too wide
- F-Tier: Post-purchase surveys, compared to the anecdote about witnesses misremembering the color of JFK's car - people don't remember accurately
- D-Tier: First-click attribution is unreliable because you can't track far enough back in customer journey
- B-Tier: Last-click attribution is 'almost just as useless' but better than first-click since at least you know it was the final touchpoint
- F-Tier: View-through attribution 'over attributes enormously' and most view-through conversions aren't actually incremental
- D-Tier: Triple Whale blended numbers described as 'trying to solve an unknowable reality' and not helpful
- F-Tier: Gut feel
“Incrementality testing. S tier, the gold standard of measurement”
“Seven day click. A tier, very good for building congruency between acquisition MER and the PNL and what you're seeing in Meta”
“Post purchase surveys. F tier, we all know the saying of when JFK got shot then they asked everyone what color was the car. Everyone says it was a different color because people don't actually remember correctly”
“View through attribution. Shocking, F tier over attributes enormously. Most views through conversions are not incremental due to that ad”
“Triple whale blended numbers. We're trying to solve an unknowable reality with triple whale and I don't think it's a helpful exercise”
What it is: A tier list ranking of attribution models/methods from S-tier (incrementality testing) to F-tier (gut feel, view-through, post-purchase surveys) with specific critiques of each method's reliability for measuring true ad performance
How it helps us: Critical for TFWW's Meta Pixel setup (currently listed as TODO for 'Advanced Meta tracking/analytics/CAPI') and GnomeGuys' $15-25K profit target tracking. We currently lack incrementality testing which could be inflating our perceived ROAS and wasting ad spend on non-incremental conversions.
Limitations: The critique of Triple Whale may not apply if we need basic blended reporting for GnomeGuys ops (we don't currently use Triple Whale anyway). The dismissal of post-purchase surveys ignores qualitative value for messaging insights, though he's right about attribution accuracy.
Who should see this: Dylan/media buyer for TFWW and GnomeGuys campaigns; dev team for implementing proper conversion API and attribution tracking in AIAS dashboard
✅ [SOLID] "Post-purchase surveys are F-tier and useless for attribution due to memory fallibility" — Consistent with academic research on recall bias; comments confirm this ('Gut feel 😂😂😂'). While surveys have qualitative value for brand messaging research, they're statistically unreliable for attribution - which aligns with our current approach of not using them in AIAS lead intake.
Instead: Use platform-native attribution (7-day click) combined with incrementality testing; reserve surveys for NPS/brand research only
✅ [SOLID] "View-through attribution over-attributes enormously and most view-through conversions aren't incremental" — Post-iOS14.5, view-through tracking is highly unreliable due to ATT opt-outs; creator correctly identifies that 'view' conversions often would have happened anyway via organic/brand search. This validates the need to exclude view-through from optimization metrics.
Instead: Optimize for 7-day click only; use view-through data only for creative insights (thumb-stop ratios) not conversion attribution
🤔 [PLAUSIBLE] "Triple Whale blended numbers are D-tier and trying to solve an unknowable reality" — While Triple Whale aggregates data well, blended ROAS obscures true paid efficiency - especially problematic for GnomeGuys where organic Masters content might spike during tournament week ('April 6-12, 2026'), making blended metrics misleading for ad scaling decisions.
Instead: Calculate MER (Marketing Efficiency Ratio) manually: Total Revenue / Total Ad Spend using 7-day click attribution for paid decisions
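The MER math above is simple, but worth pinning down so every report computes it the same way. A minimal sketch, using hypothetical revenue and spend figures (nothing here is from actual campaign data):

```python
# MER (Marketing Efficiency Ratio) = total revenue / total ad spend,
# both measured over the same period, with revenue attributed via
# 7-day click only. All figures below are hypothetical.

def mer(total_revenue: float, total_ad_spend: float) -> float:
    """Blended-free efficiency ratio for paid-scaling decisions."""
    if total_ad_spend <= 0:
        raise ValueError("total ad spend must be > 0")
    return total_revenue / total_ad_spend

# Hypothetical month: $60k in 7-day-click revenue on $20k total paid spend.
print(f"MER: {mer(60_000, 20_000):.2f}")  # MER: 3.00
```

The key discipline is the denominator: total ad spend across all paid channels, not a single platform's spend, so the ratio can't be flattered by moving budget between tools.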