Lessons in Purpose-Built Predictive Modeling
By Emma Davis

A Working Planet client thought they had it figured out: a finely tuned predictive model delivering real-time performance insights. But after months of testing, we uncovered gaps in what looked, on paper, like a perfectly sound approach.
Context: OCI / VBB
For businesses with long or complex sales cycles, the most effective way to get Google Ads’ machine learning to optimize toward real value is to pair offline conversion imports (OCI) with predictive modeling and bid to a target return on ad spend. This approach is known as value-based bidding (VBB). The goal is to give Google real-time feedback on the relative value of each individual conversion.
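For readers who want to picture the mechanics, here’s a minimal sketch of what an OCI feed can look like using the Google Ads API Python client. The customer ID, conversion action, and lead fields below are placeholders for illustration, not the client’s actual setup:

```python
from google.ads.googleads.client import GoogleAdsClient

# Hypothetical IDs for illustration only.
CUSTOMER_ID = "1234567890"
CONVERSION_ACTION = "customers/1234567890/conversionActions/987654321"

def upload_modeled_values(leads):
    """Send each lead's modeled value back to Google Ads via OCI."""
    client = GoogleAdsClient.load_from_storage("google-ads.yaml")
    service = client.get_service("ConversionUploadService")

    conversions = []
    for lead in leads:
        conv = client.get_type("ClickConversion")
        conv.conversion_action = CONVERSION_ACTION
        conv.gclid = lead["gclid"]  # click ID captured at form fill
        conv.conversion_date_time = lead["timestamp"]  # e.g. "2024-08-31 14:05:00-05:00"
        conv.conversion_value = lead["modeled_value"]  # output of the predictive model
        conv.currency_code = "USD"
        conversions.append(conv)

    request = client.get_type("UploadClickConversionsRequest")
    request.customer_id = CUSTOMER_ID
    request.conversions.extend(conversions)
    request.partial_failure = True  # surface per-row errors without failing the batch
    return service.upload_click_conversions(request=request)
```

Each row ties a modeled dollar value back to the original ad click, which is what lets Google’s bidding learn from offline outcomes.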
That was the plan. Then the hiccups started.
Hiccup 1. Inconsistent Conversion Data
A couple of months into testing, we discovered the client’s systems weren’t communicating reliably with Google Ads. At times, feedback lagged by days.
The second time it happened, and the first time it materially impacted performance, we switched to a system we trusted. Machine learning can only be as strong as the data it receives; delayed feedback means distorted optimization.
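In hindsight, even a simple freshness check would have flagged the lag before it hurt performance. A hypothetical sketch (the threshold and alerting are assumptions, not our actual monitoring):

```python
from datetime import datetime, timedelta, timezone

MAX_LAG = timedelta(hours=24)  # assumed tolerance; tune to your sales cycle

def check_feed_freshness(crm_latest: datetime, uploaded_latest: datetime) -> None:
    """Alert when the conversion feed to Google Ads falls behind the CRM."""
    lag = crm_latest - uploaded_latest
    if lag > MAX_LAG:
        # In practice this would page someone or post to a channel.
        print(f"WARNING: conversion feed lagging by {lag}; "
              f"Google is optimizing on stale values.")

check_feed_freshness(
    crm_latest=datetime(2024, 9, 3, 12, 0, tzinfo=timezone.utc),
    uploaded_latest=datetime(2024, 9, 1, 9, 30, tzinfo=timezone.utc),
)
```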
Hiccup 2. Model Variables That Hurt Optimization
The predictive model itself was strong for forecasting overall business performance. But some of its variables conflicted with how Google’s algorithm learns.
Seasonality Offsets
After resolving the data connection issue, we hit another snag. The model included a seasonality adjustment: on August 31, leads were valued at one level; on September 1, historical data said they were worth roughly half as much.
When we crossed that threshold, Google received dramatically different value signals overnight.
The campaigns initially overperformed, fueled by strong late-August signals. Then performance collapsed as the algorithm tried to reconcile the sudden drop in assigned value. From Google’s perspective, it had been doing great. Then suddenly it wasn’t.
Lesson: Extract seasonality from the model used to train Google. What’s useful for forecasting isn’t always useful for machine learning inputs.
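Concretely, if the forecasting model multiplies an intrinsic lead quality by a seasonal factor, the fix is to divide that factor back out before uploading. A simplified sketch with invented numbers:

```python
# Hypothetical seasonal factors from the forecasting model (illustrative only).
SEASONAL_FACTOR = {"august": 1.0, "september": 0.5}

def forecast_value(base_quality: float, month: str) -> float:
    """Full business forecast: intrinsic lead quality times the seasonal factor."""
    return base_quality * SEASONAL_FACTOR[month]

def training_value(base_quality: float, month: str) -> float:
    """Value passed to Google: seasonality stripped, so signals stay stable
    across the August 31 -> September 1 boundary."""
    return forecast_value(base_quality, month) / SEASONAL_FACTOR[month]

# Identical leads on either side of the boundary:
print(forecast_value(100.0, "august"), forecast_value(100.0, "september"))  # 100.0 50.0
print(training_value(100.0, "august"), training_value(100.0, "september"))  # 100.0 100.0
```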
Source as a Heavy Variable
In the broader predictive model, a referral lead might be worth 10x a non-brand search lead. That level of nuance improved overall forecasting accuracy.
But because source carried so much weight, the relative values being passed to Google lacked sufficient differentiation within the channel. The algorithm wasn’t getting clear enough signals about what was good and what was bad inside its own ecosystem.
If source is one of the biggest predictors of value in your model and you apply those values directly to value-based bidding, you risk chronically over- or under-valuing specific networks or campaigns. Worse, you muddy the feedback loop: even as you optimize campaigns, landing pages, or the sales funnel, the algorithm may not be receiving clean, actionable signals. You can get stuck in a self-fulfilling prophecy of poor performance.
Lesson: Build a purpose-built model specifically designed to train the network, separate from the broader business forecasting model.
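As a rough sketch of what that separation can look like (the multipliers are invented, apart from the 10x referral gap mentioned above), the purpose-built value divides the heavy cross-channel adjustments back out of the forecast:

```python
# Hypothetical adjustment factors pulled from the forecasting model.
SOURCE_MULTIPLIER = {"referral": 10.0, "non_brand_search": 1.0}
SEASONAL_FACTOR = {"august": 1.0, "september": 0.5}

def forecast_value(base_quality: float, source: str, month: str) -> float:
    """Broader business model: every adjustment applied, best for forecasting."""
    return base_quality * SOURCE_MULTIPLIER[source] * SEASONAL_FACTOR[month]

def google_training_value(base_quality: float, source: str, month: str) -> float:
    """Purpose-built value for VBB: source and seasonality divided back out,
    leaving only the within-channel quality signal Google can act on."""
    full = forecast_value(base_quality, source, month)
    return full / (SOURCE_MULTIPLIER[source] * SEASONAL_FACTOR[month])

# A September referral lead and a September search lead of equal underlying
# quality now send Google the same value instead of a 10x gap:
print(google_training_value(100.0, "referral", "september"))          # 100.0
print(google_training_value(100.0, "non_brand_search", "september"))  # 100.0
```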
The Outcome: A Purpose-Built Predictive Model
The process was a roller coaster: unexplained performance swings, strategic adjustments, waiting for learning periods to reset. And we learned some hard lessons along the way.
But once we implemented a custom model built specifically for Google optimization, performance changed dramatically.
Over the next four months:
- CAC from Google decreased by an average of 30% month over month.
- Ultimately, CAC dropped 75% from where it was when we assumed management.

If you’re curious how we designed the new model and structured the feedback loop, let’s talk.