The Setup
A programmatic network was reportedly delivering an astonishing volume of sales for our client with jaw-dropping performance metrics. The ROI for the entire program hovered around 300%, but this network claimed an eye-watering 7,000% ROI. Naturally, we were skeptical. Could this be a fluke, or perhaps a case of overly generous attribution modeling on the network’s part?
When we cross-checked the results with our own first-party data, the numbers were far less impressive. Given that the network relied entirely on display ads, we suspected significant out-of-channel impact—but just how significant was it?
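For context, ROI here means profit relative to spend. The following is a quick sketch of that arithmetic; the spend and revenue figures are hypothetical, chosen only to reproduce the two percentages above:

```python
def roi(revenue: float, spend: float) -> float:
    """Return on investment as a percentage of spend."""
    return (revenue - spend) / spend * 100

# Hypothetical figures for illustration only, not the client's actual numbers.
spend = 10_000
network_attributed_revenue = 710_000  # what the network claimed credit for (~7,000% ROI)
program_level_revenue = 40_000        # revenue consistent with the ~300% program ROI

print(f"Network-claimed ROI: {roi(network_attributed_revenue, spend):,.0f}%")
print(f"Program-level ROI:   {roi(program_level_revenue, spend):,.0f}%")
```

A gap that wide between claimed and observed performance is rarely explained by one channel simply being brilliant.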
The Test
To determine the true effectiveness of the campaign, we designed an experiment. We split the target audience into two completely isolated groups:

Test Group: This group was shown banners promoting our client’s business.
Control Group: This group was shown a public service announcement (PSA) encouraging viewers to quit smoking.
The rationale was simple: the PSA had no conceivable connection to our client’s business, which sold HR compliance posters. Any conversions attributed to the PSA group would represent the baseline volume of conversions the network would claim without adding any actual value. By comparing the performance of the real ads against that baseline, we aimed to measure the incremental value generated by the campaign; the sketch below shows the core calculation.
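Conceptually, the measurement boils down to comparing conversion rates between the two groups. Here is a minimal sketch of that calculation; the audience sizes and conversion counts are hypothetical placeholders, not the campaign's actual data:

```python
def conversion_rate(conversions: int, audience: int) -> float:
    return conversions / audience

# Hypothetical counts for illustration only.
test_conversions, test_audience = 130, 100_000        # shown the client's banners
control_conversions, control_audience = 120, 100_000  # shown the anti-smoking PSA

test_rate = conversion_rate(test_conversions, test_audience)
control_rate = conversion_rate(control_conversions, control_audience)

# Incremental lift: conversions above the baseline established by the PSA group.
lift = (test_rate - control_rate) / control_rate
print(f"Incremental lift: {lift:.1%}")  # near zero (or negative) means no real value added
```

In a real test you would also check statistical significance, but the principle is exactly this: the control group tells you what the network "earns" without doing anything useful.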
The Results
At first, the test showed a modest lift in conversions for the real ads compared to the PSA. Within weeks, however, a surprising trend emerged: the PSA began outperforming the client’s ads.
How could this happen? Why would a no-smoking PSA drive more conversions for HR compliance posters?
The Explanation
The answer lay in the ad network’s early AI-driven optimization engine. The algorithm had become adept at identifying users who were already poised to make a purchase. It would strategically place a banner (whether it was the client’s ad or the unrelated PSA) in front of the user just before they completed their purchase. The network then claimed credit for the conversion and continued optimizing based on this flawed feedback loop.
In other words, the AI wasn’t creating incremental value. It was simply taking credit for conversions that would have happened anyway.
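To see why this produces the pattern we observed, consider a toy simulation of the failure mode. Everything here is invented for illustration, including the probabilities; the point is that the creative being shown never enters the outcome:

```python
import random

random.seed(42)

def claimed_conversions(creative: str, users: int = 100_000) -> int:
    """Toy model: the optimizer finds users already about to buy and
    shows them a banner just before purchase, then claims the credit.
    The `creative` argument is deliberately ignored, because in this
    failure mode the targeting, not the message, drives the numbers."""
    claimed = 0
    for _ in range(users):
        about_to_buy = random.random() < 0.002  # user already poised to purchase
        if about_to_buy and random.random() < 0.9:  # optimizer reaches them in time
            claimed += 1  # network takes last-touch credit for the conversion
    return claimed

print("Claimed conversions, client ads:", claimed_conversions("client_ad"))
print("Claimed conversions, PSA:       ", claimed_conversions("psa"))
```

Both creatives "convert" at essentially the same rate, which is exactly what our test surfaced: the banners were spectators to purchases that were already happening.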
The Conclusion
When we brought these findings to the ad network’s support team, they had no satisfactory explanation. Ultimately, we paused the entire network. As we suspected, doing so had no noticeable impact on overall sales, confirming that the AI optimization engine was not contributing meaningful value.
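That final check amounts to a simple before-and-after comparison of overall sales. A sketch of the idea, using a hypothetical daily sales series rather than the client's real figures:

```python
from statistics import mean

# Hypothetical daily sales totals for illustration only.
sales_before = [412, 398, 405, 421, 390, 408, 415]  # network live
sales_after = [409, 401, 396, 418, 394, 411, 407]   # network paused

change = (mean(sales_after) - mean(sales_before)) / mean(sales_before)
print(f"Change in average daily sales after pausing: {change:+.1%}")
# A change indistinguishable from normal day-to-day variation is consistent
# with the channel contributing no incremental value.
```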
The Takeaways
Machine learning, much like an enthusiastic but untrained helper on a home improvement project, can be either remarkably helpful or surprisingly destructive. The key is proper training and oversight. Algorithms need to be guided by meaningful metrics and robust systems that measure true incremental value. Without these safeguards, you risk wasting significant resources on campaigns that deliver impressive numbers but no real results.
The other key lesson: whether AI is involved or not, always verify third-party results against your own first-party data. Ad networks will have access to data that you simply don't, but that doesn't mean their results are meaningful or an accurate reflection of your business outcomes. Stay curious!