Marketing Metrics Aren’t Baseball Scores
Lester Wunderman is called "the Father of Direct Marketing," not because he was the first to put marketing offers in the mail, but because he was the one who started measuring the results of direct-channel efforts in a more methodical way. His marketing metrics are the predecessors of today's measurements.
Now, we use terms like 1:1 marketing or digital marketing. But, in essence, data-based marketing is supposed to be a loop, fed by learnings from the results of live or test campaigns. In other words, playing with data is an endless series of learning and relearning. Otherwise, why bother with all this data? Just do what your gut tells you to do.
Even at the very beginning of the marketer's journey, there needs to be a step for learning. Maybe not from the results of past campaigns, but from something about customer profiles and their behaviors. With that knowledge, smart marketers would target better, by segmenting the universe or building look-alike or affinity models with multiple variables. Then a targeted campaign with the "right" message and offers would follow. Then what? Data players must figure out "what worked" (or what didn't work). And the data journey continues.
So, this much is clear: if you do not measure your results, you are not really a data player.
But that doesn’t mean that you’re supposed to get lost in an endless series of metrics, either. I sometimes see what is commonly called “Death by KPI” in analytically driven organizations. That is a case where marketers are too busy chasing down a few of their favorite metrics and actually miss the big boat. Analytics is a game of balance, as well. It should not be too granular or tactical all of the time, and not too high in the sky in the name of strategy, either.
For one, in digital marketing, open and clickthrough rates are definitely "must-have" metrics. But those shouldn't be the most important ones for everyone, just because all of the digital analytics toolsets prominently feature them. I am not at all disputing the value of those metrics, by the way. I'm just pointing out that they are merely directional guidance toward success, where the real success is expressed in dollars, pounds and shillings. Clicks lead to conversions, but they are still a few steps away from generating cash.
Indeed, picking the right success metrics isn't easy; not because of the math, but because of the politics around them. Surely, aggressive organizations would put more weight on metrics related to the size of their footprint and the rate of expansion. More established and stable companies would put more weight on profitability and various efficiency measures. Folks on the supply side would measure their success differently than sales and marketing teams that must move merchandise in the most efficient ways. If someone is dedicated to a media channel, she would care for "her" channel first, without a doubt. In fact, she might even be in direct conflict with fellow marketers who are in charge of "other" channels. Who gets the credit for "a" sale in a multichannel environment? That is not an analytical decision, but a business decision.
Even after an organization settles on the key metrics that they would collectively follow, there lies another challenge. How would you declare winners and losers in this numbers game?
As the title of this article indicates, you are not supposed to conclude that one version of creative beat the other in an A/B test just because the open rate was higher for one by less than 1%. This is not some ballgame where a team becomes the winner with a walk-off home run at the bottom of the 11th inning.
Differences in metrics should have some statistical significance to bear any meaning. When we compare the heights of a classroom full of boys, will we care about differences measured in tenths of a millimeter? If you are building a spaceship, such differences would matter, but not when we measure the height of human beings. Conversion rates, often expressed with two decimal places, are like that, too.
I won’t get too technical about it here, but even casual decision-makers without any mathematical training should be aware of factors that determine statistical significance when it comes to marketing-related metrics.
- Expected and Observed Measurements: If it is about open, clickthrough and conversion rates, for example, what are "typical" figures that you have observed in the past? Are they in the 10% to 20% range, or something that is measured in fractions? And of course, for the final measure, what are the actual figures of opens, clicks and conversions for the A and B segments in test campaigns? And what kind of differences are we measuring here? Differences expressed in fractions or whole numbers? (Think about the height example above.)
- Sample Size: Too often, sample sizes are too small to provide any meaningful conclusions. Marketers often hesitate to put a large number of target names in the no-contact control group, for instance, as they think that those would be missed revenue-generating opportunities (and they are, if the campaign is supposed to work). Even after committing to such tests, if the size of the control group is too small, it may not be enough to measure “small” differences in results. Size definitely matters in testing.
- Confidence Level: How confident would you want to be: 95% or 90%? Or would an 80% confidence level be good enough for the test? Just remember that the higher the confidence level that you want, the bigger the test size must be.
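These three factors come together in sample-size planning. The sketch below shows a standard two-proportion sample-size formula in Python; the 2.0% vs. 2.5% conversion rates and the 80% power figure are hypothetical numbers chosen for illustration, not from the article.

```python
from math import ceil

# Z-scores for common two-sided confidence levels, plus the z-score
# for 80% statistical power (a common planning default; an assumption
# here, not something the article prescribes).
Z = {0.80: 1.282, 0.90: 1.645, 0.95: 1.960}
Z_POWER_80 = 0.842

def sample_size_per_cell(p1, p2, confidence=0.95):
    """Approximate names needed per cell to detect a difference
    between expected conversion rates p1 and p2."""
    z = Z[confidence] + Z_POWER_80
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(z**2 * variance / (p1 - p2) ** 2)

# Detecting a lift from a 2.0% to a 2.5% conversion rate:
print(sample_size_per_cell(0.020, 0.025, confidence=0.95))
# The same lift at a lower confidence level needs fewer names:
print(sample_size_per_cell(0.020, 0.025, confidence=0.80))
```

Note how the required cell size runs into the thousands even for a modest lift, which is exactly why small control groups fail to detect "small" differences, and why a higher confidence level demands a bigger test.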
If you know these basic factors, there are many online tools where you can enter some numbers and see if the result is statistically significant or not (just Google “Statistical Significance Calculator”). Most tools will ask for test and control cell sizes, conversion counts for both and minimum confidence level. The answer comes out as bluntly as: “The result is not significant and cannot be trusted.”
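Under the hood, most of those online calculators run some variant of a two-proportion z-test. Here is a minimal sketch of that math; the cell sizes and conversion counts are hypothetical examples, not real campaign figures.

```python
from math import sqrt, erf

def significance_check(n_a, conv_a, n_b, conv_b, confidence=0.95):
    """Two-proportion z-test: the inputs mirror what the online
    calculators ask for (cell sizes, conversion counts, confidence).
    Returns (is_significant, two_sided_p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_value < (1 - confidence), p_value

# 2.0% vs. 2.3% conversion on 5,000 names per cell:
print(significance_check(5000, 100, 5000, 115))
# The same rates on 50,000 names per cell:
print(significance_check(50000, 1000, 50000, 1150))
```

The two calls illustrate the point about size: the very same 0.3-point lift that cannot be trusted at 5,000 names per cell becomes significant at 50,000.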
If you get an answer like that, please do not commit to a decision with any long-term effects. If you want to just declare a winner and finish up a campaign as soon as possible, sure, treat the result like a baseball score of a pitchers’ duel. But at least be aware that the test margin was very thin. (Tell others, too.)
Here's some advice related to marketing success metrics:
- Always Consider Statistical Significance. Do not jump to conclusions based on insufficient test quantities, as they may not mean much. The key message here is that you should not skip the significance-test step.
- Do Not Make Tests Too Complicated. Even with just two-dimensional tests (e.g., multiple segments crossed with various creatives or subject lines), the combination of these factors may result in very small cell sizes in the end. You may end up making a decision based on fewer than five conversions in any given cell. Add other factors, such as offer or region, to the mix? You may be dealing with insignificant test sizes even before the game starts.
- Examine One Factor at a Time in Real-Life Situations. There are many things that may have strong influences on results, and such is life. Instead of looking at all possible combinations of segments and creatives, for example, evaluate segments and creatives separately. Ceteris paribus (“all other factors held constant,” which would never happen in reality, by the way), which segment would be the winner, when examined from one angle?
- Test, Learn and Repeat. As with any scientific experiment, one should not jump to conclusions after one or two tests. Again, data-based marketing is a continuous loop. It should be treated as a long-term commitment, not some one-night stand.
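The "one factor at a time" advice above can be sketched in a few lines: instead of judging every segment-by-creative cell on its own tiny counts, collapse the grid along one dimension at a time. All cell names and figures below are hypothetical, made up purely to show the mechanics.

```python
from collections import defaultdict

# Hypothetical cell-level results from a two-factor test
# (segment x creative): (sends, conversions) per cell.
cells = {
    ("loyal",  "creative_A"): (2000, 52),
    ("loyal",  "creative_B"): (2000, 61),
    ("lapsed", "creative_A"): (2000, 18),
    ("lapsed", "creative_B"): (2000, 24),
}

def marginal_rates(cells, factor_index):
    """Collapse the test grid along one dimension and report
    conversion rates for the remaining factor alone."""
    totals = defaultdict(lambda: [0, 0])
    for key, (sends, convs) in cells.items():
        totals[key[factor_index]][0] += sends
        totals[key[factor_index]][1] += convs
    return {k: convs / sends for k, (sends, convs) in totals.items()}

print(marginal_rates(cells, 0))  # segments, with creatives combined
print(marginal_rates(cells, 1))  # creatives, with segments combined
```

Each marginal comparison now rests on 4,000 sends instead of 2,000, which is the whole point: examined from one angle at a time, the counts behind each comparison are larger and the conclusions sturdier.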
Today’s marketers are much more fortunate in comparison to marketers of the past. We now have blazingly fast computers, data for every move that customers and prospects make, ample storage space for data, affordable analytical toolsets (often for free), and in general, more opportunities for marketers to learn about new technologies.
But even in the machine-driven world, where almost everything can be automated, please remember that it will be humans who make the final decisions. And if you repeatedly make decisions based on statistically insignificant figures, I must say that good or bad consequences are all on you.
Stephen H. Yu is a world-class database marketer. He has a proven track record in comprehensive strategic planning and tactical execution, effectively bridging the gap between the marketing and technology world with a balanced view obtained from more than 30 years of experience in best practices of database marketing. Currently, Yu is president and chief consultant at Willow Data Strategy. Previously, he was the head of analytics and insights at eClerx, and VP, Data Strategy & Analytics at Infogroup. Prior to that, Yu was the founding CTO of I-Behavior Inc., which pioneered the use of SKU-level behavioral data. “As a long-time data player with plenty of battle experiences, I would like to share my thoughts and knowledge that I obtained from being a bridge person between the marketing world and the technology world. In the end, data and analytics are just tools for decision-makers; let’s think about what we should be (or shouldn’t be) doing with them first. And the tools must be wielded properly to meet the goals, so let me share some useful tricks in database design, data refinement process and analytics.” Reach him at email@example.com.