Marketing Success Metrics: Response or Dollars?
It’s tempting to ask whether marketing success metrics should be about response rates or dollars. But you don’t need to ask marketers what they want; basically, they want everything.
They want big spenders who also visit frequently, purchasing flagship products repeatedly. For a long time (some say “lifetime”). Without any complaint. Paying full price, without redeeming too many discount offers. And while at it, minimal product returns, too.
Unfortunately, such customers are as rare as a knight in white armor. Just to start off, responsiveness to promotions is often inversely related to purchase value. In other words, for many retailers, big spenders do not shop often, and frequent shoppers tend to buy small items, or worse, are bargain-seekers who may just stop coming if you cut off the fat discount deals. Such a dichotomy is quite common across many types of retailers.
That is why seasoned consultants and analysts ask what brand leaders “really” want most in a marketing success metric. If you have a choice, which is more important to you: expanding the customer base or increasing the customer value? Of course, both are very important goals (and marketing success metrics). But what is the first priority for “you,” for now?
Asking that question upfront is a good defensive tactic for the consultant, because marketers tend to complain about the response rate when the value target is met, and complain about the revenue size when goals for click and response rates are achieved. Like I said earlier, they want “everything, all the time.”
So, what does a conscientious analyst do in a situation like this? Simple. Set up multiple targets and follow multiple marketing success metrics. Never bet everything on just one figure. In fact, marketers must follow this tactic as well, because even CMOs must answer to CEOs eventually. If we “know” that such key marketing success metrics are often inversely correlated, why not cover all the bases?
Case in point: I’ve seen many not-so-great campaign results where marketers and analysts targeted only the “best of the best” segment (i.e., the rare ideal customer that I described in the beginning) in modeled or rule-based targeting. If you do that, the value target may be met, but the response rate will go down, leading to a disappointing overall revenue volume. So what if the average customer value went up by 20%, when only a small group of people responded to the promotion?
A while back, I was involved in a case where a targeting model for a luxury car accessory retailer tanked badly. Actually, I shouldn’t even say that the model didn’t work, because it performed exactly the way the user intended. The campaign based on that model didn’t work because the account manager at the time followed the client’s instructions too literally.
The luxury car accessory retailer carried various lines of products — from a luxury car cover costing over $1,000 to small accessories priced under $200. The client ordered the account manager to go after the high-value target, saying things like “who cares about those small-timers?” The resultant model worked exactly that way, achieving great dollar-per-transaction value, but failing at generating meaningful responses. During the back-end analysis, we found that the marketer indeed had very different segments within the customer base, and going only after the big spenders should not have been the strategy at all. The brand needed a few more targets and models to generate meaningful results on all fronts.
When you go after any type of “look-alikes,” do not just go after the ideal targets in your head. Always look at the customer profile reports to see if you have dual or multiple universes in your base. A dead giveaway? Look at the disparity among the customer values. If your flagship product is much more expensive than the “average” transaction or customer value in your own database, that means most of your customers are NOT going for the most expensive option.
If you target only the biggest spenders, you will be ignoring the majority of small buyers, whose profiles may be vastly different from those of the whales. Worse yet, if you target the “average” of those two dichotomous groups, you will be shooting at phantom targets. Unfortunately, in the world of data and analytics, there is no such thing as an “average customer,” and going after phantom targets is not much different from shooting blanks.
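The “phantom average” problem is easy to demonstrate with a few lines of Python. The transaction values below are invented for illustration only: a dichotomous base of many small accessory buyers plus a couple of flagship-level whales.

```python
from statistics import mean

# Invented transaction values for a dichotomous customer base:
# many small buyers, a few whales. Numbers are hypothetical.
small_buyers = [120, 150, 90, 110, 140, 160, 100, 130]   # accessory-level spenders
whales       = [1100, 1250]                               # flagship-level buyers

all_values = small_buyers + whales
avg = mean(all_values)
print(f"Overall average: ${avg:.2f}")  # $335.00 with these numbers

# How many actual customers sit anywhere near that "average customer"?
near_average = [v for v in all_values if 0.8 * avg <= v <= 1.2 * avg]
print(f"Customers within 20% of the average: {len(near_average)} of {len(all_values)}")
```

With this made-up base, not a single real customer falls within 20% of the overall average — the “average customer” is a phantom sitting in the empty space between the two groups.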
On the reporting front, when chasing after often-elusive targets, one must be careful not to get locked into the few measurements that happen to be popular in the organization. Again, I recommend looking at the results in every possible way to construct the story of “what really happened.”
- Response Rate/Conversion Rate: Total conversions divided by total contacted. Much like open and click-through rates, but I’d keep the original denominator (everyone contacted, not just those who opened and clicked) to provide a reality check for everyone. Often, the “real” response rate (or conversion rate) will be far below 1% when divided by the total mail volume (or contact volume). Nonetheless, these are very basic and important metrics. Always go that far, and do not stop at opens and clicks.
- Average Transaction Value: If someone converted, what is the value of the transaction? If you collect these figures over time on an individual level, you will also obtain Average Value per Customer, which in turn is the backbone of the Lifetime Value calculation. You will also be able to see the effect of subsequent purchases down the line, in this competitive world where most responders are one-time buyers (refer to "Wrestling the One-Time Buyer Syndrome").
- Revenue Per 1,000 Contacts: Revenue divided by total contacts, multiplied by 1,000. This is my favorite, as this figure captures both responsiveness and transaction value at the same time. From here, one can calculate the net margin of a campaign on an individual level, if the acquisition or promotion cost is available at that level (though in real life, I would settle for campaign-level ROI any time).
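The three metrics above are simple arithmetic, and a short Python sketch makes the definitions unambiguous. The input figures (100,000 contacts, 800 conversions, $96,000 revenue) are hypothetical.

```python
def campaign_metrics(total_contacts, conversions, total_revenue):
    """Return the three basic campaign metrics described above."""
    response_rate = conversions / total_contacts              # against the FULL contact volume
    avg_transaction_value = total_revenue / conversions       # value per converted buyer
    revenue_per_1000 = total_revenue / total_contacts * 1000  # captures both at once
    return response_rate, avg_transaction_value, revenue_per_1000

# Hypothetical campaign: 100,000 contacts, 800 conversions, $96,000 in revenue.
rr, atv, rpm = campaign_metrics(100_000, 800, 96_000.0)
print(f"Response rate:         {rr:.2%}")     # 0.80% -- well below 1%, as noted above
print(f"Avg transaction value: ${atv:,.2f}")  # $120.00
print(f"Revenue per 1,000:     ${rpm:,.2f}")  # $960.00
```

Note how the last figure blends the first two: a campaign with a high average order value but a tiny response rate can still look weak on revenue per 1,000 contacts, which is exactly why it is worth tracking.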
These are just three basic figures covering responsiveness and value, and marketers may gain important intelligence by looking at them by (but not limited to) the following elements:
- Source of the contact list
- Segment/Selection Rule/Model Score Group (i.e., how the target was selected)
- Offer and Creative (hopefully someone categorized an endless series of these)
- Wave (if there are multiple waves or drops within a campaign)
- Other campaign details such as seasonality, day of the week, daypart, etc.
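Slicing the same metrics by elements like these is just a grouped aggregation. A minimal sketch in plain Python follows; the contact-level records, segment names, and field names are all made up for illustration.

```python
from collections import defaultdict

# Hypothetical contact-level campaign records (sources, segments,
# and values are invented for illustration only).
records = [
    {"list_source": "house",  "segment": "whales",  "converted": True,  "revenue": 900.0},
    {"list_source": "house",  "segment": "bargain", "converted": True,  "revenue": 40.0},
    {"list_source": "house",  "segment": "bargain", "converted": False, "revenue": 0.0},
    {"list_source": "rented", "segment": "whales",  "converted": False, "revenue": 0.0},
    {"list_source": "rented", "segment": "bargain", "converted": True,  "revenue": 35.0},
]

def metrics_by(records, key):
    """The three basic metrics, grouped by any campaign element (`key`)."""
    groups = defaultdict(lambda: {"contacts": 0, "conversions": 0, "revenue": 0.0})
    for r in records:
        g = groups[r[key]]
        g["contacts"] += 1
        g["conversions"] += int(r["converted"])
        g["revenue"] += r["revenue"]
    return {
        name: {
            "response_rate": g["conversions"] / g["contacts"],
            "avg_transaction_value": (g["revenue"] / g["conversions"]) if g["conversions"] else 0.0,
            "revenue_per_1000": g["revenue"] / g["contacts"] * 1000,
        }
        for name, g in groups.items()
    }

# The whales and the bargain-seekers look very different on every metric.
print(metrics_by(records, "segment"))
```

Swapping `"segment"` for `"list_source"` (or any other element above) reuses the same function, which is the point: one set of metrics, viewed through every available lens.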
In the ultimate quest to find “what really works,” it is prudent to look at these metrics on multiple levels. For instance, you may find that these key metrics behave differently in different channels, and that combinations of offers and other factors trigger responsiveness and value in previously unforeseen ways.
No one knows all of the answers before the tests, but after a few iterations, marketers will learn what the key segments within the target are, and how to treat each of them differently going forward. That is what we commonly refer to as a scientific approach, and the first step is to recognize that:
- There may be multiple pockets of distinct buyers,
- No single type of metric will tell us the whole story, and
- We are not supposed to batch and blast to a one-dimensional target with a uniform message.
I am not at all saying that all of the popular digital marketing metrics are irrelevant; but remember that opens and clicks are just directional indicators on the way to conversion. And the value of the customers must be examined in multiple ways, even after the conversion, because there are so many ways to define success (and failure), and each should be a lesson for future improvements in targeting and messaging.
It may be out of fashion to use this old term in this century, but that is what “closed-loop” marketing is all about, regardless of the popular promotion channels of the day.
The names of metrics may have changed over time, but the measurement of success has always been about engagement level and the money that it brings.
Stephen H. Yu is a world-class database marketer. He has a proven track record in comprehensive strategic planning and tactical execution, effectively bridging the gap between the marketing and technology world with a balanced view obtained from more than 30 years of experience in best practices of database marketing. Currently, Yu is president and chief consultant at Willow Data Strategy. Previously, he was the head of analytics and insights at eClerx, and VP, Data Strategy & Analytics at Infogroup. Prior to that, Yu was the founding CTO of I-Behavior Inc., which pioneered the use of SKU-level behavioral data. “As a long-time data player with plenty of battle experiences, I would like to share my thoughts and knowledge that I obtained from being a bridge person between the marketing world and the technology world. In the end, data and analytics are just tools for decision-makers; let’s think about what we should be (or shouldn’t be) doing with them first. And the tools must be wielded properly to meet the goals, so let me share some useful tricks in database design, data refinement process and analytics.” Reach him at firstname.lastname@example.org.