Testing: The Dirty Dozen
Mistake #8: Assuming the Rollout Will Perform in Line With the Test
More often than not, test results aren't duplicated upon rollout. This applies to both creative and list tests. The statistical phenomenon of regression to the mean is at play here, and a great part of it has to do with the confidence level and margin of error you factor into determining the test size.
The higher the confidence level and the lower the margin of error, the less variance from test to rollout. Of course, this assumes everything else—consumer attitudes, seasonality, world events, mail delivery—is constant. Which is rarely the case.
Given the impossibility of a 100-percent consistent environment, marketers simply cannot assume that the creative package that beat the old control by 80 basis points will perform as strongly in the rollout. For planning and pro-forma purposes, err on the side of caution. Expect less, and if you deliver more—great!
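For pro-forma purposes, "expect less" can be expressed as a simple haircut on the tested lift. The 50% discount factor and the rates below are hypothetical planning assumptions, not a rule:

```python
def rollout_projection(control_rate, test_lift_bp, discount=0.5):
    """Project a rollout response rate, discounting the tested lift.

    A conservative convention is to credit only part of the observed
    lift; discount=0.5 keeps half of it (a hypothetical choice).
    """
    lift = test_lift_bp / 10_000  # convert basis points to a decimal
    return control_rate + discount * lift

# Hypothetical: test beat a 2.00% control by 80 basis points
projected = rollout_projection(0.0200, 80)
print(f"plan at {projected:.2%}, not {0.0200 + 0.0080:.2%}")
```

Planning against the discounted figure builds the expected regression to the mean into the pro forma, so an on-target rollout reads as a success rather than a shortfall.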
Mistake #9: Neglecting the Postmortem
Neglecting to do a postmortem review of the packages, offers, lists or scripts tested is a common oversight. By reviewing both the objective metric results and the less objective qualitative elements, you capture important learning.
The caution here is that you don't want to get bogged down in subjective interpretation. Did a creative breakthrough package win because the offer was for a limited time only, or because the tone of the letter was warm and fuzzy? Or was it the size of the envelope, the Johnson box, the brevity of the copy, or a combination of all of these? The truth is you can spend a lot of time speculating, and even more time testing one element at a time, but for what?
The more thoroughly you go through a process of clinically testing and reading results, the keener your staff's sense for identifying successful techniques and attributes. Postmortems illuminate winners and losers and, when done right, drive home the correlation between what works and what doesn't.