Testing: The Dirty Dozen
By Shawn D. Morris
No direct marketer worth their salt will claim to have never made a mistake. Experienced direct marketers will tell you that a disciplined testing regimen is an integral part of a well-constructed direct response strategy. Although initial success may be possible without it, sustaining that success is not.
The advantage of constructing, conducting and observing hundreds of tests over the course of a career is that there's as much to learn from the mistakes as the wins. Here are a dozen dirty mistakes to learn from and avoid—compiled from the experience of many, including myself.
Mistake #1: The Temptation to Tinker
The temptation to tinker is powerful. Many individuals, often those with less experience, think they can tell better creative from worse and will go to great lengths to edit copy, modify graphics, eliminate elements or (heaven forbid) change a letter's serif font to a more "contemporary" sans serif font.
Unfortunately, the end result is often that the flow, integrity and effectiveness of the piece are compromised. The only exceptions to this are:
> if the creative simply is not direct response oriented;
> if the cost elements are just too expensive; and
> if the legal or regulatory folks emphatically nix the concept or any element of the package.
Mistake #2: Not Enough Testing or Worse ... Not Testing at All
This is the most unforgivable and grievous mistake of all. Consider the true case of a large, well-established direct mailer that had not conducted any tests—media, creative, offer—over the course of several years. In fact, even over the past 10 years its total testing log was woefully sparse. Despite this, the company was doing well, making strong profits but, you guessed it, had seen steadily eroding marketing efficiencies. The existing control package was producing diminished but still adequate results.
The company faced two problems: 1) no one would ever know whether the results could have been improved, and 2) if the control went suddenly from moderate decline to rapid death—not an uncommon occurrence—the company would be at a major disadvantage with no alternative at the ready.
Some experts recommend you allocate roughly 10 percent of your direct marketing budget to testing. Don't expect miracles, though. Over the long haul, most tests will not beat the control. The key is to aggressively test your best element ideas, especially when you are going against a well-established control.
It's also important to have a mix of testing strategies. Element tests—color, pricing, copy, etc.—usually do not produce the radically different results that breakthrough tests, such as an entirely new acquisitions direct mail piece or a completely fresh telemarketing script, will. But, because element tests typically cost less, you can run them more often.
Mistake #3: Non-clinical Testing
The transgressions here are numerous. Not taking random samples, testing too many elements, testing too large or small a quantity, and not seeding lists are all examples of abandoning the discipline of a true test.
I once encountered a large mortgage company that prided itself on having arrived at a creative control through repeated testing. This was a top-five lender with ample resources. Through a series of questions, I found that in determining how to set up their test-vs.-control cells, they simply "split the country in half!" People living in the eastern half of the country received the control and those in the western half got the new creative. Ouch! This set up all sorts of behavioral, demographic and lifestyle biases that basically invalidated the testing. Sad but true.
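By contrast, a clean split draws the test cell at random from the entire file, so geography never enters into it. Here is a minimal sketch in Python (any tool will do); the function name, the 10 percent test fraction and the fixed seed are illustrative assumptions, not anyone's actual method:

```python
import random

def split_test_control(customer_ids, test_fraction=0.10, seed=42):
    """Draw a random test cell from the full file, so every record,
    East Coast or West, has an equal chance of selection.
    (Illustrative sketch; names and the 10 percent fraction are
    assumptions, not taken from the article.)
    """
    rng = random.Random(seed)          # fixed seed keeps the split reproducible
    shuffled = list(customer_ids)      # copy so the source file is untouched
    rng.shuffle(shuffled)              # randomize order across the whole file
    cut = int(len(shuffled) * test_fraction)
    return shuffled[:cut], shuffled[cut:]  # (test cell, control cell)

# 100,000 records, 10 percent into the test cell
test_cell, control_cell = split_test_control(range(100_000))
print(len(test_cell), len(control_cell))  # 10000 90000
```

A fixed seed also makes the split auditable after the fact: if a result looks odd, you can reproduce the exact cells.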
Mistake #4: No Control
With an element test, you'll know the impact of the element you've tested. With a breakthrough test, you will not know what specific element(s) drove the difference in results. Nor do you care—you're testing entire packages.
Typically, element tests are more aggressively engaged once a history of testing different breakthrough packages has evolved. A conscientiously engaged element-testing strategy should allow you to move your control to a higher, more profitable level.
Be sure, though, to isolate testing to one variable at a time—otherwise you won't know which change (or combination thereof) produced the change in results.
Mistake #5: Using a General Agency
Direct response testing is as scientific as it is artistic. To entrust your testing program to a novice is begging for trouble. The occasional large general-advertising agency will tout its direct marketing prowess, but this discipline is about knowing the rules and having the experience to apply them.
Too many direct marketing novices concentrate on aesthetics, neglecting the offer and the copy. Remember, it doesn't have to be pretty to work. An experienced direct marketer, with hundreds of tests—successful and not-so-successful—to their credit, should get you to the point of viable marketing results much sooner.
Mistake #6: Using One Agency (or Person) to Handle All Direct Response Testing
You can improve your chances of success by spreading your testing dollars around. Use a variety of resources to test your direct mail creative, your lists, your telemarketing scripts and your opt-in e-mail campaign. Dedicating your entire testing budget to a single internal or external outlet not only will lead to eventual burnout of results, but will cost you huge opportunity dollars that are lost forever.
Mistake #7: Majoring in Minors
Too many direct marketers are hooked on testing minute package element variations—mail indicia, stamp treatments, color and texture of the envelope—only to find that the results change little.
This strategy is even more harmful when a company neglects to test new offers and new creative concepts as a result. Spend the most money on testing where the results matter most. If you're going to test the envelope treatment, test something significant, like going from a #10 envelope to a 9" x 12" flat package. Modify the contents as needed to fit the new size and see what the consumer says through an applied, disciplined test.
Mistake #8: Assuming the Rollout Will Perform in Line With the Test
More often than not, test results aren't duplicated upon rollout. This applies to both creative and list tests. The old rule of regression to the mean is at play here, and a great part of it has to do with the confidence levels and margins of error you factor into determining the test size.
The higher the confidence level and the lower the margin of error, the less variance from test to rollout. Of course, this assumes everything else—consumer attitudes, seasonality, world events, mail delivery—is constant. Which is rarely the case.
Given the impossibility of a 100-percent consistent environment, marketers simply cannot assume that the creative package that beat the old control by 80 basis points will perform as strongly in rollout execution. For planning and pro-forma purposes, err on the side of caution. Expect less, and if you deliver more—great!
Mistake #9: Neglecting the Postmortem
Neglecting to do a postmortem review of packages, offers, lists or scripts tested is a common oversight. Reviewing both the objective metric results and the less objective qualitative elements sets an important learning process in motion.
The caution here is that you don't want to get bogged down in subjective interpretation. Did a creative breakthrough package win because the offer was for a limited time only or was it because the tone of the letter was warm and fuzzy? Or was it the size of the envelope, the Johnson box, the brevity of the copy or a combination of all this? The truth is you can spend a lot of time speculating and even a lot more time testing one element at a time, but for what?
The more thoroughly you go through a process of clinically testing and reading results, the keener your staff's sense for identifying successful techniques and attributes. Postmortems illuminate winners and losers and, when done right, should drive home a correlation between what works and what doesn't.
Mistake #10: Shelving a Close Runner-Up
Just because a test didn't beat the control doesn't mean it should be abandoned. Sharp marketers will take almost-winners and look for ways to put them to work viably.
For instance, take a creative package that outperforms the control but, because of higher in-the-mail costs, does not produce a return on investment that meets your company's hurdle rate. Perhaps the package would be effective if it were mailed only to those lists that typically return higher margins. Another technique is to take an "extractive" approach and lift an element from a test to incorporate into the control.
Mistake #11: Forgetting Old Controls
Old controls were great until they were ushered out the back door by some new, flashy upstarts. Oh well, they were losers anyway—right? Wrong! Old controls, especially in creative, often can be dusted off, re-tested and sometimes successfully reintroduced.
All sorts of factors can contribute to the demise of a control position, from overuse resulting in package, envelope or offer fatigue, to shifts in the core audience's demographics or psychographics. The reality of direct marketing is that many past controls have been re-engaged.
Here, too, extractive techniques can work. By taking each element individually and testing accordingly, marketers are sometimes able to raise the efficiency of current control positions.
Mistake #12: Inadequate Reporting and Measurement of Results
Inadequate reporting and measurement can ruin your testing program. So it's important to know from the outset how, and by what criteria, winning positions will be determined.
Make sure a direct response-oriented recording and reporting discipline is firmly established. Know your evaluative criteria for determining successful tests before you test. Is return on investment the ultimate goal, or is it top-line revenue? Understand how you'll measure results and which factors you'll weigh. Make sure you test in accordance with statistically valid rule sets.
Calculate what size mailing you'll need to conduct to get results that fall within a certain margin of error at a given confidence level. Be careful not to test at an unreasonably high confidence level with too tight a margin of error—if you do, you will end up mailing far more pieces than necessary. Usually a confidence level of 85 percent to 90 percent is adequate.
If all this sounds like Dutch to you, pick up a book on statistics or even a basic direct marketing text. It will provide confidence tables and the formula to determine appropriate mailing and calling sample test sizes.
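In the meantime, here is a minimal sketch in Python of the standard sample-size calculation for a proportion, n = z² × p(1 − p) / E². The 2 percent response rate and quarter-point margin of error below are assumptions for illustration, not figures from this article:

```python
import math
from statistics import NormalDist

def test_mailing_size(expected_response, margin_of_error, confidence=0.90):
    """Sample size needed to estimate a response rate within a given
    margin of error: n = z^2 * p * (1 - p) / E^2.
    Arguments are proportions, e.g. 0.02 for a 2 percent response
    and 0.0025 for a margin of +/- 0.25 percentage points.
    (Illustrative figures below are assumptions, not from the article.)
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided z-score
    p = expected_response
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# A 2 percent expected response, read to within +/- 0.25 points:
print(test_mailing_size(0.02, 0.0025, confidence=0.90))  # about 8,485 pieces
print(test_mailing_size(0.02, 0.0025, confidence=0.99))  # about 20,808 pieces
```

Note how moving from 90 percent to 99 percent confidence more than doubles the required mailing, which is exactly the over-testing trap described above.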
Bonus Tip: Back Test
Consider this a baker's dozen … lagniappe … a little something extra. Too many direct marketers neglect to back test. When a new element or breakthrough concept beats your control, that's great. But don't forget to retest the new winner against the control. Results can vary, so make sure you do indeed have a true winner.
A well-designed and well-orchestrated testing strategy can reap huge rewards for just about any direct marketing enterprise. This piece only scratches the surface of a deep and disciplined approach to testing.
If nothing else, remember this from Malcolm Decker, who came up with two inviolable rules of direct marketing:
Rule #1: Test everything.
Rule #2: See rule #1.
Shawn D. Morris is president of SDM Associates and affiliated with JCG Ltd., an international direct marketing consulting group specializing in the insurance industry. He can be reached at ShawnMorris@JCG-Ltd.com or (615) 308-3590.