Six Little Secrets of Test Design and Analysis
Testing is the foundation on which we build a direct marketing business. Therefore, without proper knowledge of test planning and analysis, we are not in the best position possible to help our company grow. It’s that simple!
There are many secrets of test design and analysis guaranteed to maximize the effectiveness of your testing programs and ensure you stay within budget. Let me share six of them with you now.
Secret 1: When rolling out a new promotional strategy for the first time, you must always back-test the old package at least once. If the new package does not hold up in rollout, you may very well be misled into thinking it was a loser. In reality, it may have performed badly due to a name selection error, list fatigue or seasonality. Without a back test you will not be able to determine the true reason the new package did poorly and, as a result, you might erroneously revert to the old package. Be careful.
Secret 2: When planning new creative or price tests, you must first conduct a break-even analysis to determine the appropriate sample sizes. Many marketers simply set their test sample sizes to be able to read a 10 percent difference as significant. In most cases this is not tight enough. Every test is different, and some will require a tighter read than others. A break-even analysis tells you the maximum amount of error you can tolerate in your test results. For example, if you are testing a new, more expensive format that needs five additional orders per thousand pieces mailed to break even versus the control, set your sample sizes to read a minimum increase of .005 (that is, half a percentage point) over your control response rate as significant.
Secret 3: Only make one change at a time to your test panels. If you make multiple changes to your test packages, you will not be getting the most out of your testing program.
For example, if your test package loses against your control, it may very well be a result of only one of the changes you made to the package and not all. You will never know which changes were working and which were not. As a result, you may overlook potentially winning elements. This simply is not smart testing.
Secret 4: No direct marketer should ever consider evaluating test results with a confidence level lower than 85 percent to 90 percent. To do so assumes way too much risk. And fishing for a confidence level that yields a significant result should never be practiced.
The proper rules that any good direct marketer should follow regarding significance are as follows:
Begin by assessing your test at 95 percent confidence.
- If it is significant at 95 percent, check whether it also is significant at 99 percent. If so, you definitely have a winner, and rollout should be a no-brainer. If it is significant at 95 percent but not at 99 percent, a partial to full rollout at a minimum should be considered.
- If it is not significant at 95 percent, check whether it is significant at 90 percent. If it is significant at this lower level, a retest or partial rollout certainly is in the cards. If it is not significant even at 90 percent, you definitely have a loser and should not consider the test any further.
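This decision ladder can be sketched with a two-proportion z-test. The order counts and mail quantities below are hypothetical, and the wording of the returned recommendations is my own shorthand for the rules above:

```python
from math import sqrt
from statistics import NormalDist

def confidence_ladder(orders_ctl, mailed_ctl, orders_test, mailed_test):
    """Two-sided two-proportion z-test on response rates, mapped onto
    the 99 / 95 / 90 percent decision ladder described above."""
    p1 = orders_ctl / mailed_ctl
    p2 = orders_test / mailed_test
    pooled = (orders_ctl + orders_test) / (mailed_ctl + mailed_test)
    se = sqrt(pooled * (1 - pooled) * (1 / mailed_ctl + 1 / mailed_test))
    z = abs(p2 - p1) / se
    confidence = 2 * NormalDist().cdf(z) - 1  # two-sided confidence level
    if confidence >= 0.99:
        return "winner: roll out"
    if confidence >= 0.95:
        return "partial to full rollout"
    if confidence >= 0.90:
        return "retest or partial rollout"
    return "not significant at 90 percent: drop"

# Hypothetical: control 200 orders / 10,000 mailed (2.0%),
# test 260 orders / 10,000 mailed (2.6%).
print(confidence_ladder(200, 10_000, 260, 10_000))
```

A 0.6-point lift on 10,000-piece panels clears 99 percent confidence; a 0.1-point lift on the same panels would not even clear 90 percent.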
Secret 5: A full factorial test design is rarely warranted. It makes sense only if you truly believe all the elements you are testing will interact with one another. Most elements simply do not interact with one another.
For example, if you are testing a price increase and a color change to the outer envelope of a direct mail package, most likely these two elements will not interact with each other. This means that if the price increase causes a 10 percent decrease in response with a yellow outer envelope, it will most likely do the same with a blue outer envelope. In other words, price is independent of outer envelope color. Therefore, there is no need to test both combinations. One combination will tell you everything you need to know.
Things that typically interact are price, offer, incentive and maybe a call to action. Use your business smarts and only test those combinations you think are necessary. Save your testing budget.
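The budget impact is easy to see by counting panels. The specific elements and levels below are hypothetical examples, not from the article:

```python
from itertools import product

# Hypothetical test elements for a mail package, two levels each.
elements = {
    "price": ["$19.95", "$24.95"],
    "envelope": ["yellow", "blue"],
    "offer": ["free gift", "no gift"],
}

# Full factorial: every combination of levels gets its own test panel.
full_factorial = list(product(*elements.values()))
print(len(full_factorial))  # 2 x 2 x 2 = 8 panels

# One change at a time: the control plus one panel per changed element.
one_at_a_time = 1 + len(elements)
print(one_at_a_time)  # 4 panels
```

With three two-level elements the full factorial doubles your panel count; with five elements it costs 32 panels against six, which is why it should be reserved for elements you genuinely expect to interact.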
Secret 6: Statistical significance alone is never enough to make a rollout decision. Just because a test beats the control does not mean it beat it by enough. You must assess the lower bound of a confidence interval to see just how low the test's true response rate could be.
For example, suppose you are testing a more expensive format and you determine you need an additional five orders per thousand pieces mailed to break even with the control (Secret 2). Simply knowing the test beat the control with statistical significance is not enough. The real question: Did it beat it by at least five additional orders per thousand? To determine this, you must calculate the lower and upper bounds on the test's lift over the control via a confidence interval.
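The break-even check can be sketched by putting a confidence interval around the lift itself, using the normal approximation. The order counts below are hypothetical and chosen to show a test that wins with significance yet may still miss break-even:

```python
from math import sqrt
from statistics import NormalDist

def lift_lower_bound(orders_ctl, mailed_ctl, orders_test, mailed_test,
                     confidence=0.95):
    """Lower bound of a two-sided confidence interval on the lift in
    response rate (test minus control), via the normal approximation."""
    p1 = orders_ctl / mailed_ctl
    p2 = orders_test / mailed_test
    se = sqrt(p1 * (1 - p1) / mailed_ctl + p2 * (1 - p2) / mailed_test)
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return (p2 - p1) - z * se

# Hypothetical: control 200/10,000 (2.0%), test 280/10,000 (2.8%),
# break-even lift of 5 orders per thousand (.005).
lb = lift_lower_bound(200, 10_000, 280, 10_000)
print(f"lower bound on lift: {lb:.4f}")
print("clears break-even" if lb >= 0.005 else "significant, but maybe not by enough")
```

Here the observed lift of .008 is highly significant, yet the 95 percent lower bound falls below the .005 break-even point: the test won, but you cannot be confident it won by enough, and a larger retest may be the prudent call.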
Perry Drake is vice president and general manager of New York database consultancy Drake Direct. He also is an assistant professor in NYU’s direct marketing communications program and the author of “Optimal Database Marketing” (Sage Publications). He can be reached at email@example.com.