Six Little Secrets of Test Design and Analysis
For example, if you are testing a price increase and a color change to the outer envelope of a direct mail package, most likely these two elements will not interact with each other. This means that if the price increase causes a 10 percent decrease in response with a yellow outer envelope, it will most likely do the same with a blue outer envelope. In other words, price is independent of outer envelope color. Therefore, there is no need to test both combinations. One combination will tell you everything you need to know.
Elements that typically interact are price, offer, incentive, and perhaps the call to action. Use your business smarts and test only those combinations you think are necessary. Save your testing budget.
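The savings from dropping combinations of elements you believe do not interact can be sketched in a few lines. This is a minimal illustration, not the author's method; the elements, levels, and the assumption that envelope color is independent are all hypothetical.

```python
from itertools import product

# Hypothetical test elements (names and levels are illustrative).
interacting = {"price": ["$49", "$59"], "offer": ["trial", "full"]}
independent = {"envelope": ["yellow", "blue"]}  # assumed not to interact

# Full factorial: every combination of every element.
all_levels = {**interacting, **independent}
full = list(product(*all_levels.values()))

# Reduced design: a factorial over the interacting elements only
# (holding independent elements at their control level), plus one
# extra panel per non-control level of each independent element.
reduced = len(list(product(*interacting.values())))
reduced += sum(len(levels) - 1 for levels in independent.values())

print(len(full), reduced)  # 8 panels vs. 5
```

With two prices, two offers, and two envelope colors, a full factorial needs eight test panels; treating envelope color as independent cuts that to five, because one color comparison tells you everything you need to know.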
Secret 6: Statistical significance alone is never enough to make a roll-out decision. Just because a test beats the control with statistical significance does not mean it beat the control by enough. You must examine the lower bound of a confidence interval to see just how low the test's true response rate could plausibly be.
For example, suppose you are testing a more expensive format and determine you need an additional five orders per thousand pieces mailed to break even with the control (Secret 2). Simply knowing that the test beat the control with statistical significance is not enough. The real question: Did it beat the control by at least five additional orders per thousand? To answer that, calculate the lower and upper bounds on the test's response via a confidence interval and compare the lower bound with the break-even lift.
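The break-even comparison above can be sketched as follows. The panel sizes and response counts are hypothetical, and the normal-approximation confidence interval on the difference in response rates is an assumption; the author does not specify a formula.

```python
import math

def ci_lower_bound(test_resp, test_mailed, ctrl_resp, ctrl_mailed, z=1.96):
    """Lower bound of a ~95% normal-approximation confidence interval
    for the lift (test response rate minus control response rate)."""
    p_t = test_resp / test_mailed
    p_c = ctrl_resp / ctrl_mailed
    se = math.sqrt(p_t * (1 - p_t) / test_mailed
                   + p_c * (1 - p_c) / ctrl_mailed)
    return (p_t - p_c) - z * se

# Hypothetical panels of 20,000 pieces each: test pulls 520 orders,
# control pulls 380 -- an observed lift of 7 orders per thousand.
lift_low = ci_lower_bound(520, 20000, 380, 20000) * 1000  # per thousand
breakeven = 5.0  # extra orders per thousand needed to break even (Secret 2)

print(round(lift_low, 2))    # worst plausible lift: 4.09 per thousand
print(lift_low >= breakeven) # False -- do not roll out yet
```

Note the point of the example: the observed lift of seven orders per thousand is highly significant, yet the lower bound of the interval (about 4.1 per thousand) falls short of the five needed to break even, so significance alone would have led to the wrong roll-out decision.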
Perry Drake is vice president and general manager of New York database consultancy Drake Direct. He also is an assistant professor in NYU’s direct marketing communications program and the author of “Optimal Database Marketing” (Sage Publications). He can be reached at email@example.com.