Direct Selling: Testing Makes Perfect
Direct marketing 101, lesson one: Test the big things first. And few things are bigger than creative formats.
But you don't want to test new formats solely on principle. Author Stephen R. Covey said it best in his book "The 7 Habits of Highly Effective People": Habit 2, begin with the end in mind. In format testing, you must begin with an understanding of what you are attempting to learn or prove, as well as what you will do with the results of your test, before you start.
If you are testing formats for new customer acquisition, you are likely looking for the format that delivers the greatest overall response rates, as well as the lowest possible acquisition cost. Retailers may look for the piece that produces the most store traffic, e-tailers may look for site visits and B-to-B marketers could look for leads. The point is, you must have the questions you are trying to answer framed before you can build the means to learn what you need to know.
Cracking Source Codes
After asking, "What do you want to know?" the second question to ask is, "How are you going to learn?" The industry is in a tough spot these days for one reason: The customer controls the buying process, and as a result, we are at his or her mercy every time we want to create and execute a test. Why? Source codes.
Source code tracking is the fundamental "must" in direct marketing testing. Dedicate time to strategizing how you plan to capture responses in advance of any design concept. Coupon codes, promotion codes, special landing pages, unique toll-free phone numbers, unique P.O. boxes, retail barcodes and matchback processing all can be employed to improve tracking as much as possible.
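As a simple illustration of how source code tracking supports attribution, the sketch below maps each test cell to its own promotion code and phone number so an incoming response can be traced back to the format that produced it. All cell names, codes and numbers here are hypothetical, and a real program would manage these in a campaign system rather than a script:

```python
# Hypothetical mapping of test cells to unique source codes.
# Each creative format gets its own promo code and toll-free number.
test_cells = {
    "SM-A": {"format": "self-mailer", "promo_code": "SAVE20A", "phone": "800-555-0101"},
    "CAT-B": {"format": "16-page catalog", "promo_code": "SAVE20B", "phone": "800-555-0102"},
}

def attribute_response(promo_code):
    """Match an incoming promo code back to the test cell that carried it."""
    for cell, info in test_cells.items():
        if info["promo_code"] == promo_code:
            return cell
    return None  # untracked response; a candidate for matchback processing

print(attribute_response("SAVE20B"))  # -> CAT-B
```

Responses that arrive without a code (the `None` case) are exactly the ones that matchback processing is meant to recover.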
Creating a Good Format Test
A good test isolates as many variables as possible so conclusions can be drawn regarding the effect of the "one thing" you are trying to test.
By its very nature, format testing makes it difficult to strictly control every variable except the creative format alone. Often, testing a self-mailer versus a catalog, for example, requires a change to the overall message to accommodate less physical space.
As an example, assume you are testing a multipanel self-mailer versus a 16-page catalog to a segment of customers. Your goal is to determine which produces the most profitable sales or greatest ROI. Assume your self-mailer is merchandised with best-sellers, key branding messages and heavy emphasis on driving traffic to the Web, while the catalog is a typical piece that you might mail: multiple SKUs, a wider assortment, etc. When the results come in and the catalog wins, does that mean it's the better format? Not necessarily.
You may have merchandised the self-mailer wrong; it may have mailed too early versus the catalog; your Web site may have been ill-equipped for the campaign and sent customers away before buying. In other words, a variety of factors may have muddied the waters with respect to conclusions you can draw. Those factors are referred to as confounding variables, which generally can't be controlled but should be considered in your conclusions.
So, what should you consider and attempt to minimize as you set up your format testing? The big three are messages, offers and timing. First, if at all possible, the same copy (or elements of the same copy, including tone, detail of information, etc.) should be used in each of your creative test pieces.
Second, each format must present the same promotional offers and the same channel ordering options in the same ways. If your URL is a prominent front cover element on one creative but is buried on the other, customers may not respond because they don't know where to go.
Finally, the timing of the campaigns must be consistent. For example, full-size catalogs mail at the Standard Flat rate and often take five to 10 days to arrive, while smaller direct mail pieces such as postcards, self-mailers and solo packages often intermingle with First Class mail and deliver in as few as three to five days. That difference in timing can play a role in response.
Customer and Prospect Format Testing
When developing format tests for customers, bear a few things in mind. First, customers let you mail them a wide variety of formats without major negative implications. As a result, you need to understand what your testing goals are for these mailings.
Make sure, too, that you don't have any additional tests taking place at the time of the format test. Customers often are targets for longitudinal testing where companies test to understand the effects of multiple mailings and contact sequence on response and retention. In those cases, you must control for the first study or risk compromising the results of both testing efforts.
Prospects, who don't necessarily know your brand or understand your offer proposition, are typically more finicky about how you communicate with them. As a general rule, more is better for direct selling to prospects. Greater page counts typically produce greater response rates. However, if the answer were that easy, only one format would exist. This is why you test.
Measuring Results, Retests and Rolling Out
The "Rule of 100s" is a good place to start when evaluating format performance. The rule holds that if a segment produces at least 100 responses, the result is more likely to repeat in the future. In evaluating test results, the rule suggests that if dollars per piece mailed for format A was $2.19 based on 137 orders and dollars per piece mailed was $1.65 for format B based on 105 orders, A will most likely outperform B by about 30 percent again.
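The arithmetic behind that comparison can be checked directly, using the figures from the example above. Both formats clear the 100-order threshold, and the lift of A over B works out to roughly a third:

```python
# Dollars per piece mailed for each format, from the example above.
dpp_a = 2.19  # format A, based on 137 orders
dpp_b = 1.65  # format B, based on 105 orders

orders_a, orders_b = 137, 105
rule_of_100s_met = orders_a >= 100 and orders_b >= 100  # both cells qualify

# Relative lift of A over B.
lift = (dpp_a - dpp_b) / dpp_b
print(f"Format A outperforms B by {lift:.0%}")  # -> Format A outperforms B by 33%
```

The computed 33 percent is what the article rounds to "about 30 percent."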
When gross responses are low, the limits of the Rule of 100s get tested. For that reason, it's a good idea to retest to validate results. Retesting may become essential when one format works well with one segment, but a different format works best for another segment, and it's tough to draw hard and fast conclusions. When this happens, retests to similar segments controlling as many potential confounders as possible are necessary. Retests also are good if you've tested a format to a small group but the overall roll-out quantity is significantly larger. It's important to validate the result with a larger sample before putting all of your eggs in the winning format's envelope, so to speak.
Finally, when evaluating results of format tests, always consider the cost to roll out the campaign, not just the test cost. In testing, quantities often are controlled and lower than they would be in a roll-out situation. As a result, per piece costs often are high and can negatively impact the numbers of the test. Only by paying attention to roll-out costs can you see what the real effect is and confidently discover your new winning format.
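The effect described above can be sketched with a simple per-piece cost model. The dollar figures below are hypothetical, chosen only to show how fixed costs amortized over a larger roll-out quantity drive the per-piece cost down versus the test:

```python
def cost_per_piece(fixed_cost, variable_cost, quantity):
    """Per-piece cost: fixed costs spread across the quantity, plus variable cost."""
    return fixed_cost / quantity + variable_cost

# Hypothetical figures: $15,000 in creative/setup; $0.55 print and postage per piece.
test_cpp = cost_per_piece(15_000, 0.55, 25_000)      # small test quantity
rollout_cpp = cost_per_piece(15_000, 0.55, 500_000)  # full roll-out quantity

print(f"Test: ${test_cpp:.2f}/piece, roll-out: ${rollout_cpp:.2f}/piece")
# -> Test: $1.15/piece, roll-out: $0.58/piece
```

Under these assumed numbers, the same creative costs nearly twice as much per piece in the test as it would at roll-out, which is why judging a format on test-quantity costs alone can make a winner look like a loser.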
Steve Trollinger is executive vice president of J. Schmid & Associates, Mission, Kan. You can reach him at firstname.lastname@example.org.