B-to-B Insights: Test Your Way to Success
As direct specialists, we are a lucky group of marketers. Unlike the bulk of our brand-building brethren, we actually can measure what works and what doesn’t. Our ability to do this, however, is a mixed blessing: as often as it makes us feel like heroes, it can also make us want to crawl under the desk.
After 25 years of working with B-to-B and B-to-C direct marketers both large and small, I have found it increasingly clear that B-to-B marketers aren’t doing anywhere near enough testing.
The B-to-B Conundrum
B-to-B marketers are in a quandary: Should they adhere to the traditional guidelines of testing, or yield to their emotional desire to generate better results, right away, to boost their job performance ratings?
There are three traditional rules that hamstring B-to-B marketers:
• You can test only one thing at a time.
• You must always test head-to-head.
• You must always use statistically valid samples.
These rules are quite valid when we can mail or blast out in high volumes and can afford to allocate 10 percent to 20 percent of a campaign to testing. More specifically, traditional guidelines apply when we can afford to create sample sizes of 5,000 to 10,000 contacts for a given test cell. For example, a mailer dropping 1 million packages at a time can conduct 40 different tests of 5,000 each, and still mail the control package to 80 percent of the file.
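The allocation arithmetic behind that high-volume example can be sketched in a few lines. This is purely illustrative, using the figures stated above:

```python
# Illustrative arithmetic for the high-volume example above.
TOTAL_MAILING = 1_000_000   # packages in one drop
CELL_SIZE = 5_000           # names per test cell
NUM_TESTS = 40              # distinct test cells

test_volume = NUM_TESTS * CELL_SIZE          # pieces allocated to testing
control_volume = TOTAL_MAILING - test_volume # pieces mailing the control

print(f"Test allocation:    {test_volume / TOTAL_MAILING:.0%}")
print(f"Control allocation: {control_volume / TOTAL_MAILING:.0%}")
```

Forty cells of 5,000 consume 200,000 pieces, or 20 percent of the file, leaving 80 percent for the control, exactly as described.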
But what is a B-to-B marketer to do when the suspect universe is constrained by targeting criteria—geography, industry, revenue size, etc.—to 100,000 or 50,000 names, or even perhaps as few as 25,000 names?
When only 50,000 names can be reached, conventional testing methodology limits the number of cells to between five and 10—in effect, turning the entire file into one giant test. Often, that’s just not enough testing to optimize success, particularly when there is no control or offer in place and little or no time to find one.
In the past decade, many of our clients have come to us in similar predicaments. They have to get out in the market quickly, and they either have no control package or the results of their current control have been deteriorating. They must hit their sales numbers, and need direct marketing to feed the sales pipeline. If we can help them find a winner quickly, they tell us, they’ve got resources set aside for a consistent rollout.
To help these clients solve their problems, The Kern Organization has developed its own methodology for multivariate testing, called ControlFinder™. We employ this methodology to help clients test multiple offers, multiple creative approaches and multiple media sources, all at once. How do you know when you need this type of multivariate testing program? When there is no control; when there has been no testing or prior testing has failed; or when there is no meaningful performance data for lists, packages or offers.
Step One: Designing Your Test
The first step of your test program will determine the key success drivers: Which media channels will work for the offer? Which creative appeals will work for each list source, given a stated offer?
To illustrate this step, let’s consider a 25,000-piece program for a client I’ll call ACME. Here are the key assumptions driving the test program:
• ACME’s total market universe is estimated at 75,000 suspects. However, with limited information about campaign performance, the company has budgeted only a 25,000-piece test.
• Given a conflict between sales and marketing, we need to test a postcard and a letter package format.
• The sales manager thinks the sales leads generated so far from direct campaigns aren’t qualified. Therefore, we need to test three offers: a product-focused offer vs. a third-party educational offer vs. an educational offer with an incentive for immediate response. The sales manager has told the marketing leader, “I want to prove that our product is so good, it will sell itself.” So he’s demanded a test that promotes only the product. The marketing department knows better and would like to demonstrate the benefits of an early buy-cycle educational approach. So it’s requested tests of a few soft offers involving an executive report prepared by an independent third party.
• Media research has shown that we need to test five list sources. Five thousand records are available for testing from each source. Four sources are controlled circulation subscriber files from trade publications in which ACME has advertised, and the other is a file compiled from D&B that ACME can use at half the cost per thousand of controlled circulation files.
• A 2 percent or greater response is required for any cell to be within the acceptable range for cost per inquiry.
Assuming all of the above, the testing grid would look like the Test Matrix shown below.
Although each of these 30 cells contains 833 names, that doesn’t mean the result from each cell is, by itself, statistically valid. For each cell in the grid to be statistically valid, the test would have to involve roughly 150,000 pieces, or twice ACME’s 75,000-piece mailing universe.
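The grid math works out as follows, using only the assumptions stated above:

```python
# Sketch of the ACME test-grid math, using the article's figures.
packages = 2   # postcard vs. letter (X vs. Y)
offers = 3     # product-focused, educational, educational + incentive
lists = 5      # four controlled-circulation files + one compiled file

cells = packages * offers * lists      # total test cells in the grid
names_per_cell = 25_000 // cells       # names available for each cell

# A statistically valid read would need roughly 5,000 names per cell:
valid_total = cells * 5_000

print(f"{cells} cells of ~{names_per_cell} names each")
print(f"Pieces needed for cell-level validity: {valid_total:,}")
```

Two packages times three offers times five lists gives the 30 cells, and 25,000 names spread across them yields the 833 per cell; requiring 5,000 per cell instead is where the 150,000-piece figure comes from.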
However, much can be deduced about market desire from individual cells when their results are viewed in the aggregate. When conducting this type of multivariate testing, you are looking to identify strong trends and hot spots that will tell you what and what not to do next.
Let’s assume the test generated the response rates shown in the Response Rate Matrix (see below). In this example, it is safe to conclude the following:
• Nothing worked with List One.
• Package Y is a clear winner over Package X.
• Offer C worked best with Lists Two, Three and Four.
• Offer B worked best with List Four.
Step Two: Confirm Your Findings, Expand Your Testing
This is the point at which you conduct a confirmation test of the results from step one. This time, however, you must use only statistically valid sample sizes within test cells to confirm your results with confidence. Here’s a good rule to follow: You need 50 observations from a given cell, drawn from a sample size of 5,000, to read your results at a 95 percent confidence level.
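To see why that rule of thumb holds, consider the normal-approximation confidence interval for a response rate. This is a generic statistical sketch, not part of the ControlFinder™ methodology itself; the scenario (50 responders out of 5,000 mailed, i.e., a 1 percent response) is assumed for illustration:

```python
import math

def response_ci(responders, mailed, z=1.96):
    """95% confidence interval for a response rate,
    using the normal approximation to the binomial."""
    p = responders / mailed
    half_width = z * math.sqrt(p * (1 - p) / mailed)
    return p - half_width, p + half_width

# Assumed example: 50 observations from a 5,000-name cell.
low, high = response_ci(50, 5_000)
print(f"Observed 1.00%, 95% CI: {low:.2%} to {high:.2%}")
```

With 50 observations from 5,000 mailed, the interval runs from roughly 0.72 percent to 1.28 percent, tight enough to rank offers and packages with confidence; with only 8 observations from an 833-name cell, the interval would be far too wide to act on alone.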
Assuming we can find a new list that matches the profile of List Four and can replace List Five, with a response rate of 2 percent or higher and test cell counts of 5,000, the results from this confirmation test would be statistically projectable, as illustrated in the Check Test Cell Counts Matrix (see below).
Now you see how this multivariate testing methodology can identify a winning control package with just 65,000 names, as opposed to the 150,000 that would have been required for a test of this magnitude using traditional methodology. This could translate into a savings of $25,000, or even $50,000—money that now can be redeployed for a third campaign.
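The savings figure follows from the piece counts above. The per-piece costs below are assumptions for illustration only; actual in-the-mail costs vary by format and volume:

```python
# Hypothetical cost arithmetic behind the savings claim above.
traditional_pieces = 150_000    # pieces a traditional test would require
controlfinder_pieces = 65_000   # pieces used across steps one and two

pieces_saved = traditional_pieces - controlfinder_pieces

# Assumed per-piece costs, for illustration only.
for cost_per_piece in (0.30, 0.60):
    savings = pieces_saved * cost_per_piece
    print(f"At ${cost_per_piece:.2f}/piece: ${savings:,.0f} saved")
```

Mailing 85,000 fewer pieces at an assumed 30 to 60 cents per piece lands in the $25,000-to-$50,000 range cited above.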
A Final Word
When developing your testing, always look for ways to do it inexpensively. Use laser imaging or black-plate changes to test splits. Once you have identified your control packages, use the 80/20 rule to keep pushing your campaign performance. Run 80 percent of your campaign using your control message, offer and list strategy, and then invest the other 20 percent in testing big differences that really can move the needle.
Russell Kern is president of The Kern Organization, a fully integrated offline and online direct marketing agency in Woodland Hills, Calif. He can be reached at (818) 703-8775 or via e-mail at email@example.com.