Many B-to-B mailers prefer to use cooperative databases for mailing lists because they offer the advantages of just-in-time inventory, cost savings, strong selectability and results, and elimination of merge/purge time and costs. However, as Blair Barondes, executive vice president of White Plains, N.Y.-based list brokerage and management firm MeritDirect, points out in his whitepaper, Thinking Outside the Database: How to Test Lists Sourced From a Cooperative Database vs. Traditional Sources, sometimes there’s a need to use a list from outside the co-op.
But how do you know if it’s worth your time and money to order and test a list outside the database? Barondes offers four steps to making an informed decision.
1. Allocate the cost of the merge/purge. It is important for mailers to consider what the merge/purge cost will be when adding an outside list to their co-op pulls. Oftentimes, states Barondes, the additional revenue generated by a responsive outside list does not merit the cost of running the merge/purge. Check your numbers carefully.
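The break-even check Barondes describes can be sketched as a simple calculation. This is a minimal illustration, not his method; every figure and parameter name here (response rate, cost per thousand, merge/purge cost) is a hypothetical assumption.

```python
# Hypothetical break-even check for adding an outside list to a co-op pull.
# All parameter names and figures are illustrative assumptions, not industry data.

def outside_list_worth_testing(unique_names, response_rate, revenue_per_order,
                               cost_per_thousand, merge_purge_cost):
    """Return True if expected revenue covers both list rental and merge/purge cost."""
    expected_orders = unique_names * response_rate
    expected_revenue = expected_orders * revenue_per_order
    list_cost = unique_names / 1000 * cost_per_thousand  # rental priced per thousand names
    return expected_revenue - list_cost - merge_purge_cost > 0


# 10,000 names at a 1% response and $80 per order clears a $2,500 merge/purge...
print(outside_list_worth_testing(10000, 0.01, 80, 120, 2500))   # True
# ...but at a 0.2% response the same list loses money.
print(outside_list_worth_testing(10000, 0.002, 80, 120, 2500))  # False
```

Even a back-of-the-envelope version like this makes the point: a modestly responsive outside list can fail to cover the merge/purge once that cost is allocated to it.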
2. Make the outside list the lowest priority when allocating multibuyers. Treating a non-database list with equal priority inflates its percentage of multibuyers, Barondes explains, and, if you fail to measure only the outside list's unique names, artificially boosts its response rate. So to accurately measure the value of the incremental names added, consider only the unique names the outside list provides.
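The lowest-priority rule above amounts to crediting any duplicate (multibuyer) name to the co-op and evaluating the outside list only on what remains. A minimal sketch, assuming names have already been standardized into comparable keys (the keys here are hypothetical):

```python
# Hypothetical sketch of lowest-priority allocation: duplicates are credited
# to the co-op pull, so the outside list is measured on unique names only.

def unique_outside_names(coop_names, outside_names):
    """Return the outside-list names that do not already appear in the co-op pull."""
    coop = set(coop_names)
    return [name for name in outside_names if name not in coop]


coop = ["smith_123main", "jones_45oak", "lee_9elm"]
outside = ["jones_45oak", "lee_9elm", "patel_7hill"]
# Only one truly incremental name: response should be computed on it alone.
print(unique_outside_names(coop, outside))  # ['patel_7hill']
```

Measuring response against those unique names, rather than the full outside quantity, is what keeps the multibuyer overlap from flattering the outside list.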
3. Measure database selects/omits. Many co-op lists use database enhancements, such as excluding names in the bottom decile of a mailer's response regression model, so the database list performs better with those additional selections and omissions applied. Can an outside list hold its own? Make sure you know the answer, because Barondes cautions that, more often than not, a database list that performs slightly under the curve can be enhanced to outperform an outside list at less cost and risk.
4. Perform matchbacks. Matchbacks are critical to measuring the results of any list, including when determining whether an outside list is right for you. Allocating unassigned orders proportionately to all lists produces inaccurate response data, taking your next test and rollout decisions off track.
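The difference between a matchback and proportional allocation can be sketched briefly: match each order back to the list that actually mailed that customer, and leave the rest unassigned rather than spreading them across all lists. This is an illustrative simplification with hypothetical keys, not a description of any specific matchback service.

```python
# Hypothetical matchback sketch: each order is credited to the source list
# found in the mail file; orders with no match stay unassigned instead of
# being allocated proportionately across all lists.

def matchback(order_keys, mail_file):
    """mail_file maps a customer key (e.g., a standardized name/address key)
    to the list that mailed it. Returns order counts credited per list."""
    counts = {}
    for key in order_keys:
        source = mail_file.get(key, "unassigned")
        counts[source] = counts.get(source, 0) + 1
    return counts


mail_file = {"jones_45oak": "co-op", "patel_7hill": "outside list"}
orders = ["jones_45oak", "patel_7hill", "walk_in_999"]
print(matchback(orders, mail_file))
# {'co-op': 1, 'outside list': 1, 'unassigned': 1}
```

Keeping unmatched orders in their own bucket, rather than prorating them, is what preserves an honest read on each list before the next test or rollout decision.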