Database: Close the Loop Properly
• Campaign and time window. Apply a consistent response time window when multiple campaigns are analyzed.
• Match logic and rules. Soft-match logic using names and addresses must be tuned in advance to prevent over- and under-matching, either of which can compromise the validity of subsequent response studies.
• Data reconciliation. Source-code information collected at response time often disagrees with the master mail file; the master file is typically trusted over manually collected figures.
• Date window. If multiple campaigns run close together in time, rules must be in place to credit the proper campaign for each response. Generally, the latest campaign gets the credit, provided the response date is not too close to the mail-drop date.
• Allocation rules for unmatched responses. Even with the most sophisticated match programs, there will always be unmatched records. Provided these are a small fraction of the response universe, business rules must be set to handle them.
• Key report variables. To avoid any redundant data processing, key analytical variables must be defined and maintained throughout the process.
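The soft-match and date-window rules above can be sketched roughly as follows. The normalization logic, the three-day minimum lag, and all names here are illustrative assumptions, not prescribed values:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional

# Assumed minimum lag between mail drop and response; tune per program.
MIN_LAG = timedelta(days=3)

def match_key(name: str, address: str) -> tuple:
    """Soft-match key: case-fold, strip punctuation, collapse whitespace."""
    def clean(s: str) -> str:
        kept = "".join(ch for ch in s.lower() if ch.isalnum() or ch.isspace())
        return " ".join(kept.split())
    return (clean(name), clean(address))

@dataclass
class Campaign:
    code: str
    drop_date: date

def credit_campaign(response_date: date,
                    campaigns: List[Campaign]) -> Optional[str]:
    """Credit the latest campaign whose drop date precedes the response
    by at least MIN_LAG; return None if no campaign qualifies."""
    eligible = [c for c in campaigns if c.drop_date + MIN_LAG <= response_date]
    if not eligible:
        return None
    return max(eligible, key=lambda c: c.drop_date).code
```

For example, with a March 1 and a June 1 drop, a June 10 response credits the June campaign, while a June 2 response falls inside the minimum lag and rolls back to the March campaign.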
The main issue with skipping the matchback process is that analysts must rely solely on manually collected response-side data, which may disagree with the master file or be missing entirely. In addition, analysts cannot measure anecdotal responses to a fraction of a percent. Without the matchback, certain levels of detail, such as segment or name source, may have to be surrendered for lack of coverage, even though these are often the focal points of direct mail response analyses.
The Flaw in Random Merge/Purge
Most of the matchback process is done on the master file, which is suitable for testing creative packages, offers and delivery channels. However, because merge/purge output keeps only one record per household or individual regardless of its origins, the master file has a serious flaw when it comes to list source evaluation. Yet list-level measurement is one of the key metrics in ROI studies, as list cost is the element that varies the most. Imagine a situation where three list providers supplied the same responsive name, and the one lucky winner that survived the so-called “random” merge/purge gets all the credit in every subsequent study.
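The credit distortion can be made concrete with a small sketch. The two allocation functions below, and the equal-split weighting, are illustrative assumptions, not the author's method: the first mimics the winner-take-all outcome of a random merge/purge, while the second splits one response's credit across every list source that supplied the name.

```python
import random
from collections import Counter
from typing import List

def random_credit(sources: List[str], seed: int = 0) -> Counter:
    """Merge/purge style: one randomly surviving source takes full credit."""
    rng = random.Random(seed)
    credit = Counter()
    credit[rng.choice(sources)] += 1.0
    return credit

def fractional_credit(sources: List[str]) -> Counter:
    """Split one response's credit equally among all supplying sources."""
    share = 1.0 / len(sources)
    return Counter({s: share for s in sources})
```

Under `random_credit`, one of three providers books the whole response and the other two book zero; under `fractional_credit`, each books one third, which keeps list-level ROI comparisons from hinging on a coin flip.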