Four Modeling Don’ts
With response rates generally lower than they used to be, more companies have turned to modeling to help them market smarter. Predictive models have “evolved into more cost-effective, widespread and rapidly deployable applications,” notes Maria Marsala Herlihy, senior vice president, strategic consulting and analytics, at Knowledgebase Marketing, a Richardson, Texas-based database marketing solutions provider. Unfortunately, Herlihy says, this rapid adoption means many companies don’t properly understand the modeling process, so their models don’t perform consistently or successfully.
In her session, When Good Models Go Bad, at last week’s DM Days NY Conference & Expo, Herlihy identified four pitfalls in the modeling process that marketers need to avoid. They are:
Pitfall #1: Not Balancing Math and Business—Analysts need to take time to understand the key performance drivers of a business, Herlihy explains, so they can build a model that logically reflects the firm’s market situation and goals. For example, response depends on deliverability, but deliverability should be a pre-select applied before the model is developed, not a variable in the model. In addition, analysts need to be on the lookout for data fields with ill-defined content, such as a field that once flagged survey responders but in recent years has indicated which customers ordered gift packaging.
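The pre-select idea can be sketched in a few lines of Python. The field names (`deliverable`, `responded`) are hypothetical, but the point is the same one Herlihy makes: deliverability gates which records enter the modeling universe, rather than appearing as a predictor inside the model.

```python
def preselect_deliverable(records):
    """Drop undeliverable records BEFORE modeling. Deliverability gates
    the audience; it must not be used as a model variable, because after
    this filter it is constant and carries no predictive signal."""
    return [r for r in records if r["deliverable"]]

# Hypothetical customer records for illustration.
records = [
    {"deliverable": True,  "responded": 1},
    {"deliverable": False, "responded": 0},
    {"deliverable": True,  "responded": 0},
]

# Only deliverable records remain in the modeling universe.
modeling_universe = preselect_deliverable(records)
```

After the pre-select, every remaining record is deliverable, so the model is built only on the population the mailing can actually reach.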
Pitfall #2: Believing in Quick Fixes—Automated modeling solutions have mass appeal but can produce lackluster to disastrous results when not handled properly. Even with a “black box” tool, says Herlihy, you still need to know how to interpret the results, run through the appropriate iterations, validate the model and tune the software and settings. Bad models are worse than no models.
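Validation is the one step above that lends itself to a minimal sketch: holding out a sample the model was not built on. This is a generic illustration, not Herlihy’s or any vendor’s procedure; the split fraction and seed are arbitrary assumptions.

```python
import random

def split_build_validate(records, holdout_frac=0.3, seed=42):
    """Shuffle records and hold out a validation sample, so the model's
    performance is checked on data it was not built on."""
    rng = random.Random(seed)          # fixed seed for a repeatable split
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_frac))
    return shuffled[:cut], shuffled[cut:]

# Stand-in data: 100 record IDs.
data = list(range(100))
build, validate = split_build_validate(data)
```

A model whose lift holds up on the `validate` sample, not just the `build` sample, has passed at least one of the checks a “black box” tool still requires of its user.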
Pitfall #3: Not Modeling the Right Thing—In many cases, businesses must drive contradictory behaviors to attract the right prospects and encourage the most profitable customer activity. Herlihy gives the example of credit lenders, who tend to get the highest responses to offers from individuals who need credit but aren’t likely to pass the criteria for credit approval. Conversely, the individuals most likely to get approved are least likely to respond because they don’t need credit. Modeling for just response or persistency (conversion/payment) won’t achieve lasting business success; rather, the right thing to do is to develop a balanced model construct that looks for consumers who score well on both behaviors.
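One simple way to express a balanced construct is to combine the two behavior scores so that only consumers who score well on both rank highly. Using `min()` as the combining rule is my assumption for illustration, not a method the article specifies; the scores themselves are invented.

```python
def balanced_score(response_score, persistency_score):
    """Rank by the WEAKER of the two behaviors, so a prospect must
    score well on both response and persistency to rank highly.
    (min() is one simple, hypothetical combining rule.)"""
    return min(response_score, persistency_score)

# Hypothetical (response, persistency) scores on a 0-1 scale,
# mirroring the credit-lender example in the article.
prospects = {
    "needs_credit_wont_qualify": (0.9, 0.1),  # responds, won't convert
    "qualifies_wont_respond":    (0.1, 0.9),  # converts, won't respond
    "balanced_target":           (0.7, 0.6),  # decent on both
}

ranked = sorted(prospects,
                key=lambda name: balanced_score(*prospects[name]),
                reverse=True)
```

Under this rule the prospect who is merely decent on both behaviors outranks both one-sided extremes, which is the point of modeling for the right thing.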
Pitfall #4: Inconsistency in Data Storage and Deployment—The wrong data easily can be pulled into a model when files carry inconsistently coded scores. Some businesses use a scale of 1 to 10, with 1 being the worst score and 10 the best; other firms reverse the scale. Still, computer programmers often use 0 through 9 for data storage because it takes up a consistent number of bytes, says Herlihy. Analysts who don’t sufficiently study the database and find out which tagging methodologies are in use (especially analysts who are new and inexperienced) are likely to pull low-performing deciles by mistake and build the wrong model. The best practice for businesses to learn is to stick to a single tagging methodology.
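The safest defense, short of a single company-wide methodology, is an explicit normalization step that maps every vendor scale onto one convention before scores are compared. The function below is a minimal sketch, assuming the three scales named in the article (1–10 ascending, the reversed scale, and 0–9 storage); a real system would need a documented scale map per data source.

```python
def normalize_score(value, scale):
    """Map a raw score onto 0.0-1.0 with best = 1.0.
    `scale` is (value_meaning_worst, value_meaning_best); a descending
    pair like (10, 1) means 1 is the best raw score."""
    worst, best = scale
    if worst > best:                 # descending scale: flip it first
        worst, best = best, worst
        value = worst + best - value # mirror the raw value
    return (value - worst) / (best - worst)

# The three conventions described in the article:
normalize_score(10, (1, 10))   # ascending 1-10: raw 10 is best -> 1.0
normalize_score(1, (10, 1))    # reversed scale: raw 1 is best  -> 1.0
normalize_score(9, (0, 9))     # programmer's 0-9 storage scale -> 1.0
```

With every source normalized this way, “pull the top deciles” means the same thing regardless of which tagging methodology produced the file.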
A final word of advice from Herlihy: “If you don’t have at least 1,000 of whatever you are trying to model, it is best to collect more data before you invest in the model.”
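Herlihy’s rule of thumb is easy to turn into a pre-modeling check. The field name and data below are hypothetical; the threshold of 1,000 is hers.

```python
def enough_to_model(records, target_field, minimum=1000):
    """Herlihy's rule of thumb: have at least 1,000 examples of the
    target behavior before investing in a model."""
    positives = sum(1 for r in records if r.get(target_field))
    return positives >= minimum

# Hypothetical file: 800 responders among 5,800 records -- not enough.
sample = [{"responded": True}] * 800 + [{"responded": False}] * 5000
verdict = enough_to_model(sample, "responded")
```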
Herlihy can be reached at (972) 664-3600.