Predictive analytics is a powerful tool for creating an efficient and effective customer success organization. However, predictive analytics developed from intuition and hunches without proper validation can be detrimental to a business. Customer success teams need to understand the balance between customer coverage and predictive accuracy to identify false alarms and fire drills, and to know when to act to prevent churn. If not done properly, some scoring rules actually hurt rather than help the customer success organization.
If you implement scoring rules for identifying at-risk customers, you will want to be sure the rules actually help your customer success organization, i.e., that the rules provide a return on investment (ROI). A scoring rule with low accuracy hurts your organization by reducing efficiency and effectiveness. Many scoring rules are developed from the experience and intuition of customer success managers, but most have not been validated as to their customer coverage or predictive accuracy—the two facets that determine ROI in predicting churn.
What Are These Facets and How Do They Affect Your Organization?
Customer coverage is the percentage of the customer base for which the scoring rule provides insight. For example, a rule based on survey responses will have low coverage. Typically, 20 percent is considered an excellent response rate for a survey; most response rates fall in the 10 percent to 15 percent range. The implication is that more than 80 percent of your customers will not respond to the survey, which means a scoring rule based on survey responses has low customer coverage. Rules built on other metrics, such as the number of open support cases or attendance at customer events, can also fall into the low-coverage category. Rules with high customer coverage include those based on demographic, firmographic, purchase history and usage metrics.
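As an illustrative sketch, the coverage calculation boils down to the share of customers the rule can score at all. The function and field names here are hypothetical, and the 15 percent response rate mirrors the survey example above:

```python
def customer_coverage(customers, rule_has_signal):
    """Share of the customer base the scoring rule can say anything about.

    rule_has_signal(customer) -> True if the rule has the data it needs
    (e.g., a survey response) for this customer.
    """
    covered = sum(1 for c in customers if rule_has_signal(c))
    return covered / len(customers)

# Example: a survey-based rule where 15 of 100 customers responded.
customers = [{"responded": i < 15} for i in range(100)]
print(customer_coverage(customers, lambda c: c["responded"]))  # 0.15
```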
Predictive accuracy is the percentage of a rule's predictions that are true rather than false. In the case of churn, if a scoring rule predicts a customer to be a non-churner and that customer does in fact churn, that is a false prediction. Likewise, if the rule predicts a churner and, without any intervention from customer success, the customer renews, that is also a false prediction. True predictions are, of course, the opposite of these. Dividing true predictions by total predictions gives predictive accuracy.
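The same calculation can be sketched in a few lines. This is a minimal illustration with made-up data, not a real scoring model; it simply compares each prediction against what actually happened:

```python
def predictive_accuracy(predictions, outcomes):
    """Fraction of predictions that matched what actually happened.

    predictions and outcomes are parallel lists of booleans:
    True = churn predicted / churn occurred (with no intervention).
    """
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(predictions)

# Example: 8 of 10 predictions match the actual outcomes.
preds   = [True, True, False, False, True, False, True, False, True, False]
actuals = [True, False, False, False, True, False, True, True, True, False]
print(predictive_accuracy(preds, actuals))  # 0.8
```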
So How Do These Metrics Help Uncover ROI for a Customer Success Team?
Customer success teams are most efficient and effective if they intervene only when needed—i.e., if a customer is at risk. Interventions with customers not at risk are not only an unwarranted expense, but also represent opportunity costs of assisting the actual at-risk customers. Therefore, low predictive accuracy actually lowers a customer success organization's efficiency and effectiveness. And low customer coverage means a missed opportunity.
The dynamics of customer coverage and predictive accuracy are shown in the graphic. If the metric used has low customer coverage and predictive accuracy, customer success teams will be constantly reacting to fire drills in order to save customers. If the metric has high coverage and low accuracy, the customer success team will have lots of false alarms resulting in wasted effort and opportunity costs. Likewise, high accuracy and low coverage will result in unanticipated churn. Only with high accuracy and high coverage can you get to efficient and effective churn prevention.
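The four quadrants described above can be sketched as a simple lookup. The 0.8 cutoff here is an illustrative assumption, not a fixed standard; in practice each team would set its own thresholds:

```python
def rule_quadrant(coverage, accuracy, threshold=0.8):
    """Classify a scoring rule by the coverage/accuracy dynamics above.

    coverage and accuracy are fractions in [0, 1]; the threshold
    separating "high" from "low" is an illustrative assumption.
    """
    high_cov = coverage >= threshold
    high_acc = accuracy >= threshold
    if high_cov and high_acc:
        return "efficient and effective churn prevention"
    if high_cov:
        return "false alarms: wasted effort and opportunity costs"
    if high_acc:
        return "unanticipated churn from uncovered customers"
    return "constant fire drills"

print(rule_quadrant(0.90, 0.85))  # efficient and effective churn prevention
print(rule_quadrant(0.15, 0.90))  # unanticipated churn from uncovered customers
```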
The takeaway is that predictive analytics is a powerful tool for creating an efficient and effective customer success organization. With that said, predictive analytics developed from experience and intuition without validation can actually have the opposite effect.
Matt Shanahan is the CMO at Seattle, Wash.-based Azuqua. He has nearly 30 years of experience in the technology industry, ranging from Accenture to startups. He is a proven entrepreneur as the VP of product marketing and management for Documentum from startup through initial public offering and most recently as co-founder and SVP of strategy for Scout Analytics.