A New Approach to Predictive Analytics Model Evaluation
Stability: During analysis, the modeler tests preliminary results on a development file. If satisfied with what the development file shows, the analyst then applies the algorithm to the holdout sample. The holdout outcome is typically presented as the evaluation of model performance.
If the validation file result conflicts with the training file result, the model may well be unstable. Consider the model on the left in Chart 3: the bar heights are nearly identical, indicating a close match between the analysis and validation files. For the second model, on the right, the bar heights differ, suggesting a less stable model. The horizontal axis shows the deciles for model 1 on the left and the deciles for model 2 on the right.
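As a hedged illustration of this stability check, the sketch below builds a development/holdout split, fits a model, and compares actual response rates by decile across the two files. The synthetic data, the column names (x1, x2, response, score), the 70/30 split, and the use of scikit-learn are illustrative assumptions, not details from the text.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Synthetic example data: two predictors and a binary response.
rng = np.random.default_rng(0)
n = 10_000
data = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
data["response"] = (rng.random(n) < 1 / (1 + np.exp(2.5 - data["x1"]))).astype(int)

# Split into a development (training) file and a holdout (validation) file.
dev, holdout = train_test_split(
    data, test_size=0.3, stratify=data["response"], random_state=42
)
dev, holdout = dev.copy(), holdout.copy()

# Fit on the development file only, then score both files.
model = LogisticRegression().fit(dev[["x1", "x2"]], dev["response"])
dev["score"] = model.predict_proba(dev[["x1", "x2"]])[:, 1]
holdout["score"] = model.predict_proba(holdout[["x1", "x2"]])[:, 1]

def decile_response_rates(df, score_col="score", response_col="response"):
    """Rank records by model score, cut them into ten equal groups
    (decile 1 = highest scores), and return the actual response rate
    observed in each decile."""
    ranked = df.sort_values(score_col, ascending=False).reset_index(drop=True)
    ranked["decile"] = pd.qcut(ranked.index, 10, labels=range(1, 11))
    return ranked.groupby("decile", observed=True)[response_col].mean()

# Bars of similar height in the same decile across the two files
# indicate a stable model; large gaps are a warning sign.
stability = pd.DataFrame({
    "development": decile_response_rates(dev),
    "holdout": decile_response_rates(holdout),
})
print(stability)
```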
Predictability: The final product of a model is a prediction for each individual. When the records are ranked by these predictions and divided into ten equal groups, deciles are formed. For each of these segments we may compute:
- Actual response rate
- Predicted response rate
The actual rate for a decile is calculated by adding up its responders and dividing by its total number of records. The predicted rate is obtained by using the model to assign each individual a probability of responding and then averaging those probabilities within the decile. Large gaps between predicted and actual rates are a cause for concern.
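The calculation reads directly off the deciles. The sketch below continues the earlier illustrative example (reusing the holdout frame and its score and response columns, which are assumptions from that code, not names from the text) and reports, per decile, the record count, the actual response rate, and the predicted response rate.

```python
def actual_vs_predicted(df, score_col="score", response_col="response"):
    """Per decile: actual response rate (responders / records) and
    predicted response rate (average of the model's probabilities)."""
    ranked = df.sort_values(score_col, ascending=False).reset_index(drop=True)
    ranked["decile"] = pd.qcut(ranked.index, 10, labels=range(1, 11))
    grouped = ranked.groupby("decile", observed=True)
    return pd.DataFrame({
        "records": grouped[response_col].size(),
        "actual_rate": grouped[response_col].mean(),
        "predicted_rate": grouped[score_col].mean(),
    })

report = actual_vs_predicted(holdout)
report["difference"] = report["actual_rate"] - report["predicted_rate"]
print(report)
```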
The analysis must assess how close the actual rates are to the predicted rates; this is an important dimension in evaluating model strength. Statistical tests can help determine the 'closeness' of the two distributions.
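The text does not name a specific test, so the statistic below is only one common choice: a Hosmer-Lemeshow-style chi-square over the deciles, computed from the report frame built in the previous sketch.

```python
from scipy import stats

# Hosmer-Lemeshow-style chi-square over the deciles (one illustrative
# choice of 'closeness' test, not the only one).
observed = report["actual_rate"] * report["records"]      # observed responders
expected = report["predicted_rate"] * report["records"]   # expected responders

hl_stat = (((observed - expected) ** 2) /
           (expected * (1 - report["predicted_rate"]))).sum()
# Conventional degrees of freedom: number of groups minus 2.
p_value = stats.chi2.sf(hl_stat, df=len(report) - 2)
print(f"HL statistic = {hl_stat:.2f}, p-value = {p_value:.3f}")
```

A small p-value flags deciles where the actual and predicted rates diverge more than chance alone would explain.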
Variety: It is necessary to apply several modeling approaches and compare their results to determine which performs best. Many analysts do not.
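As a sketch of what applying several approaches can look like, the code below fits two different model families on the same development file and compares them on the holdout file; the choice of algorithms and of AUC as the yardstick is an assumption for illustration, continuing the earlier example.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

candidates = {
    "logistic_regression": LogisticRegression(),
    "gradient_boosting": GradientBoostingClassifier(random_state=42),
}

# Same development file, same holdout file, one comparable metric.
for name, candidate in candidates.items():
    candidate.fit(dev[["x1", "x2"]], dev["response"])
    probs = candidate.predict_proba(holdout[["x1", "x2"]])[:, 1]
    print(f"{name}: holdout AUC = {roc_auc_score(holdout['response'], probs):.3f}")
```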
Parsimonious Parameterization: Generally speaking, fewer model predictors are preferable to more. It is reasonable to ask whether the model developer "peeled" off unnecessary variables. The peeling should stop at the point where model results are materially degraded.
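One way to check how far the peeling can go is to drop predictors one at a time and watch the holdout metric. The loop below is a simplified sketch of that idea, continuing the earlier illustrative example; with only two assumed predictors it is deliberately small.

```python
from sklearn.metrics import roc_auc_score

def holdout_auc(predictor_list):
    """Refit on the development file with the given predictors and
    return the resulting holdout AUC."""
    fitted = LogisticRegression().fit(dev[predictor_list], dev["response"])
    probs = fitted.predict_proba(holdout[predictor_list])[:, 1]
    return roc_auc_score(holdout["response"], probs)

predictors = ["x1", "x2"]
baseline = holdout_auc(predictors)

# Drop ("peel") one predictor at a time and report the holdout AUC;
# in practice the peeling stops once the metric is materially hurt.
for dropped in predictors:
    remaining = [p for p in predictors if p != dropped]
    print(f"without {dropped}: AUC {holdout_auc(remaining):.3f} "
          f"(baseline {baseline:.3f})")
```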