Users may agree, for instance, that "true" missing values must be stored as ".". If a variable such as "number of children in the household" is missing, data managers should never enter it as zero unless it is confirmed that the household includes no children. Further, one should assign separate codes for "missing due to a non-match to the external data source" (i.e., a matching issue) vs. "matched to the external source but still missing" (i.e., even your data vendor doesn't know). After all, not matching to a professional data compiler's list may mean something, and the missing-value code itself may act as an independent predictor in models.
For categorical (or non-numeric) data, similar rules should apply. Values such as a plain blank, "N/A", "0" or "." may be used to represent different reasons why values are missing. Once coded separately, these values often end up playing distinct roles in subsequent models, sometimes moving together with other known values.
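The coding rules above can be sketched in a few lines. This is a minimal illustration, not a definitive implementation; the reason codes and the field name are hypothetical placeholders an organization would define for itself.

```python
# Hypothetical missing-value reason codes -- each means something different
# to a downstream model, so they must never be collapsed into one value.
TRUE_MISSING = "."       # value genuinely unknown
NO_MATCH = "U"           # record did not match the external data source
MATCHED_MISSING = "M"    # matched, but even the vendor had no value

def code_children(matched_to_source, vendor_value):
    """Return the stored value for 'number of children in the household'."""
    if not matched_to_source:
        return NO_MATCH          # a matching issue, not a confirmed zero
    if vendor_value is None:
        return MATCHED_MISSING   # matched, but the vendor doesn't know
    return vendor_value          # includes a confirmed 0 (no children)

print(code_children(False, None))  # "U"
print(code_children(True, None))   # "M"
print(code_children(True, 0))      # 0 -- confirmed: no children
```

Note that a confirmed zero passes through untouched; only unconfirmed cases receive a reason code.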
Accounting for Known Unknowns
Modelers often impute values when they encounter missing ones, and there are many different methods; indeed, hardly any two statisticians completely agree on imputation methodology. Nevertheless, it is important for an organization to have a unified rule for each variable regarding its imputation method. Will it be a simple average of non-missing values? If so, what is the minimum required fill rate? Or will the variable be populated with some type of predictive model score? Once the dust settles, all data fields must be treated with pre-defined rules during the database update process. That way, all analysts will have a common starting point. Inconsistent imputation methods often lead to inconsistent results.
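A unified rule of the kind described above might look like the following sketch. The threshold and function name are hypothetical; the point is that the fill-rate check and the imputation method are pre-defined, not left to each analyst.

```python
MIN_FILL_RATE = 0.5  # hypothetical minimum; set per variable by the data team

def impute_mean(values, missing_code="."):
    """Replace missing codes with the mean of non-missing values,
    but only if the fill rate clears the pre-defined minimum."""
    known = [v for v in values if v != missing_code]
    fill_rate = len(known) / len(values)
    if fill_rate < MIN_FILL_RATE:
        # Too sparse for a simple average; flag for a model-based method.
        return values, False
    mean = sum(known) / len(known)
    return [mean if v == missing_code else v for v in values], True

imputed, ok = impute_mean([3, ".", 5, 4, "."])
print(ok)       # True -- fill rate is 0.6
print(imputed)  # [3, 4.0, 5, 4, 4.0]
```

Because the rule is applied during the database update, every analyst who pulls this field starts from the same filled-in values.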
If, by any chance, individual statisticians end up with the freedom to fill in the blanks their own way, their model-scoring code must include the missing-value imputation algorithms as well. Non-statistical staff should also be educated about the imputation methods, so that everyone who has access to the database shares a common understanding. That list may include external data providers.
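Why the scoring code must carry its own imputation can be shown with a toy model. The coefficients, field names, and imputed value below are all hypothetical; the point is that scoring applies exactly the same fill-in logic the statistician used during model development.

```python
# Hypothetical model coefficients and training-time imputation value.
MODEL = {"intercept": 0.1, "num_children": 0.05}
IMPUTED_CHILDREN = 1.2  # mean used when the variable was missing in training

def score(record):
    """Score one record, imputing missing values exactly as in training."""
    children = record.get("num_children")
    if children in (None, ".", "U", "M"):  # any of the missing codes
        children = IMPUTED_CHILDREN        # same imputation as development
    return MODEL["intercept"] + MODEL["num_children"] * children

print(score({"num_children": 2}))    # 0.2
print(score({"num_children": "."}))  # imputed before scoring
```

If the imputation step were left out, a record with "." would either crash the scoring run or be scored on a value the model never saw in development.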
Stephen H. Yu is a world-class database marketer. He has a proven track record in comprehensive strategic planning and tactical execution, effectively bridging the gap between the marketing and technology world with a balanced view obtained from more than 30 years of experience in best practices of database marketing. Currently, Yu is president and chief consultant at Willow Data Strategy. Previously, he was the head of analytics and insights at eClerx, and VP, Data Strategy & Analytics at Infogroup. Prior to that, Yu was the founding CTO of I-Behavior Inc., which pioneered the use of SKU-level behavioral data. “As a long-time data player with plenty of battle experiences, I would like to share my thoughts and knowledge that I obtained from being a bridge person between the marketing world and the technology world. In the end, data and analytics are just tools for decision-makers; let’s think about what we should be (or shouldn’t be) doing with them first. And the tools must be wielded properly to meet the goals, so let me share some useful tricks in database design, data refinement process and analytics.” Reach him at firstname.lastname@example.org.