Database: Get the Dupes Out
Data De-Duping Today
Relationship management starts with accurate data. To reach customers effectively through their preferred channels, to respond to their contact preferences and requests, and to cut postage costs by reducing waste in your mail files (while conserving natural resources in the process), you must keep your files as error-free as possible.
Older de-duplication processes rely on character-based logic and look-up tables that are easily deceived by minor variations in name and address elements, such as married names versus maiden names, nicknames, typos and mis-keys.
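To illustrate the limitation, consider a minimal sketch of character-based de-duplication, where two records count as duplicates only if their key strings match exactly. The records and nickname are hypothetical examples, not data from any real file:

```python
# Sketch: exact, character-based de-duplication misses near-duplicates.
# All three hypothetical records describe the same person.
records = [
    {"name": "Robert Smith", "address": "12 Oak St"},
    {"name": "Bob Smith",    "address": "12 Oak St"},   # nickname
    {"name": "Robert Smtih", "address": "12 Oak St"},   # mis-key
]

def exact_key(rec):
    """Character-based key: records match only if every character matches."""
    return (rec["name"].lower(), rec["address"].lower())

unique = {exact_key(r) for r in records}
print(len(unique))  # 3 -- all three survive de-duplication
```

One person remains in the file three times, because no look-up key connects "Bob" to "Robert" or tolerates the transposed letters in "Smtih".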
Fortunately, marketers are starting to tap newer technology built on referential transaction databases. These databases make it possible to supply current addresses for the majority of movers for whom no change-of-address information is otherwise on file. This gap typically arises when the mover never completes the NCOA (National Change of Address) process, or when an NCOA record does not match the existing name and address on file.
These databases serve as a knowledge base of millions or billions of data elements culled from purchase transaction information, the marketer's own data and outside data sources. Including transactional data helps create a highly accurate view of consumers at any given address or point in time.
How It Works
Newer technology can automate processes that once required human intervention. These solutions employ advanced customer recognition technology and "fuzzy logic" to address the identity-recognition issues caused by the dynamic nature of customers and the multiple methods of data collection in use today. By applying newer, more sophisticated data-matching algorithms to customer recognition challenges, they can identify and "collapse" customers with multiple identities, regardless of name and address permutations, omissions or mis-keys.
Today's advanced customer recognition solutions incorporate several distinct types of matching algorithms, each designed to group records into temporal data sets so that distinct patterns of recurring error become visible.