Legacy Data Rules Can Mean Big Risks for Marketers
It is very dangerous to forget about legacy data processing rules that may impact future marketing campaign performance. The only way to avoid this risk is to make sure someone with relevant experience is involved in planning and implementing big data transitions.
Let’s look at two examples of how things can go wrong, but also how errors can be avoided.
Transitioning to a New ESP Platform
In our first example, all of the customer data for the business existed on several different platforms after years of acquiring new brands and adding new media channels. Customer journey mapping became the company’s main initiative for optimizing sales performance this past year. All of the data connected to a customer record had to be moved into a single platform.
A new ESP platform was chosen and a transition started to make customer journey marketing a reality. Getting the disparate data into a single new platform was a daunting challenge.
In this case, there had been several employee changes along the way, and the key analyst with experience in the old platforms was gone. The only remaining analysts who knew the data worked at the outsourced service bureau, where they had spent years segmenting customer records into mailing files for this business. But they were not invited to join the process, and the existing rules for how the data would be used in marketing campaigns were never considered in the platform transition.
Many decisions had to be made along the way, as data was imported into the new platform. In this case, one of the decisions was to add a “letter” as a prefix to all NEW customer records and sales orders. The reason was to help identify the various business divisions. Data analysts know that any time we fool with a customer record number, there are always consequences somewhere down the line.
The long-term service bureau that had been a partner over the years would have known to consider the consequence of adding a letter to the customer number, but was not involved or notified of the change. Files were output and deadlines met.
Six months later, the new platform was turned on and sales started growing. However, when the first response analysis was executed, there were no new buyers present. None! It turns out that a legacy rule was in place from many years ago that suppressed any order with a letter in the customer record.
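A suppression rule like this is easy to picture in a few lines. The sketch below is hypothetical — the field names and IDs are invented, and the real rule lived inside the mailing platform — but it shows how a decades-old filter silently drops every new, letter-prefixed customer:

```python
# Hypothetical illustration of the legacy suppression rule.
# Field names and IDs are assumptions, not from the actual system.

def has_letter(customer_id: str) -> bool:
    """True if any character in the customer ID is alphabetic."""
    return any(ch.isalpha() for ch in customer_id)

orders = [
    {"customer_id": "1048221", "amount": 54.00},   # legacy numeric ID
    {"customer_id": "A1048222", "amount": 37.50},  # new ID with division prefix
]

# The years-old rule: suppress any order with a letter in the customer record.
mailable = [o for o in orders if not has_letter(o["customer_id"])]

print(len(mailable))  # only the legacy record survives; the new buyer is gone
```

Nothing errors and nothing is logged — the new buyers simply never make it into the mail file, which is exactly why the problem took weeks to find.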
It took weeks to chase the problem down and fix it, costing the business tens of thousands in lost sales while new customers went unmailed. The lesson here: an analyst with a history of how the data is processed must be involved in any major data transition. That analyst may or may not be an employee of the business.
Adding Segmentation to the Buyer File
In our second example, a retail chain with 100 stores transitioned to a new point-of-sale platform and started sending all of its transaction data to a central repository. When it came time to put an offer in the mail, only the names and addresses were sent for processing into a mail file for print marketing campaigns.
Recently, management decided to add some basic RFM (recency, frequency and monetary value) segmentation to the mailing decisions. That decision meant a big data segmentation initiative would have to be executed on the records being mailed.
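For readers less familiar with RFM, a minimal scoring sketch looks something like this. The thresholds, field names and date are all assumptions for illustration — real programs tune the break points to their own purchase cycles:

```python
# Minimal RFM scoring sketch (hypothetical thresholds and reference date).
from datetime import date

def rfm_score(last_order: date, order_count: int, total_spend: float,
              today: date = date(2024, 1, 1)) -> str:
    """Score each dimension 1 (low) to 3 (high) and return e.g. '321'."""
    recency_days = (today - last_order).days
    r = 3 if recency_days <= 90 else 2 if recency_days <= 365 else 1
    f = 3 if order_count >= 10 else 2 if order_count >= 3 else 1
    m = 3 if total_spend >= 500 else 2 if total_spend >= 100 else 1
    return f"{r}{f}{m}"

print(rfm_score(date(2023, 11, 15), 12, 820.00))  # recent, frequent, high value -> "333"
```

Even a coarse three-level score like this lets a mailer rank the file and mail the best segments first instead of mailing to a flat budget number.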
Unfortunately, the internal IT department had too much on its plate to take on the basic segmentation initiative, so the project was assigned to a third-party data resource that had been performing simple deduplication and postal hygiene for years. A million records containing only names and addresses were sent off, and no one reviewed the output quantity after segmentation. Everyone was too busy with other deadlines, and no one thought to keep the internal analysts involved.
During the response analysis, geographic segmentation was added to gain a view of the appropriate drive distance to each retail location for the mail file. The result: 19.6% of the transactions did NOT match the mail file and came back as "blank." The $4.7 million these blank records represented raised a few eyebrows!
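The match-back itself is a simple join. This sketch uses made-up IDs and amounts, but it shows the mechanics of flagging revenue from buyers who were never in the mail file:

```python
# Hypothetical match-back sketch: join transactions to the mail file and
# total the revenue from buyers who were never mailed. Data is invented.

mail_file_ids = {"1001", "1002", "1003"}

transactions = [
    {"customer_id": "1001", "amount": 120.00},
    {"customer_id": "1002", "amount": 80.00},
    {"customer_id": "9999", "amount": 200.00},  # buyer not in the mail file
]

unmatched = [t for t in transactions if t["customer_id"] not in mail_file_ids]
unmatched_revenue = sum(t["amount"] for t in unmatched)
unmatched_rate = len(unmatched) / len(transactions)

print(f"{unmatched_rate:.1%} of transactions did not match; "
      f"${unmatched_revenue:,.2f} in unmailed revenue")
```

Running this kind of match-back routinely, not just once during a response analysis, is what surfaces a "blank" population before it reaches $4.7 million.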
At this point, management stopped the process to understand why so many sales had come from records that were not mailed. They also wanted to know how much money may have been left on the table by not mailing some of these records.
Similar to our first example, we discovered a legacy rule hiding in the background that no one had paid attention to: a distance rule buried in the initial deduplication process itself that capped the mail quantity at an arbitrary number dictated by a specific budget. The file was reduced, and valuable records were never mailed.
Over the years, names and addresses were simply sent off, and the IT analysts never had a reason to look at the mail file output; the strategy in play just mailed to an arbitrary budget number. The problem could have been avoided by assigning the new segmentation process to internal IT analysts with experience in the relevant data processes. They would have immediately noticed that the output file and the mail file had drastically different totals.
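The check those analysts would have performed amounts to a one-function reconciliation: compare what went in with what came out, and stop the job when the shrinkage exceeds what deduplication alone could explain. The threshold below is an assumption for illustration:

```python
# Hypothetical reconciliation check: flag a mail file that shrinks far more
# than deduplication and hygiene alone would explain.

def check_shrinkage(input_count: int, output_count: int,
                    max_expected_drop: float = 0.05) -> None:
    """Raise if the output lost more than max_expected_drop of the input."""
    drop = 1 - output_count / input_count
    if drop > max_expected_drop:
        raise ValueError(
            f"Mail file lost {drop:.1%} of records; "
            "investigate suppression rules before mailing."
        )

check_shrinkage(1_000_000, 980_000)    # ~2% drop: passes quietly
# check_shrinkage(1_000_000, 600_000)  # 40% drop: would raise ValueError
```

A guard like this costs minutes to add and would have caught both the letter-prefix suppression in the first example and the hidden distance cap in the second.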
The lesson for marketers in these examples is a simple one: Make sure an analyst with experience in your data processes is involved in every element of a big data transition. That analyst will have the necessary history with the data to ensure that all legacy rules, and the ways they impact marketing campaigns, are considered.