Investing in Testing? These Traps Can Cost You Dearly
While others debate what to send, marketers who test already have the upper hand in creating content that kills. Previously I discussed the places you should test in your email program, but before you act on your results, make sure you're analyzing them correctly. You could be passing over the winning result if you fall into any of these six testing traps:
1. Poor testing conditions: As any scientist will tell you, you need to keep your lab clean. For a scientist this means clean beakers, test tubes, slides, etc. For a marketer, it means your email data. Performing tests in an unclean environment can invalidate the results. Until you're sure your emails are reaching the inboxes of active users, hold off on testing; otherwise you may be calling a winning campaign a dud when the real dud is your deliverability.
2. Jumping to conclusions: Remember taking tests in school? How you would race against the clock to answer everything you could (and maybe guess on a couple of multiple-choice questions)? How would you feel if your teacher stopped the test early? Would it be an accurate representation of your knowledge? Similarly, ending an email test before it has had enough time to collect statistically valid results will give you an inaccurate view of your subscribers' preferences. At minimum, give a test 48 to 72 hours to increase the number of data points and, with them, the accuracy of your results. (A quick way to check whether a gap between variants is real or just noise is a significance test; see the sketch after this list.)
3. Poor representation: While four out of five dentists might agree on gum, you should still ask more than five if you want reliable guidance. While an email test should be performed on as small a sample of your subscriber list as possible, you need to make sure the test group is large enough to produce statistically meaningful results. Before you begin a test, calculate the sample size needed to produce results that are representative of your subscribers. Online calculators are a great, free resource here, and the sketch after this list shows the same math.
4. Mistaken identity: Email marketing isn't done in a vacuum; other variables are bound to influence your test. The best way to make sure your results are driven by the variable you're testing is to change only one thing at a time. In addition, be conscious of the factors you can't control. Some of the major ones you can guard against (unless, of course, they're what you're testing for) are delivery day and time (remember to account for different time zones), device or email client, and engagement (have some segments been inactive for months? Purchased recently?).
In addition to those variables, factors outside the email landscape will affect your campaign's performance. Direct mail campaigns, in-store promotions and other offers can influence your tests. Make sure you're analyzing your results with these factors in mind.
5. Hidden costs: A hidden cost is a problem you don't discover until it's too late. In email testing, it's a test deemed a winner on one measure (e.g., read rate) when it's actually a loser on others (e.g., complaints) and ultimately ends up hurting your email program. When performing a test, track all of your metrics to prevent this kind of collateral damage. You may think a test is successful because your campaign's read rate is higher than average, but if your complaints soar and threaten your inbox placement, your campaign wasn't the winner you thought it was. Keep a close eye on metrics like your delete-unread rate to make sure you aren't driving your customers away.
6. Stale findings: There's a reason we have channels like CNN that broadcast the news 24/7: The world is constantly changing. Your email program is no different. Over time you'll gain (and lose) subscribers, generate new content, and develop new offerings. With all these changes, should you assume your tests from a year ago are still valid? As your audience and offerings change, retest to stay relevant to your current subscriber base. Just because subscribers responded to a particular tactic a couple of years ago doesn't mean it works today.
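If you'd rather run the numbers yourself than lean on an online calculator, here is a minimal sketch of the math behind traps 2 and 3: a representative-sample size based on a margin of error, and a two-proportion z-test for comparing variants. It uses only Python's standard library and a normal approximation; the function names, open counts, and rates are made up for illustration, not taken from any particular testing tool.

```python
import math
from statistics import NormalDist

def sample_size(baseline_rate: float, margin_of_error: float,
                confidence: float = 0.95) -> int:
    """Subscribers needed per test group so the observed rate lands
    within +/- margin_of_error of the true rate at the given confidence."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # 1.96 at 95%
    p = baseline_rate
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

def is_significant(opens_a: int, sent_a: int,
                   opens_b: int, sent_b: int,
                   alpha: float = 0.05) -> bool:
    """Two-proportion z-test: is the gap between the variants larger
    than normal random variation, at significance level alpha?"""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    return p_value < alpha

# Trap 3: with a ~20% baseline open rate, how many subscribers per
# group do we need to measure the rate to within +/- 2 points?
print(sample_size(0.20, 0.02))               # -> 1537 per group

# Trap 2: variant B opened 450/2000 vs. A's 400/2000. A real win?
print(is_significant(400, 2000, 450, 2000))  # -> False: could be noise
```

The second check is exactly what trap 2 warns about: in this example, variant B's 2.5-point lift looks like a winner, yet the test says the gap could still be random noise, so the campaign needs more time or a larger sample before you crown a champion.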
Related story: Testing, Testing 1, 2, 3