Step Up Your Game
It isn’t difficult to make a case for e-mail testing. After all, when performed correctly, testing enables marketers to continuously develop and improve their e-mail programs and realize positive results. For instance, it’s not uncommon to see a 20 percent increase in your open rate by making small changes to an e-mail’s subject line. What is there not to like about that?
Even though testing is a critical component of e-mail marketing, many marketers are uncertain about how to test properly. While most marketers perform some general types of tests, some make critical missteps in setting up their tests and evaluating results. Others have yet to take the next step in their marketing programs by adopting more advanced—yet very achievable—measurement techniques.
With some advance planning, you can take steps now to improve the performance of your e-mail marketing program.
Keep Your Eye on the Goal
As you begin planning your next campaign, it is critical to establish specific goals up front. Some marketers forgo the advance work, put together an e-mail and test it. However, isolated, “one-off” tests that are not part of a planned approach are not likely to help you see sustained improvement in your results.
Your goals and the metrics you are testing will determine the methods you use to test. For example, if your goal is to lift your open rate, you can test multiple subject lines.
Perhaps the No. 1 problem marketers face is that they think they know what the results will be before a campaign runs. While it may be tempting to base your current strategy on past performance or on knowledge gained from other industries, try not to enter into testing with any preconceived bias or a strong “gut feeling” about what your results will reveal. Use your past experience to frame your current tests, but until you have actually tested a hypothesis, don’t treat it as proven.
Follow the Leaders
Marketers typically use two methods for measuring the performance of their e-mail programs. The most commonly used method is A/B testing, which compares the effectiveness of two or more e-mails across one independent variable such as subject line or layout. Less common is multivariate testing, a method that tests various independent variables within each e-mail to determine the optimal combination. An example might be testing your subject line, offer and image using different combinations of each of these three components.
Multivariate testing can seem overwhelming considering all the possibilities and factors involved—from creative elements such as layout, images, copy and colors, to details on your offer such as pricing, to your landing page and online forms. But the good news is small changes derived from multivariate testing can lead to good results. Changing your subject line or swapping out one image for another can make a big difference. And positive results can be realized by testing a limited number of e-mails at a time.
Leading marketers know A/B testing and multivariate testing are not exclusive and can be used together as part of an effective e-mail testing approach. By conducting an A/B test first, you can determine which factors influence your results. You then can employ multivariate testing to fine-tune your approach.
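The cells of a multivariate test are simply every combination of the elements under test. As a minimal sketch (the variant names below are hypothetical, not from the article), the subject line/offer/image example works out to eight cells:

```python
# Enumerate the cells of a multivariate test: every combination of
# subject line, offer and image. Variant names are illustrative only.
from itertools import product

subject_lines = ["Subject A", "Subject B"]
offers = ["20% off", "Free shipping"]
images = ["hero_photo", "product_grid"]

cells = list(product(subject_lines, offers, images))
for i, (subject, offer, image) in enumerate(cells, start=1):
    print(f"Cell {i}: subject={subject!r}, offer={offer!r}, image={image!r}")

# 2 subject lines x 2 offers x 2 images = 8 cells to split the list across
print(len(cells))
```

Each cell gets its own slice of the audience, which is why the number of elements you test at once directly drives how large a list you need.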
As depicted in the testing strategy diagram on page 35, a relatively straightforward but comprehensive testing strategy begins with an A/B test of a small part of the total available audience. This test, for example, can be used to determine the winning e-mail layout and should include vastly different versions. Once a winner is determined, this layout can be rolled out to the rest of the audience.
The winning layout then undergoes a multivariate test to determine which elements affect conversion and the best possible combination of those elements. To confirm the tests, the winner of the initial A/B test and the winner of the multivariate test should be tested against one another. The learnings from this strategy can be used in other channels or campaigns. Testing then begins again at the A/B stage with the testing of another major component of the e-mail campaign.
At the core of any successful e-mail measurement program is the ability to measure its direct impact on your business. Quantitatively tracking the numbers behind your e-mail campaigns will put you ahead of the curve. Some marketplace estimates indicate roughly 30 percent of companies do not track open and clickthrough rates—a surprising statistic, given all the tools available. Beyond basic click and open tracking, several tools also exist that track Web behavior.
Make sure your results are statistically significant. Stick to your initial plan, and don’t cut off testing too early, even when it looks like a clear “winner” is emerging. Wait until all results are in.
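One common way to check that an apparent “winner” is statistically significant is a two-proportion z-test on, say, open rates. This sketch uses only the Python standard library, and the counts below are illustrative, not from the article:

```python
import math

def two_proportion_z(opens_a, sent_a, opens_b, sent_b):
    """Two-sided z-test comparing the open rates of e-mails A and B."""
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    # Pooled rate under the null hypothesis that A and B perform the same.
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative: 220 opens of 1,000 sent vs. 180 opens of 1,000 sent.
z, p = two_proportion_z(220, 1000, 180, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below your chosen threshold (commonly 0.05) is the signal that the difference is unlikely to be chance; until then, keep the test running.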
Of course, you can’t track your results if your e-mails aren’t getting through. Deliverability is a major issue in e-mail marketing, and you must understand your own deliverability rates. For example, if Yahoo! e-mail sends all of your e-mails to the bulk folder, this will dramatically affect your results based on how many names in each group use Yahoo! for e-mail—and never saw your message in the first place.
Dive Into Data With Segmentation
Once you figure out what you’re going to test, determine which audience segments to test. This way, when you send out “E-mail A” and “E-mail B,” you will know not only which one outperformed, but you also can examine who, in particular, liked which approach more. For example, you may learn one e-mail performed better among men than among women.
As a final step, segment your campaign so you can track its effectiveness at the end of the testing process. In addition to your creative and offer variations, consider segmenting on one or more of the following:
• Demographic information;
• E-mail domain (@comcast.net, @aol.com);
• Current customer status (actives and inactives);
• List engagement;
• List type/source (co-registration, in-house list);
• Users who have not responded to a campaign in the past X months; and
• Any other metrics that may be pertinent or available from your data.
To make things easy, your segments should be relatively straightforward. For example, SMA and SFA could be used for “Subject Line A—Male Recipient” and “Subject Line A—Female Recipient,” respectively. The key is to track each segment individually.
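A tiny helper can keep such segment codes consistent. This is a hypothetical sketch that just follows the SMA/SFA naming pattern above:

```python
# Build a segment code like "SMA" (Subject Line A, Male Recipient)
# or "SFA" (Subject Line A, Female Recipient).
def segment_code(subject_variant, gender):
    """Return 'S' + first letter of gender + subject-line variant letter."""
    return f"S{gender[0].upper()}{subject_variant.upper()}"

print(segment_code("a", "male"))    # SMA
print(segment_code("a", "female"))  # SFA
```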
Be careful not to cut the segments too small. The smaller the segments, the longer it will take to accumulate enough conversions for a statistically valid test. The chart at left shows how many messages you need to send to achieve a statistically valid sample.
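The article’s chart is not reproduced here, but the standard power-analysis formula behind such charts can be sketched as follows (the baseline rate and lift below are illustrative assumptions, not the article’s figures):

```python
import math

def required_sample_size(p_baseline, relative_lift):
    """Approximate names needed per segment to detect a relative lift in
    conversion rate, assuming a two-sided test at alpha = 0.05 with 80% power.
    """
    z_alpha = 1.96  # two-sided 5% significance level
    z_beta = 0.84   # 80% power
    p1 = p_baseline
    p2 = p_baseline * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting a 20 percent relative lift on a 2 percent conversion rate
# requires tens of thousands of names per segment:
print(required_sample_size(0.02, 0.20))
```

Note how quickly the required list size grows as conversion rates shrink or segments multiply; this is the arithmetic behind the warning not to cut segments too small.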
You’ve done a good deal of work to get ready to test. Now that you are set up properly, you can track clickthrough and conversion rates for each segment—and realize your goals.
Now is the time to go back to the metrics you set up in the beginning and examine these same elements at the end of the campaign.
Never assume that because one e-mail outperformed another it is superior in every way. Instead, look at your data more closely. The top-performing e-mail may have generated more clickthroughs, but did it produce fewer sales or a lower margin? Likewise, if you are conducting a multivariate test of image and sales copy, the image from the winning combination is not necessarily the best on its own.
Look at all metrics, and act on the important ones. If you set up a test with a focus on increasing clickthrough rates, at the end of the test be sure to see if the changes you made ultimately changed Web site or call-in behavior. Remember, simple changes can have profound effects, and you don’t want to increase one metric at the expense of another.
Testing does not end with e-mail. It also should not occur in isolation or as a solo component of marketing apart from search engine marketing or print media. Your results should help shape other aspects of your marketing programs. Once you’ve tested and found a winning combination for your e-mails, you can test how that approach works on your Web site. If the results are good, you can start applying that approach to other aspects of your marketing program.
It may be impossible for you to get one perfect e-mail that is right for every audience. But by continuously testing and interpreting the results, you’ll be able to improve your e-mail and create messages that are nearly “perfect.” While it may seem like a good deal of work, the payoff in improved campaign performance is large—and your competitors probably won’t be taking these basic steps to fine-tune their own marketing campaigns.
Brett Charney is the director of strategic services at Merkle, a Lanham, Md.-based database marketing agency. Charney’s 10 years of online marketing expertise includes e-mail, search, Web site analytics and optimization, and online advertising. He can be reached at firstname.lastname@example.org.