Catch the E-mail Testing Bug!
No doubt you’ve read many articles touting the benefits of testing variables within your e-mail marketing campaigns. Testing can lead to results such as improved clickthrough rates and increased sales. That is why it’s so exciting to learn about people catching the e-mail testing bug.
Recently, I received the following note from a marketer: “Testing does matter. I just looked at the results from my first A/B split test, and the subject line I made up had a 40 percent higher clickthrough rate than the one written by our communications department.” Through testing, she had hit on a winning tactic.
Beyond Subject Line Testing
Most e-mail marketers catch the testing bug when they see results from a subject line test, which is fast and simple to implement. It is also just the first step: there is a broad range of factors you can test to improve the effectiveness of your e-mail programs.
In e-mail marketing, you can track each step of your customers’ engagement with your messages, which provides incredible insight and opportunities for improvement. You can detect whether customers opened an e-mail, as well as what they did once the messages were opened. For example, did they click on a link or multiple links, and which ones? Did they complete a survey or forward the e-mail to someone else? Once your e-mail tracking is integrated with Web analytics tools, you gain insight into post-click activity and conversion data. Do the links in your e-mail direct people to the correct landing pages? Do they act on the call-to-action on your landing page, or do they search for something else? Ultimately, do they do something that impacts your bottom line, like making a purchase, calling a sales rep or registering for an event? Each of these pieces of information gives you an opportunity to serve customers better and improve your bottom line through testing and optimization.
To start the testing process, use the following three-step approach, which will help you identify what to test within your e-mail campaigns and the appropriate means of conducting the tests.
1. Identify your greatest opportunities to improve your e-mail program.
You’ll want to prioritize the points of the e-mail marketing program that need to be optimized by evaluating the e-mail response funnel (e.g., delivered, opens, clicks, product views, sales). The e-mail response funnel is the same for all organizations up to the click stage. At that point, the response funnel should reflect the objectives of your site. While the objective may be sales, it could just as well be registration for an event, driving a phone call, printing a coupon or downloading a whitepaper. To evaluate your e-mail response funnel, compile your e-mail campaigns’ historical averages of delivery, open, clickthrough and conversion rates, and compare those to a good set of benchmarks (e.g., by considering industry and list size). This will help you identify and prioritize areas that need optimization.
For example, say you are a consulting services firm with a list of 5,000 subscribers. Your average unique open rate is 30 percent and your average unique clickthrough rate is 2 percent. After gathering industry benchmark data from your e-mail service provider, you find that the open rate is in line with industry averages, but the clickthrough rate is much lower. You now have identified the starting point for your optimization efforts: the design of your e-mails. Alternatively, if your average unique open rate is 15 percent and your average unique clickthrough rate is 2 percent, you’ll want to look at the factors that affect whether your e-mails get opened, since a 15 percent unique open rate is well below comparable benchmarks, and opens precede clicks on the e-mail response funnel.
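To make this kind of funnel review concrete, here is a minimal sketch in Python that flags the stage falling furthest behind its benchmark. The metric names and all of the benchmark figures are hypothetical placeholders, not published industry data; substitute your own historical averages and benchmark sources.

```python
# Sketch: compare historical e-mail funnel averages against benchmarks to flag
# the stage most in need of optimization. All numbers below are hypothetical
# placeholders, not real industry benchmarks.

# Historical averages for your campaigns (rates as fractions)
campaign = {"delivered": 0.97, "open": 0.30, "clickthrough": 0.02, "conversion": 0.004}

# Benchmark figures for a comparable industry and list size (assumed values)
benchmark = {"delivered": 0.96, "open": 0.28, "clickthrough": 0.05, "conversion": 0.006}

def funnel_gaps(campaign, benchmark):
    """Return each funnel stage with its relative gap versus benchmark, worst first."""
    gaps = []
    for stage, bench_rate in benchmark.items():
        gap = (campaign[stage] - bench_rate) / bench_rate  # relative difference
        gaps.append((stage, campaign[stage], bench_rate, gap))
    return sorted(gaps, key=lambda row: row[3])  # most negative gap first

for stage, ours, bench, gap in funnel_gaps(campaign, benchmark):
    flag = "  <-- prioritize" if gap < -0.10 else ""
    print(f"{stage:12s} ours={ours:.1%}  benchmark={bench:.1%}  gap={gap:+.0%}{flag}")
```

With the placeholder numbers above, the clickthrough stage shows the largest shortfall, matching the consulting-firm example: that is where optimization effort should start.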
2. Determine what factors or elements have an impact on each stage.
Each point of activity on the e-mail response funnel has distinct factors that influence whether or not the desired actions are taken.
Open. When you’re focusing on getting the e-mail opened, the sender (or “from”) line and the subject line are critical. Sender lines should remain consistent across your e-mail sends, but if there is a strong indication that your sender line is not recognized by your audience, test two to three viable alternatives and settle on a new one.
Frequency also has an important impact on open rates. E-mail too often and open rates drop; e-mail too infrequently, and you will miss opportunities to get your message out. Consider longitudinal tests of small samples compared to your main list, which acts as the control, to look at both increasing and decreasing frequency. For a frequency test, you would consider the short- versus long-term impact on the bottom line. For example, in the online retail environment, the best way to maximize short-term revenue is to send offers every day. However, this causes a spike in unsubscribes and results in e-mail being labeled as spam; the net result of this tactic is a severe decrease in long-term revenue. Some marketers, such as Amazon.com, devise programs that aren’t based on a set schedule; rather, they limit mailings to timely and relevant communications. When analyzing the results, you will want to look at conversion as well as open rates to find the balance that maximizes sustainable ROI.
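For illustration only, the short sketch below shows one simplified way to weigh that short- versus long-term trade-off: it projects revenue per 1,000 subscribers for each frequency cell after a year of compounding unsubscribe attrition. The cell names, revenue figures and unsubscribe rates are made up, and the attrition model is deliberately crude; the point is simply to compare cells on more than one month’s revenue.

```python
# Sketch: evaluate a longitudinal frequency test by weighing short-term revenue
# against list attrition. Cell definitions and all numbers are hypothetical;
# substitute your own tracked results.

cells = {
    # frequency cell: (sends per month, revenue per 1,000 subscribers, unsubscribe rate per send)
    "control (4/mo)": (4, 1200.0, 0.002),
    "higher (8/mo)":  (8, 1750.0, 0.009),
    "lower (2/mo)":   (2,  800.0, 0.001),
}

MONTHS = 12  # horizon over which to project the impact of attrition

for name, (sends, revenue_per_k, unsub_rate) in cells.items():
    # Share of subscribers remaining after a year of compounding monthly attrition
    retained = (1 - unsub_rate * sends) ** MONTHS
    projected = revenue_per_k * retained  # monthly revenue adjusted for list shrinkage
    print(f"{name:16s} monthly rev/1k={revenue_per_k:7.0f}  "
          f"retained after {MONTHS} mo={retained:.1%}  adjusted rev/1k={projected:7.0f}")
```

In this made-up example, the high-frequency cell wins on raw monthly revenue but falls behind the control once the year of attrition is factored in, which is exactly the pattern the daily-offer tactic tends to produce.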
Click. Once the e-mail is open, the factors that can influence whether or not a customer chooses to click through can be overwhelming to a marketer. To start, consider the following list of common areas for clickthrough testing:
• Personalization. Does using the recipient’s first name improve clickthrough rates? How about a link to a map showing the store closest to the customer’s home? How much personalization is too little, and how much is too much?
• Day/time sent. Do you notice more clickthroughs on Tuesdays than on Saturdays, more at noon than at 5 p.m.?
• Offers/call-to-action. Which works best? Half Off? Fifty percent off? Two for one? Should offers have a limited time frame? Should you ask for the sale, or just a demo?
• Segmentation. How does response change when you target smaller segments? How can segmentation models be optimized?
• Topics/content. What topics get people excited? Observe the links that your customers click most frequently and give them more of what they want.
• Length of copy. Short and sweet or lots of details?
• Copy tone. Which tone is going to get people most engaged with your e-mail? Cheerful? Serious? Funny? Who thought insurance advertising would take a comical tone? Yet Nationwide Insurance has been successful with its “life comes at you fast” campaign. Use e-mail to test out those wild new ideas.
• Creative elements. What is the effect of changing creative elements such as images, colors, fonts, response buttons or layouts? Should you use bullets or numbered lists? Which elements have an impact? Which elements don’t?
• Pricing. What price point optimizes profit? Test the balance between higher number of sales and higher margin.
• Navigation. Should you include site navigation in your e-mail? Or does it distract customers from the main message?
Conversion. Once people have clicked through, is there a clear path to conversion? If you’re successful at getting people to open and click through on your e-mails, but conversion rates are low, then the answer probably is no. In this case, forgo e-mail testing for a while and focus on optimizing your site and/or landing pages. Ensure that the messages that are working in the body of your e-mail are carried through on your site. For example, if an e-mail link takes customers to a general content page that requires them to search for the information they expected to find, you are likely to lose those customers. There need to be visual cues that customers are in the right place, and the information promised in the e-mail should be prominent on the landing page. If these basic elements are in place and there is still a conversion issue, then more robust optimization is required to look at the call to action, pricing, product or service descriptions, and site usability.
3. Determine which testing methodology will provide necessary insights.
Three primary methodologies can be leveraged in e-mail testing. Each of these methodologies has advantages and disadvantages you’ll want to consider when preparing to design an e-mail test:
• Split testing (i.e., A/B, A/B/C) involves testing a single factor, such as subject lines, images or price points. Its advantage is speed and ease of execution. To conduct the test, devise different options of the factor (e.g., two subject lines, three price points) to present to randomized test groups. For the results to be valid, nothing else about the messages should be different. For example, if you’re testing subject lines, send the messages at the same time and make sure the e-mail content is identical; only the subject lines change. Once most responses are in, compare the response rates to see whether the difference between them is statistically significant. If so, you’ve found your winning subject line.
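If your e-mail service provider does not report significance for you, a simple two-proportion z-test is one common way to check it. The sketch below uses made-up send and click counts; treat it as an illustration of the calculation rather than a full statistical treatment.

```python
# Sketch: check whether the difference between two split-test cells is
# statistically significant using a two-proportion z-test. Counts below are
# hypothetical.
import math

def two_proportion_z(clicks_a, sent_a, clicks_b, sent_b):
    """Return the z statistic and two-sided p-value for rate A vs. rate B."""
    p_a, p_b = clicks_a / sent_a, clicks_b / sent_b
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Subject line A vs. subject line B (made-up counts)
z, p = two_proportion_z(clicks_a=180, sent_a=2500, clicks_b=130, sent_b=2500)
print(f"z = {z:.2f}, p = {p:.4f}")
print("Significant at the 5% level" if p < 0.05 else "Difference may be noise")
```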
• Multivariate testing allows you to simultaneously look at several factors, such as price points and offers, and evaluate the interactions of those factors by creating a test grid. For a simple example, let’s assume you’re selling a single product in an e-mail. You want to test two price points and have a long- and a short-copy version of the e-mail. The test grid would contain four test cells. You then create four e-mails, assign recipients to four randomized test groups, and run the test.
When analyzing the results, you may find that e-mails with the short copy have a better conversion rate than long copy, and the lower price point performed better than the higher one. Had you run sequential A/B split tests of these factors, you may have decided to send e-mails with the lower price point and short copy.
However, when looking at the interaction of price point and copy, you find that short copy wasn’t always the winner. When combined with the high price point, the conversion rate actually was poor. The same goes for long copy and low price point. Most importantly, the long copy version combined with the higher price point performed just slightly worse than the winner.
You’re now ready to incorporate the information gleaned through the analysis of these interactions into your product-pricing decisions.
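To see how the grid and the interaction fit together, here is a small sketch that lays out the four test cells and compares the main effects (roughly what sequential A/B tests would report) with the ranking of the individual cells. The conversion rates are invented to mirror the example above.

```python
# Sketch: lay out the 2 x 2 multivariate test grid described above and look at
# how the copy-length and price-point factors interact. Conversion rates are
# hypothetical, chosen to mirror the example in the text.

results = {
    # (copy length, price point): conversion rate observed in that test cell
    ("short copy", "low price"):  0.055,
    ("short copy", "high price"): 0.020,
    ("long copy",  "low price"):  0.018,
    ("long copy",  "high price"): 0.050,
}

# Main effects: roughly what sequential A/B split tests would have reported
def average(cells):
    return sum(results[c] for c in cells) / len(cells)

print("short copy avg:", f"{average([k for k in results if k[0] == 'short copy']):.1%}")
print("long copy avg: ", f"{average([k for k in results if k[0] == 'long copy']):.1%}")
print("low price avg: ", f"{average([k for k in results if k[1] == 'low price']):.1%}")
print("high price avg:", f"{average([k for k in results if k[1] == 'high price']):.1%}")

# Interaction: rank the individual cells. Note that long copy + high price
# ranks a close second, which the main effects alone would not reveal.
for cell, rate in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cell[0]:10s} + {cell[1]:10s} -> {rate:.1%}")
```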
• Design of experiments (i.e., fractional factorial designs and Taguchi robust design experiments) allows marketers to look at a large number of factors while minimizing the number of test e-mails that need to be created. For example, you can conduct a test that looks at 15 factors, each with two options, providing test results for 32,768 different possible e-mails by creating just 18 e-mails. During the design phase of the experiment, the factors and options are identified in a brainstorming session involving both marketers and test designers. Then a representative set of 18 e-mails is identified by leveraging statistical software with a design of experiments module. After the specified e-mails are created and deployed, the responses from those 18 e-mails are used to build predictive models that point marketers to the optimal combination of factors for the e-mail.
To be sure, designing the experiments and creating multiple e-mail versions can be a big project; executing this type of testing takes discipline, careful planning and creative preparation. However, compared to the execution of thousands of A/B split tests, the advantages are clear. Design of experiments allows you to learn in weeks what would take years to learn using traditional testing options, and the results can be very impressive. Dell, for example, has publicized that it gained more than a seven-fold increase in sales from a Taguchi-optimized e-mail campaign.
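The modeling step behind this approach can be illustrated with a simplified main-effects sketch: send a fraction of the possible e-mails, fit a model to the observed response rates, then score every combination to predict the winner. The five factors, the particular fraction of cells and all of the response rates below are hypothetical, and a true fractional factorial or Taguchi design would be generated with dedicated statistical software rather than chosen by hand.

```python
# Sketch of the modeling idea behind design of experiments: test a small
# fraction of all possible e-mails, fit a main-effects model to the observed
# response rates, then score every combination to predict the best one.
# The five factors and all response numbers are made up for illustration.
import itertools
import numpy as np

FACTORS = ["personalized", "short_copy", "discount_offer", "hero_image", "buy_button"]

# A fraction of the 2**5 = 32 possible e-mails actually sent (factors coded -1 / +1),
# paired with the clickthrough rate each test cell produced (hypothetical data).
tested = [
    ((-1, -1, -1, -1, -1), 0.021),
    ((+1, -1, -1, +1, +1), 0.034),
    ((-1, +1, -1, +1, -1), 0.027),
    ((+1, +1, -1, -1, +1), 0.041),
    ((-1, -1, +1, +1, +1), 0.030),
    ((+1, -1, +1, -1, -1), 0.029),
    ((-1, +1, +1, -1, +1), 0.038),
    ((+1, +1, +1, +1, -1), 0.036),
]

X = np.array([[1.0, *cell] for cell, _ in tested])   # intercept + main effects
y = np.array([rate for _, rate in tested])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)         # fit main-effects model

# Score every possible combination and report the predicted winner.
grid = list(itertools.product([-1, +1], repeat=len(FACTORS)))
scores = [(float(np.dot([1.0, *cell], coef)), cell) for cell in grid]
best_score, best_cell = max(scores)
chosen = [name for name, level in zip(FACTORS, best_cell) if level == +1]
print(f"predicted best clickthrough rate: {best_score:.1%}")
print("include:", ", ".join(chosen) if chosen else "none of the tested options")
```

The model here is deliberately simple (main effects only, fit by least squares); its purpose is to show why a carefully chosen fraction of test cells can stand in for the full set of possible e-mails.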
What Are You Waiting For?
The one sure-fire way to improve your e-mail program is to start testing. The e-mail channel provides incredible insight into every stage of interaction with your messages. Start with subject-line testing, and you’re sure to catch the testing bug. From there, the options get more sophisticated. By taking this logical approach to identifying the greatest opportunities, determining the factors that impact those opportunities, and selecting the appropriate testing methodology, you will see results that impact your bottom line. Catch the testing bug; the rewards will be worth it!
Morgan Stewart is director of strategic services at ExactTarget, an on-demand, e-mail software solutions provider based in Indianapolis. You can reach him at firstname.lastname@example.org.