5 Important Answers to Your Email Testing Questions
Marketers have been proclaiming the importance of A/B testing for years, but we don't always practice what we preach as deadlines and last-minute edits get in the way of our best-laid marketing plans.
I recently presented the Direct Marketing IQ Brunch & Learn webinar, "7 Email Tests to Run Today (and Tomorrow)." If you had the chance to attend (or watched the webinar on-demand, which is still available), you can probably tell I am very passionate about email testing and could have spoken on the topic for far more than 25 minutes.
Email testing can continuously boost your revenue and improve your knowledge of your audience: two major wins that are hard to ignore, particularly given how easy A/B testing can be.
Although we ran out of time to answer all of the questions live, I've answered as many as I could below. If you still want to chat about email testing, you can tweet at me anytime!
1. What is the minimum test segment quantity to be statistically significant? In my direct mail days, it was 12,500 minimum, but we NEVER have anywhere near those quantities in our email campaigns. They are small and VERY targeted.
For this, I recommend using a segment size calculator and a statistical significance calculator. Determining a proper sample size depends less on a fixed minimum and more on your baseline conversion rate and the minimum percent improvement you'd like to be able to detect.
It's definitely harder to test as your segments get smaller and smaller. I'd recommend finding places to test where you can, and extrapolating those learnings into your smaller sends.
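If you'd rather script that sample-size math than rely on a web calculator, here's a minimal sketch in Python using statsmodels; the 3% baseline conversion rate and 20% minimum detectable lift are placeholder assumptions you'd swap for your own numbers:

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.03                 # hypothetical: 3% of recipients currently convert
lift = 0.20                     # smallest relative improvement worth detecting (20%)
target = baseline * (1 + lift)  # a 3.6% conversion rate for the winning variant

effect = proportion_effectsize(baseline, target)
n_per_group = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,        # 5% chance of a false positive
    power=0.8,         # 80% chance of detecting a real lift
    alternative="two-sided",
)
print(f"You need roughly {n_per_group:.0f} subscribers per variant")

With those assumed numbers, the answer comes out to several thousand subscribers per variant, which is exactly why small, targeted segments are hard to test on their own.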
2. How do you decide who is in the "A" group and who is in the "B" group?
The subscribers that go into each test group should be completely random! If you handpick people for either group, your results will be biased.
The better question to ask yourself is which segment of subscribers you should target in the test overall. For some tests, I like to test on new subscribers who don't have preconceived notions of how my brand's emails look and function, so that I can get unbiased results.
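The mechanics of the random split itself are simple enough to script. Here's a minimal sketch; the addresses and seed value are placeholders:

import random

def split_ab(subscribers, seed=2024):
    """Shuffle subscribers, then split them 50/50 into groups A and B."""
    pool = list(subscribers)
    random.Random(seed).shuffle(pool)  # seeded shuffle keeps the split reproducible
    midpoint = len(pool) // 2
    return pool[:midpoint], pool[midpoint:]

group_a, group_b = split_ab(["ann@example.com", "bob@example.com",
                             "cho@example.com", "dee@example.com"])

Seeding the shuffle means you can re-create the exact same split later, which helps when you need to audit who received which variant.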
3. What crucial item do most email marketers leave off their preflight checklists that they should add immediately, and why?
This is going to sound biased because I work at Litmus now, but I promise it's not. I would have given the same answer before I worked here.
I don't think marketers are thorough enough about testing their designs across the various email clients before they press "send." Too often, we're in a rush to get things out the door, and we assume a template will work this week just because it worked last week. We send ourselves a test, everything looks good in our own inbox, and we move on.
This is a bad habit to fall into! Even if you've tested a template before, you've got to test every time you send. For example, Yahoo! recently started doing crazy things to alignment, which could affect many of your designs. A template that rendered fine last time could now look wonky in Yahoo! without you realizing it. Email providers make changes like that all the time, and they're rarely announced. The only way to make sure your emails look great in every inbox is to test thoroughly before every send.
4. How do you correlate A/B testing results with the actions taken afterward on websites and/or in email campaigns?
Some email service providers and testing tools can tie downstream actions back to test variants automatically. If you don't have that capability, I'd recommend very strict Google Analytics (or whatever analytics platform you use) campaign codes on the links in each variant. For example:
Email A: http://www.litmus.com?utm_campaign=newsletter+version+A
Email B: http://www.litmus.com?utm_campaign=newsletter+version+B
That way, when you are viewing your analytics data, you can tie actions back to specific test variants. (Note that Google Analytics also needs a utm_source value for campaign parameters to register; the sketch below includes one.)
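If you're building those tagged links by hand, one stray typo can fragment your reporting, so it can be worth generating them in code. Here's a minimal sketch; the utm_source and utm_medium values are assumptions you'd replace with your own naming conventions:

from urllib.parse import urlencode, urlsplit, urlunsplit

def tag_link(url, variant, campaign="newsletter"):
    """Append Google Analytics campaign parameters identifying a test variant."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    params = urlencode({
        "utm_source": "email",       # assumed values; match your own scheme
        "utm_medium": "email",
        "utm_campaign": f"{campaign} version {variant}",
    })                               # urlencode turns spaces into '+', as above
    query = f"{query}&{params}" if query else params
    return urlunsplit((scheme, netloc, path, query, fragment))

print(tag_link("http://www.litmus.com", "A"))
print(tag_link("http://www.litmus.com", "B"))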
5. If you change multiple things in one email, how do you know which component is pulling better?
To isolate which component made the difference, you should absolutely change only one variable per test variant. That's the textbook answer, for sure.
To be candid, though, speaking as an email marketer with limited time and resources, it can't always be that academic in real life. In the value proposition test I mentioned in the webinar, for example, we were broadly testing two concepts for value proposition statements (product-focused versus solution-focused), so I was less concerned with which individual tagline made the difference than with which theme won.
Honestly, sometimes you just need to know that "this group of things did better than that group of things" and try to distill a reason why.
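Whichever way you test, before you act on "this group did better," it's worth checking that the gap isn't just noise. Here's a minimal sketch using a two-proportion z-test from statsmodels, with hypothetical conversion counts and send sizes:

from statsmodels.stats.proportion import proportions_ztest

conversions = [120, 145]   # hypothetical conversions for variants A and B
sends = [5000, 5000]       # emails delivered per variant

stat, p_value = proportions_ztest(conversions, sends)
print(f"p = {p_value:.3f}")  # a value below 0.05 suggests the lift is likely real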
Matt Byrd is the senior email marketing manager at Cambridge, Mass.-based Litmus. Reach him at email@example.com.