The Top 3 A/B Testing Challenges That Prevent Marketers From Getting Big Lifts
A/B testing can generate impressive results because it allows marketers to discover what really works. And the results can be clearly measured and communicated to clients, business leaders, or partners – whether it’s an ecommerce test that generates 36% more cart completions or a healthcare marketing test that produces 638% more leads.
But those lifts don’t come easy. In working with companies to optimize conversion using A/B testing in MECLABS Institute Research Partnerships, we’ve noticed a few commonalities in the challenges companies face – whether big or small, B2B or B2C, ecommerce or lead gen.
Challenge 1: Knowing What to Test
You can’t just put any two landing pages into a splitter and expect a lift. Marketers quickly learn that some small changes that are appealing because they are easy to implement just aren’t impactful enough to generate a lift. Or big changes might result in a loss (that’s not entirely bad, see Challenge 3).
To know what to test, first you must know where to test. This is where your customer data can be so powerful. Run a funnel metrics analysis to pinpoint where your sales funnel leaks. Where are customers dropping out of the funnel?
This is where you run your A/B tests for the opportunity to have the biggest impact.
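A funnel metrics analysis can be as simple as comparing counts at each stage and flagging the step where the continuation rate is lowest. Here's a minimal sketch in Python; the stage names and counts are hypothetical, standing in for numbers you'd pull from your own analytics platform:

```python
# Hypothetical stage counts from an ecommerce funnel (illustrative only).
funnel = [
    ("Visits", 50_000),
    ("Product page views", 20_000),
    ("Cart additions", 6_000),
    ("Checkout starts", 3_000),
    ("Completed orders", 1_200),
]

# For each step, compute the share of customers who continue,
# and track the step with the worst continuation rate (the biggest leak).
worst_step, worst_rate = None, 1.0
for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    rate = next_count / count
    print(f"{stage} -> {next_stage}: {rate:.0%} continue, {1 - rate:.0%} drop off")
    if rate < worst_rate:
        worst_step, worst_rate = (stage, next_stage), rate

print("Biggest leak:", worst_step)
```

The step this flags is where an A/B test has the most room to move the overall number, since even a modest relative improvement at the leakiest stage compounds through the rest of the funnel.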
Once you’ve identified the where, ask what is keeping customers from taking the next step in your funnel. Your answers shouldn’t be definitive statements; they are educated guesses (hypotheses). Here’s a framework that might help with those “guesses”: the MECLABS Conversion Sequence Heuristic.

The heuristic is not a formula to solve but rather a thought tool, and it gives you a language with which to discuss test ideas. For example, Aetna’s HealthSpire ran a test in which it decided to further emphasize the value of the conversion action (contacting call center agents) and reduce anxiety at the expense of increasing friction. It was a challenge to their previous approach, which is why they tested it and didn’t risk just implementing it straight away.
The result: that 638% increase in leads mentioned in the beginning of the article.
Challenge 2: Running Valid Tests
Marketers will discover Challenge 1 pretty quickly when they aren’t generating results or valuable customer insights from their testing. Challenge 2 is pernicious, though. It could cause marketers to think they’ve discovered a way to increase conversion when they really haven’t. Or cause them to overlook a conversion increase.
A/B testing is a successful tactic because of its predictive power. For test results to truly have that power, you have to make sure they reflect customer behavior and that the change you made in the test is what actually caused the results. To achieve that, you have to set up and monitor the experiment in a scientific fashion and avoid validity threats like:
- Instrumentation effects: For example, 10,000 emails don’t get delivered because of a server malfunction, or a piece of hidden code causes an abnormally long load time on one treatment.
- History effects: For example, unexpected publicity around the product at the exact time you’re running the test, a marketing campaign that skews demand temporarily in one direction, running a test for only 20 hours on a Tuesday when weekend traffic behaves very differently, or running a test on your ecommerce site with highly motivated December holiday traffic and expecting to get the same results in January.
- Selection effects: For example, another division runs a pay-per-click ad that directs traffic to your email’s landing page at the same time you’re running your test or customers self-select which treatment they see.
- Sampling distortion effects: This is a failure to collect a sufficient sample size to overcome random chance. For example, determining that a test is valid based on 100 responses.
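To see why 100 responses is rarely enough, you can estimate the sample size needed before the test starts. Here's a hedged sketch using the standard two-proportion z-test approximation; the baseline rate and lift are illustrative assumptions, not figures from the article:

```python
from statistics import NormalDist

def sample_size_per_arm(p_base, lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per arm to detect a relative lift
    in conversion rate with a two-sided two-proportion z-test."""
    p1 = p_base
    p2 = p_base * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Detecting a 10% relative lift on a 3% baseline conversion rate
# takes tens of thousands of visitors per arm -- far more than
# the 100 responses in the example above.
print(sample_size_per_arm(0.03, 0.10))
```

The smaller the lift you want to detect (or the lower your baseline conversion rate), the larger the sample you need, which is why small tweaks on low-traffic pages so often produce "wins" that are really just random chance.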
Challenge 3: Interpreting Test Results
Let’s say you handle Challenges 1 and 2 correctly and get a huge result. There’s still a fundamental question that needs to be answered – why? Why did customers behave that way? What did you learn about the customer, and how can you use this knowledge?
Interpreting test results shouldn’t be an activity confined to after the test. In fact, the key to interpreting test results happens before the test is even run. It comes in setting up the hypothesis with the goal of thoroughly understanding customer thought processes at the key dropoff points in your funnel so you can increase perceived value and decrease perceived cost.
By understanding what prospects are thinking at each stage of the buying process you will be able to better match their motivation and move them through the sales funnel faster.
The goal shouldn’t be simply to increase a KPI (key performance indicator). It should be to understand how you can better serve the customer with your marketing messaging, sales process, and even products. And through that understanding, improve results.
That way, even if the treatment in your test ends up producing fewer conversions, it isn’t really a loss, because you’ve gained customer wisdom. If data is the currency of the Internet age, customer wisdom is the gold standard – the core piece of data that all marketing should be directly linked to. And that’s ultimately how you achieve long-term lifts with your A/B testing.
Get an Excel tool to diagnose the problems in your funnel and determine where to test – the free MECLABS Conversion Analysis Tool.
Daniel Burstein is the Senior Director, Content and Marketing at MECLABS Institute. Daniel oversees all content and marketing coming from the MarketingExperiments and MarketingSherpa brands while helping to shape the marketing direction for MECLABS — digging for actionable discoveries while serving as an advocate for the audience.