E-commerce Link: Commit to True Online Testing
Having squabbles in the conference room about which checkout button is best? Or which product image to feature? Or is the debate about two different offers? Maybe it’s the lifestyle image, or possibly the header copy on your landing page?
Instead of wasting time and resources guessing about things that used to seem subjective, just commit your campaigns to true online testing and optimization. When properly conducted, A/B testing is a proven method for increasing conversion.
In my experience with our clients, I hear the same two excuses that keep them from testing. The first is cost; the client just doesn’t have the money. The second is education; many just don’t know where to start.
If you’ve been holding off on testing due to cost, take note: last October Google launched its Website Optimizer, a Web-based service similar to Optimost and Offermatica that assists marketers in conducting efficient A/B and multivariate testing. Since this tool is free, the cost excuse no longer holds water.
Knowing how to start testing is a legitimate concern. We have identified more than 1,100 factors that contribute to the successful completion of a single conversion funnel. Multiply this by the number of campaigns, offers, customer motivators and visitor types, and the volume of factors easily climbs into the tens of thousands. And when you consider that each factor is a potential variant to test, just getting started can seem a bit daunting.
The good news is that not all factors are equal in their impact on conversion, and with a little insight and planning you can maximize your testing.
How to Test
Now, let’s talk about some practical tips you can use to see results sooner rather than later.
First, it helps to know a little about the science behind testing. Let’s say you want to determine whether Nolan Ryan is a better baseball player than Homer Simpson. How should you proceed? First, you might set a metric for what you mean by a “better” baseball player. You can measure evidence in concrete ways, noting the two subjects’ different batting averages or RBIs. You’re searching for a formula that will lead you to a correct decision. Such a formula is called a “fitness function.”
By virtually all such measures, Ryan is the better candidate. If you were choosing a player for your team, you’d certainly pick Ryan.
But think on that a moment. The reason you feel confident signing Ryan stems from your familiarity with the metric and fitness function that are implicitly applied when you speak of baseball. Your decision might be quite different if you want to pick a donut quality assurance taster. Suddenly, Simpson is back in the running.
Even then, your confidence may be based on your understanding that “tastes better” is the donut metric and that Simpson is an acknowledged expert in donut consumption. But what is the fitness function? That is, what does it mean to “taste better”? Are you solely relying on Simpson’s reputation as an expert? His expertise is based on consumption quantity, so perhaps you suspect he enjoys all donuts equally. In other words, it’s quite possible you don’t have any knowledge at all of what you might call the “donut taste” fitness function.
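The metric-plus-fitness-function idea can be sketched in a few lines of code. Everything here is hypothetical for illustration — the player names, the stats and the choice of batting average as the fitness function are stand-ins, not real data:

```python
# A minimal sketch of the metric / fitness-function idea.
# All names and numbers below are hypothetical.

def batting_fitness(player):
    """One possible fitness function for the 'better baseball player'
    metric: a higher batting average wins."""
    return player["hits"] / player["at_bats"]

def pick_better(a, b, fitness):
    """Generic decision rule: apply one agreed-upon fitness
    function to both candidates and keep the higher scorer."""
    return a if fitness(a) >= fitness(b) else b

ryan = {"name": "Ryan", "hits": 300, "at_bats": 1000}
simpson = {"name": "Simpson", "hits": 50, "at_bats": 1000}

winner = pick_better(ryan, simpson, batting_fitness)
print(winner["name"])  # prints "Ryan" under this fitness function
```

Swap in a "donut taste" fitness function and the same decision rule could just as easily pick Simpson — the point is that the choice of fitness function, not the comparison machinery, determines the winner.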
Interestingly, marketers are asked to make more important decisions with less information and an undetermined fitness function.
More formally, the aforementioned process is known as A/B testing, and it has three steps:
1. Identify a metric. What will be contrasted?
2. Agree on a fitness function that describes that metric. How will we measure and contrast the differences?
3. Optimize by comparing exactly two tested solutions that differ in only one respect, and keeping the one that better satisfies the fitness function.
A typical problem for A/B testing might be: “Do more people buy when I use a red buy-it-now button or when I use a blue buy-it-now button?” The data for this test will come from your Web analytics. Once you know which candidate converts better, you use the winner and discard the other. You might then test again, comparing the winner to some other variant you have in mind.
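The button test above reduces to a simple comparison of two conversion rates pulled from your analytics. The visitor and purchase counts below are made up for illustration; your own analytics package supplies the real numbers:

```python
# Hypothetical analytics data for the red vs. blue buy-it-now button test.

def conversion_rate(purchases, visitors):
    """Conversion rate = completed purchases / unique visitors."""
    return purchases / visitors

red = conversion_rate(purchases=120, visitors=4000)   # variant A
blue = conversion_rate(purchases=150, visitors=4000)  # variant B

# Keep the winner, discard the loser, then test the winner
# against the next variant you have in mind.
winner = "blue" if blue > red else "red"
print(f"red: {red:.2%}, blue: {blue:.2%} -> keep the {winner} button")
```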
By examining incremental improvements to conversion at the page level, you should be able to measurably move higher on your site’s fitness profile.
While A/B testing compares two elements to determine which element converts better, multivariate testing considers several elements at once. When properly conducted, multivariate testing is a legitimate test methodology, but for many online marketers it is nothing more than throwing things at a wall to see what sticks. The problem is that when testing several elements at once, the inexperienced tester has no certain means of determining which element is the contributing factor, making it impossible to remove an offending element or repeat a successful one. For this reason, we recommend A/B testing to start.
Which Platform and Goals?
The next decision is selecting the testing platform you will use. Of course, if cost is a concern or if you just want to dip your toe in the testing waters, Google Website Optimizer is the way to go. In our experience, Offermatica and Optimost are also capable platforms, each charging moderate fees. The paid platforms provide support for their technology, while Google outsources support to its partners.
Next, you must define your conversion goals. Do you want to:
• Increase sales?
• Generate leads?
• Boost product-specific purchases?
• Generate qualified traffic for on-site purchases?
• Generate traffic for a self-service or subscription service?
• Increase online service sign-ups or event registrations?
• Compare product image conversion?
What to Test
If you’re a typical marketer, you likely have an endless number of pages you’d like to optimize. We recommend testing your more problematic pages first. To narrow your task list, start by testing the following:
• top five highest-bounce-rate pages
• top five highest-exit-rate pages
• top five lowest-time-on-page pages
• top five key pages (e.g., checkout, cart, registration, top product, about us)
Next, determine the element you want to test on each page. Start by asking the following three questions:
1. What visitor profiles are clicking on this banner or landing on this page?
2. What action do you want them to take? If it’s a banner ad, the goal is a clickthrough; if it’s a landing page, the action may be “Add to Cart,” “Learn More” or “Subscribe.”
3. What information does this profile need to be able to take that action?
These questions help you zero in on what element on each page or ad might have the most impact on conversion.
To give you a head start, here are some of the most effective elements to test:
• Call-to-action buttons, including language, shape and size. A successful call to action incorporates several key elements. First, it should stand out from every other action on the page. It can employ a different color, and it should be larger and have more prominent placement than other buttons on the page. Also, an effective call to action uses an imperative verb with an implied benefit, such as “Add to Cart: You can always remove it later.”
• Point of action assurances and trust factors. These include refund and return policy language; availability of “About Us” information; shop-with-confidence language; phone number/offline help availability; and secure transaction assurances and placement.
• Product categorization. This includes both category names and product names. Reorganize categories based on external or internal search popularity, and use effective categories within the active window and navigation to quickly orient visitors to the value you have to offer them. This persuasive copy should be above the fold and should guide visitors to the information for which they are searching.
• Messages. Test banner messages, header messages, headlines and messages contained inside an image.
• Images. This includes both product images and lifestyle images.
• Color. Only test color if you are convinced it is a barrier to your persuasive momentum. However, always avoid light text on dark backgrounds in the active window.
I encourage you to think of testing as a series of steps in the ongoing effort of optimization. Allow each testing period enough time to gather sufficient data to gain real insight into your A/B splits. Identify the number of unique visitors and/or conversions needed to establish good data, then determine how long it will take you to generate that traffic. This number will vary from business to business, but your decision should rest on enough data to confidently declare a “winner.” Many A/B testing platforms will calculate this for you.
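For the curious, here is a rough sketch of the kind of calculation a testing platform performs behind the scenes to decide whether a winner can be declared: a two-proportion z-test. The traffic and conversion numbers are hypothetical, and in practice you would let your platform (or a statistics library) do this for you:

```python
import math

# A sketch of the statistics behind "declaring a winner" in an A/B
# split: a two-proportion z-test. All numbers below are hypothetical.

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score comparing conversion rates
    conv_a/n_a (variant A) and conv_b/n_b (variant B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under "no difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant A: 120 conversions from 4,000 visitors (3.0%)
# Variant B: 160 conversions from 4,000 visitors (4.0%)
z = z_score(conv_a=120, n_a=4000, conv_b=160, n_b=4000)

# |z| > 1.96 corresponds to roughly 95% confidence that the
# difference is real rather than random noise.
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")
```

If the z-score falls short of the threshold, the practical answer is usually "keep the test running and collect more traffic," which is exactly why estimating the required visitor count up front matters.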
The more you test, the more you develop a feel for what works and doesn’t work for your customers. Even small changes can have a dramatic impact on your conversion potential.
So, don’t be intimidated. Just get started! You won’t regret it.
John Quarto-von Tirada, chief thinking officer, Future Now, contributed to this column.
Jeffrey Eisenberg is co-founder and CEO of Future Now Inc., a New York-based consultancy that specializes in online conversion strategies. He can be reached at email@example.com.