How to Start Optimizing Your Website Today
- What elements of the website led to the most "Add to Cart" clicks, followed by successful order completion pages (e.g., "Thank you for your order")?
- Which combination of product information such as graphics, descriptions, layout and color increased average order value?
- What combination of factors relating to site search most successfully brought users to pages from which they ultimately purchased products?
Also consider testing coupons and promotions, including the following:
- free shipping and/or financing;
- credibility factors such as logos denoting secure credit card processing; and
- the availability, placement, and look and feel of customer reviews and testimonials (do they make a difference in purchase decisions?).
These are just a few things to think about. You’ll want to start with the factors you believe are most important to your KPIs, then decide what experimental design is best. With A/B testing you test one factor (e.g., a call-to-action button) against one or more variations to see which is most persuasive. While A/B testing allows you to test just one factor at a time, multivariate testing enables you to test multiple factors simultaneously. Evaluating the impact of combinations of factors and variations often reveals significant interaction effects that can have a dramatic impact on your conversion goal.
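To see how an A/B test "winner" is actually decided, here is a minimal sketch of a two-proportion z-test in Python using only the standard library. The function name and the visitor/conversion numbers are hypothetical, chosen purely for illustration; real testing tools perform this kind of comparison for you.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_a / n_a: conversions and visitors for the control (A),
    conv_b / n_b: conversions and visitors for the variation (B).
    Returns (z_statistic, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value from the normal CDF
    return z, p_value

# Hypothetical results: 200/5000 add-to-cart clicks for the control
# versus 260/5000 for the variation.
z, p = two_proportion_ztest(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below your chosen significance threshold (commonly 0.05) suggests the variation's lift is unlikely to be chance; a multivariate test extends the same idea across every combination of factors.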
There are five common mistakes that are easy to make when running multivariate tests:
1. Improper factoring. This is caused by poor or no isolation of individual test changes — e.g., changing a headline's text, font color and size all at the same time as an A/B test instead of a multivariate test. Why is this problematic? Because it's difficult or impossible to isolate the impact of each individual change. Was it the font color and/or the text that caused the visitor to behave differently?
2. Running a test for too short or too long a time frame. Stopping a test early because you think you have a winner increases the risk of statistically invalid data. It may also introduce time bias from seasonal events and conversion cycles. Conversely, running a test too long wastes time waiting for marginal results and consumes traffic that could be applied toward another test.
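One way to avoid both mistakes in point 2 is to estimate the required sample size before launching, then run until you reach it. The sketch below uses the standard two-proportion sample-size formula; the function name, default z-scores (1.96 for 95% confidence, 0.84 for 80% power), and the example rates are illustrative assumptions, not values from any specific tool.

```python
import math

def sample_size_per_variant(baseline_rate, min_lift, alpha_z=1.96, power_z=0.84):
    """Rough visitors needed per variant for a two-proportion test.

    baseline_rate: current conversion rate (e.g., 0.04 for 4%),
    min_lift:      smallest relative lift worth detecting (e.g., 0.10 for +10%).
    Defaults correspond to 95% confidence and 80% statistical power.
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (alpha_z + power_z) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Hypothetical scenario: 4% baseline conversion, detect a 10% relative lift.
n = sample_size_per_variant(0.04, 0.10)
print(n, "visitors per variant")
```

Note how the required sample grows sharply as the lift you want to detect shrinks; dividing that number by your daily traffic per variant gives a realistic test duration up front.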