At Listrak, we believe in helping you beat your industry benchmarks. And you do that by carefully monitoring your results to first set your own benchmarks and then by split-testing everything to optimize campaigns for performance.
Split testing email campaigns has been around almost as long as email itself. Marketers would send the same email with two different subject lines to see which one earned the most opens, or they'd send two versions of the same message with different images or calls-to-action to see which one drew the most clicks. And they learned important lessons from these tests, such as keeping CTAs above the fold and putting key messages at the beginning of the subject line. But, like every other aspect of your email campaigns, if you haven't updated your A/B split tests lately, it's time to take another look. We're in a scrolling society, so "above the fold" matters less than it once did. And subject line best practices have changed a great deal now that more people open emails on mobile devices. What was once a tried-and-true best practice isn't necessarily what works best today. So test to find out what is.
The goal of your split tests shouldn't be short-sighted. You're not just trying to figure out what works best for a single email campaign; you should be gathering customer insights you can use to drive business decisions and inform future campaigns. That means email A/B split tests should be part of your campaign development process, not an afterthought added moments before you hit the send button. And you must start with a strong hypothesis. Put aside your feelings and approach it scientifically.
Forming a Hypothesis
- Standard Hypothesis = If [variable], then [result]. Include your rationale.
- Weak Hypothesis = If we send messages over the weekend, then we'll sell more.
- Strong Hypothesis = If we add an email deployment on Sunday, then we'll sell more merchandise. Rationale: Our average email conversion rate is 7.6%. If we send more messages each week, we have more chances of converting customers.
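To sanity-check a hypothesis like the one above, it helps to put rough numbers behind the rationale. The sketch below projects the impact of an added Sunday send using the 7.6% conversion rate from the example; the list size and average order value are hypothetical placeholders, not figures from this article:

```python
# Back-of-the-envelope projection for the Sunday-send hypothesis.
# The 7.6% conversion rate comes from the example above; the list
# size and average order value are hypothetical placeholders.
list_size = 50_000        # hypothetical subscriber count
conversion_rate = 0.076   # average email conversion rate (from the hypothesis)
avg_order_value = 40.00   # hypothetical average order value, in dollars

# One extra deployment gives one extra chance to convert each subscriber
extra_conversions = list_size * conversion_rate
extra_revenue = extra_conversions * avg_order_value

print(f"Projected extra conversions per Sunday send: {extra_conversions:,.0f}")
print(f"Projected extra revenue per Sunday send: ${extra_revenue:,.2f}")
```

Numbers like these make the hypothesis concrete enough to judge whether the test is worth running, and they give you a baseline to compare actual results against.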
The more specific you are, the stronger your email A/B split tests will be. Ask yourself the following questions before deciding upon your test criteria:
- Specifically, what are you trying to learn?
- What are your assumptions?
- What variants will you test?
- Are the changes big enough to truly make an impact?
- Will the tests lead to improved results?
- How will you measure the results?
- Can you re-use the results to inform future campaigns?
- Will the results confirm or refute your hypothesis?
Remember, even if your initial assumptions are proven to be incorrect, the test is successful as long as you learned something that will go on to improve campaign performance.
Split Testing Mistakes to Avoid
Setting up email A/B split tests is easy, and you'll have initial results in a matter of hours. But the strategy behind split tests is complex and should be well thought out, or you could end up with a lot of data but very little context. Avoid these pitfalls:
- Don't change important branding elements – if your brand uses specific colors, fonts, messaging or design elements, going off-brand is a mistake. Only test elements that make sense for your brand.
- Make sure your sample is large enough for statistical significance – if you aren't running a true A/B split where half of your list receives one version and the other half receives the second, your sample group must be large enough to produce statistically meaningful results.
- Don't call tests too early – Clients are always curious how long to run tests. While email provides almost instant results, calling tests after just a few hours can produce misleading results. For accuracy, allow tests to run at least 24 hours. Similarly, a single test simply isn't enough data; you must continue to run tests and measure the results over time.
- Measure the right results – make sure that the metrics you use to determine the test winner match what you're actually testing. If you're testing two subject lines, look at the open rate. If you're testing a call-to-action, monitor click-throughs. If you're testing offers or send times, measure the conversion rates. Different variables only influence certain behaviors. With that being said, it is always a good idea to look at the conversion rates of the tests. We've found circumstances where the email with the highest open or click-through rate had a lower conversion rate. While the variable being tested might not have influenced the purchases, you can learn a lot about the customer segment through this metric.
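On the statistical-significance point above, a standard way to check whether version A truly beat version B is a two-proportion z-test on the metric you're measuring (opens, clicks, or conversions). This is a minimal sketch using only the Python standard library; the send and open counts are hypothetical:

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two proportions
    (e.g. opens out of sends for subject lines A and B)."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: subject line A got 1,150 opens out of 5,000 sends,
# subject line B got 1,000 opens out of 5,000 sends.
z, p = two_proportion_z_test(1150, 5000, 1000, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("The difference is statistically significant at the 95% level.")
else:
    print("Not significant -- keep the test running or use a larger sample.")
```

If the p-value stays above your threshold, the honest conclusion is that the test hasn't found a winner yet, not that the variant with the slightly higher rate won.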
Email Split Testing Ideas
Testing subject lines and CTAs is a great place to start, but you can test nearly every aspect of your email campaign. Here are some additional ideas to try:
- Personalization: Wondering whether it's better to recommend your top-selling or top-trending products, or merchandise personally recommended for each shopper based on his or her browse and purchase history? Test it!
- From name: You most likely – and rightly – use your company as the From Name in your campaigns. But certain campaigns, such as shopping cart remarketing or transactional messages, could work better coming from your customer service department. We've seen variations such as "Company Name – Customer Service", "firstname.lastname@example.org" or simply "Customer Care" – the key is to test to see if you get a boost.
- Responsive design: More than half of all emails are opened on mobile devices and more and more customers are shopping on tablets and smartphones, so emails should be mobile-optimized. Even if your site isn't responsive yet, try testing responsive emails to see if the boost in conversions provides a business case for building a responsive website.
- Timing: The only way to know for sure what day and what time of day you should send emails is to test. You aren't limited to sending emails Monday through Friday between 8 am and 5 pm. Set tests to deploy on nights and weekends to see what happens!
- Number of emails: Many retailers are sending 5-7 – or more! – emails per week. If you are still sending only one or two, run a test where you send emails daily for a few weeks. You may be surprised at the additional revenue gained when you increase your deployment schedule.
- Bar codes: Email drives sales not just online but in-store as well. Including a bar code in a coupon-style image that customers can print and bring with them could drive additional traffic to your stores. It's definitely worth testing to find out.
- Number of products: Customers will scroll through emails that contain row after row of product images, especially if those products are personal product recommendations based on browse or purchase history. But testing these message templates will help you figure out the optimal number of products. That way, you'll know for sure how many products drive sales and where the cutoff point is.
- Navigation: The majority of retail emails still include site navigation at the top of the message, which can certainly help drive clicks but also takes up prime email real estate while distracting from the main message. Try running a test to see if it makes sense to keep the navigation bar in the message or if using a simpler version – or no navigation at all – makes more sense.
- Ratings and reviews: Social proof is a great selling tool and it can really boost sales when included in messages. If it makes sense for your brand, try testing the concept to see if it leads to an increase in revenue.
Questions about testing? Or, did you run a great test and want to share the results? We'd love to hear from you!