When you think about the performance of your email campaigns, do you simply aim to meet industry benchmarks? Or are you striving to blow those numbers out of the water? We hope you are a marketer in the second group -- the one that is willing to take bold strides and push those benchmarks to the next level.

Of course, it’s always easier said than done to develop an email campaign that actually beats industry benchmarks. That’s why we're here to discuss an important step you might be skipping when developing your email campaigns: split-testing.

Let’s discuss how carefully monitoring your results and implementing split-testing on everything you do can help optimize campaigns for improved performance.

About Split-Testing Email Campaigns

Split-testing email campaigns has been around almost as long as email itself. Marketers would send out the same email with two different subject lines to see which one got the most opens, or they'd send two versions of the same message with different images or calls-to-action to see which one got the most clicks. They've learned important lessons from these tests, such as keeping CTAs above the fold and putting key messages at the beginning of the subject line.

Here's why keeping images and important information above the fold is so important on desktop devices: viewing time drops off drastically once readers hit the page fold. So why wouldn't you put the most important and eye-catching information at the top of the email?

But, like every other email campaign aspect, if you haven't updated your A/B split tests lately, it's time to take another look. We’re in a scrolling society, so above the fold isn’t really a factor anymore. And subject line best practices have changed a great deal with more people opening emails on mobile devices. What was once a tried and true best practice isn’t necessarily still what works best. So test to find out what is and what works for your customer base.

Mobile email open rates are quickly overtaking desktop views. People are using their phones more and more, reading emails everywhere and on the go. Above the fold is still important, but is it as important as it once was? Image courtesy of SuperOffice.

Set Goals to Measure Success

The goal of your split-tests shouldn't be shortsighted. You're not just trying to figure out what works best for a single email campaign. You should be trying to gather customer insights that you can use to drive business decisions and inform future campaigns.

That means that email A/B split tests should be part of your campaign development process, not an afterthought you add moments before you hit the send button. And you must start with a strong hypothesis. Put aside your feelings and approach it scientifically.

Forming Your Hypothesis

  • Standard Hypothesis = If variable, then result. Include your rationale.
  • Weak Hypothesis = If we send messages over the weekend, then we'll sell more.
  • Strong Hypothesis = If we add an email deployment on Sunday, then we'll sell more merchandise, because more sends mean more chances to convert.

For example, if your average email conversion rate is 7.6%, and you send more messages each week, you have more chances of converting customers. This is a simple hypothesis, but it can help you look at the content logically and take the emotional aspect out of the process.
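To see how a hypothesis like this translates into a measurable prediction, here's a minimal back-of-the-envelope sketch in Python. The 7.6% conversion rate comes from the example above; the list size of 10,000 and the send counts are hypothetical placeholders.

```python
# Hypothetical figures: 7.6% conversion rate (from the example above),
# a list of 10,000 subscribers, and one extra send per week.
conversion_rate = 0.076
list_size = 10_000

conversions_per_send = conversion_rate * list_size   # expected conversions per send
baseline_weekly = conversions_per_send * 1           # current schedule: 1 send/week
hypothesis_weekly = conversions_per_send * 2         # hypothesis: add a Sunday send
expected_lift = hypothesis_weekly - baseline_weekly  # predicted extra conversions

print(conversions_per_send, expected_lift)
```

Note that this naively assumes the conversion rate holds steady for the extra send -- which is exactly the assumption your split-test exists to check.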

The more specific you are, the stronger your email A/B split tests will be. Ask yourself the following questions before deciding upon your test criteria:

  • Specifically, what are you trying to learn?
  • What are your assumptions?
  • What variants will you test?
  • Are the changes big enough to truly make an impact?
  • Will the tests lead to improved results?
  • How will you measure the results?
  • Can you re-use the results to inform future campaigns?
  • Will the results answer your hypothesis?

Remember, even if your initial assumptions are proven to be incorrect, the test is successful as long as you learned something that will go on to improve campaign performance.

The good thing about split-testing is that even if your initial hypothesis was incorrect, there are many variables you can test for better results. Here are a number of the different areas marketers actively test on their email campaigns -- the possibilities are endless! Image courtesy of OptinMonster.

Split-Testing Mistakes to Avoid

Setting up email A/B split-testing is easy, and you'll have initial results in a matter of hours. But the strategy behind the split-tests is complex and should be well thought out, or else you could end up with a lot of data but very little context. Avoid these pitfalls:

  • Don't change important branding elements – if your brand uses specific colors, fonts, messaging, or other signature elements, going off-brand is a mistake. Only test elements that make sense.
  • Make sure your sample group has statistical significance – if you aren't doing a true A/B split-test where half of your list receives one version and the other half receives the second version, your sample group must be large enough to provide definitive results.
  • Don't call tests too early – Clients are always curious how long to run tests. While email provides almost instant results, calling tests after just a few hours could yield misleading results. For accuracy, allow tests to run at least 24 hours. Similarly, a single test simply isn't enough data; you must continue to run tests and measure the results over time.
  • Measure the right results – make sure that the metrics you use to determine the test winner match what you're actually testing. If you're testing two subject lines, look at the open rate. If you're testing a call-to-action, monitor click-throughs. If you're testing offers or send times, measure the conversion rates. Different variables only influence certain behaviors. With that being said, it is always a good idea to look at the conversion rates of the tests. We've found circumstances where the email with the highest open or click-through rate had a lower conversion rate. While the variable being tested might not have influenced the purchases, you can learn a lot about the customer segment through this metric.
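To make the statistical-significance point concrete, here's a minimal sketch of a two-proportion z-test in plain Python (standard library only). The subscriber counts and open numbers below are hypothetical; in practice, many email platforms calculate significance for you, but the underlying math looks like this.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z-statistic comparing two proportions, e.g. the open rates of two versions."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis that both versions perform the same.
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    standard_error = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / standard_error

# Hypothetical test: subject line A was opened 1,200 of 5,000 times,
# subject line B was opened 1,050 of 5,000 times.
z = two_proportion_z(1200, 5000, 1050, 5000)
significant = abs(z) > 1.96  # roughly a 95% confidence threshold

print(round(z, 2), significant)
```

If `significant` comes back False, the difference between the two versions could easily be noise, and you should keep the test running or use a larger sample before declaring a winner.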

This is a great example of what split-testing looks like. In this scenario, two different email subject lines are tested against each other, and the winning version is sent to the rest of the group.

Email Split Testing Ideas

Testing subject lines and CTAs is a great place to start, but you can test nearly every aspect of your email campaign. Here are some additional ideas to try:

  • Personalization: Wondering if it's better to recommend your top-selling or top-trending products, or merchandise that is personally recommended for each shopper based on his or her browse and purchase history? Test it!
  • From name: You most likely – and rightly – use your company name as the From Name in your campaigns. But certain campaigns, such as shopping cart remarketing or transactional messages, could work better coming from your customer service department. We've seen variations such as "Company Name – Customer Service", "customer_service@company.com" or simply "Customer Care" – the key is to test to see if you get a boost.
  • Responsive design: More than half of all emails are opened on mobile devices and more and more customers are shopping on tablets and smartphones, so emails should be mobile-optimized. Even if your site isn't responsive yet, try testing responsive emails to see if the boost in conversions provides a business case for building a responsive website.
  • Timing: The only way to know for sure what day and what time of day you should send emails is to test. You aren't limited to sending emails Monday through Friday between 8 am and 5 pm. Set tests to deploy on nights and weekends to see what happens!
  • Number of emails: Many retailers are sending 5-7 – or more! – emails per week. If you are still sending only one or two, run a test where you send emails daily for a few weeks. You may be surprised at the additional revenue gained when you increase your deployment schedule.
  • Bar codes: Email drives sales, not just online but in-store as well. Including a bar code and image that looks like a coupon that customers can print and take into stores with them could drive additional traffic to your stores. It's definitely worth testing to find out.
  • Number of products: Customers will scroll through emails that contain row after row of product images, especially if those products are personal product recommendations based on browse or purchase history. But testing these message templates will help you figure out the optimal number of products. That way, you'll know for sure how many products drive sales and where the cutoff point is.
  • Navigation: The majority of retail emails still include site navigation at the top of the message, which can certainly help drive clicks but also takes up prime email real estate while distracting from the main message. Try running a test to see if it makes sense to keep the navigation bar in the message or if using a simpler version – or no navigation at all – makes more sense.
  • Ratings and reviews: Social proof is a great selling tool and it can really boost sales when included in messages. If it makes sense for your brand, try testing the concept to see if it leads to an increase in revenue.

A/B testing allows you to tweak and change your email campaigns, which in turn improves your clicks, opens, and conversion rates. Without testing and changing your methods, it's very likely your campaign stats will look like the bottom red line -- nearly flat from month to month. Image courtesy of Optimizely.

Questions about testing? Or, did you run a great test and want to share the results? We'd love to hear from you!