How to test your email campaigns – an introduction to A/B testing


Testing your email marketing campaigns regularly is vital to improving your results, yet according to a recent survey by eConsultancy, only 32% of marketers currently test regularly.

With regular testing you can eke every last bit of performance out of your campaigns, by finding out the best ways to word your offers, the best places to position your calls to action, the best format for your subject lines and much more.

It’s easy to test your campaigns but the process can seem daunting if you’ve not done it before, so we’ve put together this quick guide to getting started with testing your email campaigns.

The type of testing we’ll be looking at is usually called split-testing or A/B testing. This is essentially sending two (or more) versions of your email to segments of your database and then measuring the results to see which version performed best.


What to test:

The two main elements you can test are the subject line of your email and the content.

The subject line mainly influences the open rate of your email campaign, whereas the content influences the actions taken during and after reading.

Testing the subject line is simple as all you do is alter the text.

When testing content you have far more options – you could change the order of articles in the email, the wording of your call to action, the colour or size of your headlines – anything you like.

Whichever element you decide to test, it’s important that you pick just ONE thing to test each time, otherwise you can’t be sure which change caused the effect.

For example if you decide to test differently worded calls to action, say “click here for your free trial” and “get your free test account today”, then every other element of the campaign (subject line, other content, size and placing of the call to action) should stay the same, so that you know that any difference is caused solely by the change in wording.


How to test:

We’ll focus here on a simple 2-way (A/B) test, although you can test up to 5 variations at once (depending on your database size, which we’ll come to in a moment).

Once you have decided on what to test you need to create the different versions of your email campaign. The easiest way to do this is to create your first version (campaign A) and then duplicate it and make the change that you want to test, saving this copy as campaign B.

Now that you have your test versions ready, you need to decide how you will pick the winning campaign. When testing subject lines, you should generally stick to open rate as your measure; with content tests, link click activity is the best indicator to use.
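To make this concrete, here is a rough sketch in Python of comparing two test versions on these metrics. The function names and all the numbers are made up for illustration; real figures would come from your email tool's reports.

```python
def open_rate(sent, opens):
    """Fraction of delivered emails that were opened (subject-line tests)."""
    return opens / sent

def click_rate(sent, clicks):
    """Fraction of delivered emails whose links were clicked (content tests)."""
    return clicks / sent

# Subject-line test: compare open rates of the two versions.
rate_a = open_rate(sent=350, opens=84)   # 24%
rate_b = open_rate(sent=350, opens=105)  # 30%
winner = "A" if rate_a > rate_b else "B"
print(f"A: {rate_a:.0%}  B: {rate_b:.0%}  winner: {winner}")
```

For a content test you would compare `click_rate` instead, keeping everything else the same.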

The final decision is whether you want to run the test across your entire database, or send your test campaigns to a small segment of your lists and then send out the best performer to the remaining subscribers. If you have a sufficiently large list then the latter option is generally best, as you could see an overall jump in performance by maximising the number of recipients of your best campaign. Remember though to leave enough time between sending your test and choosing the winner (we suggest 24 hours).
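The test-segment approach above can be sketched in a few lines of Python: carve two equal random segments off the list for versions A and B, and keep the remainder aside for the winner. The function name, list and segment size are illustrative assumptions, not part of any real tool.

```python
import random

def split_for_test(subscribers, segment_size, seed=None):
    """Randomly carve two equal test segments (A and B) off a list.
    The remainder receives the winning version after the test period."""
    pool = list(subscribers)
    random.Random(seed).shuffle(pool)  # randomise so the segments are unbiased
    seg_a = pool[:segment_size]
    seg_b = pool[segment_size:2 * segment_size]
    remainder = pool[2 * segment_size:]
    return seg_a, seg_b, remainder

subscribers = [f"user{i}@example.com" for i in range(5000)]
a, b, rest = split_for_test(subscribers, segment_size=350, seed=42)
print(len(a), len(b), len(rest))  # 350 350 4300
```

The random shuffle matters: taking the first 700 addresses in list order could bias the test, for example towards your oldest subscribers.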

This brings us neatly to the topic of sample (or segment) sizes. It’s possible to get quite deep into statistical maths here, but as a rule of thumb, for most lists you need each test segment to be between 300 and 400 recipients in order to be confident of an accurate test result. If you’d like to geek out and work out your ideal segment size there’s a handy calculator available here.
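If you'd rather do the back-of-envelope maths yourself, the standard normal-approximation formula for estimating a rate to within a given margin of error gives a figure in the same ballpark as the rule of thumb above. This is a simplified sketch, not a replacement for a proper calculator; the default values (95% confidence, ±5% margin, worst-case 50% response rate) are conventional assumptions.

```python
import math

def segment_size(p=0.5, margin=0.05, z=1.96):
    """Sample size needed to estimate a rate to within +/- margin
    at 95% confidence (z=1.96). p=0.5 is the conservative worst case."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(segment_size())  # 385 recipients per segment, within the 300-400 rule of thumb
```

If you already know your typical open rate is well below 50%, plugging it in as `p` shrinks the required segment a little.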

So, now you’ve created your test versions, decided how you’ll choose the winner and selected your sample sizes. All that’s left is to run the test and watch the results! Then make sure you note what you’ve learnt and apply it to future campaigns – by constantly trying things out you can continually optimise your campaigns and improve your return on investment.

You can use our split-testing tool to create your segments, run the test and pick the winner automatically (even sending the winner out to the remainder of your database if you wish). You can find out all about split-testing here.