The Hows and Whys of A/B Testing

Posted by Nick Grant on Feb 19, 2020 9:15:00 AM

The sales and marketing fields are always changing. It is hard to choose the most effective practices and know what will work specifically for your company. Because of this, many sales teams get stuck in their antiquated sequences.

 

Many marketing teams do not freshen up content. They are afraid to try new ideas, and are scared to take risks due to the “if it ain’t broke, don’t fix it” mindset. This is where A/B testing can be incorporated into your sales and marketing practices.

 

So, what is A/B testing? We're glad you asked. 

 


A/B testing is a way of comparing two versions of an asset (version A and version B), whether a web page, app screen, playbook, or piece of marketing content (in our case), to see which one performs better. It is a low-risk, high-reward strategy: you can continuously try new ideas, quickly drop the ones that fail, and keep the ones that work.

 

Harvard Business Review cites an example from 2012, when a Microsoft employee proposed changing a single line of code to alter the way ad headlines were displayed. After running A/B tests, the company found that the quick change increased revenue by 12%, which equated to over $100 million a year. 

 

The problem is that it took six months for the team to actually implement the idea, out of fear of risk and revenue loss. That lag translated to roughly $50 million in opportunity cost from not testing the idea when it was proposed.

 

A/B testing creates a way for everyone's ideas to be heard, tested, and dropped if they turn out to be duds. Conversely, if they are good, who knows? You could find yourself with an extra $100 million as the result of a tiny change.

 

A/B testing is no beast of a concept to understand. In fact, almost every company in the world has probably used it in some form. Implementation is where things can get confusing, so we have laid it out as simply as possible. 

 

Here are 5 Easy Steps for Implementing an A/B Test

 

1. Choose one variable to test

A variable to test can be any component of your web page, sequence, app, etc. If we're talking about a website, you can test the effect of changing the text in the popup that asks visitors to sign up for your email list. If we're talking about a sequence, you can test the effect of sending a postcard one day after calling your lead versus three days after. Or you could try sending a bifold instead of a postcard. 

 

However many variables you may want to test, stick to changing just one per test. That way you can see exactly what caused the changes found through the A/B test. You can test more than one variable for a given asset (like a web page), just make sure to do it one at a time. 

 

There are, however, situations where you will want to test several variables at once, called multivariate testing. If that is something you want to learn more about, read this article from Optimizely that compares multivariate and A/B testing. 

 

2. Create the hypothesis

Your hypothesis states what you expect the results of your A/B test to be. For example, if "Just checking in" were used in an email subject line, one might expect the reply rate to fall because the phrase is so commonly used. Outreach actually ran this experiment and found the opposite: "Just checking in" received an 86% higher reply rate.
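Once the results are in, it helps to check whether the difference you observed is bigger than random noise before declaring your hypothesis confirmed or refuted. Here is a minimal two-proportion z-test sketch in Python (the reply counts are hypothetical, not Outreach's actual data):

```python
from math import sqrt, erf

def two_proportion_z(replies_a, sent_a, replies_b, sent_b):
    """Two-sided z-test for the difference between two reply rates.

    Returns (z statistic, p value); a small p value (commonly < 0.05)
    suggests the difference is unlikely to be pure chance.
    """
    rate_a, rate_b = replies_a / sent_a, replies_b / sent_b
    pooled = (replies_a + replies_b) / (sent_a + sent_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_b - rate_a) / std_err
    # Normal CDF via erf gives the two-sided p value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 500 emails per arm, 40 replies for A vs. 55 for B.
z, p = two_proportion_z(40, 500, 55, 500)
```

With these made-up numbers the p value lands above 0.05, a reminder that an encouraging-looking lift can still be noise at small sample sizes.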

 

3. Define your success criteria

Before you set up your test, you need to know what you're measuring. There may be multiple metrics tied to the variable you plan to test, but which one will tell you whether your A/B test was successful? If you are testing an email subject line, the success criterion could be a response-rate increase of more than 2 percentage points. Make sure this is measurable!
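As a sketch of how a criterion like that might be checked, here is a small Python function that computes the challenger's lift in percentage points (reading the 2% bar as 2 percentage points; the counts are made up for illustration):

```python
def lift_in_points(control_replies, control_sent,
                   challenger_replies, challenger_sent):
    """Challenger's reply-rate lift over the control, in percentage points."""
    control_rate = control_replies / control_sent
    challenger_rate = challenger_replies / challenger_sent
    return (challenger_rate - control_rate) * 100

# Hypothetical: 500 emails per arm, 40 replies (control) vs. 55 (challenger).
lift = lift_in_points(40, 500, 55, 500)   # 8% -> 11%, a 3-point lift
met_criterion = lift >= 2.0               # success bar: at least 2 points
```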

 

4. The control and the challenger

Your control is the unaltered version of whatever you're testing. It could be the email sequence with the subject line you have been using, or your web page with the current text in the email-list popup.

 

The challenger is a new version of the sequence or page with that one variable altered: a new subject line (an alternative to "Just checking in..."), or popup text that offers a promotion for signing up instead of just saying "Don't miss a beat, sign up for our email list." You will be testing the challenger (the B test, if you will) against the control (the A test). 
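One practical detail: each lead should be assigned to the control or the challenger consistently, so the same person never sees both versions. A minimal Python sketch (the hash-based approach and the ID format are illustrative assumptions, not a prescribed method):

```python
import hashlib

def assign_variant(lead_id: str) -> str:
    """Deterministically bucket a lead into control ("A") or challenger ("B").

    Hashing the ID instead of picking randomly means the same lead always
    lands in the same bucket, visit after visit, with a roughly 50/50 split.
    """
    digest = hashlib.md5(lead_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Because assignment is a pure function of the ID, no per-lead state has to be stored to keep the experience consistent.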

 

5. Decide on the size and duration of your test

 

For email campaigns, determine what percentage of your audience will receive the control and the challenger, and make sure the two groups are equal in size. 

 

Also determine the duration of your test. This is an art, not a science: leave the test running long enough to collect quality data, but not so long that missed opportunities start to harm the campaign.
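If you would rather size the test up front than eyeball it, a standard sample-size formula for comparing two rates gives a rough per-arm target. A Python sketch (the 8% and 10% rates are hypothetical; the constants correspond to 95% confidence and roughly 80% power):

```python
from math import ceil

def sample_size_per_arm(rate_control, rate_challenger,
                        z_alpha=1.96, z_beta=0.84):
    """Rough per-arm sample size needed to detect the gap between two rates."""
    variance = (rate_control * (1 - rate_control)
                + rate_challenger * (1 - rate_challenger))
    effect = (rate_challenger - rate_control) ** 2
    return ceil((z_alpha + z_beta) ** 2 * variance / effect)

# e.g. detecting a lift from an 8% reply rate to 10%:
needed = sample_size_per_arm(0.08, 0.10)   # a few thousand emails per arm
```

Small expected lifts demand surprisingly large audiences, which is one reason tests on low-traffic assets need to run longer.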

 

A/B testing is all about using simplicity to optimize; don't over-think it or it will get confusing fast. Make sure your challenger differs from the control by exactly one variable. Predetermine the duration of the experiment and know your metrics for success. Apply these tactics to the areas where your business is lagging and watch it change for the better.

 

Topics: Marketing Psychology and Science