How often should you test new sales and marketing ideas or strategies? Should you always run multiple tests at once? Or should you only run one test at a time? And what exactly is an A/B test?
These are questions that come up all the time at AK Operations. Whether we’re running campaigns for ourselves or for our clients to grow their sales pipelines, it can be tempting to make decisions based on gut feelings. But if you want to take a data-driven approach to lead generation, A/B testing is a crucial component that allows you to identify which elements actually work and bring you closer to your goals.
That’s why we’re sharing the essential information you need to know about A/B testing so you can gain actionable insights, improve conversion rates, and maximize your ROI.
What is A/B testing?
An A/B test is a marketing experiment in which you test two variations of a campaign asset against each other to determine which performs better. Essentially, you show version A to one half of your audience and version B to the other half. A/B testing is also known as split testing, since you're "splitting" your audience into halves.
Why is A/B testing effective?
A/B testing is effective because it allows us to see how one piece of marketing content performs against another.
For example, let’s say you’re running an email campaign with the goal of booking sales meetings with qualified leads. The first email of the campaign contains a blog article with information relevant to your target audience, and there’s a call-to-action (CTA) button that says “Read Now.” You notice that several of the email recipients are clicking on this button to go to your company’s blog, but not many are clicking on the CTA down below that says “Book Now” to schedule a sales meeting.
You want to increase the conversion rate so that more people book meetings with your sales team. Ideally, if you improve conversions at this point in the sales funnel, more people will convert later in the funnel, ultimately driving an increase in the number of deals closed.
So, you decide to run a simple A/B test on the color of the sales meeting CTA button. Right now, both buttons are green, but you want to see whether changing the sales meeting CTA to red will better draw people's attention and result in more sales meetings. You randomly split the email recipients for this campaign so that one group receives Email Version A (green button) and another equally sized group receives Email Version B (red button). After running the test for two weeks, you discover that this simple change to the button color resulted in a 4% lift in booked sales meetings.
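To make that result concrete, here's a minimal sketch of how you'd compute each version's conversion rate and the relative lift. The counts are hypothetical, chosen to produce a 4% lift:

```python
# Hypothetical counts for each email version.
sent_a, booked_a = 500, 25   # Version A: green CTA (control)
sent_b, booked_b = 500, 26   # Version B: red CTA (variant)

rate_a = booked_a / sent_a   # 5.0% of recipients booked a meeting
rate_b = booked_b / sent_b   # 5.2% of recipients booked a meeting

# Relative lift: how much better the variant converted than the control.
lift = (rate_b - rate_a) / rate_a
print(f"Control: {rate_a:.1%} | Variant: {rate_b:.1%} | Lift: {lift:.1%}")
```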
How do you run an A/B test?
The example mentioned above is a simple one, but it highlights the key steps in running an A/B test, which we outline below.
1. Test a single variable.
To accurately measure how effective a change is, you need to isolate it and test only that change. You can test more than one change over time, but the key is to test them one at a time so you can attribute any difference in results to that single change.
In our example scenario, testing a single variable looks like only changing the color of the CTA button. Although we may have wanted to see whether changing the email subject line, preview text, or even the location of the CTA button would improve conversion rates, we only tested one variable.
2. Set the goal of your A/B test.
You may be measuring several metrics during the test, but you'll want a single goal to focus your test on. In our example, the primary goal was to increase the number of booked sales meetings, even though we might also have gathered data on the number of CTA clicks.
For emails, here are some suggestions for which metric to focus on based on the variable you're testing (the sketch after the list shows how each metric is typically computed):
- Email subject line → Email open rate
- Email body copy, design, image vs. no image → Email click-through rate
- Call-to-action → Email response rate
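Each of these metrics is a simple ratio. Here's a minimal sketch; note that definitions vary slightly by platform (some divide by emails sent rather than delivered), so check your tool's documentation:

```python
# Minimal definitions of the email metrics above. These divide by
# delivered emails; some platforms divide by sent emails instead.

def open_rate(opens: int, delivered: int) -> float:
    """Share of delivered emails that were opened."""
    return opens / delivered

def click_through_rate(clicks: int, delivered: int) -> float:
    """Share of delivered emails in which a link was clicked."""
    return clicks / delivered

def response_rate(replies: int, delivered: int) -> float:
    """Share of delivered emails that received a reply."""
    return replies / delivered
```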
3. Ensure you have a “control” to test your variation against.
Now you’ll need to ensure you have a “control” version of your content asset to test your variation against. The control should be an unaltered version of what you normally use. You can call the unaltered version “version A” and the altered one “version B.”
In our example, the control (Version A) was the original email with both CTA buttons in the same shade of green. The test version (Version B), the one being tested against the control, had the sales meeting CTA in red rather than green.
4. Randomly split your sample groups into two equal groups.
If you’re running a test where you can control the audience, you’ll need to randomly split your sample group into two equal groups. This ensures that each variation is shown to a random, equally sized sample of participants.
In our example, we can control who receives which email version, which means we need to split our email list into two random, equally sized groups.
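If you're working outside a marketing platform, this kind of split is easy to script. Here's a minimal sketch (the function name and email addresses are illustrative); passing a fixed seed makes the split reproducible:

```python
import random

def split_ab(contacts, seed=42):
    """Randomly split a contact list into two equally sized groups."""
    shuffled = list(contacts)              # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]  # (Version A group, Version B group)

group_a, group_b = split_ab(["ana@example.com", "ben@example.com",
                             "cai@example.com", "dee@example.com"])
```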
At AK Operations, we developed our own HubSpot workflow to split a sample group into two equal, randomly selected groups. Our Sharp Chick secret: filter on contacts whose names start with the letters that run from A through L across the middle row of the keyboard (A, S, D, F, G, H, J, K, L), and you’ll get an approximate 50/50 split. This workflow lets you manually A/B test within workflows, not just in the email blasts with A/B variants that HubSpot promotes. You can then check the workflow’s details page to see which version won.
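If you want to approximate that heuristic outside of HubSpot, here's a minimal sketch. It assumes each contact record has a first_name field, which is our illustrative naming rather than an actual HubSpot property:

```python
# A sketch of the home-row heuristic above: contacts whose first names
# start with one of the keyboard's middle-row letters go into one group,
# everyone else into the other. The split is only approximately 50/50
# and is deterministic, not truly random.
HOME_ROW = set("ASDFGHJKL")

def split_by_home_row(contacts):
    group_a = [c for c in contacts if c["first_name"][:1].upper() in HOME_ROW]
    group_b = [c for c in contacts if c["first_name"][:1].upper() not in HOME_ROW]
    return group_a, group_b
```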
5. Run both test variations at the same time.
Since timing can affect your campaign’s results, you’ll want to run both variations of your A/B test at the same time. This way, you know that factors such as the time of day or day of the week won’t skew your findings.
In our example, this means sending the two email versions on the exact same day, or even at the exact same time. In fact, for email tests, we recommend sending both versions at the same time, since send times can make a huge difference in open and reply rates.
Meet AK Operations
Need help taking a data-driven approach to increase your sales pipeline?
At AK Ops, our mission is to operationalize the CRM by connecting marketing and sales campaigns that enable demand gen programs on autopilot. We build content campaigns to nurture contacts in the database, then deploy sales sequences to those who engage most. Our program enables sales teams to work the right leads, at the right time, with the right message—ultimately building more pipeline with better conversion rates.
Contact us today to learn more.