A/B Testing

What Is A/B Testing?

An A/B test compares variations of something (e.g., a web page) to determine which works better. It is a randomized experiment in which two or more versions (one serving as the control) are compared, with the aid of statistical tools, to determine which performs best against a specific goal.


A/B testing is one of the most widely used techniques in user experience research. Also known as bucket testing or split testing, the method has historically been popular in marketing and advertising. It is also commonly used in product development.

Common Uses

There are three primary ways that product teams use A/B testing:

New Feature Rollout – We all think our new feature is going to be amazing for users, but sometimes it can actually have a negative impact on KPIs that weren’t anticipated. Or the feature may underperform in other ways, such as being confusing and therefore not adopted, or leading to lower NPS scores. For major new features, it is a good idea to roll out slowly and measure the impact carefully – an A/B testing framework is a great way to do this (see the rollout sketch after this list).


Engagement Optimization – Typically this starts with a goal, such as increasing the conversion rate of new users, and is pursued by forming a set of hypotheses about what might help achieve that goal. For example, you might hypothesize that explaining the value of membership will drive higher conversion on the subscription page, and thus test variants intended to do that. Through regular optimization efforts, teams rack up meaningful improvements to user flows and to the achievement of their goals.

Data Model Optimization – Server-side testing is a more advanced use case, often used when building search or personalized recommendation features that return a list of items to a user based on their behavior or search queries. Because complex data models on the backend determine how to respond to a given user’s needs, server-side testing is required. As with engagement optimization, insights are gained through A/B testing, or sometimes through more sophisticated adaptations of the concept such as multivariate testing (testing multiple variables at once).
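
To make gradual rollout and variant assignment concrete, here is a minimal sketch of hash-based bucketing in Python. The function names, the SHA-256 hashing choice, the experiment name, and the 10% rollout figure are illustrative assumptions rather than a prescribed implementation.

```python
import hashlib

def bucket(user_id: str, experiment: str, num_buckets: int = 100) -> int:
    """Deterministically map a user to a bucket in [0, num_buckets).

    Hashing (experiment, user_id) keeps assignments stable across sessions
    and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % num_buckets

def in_rollout(user_id: str, experiment: str, percent: int) -> bool:
    """Gradual rollout gate: only `percent`% of users see the new feature."""
    return bucket(user_id, experiment) < percent

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Split users evenly across the listed variants."""
    return variants[bucket(user_id, experiment) % len(variants)]

# Example (hypothetical experiment name): expose 10% of users, split 50/50.
if in_rollout("user-42", "new_checkout", percent=10):
    variant = assign_variant("user-42", "new_checkout")
else:
    variant = "control"
```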

How to Run A/B Tests

The A/B testing process may differ from one company to another. However, the following key steps help greatly when conducting one:

Determine goals – One of the first things to do is to pinpoint the goals you will focus on when comparing variants. An example goal is conversion rate. You may need to precede this with data collection, such as from analytics tools, to figure out which areas need optimizing.

Identify data – You need to determine the type of data you can capture to show how well variations perform. Failing to do this can leave you unable to use your results effectively.
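
As a rough illustration of the kind of data worth capturing, the sketch below defines minimal exposure and conversion records; the field names and structure are illustrative assumptions, not a required schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ExposureEvent:
    """Logged when a user is assigned to a variant (control or treatment)."""
    user_id: str
    experiment: str
    variant: str
    timestamp: datetime

@dataclass
class ConversionEvent:
    """Logged when a user completes the goal action (e.g., subscribes)."""
    user_id: str
    experiment: str
    timestamp: datetime
```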

Develop a hypothesis – Come up with a hypothesis (or hypotheses) on how you think users will react to certain changes you might make. Put differently, form a theory on how customers will interact with a new feature. What steps will they follow?

Carry out your experiment – Create variations of the new feature or asset element and have users test them. The people exposed to these variations should be representative of your target user base, and the sample should be large enough for any real difference to reach statistical significance. Users should be assigned to the control or a variation at random.
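
To give a sense of what “large enough” means, here is a minimal sketch of the standard two-proportion sample-size calculation; the baseline rate, expected lift, significance level, and power shown are illustrative assumptions.

```python
from statistics import NormalDist

def sample_size_per_variant(p_baseline: float, p_expected: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Users needed in each group to detect a change from p_baseline to
    p_expected at significance level alpha with the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    n = (z_alpha + z_beta) ** 2 * variance / (p_baseline - p_expected) ** 2
    return int(n) + 1

# Example: detecting a lift from a 5% to a 6% conversion rate.
print(sample_size_per_variant(0.05, 0.06))  # roughly 8,000 users per variant
```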

Dig into results – Finally, take time to go over your A/B test results to see which variation performs better or resonates more with users. Check whether any observed difference is statistically significant. A/B testing software makes the work of analyzing your results a lot easier.
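
For a sense of what a significance check involves, here is a minimal sketch of a two-proportion z-test on conversion counts; the numbers are made up, and real analyses typically include further checks (e.g., sample ratio mismatch, multiple metrics).

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal distribution

# Example with made-up counts: control converts 500/10,000; treatment 570/10,000.
p_value = two_proportion_z_test(500, 10_000, 570, 10_000)
print(f"p-value: {p_value:.4f}")  # values below 0.05 are conventionally called significant
```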
