Please note that this page is still a draft.

What is A/B Testing?

A/B testing is a method for determining which variant of a text or graphic converts better. When you want to change an aspect of the design, rather than going in blindly or relying on gut feeling, A/B testing lets you make changes that perform better than, or at the very least as well as, the existing design.

Your success with A/B testing depends on a few factors:

  • The amount of traffic you have
  • The effect size of your changes. Impactful changes require fewer participations to reach a conclusion

Conversion

A conversion can be defined as any action that is desirable. For example, a landing page can use any of the following as its conversion metric:

  • Email sign-ups for a newsletter
  • Clicks on social media share buttons
  • Email engagements
  • A visitor staying on the site instead of bouncing

The goal of split testing is to discover which variant has the higher conversion rate. Each variant has a true conversion rate that is not known to us, but we can try to estimate it using observations and data collection.

The conversion rate is the percentage of participations that resulted in a conversion. It can be calculated as conversions / participations. Note that while we can calculate the conversion rate for each variant without any complex calculations, the raw rate alone tells us nothing about the confidence or the statistical significance of the result.

Variants

A/B tests are a special case of multivariate testing with two variants, namely A and B. Depending on the method used, you can just as easily perform split tests across an arbitrary number of variants.

Participations

A participation happens the moment you roll the dice and decide which variant to use. This can be when the user loads the page, or when you send out an email.

Server-side

Server-side participation is suitable for traditional web applications, such as those written in PHP, C#, Python or Java. When you receive a request from a user, you apply the chosen variant to the response you send, and then record the participation in your database. This allows you to serve the same user the same variant on subsequent requests, and prevents a single user from discovering the other variants.
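The flow above can be sketched in a few lines of Python. This is a minimal illustration, not a full implementation: the "database" is an in-memory dict, and all names are made up for the example.

```python
import random

VARIANTS = ["A", "B"]

# Stand-in for a database table mapping user IDs to variants.
assignments = {}

def get_variant(user_id):
    if user_id not in assignments:
        # Roll the dice: this is the moment of participation,
        # recorded so the same user always gets the same variant.
        assignments[user_id] = random.choice(VARIANTS)
    return assignments[user_id]

first = get_variant("user-42")
# Repeated requests from the same user return the same variant.
assert get_variant("user-42") == first
```

In a real application you would persist the assignment before sending the response, so that a page reload cannot re-roll the dice.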

Client-side

For client-side participations, the client knows about all the variants, and can configure itself for any of them. After a variant is picked randomly, the client reports to the server (via API calls, analytics, etc.) which variant it has chosen.

Client-side participation is suitable for static websites, SPAs, mobile applications and games.
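A client-side participation can be sketched the same way. Here `store` stands in for whatever local persistence the client has (a settings file, local storage), and `report_participation` is a placeholder for the real API or analytics call; both names are illustrative.

```python
import random

VARIANTS = ["A", "B"]

def report_participation(variant):
    # Placeholder: in a real client this would be an HTTP request
    # or an analytics event sent to the server.
    print(f"participation recorded for variant {variant}")

def load_or_pick_variant(store):
    # Persist the choice locally so the user keeps seeing
    # the same variant, and report it only once.
    if "variant" not in store:
        store["variant"] = random.choice(VARIANTS)
        report_participation(store["variant"])
    return store["variant"]

store = {}
variant = load_or_pick_variant(store)
assert variant in VARIANTS
```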

Methods

There are different ways of performing A/B testing.

Bayesian

The Bayesian method provides estimates of the conversion rates as probabilities. This means that from the start of the split test until the end, you always have an idea about the estimated conversion rates for each variant. As you collect more data for the test, these estimates get more and more accurate.

Conversion rate estimation with Bayesian split testing is commonly done using the beta distribution. You can draw samples from the beta distribution using random.betavariate(a, b) from the Python standard library, or numpy.random.beta(a, b).
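As a small illustration with the standard library: assuming a uniform Beta(1, 1) prior (a common default choice), the posterior for a variant's conversion rate after observing the data is Beta(conversions + 1, participations − conversions + 1).

```python
import random

participations = 500
conversions = 210

# Posterior samples of the conversion rate, assuming a
# uniform Beta(1, 1) prior over the rate.
samples = [
    random.betavariate(conversions + 1, participations - conversions + 1)
    for _ in range(10_000)
]

# The samples cluster around the observed rate of 0.42.
estimate = sum(samples) / len(samples)
```

The spread of the samples narrows as you collect more participations, which is what "the estimates get more accurate" means in practice.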

Frequentist

// TODO: Write this section

Example code

Below are some examples of A/B testing in Python.

Conversion rate

Here’s how you can calculate conversion rates in Python.

participations = 500
conversions = 210

rate = conversions / participations

# Prints "The conversion rate is 42%"
print(f"The conversion rate is {rate:.0%}")

Bayesian

import numpy as np

# Variant A
# Participations and conversions
Ap, Ac = 500, 200

# Variant B
# Participations and conversions
Bp, Bc = 550, 230

# Draw posterior samples for each variant's conversion rate,
# assuming a uniform Beta(1, 1) prior.
A_samples = np.random.beta(Ac + 1, Ap - Ac + 1, size=100_000)
B_samples = np.random.beta(Bc + 1, Bp - Bc + 1, size=100_000)

# Estimated probability that B converts better than A.
prob_b_beats_a = (B_samples > A_samples).mean()

Relevant links