A/B testing based on Feature Flags

Problem to solve

A/B testing, also known as split testing, is an experiment in which you "split" your audience to test multiple variations of your code. In other words, you can show version A to one half of your audience and version B to the other (the percentage split is configurable).
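
As a purely illustrative sketch (not GitLab's implementation), a deterministic split can be produced by hashing a stable user ID into the range [0, 100) and comparing it against the configured percentage; the function name and default threshold below are assumptions for demonstration only:

```python
import hashlib

def assign_variant(user_id: str, percent_a: float = 50.0) -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing a stable user ID keeps each user in the same variant
    across sessions; percent_a is the share of traffic shown version A.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 10000 / 100.0  # uniform-ish value in [0, 100)
    return "A" if bucket < percent_a else "B"
```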

We want to be able to control deployments based on experiments defined in GitLab.

Intended users

  • Release managers
  • Support managers
  • Customer success managers (for beta/test users)
  • Developers awaiting feedback to select the "right" feature path

Internal groups:

  • Marketing
  • Growth

Further details

Proposal

Users should be able to define A/B tests through our Feature Flags interface.

This feature flag should support N (TBD) variants. Each variant should consist of the following (see the sketch after this list):

  1. Name
  2. Description
  3. Percentage
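
A minimal sketch of how N such variants might be modeled and selected, assuming cumulative percentage bucketing; the `Variant` structure, field names, and example variants below are assumptions, not a confirmed design:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    description: str
    percentage: float  # share of traffic, 0-100

def pick_variant(user_id: str, variants: list[Variant]) -> Variant:
    """Map a user into one of N variants via cumulative percentage buckets."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 10000 / 100.0  # value in [0, 100)
    cumulative = 0.0
    for variant in variants:
        cumulative += variant.percentage
        if bucket < cumulative:
            return variant
    return variants[-1]  # guard against rounding gaps in the configured split

variants = [
    Variant("control", "Current checkout flow", 50.0),
    Variant("one_click", "One-click checkout", 30.0),
    Variant("guest_checkout", "Checkout without an account", 20.0),
]
```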

UX to be provided.

A good base for development could be the existing percent rollout strategy for Feature Flags, which can already mimic A/B testing at a 50/50 split.
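
GitLab feature flags are served through an Unleash-compatible API, so as an illustrative sketch, a flag configured with a 50% percent-of-users rollout could route traffic between two versions as below; the instance URL, instance ID, flag name, and render helpers are all hypothetical placeholders:

```python
from UnleashClient import UnleashClient

# URL, instance ID, and flag name are hypothetical placeholders.
client = UnleashClient(
    url="https://gitlab.example.com/api/v4/feature_flags/unleash/42",
    app_name="production",
    instance_id="<instance-id>",
)
client.initialize_client()

def render_version_a() -> str:
    return "checkout page A"

def render_version_b() -> str:
    return "checkout page B"

def render_page(user_id: str) -> str:
    # With a 50% "percent of users" rollout configured on the flag,
    # roughly half of the user IDs take the B path.
    if client.is_enabled("checkout_experiment", {"userId": user_id}):
        return render_version_b()
    return render_version_a()
```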

The work can be phased (a sketch of the winner-selection step follows this list):

  1. First phase: route traffic according to the percentage configured for each variant.
  2. Next phase: select a "winner" based on user-defined metrics, such as performance or completion of an action.
  3. Later phase: automatically direct traffic to the "winner" once a user-defined time or threshold is reached (e.g. number of users logged in).
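
As a minimal sketch of the later phases, assuming per-variant event counts are already being collected, winner selection under a user-defined threshold might look like this; the metric (conversion rate), threshold, and all names below are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VariantStats:
    name: str
    users: int        # users who saw this variant
    conversions: int  # users who completed the target action

    @property
    def conversion_rate(self) -> float:
        return self.conversions / self.users if self.users else 0.0

def pick_winner(stats: list[VariantStats], min_users: int = 1000) -> Optional[str]:
    """Declare a winner only after every variant reaches the
    user-defined threshold of observed users."""
    if any(s.users < min_users for s in stats):
        return None  # threshold not reached; keep splitting traffic
    return max(stats, key=lambda s: s.conversion_rate).name

stats = [VariantStats("A", 1200, 90), VariantStats("B", 1500, 150)]
print(pick_winner(stats))  # "B": a 10% conversion rate beats 7.5%
```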

Permissions and Security

Documentation

Testing

What does success look like, and how can we measure that?

Number of users using the A/B testing strategy.

What is the type of buyer?

Links / references
