A/B Testing
A/B Testing, also known as split testing, is a method of comparing two versions of a web page, app feature, or any other product component to determine which one performs better. By presenting variant A (the control) and variant B (the treatment) to similar audiences simultaneously, teams can collect data on user behaviour to inform decisions.
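To keep the comparison fair, users are typically assigned to a variant at random and then kept in that variant for the duration of the experiment. Below is a minimal Python sketch of that idea, assuming a stable `user_id` is available; the names `assign_variant`, `record_visit`, and `checkout_button` are illustrative, not part of any specific framework.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "checkout_button") -> str:
    """Deterministically split users 50/50 between control (A) and treatment (B).

    Hashing user_id together with the experiment name keeps each user in the
    same variant across sessions, while different experiments get
    independent splits.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100           # map the hash to a 0-99 bucket
    return "A" if bucket < 50 else "B"       # A = control, B = treatment

# Tally exposures and conversions per variant (illustrative in-memory counters).
exposures = {"A": 0, "B": 0}
conversions = {"A": 0, "B": 0}

def record_visit(user_id: str, converted: bool) -> None:
    variant = assign_variant(user_id)
    exposures[variant] += 1
    if converted:
        conversions[variant] += 1
```

Hashing rather than drawing a fresh random number per request keeps the split deterministic, so a returning user always sees the same variant.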
Goal
The primary goal of A/B Testing is to make data-driven decisions to improve user engagement, satisfaction, and conversion rates by identifying and implementing the most effective version of a product feature or element.
Context
There are countless ways to solve a user's problem, but they are not all equally effective. A/B Testing lets you compare candidate solutions against each other and measure which one performs best.
Inputs
Artifact | Description |
---|---|
Experiment Plan | A detailed document outlining the experiment objectives, methods, timeline, and success criteria. |
Usability Test Guide | A guide for the moderator to follow during a usability test, including the tasks to be performed, the questions to be asked, and the data to be collected. |
Digital Assets | A live webpage or other digital asset to be used in a Landing Page, Fake Door, A/B or multivariate experiment. |
Outputs
Artifact | Description | Benefits |
---|---|---|
Validated Assumptions | A list of assumptions that have been validated by the experiment. | |
Invalidated Assumptions | A list of assumptions that have been invalidated by the experiment. | |
Anti-patterns
- Building Two Versions: A/B Testing requires two working versions of a feature to be built, which can be time-consuming and costly.
- Testing Too Many Elements at Once: If too many variables change at once, it becomes difficult to identify which change drove the results.
- Ignoring Statistical Significance: Making decisions based on results that are not statistically significant (see the sketch after this list).
- Chasing Local Maxima: Over-optimising for minor improvements at the expense of broader product goals.
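As a guard against the "Ignoring Statistical Significance" anti-pattern, the observed conversion rates can be checked with a standard two-proportion z-test before declaring a winner. The sketch below uses only the Python standard library; `two_proportion_z_test` and the example counts are illustrative assumptions, not figures from a real experiment.

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int,
                          conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for the difference between two conversion rates.

    Returns (z_statistic, p_value). A p-value below the chosen alpha
    (commonly 0.05) suggests the observed difference is unlikely to be
    due to chance alone.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                    # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5   # standard error under H0
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))                # two-sided p-value
    return z, p_value

# Example: 480/10,000 control conversions vs 540/10,000 treatment conversions.
z, p = two_proportion_z_test(480, 10_000, 540, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

In this illustrative example the treatment shows a nominal lift, yet the p-value comes out around 0.054, just above the conventional 0.05 threshold, so the result would not be treated as conclusive; and the test is only meaningful if the sample size was fixed before peeking at the results.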