Running Experiments
Running experiments means systematically testing hypotheses through evaluative research with target users. It is crucial for validating assumptions about user behaviour, product features, and market needs.
Purpose
The purpose of running experiments is to validate, or invalidate, the riskiest assumptions behind a product decision. It helps teams make decisions based on evidence from real users rather than on opinion or intuition.
Context
Industry Context
Most features fail because they are built on untested assumptions about how users will behave, and those assumptions are often wrong. Running experiments validates these assumptions before too much time and effort is invested in building a solution.
ZeroBlockers Context
We are accountable for outcomes, not just outputs, so we need to actively overcome our biases if we want to achieve the desired results.
Methods
Practice | Description | Benefits | Considerations | Best Suited For
---|---|---|---|---
Prototype Testing | Creating early models of a product to gather feedback and iterate before full-scale development. | Cheap, fast feedback; surfaces usability problems before code is written. | Low-fidelity prototypes may not reflect real usage; feedback is attitudinal, not behavioural. | Testing the usability and desirability of a proposed solution.
Landing Page Testing | Creating a page to describe a potential product or feature to gauge user interest through actions like sign-ups. | Measures real interest with minimal build effort. | Sign-ups signal interest, not willingness to pay; requires a source of traffic. | Gauging demand before committing to development.
Wizard of Oz Testing | Simulating the functionality of a product or feature that doesn't yet exist to test user reactions. | Tests the full user experience without building the underlying technology. | Labour-intensive to run; does not validate technical feasibility. | Validating complex or automated features before investing in them.
Concierge Testing | Manually providing services or features to users that you plan to automate in the future, to validate demand. | Deep qualitative insight into what users actually need. | Does not scale; users know the service is manual. | Refining a service and validating demand before automating it.
Fake Door Testing | Presenting the option for a non-existent product or feature to measure user interest based on engagement. | Measures genuine behavioural intent at very low cost. | Can frustrate users who click and find nothing; use sparingly. | Prioritising which features to build next.
Crowdfunding Campaigns | Using platforms to present product ideas to potential customers, gauging interest through financial pledges. | Validates willingness to pay, not just interest. | Campaigns demand significant marketing effort, and failure is public. | Validating demand for new consumer or hardware products.
A/B Testing | Comparing two versions (A and B) to see which one performs better (see the sketch below the table). | Statistically grounded comparisons based on real behaviour. | Requires sufficient traffic and a clear success metric. | Optimising existing products with measurable conversion goals.
Multivariate Testing | Testing multiple variables simultaneously to see how they interact and affect outcomes. | Reveals interactions between variables in a single experiment. | Needs far more traffic than A/B testing to reach significance. | Fine-tuning pages or flows with several interacting elements.
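To make the A/B testing row concrete, here is a minimal sketch of the two mechanics most teams need first: deterministic traffic splitting and a significance check on conversion rates. It is an illustration rather than a production framework; the function names, experiment name, and numbers are hypothetical, and the statistics use a standard pooled two-proportion z-test.

```python
import hashlib
import math


def assign_variant(user_id: str, experiment: str,
                   variants: tuple = ("control", "treatment")) -> str:
    """Deterministically bucket a user so they always see the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]


def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates (pooled z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| >= |z|) under the null


# Hypothetical experiment: 480/10,000 control vs 540/10,000 treatment conversions.
print(assign_variant("user-123", "new-checkout"))        # "control" or "treatment"
print(two_proportion_p_value(480, 10_000, 540, 10_000))  # ~0.054, borderline
```

Hashing the user ID together with the experiment name means a returning user always sees the same variant without storing any assignment state.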
Anti-patterns
- Lack of Clear Hypotheses: Conducting experiments without a clear understanding of what is being tested.
- Confirmation Bias: Designing experiments in a way that may inadvertently favour a desired outcome.
- Ignoring Negative Results: Disregarding or rationalising away results that do not support initial assumptions.
- Overgeneralisation: Extrapolating findings from a limited dataset to a broader context without sufficient evidence; a pre-experiment sample-size check (see the sketch below) helps guard against this.
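A practical guard against overgeneralisation is to calculate the required sample size before launching the experiment, so results from an undersized dataset are not over-read. The sketch below assumes a standard two-sided, two-proportion power calculation; the baseline and uplift figures are illustrative.

```python
import math
from statistics import NormalDist


def sample_size_per_variant(p_baseline: float, p_expected: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed per variant to detect the given uplift
    with a two-sided, two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    effect = abs(p_expected - p_baseline)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)


# Detecting a lift from 5% to 6% conversion needs roughly 8,000 users per variant.
print(sample_size_per_variant(0.05, 0.06))  # 8155
```

If the experiment cannot realistically reach that volume, the honest conclusion is that the method or the metric needs to change, not that a smaller sample will do.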
Case Studies
Running Quick Experiments with Guerrilla Research
How the LEGO Group used guerrilla research to run quick experiments, enabling rapid insights and agile decision-making.
Optimizing User Experience with A/B Testing at Scale
How Booking.com used A/B testing at scale to optimize user experience, drive product improvements, and increase conversion rates.
Implementing Lean Experimental Product Development in High-Stakes Industries
How PlanGrid implemented lean, experiment-driven product development in a high-stakes industry.
Achieving Alignment Through Quick Design Experiments
How BlaBlaCar used quick design experiments to align its teams and improve its product development process.