Why small samples work
A common objection to small research studies is that they aren't statistically significant: a small sample can't pin down the exact answer. That's true as far as it goes, but the objection misunderstands how product decisions work, and it overstates the value of additional data after the first few observations.
The Urn of Mystery
Imagine a warehouse of urns. Each urn contains a mix of red and green marbles. The percentage of each colour can range anywhere from 0 to 100, and all percentages are equally likely.
You play a game. We pick an urn at random. You state which colour you think is the majority. If you're right, you win £20. If you're wrong, you pay £10. At those 2-to-1 payoffs, you come out ahead as long as you're right more than a third of the time.
Without seeing any marbles, your odds of being right are 50%. Already a profitable bet.
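If you want to see the arithmetic, here is a minimal sketch (the expected_profit helper and the accuracy levels are mine, purely for illustration) of the expected winnings per game at a few guessing accuracies:

```python
# Expected profit per game under the £20 win / £10 loss payoffs.
def expected_profit(accuracy, win=20, loss=10):
    return accuracy * win - (1 - accuracy) * loss

for accuracy in (1 / 3, 0.5, 0.75):
    print(f"right {accuracy:.0%} of the time -> £{expected_profit(accuracy):.2f} per game")
# Roughly £0.00 at the one-third break-even point, £5.00 at 50%, £12.50 at 75%.
```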
Now you're allowed to draw one marble before guessing. If you see a red marble, you say "red is the majority." How often does that work? Across all the urns where your first pull comes up red, the majority colour really is red 75% of the time. One marble takes your guess from 50% accuracy to 75%.
Drawing more marbles improves your accuracy further, but the gains are far smaller than the jump the first marble gave you. The first observation is the most valuable one you make.
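Those percentages are easy to check with a quick simulation. The sketch below is my own illustration of the game, not part of the original setup: it picks an urn with a uniformly random red fraction, draws a few marbles, guesses the sample-majority colour, and tallies how often the guess matches the urn's true majority.

```python
import random

# Monte Carlo sketch of the urn game: pick an urn with a uniformly random
# red fraction, draw n marbles, guess the sample-majority colour (coin flip
# on a tie), and check the guess against the urn's true majority.
def guess_accuracy(n_marbles, trials=100_000):
    wins = 0
    for _ in range(trials):
        p_red = random.random()                   # true fraction of red marbles
        reds = sum(random.random() < p_red for _ in range(n_marbles))
        if 2 * reds == n_marbles:                 # tie (or no draws): guess at random
            guess_red = random.random() < 0.5
        else:
            guess_red = 2 * reds > n_marbles
        wins += guess_red == (p_red > 0.5)
    return wins / trials

for n in (0, 1, 3, 5):
    print(f"{n} marble(s): ~{guess_accuracy(n):.0%} accuracy")
# Roughly 50%, 75%, 81%, 84%: the first marble delivers most of the gain.
```

The curve flattens almost immediately: going from zero marbles to one buys about 25 points of accuracy, while going from one to five buys fewer than ten.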
Diminishing marginal utility of information
This is the principle that the value of additional information drops off quickly as you accumulate more of it. The first interview, the first prototype test, the first cluster of support tickets you read carry most of the signal. The tenth interview adds far less than the first did.
Statistical significance is the point at which the signal is precise enough to publish. Most product decisions don't need that level of precision. They need enough signal to choose between two or three reasonable directions, and the first few observations get you there.
What this means for research budgets
Two practical implications:
- Run smaller studies, more often. Five interviews now beats twenty next quarter. By the time the twenty-interview study finishes, the team has already made the decisions it was meant to inform.
- Stop when the signal stops moving. If two or three consecutive interviews don't change your interpretation, additional data is no longer buying you much. Move on.
The exception is when you need statistical precision: pricing studies, conversion experiments, anything where a 5% difference matters and you'll act on the specific number. Most product research is not that.
What about confirmation bias?
Small samples are more vulnerable to confirmation bias than to statistical noise: you stop asking questions once you've heard what you wanted to hear. The mitigation is process:
- Recruit deliberately, not from the people most likely to say yes.
- Have multiple team members run interviews and compare notes afterwards.
- Keep a separate tally of the times your assumption survived contact with a customer and the times it didn't.
These habits matter regardless of sample size. With a small-sample practice they matter more, because the cost of bias is hidden by the low cost of the work itself.