Why small sample sizes are not the problem you think they are

Small sample sizes do not tell you how common something is. They tell you whether you have seen enough to understand what is going wrong.

27 December 2024 · 5 min read

In short

Why small numbers in discovery research are often enough to reveal structural issues, and why waiting for more evidence can become a way of delaying action.

Why the number becomes the story

Five, maybe six, and suddenly the conversation shifts. The evidence hasn’t changed, but the confidence in it starts to wobble slightly.

It’s a natural reaction. Most people are used to thinking about evidence in terms of scale. Bigger numbers feel safer. More representative. Easier to defend in a room full of stakeholders.

But that way of thinking comes from a different type of work.

Research at this stage isn’t trying to measure how many people experience something.

It’s trying to understand why it’s happening at all.

At the point of discovery, the question is not how common something is. It is why it is happening at all.

Why patterns appear faster than people expect

What tends to surprise people, especially if they haven’t sat through many research sessions themselves, is how quickly patterns start to appear. Not in a perfectly consistent way, not with users saying exactly the same thing, but in behaviour that feels familiar almost immediately.

A hesitation in the same place.

A moment of uncertainty before committing.

Going back to check something that should already be clear.

Individually, those things are easy to dismiss. Together, they start to form a picture.

And once you’ve seen that picture a few times, it’s very hard to ignore.

Key takeaway

You do not need large numbers to spot a structural problem. You need enough repetition to recognise the pattern.

Why the five-user idea still resonates

There’s a reason the five users idea has stuck around for so long. Jakob Nielsen popularised the thinking that a small number of users is often enough to uncover the majority of usability issues in an interface.

It gets quoted a lot, sometimes a bit too rigidly, but the reason it resonates is because it reflects what actually happens in practice.

You don’t need large numbers to spot problems.

You need repetition.
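The curve behind the five-user idea can be made concrete. Nielsen and Landauer modelled problem discovery as 1 − (1 − L)^n, where L is the average chance that a single user exposes any given problem; in their data L averaged about 0.31. A minimal sketch of that model (the 0.31 figure is their published average, not a universal constant, and real studies vary):

```python
def proportion_found(n_users: int, l: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n_users,
    per the Nielsen/Landauer model: 1 - (1 - L)^n.

    L is the average probability that one user exposes a given problem
    (0.31 is the average reported in their study, not a fixed law).
    """
    return 1 - (1 - l) ** n_users

# How the discovery curve flattens as users are added
for n in (1, 3, 5, 10):
    print(f"{n} users -> {proportion_found(n):.0%} of problems")
```

With these assumptions, five users surface roughly 84% of problems, and each additional user adds less than the one before. The point is not the exact percentage but the shape of the curve: returns diminish quickly, which is why repetition across a handful of sessions is already informative.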

What repetition actually tells you

In reality, by the time you’ve seen the same behaviour play out across a handful of people, you’re no longer dealing with coincidence. You’re seeing something structural. Something about the way the product is put together, the way information is presented, or the way decisions are being framed.

And structural problems don’t tend to affect just a few users.

They show up wherever the same conditions exist.

Why teams keep waiting for more evidence

Where teams often get stuck is waiting for the signal to feel bigger.

More sessions, more users, more confirmation, as if the problem will somehow become more valid the more times it’s observed. But in doing that, they end up circling the same findings without actually moving anything forward.

The learning doesn’t deepen. It just repeats.

That’s usually where progress slows down.

Not because the team doesn’t understand the issue, but because they’re still looking for permission to act on it.

What small samples can and cannot tell you

Small samples don’t give you distribution. They don’t tell you how widespread something is, or how it varies across different segments. That kind of understanding comes later, once changes are live and you’re looking at real-world behaviour at scale.

But at the point of discovery, that’s not what you’re trying to answer.

What you’re trying to understand is whether something is off, and why.

And once you’ve seen enough to explain that clearly, more evidence doesn’t necessarily make the decision easier. It just delays the moment where something actually changes.

The more useful question

In practice, the more useful question isn’t “is this enough users?”

It’s “have we seen enough to understand the problem?”

If the answer is yes, then you’re already in a position to move.

And in most cases, the thing holding teams back isn’t a lack of evidence.

It’s a lack of confidence in acting on what they already know.
