UX Strategy

Why Small Sample Sizes Aren't Enough for Business-Critical UX Decisions

“You only need 5 users to find 85% of your problems.” If you've spent any time in UX, you've heard it. But for business-critical decisions, this “truth” is a dangerous half-measure.

The 5-user rule comes from Jakob Nielsen's research in the 1990s, and it was about qualitative usability testing: finding major usability problems, not measuring them. It was never meant for benchmarking, statistical comparison, or justifying million-dollar product roadmaps.

The Problem with the “Magic Number 5”

Human behavior is complex. Stopping at 5 people means you're hearing from a tiny fraction of your users. Even the Nielsen Norman Group has clarified: for quantitative studies, you need much larger samples to avoid building insights in an echo chamber.

When 5 Is Enough
  • Identifying major usability blockers
  • Understanding mental models
  • Catching catastrophic UI failures
When You Need More
  • Benchmarking against competitors
  • Measuring exact completion rates
  • Making high-stakes investment calls

Where Small Samples Fall Apart

Relying on a handful of voices for competitive benchmarking produces noise, not data. Three problems follow from a small sample:

  • Noise vs. Reality: You can't tell if a competitor is better, or if their 5 users just happened to be in a better mood.
  • False Completion Rates: An observed 80% completion rate (4 of 5 users succeeding) is statistically consistent with a true population rate anywhere from roughly 30% to 99%.
  • Zero Segmentation: You can't see how mobile users differ from desktop users, or how Singapore differs from Hong Kong.
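The 30%-to-99% claim can be checked directly. Below is a minimal Python sketch (the function names are illustrative, not part of any Tetrabase tooling) that computes the exact Clopper-Pearson 95% confidence interval for 4 successes out of 5 users, using only the standard library:

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a proportion,
    found by bisecting on the binomial CDF -- no external libraries."""
    def solve(f):
        lo, hi = 0.0, 1.0
        for _ in range(100):          # bisection: f flips False -> True as p grows
            mid = (lo + hi) / 2
            if f(mid):
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2
    # lower bound: p where P(X >= k | p) equals alpha/2
    lower = 0.0 if k == 0 else solve(lambda p: binom_tail(k, n, p) > alpha / 2)
    # upper bound: p where P(X <= k | p) equals alpha/2
    upper = 1.0 if k == n else solve(lambda p: binom_tail(k + 1, n, p) >= 1 - alpha / 2)
    return lower, upper

lo, hi = clopper_pearson(4, 5)
print(f"4/5 successes -> true rate between {lo:.0%} and {hi:.0%}")
# roughly 28% to 99% -- matching the "30% to 99%" range above
```

With n = 5, the exact interval spans almost seventy percentage points: the data genuinely cannot distinguish a struggling product from a near-perfect one.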

The Lesson in Margin of Error

Task completion rates are only useful when the margin of error is tight. Consider the difference:

The Guess: 5 Users

True population value could be anywhere from 30% to 99%. This is effectively a coin flip for your strategy.

The Strategy: 100 Users

At a typical observed completion rate around 80%, the margin of error is about ±8%. You can confidently walk into a boardroom and state your performance within a narrow, actionable window.
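The ±8% figure follows from the standard normal-approximation margin of error, z·√(p(1−p)/n). A quick sketch comparing the two sample sizes at the same observed 80% completion rate (note the approximation actually understates the width at n = 5; the exact interval is even wider):

```python
from math import sqrt

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for an observed proportion
    (normal approximation; z = 1.96 for 95% confidence)."""
    return z * sqrt(p_hat * (1 - p_hat) / n)

# Same observed 80% completion rate at two sample sizes:
for n in (5, 100):
    moe = margin_of_error(0.8, n)
    print(f"n = {n:3d}: 80% +/- {moe:.0%}")
# n = 5 gives roughly +/-35%; n = 100 gives roughly +/-8%
```

The margin shrinks with the square root of n, which is why going from 5 to 100 participants cuts the uncertainty by a factor of about 4.5, not 20.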

Why We Use 100 Participants per Brand

At Tetrabase, we don't choose 100 participants because we like big numbers. We do it to unlock segmentation. With 100 users, we can split the data to answer:

  • Are drop-off rates higher on mobile than desktop?
  • Do first-time visitors struggle more than returning users?
  • How do regional market expectations differ between Singapore and Hong Kong?
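Segmentation questions like these reduce to comparing two proportions. As an illustration only (the numbers below are hypothetical, not real Tetrabase data), a two-proportion z-test can tell you whether a mobile-vs-desktop gap is larger than chance would explain:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test for a difference between
    two segment completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)   # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal tail
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical split of 100 participants: 50 mobile, 50 desktop
z, p = two_proportion_z(32, 50, 42, 50)   # 64% vs 84% completion
print(f"z = {z:.2f}, p = {p:.3f}")        # p below 0.05: a real gap
```

With only 5 participants, no split of the data could ever reach significance; 50 per segment is roughly the point where a 20-point gap becomes detectable.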

“Confidence isn't built on a handful of voices. It's built on understanding the whole picture through the Tetrabase Framework.”

Trust the Signal, Not the Noise

The “5 users is enough” myth persists because it's fast and cheap. But acting on incomplete information isn't better than standing still; it's worse. By the time you realize you've fixed the wrong problem, you've already wasted your budget and dev time.

The difference between market leaders and everyone else is the certainty that their answers are real. When you strip away the guesses, the truth remains. That is the only place worth building from.