Blink recruits representative users and gives them realistic tasks to accomplish while a study moderator observes and identifies usability problems. We find that a baseline experience test, along with a usability inspection such as a heuristic review, is a good way to start improving the overall user experience of a product or service. These baseline tests yield clear, actionable areas for improvement and a way to verify whether design changes are working.

We generally recommend sample sizes of 8 to 14 study participants for baseline experience testing. Though these studies are primarily qualitative by design, they often report quantitative indicators such as task success rates and satisfaction scores.

When larger samples are required or statistical confidence is needed, Blink typically uses samples of 15 or more study participants to establish quantitative usability benchmark metrics for a product. These might include time on task, error rates, completion rates, or user satisfaction scores. Benchmark tests can also be replicated to quantify design improvements between development builds or releases, compare performance against competing products, or assess more than one potential design in “A/B” testing.
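To make the idea of statistical confidence concrete, here is a minimal sketch of how a benchmark completion rate might be reported with a confidence interval. The Wilson score interval is one common choice for small usability samples; the function name and the example numbers (12 successes out of 15 participants) are illustrative, not Blink's actual method.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a proportion.

    successes: number of participants who completed the task
    n: total participants in the benchmark
    z: critical value (1.96 gives a ~95% interval)
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin, center + margin)

# Hypothetical benchmark: 12 of 15 participants completed the task.
low, high = wilson_interval(12, 15)
print(f"Completion rate: 80%, 95% CI: {low:.0%}–{high:.0%}")
```

An interval like this shows why larger samples matter: with only 15 participants, an observed 80% completion rate is still consistent with a fairly wide range of true rates, so differences between two designs in A/B comparisons need to clear that margin before they can be called real improvements.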

Our researchers and designers collaborate on the research plan to ensure that the data we collect is valid. The sessions we conduct are archived digitally on a secure portal. Transcripts, video clips, and other types of raw data are available in addition to the formal reports prepared by our consultants.

Do you want to conduct an experience benchmark?

Say hello