5 Ways to Save on Moderated User Testing

Moderated user testing is key to making sure your digital product design delivers an intuitive, frictionless, and rewarding user experience. All too often, however, we come across companies that hesitate to conduct this kind of testing because they fear it will slow development or break their budget. That’s a risky proposition. Usability issues that come to light after launch can mean outsized costs: design and development rework, lost conversions, and a heavier customer service load.

Centralis uses a number of strategies to make these studies as quick and cost-effective as possible without compromising the quality of our user insights. Here are 5 key tips for your team:

1. Keep your sample size small

Unless the scope of the design you’re testing is large or you need to include very different user segments (e.g., doctors vs. patients) in your study, a small sample of 7 to 10 users is sufficient to reveal and diagnose most key usability issues. Limiting your sample saves money on recruiting, incentives paid to participants, and time spent in sessions. Remember, in diagnostic testing you’re not looking for statistical projections of how prevalent those issues are; you’re focused on finding, understanding, and resolving user pain points.
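
For a rough sense of why such a small sample goes so far, the well-known problem-discovery model from Nielsen and Landauer estimates the share of issues uncovered by n participants as 1 - (1 - p)^n, where p is the chance that any one participant encounters a given issue. The sketch below uses an assumed p of 0.31, a commonly cited figure from that research, purely for illustration:

```python
# Illustrative sketch only: the Nielsen & Landauer problem-discovery model,
# with an assumed per-participant detection probability p = 0.31
# (a commonly cited figure, not a Centralis benchmark).
p = 0.31

for n in (3, 5, 7, 10, 15):
    share_found = 1 - (1 - p) ** n
    print(f"{n:>2} participants -> ~{share_found:.0%} of issues uncovered")
```

Under that assumption, returns diminish sharply past about ten participants, which is why larger samples rarely pay for themselves in a purely diagnostic study.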

2. Go remote

By leveraging remote screen-sharing technology such as Zoom, you can speed up recruiting by opening your study to participants nationwide rather than limiting it to people in a single location. You’ll also avoid the costs of renting a research facility and traveling back and forth to it.

3. Focus on critical user tasks

Prioritizing the features and use cases to be tested saves money and time by limiting the scope of your study to the most important user flows. These may be new flows you’re currently working on, and they tend to be the areas of the interface that most affect your bottom line, such as browsing products, making purchases, completing applications, or booking appointments.

4. Use low-fidelity prototypes

You don’t need the entire interface to be functional to get feedback from users; in fact, waiting until the design is finished means missing an opportunity to learn and iterate. Tools such as Axure, Figma, InVision, or Adobe XD streamline the creation of your test stimuli and let you test a realistic user experience without the cost or time required to code a full alpha or beta HTML interface.

5. Streamline the final deliverable

While some studies may require comprehensive documentation of every single test finding with accompanying screenshots and annotations, we often find that when our clients have observed the test sessions, a thorough post-testing debrief paired with a lighter-weight written summary of major findings and recommendations is a more agile and cost-effective approach. After all, addressing these major findings will often mean making design changes that obviate many of the smaller design issues as well.

You may have already implemented some of these ideas in your team’s design process. If so, great! If not, we hope some of these strategies will be helpful. If you’d like to discuss how to put these ideas into practice in an upcoming project, we’d love to hear from you.