Discuss and set the metrics the test will use to measure the success of the design. Use the multi-select field below to choose the metrics.
If you need a reminder, the metrics are all explained further down in this task.
Successful task completion
This is one of the simpler metrics and is relevant in almost all tests. It indicates whether the scenario was successfully completed or not, since each scenario will have a goal and tasks to perform.
If you're looking for an easy (if a little simplified) way to say if the design works or not, successful task completion will give you that yes/no answer.
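If you record each participant's result as a simple pass/fail, the overall completion rate is a one-line calculation. Here's a quick sketch in Python (the participant IDs and results are made up):

```python
# Hypothetical completion results: True if the participant reached the
# scenario's goal, False if they didn't.
completions = {"P1": True, "P2": True, "P3": False, "P4": True, "P5": True}

completed = sum(completions.values())
completion_rate = 100 * completed / len(completions)
print(f"Task completion rate: {completion_rate:.0f}%")
```

Reporting the rate alongside the raw counts (4 of 5, here) keeps the number honest when the participant pool is small.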
Critical errors
Critical errors are mistakes which result in the task not being completed. This could be due to anything from the participant misinterpreting the task to taking the wrong route through your website.
These are important in most tests, as they expose the most glaring flaws in the design and in the usability test itself. If the participant understood the task but their workflow broke down, it's probably a critical error in the design. If they didn't understand the task in the first place, it's a critical error in the scenario.
Non-critical errors
Non-critical errors are exactly what they sound like - mistakes participants make that aren't too damaging. This encompasses any mistake made along the way, provided the task is eventually completed.
This includes big mistakes which are then corrected by the participant before continuing and small hiccups which don't completely derail them.
For example, they may open the wrong menu or navigate to the wrong page before finding the correct option to open.
Error-free rate
The error-free rate is the percentage of participants who complete the scenario without making any errors, whether critical or non-critical.
As you'd imagine, the larger your error-free rate, the more generally successful you could consider your design. After all, this rate indicates how often participants interacted with your design almost perfectly in line with your intentions.
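If you've tallied each participant's errors, computing the rate is straightforward. A minimal sketch in Python, using made-up error counts:

```python
# Hypothetical per-participant tallies: (critical errors, non-critical errors).
error_counts = {
    "P1": (0, 0),
    "P2": (0, 2),  # two non-critical errors, but recovered
    "P3": (1, 0),  # critical error - task not completed
    "P4": (0, 0),
    "P5": (0, 1),
}

# A participant only counts as error-free if they made no errors at all,
# critical or non-critical.
error_free = sum(
    1 for critical, non_critical in error_counts.values()
    if critical == 0 and non_critical == 0
)
error_free_rate = 100 * error_free / len(error_counts)
print(f"Error-free rate: {error_free_rate:.0f}%")
```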
Time on task
This one's pretty self-explanatory - it's the amount of time spent on each task. Along with metrics like error-free rate and critical errors, it's one of the most consistently useful ways to tell how intuitive your design is.
The longer it takes a person to complete the task, the less successful your design is in encouraging the action you want them to take (or the less clear the route is in general).
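When summarising time on task across participants, it's worth reporting the median as well as the mean, since one participant getting badly lost can skew the average. A small sketch with made-up timings:

```python
from statistics import mean, median

# Hypothetical time-on-task measurements, in seconds, for one task.
times = [42.0, 55.5, 38.2, 120.7, 47.9]

# One outlier (120.7s) pulls the mean well above the typical time;
# the median is more robust to that.
print(f"Mean time on task:   {mean(times):.1f}s")
print(f"Median time on task: {median(times):.1f}s")
```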
Subjective measures
Subjective measures are usually gathered from participant questionnaires at the end of the test. These ask about things like ease of use, how easy it was to find information, how satisfying the experience was, and so on.
While not as clear-cut as things like the time spent on each task, they're great for getting a sense of the emotional response of the audience to the design.
Likes, dislikes and recommendations
Likes, dislikes and recommendations are related to other subjective measures, except they capture the opinions of the audience rather than their emotional response. They're also gathered through participants answering questions at the end of the test.
Learning what participants liked, disliked, and would recommend isn't a hard-and-fast way to tell how to improve the design. However, it's never a bad thing to learn your target audience's opinions of the design, and their recommendations can provide inspiration when you're too close to the problem to see the solution yourself.