The majority of the work with testing a design comes in the form of the usability test plan. It's here that you need to decide how to find the strengths and weaknesses of your design without wasting time or money.

That's why this usability test plan template exists.

Covering everything from defining the scope of the test to the sessions that will be performed and the success metrics you'll measure, this process will let you create a reliable test to get as much information as you can on how your design should be improved.

Let's get started.

Arrange a planning meeting

Kick things off by arranging a date and time to hold a meeting to decide on the bulk of the usability test plan details. Once set, record the time and date using the form field below.

Ideally, the meeting should include everyone who is closely involved with or affected by the web design you're testing. Be it your own team or that of a client, this usability testing template works better if everyone understands the specifics of the test and what you need to learn from it.

The planning meeting:

Define the scope


Start off the meeting by defining the scope of the usability test with the form fields below. This includes the name of the website (or product) being tested, the version of the product for future reference, how much of the subject's content will be involved, and what navigation options will be available.

To be clear, the "Test target name" refers to the name of the website or product being tested, and the product version records what state the website or product was in at the time of the test, for future reference.

The navigation details should list the navigation elements involved in the test (eg, website menus, links to other sections of the site, and so on), while the content details need to indicate the content that will be present.

In other words, you're defining the scope of the test by stating exactly what will be involved, and thus what will not be involved. Participants may be able to navigate your knowledge base but you might disable navigation to your main site, for example.

Record your goals

Next you'll need to discuss what the goals of the test are with everyone in the meeting. Once these have been identified, record them in the form field below.

The goals will be core to creating the test scenario, and could be anything from "we want to know if our users know how to find the home button" to "we want to know if users can easily work out how to turn the product on".

If in doubt, think of things that have been changed in the most recent version of the product and consider what would be useful to know about its performance.

Identify concerns

Concerns and questions are the two things that affect usability testing just as much as your initial goals, so now it's time to get everyone to voice them. As with your goals, record these concerns in the form field below.

To clarify, these are concerns that you want to answer with the usability testing template, not concerns about performing the test itself.

For example, you could be worried that users won't understand how to find a vital page on your website or be able to use a new feature of your product without frustration.

There will be some overlap with your goals, so don't worry about repeating yourself. This is simply an alternative way to surface the core elements you'd benefit from testing.

Estimate participant numbers

Now you need to estimate the number of participants you'll need to perform an effective test. This will largely depend on the resources you have, the importance of the test, the size of the differences from your last test, and the nature of the test itself.

"If you want a single number, the answer is simple: test 5 users in a usability study. Testing with 5 people lets you find almost as many usability problems as you'd find using many more test participants." - Jakob Nielsen, How Many Test Users in a Usability Study?

If you're not sure how many people you want to use in the test, Jakob Nielsen has highlighted that a good number to shoot for (in standard usability testing) is five.

Decide on the location


Now your team needs to decide where the test will be performed. Most of the time it pays to be with the participants in person while they perform the test, but doing so remotely via screen sharing and video call technology is also a viable option.

Remember that you can always hire a room if you want to perform the test in person but don't have space where you work. Also, your participant pool will be much larger if you don't mind running the test remotely, as you can link up with anyone who has an internet connection.

Organize session details

Next you need to sort out the details for each session in the usability test.

"Session" is the term for the testing period with each participant. If you run the test for an hour with each person, the session length will be 60 minutes.

From your test details so far, judge how long you'll need for each test (typically between an hour and 90 minutes) and how long you'll need to reset the test environment after each participant. If you're performing the test remotely, this time will instead be spent collecting your notes, saving any recordings, and so on.

Decide on scenarios

Scenarios are the situations you'll test - the number and type of tasks participants will be asked to complete. For a focused test to take place, you now need to create the situations participants will be put in or the tasks they'll be asked to perform.

Record the scenarios using the form fields below. Aim for around 10 scenarios for desktop/laptop tests and around 8 for mobile/smartphone tests.

Identify required equipment


It's time to get technical, as you need to discuss with your team what equipment will be required for the usability test. As with other elements, record the required equipment using the form fields below.

Most usability tests will require you to provide a computer for each participant (or one which they can use after the previous person is finished). However, you'll also need something to take notes with, recording equipment (if you want to record the session) and so on.

It's also worth noting specifics of the technology, such as what programs will be involved, any browsers you'll need to use, the monitor dimensions and settings, and so on. Doing all of this will help you to reset the station to the same state after each test, helping to make them fair.

If the test is going to be performed remotely or the participant needs to bring something in themselves, make a note of what they will need in a similar manner by using the second form field.

Set test metrics

Discuss and set the metrics the test will use to measure the success of the design. Use the multi-select field below to choose the metrics.

If you need a reminder, the metrics are all explained further down in this task.

1. Successful task completion
2. Critical errors
3. Non-critical errors
4. Error-free rate
5. Time on task
6. Subjective measures
7. Likes, dislikes and recommendations

Successful task completion

This is one of the simpler metrics and is relevant in almost all tests. It indicates whether the scenario was successfully completed or not, since each scenario will have a goal and tasks to perform.

If you're looking for an easy (if a little simplified) way to say if the design works or not, successful task completion will give you that yes/no answer.

Critical errors

Critical errors are mistakes which result in the task not being completed. This could be due to anything from the participant misinterpreting the task to taking the wrong route through your website.

These are important in most tests, as they indicate the most glaring flaws in the design and usability test itself. If they understood the task but their workflow was flawed, it's probably a critical error for the design. If they didn't understand the task in the first place, it's a critical error in the scenario.

Non-critical errors

Non-critical errors are exactly what they sound like - mistakes which participants make that aren't too damaging. This encompasses any mistake made along the way, as long as the task is eventually completed.

This includes big mistakes which are then corrected by the participant before continuing and small hiccups which don't completely derail them.

For example, they may open the wrong menu or navigate to the wrong page before finding the correct option to open.

Error-free rate

The error-free rate is the percentage of participants who complete the scenario without making any errors, both critical and non-critical.

As you'd imagine, the larger your error-free rate, the more generally successful you could consider your design. After all, this rate indicates how often participants interacted with your design almost perfectly in line with your intentions.
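As a quick illustration, the error-free rate is simple to calculate from your session results. The data structure and field names below are made up for the example:

```python
# Hypothetical session results: error counts recorded per participant.
sessions = [
    {"participant": "P1", "critical_errors": 0, "non_critical_errors": 0},
    {"participant": "P2", "critical_errors": 0, "non_critical_errors": 2},
    {"participant": "P3", "critical_errors": 1, "non_critical_errors": 0},
    {"participant": "P4", "critical_errors": 0, "non_critical_errors": 0},
    {"participant": "P5", "critical_errors": 0, "non_critical_errors": 1},
]

def error_free_rate(sessions):
    """Percentage of participants who made no errors of either kind."""
    error_free = sum(
        1 for s in sessions
        if s["critical_errors"] == 0 and s["non_critical_errors"] == 0
    )
    return 100 * error_free / len(sessions)

print(f"Error-free rate: {error_free_rate(sessions):.0f}%")  # 2 of 5 -> 40%
```

With two of five participants completing their scenario without a single mistake, the error-free rate here works out to 40%.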

Time on task

This one's pretty self-explanatory - it's the amount of time spent on each task. Along with metrics like error-free rate and critical errors, it's one of the most consistently useful ways to tell how intuitive your design is.

The longer it takes a person to complete the task, the less successful your design is in encouraging the action you want them to take (or the less clear the route is in general).
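To sketch how you might summarize this metric, here's a minimal example (the timings are invented). The median is often worth reporting alongside the mean, since one slow participant can skew the average:

```python
import statistics

# Hypothetical time-on-task readings (in seconds) for one scenario.
times_on_task = [74, 91, 68, 142, 85]

mean_time = statistics.mean(times_on_task)      # pulled up by the 142s outlier
median_time = statistics.median(times_on_task)  # a more robust summary

print(f"Mean: {mean_time:.0f}s, median: {median_time:.0f}s")
```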

Subjective measures

Subjective measures are usually gathered from participant questionnaires at the end of the test. These usually ask questions about things like ease of use, how easy it was to find the information, whether it was satisfying, and so on.

While not as clear-cut as things like the time spent on each task, they're great for getting a sense of the emotional response of the audience to the design.

Likes, dislikes and recommendations

Likes, dislikes and recommendations are related to other subjective measures, except they capture the audience's opinions rather than their emotional response. They're also gathered through participants answering questions at the end of the test.

Learning what participants liked, disliked, and what recommendations they have isn't a hard-and-fast way to tell how to improve the design. However, it's never a bad thing to learn your target audience's opinions of the design, and recommendations can help to give inspiration on how to deal with issues when you're too close to the problem to see the solution yourself.

Record staff roles

Next you need to discuss and decide on which staff will be directly involved with the test and what roles they will perform. Once decided, record the staff and their roles during the test in the form field below.

As a general rule, it's good to have a few members of the usability team on hand to observe the test and make notes. Other team members can serve as backup note-takers for a second opinion, and the usability specialist will usually act as the primary host of the sessions.

Choose a test date

Now it's time to choose a date for your test to take place. Once it's been picked, record the date using the form field below.

There's no real secret here - just chat with your team, discuss when the required staff for the test are available, consider how long it will take to get set up, and then choose a date with those things in mind.

You also need to consider the number of sessions you've planned and how long each will take. The more sessions, the longer you'll need the location for.

Arranging the test:

Confirm the location


After the meeting is adjourned the first thing you need to do is confirm the location of the test.

If the test will be on location you'll need to book it for the date previously decided. Remote tests will still need you to confirm that each staff member will be available to link up with the session host.

Proposed test location: {{form.Proposed_test_location}}

Arrange an alternate location

Since the original location was unavailable you now need to book an alternate location which provides what you need to perform the usability test.

Record the alternate location in the form field below after it's been booked.

Recruit participants

Now it's time to recruit your test participants. Record the participants using the form field below.

Participants can come from anywhere, and the best source mostly depends on what you're looking to get out of the test. If you're looking for a completely fresh perspective on your design, you'll want to advertise the test yourself through social media, local papers, and so on.

Recruiters are another way to get a fresh viewpoint, although you'll have to pay for their services in connecting you with suitable candidates. It does, however, give you the most control over the test subjects, since you can request people fitting specific demographics (such as those of your target audience).

If push comes to shove you can also use participants from your internal staff. The only problem with this is that they'll likely have some previous experience with your design, so the results you get won't be of a new user approaching the thing you're testing.

No matter where they come from, participants need to have no experience with the version of the design you're testing and should represent your target audience. For example, they should be roughly the same age as your target audience, share the same interests and desires, and so on.

Arrange directions for participants

Once you have your participants you'll need to set out directions for them to follow in order to get to the test. Record these directions in the form field below for future reference.

If the test is on location, the directions will just be to the building/room where it's taking place. If the test is remote, you'll need to specify how to link up with your team, such as through a specific meeting URL.

Create a schedule for the day


Next you need to create a schedule to follow for the day of the test and record it in the form field below.

This schedule should be a rough timeline to follow, covering which participants will perform the test at a given time. Remember to leave long enough for participants to complete your desired scenarios and have time between each slot to prepare for the next subject.
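As a rough sketch of how such a schedule falls out of your session length and reset time (all names, dates, and durations below are placeholders):

```python
from datetime import datetime, timedelta

# Placeholder inputs: five participants, 60-minute sessions, 15-minute resets.
participants = ["P1", "P2", "P3", "P4", "P5"]
session_length = timedelta(minutes=60)
reset_buffer = timedelta(minutes=15)
slot_start = datetime(2024, 6, 3, 9, 0)  # arbitrary example date, 9:00 start

for name in participants:
    slot_end = slot_start + session_length
    print(f"{name}: {slot_start:%H:%M} - {slot_end:%H:%M}")
    slot_start = slot_end + reset_buffer  # time to reset the test environment
```

With these inputs, the first participant runs 09:00-10:00, the second 10:15-11:15, and so on, each slot leaving 15 minutes to reset for the next subject.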

Let participants know when they're due

Now that you know what time each participant will be taking part in the test you need to get in contact and let them know when they will be due.

You don't need to tell them the whole schedule - just let them know what their time slot is for the day.

Prepare the location

The arrangements have been made - now it's time to prepare the location of the test so that everything's ready for your participants.

Obviously, this likely can't be done until the day of the test, so you might have to wait a while to complete this task. When the day comes, arrive early enough that you can set up all of the necessary equipment and technology with time to spare before the first participant is due.

Check your equipment

The final step before the test itself is to double check that your equipment is working properly. This includes everything from the lights in the room to the computers you're using and whatever you're taking notes on or recording the session with.

