Guide to Planning and Conducting Usability Tests
This document is meant to provide a foundation for your next usability test. The material here borrows heavily from Jeffrey Rubin's Handbook of Usability Testing.
Overview of the basic usability test
Usability tests include the following elements, each of which will be addressed in this document:
- Develop problem statements or test objectives.
- Use a sample of end users (which may or may not be selected randomly).
- Represent the actual environment the users will face.
- Observe end users' interaction with that environment. The test monitor interrogates and probes end users for feedback.
- Collect qualitative and quantitative performance and preference measurements.
- Recommend improvements to the design of the environment.
Rubin, pp. 29-30
Determine which type of usability test to implement
Depending on where you are in the design process, there are three types of usability tests to choose from. These are the exploratory, assessment, and validation tests. A fourth type, the comparison test, can be used at any point in the design life cycle.
Exploratory Test: The objective is to explore the user's mental model of the task you're trying to facilitate.
- When to use: This type of test is usually conducted during the initial phases of the design life cycle, when the designers are still grappling with what functionality to include in the final product.
- Objective: Evaluates whether the user can distinguish between the functional elements of an interface; whether the user values the functions presented; whether the overall structure enables a "walk up and use" product. The test seeks to establish the intuitiveness of the implementation.
- Methodology: This test involves a high degree of interaction between the monitor and the subject. The purpose of the test is to identify the points of confusion encountered by the user and then to "walk through" what would resolve them.
- Test Example: The user is shown an example of a mundane situation, e.g. a screenshot of a user-account management tool, and then asked to talk through their assumptions, expectations, and wishes.
Assessment Test: This is the most common test conducted.
- When to use: Normally conducted early or midway through the design of the product.
- Objective: This test assumes that the basic conceptual models are decided and determines how well they've been implemented. The test seeks to measure the effectiveness of the implementation.
- Methodology: The user performs tasks while the monitor largely stays out of the way. Quantitative measurements are gathered.
- Test Example: User is asked to accomplish a book renewal.
Validation Test: This test measures the product against established benchmark standards.
- When to use: This test normally takes place close to the release of the product.
- Objective: The purpose of this test is to establish that the product performance meets or exceeds benchmark standards in time and effort required to accomplish a task. These standards are arrived at either through previous tests or educated guesses. This test also measures how well all the pieces of the design work together.
- Methodology: Benchmarks are first established for the tasks. The tasks are then given to users and the resulting completion efforts monitored. The resulting quantitative data is analyzed and timings over benchmark are identified as possible problem areas.
- Test Example: "Access your library account, identify if any books are due soon and then renew them if they are. Benchmark = 1 minute"
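Once completion timings are collected, the comparison against benchmarks described above can be automated. The following is a minimal sketch, not from Rubin; the task names, benchmark values, and observed timings are all hypothetical examples.

```python
# Sketch: flag tasks whose mean completion time exceeds the benchmark.
# Task names, benchmarks, and timings are hypothetical examples.
from statistics import mean

# Benchmarks in seconds, e.g. "renew books" = 60 (the 1-minute example above)
benchmarks = {"renew books": 60, "find account page": 30}

# Observed completion times per participant, in seconds
observations = {
    "renew books": [45, 75, 90, 50],
    "find account page": [20, 25, 28],
}

def problem_areas(benchmarks, observations):
    """Return (task, mean time, benchmark) for tasks over benchmark."""
    flagged = []
    for task, times in observations.items():
        avg = mean(times)
        if avg > benchmarks[task]:
            flagged.append((task, avg, benchmarks[task]))
    return flagged

for task, avg, bench in problem_areas(benchmarks, observations):
    print(f"{task}: mean {avg:.0f}s exceeds benchmark {bench}s")
```

In practice the timings would come from your recording software or a stopwatch sheet; the structure of the comparison stays the same.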
Comparison Test: This test compares users' reactions to multiple examples of a tool or implementation.
- When to use: This test can be used at any stage in the design process to compare radically different designs or implementations against each other.
- Objective: This test is used to determine which design is easiest to use and what the relative advantages and disadvantages of each design are.
- Methodology: This test can be an exploratory test where multiple designs are compared qualitatively. The usual result is an improved product which combines the best of many different ideas. The best results normally come from comparing examples of wildly differing implementations.
- Test Example: "Locate a tutorial on finding scholarly journals on the following library websites."
Rubin, pp. 30-42
Develop a test plan
The test plan serves as a blueprint for the test. It also provides a benchmark to evaluate the results of the test. The test plan contains the following components:
- Problem Statement/Test Objectives
- User Profile
- Method (test design)
- Task List
- Test Environment/Equipment
- Test Monitor Role
- Evaluation Measures (data to be collected)
Rubin, p. 83
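The evaluation-measures component of the plan typically covers quantitative data such as task success, time on task, and error counts. As a minimal sketch of how such measures might be tallied from session logs (the record layout and sample data are hypothetical, not a format Rubin prescribes):

```python
# Sketch: summarize common quantitative evaluation measures per task.
# The record layout and sample data below are hypothetical.
from statistics import mean

# One record per participant per task: (task, completed?, seconds, errors)
results = [
    ("renew books", True, 52, 0),
    ("renew books", False, 120, 3),
    ("renew books", True, 70, 1),
]

def summarize(records):
    """Compute success rate, mean completion time, and mean errors per task."""
    by_task = {}
    for task, done, secs, errs in records:
        by_task.setdefault(task, []).append((done, secs, errs))
    summary = {}
    for task, rows in by_task.items():
        summary[task] = {
            "success_rate": sum(d for d, _, _ in rows) / len(rows),
            "mean_time_s": mean(s for _, s, _ in rows),
            "mean_errors": mean(e for _, _, e in rows),
        }
    return summary

print(summarize(results))
```

Deciding on these measures before the test keeps data collection consistent across sessions and makes the results easier to compare against your benchmarks.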
While building the test plan, you've already given some consideration to who the target audience is. You'll also need to determine how many participants you would like in the test.
The usability test facilities are located in Suzzallo library. There you will find the following resources:
- A computer station is provided with a full suite of software. Additionally the desktop recording software Morae has been loaded onto the machine. The computer includes a clip-on microphone and speakers.
- Furniture: There are chairs available for one participant and the test monitor. There is also room for one or two observers, although with two observers things will be tight and the participant will probably be uncomfortable.
- Softer lighting and a house plant have been provided to make the participant feel more at ease.
The test monitor is a crucial factor in the success of the test. The role of the monitor is to lead the participant through the test while being careful to not skew the results in any way. To do this, the test monitor has to keep the following principles in mind at all times:
- Be a good listener
- Build a rapport with the participant
- Be flexible and open to changes in the usability plan
- Maintain a long attention span
- Try not to lead the participant--they need to find the answer themselves!
- Try not to act too knowledgeable--playing dumb can raise some interesting results.
- Try not to jump to conclusions--strive to pick out patterns while not allowing that knowledge to color your interaction with the participant.
Preparing for your test involves the following considerations:
- The Screening Questionnaire: provides a way to qualify and select the participants for the test. This can be a simple form asking for such things as year in school, number of hours spent on the information gateway, etc. Once a good-sized pool of candidates is gathered, the results from this questionnaire are used to select the most appropriate participants.
- The Orientation Script: This is a communications tool intended to put the participant at ease, explain the purpose of the test, and to reassure the participant that they are not being tested, the product is. It's important to read the script verbatim to each participant. This ensures that each participant is receiving the same information and is not swayed at all by an off-hand remark inserted by the monitor. This also ensures that all the points you need to make are heard by the participant, and that you don't forget any.
- The Background / Pre-test Questionnaire: This is used to gather historical information in order to better understand the results of the test. The questionnaire provides a background synopsis of each participant that can be used when analyzing the final results of the test and sharing them with others.
- The Consent Form: The purpose of this form is to get permission from the participant to record them during the test. This form spells out exactly what will and will not happen to the recordings, along with an explanation of the participant's rights.
- The Task Scenarios: These are the tasks that the participant will be asked to complete during the test. Each task should include a realistic scenario along with the participant's motive to perform that task. The tasks should also be written in a way that avoids cueing the participant about what you're trying to find out.
- The Post-test Questionnaire and Debriefing Interview: This questionnaire's main purpose is to gather conclusions and recommendations from the participant. Base the questions on the tasks in order to discover the participant's preferences and feelings about the product. This is also a good time to ask any probing questions that came up during the test itself. Try to find out what motivated the participant during the test, and ask if there is anything else they would like you to know.
Rubin, pp. 141-212
Conducting the Test
Guidelines for monitoring the test:
- Monitor the session impartially--don't react negatively to mistakes on the part of the participant
- Be aware that your body language and voice inflections can skew results
- Don't rescue participants when they get stuck
- Make sure the participants are actually finished with a task before moving on
- Keep the session loose
- Do have the participants use the think-aloud technique
- Use probing questions when appropriate to learn more about the participant's decisions