The first step to moving forward with your decision to conduct a usability test is thinking through how you want to design it. In this section, you will find information and resources about the basics for developing a user test before pursuing coordination and recruitment activities. This includes forming research questions, developing measurable goals, selecting a methodology, constructing tasks, determining how many test participants are needed, and building a timeline that encompasses the beginning and completion of your study.
Jump to:
- Research Questions
- Measurable Goals
- User Testing Methodologies
- Task Scenarios
- Test Participants Needed
- Make a Timeline
Research Questions
Before starting any usability test, you need:
- A clear idea of what you’re testing
- A reason why you’re testing the system with users
Usability testing research questions should focus on figuring out how users interact with a system, not how the user should adapt to a system’s features. A good approach is to create a research question for what you want to learn from users before jumping into making goals and creating tasks.
If you have a hard time identifying users, the Alliance has library and archive user profiles for you to use on the Persona Resources page, and links to other resources to help you identify an audience on the FAQ/Resources page.
Measurable Goals
After you know what you want to learn from users, the next step is to create measurable goals tied to that learning. Be aware of which system level is being assessed: the overall system, a specific user scenario that involves navigating and interacting with several parts of the system, or a single page or record. Knowing the level will help you shape your tasks because it determines how many parts of your system users will interact with.
Make Goals
Goals should be specific, not broad. Broad goals do not let you focus on what you want to learn. For example, “Can users log into the library catalog easily?” is a broad goal, while “Do users notice the button for logging into the library catalog?” is a specific question that emphasizes something that can be assessed.
Make the Goals Measurable
“Do users notice the button for logging into the library catalog?” is a measurable goal because it can provide quantitative and qualitative metrics for questions like these (a small sketch for tallying these metrics follows the list):
- How much time does it take a user to notice the button? (Speed)
- How many attempts does it take before a user notices the button? (Accuracy)
- What is the overall success in completing the task? (Success)
- How did the user feel while looking for the login button? (Satisfaction)
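To make these metrics concrete, here is a minimal sketch of how speed, accuracy, success, and satisfaction could be tallied from session notes. The record layout, field names, and numbers are hypothetical; substitute whatever you actually capture during your sessions.

```python
from statistics import mean

# Hypothetical session records for the "notice the login button" task.
# Field names and values are illustrative only.
sessions = [
    {"participant": "P1", "seconds_to_notice": 12.4, "attempts": 1, "completed": True,  "satisfaction": 4},
    {"participant": "P2", "seconds_to_notice": 38.0, "attempts": 3, "completed": True,  "satisfaction": 2},
    {"participant": "P3", "seconds_to_notice": None, "attempts": 5, "completed": False, "satisfaction": 1},
]

# Only participants who eventually noticed the button have a time recorded.
timed = [s["seconds_to_notice"] for s in sessions if s["seconds_to_notice"] is not None]

print(f"Speed: mean time to notice = {mean(timed):.1f} s")
print(f"Accuracy: mean attempts before noticing = {mean(s['attempts'] for s in sessions):.1f}")
print(f"Success: {sum(s['completed'] for s in sessions) / len(sessions):.0%} completed the task")
print(f"Satisfaction: mean rating = {mean(s['satisfaction'] for s in sessions):.1f} / 5")
```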
Key Points
Things to remember about user testing goals
- Describe a complete activity
- Be specific and measurable
- Describe what users want to do, not how they should do it
Resources
How do I create testing goals?
How to set usability goals for user testing (UX Passion)
Learn how to create usability goals.
How to choose a user research method (UX Planet)
Starts with testing goals and describes how to choose testing methods based on those goals.
How do I measure a task question outcome?
Usability Metrics (Nielsen Norman)
Describes usability metrics, which are quantitative data gathered over time to test iterations of website design.
User Testing Methodologies
There are a number of user testing methodologies (attitudinal, behavioral, qualitative, and quantitative) that can be used individually or together in a study. These methods will help you discover what users want and need, and how you can learn from them in order to improve your systems. The goals you make will help identify which one(s) to use.
Questions to consider
- What do people need? (Behavioral)
- What do people want? (Attitudinal)
- Can they use it? (Behavioral)
- What are users saying? (Attitudinal)
- What are people doing? (Behavioral)
- Why do they think, say, or do something? (Qualitative)
- How often or how many times do users do, think, or say something? (Quantitative)
The Methods
When thinking about what methods to use for user research, know that there are more than 15 different types to pick from, and all have benefits and disadvantages. Six common approaches are highlighted here; whichever ones you use should be chosen based on how well they help you achieve your measurable goals.
- Interviews and Focus Groups
- Surveys and Questionnaires
- A/B Testing
- Rapid Prototyping
- Usability Test
- Card Sorting and Tree Testing
Resources
How can I conduct a user test?
5 Steps to Usability Testing (Yale University)
If you want a usable site, you have to test. Learn how to conduct a usability test in 5 steps.
How to Conduct Usability Testing from Start to Finish (UX Mastery)
Checklist for Planning Usability Studies (Nielsen Norman Group)
Planning a user test? Follow these 9 steps to make sure you are prepared.
How do I know what is the best test to use?
When to Use Which User-Experience Research Methods (Nielsen Norman Group)
Gives brief summaries of 20 UX methods and explains how each method tests UX differently. Also helpful for determining which testing methods are complementary.
7 Great, Tried and Tested UX Research Techniques (Interaction Design Foundation)
General summary of some of the most common testing methods and the benefits of each.
How to choose a user research method (UX Planet)
Starts with testing goals and describes how to choose testing methods based on those goals.
What types of user tests exist?
Quantitative vs. Qualitative Usability Testing (Nielsen Norman)
An overview of when to use qualitative or quantitative methods for user testing. A helpful table breaks down the differences between the two approaches and how they relate to an iterative design cycle.
The Encyclopedia of Human-Computer Interaction, 2nd Ed.
From The Encyclopedia of Human-Computer Interaction, 2nd Ed., this in-depth overview covers the card sorting user testing methodology. Card sorting is an activity involving the grouping and/or naming of objects or concepts. This resource gives a practical example overview, the history of card sorting, the best approaches for qualitative and quantitative outcomes, and suggestions on how to coordinate a card sort activity.
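If you end up running a quantitative card sort, the rough sketch below shows one common starting point for analysis: counting how often participants grouped each pair of cards together. The card names and groupings are invented for illustration and are not taken from the resource above.

```python
from itertools import combinations
from collections import Counter

# Hypothetical open card-sort results: each participant's groupings of cards.
sorts = [
    [["Hours", "Directions"], ["Databases", "E-journals", "Catalog"]],
    [["Hours", "Directions", "Contact Us"], ["Databases", "Catalog"], ["E-journals"]],
    [["Hours", "Contact Us"], ["Databases", "E-journals"], ["Catalog", "Directions"]],
]

# Count how many participants placed each pair of cards in the same group.
co_occurrence = Counter()
for participant in sorts:
    for group in participant:
        for pair in combinations(sorted(group), 2):
            co_occurrence[pair] += 1

for (a, b), count in co_occurrence.most_common():
    print(f"{a} + {b}: grouped together by {count} of {len(sorts)} participants")
```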
A Primer on A/B Testing (A List Apart)
“In an A/B test, you compare two versions of a page element for a length of time to see which performs better. Users will see one version or the other, and you’ll measure conversions from each set of users. A/B tests help designers compare content such as different headlines, call to action text, or length of body copy. Design and style choices can be tested, too; for example, you could test where to place a sign-in button or how big it should be. A/B tests can even help you measure changes in functionality, such as how and when error messages are shown.” In this A List Apart article, the author covers what A/B testing is, how to decide what to test, and suggestions on how to implement the test. This is a good basic overview of the A/B testing method.
5 Steps to Quick-Start A/B Testing (UX Booth)
“Sometimes called split testing, [A/B testing] is a method for comparing two versions of something to determine which one is more successful.” Rather than relying on team members’ opinions, A/B testing can provide data to make a choice between two options. The article describes five big, broad steps: identify a goal, form a hypothesis, design and run a test, analyze the results, and implement. At the end, the article provides a list of more detailed guides, tools, and case studies.
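To illustrate the comparison step these articles describe, here is a minimal, hypothetical sketch that computes conversion rates for two versions and a simple two-proportion z-test. The visitor and conversion counts are made up, and in practice you may prefer a dedicated A/B testing tool or statistics library over this hand-rolled check.

```python
from math import sqrt, erf

# Hypothetical results: visitors who saw each version and how many "converted"
# (e.g. clicked the sign-in button being tested).
a_visitors, a_conversions = 480, 62
b_visitors, b_conversions = 495, 88

p_a = a_conversions / a_visitors
p_b = b_conversions / b_visitors

# Two-proportion z-test with a pooled standard error.
p_pool = (a_conversions + b_conversions) / (a_visitors + b_visitors)
se = sqrt(p_pool * (1 - p_pool) * (1 / a_visitors + 1 / b_visitors))
z = (p_b - p_a) / se
# Two-sided p-value from the normal CDF.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"Version A: {p_a:.1%} conversion, Version B: {p_b:.1%} conversion")
print(f"z = {z:.2f}, two-sided p-value = {p_value:.3f}")
```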
Task Scenarios
Task scenarios are used to determine what steps are needed to complete a goal. They describe the activity and its context in enough detail for users to complete the goal, without directing the user on how to complete the task. (A small drafting sketch follows the list below.)
Task scenarios should:
- Give context so users can behave as closely as possible to how they would normally complete the task
- Provide details about what the users should know in order to complete a task
- Not be prescriptive to the actions you want a user to follow
- Always have a path to a solution and goal completion
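As one lightweight way to keep these criteria in view while drafting, the hypothetical sketch below stores a task scenario next to its goal and a quick self-review checklist. The scenario text and field names are illustrative only and should be adapted to your own system and goals.

```python
# A hypothetical structure for drafting and reviewing a task scenario.
scenario = {
    "goal": "Do users notice the button for logging into the library catalog?",
    "scenario": (
        "You want to renew a book you checked out last month. "
        "Starting from the library home page, renew the book."
    ),
    # Quick self-review against the criteria listed above.
    "checklist": {
        "gives realistic context": True,
        "provides the details a user needs to proceed": True,
        "avoids prescribing specific clicks or menus": True,
        "has a reachable path to completion": True,
    },
}

failed = [item for item, ok in scenario["checklist"].items() if not ok]
print("Scenario ready" if not failed else f"Revise: {', '.join(failed)}")
```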
Resources
How do I make a task and scenario question?
Writing Tasks for Quantitative and Qualitative Usability Studies (Nielsen Norman)
All usability studies involve asking participants to perform tasks, but the correct way to write those tasks depends on the methodology you’re using. Good quantitative tasks are concrete and focused, while good qualitative tasks are open-ended, flexible, and exploratory.
From Research Goals to Usability-Testing Scenarios: A 7-Step Method (Nielsen Norman)
Describes 7 steps to turn testing goals into specific tasks and user scenarios.
Write Better Qualitative Usability Tasks: Top 10 Mistakes to Avoid (Nielsen Norman)
10 mistakes to avoid when writing tasks for a usability study.
How do I measure a task question outcome?
Usability Metrics (Nielsen Norman)
Describes usability metrics, which are quantitative data gathered over time to test iterations of website design.
Where can I find templates for testing questions?
Usability Testing Questions (WAI)
Sample questions, tasks, and post-test survey questions from W3C.
Templates and Downloadable Documents (Usability.gov)
Templates and documents for many aspects of usability testing.
Test Participants Needed
Ideally, you need five participants to run a user test. Note, however, that if you are testing several different user groups, e.g. undergraduates, research faculty, etc., you need five people from each group.
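For a sense of where the five-user guideline comes from, the sketch below evaluates the commonly cited problem-discovery model, found(n) = 1 − (1 − L)^n, where L is the share of problems a single participant uncovers (about 31% in the studies behind the guideline). Treat both the formula and the 31% figure as rules of thumb rather than guarantees; your own rate may differ.

```python
# Expected share of usability problems found with n test participants,
# using the rule-of-thumb model found(n) = 1 - (1 - L)^n.
# L is the chance a single participant uncovers any given problem;
# 0.31 is a commonly cited average, not a universal constant.
L = 0.31

for n in range(1, 11):
    found = 1 - (1 - L) ** n
    print(f"{n} participants: ~{found:.0%} of problems found")
```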
Resources
Who are the testing participants?
Persona Method
From The Encyclopedia of Human-Computer Interaction, 2nd Ed., this in-depth overview covers the user persona method and how personas are used in design and development processes. The author breaks down different aspects of persona creation, such as goal-directed and role-based personas, and explains why user stories, scenarios, and use cases are crucial for design success.
Involving Users in Web Projects for Better, Easier Accessibility (W3C)
Emphasizes the importance of users engaging and being involved with your design and development processes. The resource places accessibility at the center of user engagement with your process. It also highlights the importance of combining user involvement with standards for web application development.
How many test participants do I need for valid results?
How Many Testers in a Usability Study? (Nielsen Norman)
Explains why less is sometimes more when choosing how many people to include in user testing.
Why You Only Need to Test with Five Users (Nielsen Norman)
A pithy explanation from the Nielsen Norman Group of why more tests with fewer users give the best results for your testing effort.
How should I recruit?
How to find great participants for your user study (Google Ventures)
Advice from a Google Ventures UX researcher with how-to information for recruitment. Includes a worksheet for devising a user screening tool.
Recruiting Test Participants for Usability Studies (Nielsen Norman)
Although this Nielsen Norman page is geared mainly toward hiring an outside recruiter, with information on probable costs, it also includes a link to a comprehensive report on recruitment.
What are ethical considerations for user testing?
With Measurable Usability Goals – We All Score (Usability.gov)
Explains how to set goals that are precise, measurable, and task-focused, and includes a template for documenting goals.
Decision Frames: How Cognitive Biases Affect UX Practitioners (Nielsen Norman)
How a fact is reported makes a big difference in how a report's consumers perceive it; this page explains how to be aware of framing bias and how to avoid over-influencing decision making.
Markless, S., & Streatfield, D. (2013). “Thinking about evidence.” Evaluating the impact of your library (2nd ed.). London: Facet Publishing.
Chapter 8 section 8.4, “Ethical Evidence-gathering.”
Make a Timeline
Your timeline will depend on the scope of testing, so make sure to develop it before beginning. Consider the following questions during your planning stage to help you establish project milestones throughout your process (a rough back-planning sketch follows the list):
- What time of year are your users available?
- How long will it take to design the test?
- How long will it take to perform testing?
- When will technologies be available for testing?
- When should all logistics be coordinated and finalized?
- When will users receive their incentive?
- When should results be reported?
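One way to rough out such a timeline is to work backward from your reporting deadline. The sketch below does exactly that; the milestone names, durations, and dates are placeholders, not recommendations.

```python
from datetime import date, timedelta

# Work backward from a hypothetical reporting deadline.
# Milestone names and durations are placeholders, not recommendations.
report_due = date(2025, 5, 30)
milestones = [
    ("Design tasks, scenarios, and measurable goals", 10),
    ("Coordinate logistics, recruitment, and technology availability", 10),
    ("Run test sessions and distribute incentives", 5),
    ("Analyze results and write the report", 10),
]

# Assign each milestone a window, latest first, then print in calendar order.
schedule = []
end = report_due
for name, days in reversed(milestones):
    start = end - timedelta(days=days)
    schedule.append((start, end, name))
    end = start

for start, end, name in reversed(schedule):
    print(f"{start} to {end}: {name}")
```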