

The Pluralistic Usability Walk-Through Method

S. Riihiaho
Helsinki University of Technology
P.O. Box 5400, FIN-02015 HUT
sirpa.riihiaho@hut.fi

Abstract

Pluralistic usability walkthrough is a usability evaluation method that brings users and system designers together to evaluate new design ideas. The method does not require a working prototype, so the designers get early performance and satisfaction data straight from the users before any functional prototype is available. In our experience, pluralistic usability walkthrough is a very effective and useful method. With little effort, the system designers get valuable information about the users' tasks in addition to comments on the design ideas.

1 When is the system ready enough for usability evaluation?

It is difficult to find the moment in a system development process when users are already able to comment on the design ideas and the system designers are still willing and able to make changes that reflect those comments. All too often, users are asked for their comments only when the system is almost ready. At that point, modifications are hard and expensive to make.

Usability evaluation with users is recommended already in the early phases of system development. Still, system designers often seem reluctant to make paper prototypes and to hand the system over for user testing before it is finished. A method is therefore needed that motivates designers to evaluate their designs early enough.

2 Pluralistic usability walkthrough method

Pluralistic usability walkthrough (Bias 1994) is a usability evaluation method that brings representative users and system designers together into a design session to discuss new ideas. The discussion is based on tasks that the participants try to perform with the help of hardcopy panels of the system. The participants get a set of hardcopies of the dialogues that they need to perform the given tasks.
Documentation or help functions are rarely available at this point, so the system designers usually serve as "living documentation" and answer the questions that users indicate they would try to solve with the help of the system documentation. In this way, the users are able to carry on with their tasks and the designers get valuable hints for their documentation.

Bias (1994) gives five defining characteristics of the pluralistic usability walkthrough:

1. The method includes three types of participants in the same walkthrough session: users, system designers and usability experts.
2. The system is presented with hardcopy panels, and the panels are presented in the same order as they would appear in the system.
3. All participants take the role of a user.
4. The participants write down the actions they would take to perform the given tasks.
5. The group discusses the solutions they have reached. The administrator first presents the correct answer; then the users describe their solutions, and only after that do the designers and usability experts offer their opinions.

The users in a pluralistic usability walkthrough session are representative users matching the system audience descriptions. The system designers may be architects, coders or writers. A usability expert administers the session. The administrator's role is to make sure that the designers' attitude to the users' comments remains positive. If the system designers try to explain away any problems, the users' willingness to give comments soon vanishes.

According to Bias (1994), the pluralistic usability walkthrough method provides reliable data on a particular user interface panel in much the same way as traditional usability testing. The efficiency of the system and the flow and navigation throughout the interface, on the other hand, cannot be reliably evaluated with this method. Compared to usability testing, pluralistic usability walkthrough is better at revealing uncertain decisions. In usability tests, these "lucky guesses" might go unnoticed, but in pluralistic usability walkthrough sessions the users can easily report that although they had the right solution, they were not sure about it.

3 Modifications to the pluralistic usability walkthrough method

We have made some major and minor modifications to the pluralistic usability walkthrough method in our experiments at Helsinki University of Technology. The most important change was to keep user testing and inspections separate from each other. In the procedure that Bias (1994) presents, the usability experts conduct usability inspections in the sessions while trying to perform the given tasks. We wanted to conduct inspections and user testing separately for two main reasons. Firstly, if users are present in an evaluation session, we want them to feel important and to be the focus of the session. Secondly, both the users' and the usability experts' time is saved if the inspections are conducted separately. Pluralistic usability walkthrough sessions usually last longer than a pure inspection session, so the sessions are less effective for the experts. Moreover, if the experts comment extensively on the system during the sessions, the users may feel left out and feel that their time is being wasted listening to comments they perhaps do not even understand.

The minor changes we made are mostly related to the separation of inspections and user testing. Table 1 summarises our modifications. In our version, the sessions are very similar to usability testing in the sense that users and their comments are the focus of the session. All the other participants are already familiar with the system and do not need to search for a solution to the tasks, only to remind themselves of the solution. The usability experts' role is to administer and observe the session, and the designers' role is to provide enough information about the system for the users to be able to comment on it.

In a traditional usability test, we have only one administrator in a test session. In a pluralistic usability walkthrough session, we usually have two administrators with slightly different roles: the main administrator concentrates on the test tasks and the questions prepared beforehand, and the other administrator concentrates on the conversations and asks ad hoc questions if interesting issues arise.

Table 1: Our modifications to the pluralistic usability walkthrough method.

  Original version (Bias 1994): User testing and usability inspections are combined in the same session.
  Our modification: Inspections are conducted separately, so user testing is the sole issue in the session.

  Original version: Most of the participants are new to the system; they all take the users' role and try to perform the tasks.
  Our modification: Only the users are new to the system, so the other participants just go through the tasks as a reminder of the necessary steps.

  Original version: One administrator conducts the session.
  Our modification: A second administrator supports the main administrator by asking further questions.

  Original version: Only one path through the tasks is available in the hardcopy panels.
  Our modification: Several paths through the tasks are available, so that the users can select a suitable one.

  Original version: The administrator announces the right answer for the task before the conversation takes place.
  Our modification: The users present their solutions from scratch, and finally the designers say which solutions the system supports.

4 The walkthrough sessions

In our pluralistic usability walkthrough sessions, there are two or three users, one or two system designers and two usability experts present. One usability expert administers the session and the other supports the main administrator by asking further questions and by observing the users' reactions. In our experiments, the system designers and the usability experts are already familiar with the system, so only the users represent novice users. Still, we ask all the participants to take the role of the user to ensure a user-centred attitude in the session. This also encourages the designers to remind themselves of the steps necessary to perform the tasks.

4.1 Settings

We record the sessions with one or two video cameras. If we have a working prototype or demonstrations available in the session, the administrator may use it to give an overview of the general interface style. We try to place the computers out of the designers' reach, so that the designers are not tempted to demonstrate new features every now and then, as has happened once (Figure 1).

Figure 1: The settings in a pluralistic usability walkthrough session. (The diagram labels the users and the main administrator, the second administrator, two product designers, a computer, a whiteboard, a camera, an overhead projector and a screen.)

4.2 Tasks and material

At the beginning of the session, we explain the goal and procedure of the session and describe the context of use. Then we give the first task and the related hardcopy panels. As in usability testing, we start the session with an easy task to make the participants familiar with the procedure and to help them relax. We ask the participants to write down on the panels all the actions they would take. For example, they may circle the buttons they would push or the menus they would select, and explain their actions. If the dialogues to be studied include selection lists, we present the lists on the same sheet of paper as the dialogue. The system menu is usually kept handy on a separate paper or on a slide that is visible during the whole session.

In the procedure that Bias (1994) presents, only one linear path through the task is presented in the hardcopy panels. In our experiments, we try to offer various paths and let the users select one suitable for them. This means that the participants sometimes get quite a big pile of hardcopy panels, and the administrators must help them find the right panels according to their actions.

4.3 Conversation after each task

After all the participants have written down their solutions, the main administrator starts the conversation. Unlike in the procedure that Bias (1994) presents, our administrator does not present any "right" answer at the beginning. Instead, we want the users to present their solutions from scratch. If there is a silent user or a novice user in the group, the administrator usually starts the conversation with that user to ensure that the expert users or very talkative users do not dominate the conversation. Only after all the users have commented on the task are the system designers allowed to say which solution the system supports. The designers usually suggest some new ideas straight away based on the users' comments. All the participants are welcome to comment on these ideas and to generate new ones. To encourage new ideas, we sometimes draw a set of user-interface elements on post-it notes and ask the users to construct a panel that would support their work as well as possible. When the conversation fades out, we collect the panels and move on to the next task.

5 Application to an elevator monitoring system

We conducted a pluralistic usability walkthrough session for the first time when we were evaluating the usability of an elevator monitoring and command system. The system was designed to monitor and control the operation of an elevator and escalator bank, for example in a department store. The system was aimed at caretakers working in the department stores, and at elevator service and maintenance men.

We had been planning to conduct a traditional usability test of the system. We had already thought through the test tasks, but the system was not ready enough to be transferred, and it seemed that it was not going to be ready in time for the tests. Therefore, we decided to conduct a walkthrough with paper prototypes. Since we had not been able to prepare ourselves for the use of the system, we needed the system designer to serve as living documentation in the walkthrough. To save time and to encourage conversation, we decided to invite both test users to the same session. One user was an expert user representing the elevator service and maintenance men, and the other was a novice user representing the caretakers.

We wanted to get the users' comments on three alternative design ideas. Two ideas could be presented via small computer-based demonstrations, but the newest design was only at the level of paper sketches. With each task, we first showed the related demonstrations and asked the users' opinion of them. Then we gave the users the paper sketches of the newest design idea and asked them to write down the actions they would take to perform the task. Figure 2 shows one solution to the task: "There has been some problem with the elevators soon after 7 o'clock in the morning. Could you find more information about the problem with this dialogue?"

Figure 2: One solution to trying to find more information about a problem.

Both the demonstrations and the paper sketches inspired vivid conversation. The toolbar presented in the paper prototypes proved to be effective, and the users wanted to develop it even further: they wanted to add some common commands to the toolbar and thereby reduce the use of separate dialogues. The designer got lots of new ideas and was very pleased with the effectiveness of the session. Although he seemed very motivated by his work already before the session, interaction with the users seemed to inspire him even more.

6 Application to a student register

After some positive experience with the pluralistic usability walkthrough method, we were asked to evaluate the usability of a student register at our university. The project team developing the system had no extra resources for usability evaluation and still needed the results quickly. Therefore, we decided to conduct a pluralistic usability walkthrough session. This way, we were able to get quick results, and a short summary of the conversations was enough for a report, since the system designers attended the session and saw the problems for themselves.

In the session, there were two users, two system designers and two usability experts present. The users represented both ends of the use of the system: one user fed the results into the system and the other used the system to transfer the results into the official register of our university. The system designers had already talked with both of these users while gathering user requirements, but they had never before discussed the system together.

Having representatives from both ends of the use of the system helped to reveal a function that was never needed. The function required a heavy database search and was difficult to implement, so it saved a lot of time and effort for the system designers to find out that it was unnecessary. The matter came out when one of the users mentioned that she had sometimes wondered why the function was there, but had always thought that the other end of the organisation needed it. Since both parts of the organisation were represented, it was possible to confirm that the function was not needed in any part of the work.

7 Effect of modifications

Bias (1994) names three limitations of the pluralistic usability walkthrough method:

1. The walkthrough must progress as slowly as the slowest participant on each panel.
2. Only one linear path is available in the paper prototypes.
3. All the participants must conform to the selected path although their own solution might have been viable.

Our modifications to the method helped us to diminish some of these problems. In our sessions, the users are the only participants really performing the tasks, and they usually solve the tasks quite quickly. If a user is especially slow, the administrator may encourage the user to speed up a little or help with small hints. In addition, the session is shorter, because the usability experts' comments are separated from the session. Problems 2 and 3 we have addressed by offering several alternative paths in the paper prototypes. Still, some paths may not be covered if there are numerous alternatives.

8 Try it out!

In our experience, pluralistic usability walkthrough is a very effective and encouraging evaluation method, especially for the system designers. As the system designers meet the users face to face, they easily put themselves in the users' place and become very interested in the users' comments. The users sense this attitude and are willing to comment on the system and to suggest changes to the design. With very little effort, the system designers get performance and satisfaction data straight from the users even before any functional prototype is available.

It is easy, fast and inexpensive to make paper prototypes of the user interface. The prototypes make the system descriptions understandable to the users, so the users are able to comment on the system and even to modify the design. Prototypes also help to uncover unarticulated aspects of users' work, as users get a chance to "use" and envision the system (e.g. Bødker & Grønbæk 1991). Pluralistic usability walkthrough thereby gives the users an opportunity to evaluate how well the system supports their work activities and to participate in the design of an improved system.

On the whole, pluralistic usability walkthrough is an easy, inexpensive and effective evaluation method. Seeing the real users usually motivates the designers to improve the usability of their design. The sessions also give an opportunity to confirm the system specifications in such an early phase of system development that it is still easy and inexpensive to make even major changes in the design. Based on our experience, we can recommend the pluralistic usability walkthrough method whenever possible.

Acknowledgements

The research on usability evaluation methods presented in this article has been done in projects funded by the Academy of Finland, the National Technology Agency Tekes and the Finnish Work Environment Fund.

References

BIAS, R. (1994): The pluralistic usability walkthrough: Coordinated empathies. In Usability inspection methods. NIELSEN, J. and MACK, R.L. (eds.). New York, Wiley. pp. 63-76.

BØDKER, S. and GRØNBÆK, K. (1991): Design in action: From prototyping by demonstration to cooperative prototyping. In Design at work: Cooperative design of computer systems. GREENBAUM, J. and KYNG, M. (eds.). Hillsdale, New Jersey, Lawrence Erlbaum Associates. pp. 197-218.