Usability of interactive systems: Current practices and challenges of its measurement


Dr. Panagiotis Zacharias, Department of Computer Science, University of Cyprus, 23/2/2010

Concepts and Definitions

Usability engineering
The most challenging task for a software developer is not just to design for the required functionality, but also to design for specific attributes that contribute to the quality of the software. One of the goals of software engineering is to construct computer systems that people find usable and will use (Ovaska, 1991). Usability engineering, also known as human-computer interaction engineering, is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use, and with the study of the major phenomena surrounding them.

What is usability? (1)
Usability, like many other software engineering terms, has many definitions. The term usability was originally derived from the term "user friendly", but that term had acquired a host of undesirably vague and subjective connotations (Bevan et al., 1991). The term usability was therefore suggested as a replacement, but usability is still a general and vague term. There are many definitions in the literature and different approaches to how usability should be measured; what we mean by usability is to a large extent determined by how we measure it (Hornbaek, 2006).

What is usability? (2)
"The usability of a system is the capability in human functional terms to be used easily and effectively by the specified range of users, given specified training and user support, to fulfill the specified range of tasks, within the specified range of scenarios." (Shackel, 1991)
Two sides of usability:
- Usability is a relative property of the system: relative in relation to its users, so evaluation is context dependent, resulting in a subjective perception of the product/system.
- The other side of usability relates to objective measures of interaction.

Shackel's approach
For a system to be usable it has to achieve defined levels on the following scales:
- Effectiveness: performance in the accomplishment of tasks.
- Learnability: degree of learning needed to accomplish tasks.
- Flexibility: adaptation to variation in tasks.
- Attitude: user satisfaction with the system.

Nielsen's approach (1)
Nielsen (1993), just as Shackel, considers usability to be an aspect that influences product acceptance. Acceptability is differentiated into practical and social acceptability. Usability and utility (the ability to help the user carry out a set of tasks) together form the usefulness of a system.

Nielsen's approach (2)
Nielsen defines usability as consisting of five attributes:
- Learnability: Systems should be easy to learn, so that users can rapidly start getting some work done with the system.
- Efficiency: Systems should be efficient to use; once a user has fully learned the system, a high level of productivity is possible.
- Memorability: Systems should be easy to remember, making it possible for casual users to return to the system after some period of not using it without having to learn everything all over again.
- Errors: The system should have a low error rate, so that users make few errors during use, and when they do make errors they can easily recover from them. Catastrophic errors should not occur.
- Satisfaction: The system should be pleasant to use, so that users are subjectively satisfied when using it.

ISO 9241-11
The ISO organization has developed various HCI and usability standards over the last 15 years. The main purpose of these ISO standards is to provide and impose consistency. Usability is defined as the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction, in a specified context of use, where:
- Effectiveness: the extent to which the intended goals of use are achieved.
- Efficiency: the resources that have to be expended to achieve the intended goals.
- Satisfaction: the extent to which the user finds the use of the product acceptable.
This approach presents a contextually oriented view of usability. It incorporates a user performance view (issues such as effectiveness and efficiency) and a user view (issues such as satisfaction). The ISO standards and the evaluation tools that result from them have been widely adopted by HCI practitioners.
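The three ISO 9241-11 components are often operationalized as simple ratios over test-session data. The sketch below is illustrative only: the task records, the 1-7 rating scale and all function names are assumptions for this example, not part of the standard.

```python
# Hypothetical operationalization of the three ISO 9241-11 components
# for a single test session (data structures are invented for illustration).

def effectiveness(tasks):
    """Binary task completion: share of tasks completed successfully."""
    return sum(1 for t in tasks if t["completed"]) / len(tasks)

def efficiency(tasks):
    """Resources expended per achieved goal: mean seconds per completed task."""
    done = [t["seconds"] for t in tasks if t["completed"]]
    return sum(done) / len(done) if done else float("inf")

def satisfaction(ratings, scale_max=7):
    """Mean post-session rating, normalized to 0..1 (higher = more acceptable)."""
    return (sum(ratings) / len(ratings) - 1) / (scale_max - 1)

session = [
    {"completed": True, "seconds": 42.0},
    {"completed": True, "seconds": 65.0},
    {"completed": False, "seconds": 120.0},
]
print(effectiveness(session))   # 2 of 3 tasks -> 0.666...
print(efficiency(session))      # (42 + 65) / 2 -> 53.5 seconds
print(satisfaction([6, 5, 7]))  # mean 6 on a 1-7 scale -> 0.833...
```

Note how the same session yields three separate numbers; the standard deliberately keeps the performance view (effectiveness, efficiency) apart from the user view (satisfaction).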

An overview of definitions
It can be concluded that the different definitions of usability overlap. The usability attributes can be divided into:
1. Objective operational criteria: user performance attributes such as efficiency, learnability, effectiveness, etc.
2. Subjective operational criteria: user view attributes such as satisfaction, attractiveness, etc.

Usability in HCI research
A key research question in HCI is how to work with and improve the usability of interactive systems. Research addressing this question includes:
- Guidelines for improving usability
- Methods for predicting usability problems
- Techniques to test the usability of systems
- Discussions on how to measure usability
The evolution of the discussion on how to measure the quality of computer systems:
- Ergonomics (Shackel, 1959)
- Ease of use (Miller, 1971; Bennett, 1972)
- Usability (Bennett, 1979; Shackel, 1981)

Usability Evaluation Methods

Categorization of UEMs
Three broad types of usability evaluation methods (Zhang, 2001):
- Testing
- Inspection
- Inquiry

Usability testing
The usability testing approach requires representative users to work on typical tasks using the system or a prototype. The evaluators use the results to see how well the user interface supports the users in doing their tasks. Testing methods include:
- Coaching method (Nielsen, 1993)
- Co-discovery learning (Nielsen, 1993; Dumas and Redish, 1993; Rubin, 1994)
- Performance measurement (Nielsen, 1993; Soken et al., 1993)
- Question-asking protocol (Dumas and Redish, 1993)
- Remote testing (Hartson et al., 1996)
- Thinking-aloud protocol (Nielsen, 1993)

Usability inspection
The usability inspection approach requires usability specialists or software developers, users and other professionals to examine and judge whether each element of a user interface or prototype follows established usability principles. Commonly used inspection methods are:
- Heuristic evaluation (Nielsen, 1994)
- Cognitive walkthrough (Wharton et al., 1994; Rowley et al., 1992)
- Feature inspection (Nielsen, 1994)
- Pluralistic walkthrough (Bias, 1994)
- Standards inspection / guideline checklists (Wixon et al., 1994)

Usability inquiry (1)
Usability inquiry requires usability evaluators to obtain information about users' likes, dislikes, needs and understanding of the system by talking to them, observing them using the system in real work (not for the purpose of usability testing), or letting them answer questions verbally or in written form. Inquiry methods include:
- Field observation (Nielsen, 1993)
- Interviews / focus groups (Nielsen, 1993)
- Surveys (Alreck and Settle, 1994)
- Logging actual use (Nielsen, 1993)

Usability inquiry (2)
Questionnaires are another inquiry method widely used in usability evaluation. Some of them are:
- QUIS: Questionnaire for User Interface Satisfaction (Chin et al., 1988)
- PUEU: Perceived Usefulness and Ease of Use (Davis, 1989)
- PSSUQ: Post-Study System Usability Questionnaire (Lewis, 1992)
- CSUQ: Computer System Usability Questionnaire (Lewis, 1995)
- ASQ: After-Scenario Questionnaire (Lewis, 1995), http://hcibib.org/perlman/question.cgi?form=asq
- SUMI: Software Usability Measurement Inventory (HFRG, 2002)
- MUMMS: Measurement of Usability of Multimedia Software (HFRG, 2002)
- WAMMI: Website Analysis and Measurement Inventory (HFRG, 2002), http://www.wammi.com/samples/index.html
- EUCSI: End-user satisfaction instrument (Doll et al., 1994)
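The instruments above differ in their items and scales, but most share the same Likert-style scoring step: reverse-code negatively worded items, then aggregate. The sketch below is generic and does not reproduce the scoring of any specific instrument listed; the item indices and the 1-5 scale are assumptions.

```python
# Generic Likert-questionnaire scoring sketch (not QUIS, SUMI, PSSUQ, etc.).
# Negatively worded items are reverse-coded before averaging, so that a
# higher score always means higher satisfaction.

def score_questionnaire(responses, reversed_items=(), scale_max=5):
    """Return the mean item score after reverse-coding negative items."""
    adjusted = [
        (scale_max + 1 - r) if i in reversed_items else r
        for i, r in enumerate(responses)
    ]
    return sum(adjusted) / len(adjusted)

# Four items on a 1-5 scale; item 2 is negatively worded (e.g. "hard to use"),
# so its raw rating of 2 becomes 4 after reverse-coding.
print(score_questionnaire([5, 4, 2, 5], reversed_items={2}))  # -> 4.5
```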

Measuring Usability
"If you can't measure it, you can't improve it."

Measuring Usability: Major issues (1)
The term usability is determined to a great extent by how we measure it. Usability cannot be measured directly; operationalization of usability parameters can help us find the aspects of usability that can be measured.
- Which measures are valid indicators of usability?
- How can we select usability measures? Which measures of usability are suitable?
- How can we understand the relation between different measures of usability?

Measuring Usability: Major issues (2)
Common conceptions of how to measure usability need revisiting. New contexts of use require new measures of usability to adequately capture what is considered important in these contexts:
- Learning technologies (Soloway et al., 1994)
- Home technologies (Monk, 2002)
- Ubiquitous computing (Mankoff et al., 2003)

Current practice in measuring usability (1)
Measures of effectiveness:
- Binary task completion: number of correct tasks, number of tasks where users failed to finish within a set time, number of tasks where users gave up.
- Accuracy measures: quantify the number of errors users make while completing tasks.
- Recall measures: how much information users can recall after use of an interface.
- Completeness measures: the extent to which tasks are solved, the number of secondary tasks solved, and the proportion of relevant documents found in information retrieval tasks.
- Quality of outcome: attempts to measure the outcome of the tasks, e.g. measures of understanding such as tests of what has been learned from an e-learning interface.
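Two of the measures above reduce to small computations; a minimal sketch follows (the function names and data are hypothetical, not taken from any of the cited studies).

```python
# Illustrative effectiveness measures: an accuracy measure (errors per task)
# and a completeness measure (proportion of relevant documents found in an
# information retrieval task). All names and data are invented.

def error_rate(error_counts, n_tasks):
    """Accuracy measure: mean number of errors committed per task."""
    return sum(error_counts) / n_tasks

def completeness(found, relevant):
    """Proportion of relevant documents the user actually found (0..1)."""
    return len(set(found) & set(relevant)) / len(set(relevant))

print(error_rate([0, 2, 1, 1], n_tasks=4))                   # -> 1.0 error/task
print(completeness({"d1", "d3"}, {"d1", "d2", "d3", "d4"}))  # 2 of 4 -> 0.5
```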

Current practice in measuring usability (2)
Measures of efficiency:
- Time: measures of how long users take to complete tasks.
- Input rate (e.g. words per minute).
- Mental effort: mental resources spent on interaction, e.g. heart rate variability, subjective time estimation.
- Usage patterns: number of times a certain action has been performed, how much information users access when solving tasks, deviation from the optimal solution, etc.
- Communication effort: measures of the resources spent on communication in groupware studies.
- Learning measures (e.g. how users become faster at text input over time).
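Two of these efficiency measures can likewise be sketched as simple formulas; the function names, the step-count notion of "deviation from the optimal solution", and the data are my illustrative choices.

```python
# Illustrative efficiency measures: input rate in words per minute, and a
# usage-pattern measure expressing how far a user's action sequence deviates
# from the shortest (optimal) path through the interface.

def words_per_minute(words_typed, seconds):
    """Input rate: text entry speed in words per minute."""
    return words_typed / (seconds / 60.0)

def path_deviation(user_steps, optimal_steps):
    """Fraction of extra actions relative to the optimal solution."""
    return (user_steps - optimal_steps) / optimal_steps

print(words_per_minute(90, seconds=180))               # 90 words in 3 min -> 30.0 wpm
print(path_deviation(user_steps=12, optimal_steps=8))  # -> 0.5 (50% longer than optimal)
```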

Current practice in measuring usability (3)
Measures of satisfaction:
- Preference measures: capture which interface users prefer using.
- Ease of use: measures of general satisfaction with the interface.
- Specific attitudes towards the interface (liking, fun, annoyance, etc.) and towards its content (quality of information, organization of information, etc.).
- Perception of outcomes: users' assessment of their performance, users' perception of learning, users' confidence in the solution to tasks.
Comment: Few studies use measures that build directly upon earlier work; very few studies refer to previous research in which particular questions have been used, and validation is seldom undertaken.

Research Challenges in measuring usability
- Subjective and objective measures
- Extending, validating and standardizing measures of satisfaction
- Measures of usability over time
- Studies of correlations between measures
- Micro and macro measures of usability

Subjective and objective measures The challenges to research are to develop subjective measures for aspects of quality-in-use that are currently mainly measured by objective measures, and vice versa, and evaluate their relation. Such developments seem especially to be lacking for outcome quality vs. perceived outcome, time vs. subjective experienced duration, perceived learnability vs. changes in time to complete tasks, and objective measures of satisfaction vs. subjective satisfaction questionnaires.

Measures of usability over time
In the large majority of usability studies, users interact only briefly with the interfaces under investigation (the median duration of users' interaction reported in many studies is around 30 minutes). It would be relevant to know more about how measures of effectiveness and satisfaction develop over time. For example:
- Do usability measures converge over time in pointing out a particular interface as superior to other interfaces?
- Are users able, over time, to compensate for most usability problems that lead to initial dissatisfaction?

Extending, validating and standardizing measures of satisfaction
A challenge for research on usability is to extend satisfaction measures beyond post-use questionnaires (e.g. towards physiological measures such as heart rate variability or indicators of anxiety), and to focus more on the validation and standardization of satisfaction measures. Despite the availability of several validated questionnaires, measures of satisfaction are reinvented again and again, and there are many non-validated questionnaires for measuring specific attitudes. Validation may be achieved by conducting studies in several contexts of use and by studies of correlation.

Studies of correlations between measures A weak understanding of the relation between usability measures gives rise to many of the issues discussed in this presentation. With a better understanding, we could make more informed choices about which usability measures to employ. For example, given that satisfaction is not always correlated with effectiveness what does this signify in a particular context? Are there critical aspects of effectiveness we are ignoring? Of satisfaction? Are we looking at too short-term use? Such questions seem worth exploring.
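One concrete way to explore such questions is to compute correlations between per-participant measures, e.g. task completion rate (effectiveness) against post-session satisfaction ratings. A minimal sketch with invented data follows (Pearson's r implemented by hand; a real study would also need significance testing and a far larger sample).

```python
# Pearson correlation between two per-participant usability measures.
# The data are invented purely to illustrate that satisfaction need not
# track effectiveness: here the correlation comes out negative.

def pearson(xs, ys):
    """Pearson's r for two equal-length sequences of numbers."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

completion = [1.0, 0.8, 0.6, 0.9, 0.7]  # effectiveness per participant
ratings    = [4.0, 6.0, 5.0, 5.0, 6.0]  # 1-7 satisfaction per participant
print(round(pearson(completion, ratings), 2))  # -> -0.57
```

A weak or negative r in such a study would be exactly the situation described above: it tells us the two measures capture different things, and prompts the follow-up questions about which aspects of effectiveness or satisfaction we are ignoring.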

Micro and macro measures of usability (1)

Micro and macro measures of usability (2)
The macro perspective on tasks is rare; we seem most often to cope with the complexity of usability evaluation by choosing simple, manageable measures at the micro level. For example, user interfaces that stimulate creativity (Shneiderman, 2000), support sociability on the internet (Preece, 2000), enable personal fulfillment, and so forth, seem unlikely to be evaluated, let alone achieved, if we focus on micro measures, as these goals involve psychological and social complexities that are only visible in macro tasks.

HCI Resources: an indicative list
Journals:
- International Journal of Human-Computer Studies
- Interacting with Computers
- International Journal of Human-Computer Interaction
- Journal of Usability Studies
Books:
- Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests, Jeffrey Rubin
- Usability Engineering, Jakob Nielsen
- The Usability Engineering Lifecycle: A Practitioner's Handbook for User Interface Design, Deborah J. Mayhew
Web resources:
- UsabilityNet: www.usabilitynet.org
- Usability Body of Knowledge: http://www.usabilitybok.org/
- Research-Based Web Design and Usability Guidelines: http://usability.gov/pdfs
- Jakob Nielsen's website: http://www.useit.com/
- The HCI Bibliography: http://hcibib.org/