Survey of severity ratings in usability

Lisa Renery Handalian


Introduction

A few careers ago, I was a writing instructor at UMass Boston, where all undergraduates are required to pass a Writing Proficiency Exam (WPE) in order to graduate. The WPE consists of an essay test based on a reading set that students have a month to prepare. Three times a year, experienced instructors from across the curriculum convene to grade the essays; each essay is assessed by two to three instructors before receiving a Pass or Retake grade. On the day of the grading, before actually starting, instructors engage in a calibration session during which everyone reads and evaluates four model essays; based on the ensuing discussions of why we graded as we did, the evaluators develop assessment guidelines on which to judge that day's essays. The intent of this preliminary process of consensus building, or "convergence obtaining" (Wilson, 2003), is to develop a workable grading rubric designed to ensure that, at least for that particular exam on that particular day with that particular group of evaluators, all the papers are held to the same agreed-upon standards. The guidelines behind those standards are then communicated to Retake students in the form of guidance on how to improve their writing. They also provide justification in the event that a student challenges his or her assessment with the Academic Evaluation Committee at the university.

I was reminded of these calibration sessions (and their purpose) during a Rolf Molich UPA talk about the CUE (Comparative Usability Evaluation) studies. This article presents an examination of the different criteria by which usability professionals have categorized the degree to which a problem "impede[s] task performance" (Catani & Biers, 1998, p. 1332).

In doing so, I will draw upon the methods of Rubin (1994); Wilson (1999, 2003); Dumas & Redish (1999); Nielsen (2001); and Molich & Jeffries (2003). I will then assess the actual usability case provided based on the different severity ratings reported in the literature, concluding with my assessment of the method that I find most effective. Attention will be paid to the evolution of severity standards over the past decade as a sign of the maturation of the field of usability testing.

Review of the Literature

Among the authors mentioned above, there is general consistency in how they rate the severity of usability problems. That is, most present a scale comprised of at least three ratings ranging from severe to low; some use a calculation, such as severity plus scope, i.e., local vs. global occurrence (Dumas & Redish, 1999), or severity plus frequency (Rubin, 1994), among other factors. For the purposes of my review, I have organized the authors' treatment of problem severity accordingly:

Pervasiveness
- Frequency, in terms of probability of problem occurrence
- Scope of problems experienced
- Number of users affected
- Problem persistence vs. learnability

Performance
- Disruption of productivity, in particular regarding the inability to accomplish business goals (Wilson, 2003)
- Time delays associated with task completion and problem circumvention

Impact
- Probability of data loss or system damage
- Risk to business reputation, profitability, and revenue

The implications of these severity assessments on the health of the business, a.k.a. "market impact" (Nielsen, 2001), will also be discussed.
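To make the groupings above concrete, here is a minimal sketch (my own framing, not any of the surveyed authors') of how the three dimensions might be recorded for a single usability problem; every field name is an illustrative assumption. Python is used for the examples throughout.

```python
# Illustrative only: one possible record type grouping the severity
# dimensions surveyed above (Pervasiveness, Performance, Impact).
from dataclasses import dataclass

@dataclass
class ProblemAssessment:
    description: str
    # Pervasiveness
    frequency_rank: int    # probability of occurrence, e.g. 1 (rare) to 4 (nearly always)
    users_affected: int    # how many test participants hit the problem
    persists: bool         # True if users do not learn their way around it
    # Performance
    blocks_task: bool      # prevents or seriously disrupts task completion
    minutes_lost: float    # time lost to the problem and its workaround
    # Impact
    risks_data_loss: bool  # possible data loss or system damage
    business_risk: str     # e.g. "reputation", "revenue", "none"
```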

Pervasiveness

Frequency. The use of calculations in determining severity is somewhat common in the literature, perhaps speaking to a desire or instinct among practitioners to emulate scientific research where possible. In the earliest treatment of usability problem ratings that I encountered in my reading, Rubin (1994) quantifies what Nielsen (2001) and Molich & Jeffries (2003) refer to as frequency of problem occurrence as experienced by users. Where Nielsen and Molich & Jeffries provide qualitative guidelines such as "1. Rarely, 2. Often, and 3. Sometimes" (Molich & Jeffries, p. 3), Rubin furnishes specific numeric benchmarks for task completion and problem frequency. In determining severity, he first focuses on noncriterion tasks that 70 percent of participants do not successfully complete in the allotted time (p. 274). He then prioritizes these noncriterion tasks to be fixed according to criticality, which he calculates by combining "the severity of a problem and the probability that it will occur" (p. 277). Rubin ranks the estimated frequency of occurrence as "4: Will occur 90% or more of the time the product is used; 3: Will occur 51-89% of the time; 2: Will occur 11-50% of the time; [and] 1: Will occur 10% or less of the time" (p. 279). Rubin points out that the sum of a frequency of 4 and a severity (also ranked from 1 to 4) of 1 is, of course, the same as the sum of a frequency of 1 and a severity of 4. Using this method, he claims, "the very highest priority would be assigned to problems that made the product unusable for everyone" (p. 279). A sketch of this calculation follows.
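As a minimal sketch of Rubin's arithmetic, assuming only what the passage above states (severity and frequency each ranked 1-4, criticality their sum); the helper names are mine, not Rubin's:

```python
def frequency_rank(pct_of_time: float) -> int:
    """Map an estimated frequency of occurrence (percent of the time the
    product is used) onto Rubin's 1-4 frequency ranking."""
    if pct_of_time >= 90:
        return 4
    if pct_of_time >= 51:
        return 3
    if pct_of_time >= 11:
        return 2
    return 1

def criticality(severity: int, frequency: int) -> int:
    """Rubin's criticality: the sum of severity (1-4) and frequency (1-4)."""
    assert 1 <= severity <= 4 and 1 <= frequency <= 4
    return severity + frequency

# A severe-but-rare problem scores the same as a mild-but-constant one;
# only a problem rated 4 on both scales reaches the maximum of 8.
print(criticality(severity=4, frequency=1))  # 5
print(criticality(severity=1, frequency=4))  # 5
print(criticality(severity=4, frequency=4))  # 8 -> highest priority
```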

Scope. Dumas and Redish (1999) also suggest a combination of attributes to determine the overall severity of usability problems. Like Rubin, they use the term criticality to represent the scale of problems: "Level 1 problems prevent completion of a task ... Level 2 problems create significant delay and frustration ... Level 3 problems have a minor effect on usability ... Level 4 problems are more subtle and often point to an enhancement that can be added in the future" (pp ). However, Dumas and Redish are alone in their mention of scope (or, "how widespread is the problem?" p. 322) as it pertains specifically to global vs. local issues: "Global problems are more important than local problems. The scope of local problems is restricted. In their simplest form, they apply to only one screen or dialog box or window or page of a manual ... Global problems have a scope that is larger than one screen or page ... Usually, the global problems are the most important to address" (pp ). Since neither severity nor scope alone is sufficient to identify which problems should be fixed first, Dumas and Redish recommend taking both together in consideration "to organize problems into an order of priority" (p. 324); one possible reading of that recommendation is sketched below.

Number of users affected. While Rubin's (1994) and Dumas and Redish's (1999) works address how frequently occurring or widespread usability problems are, neither specifically adds a consideration of the number of users affected, not just the frequency with which they are interrupted or lost. Wilson's rough example of a formal definition of usability severity codes mentions that a problem with a Level 2 (High) rating "affects many users" (2003, paragraph 7). He also refers to Level 1 catastrophic errors as those that prevent many people from doing their work (1999, paragraph 10). Although Wilson explores the effects on users as members of the greater business/organizational context (discussed in the Impact section of this paper), it would be helpful here to have a scale by which to determine the different degrees of "many".
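Dumas and Redish stop short of a formula, so the following is only a hypothetical prioritization consistent with their advice, assuming that within the same criticality level a global problem outranks a local one; the example problems are invented:

```python
# Hypothetical ordering by (criticality level, scope); Dumas & Redish
# themselves give no explicit combination rule.
problems = [
    {"name": "labels misaligned",         "level": 4, "scope": "local"},
    {"name": "Save control easy to miss", "level": 1, "scope": "global"},
    {"name": "confusing search filters",  "level": 2, "scope": "local"},
]

def priority_key(problem: dict) -> tuple:
    # Lower tuples sort first: Level 1 before Level 2, and within a
    # level, global (wider-scope) problems before local ones.
    return (problem["level"], 0 if problem["scope"] == "global" else 1)

for p in sorted(problems, key=priority_key):
    print(f'Level {p["level"]} ({p["scope"]}): {p["name"]}')
```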

Problem persistence. Nielsen (2001) identifies three factors that combine to determine the severity of a usability problem: frequency (see above), impact, and persistence. These factors are also used by Molich and Jeffries (2003) in the CUE-4 study. I have combined impact and persistence here under the heading of problem persistence. Nielsen rates impact as the ease with which users will be able to overcome a problem; his persistence attribute refers to the learnability of the problem, that is, whether they "will be able to overcome [it] once they know about it or repeatedly be bothered by [it]" (2001, paragraph 3). As I see it, the disruption of work and the user's ability to overcome the disruption are inseparable.

In their instructions for the preparation of CUE-4 evaluation reports, Molich and Jeffries (2003) rate both impact and persistence on a 1-3 scale of increasing severity. Again, I treat the two concepts as one. Their explanations follow: "Impact (how serious is the problem when it occurs) [is rated:] 1. Minor (delays users briefly), 2. Serious (delays users significantly but eventually allows them to complete the task), [and] 3. Catastrophic (prevents users from completing their task). Persistence (will users learn how to get around the problem) [is rated:] 1. Yes, quickly, 2. Only after encountering the problem several times, [and] 3. No" (p. 3). Consistent with other authors, Molich and Jeffries also suggest combining the numeric values to determine the severity rating score, as sketched below.
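A minimal sketch of that combination, assuming nothing beyond the passage above (three 1-3 ratings summed into a gross score); the decision threshold is my illustration, since Molich and Jeffries say only to check whether the problem gets a high score:

```python
def cue4_score(frequency: int, impact: int, persistence: int) -> int:
    """Gross severity score per Molich & Jeffries (2003): the sum of
    three 1-3 ratings, so scores range from 3 to 9."""
    for rating in (frequency, impact, persistence):
        assert 1 <= rating <= 3
    return frequency + impact + persistence

score = cue4_score(frequency=2, impact=3, persistence=2)
print(score)                                 # 7 of a possible 9
print("fix now" if score >= 7 else "defer")  # threshold assumed for illustration
```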

Performance

Disruption of productivity. In assessing severity, Wilson (1999) describes effects on performance as often a primary usability attribute. Whether performance is poor or "too good" (paragraph 2), a product's lack of usability can significantly disrupt or impede user productivity. He lists several productivity-related attributes that should be taken into consideration when assessing severity. His Level 1 (Catastrophic) problems often indicate "performance so bad that the system cannot accomplish business goals" (paragraph 10); to a lesser degree, Level 2 (Severe) problems point to "performance so poor that the system is universally regarded as pitiful" (paragraph 11). Level 3 (Moderate) and Level 4 (Minor but Irritating) problems typically affect users' ability to work without being interrupted by inconsistencies that result in increased learning or error rates, although mistakes caused by them are typically recoverable (paragraphs 12 and 13).

Time delays. All of the authors mention time delays as sources of user frustration and performance obstruction. However, Wilson (2003), Nielsen (2001), and Molich & Jeffries (2003) include reference to both the time lost in trying to figure out a usability problem and the time wasted in trying to devise workaround solutions or circumvent a function altogether. Wilson's severity ratings, for one, take workarounds into consideration; specifically, he categorizes Severe or "showstopper" errors as having no workaround (paragraph 1). Furthermore, High errors impair the operation of one or more functions and "cannot be easily circumvented or avoided" (paragraph 7). In a rating system he published in 1999, Wilson also characterizes Level 3 (moderate) problems as causing "no permanent loss of data, but waste of time. There is a workaround to the problem," unlike Level 2 problems, which have no workaround (paragraphs 12 and 11). In general, the time loss associated with usability errors is accepted by all the authors as a source of inefficiency and waste.

Impact

Data loss/damage. In addition to loss of time and productivity, Wilson (2003) also judges data loss or system damage as an indicator of the severity of usability problems. Among all of the authors, he (and Nielsen, to a lesser extent) focuses the most on this increasingly critical issue. Wilson bestows a Severity Code of 1 (Severe) on "an emergency condition that causes the customer's system to fail or causes customer data to be lost or destroyed" (paragraph 1). A 1999 example used by Wilson is the wrong default button on the confirmation message "Do you want to delete this message?": choosing Yes as the default button "would be very serious because it would cause the loss of the entire database" (paragraph 3). With enterprise-wide database infrastructures powering entire organizations more than ever before, such usability errors have the potential to be extremely costly.

Business risk. As a continuation of the above discussion, Nielsen (2001) and Wilson (2003) raise the important point of the effect of usability errors on the reputation and revenue of businesses. Nielsen warns that one needs to assess the market impact of a problem, since "certain usability problems can have a devastating effect on the popularity of a product, even if they are objectively quite easy to overcome" (paragraph 6). He asserts that Level 4 "usability catastrophe(s)" are "imperative to fix" before the product can be released (paragraph 12), and states that "if severity ratings indicate that several disastrous usability problems remain in an interface, it will probably be inadvisable to release it" (paragraph 1). Wilson (2003) concurs, suggesting that "if a usability problem could be connected directly to a business goal, that would insure [sic] that it was important enough to fix before shipping" (paragraph 18).

Placing usability problems in a greater business context may well be a strategy to promote closer adherence to implementation suggestions.

Please consult the Appendices for tables delineating the different severity rating methods proposed by each author or set of authors.

Case Study

I will evaluate the following case study by the severity rating methods of Rubin (1994), Dumas & Redish (1999), Molich & Jeffries (2003), and Wilson (2003). I will not apply Catani & Biers (1998), because their scale is not detailed enough, or Nielsen (2001), because he clearly informed the rating system of Molich & Jeffries, who then expanded upon his categorization. I have highlighted the information I found significant.

Two of the five participants in a usability test have a problem that causes loss of data and makes it impossible to complete the task. After editing a page of data, they forget to click the Save button. Instead they close the window using the Close Window control. The edits they made are not saved. The other three don't have the problem because they click Save.

Rubin (1994)

Although it's a bit of a stretch on what he considers frequency, Rubin might use the fact that 40% of the test participants experienced the usability problem to give it a Frequency rating of 2 (moderate). His definitions of Severity do not have the benefit of colleagues' fine-tuning like those of later authors, and as such are rather vague. A rating of 3 might fit, in that the user "will probably use or attempt to use the product, but will be severely limited in his or her ability to do so" (p. 278). Adding the two scores together would give this problem a criticality rating of 5.
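In terms of the earlier sketch of Rubin's calculation, the arithmetic for the case is simply:

```python
def criticality(severity: int, frequency: int) -> int:
    return severity + frequency  # Rubin: criticality = severity + frequency

print(criticality(severity=3, frequency=2))  # 5, mid-range on the 2-8 scale
```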

The fact that Rubin's method does not take into account important elements such as loss of data, scope, impact, or time delay severely limits the application of his rating scale. Thus, I would not choose this method over any of the others.

Dumas & Redish (1999)

Dumas and Redish combine the concepts of scope and severity to rate usability problems. If the Save button issue is one that is global, then it is of greater importance than if the error occurs only in that one editing function. Along their severity scale, this would be a Level 1 problem, since the product design has "prevent[ed] the completion of a task" (p. 324). The authors do not modify their ratings with the number of users who encounter problems, thereby furnishing little guidance as to how to handle the 40% error incidence in the test case. Assuming that this is a global issue, it is unclear what scope does to increase the severity rating, as no numeric value is attributed to either local or global issues. Dumas & Redish merely explain that scope and severity should both be taken into account in the prioritization of problems to be fixed, yet they do not tell us how. Because of this lack of specificity, I would probably not use this method alone to assess usability problems.

Molich & Jeffries (2003)

Molich and Jeffries use a three-fold scale by which they rate problem severity. Again, in terms of frequency, it is necessary to stretch the definition to encompass number of users rather than how often the problem occurs, since it was presented here as an isolated issue.

In this case, they might rate the experience of two out of five participants as 2 (Often); note that even though the concept of "Sometimes" seems more appropriate here, its rating is actually higher than that of "Often". With respect to Impact, they would be justified in attributing a Catastrophic rating of 3, as some of the users were "prevent[ed] from completing the task" (p. 3). Again, no formula has been supplied to determine how to treat severe problems relative to the number of users experiencing them. In terms of persistence, I believe this would take a rating of 2, whereby users could learn how the Edit screen functions "after encountering the problem several times" (p. 3). Although this method has honed severity ratings more than those developed earlier, I would still have questions about how to apply them to usability situations.

Wilson (2003)

I find Wilson's rough example of a formal definition of severity codes (2003) to be the most descriptive, helpful, and all-encompassing of the four reviewed here. Although I found it useful to break down the assessment of problems into the three categories used by Nielsen (2001) and Molich & Jeffries (2003), I feel that the greater business context in which Wilson places usability issues makes his rating methods extremely helpful to the assessment of problem severity. His descriptions contain rich detail that positions the severity very clearly. Regarding the case at hand, I believe Wilson would take the following issues into consideration:

1. Editing a page of data. Since the re-editing of information is irksome to the user but not mission-critical (i.e., that information could be elicited again, albeit with difficulty; it can be re-entered, though likely causing inconvenience and perhaps time delay to the customer), Wilson would probably see this as a Medium ("non-critical, limited problem ... no data lost or system failure") issue.

2. Two out of the five participants. Wilson categorizes Level 2 (High) errors as those that affect "many users". It is up for debate whether 40% of users can be called "many" or not. I would tend to choose the severity rating of High based on that fairly significant proportion.

3. They forget to click the Save button. Level 2 of Wilson's severity scale also indicates that with High errors, "[t]he software does not prevent the user from making a serious mistake," which really does characterize this usability error.

Because a not-insignificant number of users experienced an issue that, while not severe, does cause data loss through the application's failure to protect the user against error, I believe that the appropriate rating for this issue on Wilson's scale would be 2 (High). Again, some guidance as to how to handle the proportion of users affected would be very helpful.

Conclusions

Compared with the severity rating systems published in earlier texts (Rubin, 1994; Catani & Biers, 1998; Dumas & Redish, 1999), usability testers have since increased the level of specificity and detail in their definitions and attributions of problem severity. It can be hoped and expected that refinements in ratings will also promote greater agreement among problem classifications. One problem identified by Molich et al. (2003) in the reports produced by usability professionals participating in CUE-1 and CUE-2 was the lack of severity classification of problems.

"Some reports did not distinguish between disastrous problems and cosmetic details. All problems appeared to be equally serious" (p. 5). Catani and Biers (1998) also note disparity, attributing the apparent lack of agreement in severity ratings among professionals to methodological issues and problem perception (p. 1335).

I will close with Rolf Molich's remark that "if we want to become a science, we must describe our core processes in great detail" (UPA meeting, October 9, 2003). Whether or not usability testing will ever become scientific research doesn't really affect his call for enhanced standardization of rating methods in order to provide the maximum impact upon the implementation of usability recommendations. He says:

For usability tests to be a reliable tool for making informed decisions, we must require that two usability tests of the same piece of software, for instance a web site, produce results that are reasonably similar. In particular, we must require that there is reasonable agreement on the problems that are defined as critical, and on the overall conclusion. Otherwise a project manager, who uses a professional usability test report to allocate development resources for correcting critical usability problems, may be mislead [sic]. (Molich, Kindlund, Seeley, et al., 2003, paragraph 1)

Perhaps further opportunities, like subsequent CUE studies, will present themselves to develop the kind of "convergence" in ratings (Wilson, 2003) that will enable both those performing usability tests and those receiving their results to feel more secure in their consistency and acceptance. Such an evolution of standards may just contribute to the increased professionalization, and "scientification", to which Molich and his colleagues referred.

References

Catani, M. B., & Biers, D. W. (1998). Usability evaluation and prototype fidelity: Users and usability professionals. Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting. Santa Monica, CA: HFES.

Dumas, J., & Redish, G. (1999). A practical guide to usability testing (Rev. ed.). London: Intellect Books.

Molich, R., & Jeffries, R. (2003). Comparative expert reviews: CHI 2003 workshop description. [Available online; accessed October 10, 2003]

Molich, R., Kindlund, E., Seeley, J., et al. (2003, in press). Comparative usability evaluation.

Nielsen, J. (2001). Severity ratings for usability problems. [Available online; accessed October 10, 2003]

Rubin, J. (1994). Handbook of usability testing. NY: John Wiley and Sons, Inc.

Wilson, C. (1999). Readers' questions: Severity scales. Usability Interface, 5(4), April. [Available online at severity scale.html; accessed October 10, 2003]

Wilson, C. (2003). Defining, categorizing, and prioritizing usability problems: UPA 2003 Idea Market, July 11. [Available online at ty problems_wilson.doc; accessed October 10, 2003]

Appendices

1. Rubin (1994, pp. 278-279): Severity + Frequency

Severity ranking:

4 (Unusable): The user is either not able to or will not want to use a particular part of the product because of the way that the product has been designed and implemented.

3 (Severe): The user will probably use or attempt to use the product here, but will be severely limited in his or her ability to do so. The user will have great difficulty in circumventing the problem.

2 (Moderate): The user will be able to use the product in most cases, but will have to undertake some moderate effort in getting around the problem.

1 (Irritant): The problem occurs only intermittently, can be circumvented easily, or is dependent on a standard that is outside the product's boundaries. Could also be a cosmetic problem.

Frequency ranking:

4: Will occur 90% or more of the time the product is used
3: Will occur 51-89% of the time
2: Will occur 11-50% of the time
1: Will occur 10% or less of the time

2. Catani & Biers (1998)

"[P]roblems were rated by usability professionals on their severity on a 5-point scale with 4 being severe and 0 being no problem at all" (p. 1332).

3. Dumas & Redish (1999, pp )

Problem importance has two dimensions:

Scope: How widespread is the problem? Global vs. local.

Severity: How critical is the problem?
Level 1: problems prevent completion of a task.
Level 2: problems create significant delay and frustration.
Level 3: problems have a minor effect on usability.
Level 4: problems are more subtle and often point to an enhancement that can be added in the future.

4. Nielsen (2001, paragraphs 7-12)

The following 0-to-4 rating scale can be used to rate the severity of usability problems. Ratings are based on a combination of frequency, impact, and persistence, and also require attention to the market impact of the problem.

0: I don't agree that this is a usability problem at all.
1: Cosmetic problem only; need not be fixed unless extra time is available on the project.
2: Minor usability problem; fixing this should be given low priority.
3: Major usability problem; important to fix, so should be given high priority.
4: Usability catastrophe; imperative to fix this before the product can be released.

5. Molich & Jeffries (2003, p. 3)

Each problem must be rated on three scales. A gross way to determine whether a problem should be fixed could be to add the three numeric values and see whether the problem gets a high score.

Frequency (how often does the problem occur?):
1. Rarely
2. Often
3. Sometimes

Impact (how serious is the problem when it occurs?):
1. Minor (delays users briefly)
2. Serious (delays users significantly but eventually allows them to complete the task)
3. Catastrophic (prevents users from completing their task)

Persistence (will users learn how to get around the problem?):
1. Yes, users quickly learn how to get around the problem.
2. Users only learn to get around the problem after encountering it several times.
3. No, users never learn how to get around the problem.

6. Wilson (2003): Rough Example of a Formal Definition of Usability Severity Codes

Severity Code 1, Severe (showstopper): An emergency condition that causes the customer's system to fail or causes customer data to be lost or destroyed. A showstopper usability bug can also be one that is likely to cause frequent data integrity errors. There is no workaround to these problems. A key feature needed by many customers is not in the system.

Severity Code 2, High: A serious condition that impairs the operation, or continued operation, of one or more product functions and cannot be easily circumvented or avoided. The software does not prevent the user from making a serious mistake. The usability problem is frequent, persistent, and affects many users. There is a serious violation of standards.

Severity Code 3, Medium: A non-critical, limited problem (no data lost or system failure). It does not hinder operation and can be temporarily circumvented or avoided. The problem causes users moderate confusion or irritation.

Severity Code 4, Low: Non-critical problems or general questions about the product. There are minor inconsistencies that cause a slight hesitation, or small aesthetic issues like labels and fields that are not aligned properly.
