CogTool-Explorer: A Model of Goal-Directed User Exploration that Considers Information Layout


Leong-Hwee Teo, DSO National Laboratories, 20 Science Park Drive, Singapore, leonghwee.teo@alumni.cmu.edu
Bonnie E. John, IBM T. J. Watson Research Center, 19 Skyline Drive, Hawthorne, NY, bejohn@us.ibm.com
Marilyn Hughes Blackmon, Institute of Cognitive Science, University of Colorado, Boulder, CO, blackmon@colorado.edu

ABSTRACT
CogTool-Explorer 1.2 (CTE1.2) predicts novice exploration behavior and how it varies with different user-interface (UI) layouts. CTE1.2 improves upon previous models of information foraging by adding a model of hierarchical visual search to guide foraging behavior. Built within CogTool so it is easy to represent UI layouts, run the model, and present results, CTE1.2's vision is to assess many design ideas at the storyboard stage, before implementation and without the cost of running human participants. This paper evaluates CTE1.2's predictions against observed human behavior on 108 tasks (36 tasks × 3 distinct website layouts). CTE1.2's predictions accounted for 63-82% of the variance in the percentage of participants succeeding on each task, the number of clicks to success, and the percentage of participants succeeding without error. We demonstrate how these predictions can be used to identify areas of the UI in need of redesign.

Author Keywords
ACT-R; CogTool; Information Foraging; human performance modeling

ACM Classification Keywords
H5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.

General Terms
Human Factors.

INTRODUCTION
Iterative design and testing is a fundamental process in user interface (UI) design. UI designers may generate dozens of ideas, but a typical project only has the resources to empirically test a handful with appropriate users. A vision for human performance modeling has been to provide a method for running psychologically valid tests on many design ideas and obtaining (a) quantitative measures of usability comparable to empirical testing with humans, and (b) an understanding of why the quantitative results came out as they did. Modeling has successfully realized this vision for predictions of the efficiency of a UI for skilled users (e.g., [2, 18]). For predictions of novice exploration behavior, Information Foraging Theory [21] and the Linked Model of Comprehension-Based Action Planning and Instruction Taking (LICAI [17]), based on Kintsch's construction-integration theory [14], provide promising underlying psychological theories, and several tools for website design have grown out of that work, notably Bloodhound [7] and Automatic Cognitive Walkthrough for the Web (AutoCWW [4]). These models of novice behavior take into account the information scent of links (i.e., the semantic relatedness between the link's label and the user's goal, hereafter infoscent), but do not consider the 2-D layout of the information on the page. Layout, however, is an important factor in determining a user's success [24].
Budiu and Pirolli [5] take 2-D layout into account, but in the context of a model created for a specific UI (a Degree-of-Interest tree). Likewise, the grouping of links on a page can also influence a user's behavior; AutoCWW [4] takes grouping into account when analyzing infoscent, but does not consider the 2-D layout of the groups when making its predictions. We present CogTool-Explorer 1.2 (CTE1.2), a computational embodiment of information foraging, implemented in ACT-R [1] within the CogTool prototyping and analysis tool [13], that takes both 2-D layout and grouping into account when making predictions of novice exploration behavior. The next section gives an overview of CTE1.2, followed by implementation details, an evaluation of CTE1.2 on three different layouts of the same information, and suggestions for future work.

OVERVIEW OF COGTOOL-EXPLORER 1.2 (CTE1.2)
CTE1.2 is a research extension of CogTool that connects a computational model of eye movements, visual perception, cognition and motor actions (i.e., a simulated user) with a UI storyboard. The original CogTool [13] had only a simulation of a skilled user (i.e., a Keystroke-Level Model [6]). The first version of CogTool-Explorer [24] added to CogTool a model of novice behavior that considered 2-D layout, so a designer can make predictions of novice exploration on the same tasks and UIs that he or she analyzes for skilled execution time. After iterating on CogTool-Explorer many times to add consideration of grouping, improve its mechanisms, and set parameters [25, 26], CTE1.2 makes predictions of novice exploration behavior on new text-based website layouts, explaining 63-82% of the variance on measures of interest to UI designers.

Figure 1 shows an example run of CTE1.2's simulated user on an encyclopedia look-up task from an experiment in [3]. In that experiment, the participant was presented with a webpage (shown in the background of Figure 1) with the instructions "Find encyclopedia article about" at the top and a paragraph of text just below it that constituted the participant's search goal. Below the goal, participants were presented with 93 links organized in 9 groups. Each link was an encyclopedia topic; selecting a link transitioned to its lower-level webpage, which presented an alphabetical list of article titles. Participants could check that they had succeeded in the task by finding the target article title from the exploration goal in the A-Z list of article titles. If the target was not in the list, the participant had selected an incorrect link, and he or she would go back to the top-level webpage and continue exploration. For each exploration goal, there is only one correct link that leads to a lower-level webpage containing the target article title. The participants were given 130 seconds to complete each task.

Kitajima, Blackmon, and Polson [15, 16], Miller and Remington [20], and Hornof [11] all argued that in such a layout with groups, users would first evaluate the groups in the webpage, focus attention on a group, and then evaluate the links in that group. If the user decides to go back from a group, he or she will re-evaluate the groups in the webpage, focus attention on another group, and then evaluate the links in the new group. We implemented this group-based hierarchical exploration process in CTE1.2 so that it considers group relationships during exploration. We will present the implementation details in the next section, but refer to the numbers in Figure 1 here to convey a sense of what the underlying model is doing.

In the left half of Figure 1, CTE1.2's simulated eye starts at the upper left of the screen where the goal is presented (1), and then moves to the nearest group (2). It determines the infoscent of the group and uses it to decide between two actions: continuing to look at other groups, or looking at the links inside the best group seen so far. The model is stochastic in its judgment of nearest and of infoscent, making each run different and simulating the variability in human performance. This run of CTE1.2 looks at seven of the nine groups before deciding (at 3) to focus on the best group so far. It moves its eye to that group (4).

The right half of Figure 1 shows the continuation of the run. Having decided to focus on a single group, CTE1.2 looks at the first link it sees (5), determines its infoscent, and uses this to decide between three actions: continuing to look at other links within this group, clicking on the best link seen so far, or abandoning this group and popping back up to explore other groups. In this example run, CTE1.2 looks at links until it decides to click on the best link so far (6). This link does not lead to the correct encyclopedia page (not shown), so CTE1.2 continues to look in this group until it decides to abandon it and go back to looking at other groups (7).
CTE1.2 continues this cycle of perceiving groups (8) or links and deciding what to do next, until it either finds the correct encyclopedia page or runs out of time.

Figure 1. Sample run of CogTool-Explorer 1.2 (CTE1.2) on a layout of 93 links in 9 groups on a 3×3 grid.
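To make the cycle concrete, the following is a minimal, runnable sketch of the explore-decide loop just described. It is an illustration only: the function names, the random stand-ins for "nearest" and infoscent, and the coin-flip decision rule are placeholders for the ACT-R productions, EMMA eye movements, LSA-based infoscent, and the utility equations (Eqs. 1-3) detailed in the Implementation Details section below.

import random

def noisy_scent(label, goal):
    """Placeholder for a noisy LSA-based infoscent judgment."""
    return max(0.0, random.gauss(0.3, 0.15))

def decide(n_seen):
    """Placeholder for the utility comparison among the three possible actions."""
    return random.choices(["look", "choose", "go_back"],
                          weights=[max(1, 8 - n_seen), n_seen, 1])[0]

def explore(groups, goal, correct_link, max_fixations=300):
    """groups maps a group label to its list of link labels."""
    focus, history, clicks, seen = None, [], 0, []   # focus=None ~ top-level page
    for _ in range(max_fixations):
        elements = list(groups) if focus is None else groups[focus]
        attended = {lbl for _, lbl in seen}
        unattended = [e for e in elements if e not in attended]
        if unattended:
            e = random.choice(unattended)            # stand-in for "nearest unattended"
            seen.append((noisy_scent(e, goal), e))
            action = decide(len(seen))
        else:
            action = "go_back"                       # nothing left to look at here
        if action == "look":
            continue
        if action == "choose" and seen:
            best = max(seen)[1]
            if focus is None:                        # focus on the best group seen so far
                history.append(focus)
                focus, seen = best, []
            else:                                    # click the best link seen so far
                clicks += 1
                if best == correct_link:
                    return "success", clicks
                # wrong link: treat it as zero-scent from now on (cf. Eq. 3)
                seen = [(0.0 if lbl == best else s, lbl) for s, lbl in seen]
        elif action == "go_back":                    # abandon this group
            focus = history.pop() if history else None
            seen = []
    return "time_out", clicks

# Example: nine groups of ten links each, one correct link.
layout = {f"group{g}": [f"link{g}-{i}" for i in range(10)] for g in range(9)}
print(explore(layout, goal="fern", correct_link="link3-4"))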

IMPLEMENTATION DETAILS
As shown in Figure 2, CTE1.2 is built inside CogTool [13], which provides the UI designer with a graphical UI (GUI) for representing UI designs and tasks, running the model to generate predictions, and displaying its results. CTE1.2 comprises a device model and a user model, both implemented in the ACT-R cognitive architecture [1].

Figure 2. Structure of CogTool-Explorer 1.2 (CTE1.2) and the inputs from the AutoCWW Project at the University of Colorado used to evaluate it in this paper.

The Device Model
CTE1.2 uses CogTool's UI storyboarding tool to allow an interactive system designer to create a storyboard of a GUI, either by hand or automatically from existing HTML. The storyboard contains frames (e.g., a web page) with interactive widgets like links and buttons, and transitions between those frames that represent actions the user can perform on widgets (e.g., clicking on a link). Transitions include a representation of the system response time after the user's action (which is zero in the experiment and models in this paper). The widgets have three attributes that are used by the user model: their x-y position in the frame, their size, and the textual labels that are displayed to the user. These objects and their attributes are represented in an ACT-R device model with which CTE1.2 can interact.

The UI designer can group widgets much as groups are made in drawing or presentation applications, i.e., select all the widgets (or previously created groups) you want to group and invoke the Group command. Groups can have a textual label, just as widgets can. The CTE1.2 device model includes data structures to represent hierarchical grouping relationships in a frame, so that the user model can see and consider groups during exploration.

In more detail, as an ACT-R model runs, it extracts information from the device model to create visual-location objects representing what is visible on the screen. These objects have attributes for x-y coordinates and other basic visual features such as size and color (as yet unused in CTE1.2). CTE1.2 also includes a member-of-group attribute in each visual-location object. When the UI designer groups a set of widgets, the member-of-group slot of each visual-location object that belongs to that group will have a value that references the group's visual-location object. Since a group's visual-location object also has a member-of-group slot, nested groups can be represented in this scheme. The member-of-group slot of a visual-location object that does not belong to any group has a slot value equal to nil. We can interpret the nil value in the member-of-group slot as membership in an implicit top-level group on the webpage.
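As an illustration of this member-of-group scheme (not CTE1.2's actual chunk-type definitions; the class and field names below are hypothetical), nested grouping can be encoded with nothing more than a reference from each element to its containing group, with nil (here, None) standing for the implicit top-level group:

from dataclasses import dataclass
from typing import Optional

@dataclass
class VisualLocation:
    """Illustrative Python analogue of an ACT-R visual-location chunk."""
    label: str
    x: int                   # x-y position in the frame
    y: int
    width: int               # size
    height: int
    member_of_group: Optional["VisualLocation"] = None  # None ~ nil, the implicit
                                                         # top-level group

# A group is itself a visual-location object, so groups can nest.
life_science = VisualLocation("Life Science", x=40, y=120, width=220, height=300)
plants_link = VisualLocation("Plants", x=56, y=150, width=80, height=20,
                             member_of_group=life_science)

def elements_in(focus, all_elements):
    """Visual search is constrained to elements whose member-of-group slot
    matches the current group-in-focus (None means the top-level group)."""
    return [e for e in all_elements if e.member_of_group is focus]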

The User Model
The user model has three main components implemented in ACT-R: eyes, hands and cognition. In addition to these components, the user model comprises (1) a representation of the exploration goal, (2) a representation of the user's semantic knowledge, and (3) a serial evaluation process adapted from the SNIF-ACT 2.0 model [9] and expanded to consider 2-D layout and grouping.

Exploration Goal
The entire process is driven by giving the user model an exploration goal: a paragraph of text that describes the target webpage in the website. This goal is given directly to the user model by typing the paragraph into a Task textbox in CogTool's GUI. The text is encoded in an ACT-R chunk representing the exploration goal. The model then uses its eyes to look around the device model in search of a way to get to its goal.

ACT-R Eyes: Visual Search, with Knowledge of Grouping
As previously outlined, the user model searches hierarchically, first looking at the groups and, only after deciding to focus on one group, looking through the links in that group. This hierarchical search process was derived from prior work by Halverson and Hornof [10, 11], but differs from that work in that infoscent, rather than exact word matching, determines whether CTE1.2 focuses on or leaves a group.

To implement this hierarchical search, the exploration goal chunk contains a group-in-focus slot whose value is initially set to nil (recall that nil represents an implicit top-level group on the webpage). The user model constrains its visual search to visual-location objects with member-of-group slots that match the group-in-focus slot. When the model decides to select a group to focus on, it pushes the current value of the group-in-focus slot onto the exploration history and updates the group-in-focus slot to reference the new group. When the model decides to select a link and transition to a new page, it pushes the current value of the group-in-focus slot onto the exploration history and updates the group-in-focus slot to nil. Finally, when the model decides to go back, it updates the group-in-focus slot to the most recent entry in the exploration history, and then deletes that entry from the exploration history. With this representation and mechanism in place, CTE1.2's visual search is equipped to navigate through any hierarchically grouped layout of widgets on any number of web pages: pages with one flat level of links, pages like the one in Figure 1 with a regular grid of groups and links, and pages with arbitrarily arranged, nested and even overlapping groups.

ACT-R's eyes serially look at widgets and groups of widgets (we will refer to both individual widgets and groups as elements) in the device model, guided by a visual search process adapted from the Minimal Model of Visual Search [10]. Implemented in the ACT-R vision module, augmented with the EMMA model of visual preparation, execution and encoding [23], moving the eyes and extracting information from a UI element ranges in duration from 50 to 250 msec. The process starts in the upper-left corner of the frame and proceeds to look at the closest element with respect to the model's current point of visual attention. To make progress, the visual search marks this element as having been attended. The model will not look at an attended element again on this visit to its containing frame or group, but might do so in a subsequent visit, because elements revert to being unattended when the eyes leave a group or frame. Since distance between elements determines which elements are looked at and in which order (moderated by a noise function, so each run of the CTE1.2 model may be different), the influence of the 2-D layout of a UI emerges from CTE1.2's performance.
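Continuing the VisualLocation sketch above, the following is a hedged sketch of this bookkeeping: a group-in-focus slot with an exploration-history stack, and a noisy nearest-first choice of the next unattended element. The noise magnitude and all names are illustrative assumptions, not CTE1.2's actual parameters or ACT-R mechanisms.

import math
import random

class ExplorationState:
    def __init__(self):
        self.group_in_focus = None   # None ~ nil, the implicit top-level group
        self.history = []            # exploration history (a stack of prior foci)
        self.attended = set()        # labels already looked at on this visit

    def focus_on_group(self, group):
        self.history.append(self.group_in_focus)
        self.group_in_focus = group
        self.attended.clear()        # elements revert to unattended on leaving

    def select_link_and_transition(self):
        self.history.append(self.group_in_focus)
        self.group_in_focus = None   # a new page starts at its top level
        self.attended.clear()

    def go_back(self):
        self.group_in_focus = self.history.pop() if self.history else None
        self.attended.clear()

def next_fixation(point, elements, state, noise_sd=20.0):
    """Return the closest unattended element (to the current point of visual
    attention) within the group in focus, with Gaussian noise on the distance
    so that each run of the model can differ."""
    candidates = [e for e in elements
                  if e.member_of_group is state.group_in_focus
                  and e.label not in state.attended]
    if not candidates:
        return None
    def noisy_distance(e):
        return math.hypot(e.x - point[0], e.y - point[1]) + random.gauss(0.0, noise_sd)
    chosen = min(candidates, key=noisy_distance)
    state.attended.add(chosen.label)
    return chosen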
When the user model looks at an element, it extracts the text label from the device model and uses the representation of its semantic knowledge to decide what to do next. If the user model chooses to click on a widget that results in a transition to a different frame, or chooses to go back to the previous frame (e.g., clicks on the back button in a web browser), the user model's visual field is updated with the elements of the next frame. If the user model chooses to focus on a group or to go back from a group, it continues exploration among the member elements of the new group. Before discussing the decision process in more detail, we present the representation of semantic knowledge.

Representation of Semantic Knowledge
CTE1.2 creates a representation of the semantic knowledge of the user from the text of the exploration goal, the labels of the elements in the device model, and a large English-language semantic space housed at the AutoCWW [4] website at the University of Colorado (the first-year-college-level TASA corpus from Touchstone Applied Science Associates, Inc.).2 The AutoCWW tools use Latent Semantic Analysis (LSA) [19] to calculate the semantic relatedness of the goal to the text of elements, based on the cosine between two vectors, one representing the text in the goal and the other representing the text in the element. CTE1.2 creates a dictionary of these cosines, or infoscent scores, by calling out to a particular composite tool3 on the AutoCWW website that first (a) simulates human comprehension processes by elaborating each link text with semantically similar, familiar words in the TASA college-level corpus, and then (b) uses one-to-many comparison to compute the cosine between the goal text and each of the elaborated link texts. This dictionary is built after the device model and exploration goal have been defined but before the user model is run, and it is stored in a look-up table for access by the user model during model runs. When the model retrieves an infoscent score from this dictionary, noise is added to emulate a human's variable judgment about the similarity of concepts.

2 CTE1.2 can connect to other sources of infoscent scores, but a discussion of alternative sources is beyond the scope of this paper.
3 The particular tool queried over the Internet by CTE1.2 was
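A minimal sketch of that look-up-plus-noise step is shown below. The cosine values and the zero-mean Gaussian noise model are placeholders chosen for illustration; the actual cosines come from the AutoCWW/LSA service, and the actual noise formulation follows SNIF-ACT 2.0 [9, 26].

import random

INFOSCENT_DICTIONARY = {
    # goal text -> {element label -> LSA cosine}; the numbers here are made up.
    "article about ferns": {"Life Science": 0.42, "Plants": 0.61, "Organizations": 0.03},
}

def infoscent(goal: str, label: str, noise_sd: float = 0.1) -> float:
    """Look up the precomputed cosine and add noise at retrieval time to
    emulate a human's variable judgment of semantic similarity."""
    cosine = INFOSCENT_DICTIONARY[goal].get(label, 0.0)
    return cosine + random.gauss(0.0, noise_sd)   # a fresh noisy judgment per look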

The Serial Evaluation Decision Process
Each time the user model evaluates the infoscent of an element, it decides to either (1) continue to look at and evaluate another element, (2) click on the best widget or focus on the best group seen so far, or (3) go back to the previous group or frame. The model selects the action with the highest utility, as computed by three utility update functions. The first two utility update functions (Eq. 1 and Eq. 2) are from SNIF-ACT 2.0 [9] and remain unchanged in CTE1.2. The third utility update function is a major contribution of CTE1.2 over SNIF-ACT 2.0 and will be described in detail after reviewing the first two functions.

U_LookAt is the model's estimate of how beneficial it will be to continue to look at more elements. A high value of U_LookAt means that the elements looked at so far have been highly related to the goal, so this is a good information patch and looking more may find something even better. Mathematically,

    U_LookAt = (U'_LookAt + IS_Current) / (N' + 1)                                 [Eq. 1]

where
    U'_LookAt is the previously computed utility,
    IS_Current is the infoscent of the currently attended element, and
    N' is the number of elements already assessed in the current group before the currently attended element.

U_Choose is the model's estimate of how beneficial it would be to stop looking in this group and jump to a new group it has not yet explored. One way to interpret U_Choose is that it tracks how the attractiveness of the best element seen so far in the group compares to that of all the elements the model has seen in the group so far, as captured in U_LookAt. Thus, if U_LookAt > U_Choose, the model keeps looking at unattended elements, but when U_Choose > U_LookAt, the model stops looking and chooses to click on (if a link) or focus on (if a group) the best element seen so far. The k > 1 parameter in U_Choose biases the model to prefer U_LookAt when the model starts exploring a new group. Mathematically,

    U_Choose = (U'_Choose + IS_Best) / (N' + k + 1)                                [Eq. 2]

where
    U'_Choose is the previously computed utility,
    IS_Best is the highest infoscent attended in the current group,
    N' is the number of elements already assessed in the current group before the currently attended element, and
    k is a scaling parameter.

Both Eq. 1 and Eq. 2 were derived from a Bayesian analysis by Fu and Pirolli in SNIF-ACT 2.0 and thus have strong theoretical support. These two utility update functions also have empirical support from several modeling studies [9, 24] in which models used these update functions and had good fits to participant data on link selections. However, those models did not emphasize or use go-back behavior, and we found that our initial CogTool-Explorer model, which used the original U_GoBack update equation from SNIF-ACT 2.0, did not match participants' go-back behavior [25]. Therefore, CTE1.2 uses Eq. 3, which was developed in [26]:

    U_GoBack = MIS(elements assessed in the previous group, excluding the element selected from the previous group)
               - MIS(elements assessed in the current group, including the element selected from the previous group
                     and incorrect elements in the current group)
               - GoBackCost                                                        [Eq. 3]

where
    MIS is mean infoscent,
    GoBackCost is a parameter that represents the fixed cost incurred from interacting with the UI to go back, and
    incorrect elements are assigned zero infoscent.

The first term in Eq. 3 represents how attractive the model finds the links on the previous page that have not yet been explored; if this term is large, the model is likely to go back. The second term represents how confident the model is that the current page is on the correct path to its goal; if this term is large, the model is likely to keep exploring the current page.
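Read as code, the three updates and the resulting action selection look roughly like the sketch below. The parameter values shown for k and GoBackCost are placeholders, not CTE1.2's calibrated settings (see [9, 25, 26] for those).

def u_look_at(u_prev: float, is_current: float, n_prev: int) -> float:
    # Eq. 1: utility of continuing to look at more elements in the current group.
    return (u_prev + is_current) / (n_prev + 1)

def u_choose(u_prev: float, is_best: float, n_prev: int, k: float = 5.0) -> float:
    # Eq. 2: utility of acting on the best element seen so far (click a link or
    # focus on a group); k > 1 biases the model toward looking early in a group.
    return (u_prev + is_best) / (n_prev + k + 1)

def u_go_back(scents_prev_group, scents_current_group, go_back_cost: float = 0.5) -> float:
    # Eq. 3: mean infoscent still waiting on the previous page, minus mean
    # infoscent assessed on the current page (incorrect elements counted as 0),
    # minus the fixed cost of going back.
    def mis(scores):
        return sum(scores) / len(scores) if scores else 0.0
    return mis(scents_prev_group) - mis(scents_current_group) - go_back_cost

def next_action(u_look: float, u_ch: float, u_back: float) -> str:
    # The model selects the action with the highest utility.
    actions = {"look_at_another_element": u_look,
               "choose_best_element": u_ch,
               "go_back": u_back}
    return max(actions, key=actions.get)

Early in a group, N' is small, so the extra k in the denominator of Eq. 2 keeps U_Choose below U_LookAt and the model tends to keep looking before committing to the best element seen so far.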
ACT-R Hands: Clicking on Links and Buttons
When the user model decides to click on a link or the back button, it does so using ACT-R's standard motor module, as used throughout CogTool. The time taken to move the mouse is determined by the Welford formulation [28] of Fitts's Law [8] as implemented in ACT-R, with a = 0, b = 100 ms, and a minimum time of 100 ms. The time to click the mouse button is 150 msec, which emerges from ACT-R's standard motor constants. This click is sent to the device model, which feeds a new frame to ACT-R's visual module if the action changes what is visible on the device's display.
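For concreteness, a sketch of that pointing-time calculation is shown below, using the Welford form of Fitts's Law with the constants stated above (a = 0, b = 100 ms, a 100 ms floor, and a 150 ms click). It mirrors what ACT-R's motor module computes but is an illustration under those assumptions, not the module itself.

import math

def pointing_time_ms(distance: float, target_width: float,
                     a: float = 0.0, b: float = 100.0) -> float:
    """Welford form: MT = a + b * log2(distance / width + 0.5), floored at 100 ms."""
    index_of_difficulty = math.log2(distance / target_width + 0.5)
    return max(100.0, a + b * index_of_difficulty)

def click_time_ms() -> float:
    return 150.0   # emerges from ACT-R's standard motor constants

# e.g., moving 600 px to a 20 px-wide link, then clicking:
total_ms = pointing_time_ms(600, 20) + click_time_ms()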

ACT-R Cognition: Time to Access Infoscent
The time the model takes to perform its operations is important because if the model runs faster than humans, it will be able to explore more links in the allotted time than people can. If it performs more slowly than humans, it will run out of time before exploring as many links. Neither situation will predict human performance well, making the duration of operations a critical aspect of CTE1.2. The durations of CTE1.2's motor operators presented in the previous paragraph are familiar to HCI researchers. However, the time to access infoscent is a relatively new concept.

When the model decides to look at an element, a sequence of three ACT-R production rules is responsible for (1) looking at the element, (2) encoding the text, and (3) assessing the infoscent of the encoded text. The duration for looking at an element is determined by EMMA [23], and the duration for encoding the text is the default 50 ms for a single production in ACT-R; both are well established in the literature. The third production, which assesses the infoscent of an element, is unique to the SNIF-ACT 2.0 and CTE models. This production takes the encoded text of the element and the text from the exploration goal, and approximates the cognitive operation of assessing semantic similarity by invoking a LISP function to retrieve the infoscent of the element from the model's look-up table that represents its semantic knowledge. This LISP function is an example of using a computationally efficient implementation of a complex cognitive process, like assessing infoscent, in place of more native ACT-R mechanisms such as spreading activation between declarative memory chunks. This is a common expediency among cognitive modelers who wish to match patterns of human behavior but are not trying to also match the time course of behavior. Since the time course of behavior in this model affects its success (i.e., failure occurs because the model, and the humans, ran over the 130-second time limit), we have to care more about time. This means that the default 50 ms for the single production that invokes the LISP function would be shorter than the duration of a more native ACT-R implementation, which would require multiple production rules with latencies from declarative memory retrievals.

The above reasoning motivated us to perform iterative tests setting longer durations for the third production on a separate data set [25] from the evaluation in the next section. When the duration was set to 275 ms, the average duration between link selections by the model matched the 7.4 s observed in participants' behavior. This duration is used on another page layout and data set in the next section.

EVALUATION OF COGTOOL-EXPLORER 1.2 (CTE1.2)
We compared CTE1.2's performance to participant data on 36 tasks on three different layouts of the same information, to demonstrate its ability to predict the effects of layout and grouping. The first layout (multi-page, Figure 3) was used to set the duration of the infoscent assessment production in CTE1.1, so results for the multi-page layout should be considered explanations of the data rather than predictions. However, the results on both the second and third layouts (half-flattened, Figure 4, and multi-group, Figure 1) are true predictions, using only the task and UI descriptions, without looking at any human performance data.

The tasks were selected from three experiments previously reported by Blackmon et al. [3, 4] and Toldy [27] to provide a set of encyclopedia look-up tasks of varying difficulty that had been tested on all three UI layouts. All three experiments from which the data were drawn used the same procedure and the same tasks. The participants were presented with a webpage with a target paragraph at the top and links below (Figures 1, 3 and 4). They were asked to click links until they either successfully found the correct webpage or were shown a webpage announcing that time had run out. The tasks alternated between hard and easy tasks, counterbalanced to prevent order effects. For example, an easy task (i.e., one well supported by the UI design) was to look up "Fern"; 100% of the participants found its link ("Plants") in the Life Science category.
In contrast, a hard task was to look up "Lifesaving", with only 25% of the participants finding its link ("Organizations") in the Social Science category. Thirty-six to 60 undergraduate participants completed each task, earning course credit or $15. Participants had 130 seconds to complete each task. Logging software recorded each link clicked, the group heading under which the click was made, and the time elapsed since the previous click. From this log, comprising a total of 4979 completed tasks, we could extract many metrics of interest: for example, the percentage of participants who succeeded in each task, the number of clicks each participant made in each successful trial, and the percentage of participants who succeeded without error on each task.

The Layouts
The multi-page layout (Figure 3) starts with a top-level page comprised of the goal statement and a list of nine category links below it. When a link is clicked on this start page, a 2nd-level page appears with a list of links that are more specific aspects of the category link clicked on the top level. When a link is clicked on the 2nd-level page, a 3rd-level page appears with an alphabetical list of terms. If the correct path is followed, the goal term appears on the 3rd-level page. Participants used the browser's back button to go back from a lower-level page to a higher-level page.

The half-flattened layout (Figure 4) works like an accordion widget. It starts with the same top-level page as the multi-page layout, but when a category link is clicked, the links below it move down and its more specific links (those that would have been on a 2nd-level page in the multi-page layout) appear indented, just below the category link. Clicking on one of these links leads to the same 3rd-level pages as in the multi-page layout. Clicking on another category link at this point collapses the currently expanded category link and expands the one just clicked.

The multi-group layout (background of Figure 1) puts all 93 links on a single page, organized in 9 groups on a 3×3 grid. Each group has a heading that contains the same words as the category links in the multi-page and half-flattened layouts. Clicking on any link in this layout brings up the same 3rd-level pages as in the multi-page layout.

Figure 3. Multi-page layout.

Figure 4. Half-flattened layout. Clicking on subordinate links brings up the 3rd-level pages shown in Figure 3.

Metrics and Modeling Process
We compared the model runs by CTE1.2 to participant data on five task performance measures [26]. Due to space limitations, this paper reports only the following three metrics, which are both indicative of model goodness-of-fit and important to UI designers.

1. Correlation between model and participants on the percent of trials succeeding on each task (R² %Success). Percent success is common in user testing to inform UI designers about how successful their users will be with their design, so a high correlation between model and data will allow modeling to provide similar information.

2. Correlation between model and participants on the number of clicks on links to accomplish each task (R² ClicksToSuccess). This metric eliminates unsuccessful trials because some participants clicked two or three links and then did nothing until time ran out, whereas others continued to click (as did the model), so successful trials may be a better test of how well the model fits motivated users.

3. Correlation between model and participants on the percent of trials succeeding without error on each task (R² %ErrorFreeSuccess). This measure indicates the model's power to predict which tasks need no improvement and therefore no further design effort.

To obtain stable values for the above metrics, we ran many sets of model runs until convergence, where each set comprises the same number of model runs as participant trials for each of the 36 tasks. We first ran two sets and checked whether %Success for all 36 tasks on the first set was within 1% of the %Success on the combination of both sets (it was not). We then ran a third set and compared the %Success for all 36 tasks for the combined runs of the first two sets to the combined runs of all three sets. We continued to run sets of model runs until %Success for all 36 tasks in the new combined set was within 1% of that in the previous combined set. All tasks converged within 16 sets of model runs (over 20,000 model runs).
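The run-to-convergence procedure just described can be summarized by the sketch below. Here run_one_set() is a hypothetical stand-in for running CTE1.2 once per participant trial on each of the 36 tasks; the 1% tolerance is the criterion stated above.

def run_to_convergence(run_one_set, n_tasks=36, tolerance=1.0, max_sets=50):
    """Keep adding sets of model runs until every task's cumulative %Success
    changes by no more than `tolerance` percentage points."""
    successes = [0] * n_tasks          # cumulative successes per task
    trials = [0] * n_tasks             # cumulative model runs per task
    prev_pct = None
    for n_sets in range(1, max_sets + 1):
        set_successes, set_trials = run_one_set()      # lists of length n_tasks
        for t in range(n_tasks):
            successes[t] += set_successes[t]
            trials[t] += set_trials[t]
        pct = [100.0 * successes[t] / trials[t] for t in range(n_tasks)]
        if prev_pct is not None and all(abs(a - b) <= tolerance
                                        for a, b in zip(pct, prev_pct)):
            return pct, n_sets         # converged
        prev_pct = pct
    return pct, max_sets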

Results
Table 1 shows the results of running CTE1.2 to convergence on the three layouts and comparing them to both human data and CTE1.1's predictions on the same tasks.

Table 1. Correlations of CTE1.1's and CTE1.2's predictions with human data.

CTE1.2 is identical to CTE1.1 on the multi-page layout (because there are no groups in that layout, so the models perform the same), is statistically indistinguishable from CTE1.1 on the half-flattened layout, but improves substantially on the multi-group layout. As we speculated in [26], the half-flattened layout has at most 24 links visible at one time, which may not be sufficiently difficult for participants to adopt a hierarchical visual search strategy. The benefit of hierarchical visual search is revealed in the multi-group layout, however, with its 9 categories and 93 links visible on its page.

CTE1.2 accounts for 63-82% of the variance in novice behavior without using human data to set any parameters. In contrast, prior models like DOI-ACT [5] and SNIF-ACT 2.0 [9] accounted for 56% (ClicksToSuccess) and 94% (%Success) of the variance in their human data, respectively, but fit model parameters to the same human data used to evaluate the models. CTE1.2 is the first process model of information foraging that has been applied to different layouts without peeking at any human data to set parameters. This avoids over-fitting the model to data and increases the model's ability to generalize.

Correlation alone does not tell the entire story, however. For example, Figure 5 shows that CTE1.2 underpredicts human performance on the hardest tasks, predicting %Success of less than 20% for many tasks on which people never performed that badly. The next section demonstrates how, despite imperfect predictions, CTE1.2 can identify those tasks most in need of UI design attention and those where the UI already supports the user.

Figure 5. CTE1.2's predicted %Success versus observed %Success. If CTE1.2 perfectly matched participant data, all data points would lie on the green diagonal line. The red line is the best-fitting line for the data points.

DISCUSSION
Just as user testing can identify which tasks are not well supported by the current UI design, CTE1.2's predictions can be used by UI designers for the same purpose. For example, the leftmost two columns of Table 2 display each task and the %Success attained by the human participants using the multi-group layout. The three rightmost columns show how successful CTE1.2 would be at identifying easy tasks (that require no additional design effort) and hard tasks (that should receive additional attention and redesign) under different definitions of easy and hard.

Table 2. Identification of easy and hard tasks under several definitions of easy and hard. Shading indicates hits, misses and false alarms as summarized in the last four rows.

We present several definitions of easy and hard because the criteria for defining these categories are usually dependent on business considerations. For example, an e-business project may have very stringent criteria because its customers may flee to a competitor as soon as they lose their way in the site, whereas a site providing information about disease treatment and prognosis may be able to depend on the persistence of motivated users and have a less stringent definition. Whatever the criteria, the project team could concentrate their design effort on the hard tasks, ignore the easy tasks, and move on to considering the moderate tasks (categorized as neither easy nor hard) if there was time before the site had to be released.

For example, if the product team decided that an easy task was one where 95% of the people could succeed in 2 minutes (approximately the time limit in the experiments) and that a hard task was one that 75% or fewer people could succeed in that time, then CTE1.2 could correctly identify 87% of the easy tasks and 93% of the hard tasks using the same criteria for the model. However, it would miss one hard task and would identify four moderate tasks as hard, so design effort would not be expended exactly as it would were user testing data available. If the product team had a less stringent definition of easy (i.e., 90% success) and hard (50% failure), then CTE1.2 would not miss any of the really hard tasks, but its false alarm rate would be higher (35% of the tasks it identifies as hard would not actually be hard, and one would actually be easy), so design effort would be expended needlessly.

However, there is no necessity to use the same criteria for both the model and the human data. Knowing that CTE1.2 underpredicts human performance, as shown in Figure 5, the criteria for CTE1.2 could be set to 50% in order to identify tasks that would be successful for 75% or fewer users. Likewise, CTE1.2's criterion for an easy task could be set at 90%, resulting in the rightmost column in Table 2. This results in 93% of both easy and hard tasks being correctly identified, with only 1 miss (7%) and 3 false alarms (14%).

Another interesting point is that CTE1.2's predictions of which group heading users will click in first are even more promising than its predictions of our main metrics. Blackmon has shown in laboratory studies [3], and Wolfson and Bailey have shown in practice [29], that a user's first click is highly predictive of eventual success. We analyzed the correspondence between CTE1.2's predictions of which group heading contained the link first clicked in each task in the multi-group layout and the observed first clicks of the participants. CTE1.2's predictions accounted for 71% of the variance, with no bias toward under- or over-predicting. Thus, CTE1.2 could provide targeted guidance for heading label choices in UI designs.

CONCLUSION AND FUTURE WORK
We have developed CogTool-Explorer 1.2 (CTE1.2),5 a model of goal-directed user exploration that considers both the layout position and the grouping of on-screen options, and test results show that the model accounted for 63-82% of the variance of human performance on three measures of interest to HCI. The model's parameters were set using a multi-page layout of information, and it attained this level of fit to participant data on a half-flattened (accordion-style) layout and a multi-group layout, suggesting that the model is not over-fitted to a particular layout. We showed how CTE1.2 might be used to identify tasks where the UI needs to be redesigned to support human exploration and tasks where no more design effort is required, attaining over a 90% hit rate for easy and hard tasks, missing less than 10% of the hard tasks, with less than 20% false alarms.

5 CTE1.2 is part of the CogTool open source project and can be downloaded, as can all of CogTool, from

Our eventual goal is for CTE1.2 to work for a wide range of UIs, so that it can be used as a predictive modeling tool for design. We must further test and likely refine the model on many other UIs before we can rely on its predictions in general, but our results so far are encouraging. Several avenues of future work may increase its accuracy as a predictive model and its usefulness as a tool for design.

For example, CTE1.2 uses only infoscent to evaluate links; it is likely that humans also use logical reasoning mechanisms, especially when information foraging fails, as suggested by CTE1.2's under-prediction of success on hard tasks. Using the categorical relationships of words as well as a statistical model of semantic similarity, as in [5], especially when UIs are arranged in groups, may be a path to improvement. In addition, AutoCWW [4] uses familiarity of words as well as infoscent to make its predictions; including familiarity in CTE1.2's decision process may also increase its predictive power.
CTE1.2 currently assumes that all information is equally visible, ignoring contrast, color, size, etc.; thus, it should be considered a test of only the textual labels, grouping and positioning at this point. Adding a model of saliency (e.g., [12] and its successors) would fit well within the ACT-R framework and could allow future versions of CTE to be applied to more realistic UIs. Further, CTE1.2 does not model the psychological processes by which visual groups are formed and recognized. Rather, group relationships are provided as input to the model by the human modeler (as is also the case for AutoCWW [4] and DOI-ACT [5]). Future work can explore the use of other computational models of visual grouping, for example [22], as input to CTE. This has the potential both to increase the accuracy of CTE's predictions and to decrease the work for a UI designer using CTE.

In sum, CTE1.2 began with SNIF-ACT 2.0 [9], embodied it with the eyes and hands of ACT-R [1], guided its visual search with the Minimal Model of Visual Search [10] and knowledge of grouping [26], improved its Go-Back utility update function [25, 26], and was aligned with the time course of human behavior [26]. Built within CogTool [13] so it is easy to represent UI layouts, run the model, and present results, CTE1.2 contributes to human performance modeling in HCI and to our set of research and design tools.

ACKNOWLEDGMENTS
We thank the amazing CogTool team. This research was supported in part by funds from IBM, NASA, Boeing, NEC, PARC, DSO, and ONR, N . The views and conclusions in this paper are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of IBM, NASA, Boeing, NEC, PARC, DSO, ONR, or the U.S. Government.

REFERENCES
1. Anderson, J. R., Bothell, D., Byrne, M. D., Douglass, S., Lebiere, C., and Qin, Y. (2004). An integrated theory of the mind. Psychological Review, 111(4),
2. Bellamy, R., John, B. E., and Kogan, S. (2011). Deploying CogTool: Integrating quantitative usability assessment into real-world software development. In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11). ACM, New York, NY, USA,

3. Blackmon, M. H. (2012). Information scent determines attention allocation and link selection among multiple information patches on a webpage. Behaviour & Information Technology, 31(1),
4. Blackmon, M. H., Kitajima, M., and Polson, P. G. (2005). Tool for accurately predicting website navigation problems, non-problems, problem severity, and effectiveness of repairs. In Proc. CHI 2005, ACM Press,
5. Budiu, R., and Pirolli, P. L. (2007). Modeling navigation in degree of interest trees. In Proc. of the 29th Annual Conference of the Cognitive Science Society, Cognitive Science Society.
6. Card, S. K., Moran, T. P., and Newell, A. (1980). The keystroke-level model for user performance time with interactive systems. Communications of the ACM, 23(7),
7. Chi, E. H., Rosien, A., Supattanasiri, G., Williams, A., Royer, C., Chow, C., Robles, E., Dalal, B., Chen, J., and Cousins, S. (2003). The Bloodhound project: automating discovery of web usability issues using the InfoScent simulator. In Proc. CHI 2003, ACM Press,
8. Fitts, P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47,
9. Fu, W.-T., and Pirolli, P. (2007). SNIF-ACT: A cognitive model of user navigation on the World Wide Web. Human-Computer Interaction, 22,
10. Halverson, T., and Hornof, A. J. (2007). A minimal model for predicting visual search in human-computer interaction. In Proc. CHI 2007, ACM Press,
11. Hornof, A. J. (2004). Cognitive strategies for the visual search of hierarchical computer displays. Human-Computer Interaction, 19,
12. Itti, L., Koch, C., and Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11),
13. John, B. E., Prevas, K., Salvucci, D. D., and Koedinger, K. (2004). Predictive human performance modeling made easy. In Proc. CHI 2004, ACM Press,
14. Kintsch, W. (1988). The role of knowledge in discourse comprehension: A construction-integration model. Psychological Review, 95,
15. Kitajima, M., Blackmon, M. H., and Polson, P. G. (2000). A comprehension-based model of Web navigation and its application to Web usability analysis. In S. McDonald, Y. Waern and G. Cockton (Eds.), People and Computers XIV - Usability or Else! (Proceedings of HCI 2000). Springer-Verlag.
16. Kitajima, M., Blackmon, M. H., and Polson, P. G. (2005). Cognitive architecture for website design and usability evaluation: Comprehension and information scent in performing by exploration. HCI International.
17. Kitajima, M., and Polson, P. (1997). A comprehension-based model of exploration. Human-Computer Interaction, 12(4),
18. Knight, A., Pyrzak, G., and Green, C. (2007). When two methods are better than one: combining user study with cognitive modeling. Ext. Abstracts CHI 2007, ACM Press,
19. Landauer, T. K., McNamara, D. S., Dennis, S., and Kintsch, W. (Eds.). Handbook of Latent Semantic Analysis. Mahwah, NJ: Lawrence Erlbaum Associates,
20. Miller, C. S., and Remington, R. W. (2004). Modeling information navigation: Implications for information architecture. Human-Computer Interaction, 19,
21. Pirolli, P., and Card, S. K. (1999). Information foraging. Psychological Review, 106,
22. Rosenholtz, R., Twarog, N. R., Schinkel-Bielefeld, N., and Wattenberg, M. (2009). An intuitive model of perceptual grouping for HCI design. In Proc. CHI 2009, ACM Press,
23. Salvucci, D. D. (2001). An integrated model of eye movements and visual encoding. Cognitive Systems Research, 1(4),
24. Teo, L., and John, B. E.
(2008). Towards a tool for predicting goal-directed exploratory behavior. In Proc. of the HFES 52nd Annual Meeting,
25. Teo, L., and John, B. E. (2011). The evolution of a goal-directed exploration model: Effects of information scent and GoBack utility on successful exploration. Topics in Cognitive Science, 3,
26. Teo, L. (2011). Modeling Goal-Directed User Exploration in Human-Computer Interaction. Unpublished doctoral dissertation, Carnegie Mellon University.
27. Toldy, M. E. (2009). The Impact of Working Memory Limitations and Distributed Cognition on Solving Search Problems on Complex Informational Websites. Unpublished doctoral dissertation, University of Colorado Boulder, Department of Psychology.
28. Welford, A. T. (1960). The measurement of sensory-motor performance: Survey and reappraisal of twelve years' progress. Ergonomics, 3,
29. Wolfson, C. A., Bailey, R. W., Nall, J., and Koyani, S. (2008). Contextual card sorting (or FirstClick testing): A new methodology for validating information architectures. Proceedings of the UPA.


More information

Process of Interaction Design and Design Languages

Process of Interaction Design and Design Languages Process of Interaction Design and Design Languages Process of Interaction Design This week, we will explore how we can design and build interactive products What is different in interaction design compared

More information

Automatic Reconstruction of the Underlying Interaction Design of Web Applications

Automatic Reconstruction of the Underlying Interaction Design of Web Applications Automatic Reconstruction of the Underlying Interaction Design of Web Applications L.Paganelli, F.Paternò C.N.R., Pisa Via G.Moruzzi 1 {laila.paganelli, fabio.paterno}@cnuce.cnr.it ABSTRACT In this paper

More information

Cognitive Disability and Technology: Universal Design Considerations

Cognitive Disability and Technology: Universal Design Considerations Cognitive Disability and Technology: Universal Design Considerations Clayton Lewis Coleman Institute for Cognitive Disabilities RERC-ACT clayton.lewis@colorado.edu Prepared for AUCD Training Symposium,

More information

Human Performance Regression Testing

Human Performance Regression Testing Human Performance Regression Testing Amanda Swearngin, Myra B. Cohen Dept. of Computer Science & Eng. University of Nebraska-Lincoln, USA Lincoln, NE 68588-0115 {aswearn,myra}@cse.unl.edu Bonnie E. John,

More information

Designing the User Interface

Designing the User Interface Designing the User Interface Strategies for Effective Human-Computer Interaction Second Edition Ben Shneiderman The University of Maryland Addison-Wesley Publishing Company Reading, Massachusetts Menlo

More information

Web-based Interactive Support for Combining Contextual and Procedural. design knowledge

Web-based Interactive Support for Combining Contextual and Procedural. design knowledge Web-based Interactive Support for Combining Contextual and Procedural Design Knowledge J.-H. Lee & Z.-X. Chou Graduate School of Computational Design, NYUST, Touliu, Taiwan ABSTRACT: Design study can take

More information

Visual Appeal vs. Usability: Which One Influences User Perceptions of a Website More?

Visual Appeal vs. Usability: Which One Influences User Perceptions of a Website More? 1 of 9 10/3/2009 9:42 PM October 2009, Vol. 11 Issue 2 Volume 11 Issue 2 Past Issues A-Z List Usability News is a free web newsletter that is produced by the Software Usability Research Laboratory (SURL)

More information

Dynamic Visualization of Hubs and Authorities during Web Search

Dynamic Visualization of Hubs and Authorities during Web Search Dynamic Visualization of Hubs and Authorities during Web Search Richard H. Fowler 1, David Navarro, Wendy A. Lawrence-Fowler, Xusheng Wang Department of Computer Science University of Texas Pan American

More information

2/18/2009. Introducing Interactive Systems Design and Evaluation: Usability and Users First. Outlines. What is an interactive system

2/18/2009. Introducing Interactive Systems Design and Evaluation: Usability and Users First. Outlines. What is an interactive system Introducing Interactive Systems Design and Evaluation: Usability and Users First Ahmed Seffah Human-Centered Software Engineering Group Department of Computer Science and Software Engineering Concordia

More information

Scroll Display: Pointing Device for Palmtop Computers

Scroll Display: Pointing Device for Palmtop Computers Asia Pacific Computer Human Interaction 1998 (APCHI 98), Japan, July 15-17, 1998, pp. 243-248, IEEE Computer Society, ISBN 0-8186-8347-3/98 $10.00 (c) 1998 IEEE Scroll Display: Pointing Device for Palmtop

More information

Enhancing KLM (Keystroke-Level Model) to Fit Touch Screen Mobile Devices

Enhancing KLM (Keystroke-Level Model) to Fit Touch Screen Mobile Devices El Batran, Karim Mohsen Mahmoud and Dunlop, Mark (2014) Enhancing KLM (Keystroke-Level Model) to fit touch screen mobile devices. In: Proceedings of the 16th International Conference on Human-Computer

More information

Model Evaluation. ACT-R, IMPRINT, and Matlab Comparisons, Parameter Optimizations, and Opportunities. Bengt Fornberg

Model Evaluation. ACT-R, IMPRINT, and Matlab Comparisons, Parameter Optimizations, and Opportunities. Bengt Fornberg Model Evaluation Slide 1 of 12 ACT-R, IMPRINT, and Matlab Comparisons, Parameter Optimizations, and Opportunities - 'Unified test problems' Keystroke entry task, and RADAR - RADAR Some modeling results

More information

Evaluation of Commercial Web Engineering Processes

Evaluation of Commercial Web Engineering Processes Evaluation of Commercial Web Engineering Processes Andrew McDonald and Ray Welland Department of Computing Science, University of Glasgow, Glasgow, Scotland. G12 8QQ. {andrew, ray}@dcs.gla.ac.uk, http://www.dcs.gla.ac.uk/

More information

Shedding Light on the Graph Schema

Shedding Light on the Graph Schema Shedding Light on the Graph Schema Raj M. Ratwani (rratwani@gmu.edu) George Mason University J. Gregory Trafton (trafton@itd.nrl.navy.mil) Naval Research Laboratory Abstract The current theories of graph

More information

Nektarios Kostaras, Mixalis Xenos. Hellenic Open University, School of Sciences & Technology, Patras, Greece

Nektarios Kostaras, Mixalis Xenos. Hellenic Open University, School of Sciences & Technology, Patras, Greece Kostaras N., Xenos M., Assessing Educational Web-site Usability using Heuristic Evaluation Rules, 11th Panhellenic Conference on Informatics with international participation, Vol. B, pp. 543-550, 18-20

More information

3Lesson 3: Web Project Management Fundamentals Objectives

3Lesson 3: Web Project Management Fundamentals Objectives 3Lesson 3: Web Project Management Fundamentals Objectives By the end of this lesson, you will be able to: 1.1.11: Determine site project implementation factors (includes stakeholder input, time frame,

More information

Page 1. Welcome! Lecture 1: Interfaces & Users. Who / what / where / when / why / how. What s a Graphical User Interface?

Page 1. Welcome! Lecture 1: Interfaces & Users. Who / what / where / when / why / how. What s a Graphical User Interface? Welcome! Lecture 1: Interfaces & Users About me Dario Salvucci, Associate Professor, CS Email: salvucci@cs.drexel.edu Office: University Crossings 142 Office hours: Thursday 11-12, or email for appt. About

More information

NPTEL Computer Science and Engineering Human-Computer Interaction

NPTEL Computer Science and Engineering Human-Computer Interaction M4 L5 Heuristic Evaluation Objective: To understand the process of Heuristic Evaluation.. To employ the ten principles for evaluating an interface. Introduction: Heuristics evaluation is s systematic process

More information

Predictive Human Performance Modeling Made Easy

Predictive Human Performance Modeling Made Easy Predictive Human Performance Modeling Made Easy Bonnie E. John HCI Institute Carnegie Mellon Univ. Pittsburgh, PA 15213 bej@cs.cmu.edu Konstantine Prevas HCI Institute Carnegie Mellon Univ. Pittsburgh,

More information

Cognitive Walkthrough

Cognitive Walkthrough 1 Cognitive Walkthrough C. Wharton, J. Rieman, C. Lewis and P. Polson, The Cognitive Walkthrough Method: A Practitioner s Guide, in J. Nielsen and R. Mack (eds.), Usability Inspection Methods, John Wiley

More information

SEM / YEAR: VIII/ IV QUESTION BANK SUBJECT: CS6008 HUMAN COMPUTER INTERACTION

SEM / YEAR: VIII/ IV QUESTION BANK SUBJECT: CS6008 HUMAN COMPUTER INTERACTION QUESTION BANK SUBJECT: CS600 HUMAN COMPUTER INTERACTION SEM / YEAR: VIII/ IV UNIT I - FOUNDATIONS OF HCI The Human: I/O channels Memory Reasoning and problem solving; The computer: Devices Memory processing

More information

Evaluating an Associative Browsing Model for Personal Information

Evaluating an Associative Browsing Model for Personal Information Evaluating an Associative Browsing Model for Personal Information Jinyoung Kim, W. Bruce Croft, David A. Smith and Anton Bakalov Department of Computer Science University of Massachusetts Amherst {jykim,croft,dasmith,abakalov}@cs.umass.edu

More information

ITERATIVE SEARCHING IN AN ONLINE DATABASE. Susan T. Dumais and Deborah G. Schmitt Cognitive Science Research Group Bellcore Morristown, NJ

ITERATIVE SEARCHING IN AN ONLINE DATABASE. Susan T. Dumais and Deborah G. Schmitt Cognitive Science Research Group Bellcore Morristown, NJ - 1 - ITERATIVE SEARCHING IN AN ONLINE DATABASE Susan T. Dumais and Deborah G. Schmitt Cognitive Science Research Group Bellcore Morristown, NJ 07962-1910 ABSTRACT An experiment examined how people use

More information

Principles of Visual Design

Principles of Visual Design Principles of Visual Design Lucia Terrenghi Page 1 Talk about rules in design No fixed rules Just guidelines, principles Where do they come from? How can I apply them? Page 2 Outline Origins of the principles

More information

Evaluation and Design Issues of Nordic DC Metadata Creation Tool

Evaluation and Design Issues of Nordic DC Metadata Creation Tool Evaluation and Design Issues of Nordic DC Metadata Creation Tool Preben Hansen SICS Swedish Institute of computer Science Box 1264, SE-164 29 Kista, Sweden preben@sics.se Abstract This paper presents results

More information

LetterScroll: Text Entry Using a Wheel for Visually Impaired Users

LetterScroll: Text Entry Using a Wheel for Visually Impaired Users LetterScroll: Text Entry Using a Wheel for Visually Impaired Users Hussain Tinwala Dept. of Computer Science and Engineering, York University 4700 Keele Street Toronto, ON, CANADA M3J 1P3 hussain@cse.yorku.ca

More information

A Tactile/Haptic Interface Object Reference Model

A Tactile/Haptic Interface Object Reference Model A Tactile/Haptic Interface Object Reference Model Jim Carter USERLab, Department of Computer Science University of Saskatchewan Saskatoon, SK, CANADA (306) 966-4893 carter@cs.usask.ca ABSTRACT In this

More information

1 Introduction RHIT UNDERGRAD. MATH. J., VOL. 17, NO. 1 PAGE 159

1 Introduction RHIT UNDERGRAD. MATH. J., VOL. 17, NO. 1 PAGE 159 RHIT UNDERGRAD. MATH. J., VOL. 17, NO. 1 PAGE 159 1 Introduction Kidney transplantation is widely accepted as the preferred treatment for the majority of patients with end stage renal disease [11]. Patients

More information

CAR-TR-673 April 1993 CS-TR-3078 ISR AlphaSlider: Searching Textual Lists with Sliders. Masakazu Osada Holmes Liao Ben Shneiderman

CAR-TR-673 April 1993 CS-TR-3078 ISR AlphaSlider: Searching Textual Lists with Sliders. Masakazu Osada Holmes Liao Ben Shneiderman CAR-TR-673 April 1993 CS-TR-3078 ISR-93-52 AlphaSlider: Searching Textual Lists with Sliders Masakazu Osada Holmes Liao Ben Shneiderman Department of Computer Science, Human-Computer Interaction Laboratory,

More information

Application Use Strategies

Application Use Strategies Application Use Strategies Suresh K. Bhavnani Strategies for using complex computer applications such as word processors, and computer-aided drafting (CAD) systems, are general and goal-directed methods

More information

Analytical Evaluation

Analytical Evaluation Analytical Evaluation November 7, 2016 1 Questions? 2 Overview of Today s Lecture Analytical Evaluation Inspections Performance modelling 3 Analytical Evaluations Evaluations without involving users 4

More information

What is interaction? communication user system. communication between the user and the system

What is interaction? communication user system. communication between the user and the system What is interaction? communication user system communication between the user and the system 2 terms of interaction The purpose of interactive system is to help user in accomplishing goals from some domain.

More information

Usability Evaluation of Tools for Nomadic Application Development

Usability Evaluation of Tools for Nomadic Application Development Usability Evaluation of Tools for Nomadic Application Development Cristina Chesta (1), Carmen Santoro (2), Fabio Paternò (2) (1) Motorola Electronics S.p.a. GSG Italy Via Cardinal Massaia 83, 10147 Torino

More information

AN APPROACH FOR GRAPHICAL USER INTERFACE DEVELOPMENT FOR STEREOSCOPIC VISUALIZATION SYSTEM

AN APPROACH FOR GRAPHICAL USER INTERFACE DEVELOPMENT FOR STEREOSCOPIC VISUALIZATION SYSTEM AN APPROACH FOR GRAPHICAL USER INTERFACE DEVELOPMENT FOR STEREOSCOPIC VISUALIZATION SYSTEM Rositsa R. Radoeva St. Cyril and St. Methodius University of Veliko Tarnovo, ABSTRACT Human-computer interaction

More information

User Centered Design - Maximising the Use of Portal

User Centered Design - Maximising the Use of Portal User Centered Design - Maximising the Use of Portal Sean Kelly, Certus Solutions Limited General Manager, Enterprise Web Solutions Agenda What is UCD Why User Centered Design? Certus Approach - interact

More information

Module 5. Function-Oriented Software Design. Version 2 CSE IIT, Kharagpur

Module 5. Function-Oriented Software Design. Version 2 CSE IIT, Kharagpur Module 5 Function-Oriented Software Design Lesson 12 Structured Design Specific Instructional Objectives At the end of this lesson the student will be able to: Identify the aim of structured design. Explain

More information

Comparing the Usability of RoboFlag Interface Alternatives*

Comparing the Usability of RoboFlag Interface Alternatives* Comparing the Usability of RoboFlag Interface Alternatives* Sangeeta Shankar, Yi Jin, Li Su, Julie A. Adams, and Robert Bodenheimer Department of Electrical Engineering and Computer Science Vanderbilt

More information

Cognitive Walkthrough. Francesca Rizzo 24 novembre 2004

Cognitive Walkthrough. Francesca Rizzo 24 novembre 2004 Cognitive Walkthrough Francesca Rizzo 24 novembre 2004 The cognitive walkthrough It is a task-based inspection method widely adopted in evaluating user interfaces It requires: A low-fi prototype of the

More information

Overview of Today s Lecture. Analytical Evaluation / Usability Testing. ex: find a book at Amazon.ca via search

Overview of Today s Lecture. Analytical Evaluation / Usability Testing. ex: find a book at Amazon.ca via search Overview of Today s Lecture Analytical Evaluation / Usability Testing November 17, 2017 Analytical Evaluation Inspections Recapping cognitive walkthrough Heuristic evaluation Performance modelling 1 2

More information

Towards Systematic Usability Verification

Towards Systematic Usability Verification Towards Systematic Usability Verification Max Möllers RWTH Aachen University 52056 Aachen, Germany max@cs.rwth-aachen.de Jonathan Diehl RWTH Aachen University 52056 Aachen, Germany diehl@cs.rwth-aachen.de

More information

Ovid Technologies, Inc. Databases

Ovid Technologies, Inc. Databases Physical Therapy Workshop. August 10, 2001, 10:00 a.m. 12:30 p.m. Guide No. 1. Search terms: Diabetes Mellitus and Skin. Ovid Technologies, Inc. Databases ACCESS TO THE OVID DATABASES You must first go

More information

Recall Butlers-Based Design

Recall Butlers-Based Design Input Performance 1 Recall Butlers-Based Design Respect physical and mental effort Physical Treat clicks as sacred Remember where they put things Remember what they told you Stick with a mode Mental also

More information

Exercise. Lecture 5-1: Usability Methods II. Review. Oral B CrossAction (white & pink) Oral B Advantage Reach Max Reach Performance (blue & white)

Exercise. Lecture 5-1: Usability Methods II. Review. Oral B CrossAction (white & pink) Oral B Advantage Reach Max Reach Performance (blue & white) : Usability Methods II Exercise Design Process continued Iterative Design: Gould and Lewis (1985) User-Centered Design Essential Design Activities: Cohill et al. Task Analysis Formal Task Analyses GOMS

More information

Interaction design. The process of interaction design. Requirements. Data gathering. Interpretation and data analysis. Conceptual design.

Interaction design. The process of interaction design. Requirements. Data gathering. Interpretation and data analysis. Conceptual design. Interaction design The process of interaction design Requirements Data gathering Interpretation and data analysis Conceptual design Prototyping Physical design Introduction We have looked at ways to gather

More information

STATISTICS (STAT) Statistics (STAT) 1

STATISTICS (STAT) Statistics (STAT) 1 Statistics (STAT) 1 STATISTICS (STAT) STAT 2013 Elementary Statistics (A) Prerequisites: MATH 1483 or MATH 1513, each with a grade of "C" or better; or an acceptable placement score (see placement.okstate.edu).

More information

Using User Interaction to Model User Comprehension on the Web Navigation

Using User Interaction to Model User Comprehension on the Web Navigation International Journal of Computer Information Systems and Industrial Management Applications. ISSN 2150-7988 Volume 3 (2011) pp. 878-885 MIR Labs, www.mirlabs.net/ijcisim/index.html Using User Interaction

More information

EXAM PREPARATION GUIDE

EXAM PREPARATION GUIDE When Recognition Matters EXAM PREPARATION GUIDE PECB Certified ISO 22000 Lead Implementer www.pecb.com The objective of the Certified ISO 22000 Lead Implementer examination is to ensure that the candidate

More information

Course Outline. Department of Computing Science Faculty of Science. COMP 3450 Human Computer Interaction Design (3,1,0) Fall 2015

Course Outline. Department of Computing Science Faculty of Science. COMP 3450 Human Computer Interaction Design (3,1,0) Fall 2015 Course Outline Department of Computing Science Faculty of Science COMP 3450 Human Computer Interaction Design (3,1,0) Fall 2015 Instructor: Office: Phone/Voice Mail: E-Mail: Course Description Students

More information

GAZE TRACKING APPLIED TO IMAGE INDEXING

GAZE TRACKING APPLIED TO IMAGE INDEXING GAZE TRACKING APPLIED TO IMAGE INDEXING Jean Martinet, Adel Lablack, Nacim Ihaddadene, Chabane Djeraba University of Lille, France Definition: Detecting and tracking the gaze of people looking at images

More information

Framework of a Real-Time Adaptive Hypermedia System

Framework of a Real-Time Adaptive Hypermedia System Framework of a Real-Time Adaptive Hypermedia System Rui Li rxl5604@rit.edu Evelyn Rozanski rozanski@it.rit.edu Anne Haake arh@it.rit.edu ABSTRACT In this paper, we describe a framework for the design and

More information

Survey Creation Workflow These are the high level steps that are followed to successfully create and deploy a new survey:

Survey Creation Workflow These are the high level steps that are followed to successfully create and deploy a new survey: Overview of Survey Administration The first thing you see when you open up your browser to the Ultimate Survey Software is the Login Page. You will find that you see three icons at the top of the page,

More information

Contextion: A Framework for Developing Context-Aware Mobile Applications

Contextion: A Framework for Developing Context-Aware Mobile Applications Contextion: A Framework for Developing Context-Aware Mobile Applications Elizabeth Williams, Jeff Gray Department of Computer Science, University of Alabama eawilliams2@crimson.ua.edu, gray@cs.ua.edu Abstract

More information

Alternative GUI for Interaction in Mobile Environment

Alternative GUI for Interaction in Mobile Environment Alternative GUI for Interaction in Mobile Environment Juraj Švec * Department of Computer Science and Engineering Czech Technical University in Prague Prague / Czech Republic Abstract Standard personal

More information

New Approaches to Help Users Get Started with Visual Interfaces: Multi-Layered Interfaces and Integrated Initial Guidance

New Approaches to Help Users Get Started with Visual Interfaces: Multi-Layered Interfaces and Integrated Initial Guidance New Approaches to Help Users Get Started with Visual Interfaces: Multi-Layered Interfaces and Integrated Initial Guidance Hyunmo Kang, Catherine Plaisant and Ben Shneiderman Department of Computer Science

More information

A Comparison of Error Metrics for Learning Model Parameters in Bayesian Knowledge Tracing

A Comparison of Error Metrics for Learning Model Parameters in Bayesian Knowledge Tracing A Comparison of Error Metrics for Learning Model Parameters in Bayesian Knowledge Tracing Asif Dhanani Seung Yeon Lee Phitchaya Phothilimthana Zachary Pardos Electrical Engineering and Computer Sciences

More information

Screen Fingerprints: A Novel Modality for Active Authentication

Screen Fingerprints: A Novel Modality for Active Authentication Security: DArPA Screen Fingerprints: A Novel Modality for Active Authentication Vishal M. Patel, University of Maryland, College Park Tom Yeh, University of Colorado, Boulder Mohammed E. Fathy and Yangmuzi

More information

Übung zur Vorlesung Mensch-Maschine-Interaktion

Übung zur Vorlesung Mensch-Maschine-Interaktion Übung zur Vorlesung Mensch-Maschine-Interaktion Sara Streng Ludwig-Maximilians-Universität München Wintersemester 2007/2008 Ludwig-Maximilians-Universität München Sara Streng MMI Übung 2-1 Übersicht GOMS

More information