Selecting Success Criteria: Experiences with an Academic Library Catalogue

Paul Clough and Paula Goodale
p.d.clough@sheffield.ac.uk
Information School, University of Sheffield, Sheffield, UK

Abstract. Multiple methods exist for evaluating search systems, ranging from more user-oriented approaches to those focused on evaluating system performance. When preparing an evaluation, key questions include: (i) why conduct the evaluation, (ii) what should be evaluated, and (iii) how the evaluation should be conducted. Over recent years there has been more focus on the end users of search systems and on understanding what they view as success. In this paper we consider what to evaluate; in particular, what criteria users of search systems consider most important and whether this varies by user characteristic. Drawing on our experiences with evaluating an academic library catalogue, input was gathered from end users on the perceived importance of different evaluation criteria prior to conducting the evaluation. We analyse the results to show which criteria users value most, together with the inter-relationships between them. Our results highlight the necessity of conducting multiple forms of evaluation to ensure that search systems are deemed successful by their users.

Keywords: Evaluation, success criteria, digital libraries

1 Introduction

Evaluation is highly important for designing, developing and maintaining effective search systems, as it allows the measurement of how successfully a system meets its goal of helping users fulfill their information needs or complete their tasks [1-3]. Evaluation involves identifying suitable success criteria that can be measured in some way. Success might refer to whether a search system retrieves relevant (as opposed to non-relevant) documents; how quickly results are returned; how well the system supports users' interactions; whether users are satisfied with the results; how easily users can use the system; whether the system helps users carry out their tasks and fulfill their information needs; whether the system impacts on the wider environment; or how reliable the system is.

How to conduct IR system evaluation has been an active area of research for the past 50 years and the subject of much discussion and debate [1, 2]. Traditionally in IR there has been a strong focus on measuring system effectiveness: the ability of an IR system to discriminate between documents that are relevant or not relevant for a given user query.

This focus on the system has, in part, been influenced by the IR community's emphasis on the development of retrieval algorithms, together with the organization of large IR evaluation events, such as TREC and CLEF. Such events have focused on measuring system effectiveness in a controlled experimental setting [4, 5]. However, the scope of "system" in IR has slowly broadened to include more elements of the retrieval context, such as the user or the user's environment, which must be included in the evaluation of IR systems [3, 6]. Therefore, instead of focusing on just the system (i.e., its inputs and outputs), a more user-oriented approach can be taken. This may take into account the user, the user's context and situation, and their interactions with an IR system, perhaps in a real-life operational environment [7, 8].

When planning an evaluation, there are at least three key questions to address [10]: (i) why evaluate; (ii) what to evaluate; and (iii) how to evaluate. These apply regardless of the type of search system being evaluated (e.g. search engine or library catalogue). Saracevic [10] also mentions these when planning the evaluation of a digital library, together with identifying for whom to evaluate (i.e., the stakeholders). In this paper we focus on the issue of what: selecting criteria to assess search success. In practice this is a key question, as it is often neither possible nor desirable to run every type of evaluation available; it is therefore necessary to be selective, both to measure the success of the new system and to make best use of time and scarce resources.

Through our experiences with planning the evaluation of an operational search system (an online library catalogue), we investigate the importance of different evaluation criteria as perceived by existing end users. This was a necessary precursor to selecting the evaluation methods and criteria used to determine the system's success. We specifically address the following research questions: (RQ1) What evaluation criteria are viewed as important by end users? (RQ2) What degree of variation exists between users' preferences? (RQ3) What inter-relationships exist between different criteria?

The remainder of the paper is structured as follows. Section 2 describes related work on evaluation, particularly in the context of digital libraries, where much recent discussion has taken place. Section 3 outlines the methodology used to gather end users' feedback on the importance of different evaluation criteria. Section 4 analyses the results using various statistical analyses. Section 5 provides a discussion, and Section 6 concludes the paper and outlines avenues for further work.

2 Related Work

Saracevic [10] argues for the evaluation of digital libraries to take place at different levels, each with different measurement criteria derived from research objectives. In a similar vein, the Interaction Triptych Framework (ITF) defines three main types of digital library evaluation criteria [11, 12]: performance (system performance), usability (user interaction and satisfaction measures), and usefulness (support for tasks and information needs), which should be deployed to measure success in the digital library context. Further, Tsakonas & Papatheodorou [12] test the relative preferences of users for evaluation measures in the three ITF categories, finding high scores for usability and usefulness measures, but lower scores for performance measures, as well as a correlation between usability and usefulness measures.

The usability and usefulness dimensions of evaluation have been explored further and again found to be inter-related, with several key attributes of each dimension identified as a basis for an integrated approach to measurement [13].

A holistic and multi-dimensional approach is widely advocated for digital library evaluation, yet the ranking of diverse evaluation criteria by degree of importance is less evident, particularly in advance of designing evaluation protocols. Nielsen [14] rates factors within a range of usability heuristics, concluding that they are all very similar in importance, but does not consider other types of evaluation metrics. Toms et al. [15] look at determining the different dimensions of relevance, and Al-Maskari & Sanderson [16] focus on elements of user satisfaction. These are, however, all studies of individual dimensions of evaluation. Xie [18] considers a broader range of evaluation measures and ranks criteria within several high-level evaluation categories, but does not rank across categories. Conversely, Buchanan & Salako [13] rank a wide range of evaluation criteria across categories. The rankings of relative importance by Xie [17] and Buchanan & Salako [13] are undertaken during (rather than prior to) an active evaluation study, once evaluation experiments have been completed, and can therefore only contribute to the design of future studies of the same systems.

User preferences for evaluation measures have also been largely overlooked. Xie [18] addresses this issue with a study eliciting evaluation criteria from users, finding that the measures suggested by users match those proposed in evaluation models and used in empirical studies. However, it is acknowledged that the user sample for this study is too homogeneous and that more diverse groups may influence the results. Kelly [19] criticizes the use of inadequate user models, citing the over-use of the librarian or search intermediary, and other single types of users, when more detailed and varied user models would be more appropriate, especially for more complex systems. Differences between users have been studied in many ways, including search success, information needs, user requirements, information seeking behavior, cognitive style, and relevance judgments, amongst others, yet it is much less common for user differences to be taken into account in the selection of appropriate evaluation criteria.

One common user classification in digital library and search user studies is the novice/expert dichotomy. In this light, Marchionini [20] defines three aspects of user difference relating to information skills: domain expertise (subject knowledge), search expertise and system expertise. Similarly, Hölscher & Strube [21] define novices and experts in terms of domain and search experience, considering the impact of these characteristics on search success. These, then, are the main characteristics that we consider as differentiators in the current study, with the addition of a three-way classification of user role, in line with recommendations by Kelly [19] to consider a range of user models. In sum, investigating what criteria users perceive as indicators of search success has been largely underexplored in past research.

3 Methodology

In the current study, the search system to be evaluated is a virtual union OPAC (online public access catalogue). The UK-based Search25 project aimed to develop a significantly updated version of the longstanding and somewhat outdated InforM25 system, which retrieves records from the individual OPACs of the 58 member institutions of the M25 Consortium of academic libraries. End users vary from undergraduate and postgraduate students, to academic and research staff, and library professionals, with varying degrees of subject, domain and search expertise, information needs and patterns of search behavior. During the development of Search25, a formative evaluation was planned to assess the success of the existing system, InforM25.

Table 1. How important are the following factors when using academic library catalogues?

#   Statement (criteria)                                  Criteria group*
1   The catalogue/system is easy to use                   Usability
2   The interface is attractive                           Usability
3   Results are retrieved quickly                         Performance
4   All items on a topic are retrieved                    Performance
5   The most relevant items on a topic are identified     Performance
6   Information in the catalogue is easy to understand    Usefulness
7   It is enjoyable to use                                Usability
8   Help is available when needed                         Usability

* Criteria group based on the relationships identified in the Interaction Triptych Framework [11, 12]

An online survey of current InforM25 users was carried out to identify suitable criteria for evaluating the updated system once fully implemented (i.e., to define what to evaluate). Specific questions were presented as eight statements (Table 1) covering a range of evaluation criteria drawn from the literature, against which a 5-point Likert scale was used to rate users' level of agreement, from Strongly Disagree (1) to Strongly Agree (5). Criteria are also grouped based on the Interaction Triptych Framework [11, 12]. In addition, we asked open-ended questions to gather qualitative responses about what users liked and disliked about the InforM25 system and what they would like to change, and to corroborate and enrich the quantitative data.

An invitation to complete the online survey was distributed to mailing lists of the 58 academic institutions known to use InforM25, and a link was placed on the home page. In total, 196 respondents provided responses about their perceived importance of different evaluation criteria. The survey sample comprises library, learning and technical support staff (79%), academic and research staff (5%), and undergraduate and postgraduate students (16%) from across the M25 Consortium's 58 institutions. This skewed distribution is representative of users of the system at the time of the study, and one reason for updating the system was to try to broaden its appeal to academic and student users. Respondents came from a broad range of subject domains with varying levels of search experience, many reporting more than 10 years' experience in using academic library catalogues (65%) and web search engines (71%).

In terms of frequency of use, 87% of participants used their own institution's OPAC on at least a weekly basis, whilst only 22% used InforM25 with this regularity.

Data collected through the survey were analysed using descriptive and inferential statistics. One of the main goals of this analysis was to determine whether differences between user groups regarding the importance of evaluation criteria could be observed. Relationships between evaluation criteria were assessed using Principal Component Analysis (PCA), and bivariate correlations were assessed using Kendall's tau. Finally, a thematic analysis of qualitative data from the associated open-ended questions and focus groups is used to expand upon and corroborate the quantitative findings.

4 Results

Responses relating to the evaluation criteria shown in Table 1 were analyzed first for the whole sample (Section 4.1), and then for differences between users based upon a range of characteristics (Section 4.2). We then explore inter-relationships between the criteria in Section 4.3, and associated qualitative results in Section 4.4.

Table 2. Frequency of responses for importance of evaluation criteria, all users (N=196)

Criteria         1 Strongly   2 Disagree   3 Neutral   4 Agree   5 Strongly
                   Disagree                                         Agree
Easy to use        1.0%         0.0%         3.1%       17.4%      78.5%
Attractive         3.6%         6.3%        31.3%       35.9%      22.9%
Quick              0.5%         1.5%         5.6%       27.2%      65.1%
Retrieve all       0.5%         3.2%         8.4%       33.7%      54.2%
Relevant           2.6%         3.7%        14.2%       31.6%      47.9%
Understandable     0.5%         0.0%         5.8%       24.7%      68.9%
Enjoyable         10.0%        10.0%        35.8%       27.4%      16.8%
Help               3.2%        11.1%        27.5%       31.7%      26.5%

4.1 Analysis of All Users

Table 2 shows the percentage of responses across all users for each of the evaluation criteria. The results suggest overall importance for some aspects of usability and for system performance measures. The most highly rated measures are ease of use (96% rating this at 4 or 5 on the Likert scale); information in the catalogue is easy to understand (94%); results are retrieved quickly (93%); all items on a topic are retrieved (88%); and the most relevant items are identified (80%). Responses for the interface is attractive (59%), help is available (58%) and enjoyable to use (44%) indicate that, overall, these criteria are somewhat less important to users.
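To illustrate how such descriptive figures can be derived, the following is a minimal sketch in Python with pandas, assuming hypothetical respondent data; the column names and ratings are illustrative, not the study's actual variables.

    import pandas as pd

    # Stand-in for the survey data: one row per respondent, one column per
    # criterion, each rated 1 (Strongly Disagree) to 5 (Strongly Agree).
    responses = pd.DataFrame({
        "easy_to_use": [5, 5, 4, 5, 3, 5, 4, 5],
        "attractive":  [4, 3, 3, 5, 2, 4, 3, 4],
        "enjoyable":   [3, 2, 4, 3, 1, 3, 4, 2],
    })

    # Percentage of respondents selecting each scale point per criterion,
    # i.e. the layout of Table 2.
    freq = responses.apply(
        lambda col: col.value_counts(normalize=True)
                       .reindex(range(1, 6), fill_value=0))
    print((freq * 100).round(1))

    # Proportion rating a criterion at 4 (Agree) or 5 (Strongly Agree),
    # the figure used later to rank criteria in Table 5.
    print(((responses >= 4).mean() * 100).round(1))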

4.2 Analysis by User Characteristic

To examine the effects of user characteristics (role, subject area, search experience and frequency of use of existing finding aids) on the ratings provided, we divide the data into groups and compare the ratings for each group using a Kruskal-Wallis test (due to the non-parametric nature of the data). When analysing the results of the test, we test the null hypothesis that the user characteristic makes no difference to the perceived importance of evaluation criteria, rejected at p<0.05.

Table 3. Kruskal-Wallis test statistics, importance of evaluation criteria grouped by user characteristics (N=196). For each evaluation criterion, the test statistic and p-value are reported for user type (K-W, 2 df), subject (6 df), web search experience (3 df), experience of library work (3 df) and frequency of using InforM25 (3 df); bold indicates results with p<0.05.

Firstly, we find that the null hypothesis is rejected most often for criterion 3 (results are retrieved quickly), indicating that the importance of this criterion varies most by user characteristic. Secondly, we observe that experience of library work has an effect on the most criteria (2, 3 and 6), suggesting that this user characteristic causes users to disagree the most about search success. Next we consider each user characteristic in turn.

User type. For user type (e.g. librarian vs. student), the null hypothesis is rejected for attractiveness of interface and results are retrieved quickly. It can be concluded that there is a difference in the perceived importance of these evaluation criteria between the various user roles. Splitting results by academic subject area, there are no statistically significant differences between the criteria, suggesting that domain does not affect the importance of particular evaluation criteria.

Search experience. This was measured by two characteristics: the number of years' experience in using an academic library catalogue, and experience of using web search engines. The Kruskal-Wallis test showed no significant differences by level of experience in using library catalogues; however, for experience of web search, the null hypothesis is rejected (p<0.05) for criterion 6 (information is easy to understand). Inspection of the medians reveals that users with only 0-1 years' experience have a lower median of 4, with a median of 5 for all other levels of experience, and that the most demanding users (least spread of results) are those with 2-5 years' experience.
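The grouped comparisons above can be reproduced with SciPy's rank-based Kruskal-Wallis test, which suits ordinal Likert data because it makes no normality assumption. The sketch below uses hypothetical field names and ratings, not the study's data.

    import pandas as pd
    from scipy.stats import kruskal

    # Stand-in survey table: each row holds one respondent's rating for a
    # criterion plus a grouping characteristic.
    df = pd.DataFrame({
        "quick": [5, 4, 5, 3, 4, 5, 2, 4, 5, 3],
        "user_type": ["library", "library", "library", "student", "student",
                      "academic", "academic", "student", "library", "academic"],
    })

    # One sample of ratings per user group; kruskal compares the groups'
    # rank distributions.
    samples = [grp["quick"].to_numpy() for _, grp in df.groupby("user_type")]
    h_stat, p_value = kruskal(*samples)
    print(f"K-W H = {h_stat:.3f}, p = {p_value:.4f}")
    # Reject the null hypothesis (the characteristic makes no difference to
    # perceived importance) when p < 0.05, as in Table 3.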

Given the high proportion of library staff in the sample, experience was also considered by the number of years engaged in library work and, for students (the least experienced users), by their year of study. No significant differences were found for student users by year of study, but for library staff significant differences were found for criterion 3 (results are retrieved quickly) at p<0.01, and for criteria 4 (all items are retrieved) and 6 (information is easy to understand) at p<0.05.

Frequency of using existing finding aids. Finally, we analysed ratings based on grouping users by their frequency of use of a variety of library IR systems, including the OPAC at the user's own institution and the InforM25 virtual union catalogue. For home library OPACs, no significant differences were found by frequency of use. However, for frequency of use of InforM25, the null hypothesis is rejected for criterion 3 (items are retrieved quickly) at p<0.05. Analysis of the medians reveals that for criterion 1, daily users have a median of 4.5, whilst all other users have a median of 5. For criterion 3, the differences are inconclusive, as users of all frequencies have a median of 4, except those with monthly use (median=3.5). Lastly, the medians for criterion 8 show that the availability of help is more important to less frequent users, with a median of 4 for users with a frequency of monthly, less often or never, and a lower median for more frequent users.

4.3 Relationships between Evaluation Criteria

To examine relationships between the evaluation criteria in each group, a factor analysis was conducted using Principal Component Analysis (PCA) with varimax rotation (assuming independence between groupings). An initial PCA based upon eigenvalues greater than 1 extracted two dimensions, accounting for 56% of total variance. However, the scree plot suggested that three or four dimensions might be more appropriate, and therefore revised PCAs specifying three and four fixed factors were undertaken, accounting for 70% and 79% of variance, respectively. Bartlett's Test of Sphericity is highly significant at p<0.001 and the Kaiser-Meyer-Olkin measure of sampling adequacy is sufficient at 0.792, indicating that the evaluation criteria are likely to be related and that the sample is of adequate size.

Rotated component matrices show the loading of the components for each of the evaluation criteria (Table 4). With three components extracted, four criteria (easy to use, quick, understandable, relevant) load highly on the first component; three criteria (enjoyable, attractive, help) load highly on the second component; and two criteria (help, retrieve all) load highly on the third component. With four components extracted the results are similar, except that one criterion (relevant) now loads highly on the fourth component instead of the first, and help now loads highly only on the second component, rather than on both the second and third. It is interesting to note that one evaluation criterion from the usability group in Table 1 (easy to use) and one from the usefulness group (understandable) are close to the performance variables, in particular speed of retrieval (quick) and relevant information being returned by the system (relevant).
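A minimal sketch of this kind of extraction and rotation, using the third-party factor_analyzer package (an assumption on our part; SPSS or R would serve equally well), with a hypothetical ratings file standing in for the survey matrix:

    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import (
        calculate_bartlett_sphericity, calculate_kmo)

    # One row per respondent, one column per criterion in Table 1; the
    # file name is illustrative.
    ratings = pd.read_csv("likert_ratings.csv")

    # Suitability checks reported in Section 4.3: Bartlett's test of
    # sphericity (a significant p suggests the variables are related) and
    # the Kaiser-Meyer-Olkin measure of sampling adequacy.
    chi_square, p_value = calculate_bartlett_sphericity(ratings)
    _, kmo_total = calculate_kmo(ratings)
    print(f"Bartlett p = {p_value:.4f}, KMO = {kmo_total:.3f}")

    # Principal-component extraction with varimax rotation and three fixed
    # factors; rerun with n_factors=4 for the four-component solution.
    fa = FactorAnalyzer(n_factors=3, method="principal", rotation="varimax")
    fa.fit(ratings)
    loadings = pd.DataFrame(fa.loadings_, index=ratings.columns,
                            columns=["C1", "C2", "C3"])
    print(loadings.round(2))  # rotated component matrix (cf. Table 4)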

It is perhaps to be expected that relevant and retrieve all are at opposite ends of this cluster (with four components extracted they occupy their own space), as their importance is likely to vary by task. The two usability variables (enjoyable, attractive) are also clustered together, with the addition of a further usability variable (help). The groupings found using the PCA are further confirmed by inspecting significant (p<0.01) bivariate correlations between criteria using Kendall's tau: easy to use and quick (tau=0.502); easy to use and understandable (tau=0.529); enjoyable and attractive (tau=0.540); and enjoyable and help (tau=0.402).

Table 4. Rotated component matrices (varimax), 3 and 4 components extracted; criteria are listed against the component(s) on which they load most highly

3 components extracted:
  Component 1: easy to use, quick, understandable, relevant
  Component 2: enjoyable, attractive, help
  Component 3: help, retrieve all

4 components extracted:
  Component 1: quick, easy to use, understandable
  Component 2: enjoyable, attractive, help
  Component 3: retrieve all
  Component 4: relevant

The manner in which the variables group into components in Table 4 may suggest that combinations of criteria are particularly important to users. For example, with four components extracted, the first component (easy to use, quick and understandable) might relate to users as they interact with the system and perform specific tasks (user needs), whereas the criteria for the second component highlight aspects related more to the general user experience (enjoyable, attractive, help). The third and fourth components reflect individual aspects of retrieval performance, which could be measured using recall and precision respectively.
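The pairwise associations above use Kendall's tau; a minimal sketch with SciPy, on made-up paired ratings (SciPy's kendalltau defaults to the tau-b variant, which corrects for the many ties typical of 5-point scales):

    from scipy.stats import kendalltau

    # Hypothetical paired Likert ratings for two criteria across the same
    # respondents.
    easy_to_use = [5, 5, 4, 5, 3, 4, 5, 4, 5, 3]
    quick       = [5, 4, 4, 5, 3, 4, 5, 5, 4, 3]

    tau, p_value = kendalltau(easy_to_use, quick)
    print(f"tau = {tau:.3f}, p = {p_value:.4f}")  # significant if p < 0.01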

4.4 Qualitative Results

Areas of performance which users responded to positively in the open-ended survey questions included the time saved and ease of use from searching multiple OPACs at the same time, as well as support for specific tasks, such as finding items that are unavailable at their home institution. These findings correspond relatively well with the components relating to user needs, and peripherally with retrieval performance. In contrast, negative comments focused more on usability and interface design issues, the speed of information retrieval, issues with the completeness of information, and the lack of de-duplication of records for the same item from different institutions, which makes the information retrieved more difficult to understand. These findings constitute a mix of issues, with the strongest opinions relating to the speed, quality and understandability of the information retrieved.

Focus group discussions surfaced more issues relating to system and IR performance. Speed of results was generally seen as less of an issue when feedback on progress is provided on-screen, and the speed demonstrated via the prototype was generally seen as acceptable. However, in a virtual union catalogue speed is largely a factor of the number of records fetched from each institution, and in Search25 a limit has been set to manage the inevitable delays from waiting for all records to be retrieved from each participating institution. Users had mixed opinions about the impact of this; on balance, they preferred to be able to set the number of records fetched themselves, and/or to be able to revert to retrieving all records when needed. Strongly related to this was a concern that, with partial retrieval, some of the most relevant records might not be fetched. Interestingly, then, when probed in depth, IR performance measures come to the fore and, along with speed, information quality and completeness, are perceived to be the most critical measures of system success.

5 Discussion

In planning the evaluation of Search25, we asked users about their perceived importance of a range of common evaluation criteria (as opposed to asking users to carry out tasks and then measuring their importance). There are two main limitations to this approach. Firstly, what users perceive to be important may change after completing tasks; in this analysis we have involved users who have experience of using the legacy system. Secondly, the importance of the criteria that define success may change depending on the specific task being carried out (e.g., recall would be important for a systematic review) and on the context (e.g., speed may matter more for work tasks than for leisure tasks). However, these results are still useful in gaining a more general impression of what users view as important criteria against which to evaluate.

The results in Section 4 clearly demonstrate that some evaluation criteria are more critical than others, but there are some significant differences in the order of preference by users with different characteristics. Relationships between evaluation criteria are also found, with distinct components identified that group together criteria from diverse (component 1) and similar (component 2) categories, according to the Interaction Triptych Framework. In light of these findings, we can summarize answers to the three research questions as follows.

RQ1: What evaluation criteria are viewed as important by end users? Frequency rankings (Table 5) indicate that participants in this study place greater importance on evaluation criteria 1 (easy to use), 6 (understandable) and 3 (quick), related to their user needs (corresponding to component 1 in the PCA results), than on criteria 2 (attractive), 8 (help available) and 7 (enjoyable), relating to user experience (component 2). Retrieval performance measures (components 3 and 4) are placed in the middle of the ranking, but their scores are closer to those for component 1 than for component 2. These results suggest that a mix of user-centred and system-centred evaluation measures is appropriate within the case study context, but that the more subjective experiential and satisfaction-type measures may be less critical.

Table 5. Ranking of evaluation criteria (all users, 4 Agree + 5 Strongly Agree), compared with extracted components

#   Evaluation criteria   Likert 4+5   3-factor component   4-factor component
1   Easy to use              96%          1                    1
6   Understandable           94%          1                    1
3   Quick                    93%          1                    1
4   Retrieve all             88%          3                    3
5   Relevant                 80%          1                    4
2   Attractive               59%          2                    2
8   Help available           58%          2, 3                 2
7   Enjoyable                44%          2                    2

RQ2: What degree of variation exists between users' preferences? User responses were analysed according to a variety of characteristics, including role, subject/domain, search experience and system experience. Rankings for the importance of evaluation criteria varied to some degree by each classification of user type, and significant differences between users were found for several characteristics. The user characteristics which demonstrated the greatest amount of difference in preference are user role (three criteria: 2, 3, 6) and experience of library work (four criteria: 2, 3, 4, 6). These evaluation criteria are all in the top half of the ranking by overall importance. The user characteristics identified as a source of significant difference could be used as a basis for user recruitment in future evaluation activities, as well as an aid to the interpretation of evaluation results. The evaluation criteria with the most significant differences are 4 (retrieve all) and 6 (understandable), but the user characteristics where differences are found vary for each one. No significant differences were found for criteria 5 (relevant) and 7 (enjoyable), suggesting that these criteria may be less prone to variation by user type, or that the results are inconclusive.

RQ3: What inter-relationships exist between the different criteria? By applying principal component analysis, three to four main groupings of evaluation criteria emerge, depending on the number of factors extracted. These groupings correlate well with the frequency rankings. There is a particularly strong inter-relationship between criteria in the user needs group (component 1), representing a mix of usability, speed and information quality criteria, which combined would address some of the primary issues in user and task performance. However, whilst the system measures 4 (retrieve all) and 5 (relevant) are collocated in the frequency ranking (Table 5), they are situated wide apart and take on separate components (Table 4). This result is possibly explained by the opposing nature of the criteria: one might either want to retrieve everything available, or be interested in only the most relevant results.

In system development terms, as raised by our focus group discussions, decisions relating to relevance, recall (retrieve all) and speed of retrieval impact different users in different ways, according to their task and information needs. A note of caution is therefore required in measuring success against these highly ranked, but potentially conflicting, evaluation criteria. It is interesting to note that retrieval speed (quick) correlates highly with ease of use: speed is commonly not assessed in evaluation campaigns such as TREC and CLEF, but may reflect more faithfully the success factors that users rate highly (e.g., ease of use). Previous work has shown significant correlations between user satisfaction and retrieval effectiveness [16]; additionally considering retrieval speed would be an interesting avenue for further investigation, as it may suggest that speed, which can be measured using system-oriented approaches (e.g., test collections), correlates well with user satisfaction, which would otherwise have to be assessed through user studies that are more costly to run.

6 Conclusions

In this paper we have described our experiences with gathering information from a sample of 196 end users of an operational search system (an academic library catalogue) regarding success criteria. This is important when planning an evaluation, in deciding what to evaluate, particularly when evaluation methods must be selected due to resource limitations. We find overall that users rate criteria relating to user needs (as confirmed using PCA), such as ease of use, most highly, in contrast with aspects such as whether the system is enjoyable. The results help us to select the kinds of evaluation criteria that are most likely to match users' perceived importance. Understanding how users think about success is an important, and often overlooked, aspect of IR evaluation.

Acknowledgments. We gratefully acknowledge funding from JISC through the Search25 project and the contributions of participants via the survey. This work was also partially funded by the PROMISE network of excellence (contract no. ) as part of the 7th Framework Programme of the European Commission (FP7/ ).

References

1. Saracevic, T. (1995). Evaluation of evaluation in information retrieval. In Fox, E.A., Ingwersen, P. & Fidel, R. (eds.), Proc. 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Seattle, Washington, USA, July 9-13. New York, NY: ACM Press.

2. Harman, D. (2011). Information retrieval evaluation. Synthesis Lectures on Information Concepts, Retrieval, and Services, 3(2). San Rafael, CA: Morgan & Claypool Publishers.
3. Robertson, S.E. & Hancock-Beaulieu, M. (1992). On the evaluation of information retrieval systems. Information Processing and Management, 28(4).
4. Robertson, S. (2008). On the history of evaluation in IR. Journal of Information Science, 34(4).
5. Voorhees, E.M. & Harman, D.K. (2005). TREC: Experiments and evaluation in information retrieval. Cambridge, MA: MIT Press.
6. Ingwersen, P. & Järvelin, K. (2005). The turn: Integration of information seeking and retrieval in context. New York, NY: Springer-Verlag.
7. Borlund, P. (2009). User-centred evaluation of information retrieval systems. In Göker, A. & Davies, J. (eds.), Information Retrieval: Searching in the 21st Century. Chichester, UK: John Wiley & Sons.
8. Kelly, D. (2009). Methods for evaluating interactive information retrieval systems with users. Foundations and Trends in Information Retrieval, 3(1-2).
9. van Rijsbergen, C.J. (1979). Information retrieval (2nd ed.). London: Butterworths.
10. Saracevic, T. (2000). Digital library evaluation: Toward evolution of concepts. Library Trends, 49(2).
11. Fuhr, N., et al. (2007). Evaluation of digital libraries. International Journal on Digital Libraries, 8(1).
12. Tsakonas, G. & Papatheodorou, C. (2008). Exploring usefulness and usability in the evaluation of open access digital libraries. Information Processing & Management, 44(3).
13. Buchanan, S. & Salako, A. (2009). Evaluating the usability and usefulness of a digital library. Library Review, 58(9).
14. Nielsen, J. (1994). Enhancing the explanatory power of usability heuristics. Proc. SIGCHI Conference on Human Factors in Computing Systems. New York, NY: ACM Press.
15. Toms, E.G., O'Brien, H.L., Kopak, R. & Freund, L. (2005). Searching for relevance in the relevance of search. In Crestani, F. & Ruthven, I. (eds.), Proc. Fourth International Conference on Conceptions of Library and Information Science (CoLIS 2005). Amsterdam: Springer.
16. Al-Maskari, A. & Sanderson, M. (2010). A review of factors influencing user satisfaction in information retrieval. Journal of the American Society for Information Science and Technology, 61(5).
17. Xie, H. (2008). Users' evaluation of digital libraries (DLs): Their uses, their criteria, and their assessment. Information Processing & Management, 44(3).
18. Xie, H. (2006). Evaluation of digital libraries: Criteria and problems from users' perspectives. Library and Information Science Research, 28(3).
19. Kelly, D., et al. (2009). Evaluation challenges and directions for information-seeking support systems. Computer, 42(3).
20. Marchionini, G. (1995). Information seeking in electronic environments. Cambridge, UK: Cambridge University Press.
21. Hölscher, C. & Strube, G. (2000). Web search behavior of Internet experts and newbies. Computer Networks, 33(1).


More information

RMIT University at TREC 2006: Terabyte Track

RMIT University at TREC 2006: Terabyte Track RMIT University at TREC 2006: Terabyte Track Steven Garcia Falk Scholer Nicholas Lester Milad Shokouhi School of Computer Science and IT RMIT University, GPO Box 2476V Melbourne 3001, Australia 1 Introduction

More information

"Charting the Course... ITIL 2011 Managing Across the Lifecycle ( MALC ) Course Summary

Charting the Course... ITIL 2011 Managing Across the Lifecycle ( MALC ) Course Summary Course Summary Description ITIL is a set of best practices guidance that has become a worldwide-adopted framework for IT Service Management by many Public & Private Organizations. Since early 1990, ITIL

More information

A Linear Regression Model for Assessing the Ranking of Web Sites Based on Number of Visits

A Linear Regression Model for Assessing the Ranking of Web Sites Based on Number of Visits A Linear Regression Model for Assessing the Ranking of Web Sites Based on Number of Visits Dowming Yeh, Pei-Chen Sun, and Jia-Wen Lee National Kaoshiung Normal University Kaoshiung, Taiwan 802, Republic

More information

Towards Systematic Usability Verification

Towards Systematic Usability Verification Towards Systematic Usability Verification Max Möllers RWTH Aachen University 52056 Aachen, Germany max@cs.rwth-aachen.de Jonathan Diehl RWTH Aachen University 52056 Aachen, Germany diehl@cs.rwth-aachen.de

More information

Challenges on Combining Open Web and Dataset Evaluation Results: The Case of the Contextual Suggestion Track

Challenges on Combining Open Web and Dataset Evaluation Results: The Case of the Contextual Suggestion Track Challenges on Combining Open Web and Dataset Evaluation Results: The Case of the Contextual Suggestion Track Alejandro Bellogín 1,2, Thaer Samar 1, Arjen P. de Vries 1, and Alan Said 1 1 Centrum Wiskunde

More information

Transaction log State Library of Queensland

Transaction log State Library of Queensland Scott Hamilton 1, Helen Thurlow 2 1. Queensland University of Technology, Brisbane, Australia 2. State Library of Queensland, Brisbane, Australia As Queensland's major public reference and research library,

More information

A Study on Website Quality Models

A Study on Website Quality Models International Journal of Scientific and Research Publications, Volume 4, Issue 12, December 2014 1 A Study on Website Quality Models R.Anusha Department of Information Systems Management, M.O.P Vaishnav

More information

TREC 2016 Dynamic Domain Track: Exploiting Passage Representation for Retrieval and Relevance Feedback

TREC 2016 Dynamic Domain Track: Exploiting Passage Representation for Retrieval and Relevance Feedback RMIT @ TREC 2016 Dynamic Domain Track: Exploiting Passage Representation for Retrieval and Relevance Feedback Ameer Albahem ameer.albahem@rmit.edu.au Lawrence Cavedon lawrence.cavedon@rmit.edu.au Damiano

More information

User Interface Evaluation

User Interface Evaluation User Interface Evaluation Heuristic Evaluation Lecture #17 Agenda Evaluation through Expert Analysis Cognitive walkthrough Heuristic evaluation Model-based evaluation Cognitive dimension of notations 2

More information

A new interaction evaluation framework for digital libraries

A new interaction evaluation framework for digital libraries A new interaction evaluation framework for digital libraries G. Tsakonas, S. Kapidakis, C. Papatheodorou {gtsak, sarantos, papatheodor} @ionio.gr DELOS Workshop on the Evaluation of Digital Libraries Department

More information

From Passages into Elements in XML Retrieval

From Passages into Elements in XML Retrieval From Passages into Elements in XML Retrieval Kelly Y. Itakura David R. Cheriton School of Computer Science, University of Waterloo 200 Univ. Ave. W. Waterloo, ON, Canada yitakura@cs.uwaterloo.ca Charles

More information

A Task-Based Evaluation of an Aggregated Search Interface

A Task-Based Evaluation of an Aggregated Search Interface A Task-Based Evaluation of an Aggregated Search Interface Shanu Sushmita, Hideo Joho, and Mounia Lalmas Department of Computing Science, University of Glasgow Abstract. This paper presents a user study

More information

A Formal Approach to Score Normalization for Meta-search

A Formal Approach to Score Normalization for Meta-search A Formal Approach to Score Normalization for Meta-search R. Manmatha and H. Sever Center for Intelligent Information Retrieval Computer Science Department University of Massachusetts Amherst, MA 01003

More information

Easy Knowledge Engineering and Usability Evaluation of Longan Knowledge-Based System

Easy Knowledge Engineering and Usability Evaluation of Longan Knowledge-Based System Easy Knowledge Engineering and Usability Evaluation of Longan Knowledge-Based System ChureeTechawut 1,*, Rattasit Sukhahuta 1, Pawin Manochai 2, Jariya Visithpanich 3, Yuttana Khaosumain 4 1 Computer Science

More information

USER EXPERIENCE ASSESSMENT TO IMPROVE USER INTERFACE QUALITY ON DEVELOPMENT OF ONLINE FOOD ORDERING SYSTEM

USER EXPERIENCE ASSESSMENT TO IMPROVE USER INTERFACE QUALITY ON DEVELOPMENT OF ONLINE FOOD ORDERING SYSTEM USER EXPERIENCE ASSESSMENT TO IMPROVE USER INTERFACE QUALITY ON DEVELOPMENT OF ONLINE FOOD ORDERING SYSTEM 1 HANIF AL FATTA, 2 BAYU MUKTI 1 Information Technology Department, Universitas AMIKOM Yogyakarta,

More information

User-Centered Evaluation of a Discovery Layer System with Google Scholar

User-Centered Evaluation of a Discovery Layer System with Google Scholar Purdue University Purdue e-pubs Libraries Faculty and Staff Scholarship and Research Purdue Libraries 2013 User-Centered Evaluation of a Discovery Layer System with Google Scholar Tao Zhang Purdue University,

More information

Discounted Cumulated Gain based Evaluation of Multiple Query IR Sessions

Discounted Cumulated Gain based Evaluation of Multiple Query IR Sessions Preprint from: Järvelin, K. & Price, S. & Delcambre, L. & Nielsen, M. (2008). Discounted Cumulated Gain based Evaluation of Multiple Query IR Sessions. In: Ruthven, I. & al. (Eds.), Proc. of the 30th European

More information

Slicing and Dicing the Information Space Using Local Contexts

Slicing and Dicing the Information Space Using Local Contexts Slicing and Dicing the Information Space Using Local Contexts Hideo Joho Department of Computing Science University of Glasgow 17 Lilybank Gardens Glasgow G12 8QQ, UK hideo@dcs.gla.ac.uk Joemon M. Jose

More information

Volume-4, Issue-1,May Accepted and Published Manuscript

Volume-4, Issue-1,May Accepted and Published Manuscript Available online at International Journal of Research Publications Volume-4, Issue-1,May 2018 Accepted and Published Manuscript Comparison of Website Evaluation after Ranking Improvement and Implementation

More information

DISABILITY LAW SERVICE BEST PRACTICES FOR AN ACCESSIBLE AND USABLE WEBSITE

DISABILITY LAW SERVICE BEST PRACTICES FOR AN ACCESSIBLE AND USABLE WEBSITE DISABILITY LAW SERVICE BEST PRACTICES FOR AN ACCESSIBLE AND USABLE WEBSITE February 2018 1 FOREWORD This guide aims to provide organisations with essential information for compliance with modern website

More information

Concepts of Usability. Usability Testing. Usability concept ISO/IS What is context? What is context? What is usability? How to measure it?

Concepts of Usability. Usability Testing. Usability concept ISO/IS What is context? What is context? What is usability? How to measure it? Concepts of Usability Usability Testing What is usability? How to measure it? Fang Chen ISO/IS 9241 Usability concept The extent to which a product can be used by specified users to achieve specified goals

More information

Nektarios Kostaras, Mixalis Xenos. Hellenic Open University, School of Sciences & Technology, Patras, Greece

Nektarios Kostaras, Mixalis Xenos. Hellenic Open University, School of Sciences & Technology, Patras, Greece Kostaras N., Xenos M., Assessing Educational Web-site Usability using Heuristic Evaluation Rules, 11th Panhellenic Conference on Informatics with international participation, Vol. B, pp. 543-550, 18-20

More information

GUIDELINES FOR MASTER OF SCIENCE INTERNSHIP THESIS

GUIDELINES FOR MASTER OF SCIENCE INTERNSHIP THESIS GUIDELINES FOR MASTER OF SCIENCE INTERNSHIP THESIS Dear Participant of the MScIS Program, If you have chosen to follow an internship, one of the requirements is to write a Thesis. This document gives you

More information

Iteration vs Recursion in Introduction to Programming Classes: An Empirical Study

Iteration vs Recursion in Introduction to Programming Classes: An Empirical Study BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 16, No 4 Sofia 2016 Print ISSN: 1311-9702; Online ISSN: 1314-4081 DOI: 10.1515/cait-2016-0068 Iteration vs Recursion in Introduction

More information

Survey of Studies Development Plan (SDP) and Subject Matter Expert (SME) Process and Products. July 11, 2016

Survey of Studies Development Plan (SDP) and Subject Matter Expert (SME) Process and Products. July 11, 2016 Survey of 217 219 Studies Development Plan (SDP) and Subject Matter Expert (SME) Process and Products July 11, 216 Survey period: /1/216 /17/216 14 Questions Respondents Designed by BOEM Office of Environmental

More information

A method for measuring satisfaction of users of digital libraries: a case study with engineering faculty

A method for measuring satisfaction of users of digital libraries: a case study with engineering faculty Qualitative and Quantitative Methods in Libraries (QQML) 4: 11 19, 2015 A method for measuring satisfaction of users of digital libraries: a case study with engineering faculty Beatriz Valadares Cendón

More information

Effects of PROC EXPAND Data Interpolation on Time Series Modeling When the Data are Volatile or Complex

Effects of PROC EXPAND Data Interpolation on Time Series Modeling When the Data are Volatile or Complex Effects of PROC EXPAND Data Interpolation on Time Series Modeling When the Data are Volatile or Complex Keiko I. Powers, Ph.D., J. D. Power and Associates, Westlake Village, CA ABSTRACT Discrete time series

More information