A method for measuring satisfaction of users of digital libraries: a case study with engineering faculty

Qualitative and Quantitative Methods in Libraries (QQML) 4: 11-19, 2015

Beatriz Valadares Cendón (1) and Juliana Lopes de Almeida Souza (2)
(1) Professor, School of Information Science, Federal University of Minas Gerais
(2) Assistant Professor, Pontifical Catholic University of Minas Gerais

Abstract: This research study proposes an instrument and a method for measuring the satisfaction of users of digital libraries of e-journals. The satisfaction questionnaire contains questions, answered on an eight-point Likert scale, about the various factors that affect the satisfaction of the user of the digital library. These questions, grounded in user studies of digital libraries and in the literature of computer science and information systems management, include general questions on the satisfaction of the users as well as questions on satisfaction with specific aspects of the quality and the content of the system. Satisfaction with the quality of the system includes questions on the search resources, the usability (ease of use, flexibility, readability, organization of information and sequence of the screens), and the access to the system (ease of access and speed). Satisfaction with the contents of the system includes questions about the number of journals, their quality, relevance, chronological coverage, up-to-dateness, reliability and availability of full text. A method was developed, adapted from Bailey and Pearson (1983), who defined satisfaction as the sum of the user's positive or negative reactions to a set of factors. The method not only makes it easier to compare satisfaction among different areas of knowledge or among different categories of factors, but also allows the normalization of results to neutralize the impact of null responses. The method was demonstrated by verifying the degree of satisfaction of engineering faculty who were users of the Brazilian Capes Portal of E-Journals. The population studied came from 17 federal universities in all 5 geographic regions of Brazil. Data were collected by means of a web survey answered by 544 engineering faculty. Further research and improvements in the proposed method are suggested.

Keywords: Methodology, E-journals, Digital Libraries, User Satisfaction, User Studies

Received: 21.5.2014 / Accepted: 1.12.2014    ISSN 2241-1925    ISAST

Acknowledgements: The authors acknowledge and thank the financial support from FAPEMIG (Foundation for Research Support of the State of Minas Gerais).

1. Introduction

This paper proposes an instrument and a method for measuring the satisfaction of users of digital libraries of e-journals. The method, which was grounded in the literature of computer science and information systems management, includes general questions on the satisfaction of the users and questions on satisfaction with specific aspects of the quality and the content of the system. The use of the method was demonstrated with a sample of engineering faculty who were users of the Brazilian CAPES Portal of E-Journals. The Capes Portal is the largest of its type in Brazil, offering, by the end of 2013, access to over 33 thousand scientific journals and 130 databases in all areas of knowledge (Capes, 2013).

The article starts with a review of the concept and theory of information systems success and of the construct of user satisfaction. Next, the instruments for measurement of satisfaction and the Bailey and Pearson (1983) methodology are presented. The proposed instrument and method are then presented and demonstrated through the case study with the engineering faculty.

2. Information systems success and user satisfaction

Authors such as Ives, Olson and Baroudi (1983), DeLone and McLean (1992), Rey Martín (2000) and Melone (1990), privileged in this review, have conducted surveys of the scientific literature on information systems success and user satisfaction. In their search for a definition of information system success, DeLone and McLean (1992) analyzed around 180 research studies, published in 7 journals from the area of administration from 1981 to 1988, and synthesized them into a model based on Shannon and Weaver's (1949) theory and on Mason (1978). From the measures and definitions of success used in these studies, they identified 6 interdependent constructs and proposed their model of success (Fig. 1).

Figure 1: Model of Information System Success. Source: DeLone and McLean, 1992.

According to DeLone and McLean (1992), user satisfaction is probably one of the most important dependent variables used to measure information system success, for at least three reasons: first, because it has a high degree of face validity, as it is hard to deny the success of a system which the users like; second, because of the various instruments, such as the one developed by Bailey and Pearson and others derived from it, that reliably measure satisfaction and make comparisons easy; and third, because the other variables are much harder to measure empirically or are conceptually poorer (DeLone and McLean, 1992, p. 69).

The view of user satisfaction as an indication of system success probably originated with Cyert and March in 1963 (Ives, Olson and Baroudi, 1983; Bailey and Pearson, 1983). The concept of satisfaction proposed by Cyert and March (1963) suggests that if an information system attends to the needs of the user, the satisfaction of this user will be reinforced, while if it does not, the user will be unsatisfied and will look for another source or system. For Ives, Olson and Baroudi (1983), user satisfaction is a means for the evaluation of an information system. It is defined as the degree to which, according to the user's perception, the system satisfies their information needs.

Rey Martín (2000) points out that the term user satisfaction gained attention in the academic literature of Library and Information Science in the 80s. This author states that user satisfaction is directly related to the use of the system (Rey Martín, 2000, p. 141). Melone (1990, p. 79) also highlights that the two most frequent measures of success in the literature are user satisfaction and information system use. Of these, user satisfaction has received more attention from researchers and serves as the main construct to evaluate information systems.

Satisfaction has a strong subjective component, being focused more on perceptions and attitudes than on objective and concrete criteria. The construct offers an evaluation from the point of view of the users. In the final analysis, it is the satisfaction of the information need of the user, and not the technical quality of the system, that will determine the success or failure of the system.

3. Measures of satisfaction of information systems users

Several instruments have been developed to measure the satisfaction of users of information systems. One of the most accepted is the one by Bailey and Pearson (1983), who used the scientific literature in the area of psychology and the critical incident technique to identify factors which impacted the satisfaction of the final user of computer systems. Examples of these factors are flexibility, ease of use, perceived utility, data security, documentation, format, relevance, precision, language, timeliness, speed, system integration, and expectation, among others. For Bailey and Pearson, satisfaction regarding a situation is defined as the weighted sum of the reactions of the users to a set of factors affecting that situation (Bailey and Pearson, 1983, p. 531), as shown by the equation:

S_i = \sum_{j=1}^{n} R_{ij} W_{ij}

where
R_{ij} = the reaction to factor j by individual i,
W_{ij} = the importance of factor j to individual i.

The implementation of this model of satisfaction requires the identification of the factors that affect satisfaction in a certain domain and a scale of measurement for the individual's reaction. User satisfaction is the weighted sum of the positive or negative reactions to the 39 factors they identified. The instrument was applied in an organization with five units to prove its validity and reliability. According to Bailey and Pearson (1983, p. 538), their main contribution was the definition of satisfaction through a valid instrument of measurement. Melone (1990, p. 76) states that Bailey and Pearson's (1983) scale is a valid and reliable instrument, and for this reason it is one of the most popular scales to measure the construct.

In Bailey and Pearson's instrument, bipolar adjectives are used to measure perceptions ranging from a negative to a positive feeling. For example, a printed output could be rated with the adjective pairs good vs. bad and simple vs. complex. Each factor is tested with 4 pairs of adjectives. The reaction of an individual i to a given factor j is measured by his responses on the adjective scales for that factor. Bailey and Pearson used a Likert scale from -3 to +3, where 0 indicated an indifferent reaction:

R_{ij} = \frac{1}{4} \sum_{k=1}^{4} I_{ijk}

where I_{ijk} = the numeric response of user i to adjective pair k of factor j.

To neutralize the effect of individuals who had no reactions (responses of 0 on the scale) to one or more factors, the results can be normalized to the range -1 to +1 by basing the normalized score only on factors with nonzero responses. Normalized satisfaction for each individual is calculated by dividing the satisfaction S_i by the maximum possible score that individual could receive, which is the number of factors with a nonzero score times three, the maximum score per factor in Bailey and Pearson's scale. That is:

NS_i = \frac{S_i}{F_i \times 3.0}

where
NS_i = normalized satisfaction of user i,
F_i = number of meaningful factors = \sum_{j=1}^{39} \delta_{ij}, with \delta_{ij} = 0 if R_{ij} = 0 and \delta_{ij} = 1 otherwise.
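As a concrete illustration of this computation, the short Python sketch below implements the weighted-sum satisfaction score and its normalization. It is not code from Bailey and Pearson or from the present study: the function names and the example reactions and weights are hypothetical, and reactions are assumed to be already averaged per factor on the -3 to +3 scale, with 0 treated as an indifferent (null) response.

```python
# Minimal sketch (assumptions noted above): Bailey and Pearson (1983)
# satisfaction S_i and normalized satisfaction NS_i for one individual.

def satisfaction(reactions, weights):
    """S_i = sum over factors j of R_ij * W_ij."""
    return sum(r * w for r, w in zip(reactions, weights))

def normalized_satisfaction(reactions, weights, max_per_factor=3.0):
    """NS_i = S_i / (F_i * max_per_factor), where F_i counts nonzero reactions."""
    meaningful = sum(1 for r in reactions if r != 0)  # F_i
    if meaningful == 0:
        return 0.0  # every response was indifferent
    return satisfaction(reactions, weights) / (meaningful * max_per_factor)

# Hypothetical example: three factors rated on the -3..+3 scale, equal weights.
reactions = [2, 0, -1]        # the 0 response is excluded from F_i
weights = [1.0, 1.0, 1.0]
print(satisfaction(reactions, weights))              # 1.0
print(normalized_satisfaction(reactions, weights))   # 1.0 / (2 * 3.0) = 0.1666...
```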

4. The instrument proposed

A questionnaire was developed containing questions on the users' reactions. The questions were based on Doll and Torkzadeh (1988), who proposed a 12-item questionnaire covering 5 components of user satisfaction (content, ease of use, timeliness, accuracy, and format), and on Chin, Diehl and Norman (1988). The proposed instrument has a general question on the satisfaction of the user with the system, measured with two pairs of adjectives, and 16 factors, which are: ease of use, flexibility, readability, organization, sequence, and browsing; ease of access and speed of access; search; and the number, quality, reliability, coverage, relevance, availability of full text, and up-to-dateness of the journals in the collection. These 16 factors can be grouped into higher-order categories, as shown in Table 1.

Table 1: User satisfaction factors

Quality of the System (DeLone and McLean category of IS success)
  - Usability: ease of use, flexibility, readability, organization of information, sequence of the screens, browsing
  - Access: ease of access, speed
  - Search resources: search resources
Quality of Information (DeLone and McLean category of IS success)
  - Content: number of journals, quality of journals, relevance of journals, chronological coverage, up-to-dateness, reliability, availability of full text

As Table 2 shows, for each factor a pair of opposing adjectives was used. The user answered on a Likert scale with values from -4 to +4 assigned to the intervals, indicating the level of satisfaction of the user. The central option (0) indicates an indifferent opinion.

Table 2: Adjective pairs for each factor

Satisfaction factor            Adjective pair(s)
Satisfaction                   Terrible - Excellent; Frustrating - Satisfying
Ease of use                    Difficult - Easy
Flexibility                    Rigid - Flexible
Characters in screens          Hard to read - Easy to read
Organization of information    Confusing - Clear
Sequence of screens            Confusing - Clear
Speed of access                Slow - Fast
Ease of access                 Impossible - Immediate, convenient and reliable
Searching resources            Not satisfactory - Satisfactory
Browsing                       Not satisfactory - Satisfactory
Number
Quality
Reliability
Relevance
Coverage
Up-to-dateness
Availability of full text

5. The method proposed

Questions are answered on a scale with values from -4 to +4 assigned to the intervals. Using the formulas proposed by Bailey and Pearson and the factors proposed here, several satisfaction measures can be derived for each individual:

1) A general measure of satisfaction is given by the average response to the two pairs of adjectives of the general question:

R_i = \frac{1}{2} \sum_{k=1}^{2} I_{ik}

2) Satisfaction taking into account all the factors is calculated by:

S_i = \sum_{j=1}^{16} R_{ij}, where the maximum value for each user would be 64.

3) Satisfaction with the content of the system is calculated using just the seven content factors:

S_i = \sum_{j=1}^{7} R_{ij}, where the maximum value for each user would be 28.

4) Satisfaction with the usability of the system is calculated using just the six usability factors:

S_i = \sum_{j=1}^{6} R_{ij}, where the maximum value for each user would be 24.

5) Satisfaction with the access to the system is calculated using just the two access factors:

S_i = \sum_{j=1}^{2} R_{ij}, where the maximum value for each user would be 8.

6) Satisfaction with the search resources of the system is calculated using the single search resources factor:

S_i = \sum_{j=1}^{1} R_{ij}, where the maximum value for each user would be 4.

7) Satisfaction with the quality of the system is calculated using all nine factors for usability, access and search resources:

S_i = \sum_{j=1}^{9} R_{ij}, where the maximum value for each user would be 36.

These values can be normalized, as indicated above, to remove the impact of the indifferent responses and to make the answers uniform and easier to compare, ranging from -1 to +1. For an area of knowledge, the mean over all individuals can be taken.
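A short sketch, again purely illustrative and not part of the proposed instrument, shows how these per-category measures can be computed for one respondent. The factor names and the example responses are hypothetical; the grouping follows Table 1, the scale is -4 to +4, and the normalization divides by the number of non-neutral factors times 4, the per-factor maximum of this instrument.

```python
# Minimal sketch under the assumptions stated above: category-level
# satisfaction for one respondent of the 16-factor instrument.
CATEGORIES = {
    "usability": ["ease_of_use", "flexibility", "readability",
                  "organization", "sequence", "browsing"],
    "access": ["ease_of_access", "speed"],
    "search": ["search_resources"],
    "content": ["number", "quality", "relevance", "coverage",
                "up_to_dateness", "reliability", "full_text"],
}
MAX_PER_FACTOR = 4.0  # maximum score per factor on the -4..+4 scale

def category_satisfaction(responses, factors):
    """Unnormalized S_i: sum of the reactions over the given factors."""
    return sum(responses[f] for f in factors)

def normalized(responses, factors):
    """NS_i in [-1, +1]: divide by (non-neutral factor count) x MAX_PER_FACTOR."""
    meaningful = [f for f in factors if responses[f] != 0]
    if not meaningful:
        return 0.0
    return category_satisfaction(responses, meaningful) / (len(meaningful) * MAX_PER_FACTOR)

# Hypothetical respondent (factor names are illustrative, not the questionnaire wording).
responses = {"ease_of_use": 3, "flexibility": 2, "readability": 3, "organization": 1,
             "sequence": 1, "browsing": 2, "ease_of_access": 3, "speed": 2,
             "search_resources": 0, "number": -1, "quality": 4, "relevance": 3,
             "coverage": 1, "up_to_dateness": 3, "reliability": 4, "full_text": -2}

all_factors = [f for fs in CATEGORIES.values() for f in fs]
quality_of_system = CATEGORIES["usability"] + CATEGORIES["access"] + CATEGORIES["search"]
print(normalized(responses, all_factors))             # all 16 factors, ~0.48
print(normalized(responses, CATEGORIES["content"]))   # content category, ~0.43
print(normalized(responses, quality_of_system))       # quality of the system, ~0.53
```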

6. Demonstration of the method

The data on the engineering faculty came from a web survey which was sent to around 15 thousand faculty from all areas of knowledge in 17 universities in the 5 geographic regions of Brazil. Of the 5,176 faculty who responded that they used the Portal, 544 were engineering faculty. For comparability, the normalized values are shown. Table 3 shows the values for satisfaction with each of the factors. The columns show the maximum, minimum and mean values over all respondents with non-neutral answers.

Table 3: Satisfaction for engineering faculty in each factor

Factor                         Max     Min    Mean
Satisfaction (all factors)     1.0   -0.67    0.40
Ease of use                    1.0   -1.0     0.42
Flexibility                    1.0   -1.0     0.36
Characters in screens          1.0   -1.0     0.39
Organization of information    1.0   -1.0     0.30

Sequence of screens            1.0   -1.0     0.30
Speed of access                1.0   -1.0     0.37
Ease of access                 1.0   -1.0     0.38
Searching resources            1.0   -1.0     0.29
Browsing                       1.0   -1.0     0.31
Number                         1.0   -1.0     0.20
Quality                        1.0   -1.0     0.49
Reliability                    1.0   -1.0     0.56
Relevance                      1.0   -0.75    0.52
Coverage                       1.0   -1.0     0.35
Up-to-dateness                 1.0   -1.0     0.49
Availability of full text      1.0   -1.0     0.19

Table 4: Satisfaction for engineering faculty in aggregated categories of factors

Category                                 Max     Min    Mean
Quality of the system                    1.0   -0.85    0.37
Quality of the content                   1.0   -1.0     0.42
Usability                                1.0   -0.85    0.37
Access                                   1.0   -1.0     0.38
Search                                   1.0   -1.0     0.28
Satisfaction (two pairs of adjectives)   1.0   -1.0     0.50
Satisfaction (all factors)               1.0   -0.67    0.40

7. Discussion

The data give an overview of the reactions of the users to the system. The faculty declare themselves clearly satisfied to have the Portal (0.50). They are most satisfied with the contents of the journals (reliability is 0.56, relevance is 0.52, up-to-dateness is 0.49, quality is 0.49). Still within the content category, they are less satisfied with the availability of full text (0.19), the number of journals (0.20), and the chronological coverage (0.35). The overall satisfaction with the category quality of content is 0.42. Satisfaction with the quality of the system is lower (0.37), with little variation among its three components: usability (0.37), access (0.38) and search resources (0.28).

Access is not a considerable problem (ease of access is 0.38, speed is 0.37). The system is considered easy to use (0.42) but less flexible (0.36) and somewhat confusing (sequence is 0.30, organization is 0.30, and browsing is 0.31). It is interesting, however, to observe that there is high variation between the maximum and minimum values attributed to each factor by the users. It would be important to conduct tests such as the coefficient of variation to determine how homogeneous these responses are, in order to reach a better understanding of the reactions of the users. The data suggest that investment should be made in the number of journals and in the acquisition of full-text versions, as well as of the older issues of the journals rather than just the latest years. More in-depth studies should be conducted in order to improve the interface and the search system.

The last two lines of Table 4 show the value of satisfaction taking into consideration all 16 factors (0.40) and the value of the perceived satisfaction of the users (0.50) when asked whether the Portal was excellent and satisfying. The fact that the perceived satisfaction is higher could be attributed to the subjectivity of the construct and to the fact that the users are grateful to have the Portal.

8. Conclusions

The method proves to be a practical way to compare satisfaction among different areas of knowledge or among different categories of factors. Further research is recommended to improve the instrument and the method proposed. Other factors should be added to the initial list to cover further reasons that affect users' reactions to systems, such as training, dissemination of information about the system, and the services provided. A more complete list of factors needs to be built and the instrument should be validated to prove its reliability.

References

Bailey, J. E.; Pearson, S. W. (1983). Development of a Tool for Measuring and Analyzing Computer User Satisfaction. Management Science, 29(5), 530-545.
Capes (2013). O que é? Available at: <www.capes.gov.br>. Accessed: March 28, 2013.
Cyert, R. M.; March, J. G. (1963). A behavioral theory of the firm. Englewood Cliffs, NJ: Prentice-Hall.
DeLone, W. H.; McLean, E. R. (1992). Information systems success: the quest for the dependent variable. Information Systems Research, 3(1), 60-95.
Chin, J. P.; Diehl, V. A.; Norman, K. (1988). Development of an instrument measuring user satisfaction of the human-computer interface. In E. Soloway, D. Frye and S. B. Sheppard (Eds.), Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Washington, DC, May 1988. New York, NY: ACM Press, 213-218.
Doll, W. J.; Torkzadeh, G. (1988). The measurement of end-user computing satisfaction. MIS Quarterly, 12, 259-274.
Ives, B.; Olson, M. H.; Baroudi, J. J. (1983). The measurement of user information satisfaction. Communications of the ACM, 26(10), 785-793.
Mason, R. O. (1978). Measuring information output: A communication systems approach. Information & Management, 1(5), 219-234.

Melone, N. P. (1990). Theoretical Assessment of the User-Satisfaction Construct in Information Systems Research. Management Science, 36(1), 76-91.
Rey Martín, C. (2000). La satisfacción del usuario: un concepto en alza. Anales de Documentación, 3, 139-153.
Shannon, C. E.; Weaver, W. (1949). A mathematical theory of communication. Urbana, IL: University of Illinois Press.