User Friendly Recommender Systems


User Friendly Recommender Systems

MARK HINGSTON
SID:

Supervisor: Judy Kay

This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Information Technology (Honours)

School of Information Technologies
The University of Sydney
Australia

3 November 2006

Abstract

Recommender systems are a recent but increasingly widely used resource. Yet most, if not all, of them suffer from serious deficiencies. Recommender systems often require first time users to enter ratings for a large number of items, a tedious process that often deters users. Thus, this thesis investigated whether useful recommendations could be made without requiring users to explicitly rate items. It was shown that ratings automatically generated from implicit information about a user can be used to make useful recommendations. Most recommender systems also provide no explanations for the recommendations that they make, and give users little control over the recommendation process. Thus, when these systems make a poor recommendation, users cannot understand why it was made, and are not able to easily improve their recommendations. Hence, this thesis investigated ways in which scrutability and control could be implemented in such systems. A comprehensive questionnaire was completed by 18 participants as a basis for a broader understanding of the issues mentioned above and to inform the design of a prototype; a prototype was then created and two separate evaluations performed, each with at least 9 participants. This investigation highlighted a number of key scrutability and control features that could be useful additions to existing recommender systems. The findings of this thesis can be used to improve the effectiveness, usefulness and user friendliness of existing recommender systems. These findings include:

- Explanations, controls and a map based presentation are all useful additions to a recommender system.
- Specific explanation types can be more useful than others for explaining particular recommendation techniques.
- Specific recommendation techniques can be useful even when a user has not entered many ratings.
- Ratings generated from purely implicit information about a user can be used to make useful recommendations.

Acknowledgements

Firstly, I would like to thank my supervisor, Judy Kay, for all of the time and effort she has put into guiding me through the production of this thesis. I would like to thank Mark van Setten and the creators of the Duine Toolkit for producing a high quality piece of software and making it available to the public. I want to also thank Joseph Konstan, for taking the time to talk with me and give me encouragement at the formative, early stages of my thesis. I would also like to thank my lovely girlfriend Sarah Kulczycki, for her unwavering support and fun-loving spirit.

CONTENTS

Abstract ii
Acknowledgements iii
List of Figures vii

Chapter 1 Introduction
    1.1 Background
    1.2 Research Questions

Chapter 2 Literature Review
    Social Filtering
    Content-Based Filtering
    2.1 Hybrid Recommenders (The Duine Toolkit)
    2.2 Unobtrusive Recommendation
    2.3 Scrutability and Control
    2.4 Conclusion

Chapter 3 Exploratory Study
    3.1 Introduction
    3.2 Qualitative Analysis
    3.3 Recommendation Algorithm Analysis
    3.4 Questionnaire - Design
        Part A - Presentation Style
        Part B - Understanding & Usefulness
        Final Questions - Integrative
    3.5 Questionnaire - Results
        Usefulness
        Understanding
        Understanding And Usefulness
        Control
        Presentation Method
        Final Questions
    Test Data
    Conclusion

Chapter 4 Prototype Design
    Introduction
    User's View
        isuggest-usability
        isuggest-unobtrusive
    Design & Architecture
        isuggest-usability
        isuggest-unobtrusive
    Conclusion

Chapter 5 Evaluations
    Introduction
    Design
        isuggest-usability
        isuggest-unobtrusive
    isuggest-usability Evaluations Results
        Recommender Usefulness
        Explanations
        Controls
        Presentation Method
    isuggest-unobtrusive - Results
        Statistical Evaluations
        Ratings Generation
        Recommendations
    Conclusion

Chapter 6 Conclusion
    Future Work

References 90

Appendix A Questionnaire Form 93
Appendix B Questionnaire Results 94
Appendix C isuggest-usability Evaluation Instructions 95
Appendix D isuggest-usability Evaluation Results 96
Appendix E isuggest-unobtrusive Evaluation Instructions 97
Appendix F isuggest-unobtrusive Evaluation Results 98

List of Figures

2.1 MAE For The Duine Toolkit's System Lifecycle Test. Lower MAE Values Indicate Better Performance. The Numbers Below Each Group Indicate The Sample Size (In Number Of Predictions)
2.2 Examples Of Features That Can Be Computed For Various Item Types
2.3 Mean Response Of Users To Each Explanation Interface, Based On A Scale Of One To Seven. Explanations 11 And 12 Represent The Base Case Of No Additional Information. Shaded Rows Indicate Explanations With A Mean Response Significantly Different From The Base Cases
3.1 Summary Of Possible Explanations And Control Features For The Major Algorithms In The Duine Toolkit
3.2 Demographic Information For Each Of The Respondents
3.3 List Based Presentation That Was Shown To Participants In The Questionnaire
3.4 Map Based Presentation That Was Shown To Participants In The Questionnaire
3.5 One Of The Explanation Screens Shown To Participants In The Questionnaire. This Screen Explains Recommendations From The Learn By Example Technique
3.6 One Of The Explanation Screens Shown To Participants In The Questionnaire. This Screen Explains Recommendations From The Social Filtering Technique
3.7 The Genre Based Control Shown To Participants In The Questionnaire
3.8 The Screens With The Maximum Average Usefulness For Each Recommendation Method. Error Bars Show One Standard Deviation Above And Below The Mean. N = 18
3.9 Average Ranking Given To Each Presentation Method. N = 18. Top Ranking = 1. Bottom Ranking = 6
3.10 Average Response For Contribution That Each Method Should Make To A Combination Of Recommendation Methods. Error Bars Show One Standard Deviation Above And Below The Mean. N = 18
3.11 The Screens With The Maximum Average Understanding For Each Recommendation Method. Error Bars Show One Standard Deviation Above And Below The Mean. N =
3.12 Respondents' Average Understanding Of Recommendation Methods Before And After Explanations. Error Bars Show One Standard Deviation Above And Below The Mean. N =
3.13 Average Ratings For Questions Regarding Respondents' Understanding, Likelihood Of Using And Perceived Usefulness Of Each Control Feature. Error Bars Show One Standard Deviation Above And Below The Mean. N =
3.14 Users' Responses For Questions Regarding Recommendation Presentation Methods. Error Bars Show One Standard Deviation Above And Below The Mean. N =
3.15 Average Rating For The Usefulness Of Possible Features Of A Recommender. Error Bars Show One Standard Deviation Above And Below The Mean
4.1 List Based Presentation Of Recommendations
4.2 The Star Bar That Users Used To Rate Items
4.3 Recommendation Technique Selection Screen. Note: The Word Of Mouth Technique Shown Here Is Social Filtering And The Let isuggest Choose Technique Is The Duine Toolkit Taste Strategy
4.4 Explanation Screen For Genre Based Recommendations
4.5 Social Filtering (Simple Graph) Explanation Screen For Social Filtering Recommendations
4.6 Explanation Screen For Learn By Example Recommendations
4.7 Explanation Screen For Most Popular Recommendations
4.8 The Genre Based Control (Genre Slider)
4.9 The Social Filtering Control. Note: The Actual Control Is The Ignore This User Link
4.10 Full Map Presentation Zoomed Out View
4.11 Full Map Presentation Zoomed In View
4.12 Similar Items Map Presentation
4.13 The Explanation Screen Displayed After Ratings Generation 55
4.14 Architecture Of The Basic Prototype, With Components Constructed During This Thesis Marked In Blue
4.15 Architecture Of isuggest-usability, With Components Constructed During This Thesis Marked In Blue
4.16 Architecture Of isuggest-unobtrusive, With Components Constructed During This Thesis Marked In Blue
5.1 Demographic Information About The Users Who Conducted The Evaluations Of isuggest-usability
5.2 Demographic Information About The Users Who Conducted The Evaluations Of isuggest-unobtrusive
5.3 Average Usefulness Ratings For Each Recommendation Method. Error Bars Show Standard Deviation
5.4 Average Usefulness Ratings For Each Explanation. Error Bars Show Standard Deviation
5.5 Users' Ratings For The Overall Use Of The isuggest Explanations
5.6 Users' Ratings For The Effectiveness Of Control Features
5.7 Users' Ratings For The Overall Effectiveness Of The isuggest Control Features
5.8 Average Usefulness Of The Map Based Presentations. Error Bars Show Standard Deviation
5.9 Sum Of Votes For The Preferred Presentation Type
5.10 Comparison Of Distribution Of Ratings Values
5.11 Comparison Of MAE And SDAE For Movielens Recommendations And Recommendations Using Generated Ratings. Lower Scores Are Better. Techniques Are Sorted By MAE
5.12 Average Usefulness Ratings For Each Recommendation Method. Error Bars Show Standard Deviation. 82

CHAPTER 1

Introduction

Recommender systems are a recent, but increasingly widely used resource. Yet most, if not all, of them suffer from serious deficiencies. With so much information available over the Internet, people often turn to recommendation services to highlight the items that will be of most interest to them. All of the significant systems in the area of recommendation build up a profile of a user (usually by asking users to rate items they have seen) and then use content-based or collaborative filtering, or a combination (hybrid) of these methods, to make recommendations about what other pieces of information a user might be interested in. However, many recommender systems require first time users to enter ratings for a large number of items. Further, these systems do not always make useful recommendations. Recommendations can be poor for a number of reasons, but what happens when a recommender does make a poor recommendation? Most recommender systems offer no information about the reason that they made particular recommendations. Further, most also offer users little opportunity to affect the system in a way that can improve recommendations. The fact that recommenders require users to rate items can also be a failing, as the tedious process of entering ratings can often deter users. When we take account of all of these factors, it is clear that many existing recommender systems are not meeting their potential for usefulness and usability.

1.1 Background

Since about 1995, recommender systems have been deployed across many domains. Two of the most important early recommender systems were Ringo (publicly available in 1994) and GroupLens (available in 1996). The success of Ringo, one of the first large-scale music recommendation systems, is reported in (Shardanand and Maes, 1995). GroupLens, an automated collaborative filtering system for Usenet news, also proved highly successful.

(Konstan et al., 1997) reported trials of the GroupLens system, and this classic paper showed that collaborative filtering could be effective on a large scale. The GroupLens project was soon adapted to produce MovieLens, a large-scale, publicly available movie recommendation system. Broad interest in recommender systems was soon fostered by the increasing public demand for systems that helped deal with the problem of information overload. Since then, much academic and commercial interest has been shown in recommender systems for many different domains. Although much of its research is not published, Amazon.com is one of the most well known implementers of this technology. Amazon.com makes use of collaborative filtering systems to recommend products that a user might like to purchase. Other companies that use recommender systems include netflix.com for videos, TiVo for digital television and Barnes and Noble for books. Many music recommendation systems are also available today, such as Pandora.com (which maintains a staff of music analysts who tag songs as they enter the system) and last.fm. (Atkinson, 2006) rated these two systems as the best music recommenders currently available to the public.

1.2 Research Questions

In order to make recommender systems more user friendly, the problems detailed above need to be addressed. However, there is a lack of existing research into the ways that recommender systems can: make recommendations unobtrusively; explain recommendations; and offer users useful control over the recommendation process. This lack of research is especially prevalent in the area of music recommendation, where little research has been published. Thus, this project investigated the following research questions:

- Scrutability & Control: What is the impact of adding scrutability and control to a recommender system?
- Unobtrusive Recommendation: Can a recommender system provide useful recommendations without asking users to explicitly rate items?

This thesis originally aimed to investigate these questions with reference to music recommender systems. To further this goal, a dataset containing unobtrusively obtained information about users was located for use in investigating Unobtrusive Recommendation. However, it quickly became apparent that few music datasets containing users' explicit ratings of music were available.

Thus, in order to conduct a thorough and rigorous study of Scrutability & Control, the MovieLens standard dataset was used. This contained information on users and their ratings of movies. The contributions of this thesis are:

- the identification of a lack of existing research into scrutability, control and unobtrusiveness in recommender systems (Chapter 2);
- the identification of a number of promising methods for adding scrutability and control to a recommender (Chapter 3);
- the creation of a prototype that implements these scrutability and control methods, and can also provide unobtrusive recommendations (Chapter 4); and
- the evaluation of the methods implemented in this prototype for providing scrutability, control and unobtrusiveness within a recommender system (Chapter 5).

CHAPTER 2

Literature Review

The basic purpose of a music recommender is to recommend items that will be of interest to a specific user. This task is necessary because an abundance of information is now available to people via the Internet, and many do not have the time to sort through it all. Currently, all major recommendation systems use social filtering, content-based filtering, or some combination of these two approaches to predict how interested a user will be in a specific item. This information is then used to recommend items that the system believes will be of the most interest to that user. Each of these approaches to recommendation is discussed below, with reference to Figure 2.1 (taken from (van Setten et al., 2002)). This graph shows the results of testing a series of approaches to recommendation using the MovieLens standard data set. These tests were evaluated using the Mean Absolute Error (MAE) metric, which (Herlocker et al., 2004) lists as an appropriate metric for the evaluation of recommender systems. Figure 2.1 gives a good indication of the relative levels of performance that can be achieved by using each approach.

FIGURE 2.1: MAE For The Duine Toolkit's System Lifecycle Test. Lower MAE Values Indicate Better Performance. The Numbers Below Each Group Indicate The Sample Size (In Number Of Predictions)

Social Filtering

(Polcicova et al., 2000), (Breese et al., 1998) and (Shardanand and Maes, 1995) explain that social filtering systems work by first asking users to rate items. Then, by comparing those ratings, they locate users who share common interests and make personalized recommendations based on like-minded users' opinions. Social filtering does not take formal content into account and makes judgments based purely upon the ratings of users. The GroupLens project, documented in (Konstan et al., 1997), involved a large-scale trial of a social filtering recommender system. This trial was confirmatory research: a large number of users were asked to test the system, and the results of this testing were collated to provide statistical confirmation that social filtering could be effective on a large scale. Many further research projects into social filtering have confirmed its utility through simulation. Such projects include (Breese et al., 1998) and (van Setten et al., 2002), which both contain simulations run on the MovieLens data set and evaluated using mean error metrics.
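For reference, the Mean Absolute Error over N predictions is simply the average absolute difference between each predicted rating p_i and the rating r_i that the user actually gave:

    MAE = (1/N) * sum_{i=1..N} |p_i - r_i|

so a lower MAE means that predictions lie closer, on average, to the ratings users actually entered.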

In general, social filtering algorithms work in the following way: "In the first step, they identify the k users in the database that are the most similar to the active user. During the second step, they compute the [set of] of items [liked] by these users and associate a weight with each item based on its importance in the set. In the third and final step, from this [set] they select and recommend the items that have the highest weight and have not already been seen by the active user" - (Deshpande and Karypis, 2004), p 4.

Figure 2.1 shows the social filtering recommender to have the equal lowest MAE in four of the five tests, showing that it is a highly effective recommendation method. However, social filtering is not without its problems. (Adomavicius and Tuzhilin, 2005) summarises the issues with social filtering as:

- An inability to make accurate predictions for new users. (Referred to in this thesis as the cold start problem for new users.)
- Poor recommendation accuracy during the initial stages of the system. (Referred to in this thesis as the cold start problem for new systems.)
- A lack of ability to recommend new items until they are rated by users.

Social filtering was one recommendation technique used in this project to make music and movie related recommendations. As stated above, social filtering does not make use of the content of items, only the ratings that users have given each item. This means that social filtering approaches were easily adapted for use in both music and movie related recommendation. A sketch of the user-based approach quoted above is given below.
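The three steps quoted from (Deshpande and Karypis, 2004) can be made concrete with a short Java sketch. This code is illustrative only: the class, its method names and the choice of cosine similarity are assumptions made for this sketch, not the Duine Toolkit's actual implementation (GroupLens-style systems typically use a correlation-based similarity measure).

    import java.util.*;
    import java.util.stream.Collectors;

    // Minimal sketch of user-based social filtering, following the three
    // steps quoted above. All names are invented for illustration.
    public class SocialFilteringSketch {

        // ratings.get(user).get(item) = that user's rating of the item (1-5)
        private final Map<String, Map<String, Double>> ratings;

        public SocialFilteringSketch(Map<String, Map<String, Double>> ratings) {
            this.ratings = ratings;
        }

        // Step 1: identify the k users most similar to the active user.
        private List<String> nearestNeighbours(String activeUser, int k) {
            Map<String, Double> active = ratings.get(activeUser);
            return ratings.keySet().stream()
                    .filter(u -> !u.equals(activeUser))
                    .sorted(Comparator.comparingDouble(
                            (String u) -> -similarity(active, ratings.get(u))))
                    .limit(k)
                    .collect(Collectors.toList());
        }

        // Cosine similarity over co-rated items.
        private double similarity(Map<String, Double> a, Map<String, Double> b) {
            double dot = 0, normA = 0, normB = 0;
            for (Map.Entry<String, Double> e : a.entrySet()) {
                Double rb = b.get(e.getKey());
                if (rb == null) continue;
                dot += e.getValue() * rb;
                normA += e.getValue() * e.getValue();
                normB += rb * rb;
            }
            return (normA == 0 || normB == 0) ? 0 : dot / Math.sqrt(normA * normB);
        }

        // Steps 2 and 3: weight the items rated by the neighbours, then
        // recommend the n highest-weighted items the active user has not
        // yet rated.
        public List<String> recommend(String activeUser, int k, int n) {
            Map<String, Double> weights = new HashMap<>();
            for (String neighbour : nearestNeighbours(activeUser, k)) {
                ratings.get(neighbour).forEach((item, rating) -> {
                    if (!ratings.get(activeUser).containsKey(item)) {
                        weights.merge(item, rating, Double::sum);
                    }
                });
            }
            return weights.entrySet().stream()
                    .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                    .limit(n)
                    .map(Map.Entry::getKey)
                    .collect(Collectors.toList());
        }
    }

In practice, systems of this kind also normalise for each user's mean rating and require a minimum number of co-rated items before trusting a similarity score; those refinements are omitted here for brevity.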

Content-Based Filtering

In content-based filtering systems, users are again asked to rate items. The system then analyses the content of those items and creates a profile that represents a user's interests in terms of item content (features, key phrases, etc.). The content of items unknown to the user is then analysed and compared with the user's profile in order to find the items that will be of interest to the user. The information that a content-based filtering system can compute about a particular item falls into one of two categories: content-derived and meta-content information. Content-derived information (used in (Cano et al., 2005), (Logan, 2004) and (Mooney and Roy, 2000)) is computed by the system through analysis of the actual content of an item (e.g. the beats per minute of a song or the key words found in a document). Meta-content information (used in (Mak et al., 2003), (van Setten et al., 2002) and (van Setten et al., 2003)) is any information that the system can glean about an item that does not come from analysing the content of that item (such information may come from an external database, or a header attached to the item). Examples of the types of features that can be computed for text, music and movie data are given in Figure 2.2. Content-derived information about an item must be computed with algorithms that are specific to the type of item being analysed. In contrast, meta-content information does not need to be computed from the actual items and, in fact, is often quite similar for items from different domains. Figure 2.2 shows that meta-content information for each of the different item types exhibits certain similarities, whereas the content-derived information is quite specific to the type of item. This means that meta-content based recommenders can easily be adapted for use in new domains, but that it is much more difficult to perform the same adaptation on recommenders that use content-derived information. However, systems that make use of content-derived information gain a better picture of each of the items in the system and thus should be able to make more accurate recommendations than systems that use only meta-content information.
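To illustrate the meta-content approach, the sketch below represents a user's profile as the average rating given to each feature (for example, each genre) and predicts interest in an unseen item by averaging the profile weights of that item's features. The class and its weighting scheme are invented for illustration and are not taken from any of the systems cited above.

    import java.util.*;

    // Sketch of meta-content based filtering: the profile is the average
    // rating the user has given to items carrying each feature.
    public class ContentProfileSketch {
        private final Map<String, Double> featureSum = new HashMap<>();
        private final Map<String, Integer> featureCount = new HashMap<>();

        // Fold one rated item (with its features, e.g. genres) into the profile.
        public void addRating(Set<String> itemFeatures, double rating) {
            for (String f : itemFeatures) {
                featureSum.merge(f, rating, Double::sum);
                featureCount.merge(f, 1, Integer::sum);
            }
        }

        // Predict interest in an unseen item from its features; fall back
        // to a default when none of the item's features are in the profile.
        public double predict(Set<String> itemFeatures, double defaultRating) {
            double total = 0;
            int known = 0;
            for (String f : itemFeatures) {
                if (featureCount.containsKey(f)) {
                    total += featureSum.get(f) / featureCount.get(f);
                    known++;
                }
            }
            return known == 0 ? defaultRating : total / known;
        }
    }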

Like social filtering, content-based filtering also has weaknesses. (Adomavicius and Tuzhilin, 2005) states that such systems:

- Become over specialised and only recommend very specific types of items to each user.
- Are also subject to the cold start problem for new users.
- May rely on content-derived information, which is often expensive (or impossible) to compute accurately.

                      Text               Music        Movies
    Meta-content:     Author             Composer     Writer
                      Abstract           N/A          Synopsis
                      Publisher          Producer     Producer
                      Genre              Genre        Genre
                      N/A                Performer    Actors
    Content-derived:  Key phrases        Beats/min    Color histogram
                      Term frequencies   MFCCs        Story
                                         Tempo

FIGURE 2.2: Examples Of Features That Can Be Computed For Various Item Types

(van Setten et al., 2002) makes use of content-based filtering using meta-content to make movie recommendations. This content-based filtering approach is one of a number of prediction techniques used in the Duine Toolkit to make recommendations. This toolkit is discussed in detail in Section 2.1. The tests summarized in (van Setten et al., 2002) show that the content-based algorithm included in the Duine Toolkit performed well during simulations. This project extended the Duine Toolkit to also include content-based prediction techniques for music recommendations.

2.1 Hybrid Recommenders (The Duine Toolkit)

Hybrid recommender systems combine content-based and social filtering in the hope that this combination might retain all the strengths of the two approaches, while also alleviating their problems. The Duine Toolkit is a hybrid recommender that was produced as part of a PhD completed by Mark van Setten. It is a piece of software that makes available a number of prediction techniques (including both social filtering and content-based techniques) and allows them to be combined dynamically. This project involved using the Duine Toolkit to make both music and movie related recommendations. This toolkit makes use of prediction strategies, which were introduced in (van Setten et al., 2002).

Such prediction strategies are a way of easily combining prediction techniques dynamically and intelligently in an attempt to provide better and more reliable prediction results. (van Setten et al., 2002) introduces these prediction strategies and demonstrates how they can be adapted depending upon the various states that a system might be in. It introduces a software platform called Duine, which implements prediction strategies and can be extended to include new prediction techniques and new strategies. Simulations run in (van Setten et al., 2002) and (van Setten et al., 2004) showed that the combination of prediction techniques into prediction strategies can improve the effectiveness of a recommendation system. The testing done in these papers was of sound quality and was performed on the data set made available by the MovieLens project, which is a well-known, standard data set for recommender systems. The results of these tests are summarised in (van Setten et al., 2002). These results show that in every case, the Taste Strategy (a particular prediction strategy used in testing) had the lowest MAE of all of the prediction techniques used. This strategy is able to choose the most effective prediction technique for a particular situation and thus is able to maximise prediction accuracy. The work done in (van Setten et al., 2002) and (van Setten et al., 2004) focused on making predictions based on movie data. This project built upon this work by extending the Duine Toolkit for use in music recommendation. As well as making use of the Duine Toolkit in a new domain, this project also involved the addition of Scrutability & Control features and Unobtrusive Recommendation to this toolkit. Each of these additions is discussed in the following sections. The general shape of a prediction strategy is sketched below.
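The selection rule in the following sketch (simple rating-count thresholds) is an assumption made purely for illustration; the Duine Toolkit's actual Taste Strategy uses its own, more sophisticated selection logic, and all names here are invented.

    import java.util.function.ToIntFunction;

    // A prediction technique maps a (user, item) pair to a predicted rating.
    interface Predictor {
        double predict(String user, String item);
    }

    // Sketch of a prediction strategy in the spirit of (van Setten et al.,
    // 2002): delegate to whichever technique is expected to work best in
    // the current situation.
    class StrategySketch implements Predictor {
        private final Predictor mostPopular, contentBased, socialFiltering;
        private final ToIntFunction<String> ratingCount; // ratings a user has entered

        StrategySketch(Predictor mostPopular, Predictor contentBased,
                       Predictor socialFiltering, ToIntFunction<String> ratingCount) {
            this.mostPopular = mostPopular;
            this.contentBased = contentBased;
            this.socialFiltering = socialFiltering;
            this.ratingCount = ratingCount;
        }

        @Override
        public double predict(String user, String item) {
            int n = ratingCount.applyAsInt(user);
            if (n == 0) return mostPopular.predict(user, item);    // cold start: no profile yet
            if (n < 20) return contentBased.predict(user, item);   // sparse profile: content helps
            return socialFiltering.predict(user, item);            // enough ratings to find neighbours
        }
    }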

2.2 Unobtrusive Recommendation

Generally, recommender systems build a profile of a user's likes and dislikes by asking the user to rate specific items after they have listened to them. However, users often find this process tedious. Further, the cold start problem for new users means that users may need to rate many items before they receive useful recommendations. As a result, this thesis investigated ways in which a system can elicit information about a user's likes and dislikes in an unobtrusive manner. In order to investigate Unobtrusive Recommendation, new features were added to the Duine Toolkit. This would allow the system to make recommendations without needing to ask a user to rate the items that they have seen or heard. Accomplishing this task required an unobtrusive way to gauge a user's level of interest in an item. Some of the unobtrusive methods for judging how interested a user is in an item are summarised in (Oard and Kim, 1998). These methods include the length of time that a user spends viewing an item, the number of times a user has viewed an item, the items that a user is willing to purchase, the items that a user deletes from their collection and the items that a user chooses to retain in their collection. Unfortunately, (Oard and Kim, 1998) merely presents a summary of these methods and does not present any testing of the methods it mentions.

Of course, one problem with all of the methods mentioned above for modelling users unobtrusively is that preferences based upon such data are likely to be less accurate than preferences based upon explicit user ratings. (Adomavicius and Tuzhilin, 2005) states that "[unobtrusive] ratings (such as time spent reading an article) are often inaccurate and cannot fully replace explicit ratings provided by the user. Therefore, the problem of minimizing intrusiveness while maintaining certain levels of accuracy of recommendations needs to be addressed by the recommender systems researchers" - (Adomavicius and Tuzhilin, 2005), p 12. This paper recognises the need for more research into unobtrusive user modelling and notes a number of papers that have reported on work in this area.

Unfortunately, there is a distinct lack of published research that deals with eliciting a user's musical preferences unobtrusively. The literature available on unobtrusive user modelling is often concerned with determining users' preferences in regard to websites and not their opinions on pieces of music. (Kiss and Quinqueton, 2001) mentions the use of navigation histories to gauge a user's level of interest in particular websites. It also proposes some more creative methods for using implicit input, such as matching the sort order of a search with the order in which results were visited, and using the time taken to press the back button on a browser to judge a user's interest in a page. Although (Kiss and Quinqueton, 2001) is evidently based upon some amount of research, and claims "the implementation has started and is well advancing, and we begin to have some experimental results" - (Kiss and Quinqueton, 2001), p 15, disappointingly, results from the project are not easily available and, as user modelling forms only one part of the paper, it is unlikely that it would be easy to identify the impact that particular user modelling techniques had upon the results of this research. However, this paper does still present some useful ideas on making use of implicit preference information that could be adapted for use in a music recommender. (Middleton et al., 2001) describes similar techniques for user modelling and includes results of a number of exploratory case studies that show that this form of user modelling can be quite successful. This project built upon existing methods for user profiling and extended these to investigate methods for inferring a user's level of interest in an item from only implicit data.
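As a concrete example of this idea, the sketch below maps two of the interest indicators listed above (play counts and collection membership) onto a 1-5 rating scale. The thresholds are invented for illustration; they are an assumption for this sketch, not the mapping that was ultimately built and evaluated in this thesis.

    // Sketch of turning implicit evidence into a 1-5 rating, in the spirit
    // of the interest indicators surveyed by (Oard and Kim, 1998).
    // All thresholds below are illustrative assumptions.
    public class ImplicitRaterSketch {
        public double rate(int playCount, boolean keptInCollection, boolean deleted) {
            if (deleted) return 1.0;                              // strong negative signal
            if (playCount == 0) return keptInCollection ? 3.0 : 2.0;
            if (playCount < 5) return 3.0;                        // sampled, mild interest
            if (playCount < 20) return 4.0;                       // played regularly
            return 5.0;                                           // heavily played: strong interest
        }
    }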

2.3 Scrutability and Control

The literature discussed in the sections above all deals with the desire to make high quality recommendations. Once these recommendations are made, scrutability is concerned with explaining to the user why a particular recommendation was made. Further, control is concerned with allowing users to control a recommender system in order to improve recommendations. Research published in (Sinha and Swearingen, 2001) and (Sinha and Swearingen, 2002) shows that users are more willing to trust or make use of recommendations that are well explained (i.e. that are scrutable). Joseph Konstan, a leading figure in recommender systems research, noted that "adding scrutability to recommender systems is important, but hard" - (Konstan, J., personal communication, June 3, 2006).

Scrutability is a key component of a recommender system for a number of reasons. First, users are not always willing to trust a system when they are just beginning to use it. If users can be provided with some level of assurance that the recommendations made by a system are of a high quality, then they are more likely to trust that system. Such assurances are given to the user by showing why a particular recommendation was made. Scrutability is also useful in cases where a recommendation is made that a user believes is not appropriate. In this case, if a user can access some explanation for the recommendation, they may be more likely to understand why that recommendation might be of interest to them. Explanations may also help a user to identify areas where a system is making errors and, ideally, control functions should then be able to help the user alter the function of the system to make it less likely to make inappropriate recommendations. The value of control functions is not limited to allowing alterations to the recommendation process when errors occur. Rather, users can often make use of control functions at any time during the operation of a recommender system. This allows them to influence the process of recommendation in a way that hopefully leads to improved recommendation accuracy.

Sinha and Swearingen have shown that scrutability improves the effectiveness of a recommendation system. (Sinha and Swearingen, 2001) and (Sinha and Swearingen, 2002) published the results of research that involved asking users to test a number of publicly available recommendation systems and then evaluate their experience with each one. The findings of these studies show that "in general users like and feel more confident in recommendations perceived as transparent" - (Sinha and Swearingen, 2002), p 2. Although their experiments were only on a small scale, they were well designed, and the importance of transparency is supported by other research, such as that conducted by "Johnson & Johnson (1993) [who] point out that explanations play a crucial role in the interaction between users and complex systems" - (Sinha and Swearingen, 2002), p 1.

A similar experimental study was conducted in (Herlocker, 2000), which describes scrutability experiments conducted on a much larger sample group and confirms that "most users value explanations and would like to see them added to their [recommendation] system. These sentiments were validated by qualitative textual comments given by survey respondents" - (Herlocker et al., 2000), p 10. (Herlocker, 2000) describes in detail a series of approaches to adding scrutability to social filtering recommender systems. It reports on user trials involving a large number of users, who were each asked to use prototype recommender systems and provide feedback on the value of the explanations given for recommendations. The results of these tests can be seen in Figure 2.3, which shows the most useful techniques for adding scrutability to be explanations showing histograms of ratings from like-minded users (nearest neighbours) and explanations showing the past performance of the recommender. (van Setten, 2005) also describes a small scale investigation into explanations for recommender systems, and (Mcsherry, 2005) and (Cunningham et al., 2003) present methods for explaining a particular method of recommendation, named Learn By Example. Some commercial systems (such as liveplasma) also offer innovative ways of presenting recommendations, such as Map Based presentation of items. Such presentations may increase the usefulness of recommendations and the ability of a user to understand these explanations.

The papers (and systems) mentioned above each demonstrate that scrutability can be beneficial in recommender systems, and present some ways of creating it. However, Scrutability & Control in recommender systems is an area which has not received much research attention, and thus there are still many questions to be answered regarding the best way to achieve these goals. Specifically, there is a lack of existing research into:

- Comparison of multiple recommendation techniques in terms of their usefulness and ability to be explained.
- Providing explanations for recommendation techniques other than social filtering.
- The impact of adding controls to a recommender system.
- The relationship between a user's understanding of a recommendation technique and the usefulness of its recommendations, and the potential trade-off between the two.
- The effect of a Map Based presentation on the usefulness and understandability of recommendations.

As a result, this project added Scrutability & Control features to the Duine Toolkit in order to build upon current research and investigate each of these areas.
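The best performing explanation in (Herlocker et al., 2000), a histogram of ratings from like-minded users, is also one of the simplest to generate, as the following sketch shows. The class and its plain-text rendering are invented for illustration.

    import java.util.List;

    // Sketch of the neighbour-rating histogram explanation that
    // (Herlocker et al., 2000) found most effective. Illustrative only.
    public class HistogramExplanationSketch {

        // Count how many nearest neighbours gave each rating (1-5).
        public static int[] build(List<Integer> neighbourRatings) {
            int[] histogram = new int[6]; // index 0 unused; indices 1..5 hold counts
            for (int r : neighbourRatings) {
                if (r >= 1 && r <= 5) histogram[r]++;
            }
            return histogram;
        }

        // Render the histogram as a simple text bar chart.
        public static String render(int[] histogram) {
            StringBuilder sb = new StringBuilder("Users similar to you rated this item:\n");
            for (int r = 5; r >= 1; r--) {
                sb.append(r).append(" stars: ")
                  .append("*".repeat(histogram[r]))
                  .append('\n');
            }
            return sb.toString();
        }
    }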

FIGURE 2.3: Mean Response Of Users To Each Explanation Interface, Based On A Scale Of One To Seven. Explanations 11 And 12 Represent The Base Case Of No Additional Information. Shaded Rows Indicate Explanations With A Mean Response Significantly Different From The Base Cases.

2.4 Conclusion

At this stage of the project, a number of key areas where more research was required were identified. The first of these areas was the provision of Unobtrusive Recommendation to users. Although there is existing work on unobtrusive modelling of a user's interests, most of this research has concentrated upon the field of web browsing. Using implicit data to infer a user's interest in items such as music or movies is an area where little research has been conducted. Thus, this project aimed to build upon existing work in the field of unobtrusive user modelling and investigate unobtrusive music recommendation. Adding Scrutability & Control to recommender systems is the second area where a lack of existing research was identified.

Current research into explaining and controlling recommender systems is quite sparse, and although some research does exist, there are still many questions to be answered regarding this goal. These questions include issues relating to the impact of adding controls to a recommender system, as well as many issues related to providing scrutable recommendations. Ultimately, this project aimed to advance research into both Scrutability & Control in recommender systems and Unobtrusive Recommendation.

CHAPTER 3

Exploratory Study

3.1 Introduction

The review of literature in Chapter 2 highlighted that there is a lack of existing research in the areas of scrutability, control and unobtrusiveness within recommender systems. This lack of research is especially prominent in the area of music recommendation, where little research at all has been published. Thus, this project aimed to investigate questions related to Scrutability & Control and Unobtrusive Recommendation. In order to investigate these areas, an exploratory study was first conducted, which involved the following tasks:

- A qualitative analysis of existing recommender technologies.
- The conduct of a questionnaire to investigate aspects of recommender systems, as a foundation for gaining the understanding needed to create a prototype recommender system.
- The creation of a dataset of implicit information about a large number of users, required for performing evaluations on a prototype at a later stage of the thesis.

The first stage of this research project was a qualitative analysis of a number of existing recommender systems and recommendation algorithms. This aimed to identify a suitable code base that could be extended into a prototype recommender system. An analysis of the recommendation algorithms contained in the chosen code base was then performed. This analysis aimed to discover methods that could be used to add controls and explanations to the prototype recommender system. To investigate users' attitudes toward these explanations and controls (as well as attitudes toward other aspects of recommender systems and usability), a questionnaire was conducted. The results of this questionnaire would be used later in this thesis to guide the construction of the prototype. Finally, a source of test data was established for use in evaluating the prototype. Each of these tasks is detailed in the sections below.

3.2 Qualitative Analysis

The system chosen as a code base needed to be open source and to have good code quality, resource consumption (with particular reference to running time and memory usage) and recommendation quality. It would also be highly useful if it provided support for the implementation of features such as explanations, control features and unobtrusive recommendation. The recommendation toolkits that were examined during the course of this qualitative analysis include:

- Taste: open-source recommender, written in Java. Available from
- Cofi: open-source, written in Java. Available from
- RACOFI: open-source, written in Java. Available from
- SUGGEST: free, written in C. Available from karypis/suggest/
- Rating-Based Item-to-Item: public domain, written in PHP. Available from
- consensus: open-source, written in Python. Available from
- The Duine Toolkit: open-source, written in Java. Available from

The qualitative analysis of these systems began with an examination of the specifications of each toolkit. Further analysis involved the examination of any available reference documentation. This analysis, combined with learnings from the critical literature review described in Chapter 2, narrowed the candidates down to just Taste and the Duine Toolkit. At this stage, the code for each of these toolkits was downloaded and examined. Ultimately, the Duine Toolkit was chosen for the following reasons:

- Well documented code base: the Duine Toolkit has complete and high quality documentation, as well as reference documents.
- Good recommendation quality: (van Setten et al., 2004) showed that the Duine Toolkit is able to choose the most effective recommendation technique for a particular situation and thus is able to maximise the quality of recommendations.
- Good resource usage: the Duine Toolkit has been built to conserve resources and ensures that the most resource intensive operations (which involve calculating the similarity between a user and all other users) occur only once for each user session, and not every time that a user rates an item.
- Multiple recommendation methods: the Duine Toolkit has six built in recommendation techniques and the facility to dynamically alter the recommendation technique that is being used. This meant that a system could be built that allowed users to easily swap from one recommendation technique to another. It also meant that we could test issues regarding users' interactions with not just one, but several methods of recommendation.

- Built in explanation facility: the Duine Toolkit was designed with explanations in mind: each recommendation that is created using this toolkit can have an explanation object attached to it, which describes exactly how that prediction was produced. This feature was included in the Duine Toolkit in anticipation of further extensions to the toolkit that enabled recommendations to be displayed.
- Easy to add user controls: in the Duine Toolkit, personal settings can be set and saved for each user. Some of these settings affect the recommendations that are produced by the system. The fact that the Duine Toolkit can set and save such personal settings means that it could be extended to allow users to exert control over the recommendation process.

3.3 Recommendation Algorithm Analysis

Once the Duine Toolkit was chosen as the code base for this thesis, an analysis of the recommendation techniques that it provided was necessary. The major recommendation techniques made available within the Duine Toolkit are:

- Most Popular: This technique recommends the most popular items, based on the average rating each item was given across all users of the system.
- Genre Based: This is a content-based technique that uses a user's ratings to decide which genres that user likes and dislikes. It then recommends items based upon this decision.
- Social Filtering: This is a social filtering technique that looks at the current user's ratings and finds others who are similar to that user. These similar users are then used to recommend new items. (Note: this method also makes use of "opposite users".)
- Learn By Example: This is a content-based technique that predicts how interested a user will be in a new artist by looking at how they have rated other similar items in the past. (Requires some measure of similarity to be defined.)
- Information Filtering: This is a content-based technique that uses natural language processing techniques to process a given piece of text for each item (e.g. a description). This information, combined with a user's ratings, is used to predict the user's level of interest in new items. Examination of this technique showed that it could be used to create recommendations that were either Lyrics Based (using lyrics from songs) or Description Based (using descriptions of particular artists).

- Taste Strategy: As noted in Chapter 2, (van Setten et al., 2004) shows that this is the recommendation technique that produces the highest quality recommendations within the Duine Toolkit. This technique is, in fact, a Prediction Strategy that is able to choose to make recommendations using any of the five techniques described above. It chooses the best available recommendation technique at any given point in time and makes recommendations using that technique. This is the default recommendation technique for the Duine Toolkit. Note that this technique was not considered as a candidate for the addition of scrutability or control, as it is a Prediction Strategy that merely makes use of other recommendation techniques and does not actually create recommendations itself.

Thorough examination and testing was conducted upon these algorithms to ascertain ways in which they could be explained and controlled. The results of this investigation are summarised in Figure 3.1. This table shows the possible explanations and control features that could be implemented for each of the recommendation algorithms within the Duine Toolkit. It also lists any problems that may be encountered when adding scrutability and control to each algorithm. For example, the entry for the Genre Based technique notes that recommendations produced using this technique could be explained by telling the user what genres an item belongs to and how interested the system thinks that user is in those genres. It also notes that one of the ways users could be given control over this technique would be to allow them to specify their level of interest in particular genres. Finally, it shows that a possible problem when offering users controls and explanations for this technique would be if a user did not agree with the genres that an item was classified into.

Most Popular
    Possible Explanations: Tell the user where this item ranks in terms of popularity. Tell the user the average rating that has been given to this item. Tell the user how many users have rated this item.

Genre Based
    Possible Explanations: Tell the user the recommendation was based on the genres that item belongs to. Show the user how interested the system thinks they are in each genre.
    Possible Control Features: Allow the user to specify their interest in a particular genre.
    Problems: What if users don't agree with the genre classifications?

Social Filtering
    Possible Explanations: Show the user how similar users have rated an item. Show the user the similar users that factored heavily in their recommendation.
    Possible Control Features: Allow the user to specify the impact that similar and opposite users should have on recommendations. Allow the user to choose users who they want to be considered as similar to them.
    Problems: What if users do not think they are really similar to particular users? There is a lot of information involved in this algorithm, and the 'opposite users' idea is a hard one to convey.

Learn By Example
    Possible Explanations: Show the user the similar items that factored heavily in their recommendation and how they rated those similar items.
    Possible Control Features: Allow the user to specify what factors should determine the similarity between items.
    Problems: What if users do not think this item is actually similar to the items they have rated in the past?

Information Filtering
    Possible Explanations: Show the user the key words that are present in the descriptions of items that they have liked in the past.
    Possible Control Features: Allow the user to control the features used in recommendation.
    Problems: Users might disagree with the keywords used to categorise their interest, even if these key words are quite appropriate. Users might not understand how this approach is working, especially if it works on something other than descriptions (e.g. it may work on the text from forum posts about an item).

FIGURE 3.1: Summary Of Possible Explanations And Control Features For The Major Algorithms In The Duine Toolkit.

The Taste Strategy was also examined at this stage, but it was found that because it switches between recommendation techniques, it cannot be explained in a consistent way to users. This meant that it was not considered a suitable technique to add scrutability and control to.
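As an illustration of how an explanation and a control feature pair up, the sketch below follows the Genre Based row of Figure 3.1: the explanation reports the genres behind a recommendation together with the system's estimate of the user's interest in each, and the control lets the user override those estimates directly. The class, its method names and the neutral default of 3.0 on the 1-5 scale are all invented for illustration; this is not the Duine Toolkit's API.

    import java.util.*;

    // Sketch of the Genre Based explanation and control from Figure 3.1.
    public class GenreControlSketch {
        private final Map<String, Double> genreInterest = new HashMap<>();

        // Control feature: the user directly sets their interest in a genre.
        public void setInterest(String genre, double level) {
            genreInterest.put(genre, level);
        }

        // Explanation: why an item with these genres was recommended.
        public String explain(String item, List<String> itemGenres) {
            StringBuilder sb = new StringBuilder(
                    item + " was recommended because it belongs to:\n");
            for (String g : itemGenres) {
                sb.append("  ").append(g)
                  .append(" (your estimated interest: ")
                  .append(genreInterest.getOrDefault(g, 3.0)) // 3.0 = neutral default
                  .append("/5)\n");
            }
            return sb.toString();
        }
    }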

3.4 Questionnaire - Design

The recommendation algorithm analysis described in the previous section highlighted a number of usability features that could be added to a recommender system. Further, the analysis of existing recommender systems described in Section 3.2 and the review of literature in Chapter 2 also brought to light some of the different usability features of existing recommender systems. In order to investigate how understandable and effective users would find these usability features, a questionnaire was designed. The results of this questionnaire would then be used to inform the construction of the prototype. A questionnaire was chosen as it was the most efficient way to gather large amounts of detailed information about users' opinions on the set of potential usability features. The specific aims of the questionnaire were to assess several potential usability features related to:

- Understanding of recommendations provided by various recommendation techniques.
- Usefulness of recommendations provided by various recommendation techniques.
- Attitudes toward control features for recommenders and understanding of how these would be used.
- Preferences for recommendation presentation format.

To this end, an extensive questionnaire was designed. It asked users to answer questions on a scale of 1 to 5, where 1 was the lowest score and 5 was the highest. Particular care was taken during the design of the questionnaire to ensure that each question would elicit useful information from participants and that all of the questions were clear and free of bias. An initial group of five respondents filled out the questionnaire, each answering 60 questions. After these respondents had completed the questionnaire, a number of revisions were made. These revisions included the removal of two questions, the addition of seven new questions and minor changes to the wording of a small number of questions. The questionnaire was then conducted with a further 13 people, who answered 65 questions (58 in common with the original questionnaire). Most respondents took around 40 minutes to complete the questionnaire. Figure 3.2 shows demographic information for each of the respondents. The sample group for this questionnaire was carefully selected to contain people from a variety of backgrounds, both male and female. The majority (12/18) of the users who completed the questionnaire were aged under 30. Since modern recommender systems are used most often by people in this age range, a higher proportion of respondents in this age range was deemed appropriate.

    Participant:                               1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
    Age:
    Gender:                                    F F M M M F M M F F F F M F M F F F
    Has An IT Background?                      N N N N Y N N N N N N N N N N N N N
    Has Used Any Type Of Recommender Before?   Y N Y Y Y N Y Y Y Y Y N Y N N Y Y Y

FIGURE 3.2: Demographic Information For Each Of The Respondents.

The sections below describe the final set of questions that were presented to respondents. Although there were many questions, they fell into three groups: Part A had one set of 5 questions; Part B had six sets of questions, totalling 52 questions; and the Final Questions comprised one set of seven questions. The entire questionnaire is included as Appendix A.

Part A - Presentation Style

This section of the questionnaire aimed to investigate users' preferences for recommendation presentation format. At this stage, respondents were shown two forms of recommendation presentation. The first of these was a standard List Based format (shown in Figure 3.3) and the second was a Map Based format (shown in Figure 3.4), similar to the liveplasma interface mentioned in Chapter 2. After viewing an example of each presentation format, respondents were asked to rate how well they understood the information conveyed by that example and how useful they would find recommendations presented in this format. Finally, after viewing both formats, respondents were asked to indicate whether they would prefer the List Based format, the Map Based format or both.

FIGURE 3.3: List Based Presentation That Was Shown To Participants In The Questionnaire

FIGURE 3.4: Map Based Presentation That Was Shown To Participants In The Questionnaire

Part B - Understanding & Usefulness

This section of the questionnaire aimed to investigate understanding of recommendations, usefulness of recommendations and attitudes toward control features. This section presented six recommendation techniques to respondents (Most Popular, Genre Based, Social Filtering, Learn By Example, Description Based and Lyrics Based). For each of these techniques, respondents followed this process:

- Respondents were first presented with a short textual description of how the technique works. At this stage, they rated their initial understanding of the technique.
- Respondents were then presented with a number of explanation screens, each of which showed a recommended item and an explanation of why it was recommended (example explanation screens are shown in Figures 3.5 and 3.6). For each screen, respondents rated how well they understood why the recommendation had been made and how useful they would find recommendations that were produced using this technique and explained in this fashion.

- If the technique had control features, respondents were also presented with a control feature screen for each of the controls for this technique (an example control feature screen is shown in Figure 3.7). After viewing each control feature screen, respondents rated how well they understood how they would use this control, how likely they would be to use it and how useful they expected it would be.
- Finally, respondents rated the overall usefulness of this recommendation technique, and their overall understanding of it.

FIGURE 3.5: One Of The Explanation Screens Shown To Participants In The Questionnaire. This Screen Explains Recommendations From The Learn By Example Technique

FIGURE 3.6: One Of The Explanation Screens Shown To Participants In The Questionnaire. This Screen Explains Recommendations From The Social Filtering Technique

Final Questions - Integrative

This section of the questionnaire aimed to investigate the usefulness of recommendation techniques and attitudes toward explanations and control features.

FIGURE 3.7: The Genre Based Control Shown To Participants In The Questionnaire

At this stage of the questionnaire, respondents were asked to indicate their general opinion on the usefulness of all six recommendation techniques. They first ranked the techniques from 1 to 6 in order of usefulness. Respondents were then asked to indicate the weight they would want to place on each technique if a combination of techniques was to be used in a recommender system. The weight that they could place on each technique ranged from Not At All (weight of 0) to Very Much (weight of 100). The final five questions in the questionnaire then asked respondents to rate how useful they would find the following five potential features of a recommender system:

- System Chooses Recommendation Method: The recommender system chooses the best recommendation technique to use at any point in time.
- System Chooses Combination Of Recommendation Methods: The recommender system chooses a combination of recommendation techniques to be used.
- View Results From Other Recommendation Methods: The recommender system chooses the best recommendation technique to use at any point in time. However, users are then able to view what their recommendations would look like if other recommendation techniques were used.
- Explanations: Explanations are provided for how recommendations were made.
- Controls: Users are given some amount of control over how recommendations are made.

These final questions would give an overall picture of users' attitudes toward a variety of potential features of a recommender system. As well as providing useful information, these questions also acted as internal consistency checks, allowing a user's answers to be validated. For example, when asked to rank the recommendation techniques in order of usefulness, a user's answers would be expected to correlate with answers to usefulness questions asked earlier in the survey.

3.5 Questionnaire - Results

In total, 5 respondents answered the initial questionnaire (60 questions) and a further 13 respondents answered the revised questionnaire (65 questions). We now present and discuss the results of the questionnaire, with reference to its aims as expressed in Section 3.4. The results in this section are rather long because they report respondents' answers in terms of recommendation usefulness, recommendation understanding, control features and presentation method. Each of these factors is important and each of them is different. For each factor, this section reports a small number of averages, explained with illustrative additional data that aids understanding of the results. Then there is a summary of the conclusions and a separate list of the implications for the prototype design. This section is quite long, but it has not been relegated to an appendix because it is all new information about how users can understand and control recommenders.

Usefulness

This section discusses the questionnaire results relevant to the aim of assessing the perceived usefulness of recommendations provided using various recommendation techniques. In Part B of the questionnaire, respondents rated the usefulness of 18 screens that presented recommendations. The screens that had the maximum average usefulness for each technique are presented in Figure 3.8, along with their average rating (error bars show one standard deviation above and below the mean; actual results for each respondent are shown in Appendix B). For example, of the five Social Filtering screens presented to respondents, the one with the highest average usefulness rating was the Simple Text screen, so this is the one shown in Figure 3.8. In the Final Questions section of the questionnaire, respondents ranked the recommendation techniques in order of usefulness (where 1 is the highest possible ranking, and 6 is the lowest ranking). Figure 3.9 shows the average ranking given to each technique, with error bars showing one standard deviation above and below the mean (actual results for each respondent are shown in Appendix B).

FIGURE 3.8: The Screens With The Maximum Average Usefulness For Each Recommendation Method: Most Popular 2 (Avg. Rating Info.), Genre Based 1 (Genre Listing), Word Of Mouth 1 (Simple Text), Learn By Example 2 (Similar Artists), Description Based 1 (Simple Text) And Lyrics Based 1 (Simple Text). Error Bars Show One Standard Deviation Above And Below The Mean. N = 18.

FIGURE 3.9: Average Ranking And Standard Deviation For Each Recommendation Technique (Word of Mouth, Genre Based, Most Popular, Learn By Example, Description Based, Lyrics Based). N = 18. Top Ranking = 1. Bottom Ranking = 6.

In the Final Questions section, respondents also indicated the weight they would want to place on each technique if a combination of techniques were to be used. Figure 3.10 shows the average weight (0-100) chosen for each method. Note that respondents could choose any value for each technique; for example, Participant 6 gave Most Popular a weight of 30, Genre Based a weight of 80, Social Filtering a weight of 90, Learn By Example a weight of 70, Description Based a weight of 30 and Lyrics Based a weight of 0. We now discuss these results.

Social Filtering: This method had the highest average ranking (1.9, where 1 is the best possible ranking) and high average usefulness scores but, surprisingly, only the second highest average contribution, with a weight of 68. Six people indicated that Social Filtering should have the most contribution, but low scores from other respondents caused this technique to receive a lower average contribution score than Genre Based. Social Filtering (Simple Text) was the highest rated Social Filtering screen; it had the highest average usefulness rating (4.4/5) of all screens shown in the questionnaire.

FIGURE 3.10: Average Response For The Contribution That Each Method Should Make To A Combination Of Recommendation Methods (Most Popular, Genre Based, Word of Mouth, Learn By Example, Description Based, Lyrics Based). Error Bars Show One Standard Deviation Above And Below The Mean. N = 18.

The next highest rated Social Filtering screen was the Simple Graph screen, with an average of 3.9/5. Although Social Filtering (Similar Users) had an average usefulness score of 3.1/5 (the lowest of all Social Filtering screens), four respondents commented that they thought the Social Filtering (Similar Users) screen was useful because it allowed them to view similar users and their profiles. One respondent commented that Social Filtering "is a great way to recommend new music." A further two people commented that this method would be useful, as long as similarity between users was calculated accurately. Another person commented that they did not like the idea of opposite users factoring into their recommendations. Finally, another commented that they would like to be able to indicate friends who have similar interests and are already using the recommender system.

Genre Based: This method received the highest average contribution score (76); six people indicated that this technique should have the most contribution. It was also given the second best average ranking (2.4). However, one respondent did mention that he thought classifying items by genres was too broad. The Genre Based (Simple Text) screen had the second highest average usefulness (4.1/5) of all screens presented in the questionnaire, and the two Genre Based screens both had average scores of 4 or more. Two people commented that they thought Genre Based (Genre Listing) was the best Genre Based screen, as it provided more information.

Learn By Example: This method had an average contribution score of 58, and only two people indicated that this method should have the highest contribution. This method was given an average ranking of 3.3, the fourth highest average ranking. The Similar Artists screen had the highest average usefulness score of the Learn By Example screens, with an average usefulness

of 4.0/5, the third highest average usefulness score. One respondent commented that they doubted whether similarity between artists could be calculated objectively.

Most Popular: Five respondents commented that they would not necessarily be interested in the most popular items. However, Most Popular had the equal second highest average contribution score, also with a weight of 68, and seven people indicated that Most Popular should have the most contribution. Most Popular was also given an average ranking of 2.8, the third best average ranking. The two screens displaying Most Popular recommendations, Most Popular (Ranking) and Most Popular (Avg. Rating Info.), had average scores of 3.5/5 and 3.4/5 respectively.

Description Based: This method scored an average contribution of 41 and had the second worst average ranking. Respondents viewed only one screen that presented Description Based recommendations. This screen had an average usefulness rating of 2.7/5, the second lowest average usefulness score. Nine people commented that they doubted the usefulness of using descriptions to make recommendations. Four of these people commented that descriptions are too subjective to be useful.

Lyrics Based: This method scored an average contribution of 12 and had the worst average ranking. Respondents viewed only one screen that presented Lyrics Based recommendations. This screen had an average usefulness rating of 2.2/5, the lowest average usefulness score. Nine respondents commented that they didn't think lyrics would be useful for making recommendations. Seven of these commented that lyrics did not determine whether they liked an item.

Findings.
Social Filtering and Genre Based were judged by respondents to be the most useful techniques. This is supported by the fact that these two methods had either the first or the second best average score on every question.
Respondents were less interested in having Most Popular recommendations delivered on their own than in having this recommendation method combined with other techniques. We can see this because this method had the equal second highest average weight in the question regarding how techniques should be combined, yet five respondents commented that they were not interested in just the most popular items.
Respondents did not think that Description Based or Lyrics Based would be useful. This is shown by the fact that these two methods consistently had the lowest average scores for each question.

Social Filtering (Simple Text), Genre Based (Simple Text), Most Popular (Ranking) and Learn By Example (Simple Text) were all judged by respondents to be the most useful screens for their particular recommendation techniques.
Genre Based (Simple Text) and Genre Based (Genre Listing) were approximately equally useful (their average usefulness scores were quite similar) and each offered a different form of useful information.
Most Popular (Avg. Rating Info.) and Most Popular (Ranking) were approximately as useful as one another (their average usefulness scores were quite similar) and each offered a different form of useful information.
Some users would find the Social Filtering (Similar Users) screen useful. This screen did not receive a high average usefulness score, but four respondents commented that they liked the ability it provided to examine the ratings of similar users.

Implications for the prototype.
Social Filtering and Genre Based should be included as recommendation techniques.
Most Popular should be included as an optional recommendation technique, or one which can be combined with other techniques.
Learn By Example should also be included as a recommendation technique, as it was not found to be significantly less useful than the top three recommendation techniques.
Description Based and Lyrics Based should not be included in the prototype.
Social Filtering (Simple Text), Genre Based (Simple Text), Most Popular (Ranking) and Learn By Example (Simple Text) should all be included as explanation screens in the prototype.
Genre Based (Simple Text) and Genre Based (Genre Listing) should be combined into a single explanation screen, as their average usefulness scores were similar and each displays a different piece of information which would be useful to users. Further, these two screens could easily be combined without causing conflicting information to be displayed. For the same reasons, Most Popular (Avg. Rating Info.) and Most Popular (Ranking) should also be combined.
Social Filtering (Similar Users) should be considered for implementation in the prototype.

3.5.2 Understanding

This section discusses the questionnaire results relevant to the aim of assessing understanding of recommendations provided using various recommendation techniques. In Part B of the questionnaire, respondents rated their understanding of the 18 screens that presented recommendations. The screens that had the maximum average understanding for each technique are presented in Figure 3.11, along with their average rating (error bars show one standard deviation above and below the mean; actual results for each respondent are shown in Appendix B). For example, of the five Social Filtering screens presented in the questionnaire, the one with the highest average understanding rating was the Simple Text screen, so this is the one shown in Figure 3.11 (3rd bar from the left).

FIGURE 3.11: The Screens With The Maximum Average Understanding For Each Recommendation Method: Most Popular 2 (Avg. Rating Info.), Genre Based 1 (Genre Listing), Word Of Mouth 1 (Simple Text), Learn By Example 1 (Avg. Rating Info.), Description Based 1 (Simple Text) And Lyrics Based 1 (Simple Text). Error Bars Show One Standard Deviation Above And Below The Mean. N = 18.

In Part B of the questionnaire, respondents also rated their understanding of four recommendation techniques before and after they saw the screens for each technique. Figure 3.12 shows the average understanding rating given to each technique before and after explanations, with error bars showing one standard deviation above and below the mean (actual results for each respondent are shown in Appendix B). We now discuss the results shown in Figures 3.11 and 3.12.

Social Filtering: Social Filtering (Simple Text) had the highest average understanding of all the Social Filtering screens, at 4.6/5, one of the highest average understanding scores given to any of the 18 explanation screens. The Social Filtering (Simple Graph) screen (average of 4.5/5) and the Social Filtering (Table) screen (average of 4.3/5) also received high average scores for understanding.

FIGURE 3.12: Respondents' Average Understanding Of Recommendation Methods (Most Popular, Genre Based, Word Of Mouth, Learn By Example) Before And After Explanations. Error Bars Show One Standard Deviation Above And Below The Mean. N = 18.

Both Social Filtering (Graph w/ Opposites) and Social Filtering (Similar Users) showed opposite users in their explanation, but three users said that they were confused by the opposite users concept, and these screens had the lowest average ratings of all the Social Filtering screens in the questionnaire (Social Filtering (Similar Users) averaged 3.9/5 and Social Filtering (Graph w/ Opposites) averaged 3.8/5; these were the only average scores below 4.0). Social Filtering was given the highest average understanding rating before explanations were provided (average of 4.4/5). However, after explanations were provided, the average for this technique dropped to 3.9/5, the lowest average understanding rating. As mentioned above, three respondents commented that opposite users had confused them, and a further two people commented that the explanations contained too much information and were confusing.

Genre Based: Two Genre Based screens were presented in the questionnaire. Genre Based (Simple Text) received the highest average understanding of all the explanation screens, at 4.7/5. Genre Based (Genre Listing) also received a high average understanding rating of 4.6/5, the third highest average understanding given to any of the 18 explanation screens. One respondent commented that Genre Based (Simple Text) was the better of the two Genre Based screens as it gave "more information about the individual artist and not just a genre". However, another commented that Genre Based (Genre Listing) was better, as it was more related to his ratings and profile. Genre Based actually received the lowest average understanding rating before the explanation screens were provided (average of 4.2/5). Remarkably, after explanations, the average understanding rating for this method increased to 4.8/5. Eight people gave this method a higher

understanding rating after viewing the explanation screens, ten gave it the same rating, and no respondents gave it a lower rating.

Learn By Example: Learn By Example (Simple Text) had the highest average understanding rating of the two Learn By Example screens presented in the questionnaire, with an average of 4.2/5, just higher than the average of 4.1/5 for Learn By Example (Similar Artists). Learn By Example had the equal highest average understanding (4.4/5) before explanation screens were presented. However, this dropped to an average of 4.1/5 after respondents viewed the explanation screens; this was the second lowest after-explanation average. Only one respondent gave Learn By Example a higher understanding rating after explanations, fourteen gave it the same rating and three gave it a lower understanding rating.

Most Popular: The Most Popular screen with the highest average rating was Most Popular (Ranking), with a score of 4.7/5 (which equalled the highest average understanding across all the explanation screens). Most Popular (Avg. Rating Info.) also received a high score of 4.5/5. Five people commented that Most Popular (Ranking) made recommendations easier to understand as it gave more information. One person commented that he would like comments from users about each item to be added to the screen, indicating why they liked or disliked it. Figure 3.12 shows that this method improved from an average understanding of 4.3/5 before explanations to an average of 4.6/5 after the viewing of explanation screens; this after-explanation average was the second highest understanding score shown in Figure 3.12. Four respondents gave Most Popular a higher understanding rating after explanations, twelve gave it the same rating and two gave it a lower understanding rating.

Description Based: Respondents viewed only one screen that presented Description Based recommendations. This screen had an average understanding rating of 4.0/5, the lowest of all the scores shown in Figure 3.11. Four respondents gave this method a score of 3 or less. This method is not shown in Figure 3.12 because, once the first five respondents had completed the questionnaire, respondents were no longer asked to report their understanding of this method before and after viewing its screens. This decision was made because the method had been given low usefulness and low understanding scores by the first five respondents.

Lyrics Based: Respondents viewed only one screen that presented Lyrics Based recommendations. This screen had an average understanding rating of 4.1/5, the second lowest of

all the scores shown in Figure 3.11. Three people gave this method a score of 3 or less. One respondent commented that the way this method works "just seems to make no sense". This method is not shown in Figure 3.12 because, once the first five respondents had completed the questionnaire, respondents were no longer asked to report their understanding of this method before and after viewing its screens. This decision was made because the method had been given low usefulness and low understanding scores by the first five respondents.

Findings. The findings that came from this section of the questionnaire were:
Each of the recommendation techniques can be explained in a way that users can easily understand. This is supported by the fact that almost all of the values shown in Figure 3.12 were equal to or above 4.0/5 (the lowest, Social Filtering after explanations, was 3.9/5).
When explaining recommendations, providing more information can often be beneficial. This is supported by user comments that indicated a desire for more information about recommendations. However, it is important to find a clear, concise way to deliver that information to people.
Complicated or poor explanations will often confuse a user's understanding of a recommendation technique. For example, three people commented that the opposite users idea was confusing. Further, the screens showing opposite users received the lowest average understanding scores, and after these screens were shown to users, the average understanding of the Social Filtering technique dropped from 4.4/5 to 3.9/5. This finding was also reported in (Herlocker et al., 2000).
Social Filtering (Simple Text), Genre Based (Simple Text), Most Popular (Ranking) and Learn By Example (Simple Text) were judged by users to be the most understandable explanations of their recommendation techniques (each had the highest average understanding of the screens for its technique).
Social Filtering (Simple Graph) was almost as understandable as Social Filtering (Simple Text) (their average understanding scores were only 0.1 points apart). Similarly, Learn By Example (Similar Artists) was almost as understandable as Learn By Example (Simple Text) (their average understanding scores were also only 0.1 points apart).
Genre Based (Simple Text) and Genre Based (Genre Listing) were approximately as effective at explaining recommendations as one another (their average understanding scores were quite similar) and each offered a different form of useful information.

Most Popular (Avg. Rating Info.) and Most Popular (Ranking) were also approximately as effective at explaining recommendations as one another (their average understanding scores were quite similar) and each offered a different form of useful information.
The inclusion of the opposite users concept negatively affected users' perceived understanding of the Social Filtering (Similar Users) screen. This is supported by the fact that four respondents commented that the opposite users concept confused their understanding of Social Filtering.
People found Learn By Example harder to understand than techniques such as Most Popular, Genre Based and even Social Filtering. This is surprising, as one of the benefits often noted for the Learn By Example technique is the "potential to use retrieved cases to explain [recommendations]" (Cunningham et al., 2003, p. 1).
Different people prefer different styles of explanation. Evidence supporting this finding includes the fact that different users rated different explanation screens as the most understandable.

Implications for the prototype.
Social Filtering (Simple Text), Genre Based (Simple Text), Most Popular (Ranking) and Learn By Example (Simple Text) should all be included as explanation screens in the prototype.
Learn By Example (Simple Text) and Learn By Example (Similar Artists) should be combined into a single explanation screen, as their average understanding scores were similar and each displays a different piece of information which would be useful to users. Further, these two screens could easily be combined without causing conflicting information to be displayed. The case for combining Most Popular (Avg. Rating Info.) with Most Popular (Ranking), and Genre Based (Simple Text) with Genre Based (Genre Listing), is also strengthened by these results, as each of these pairs had similar average understanding ratings.
Social Filtering (Similar Users) should be included in the prototype, without any reference to opposite users. This is because the ability to view similar users was deemed useful by some respondents, and the ratings for this screen may have been negatively affected by the fact that it displayed opposite users, a concept which consistently confused people.

3.5.3 Understanding And Usefulness

The Pearson correlation was calculated between the ratings that respondents gave for the usefulness of particular explanation screens and the ratings that they gave for their understanding of those screens. This correlation was calculated to be 0.28. Squaring this value gives 0.078, or 7.8 percent. This suggests that a user's understanding of a recommendation is related to how useful they deem it to be: about 7.8 percent of the variance in respondents' usefulness ratings is accounted for by their understanding ratings. This result is consistent with a number of cases observed within the questionnaire. Particularly significant were the cases in which a user's understanding was confused by complicated concepts within explanations; this often caused a decrease in both the user's understanding rating and their usefulness rating for that screen.

Findings.
A user's opinions on the usefulness of recommendations are related to their understanding of these recommendations.
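A minimal sketch of this calculation, assuming the paired ratings are gathered into two equal-length lists (the values shown are placeholders, not questionnaire data):

from math import sqrt

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

usefulness    = [4, 2, 5, 3, 4, 1, 5, 2]   # placeholder screen usefulness ratings
understanding = [5, 4, 5, 3, 3, 4, 4, 2]   # placeholder understanding ratings
r = pearson(usefulness, understanding)
print(f"r = {r:.2f}, r^2 = {r * r:.3f}")   # r^2 is the shared variance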

3.5.4 Control

This section discusses the questionnaire results relevant to the aim of assessing users' attitudes toward features that provide control over recommendation techniques, and their understanding of how these would be used. In Part B of the questionnaire, respondents rated three control features according to how well they understood each control, how useful they thought each control would be and how likely they would be to use it. Figure 3.13 shows the average score for each of these questions, with error bars showing one standard deviation above and below the mean (actual results for each respondent are shown in Appendix B).

FIGURE 3.13: Average Ratings For Questions Regarding Respondents' Understanding, Likelihood Of Using And Perceived Usefulness Of Each Control Feature (Genre Based Control, Word Of Mouth Control 1 (Ignore User), Word Of Mouth Control 2 (Adjust Influence)). Error Bars Show One Standard Deviation Above And Below The Mean. N = 18.

Genre Based Control (Genre Slider): This control had the highest average scores for understanding (4.9/5), usefulness (4.5/5) and likelihood of use (4.6/5). All but two respondents gave this control a 5 for understanding; the other two gave it a 4. All but three people gave this control a 5 for how likely they would be to use it, and all but one user gave it a rating of 4 or 5 when asked how useful they thought it would be. Further, seven users commented that they strongly liked this control. One respondent commented that they would like to specify interest in more specific genres (i.e. sub-genres), but another commented that they thought too many genres would become confusing for users.

Social Filtering Control (Like/Not Like): This control had the second highest average scores on all questions. Its average ratings were 4.6/5 for understanding, 3.5/5 for likelihood of use and 4.3/5 for usefulness. All but two respondents gave this control a rating of 4 or 5 for understanding, and the other two gave a rating of 3. Most users also gave this control a rating of 4 or 5 for usefulness. However, there was much more variation in the likelihood of use ratings for this control. In fact, this question had the second highest standard deviation (1.3) of any question asked about the three controls, and responses to it were distributed relatively evenly between 1 and 5.

Social Filtering Control (Adjust Influence): This control had the lowest average scores for all questions, with an average understanding rating of 3.8, a likelihood of use rating of 3.0 and a usefulness rating of 3.4. This method asked users to adjust the impact of opposite users upon recommendations. As mentioned in Section 3.5.2, three users commented that the concept of opposite users was confusing, and the average understanding ratings for the Social Filtering technique fell when this concept was introduced. The ratings given to this method were highly varied: three people responded with a 5 both for the usefulness of this control and for their likelihood of using it, yet three others gave scores of only 1 or 2 for both of these questions (each of these three gave lower ratings for their understanding of the Social Filtering technique once the concept of opposite users was introduced).

Findings.
The Genre Based Control (Genre Slider) would be used often and would be easy to understand. Further, respondents believed that it would be very useful. These findings are supported by the fact that this control received the highest average scores, and most users gave a rating of 4 or 5 for all questions regarding this control.
It is important to get the number of available genres right when allowing users to specify their interest in genres. This is supported by user comments that having too many genres would be overwhelming.
Social Filtering Control (Like/Not Like) is easy to understand (most users gave a rating of 4 or 5 for understanding). It would be used by some, but not all, users (as there was high variation in the likelihood of use ratings). Further, most users would find this control quite useful (most gave 4 or 5 for usefulness).
In general, most users would not understand how Social Filtering Control (Adjust Influence) works and would not use it. Most respondents believed that this control would not be very useful. These findings are supported by the fact that this control scored the lowest average rating on every question, and three users commented that they were confused by the opposite users concept, which is part of this control.

Implications for the prototype. Based upon these findings, it was decided:
To include Genre Based Control (Genre Slider) in the prototype. It is important that the right number of genres is used with this control: the number should not be too large (as this may become overwhelming) nor too small (as this may not be useful).
To include Social Filtering Control (Like/Not Like) in the prototype. This control may not be rated highly by all users, but it is worth testing its effectiveness in a real prototype.
Not to include Social Filtering Control (Adjust Influence) in the prototype.

3.5.5 Presentation Method

This section discusses the questionnaire results relevant to the aim of assessing users' preferences for recommendation presentation format. In Part A of the questionnaire, respondents rated their understanding and their opinion on the usefulness of two presentation methods: Map Based and List Based. Figure 3.14(a) shows the average score for each of these questions, with error bars showing one standard deviation above and below the mean. Users also indicated their preference for the way in which they would like recommendations to be displayed; Figure 3.14(b) shows the sums of responses to this question. The actual results for each respondent are shown in Appendix B.

FIGURE 3.14: Users' Responses For Questions Regarding Recommendation Presentation Methods: (a) Understanding And Usefulness Of Presentation Methods (List, Map); (b) Sum Of Recommendation Presentation Preferences (List Only, Both List And Map, Map Only). Error Bars Show One Standard Deviation Above And Below The Mean. N = 18.

Ten users indicated that they would prefer to have only a List Based presentation. Four of these users commented that List Based is quicker to understand and read. These comments are supported by the results shown in Figure 3.14(a): List Based had an average understanding rating of 4.7/5, exactly one point higher than the average understanding rating for Map Based, which was 3.7/5. In addition, seven users commented that the map took longer to work out. However, List Based and Map Based had similar average usefulness scores: List Based scored an average of 3.8/5 and Map Based an average of 3.5/5. Two users indicated that they would like to have recommendations presented through a Map Based presentation only, and six users indicated that they would like to have recommendations displayed in both List Based and Map Based formats. Four users commented that the map gave more information and was useful for that reason.

Findings.
Most users would find a List Based presentation easier to understand and quicker to read than a Map Based presentation. This is supported by the comments that a list based presentation is quicker and easier to read, and by the fact that the List Based presentation scored a higher average understanding rating than Map Based.
In general, users indicated they would find a List Based presentation useful. This is evidenced by the fact that 16/18 respondents indicated that they would want List Based as part of their recommendation system, and this presentation received the highest average usefulness score.
Some users indicated they would also find a Map Based presentation useful. Evidence supporting this finding includes the fact that 8/18 users indicated that they would want a Map Based presentation included in a recommender.
Different people prefer different styles of presentation. This was shown through the variation in the ratings given for the questions regarding presentation.

Implications for the prototype. Based upon these findings, it was decided:
To definitely include a List Based presentation in the prototype.
That there was enough support for the usefulness of a Map Based presentation to include it in the prototype, to examine how users would interact with an implementation of a Map Based presentation.

3.5.6 Final Questions

This section discusses the results of the final questions asked of users, which gave an overall indication of their opinion of the various features shown in the questionnaire. In the Final Questions section of the questionnaire, respondents rated the general usefulness of five features that could be included in a recommender system. Figure 3.15 shows the average ratings for each of these features, with error bars showing one standard deviation above and below the mean.

Choice Of Recommendation Method: The average rating for the usefulness of the system deciding what recommendation method should be used was 3.6/5. Most people gave this feature a rating of 3 or more, but one person gave this feature a rating of 1 while giving all other features mentioned in this section a rating of 5.

FIGURE 3.15: Average Rating For The Usefulness Of Possible Features Of A Recommender (System Chooses Reco. Method, System Chooses Combination Of Reco. Methods, View Results From Other Reco. Methods, Explanations, Controls). Error Bars Show One Standard Deviation Above And Below The Mean. N = 18.

The average rating for this feature was much lower than the average rating for the usefulness of having the system choose a combination of methods (average of 4.6/5). There was very little deviation in the responses given for the usefulness of the system selecting a combination of methods, with all respondents giving ratings of either 4 or 5; this feature had the highest average rating of all features presented in this section of the questionnaire. Another feature with a high average usefulness rating was the ability to view recommendations made using different recommendation techniques, which had an average of 4.5/5. One respondent commented that "viewing what your recommendations would be like with different methods allows you to compare the usefulness of each method and choose the best one" and another commented that it would be "interesting and useful to see what your recommendations would look like using different methods."

Explanations: The average rating for the usefulness of explanations was 3.8/5. One respondent commented that the addition of explanations "allows you to make your own judgments about the usefulness of the results." More than half of the respondents for this question gave explanations a usefulness rating of 4 or 5.

Controls: The average rating given by users for the usefulness of controls was 4.5/5. As noted in Section 3.5.4, seven respondents commented that they had a strong liking for the Genre Based Control (Genre Slider). Twelve respondents rated the usefulness of controls as 5, four rated it as 4, and the remaining two gave controls scores of 2 and 1.

Findings.

Rather than having the system choose a single recommendation technique to use, people would prefer to have the system choose a combination of recommendation techniques, or to be allowed to view recommendations made using various techniques. This is supported by the fact that, on average, users rated the usefulness of the System Chooses Recommendation Method feature lower than the features that involved combining recommendation techniques or viewing recommendations made using different techniques.
People in our study believed that explanations would be a useful addition to a recommender system. This is evidenced by the fact that users gave an average of 3.8/5 when asked to rate the usefulness of explanations, and more than half of the respondents for this question gave a score of 4 or 5.
In general, people in our study believed that having control over a recommender system would be very useful. This is supported by the fact that users gave an average of 4.5/5 when asked to rate the usefulness of having control over a recommender system.

Implications for the prototype.
The prototype should allow users to view recommendations produced using various techniques and/or make recommendations using a combination of prediction techniques.
The prototype should contain explanations for the recommendations that it produces. These explanations should be offered to users if they are interested.
The prototype should allow users to have control over certain elements of the recommender system, to help them improve their recommendations.

3.6 Test Data

In order to perform evaluations at a later stage of the thesis, a source of test data needed to be established. (Polcicova et al., 2000), (Maltz and Ehrlich, 1995), (Konstan et al., 1997) and (Basu et al., 1998) all note that recommender systems are likely to exhibit poor performance unless they contain a significantly large number of user ratings. As a result, the data set used for testing needed to be large enough to allow effective recommendations to be made. In addition, the type and quantity of test data that could be obtained would heavily influence the process of creating and evaluating a prototype at later stages of the project. An ideal set of test data for this project would have been a data set that contained information about around 1000 users, detailing:

Their ratings for particular artists.
The time that they spent listening to individual music tracks.
The actions that they performed while listening to music tracks.

This mixture of music ratings information and listening patterns was desirable, as it would allow ratings generated from implicit data to be compared with each user's explicit ratings. However, the lack of sources of information regarding music ratings and listening patterns meant that it was not possible to find a single data set containing both users' explicit ratings and information about listening habits. Further, it was not possible to find any significant source of information about actions users had performed while listening to music. A dataset used in (Hu et al., 2005) was identified as a possible source of test data. This dataset is a collection of users' ratings for particular albums, taken from the epinions.com website. However, this dataset was inadequate for use in this project, as it was deemed too small to enable a recommendation system to produce good recommendations. last.fm, an online radio service, was another source of data that was identified. This service makes a large amount of data on users' play-counts available through a web service. Due to the large amount of data available through this service, it was decided to use it to produce a dataset for investigating Unobtrusive Recommendation. Reading data from this service produced an initial dataset of 500,000 play-counts, spanning 10,000 artists and 5,000 users. This dataset was then culled, to remove the users and artists that had few play-counts associated with them, to a size of 100,000 play-counts, spanning 3333 artists and 948 users (a sketch of this culling step is given below). However, at this stage, the only source of test data that had been established was implicit data based upon users' listening patterns. This data would indeed be useful for exploring the Unobtrusive Recommendation question, yet it was not ideal for exploring the Scrutability & Control question. This is because, if scrutability and control features were added to a prototype that made ratings based upon implicit data, the performance of these features might be affected by the fact that the data was implicit and not explicit. Therefore, a data set consisting of explicit ratings was required in order to investigate the Scrutability & Control question. At this point, no significant source of explicit music ratings could be located, so it was decided that the MovieLens standard dataset (which provides explicit ratings on movies) should be used to investigate issues relating to Scrutability & Control. This dataset contains 100,000 ratings, from 943 users, on 1682 movies. Thus, two datasets were chosen for use in this thesis: a dataset compiled from data taken from last.fm, and the MovieLens standard dataset.
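The culling step can be pictured as repeatedly dropping the users and artists whose total play-counts fall below some threshold, until the dataset stabilises. A minimal sketch; the thresholds here are assumptions, as the actual cut-offs used to reach 100,000 play-counts are not stated:

from collections import Counter

MIN_PLAYS_PER_USER = 50     # assumed threshold
MIN_PLAYS_PER_ARTIST = 100  # assumed threshold

def cull(plays):
    # plays: list of (user_id, artist_id, play_count) tuples read from last.fm.
    changed = True
    while changed:  # removing one side can push the other below its threshold
        user_totals, artist_totals = Counter(), Counter()
        for user, artist, count in plays:
            user_totals[user] += count
            artist_totals[artist] += count
        kept = [(u, a, c) for u, a, c in plays
                if user_totals[u] >= MIN_PLAYS_PER_USER
                and artist_totals[a] >= MIN_PLAYS_PER_ARTIST]
        changed = len(kept) < len(plays)
        plays = kept
    return plays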

Implications for the prototype. The prototype will have to have two variants in order to separately test the two goals of the thesis:
A prototype based upon the MovieLens standard dataset, to investigate Scrutability & Control.
A prototype based upon the last.fm dataset that was created, to investigate Unobtrusive Recommendation.

3.7 Conclusion

In order to investigate the areas of Scrutability & Control and Unobtrusive Recommendation, an exploratory study was conducted. This began with a qualitative analysis that identified the Duine Toolkit as the most appropriate code base for extension. This toolkit makes available six different recommendation techniques that could be used within a prototype system. A thorough examination of each technique was then conducted to ascertain ways in which they could be explained and controlled. A number of possible recommender usability features were brought to light through this analysis, and these, along with existing recommender usability features, were investigated through a questionnaire. Based upon the results of this questionnaire, a large number of findings could be gleaned about the respondents in general. However, the data collected through this questionnaire was quite rich, and demonstrated the individuality of each of the respondents. Particular respondents had preferences for different types of presentation, and their answers clearly reflected this. This type of variance in preferences makes a strong case for providing personalisation of presentations and explanations within recommender systems. The main findings of the questionnaire were:
Each of the recommendation techniques can be explained in a way that users can easily understand.
When explaining recommendations, providing more information can often be beneficial.
Complicated or poor explanations will often confuse a user's understanding of a recommendation technique.
A user's opinions on the usefulness of recommendations are related to their understanding of these recommendations.
Social Filtering and Genre Based were judged by respondents to be the most useful recommendation techniques.

Respondents wanted the Most Popular recommendation technique to be combined with other techniques.
Respondents did not think that Description Based or Lyrics Based recommendation techniques would be useful.
Respondents believed that the Social Filtering (Simple Text), Genre Based (Simple Text), Most Popular (Ranking) and Learn By Example (Simple Text) screens were the easiest to understand and most useful for their recommendation techniques.
Some respondents had a strong interest in the ability to view the profiles of other, similar users.
Respondents indicated they would use the Genre Based Control (Genre Slider) often and that it was easy to understand. Further, respondents believed that it would be very useful.
Most respondents indicated they would find a List Based presentation easier to understand and quicker to read than a Map Based presentation.
Most users indicated they would find a List Based presentation useful, and some users indicated they would also find a Map Based presentation useful.
Respondents indicated they would like to have the system choose a combination of recommendation techniques, or allow them to view recommendations made using various techniques.
Respondents believed that explanations would be a useful addition to a recommender system.
Respondents also believed that having control over a recommender system would be very useful.
Different users prefer different forms of presentation and explanation.

These findings meant that the prototype should:
Include both List Based and Map Based presentations.
Allow users to view recommendations produced using various techniques and/or make recommendations using a combination of prediction techniques.
Contain explanations for recommendations.
Allow users to have control over certain elements of the recommender system.
Allow users to view the profiles of users similar to them.
Include the Social Filtering, Genre Based, Most Popular and Learn By Example recommendation techniques.
Include the following optional explanation screens:

Social Filtering (Simple Text), Social Filtering (Simple Graph) and Social Filtering (Similar Users)
A combination of Genre Based (Simple Text) and Genre Based (Genre Listing)
A combination of Most Popular (Avg. Rating Info.) and Most Popular (Ranking)
A combination of Learn By Example (Simple Text) and Learn By Example (Similar Artists)
Include the following controls:
Genre Based Control (Genre Slider)
Social Filtering Control (Like/Not Like)

Finally, two sources of test data were established for use in conducting simulations and evaluations at a later stage of the thesis. The results of the investigations described in this chapter, along with the test data that was acquired, would inform the construction of a prototype, described in Chapter 4.

CHAPTER 4

Prototype Design

4.1 Introduction

In order to investigate questions regarding Scrutability & Control in recommender systems and Unobtrusive Recommendation, a prototype was developed. This prototype would later be used to conduct user evaluations and simulations to establish the usefulness of a number of unobtrusive user modelling and usability features. The findings of the questionnaire described in Chapter 3 were used to guide the construction of this prototype and to ensure that only features likely to be of use in improving recommendation quality would be included. Chapter 1 stated that this thesis aimed to investigate two main questions: the Scrutability & Control question and the Unobtrusive Recommendation question. These are separate research questions, and if a single prototype were created to investigate both at once, it could be difficult to link each finding of this study to one specific research question. So, it was decided that two variants of our prototype should be created, one to investigate each of the major research questions for this project. Each of these prototype variants could then be evaluated separately, and the results from each evaluation would provide findings clearly related to only one research question. The prototype that we created to investigate these questions was called isuggest. The two variants of this prototype were called isuggest-usability and isuggest-unobtrusive. isuggest-usability incorporated the highest rated usability interface features from the questionnaire. This version of the prototype made movie recommendations, based upon the MovieLens standard data set. isuggest-usability would later be used to investigate Scrutability & Control for recommenders through user evaluations.

isuggest-unobtrusive made music recommendations based upon the last.fm dataset described in Section 3.6. It would be used to investigate Unobtrusive Recommendation. isuggest-unobtrusive incorporated the ability to automatically generate the ratings that a user would give particular items, using only unobtrusively obtained information. Specifically, this meant that it read the play-counts from a user's iPod and then automatically generated a set of ratings that the user would give to particular artists. The automatically generated ratings were then used to produce recommendations for that user. This prototype aimed to generate ratings for a user in a way that was accurate, but also easy for them to understand. isuggest-unobtrusive would later be used to investigate Unobtrusive Recommendation through both user evaluations and statistical evaluations. This chapter describes the functions that each prototype variant made available to users; it then describes the architecture of each of the two variants.

4.2 User's View

The basic isuggest prototype showed users the standard type of interface used within most current recommender systems. A user's first interaction with the basic isuggest system was to create an account within isuggest and then log in. Users could then view three basic screens:

Rate Items: Showed the items that the user had not yet rated and could still enter a rating for.
My Ratings: Showed the items that the user had rated, and the rating that the user had given each item.
Recommendation List: Showed a list of the recommendations that the system had produced for the user. Figure 4.1 shows an example of this screen.

Each of these screens used a standard List Based presentation style, as suggested by the study reported in Chapter 3. Users were able to click to view more information about any of the items shown on any of these screens. They could then click to search the Internet for more information about any of these items (this linked to imdb.com for movie items and Amazon.com for music items). Users rated items by clicking on the Star Bar (shown in Figure 4.2) and dragging their mouse to produce a rating between 0 stars (worst) and 5 stars (best) for each item. This basic prototype made all recommendations using

a single recommendation method: the Duine Toolkit's default Taste Strategy (described in Section 3.3). The Taste Strategy was chosen for use within the basic prototype as it is shown in (van Setten et al., 2004) to be the most effective recommendation method available in the Duine Toolkit. In this way, the basic isuggest prototype utilised the optimum configuration of the Duine Toolkit and provided a standard List Based presentation of information. The two prototype variants that would be used to investigate the research goals of this thesis extended this basic prototype to incorporate new features and enable these features to be evaluated.

FIGURE 4.1: List Based Presentation Of Recommendations

FIGURE 4.2: The Star Bar That Users Used To Rate Items

isuggest-usability

This version of the prototype extended the basic isuggest prototype to incorporate all of the usability features that the results of the questionnaire suggested would be useful additions to a recommender system. This version of the prototype made movie recommendations, based upon the MovieLens standard data set. When using isuggest-usability, users were presented with the following new usability and interface features:

Multiple recommendation techniques.

Explanations for all recommendations that were produced.
The ability to view a list of users similar to the current user.
Control features that allowed the user to affect the recommendation process.
A Map Based presentation of recommendations.

Each of these features is discussed in detail in the sections below.

Multiple Recommendation Techniques. The Social Filtering, Genre Based, Most Popular and Learn By Example recommendation techniques were all included as additional recommendation techniques that could be used by isuggest-usability. These were included because the questionnaire suggested that users would find these recommendation techniques the most useful. The questionnaire also suggested that users would like a recommendation system to combine multiple techniques to make recommendations and/or allow users to select which recommendation technique should be used. Thus, isuggest-usability allowed users to select which of the five available methods (including the standard Taste Strategy) should be used to create recommendations. Users selected the recommendation technique to be used by accessing an options screen that presented them with the five techniques. An example of this screen is shown in Figure 4.3. Each of these techniques had a small description underneath its name to describe how it functioned. Users selected one option from the list of techniques and confirmed this choice, which caused the user's recommendations to be replaced with a new set of recommendations. The questionnaire suggested that it would also have been desirable for isuggest-usability to enable combinations of recommendation techniques to be used; however, this was deemed to be outside the scope of the project.

Explanations. Every recommendation produced using the Social Filtering, Genre Based, Most Popular or Learn By Example techniques was accompanied by an explanation that users could view by clicking to see "More Info" about the recommended movie. The explanations provided to users depended upon the recommendation technique that was used to create the recommendation. The way in which recommendations from each technique were explained is described below.

FIGURE 4.3: Recommendation Technique Selection Screen. Note: The Word Of Mouth Technique Shown Here Is Social Filtering And The Let isuggest Choose Technique Is The Duine Toolkit Taste Strategy

FIGURE 4.4: Explanation Screen For Genre Based Recommendations

FIGURE 4.5: Social Filtering (Simple Graph) Explanation Screen For Social Filtering Recommendations

Most Popular: The questionnaire suggested that the Most Popular (Avg. Rating Info.) and Most Popular (Ranking) screens would be useful in explaining this technique to users. Most Popular was therefore explained using a combination of these two screens, which displayed the number of users who had rated the recommended movie, the average rating these users had given to the

movie, and the rank that this movie therefore had in the database. The Most Popular explanation screen is shown in Figure 4.7.

FIGURE 4.6: Explanation Screen For Learn By Example Recommendations

FIGURE 4.7: Explanation Screen For Most Popular Recommendations

Genre Based: The questionnaire suggested that the Genre Based (Simple Text) and Genre Based (Genre Listing) screens would be useful in explaining this technique to users. However, the Genre Based (Genre Listing) screen showed users the average rating that they had given movies within a particular genre. Unfortunately, this average is not used by the Genre Based technique to create recommendations, so using it to explain recommendations would not necessarily produce useful explanations. Rather, the Genre Based technique calculates a user's interest in particular genres and uses this to make recommendations. Hence, the explanation for the Genre Based technique contained a listing of the genres that a movie belonged to and a link to a screen where the user could view their calculated interest in each genre. The Genre Based explanation screen is shown in Figure 4.4.

Social Filtering: The questionnaire showed that Social Filtering (Simple Text), Social Filtering (Simple Graph) and Social Filtering (Similar Users) could all be useful ways to describe this technique. However, these explanations could not easily be combined. As a result, three different types of Social Filtering explanations were provided to users: Simple Text, Graph and Similar Users. Simple Text presented text indicating the number of similar users this recommendation was based upon. Graph (shown in Figure 4.5) presented text indicating the number of similar users that this recommendation was based upon and displayed a graph of the number of users who Liked This Movie and Didn't Like This Movie. Finally, Similar Users showed the names of the similar users who were most significant in the creation of this recommendation and whether these users Liked This Movie or Didn't Like This Movie. Users could then click to view the detailed profiles of these similar users. (A sketch of how such explanation text might be assembled is given below.)
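As an illustration only, the following is a minimal sketch of how the three Social Filtering explanation variants might be assembled from the neighbours that contributed to a recommendation. The data layout and functions are hypothetical; the actual screens were built on the Duine Toolkit's prediction output:

# Hypothetical neighbour records: (user_name, liked_the_movie).
neighbours = [("alice", True), ("bob", True), ("carol", False)]

def simple_text(neighbours):
    return f"Recommended because {len(neighbours)} users similar to you rated this movie."

def graph(neighbours):
    liked = sum(1 for _, liked_it in neighbours if liked_it)
    return {"text": simple_text(neighbours),
            "liked": liked, "did_not_like": len(neighbours) - liked}

def similar_users(neighbours):
    # The most significant similar users and their opinion of the movie.
    return [(name, "Liked This Movie" if liked_it else "Didn't Like This Movie")
            for name, liked_it in neighbours]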

Learn By Example: The questionnaire suggested that the Learn By Example (Simple Text) and Learn By Example (Similar Artists) screens would be useful in explaining this technique to users. Thus, Learn By Example was described using a combination of these two screens. This combined screen listed the similar items that the recommendation was based upon (including the rating that the user had given each item) and stated the average rating that the user had given to these similar items. The Learn By Example explanation screen is shown in Figure 4.6.

Similar Users. This screen allowed a user to view a list of the other users who the system believed were most similar to them. A user could then click to view the ratings given by each of the similar users displayed in the list. This screen was included because the questionnaire suggested that users had a strong interest in the ability to view the profiles of other, similar users.

Control Features. The questionnaire suggested that control features would be a useful addition to a recommender system. In particular, it suggested that Genre Based Control (Genre Slider) and Social Filtering Control (Like/Not Like) would be quite useful to users. As a result, these two features were incorporated into isuggest-usability. These control features are detailed below.

FIGURE 4.8: The Genre Based Control (Genre Slider)

Genre Based Control (Genre Slider): (shown in Figure 4.8) This control screen displayed the interest that the system had calculated the user had in each genre. These interest levels were displayed using slider bars, and the user was able to manually adjust these sliders to indicate their actual interest level in each genre. (A sketch of how slider adjustments might override the calculated interest levels is given below.)
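A minimal sketch of the interaction this control enables: the system holds a calculated per-genre interest level, and a moved slider simply overwrites the calculated value for that genre. The data layout and the 0-100 slider scale are assumptions for illustration:

# Hypothetical user model: genre -> interest level in [0, 1], as calculated
# by the Genre Based technique (the calculation itself is not shown here).
calculated_interest = {"Action": 0.9, "Romance": 0.3, "Sci-Fi": 0.7}

def apply_slider_adjustments(calculated, slider_positions):
    # slider_positions: genre -> position the user set on a 0-100 slider.
    adjusted = dict(calculated)
    for genre, position in slider_positions.items():
        adjusted[genre] = position / 100.0  # the slider overrides the system
    return adjusted

interest = apply_slider_adjustments(calculated_interest, {"Romance": 80})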

FIGURE 4.9: The Social Filtering Control. Note: The Actual Control Is The Ignore This User Link

Social Filtering Control: (shown in Figure 4.9) This control was integrated into all screens that displayed users similar to the current user. On every screen where the system displayed the details of a similar user, these details were accompanied by the option to Ignore This User. Users could then choose to ignore a particular user if they felt that user was not similar to them. This control feature was a slight variation upon the Social Filtering Control screen shown in the questionnaire: it no longer allowed users to confirm that another user was indeed similar to them, because such a confirmation would not have had any impact upon recommendations (the system already believed that the two users were similar).

Map Based Presentation. The questionnaire suggested that many users would find the option of a Map Based presentation of recommendations useful. As a result, this form of presentation was incorporated into the prototype. The Map Based presentation displayed items to users so that:

Each movie on the map was shown as a circle and the name of the movie was written on that circle.
The closer two circles were to one another, the more related they were (e.g. two very closely related movies would appear right next to one another, and two movies not related to one another at all would appear far away from one another). Note: different relationships between items existed for different map types; these are discussed below.
If a user had seen a movie, it was coloured blue.

- If a user had not seen a movie and their predicted rating for that movie was above 2.5 stars, it was coloured a shade of green (darker green indicated a higher rating).
- If a user had not seen a movie and their predicted rating for that movie was close to 2.5 stars, it was coloured orange.
- If a user had not seen a movie and their predicted rating for that movie was below 2.5 stars, it was coloured a shade of red (darker red indicated a lower rating).
- Users were allowed to zoom in and out on the map and to move left, right, up and down on it.
- Users could click on a particular circle to view more information about the movie that circle represented.
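As an illustration of the colour scheme above, the helper below maps a seen/predicted-rating pair onto a map colour. The width of the "close to 2.5 stars" band and the exact shading arithmetic are assumptions of this example, since the thesis gives no precise thresholds.

    // Sketch of the map colour coding. The "close to 2.5 stars" band
    // (here +/- 0.25 stars) and the shading maths are assumptions.
    import java.awt.Color;

    class MapColours {
        static Color colourFor(boolean seen, double predictedStars) {
            if (seen) return Color.BLUE;
            if (Math.abs(predictedStars - 2.5) <= 0.25) return Color.ORANGE;
            if (predictedStars > 2.5) {
                // Darker green for higher predictions (5 stars = darkest).
                float t = (float) ((predictedStars - 2.5) / 2.5); // 0..1
                return new Color(0f, 1f - 0.6f * t, 0f);
            }
            // Darker red for lower predictions (0 stars = darkest).
            float t = (float) ((2.5 - predictedStars) / 2.5); // 0..1
            return new Color(1f - 0.6f * t, 0f, 0f);
        }
    }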
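And, as noted in the Social Filtering Control description above, a minimal sketch of how ignoring a user could feed into recommendations: ignored users are simply excluded from the similar-user neighbourhood before a prediction is made. All types and names are invented for illustration; in the prototype this preference actually travels through the isuggest Controller into the Duine Toolkit (Section 4.3).

    // Sketch: the "Ignore This User" control removes ignored users from
    // the neighbourhood before Social Filtering predicts a rating.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;

    class Neighbour {
        final String userId;
        final double similarity;      // similarity to the current user
        final double ratingForItem;   // this neighbour's rating of the item

        Neighbour(String userId, double similarity, double ratingForItem) {
            this.userId = userId;
            this.similarity = similarity;
            this.ratingForItem = ratingForItem;
        }
    }

    class IgnoreFilter {
        static List<Neighbour> withoutIgnored(List<Neighbour> neighbours,
                                              Set<String> ignoredIds) {
            List<Neighbour> kept = new ArrayList<>();
            for (Neighbour n : neighbours) {
                if (!ignoredIds.contains(n.userId)) kept.add(n);
            }
            return kept;
        }

        // Similarity-weighted average rating over the remaining neighbours.
        static double predict(List<Neighbour> neighbours) {
            double num = 0, den = 0;
            for (Neighbour n : neighbours) {
                num += n.similarity * n.ratingForItem;
                den += Math.abs(n.similarity);
            }
            return den == 0 ? 2.5 : num / den; // neutral fallback
        }
    }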

Three variants of the Map Based presentation were included in isuggest-usability, in order to investigate how useful users would find particular styles of Map Based presentation. Each of these variants is described below.

FIGURE 4.10: Full Map Presentation Zoomed Out View

Full Map: (shown in Figures 4.10 & 4.11) This map displayed all of the movies found in the MovieLens dataset. Each movie on this map was placed close to the genres that it belonged to. The names of the genres that movies were divided into were displayed in large writing on the map.

FIGURE 4.11: Full Map Presentation Zoomed In View

FIGURE 4.12: Similar Items Map Presentation

Top 100 Map: This map was exactly the same as the Full Map, except that, to reduce clutter and confusion on the map, it displayed only 100 movies: the movies with the highest predicted ratings for the current user.

Similar Items Map: (shown in Figure 4.12) This map showed the user a single focus item, surrounded by a number of items. These items were described to users as being related to the focus item because the users who liked the focus item also liked these items. This map was

chosen for inclusion because it displays items in a way similar to the way that liveplasma displays items.

isuggest-unobtrusive

This version of the prototype extended the basic isuggest prototype to incorporate the ability to generate ratings using only unobtrusively obtained information about a user. isuggest-unobtrusive made use of the play-counts stored on users' iPods to automatically generate a set of ratings that these users would give to particular artists. These ratings were then used to generate recommendations for that user.

When using isuggest-unobtrusive, users connected their iPod, then clicked Get Ratings From My iPod; ratings were then generated from the iPod connected to the system and an explanation of the ratings generation was shown. Users could then see the ratings that had been generated for them and the recommendations that had been produced from those ratings. Users were able to choose from three different recommendation techniques: Random (which merely assigned a random number as the user's predicted rating for each item), Social Filtering and Genre Based.

The explanation of the ratings generation that was displayed is shown in Figure 4.13. It described the number of ratings that had been generated. It also noted that artists the user listened to frequently had been given a high rating, while artists the user listened to less frequently received lower ratings. The construction of the ratings generation algorithm and this explanation screen was guided by the findings of the questionnaire. A particularly important consideration was the suggestion that complicated explanations could confuse a user's understanding and do more harm than good. Thus, this explanation screen was designed to be simple for users to understand, yet still communicate effectively the way that ratings had been generated.

FIGURE 4.13: The Explanation Screen Displayed After Ratings Generation
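The prototype's actual ratings generation algorithm is detailed in Section 4.3. Purely to make the idea concrete, the sketch below shows one simple monotone mapping from play-counts to star ratings, in which more-played artists receive higher ratings; the percentile-based banding is an assumption of this example, not the prototype's documented method.

    // Illustrative only: one way to turn artist play-counts into 1-5
    // star ratings so that frequently played artists rate highly. The
    // percentile banding is an assumption, not the thesis's algorithm.
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class PlayCountRatings {
        static Map<String, Integer> generate(Map<String, Integer> playCounts) {
            List<Integer> sorted = new ArrayList<>(playCounts.values());
            Collections.sort(sorted);
            Map<String, Integer> ratings = new HashMap<>();
            for (Map.Entry<String, Integer> e : playCounts.entrySet()) {
                // Rank of this artist's play-count among all artists, 0..1.
                double pct = (double) lowerBound(sorted, e.getValue())
                        / sorted.size();
                ratings.put(e.getKey(), 1 + (int) (pct * 4.999)); // bands 1..5
            }
            return ratings;
        }

        private static int lowerBound(List<Integer> sorted, int value) {
            int i = 0;
            while (i < sorted.size() && sorted.get(i) < value) i++;
            return i;
        }
    }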

4.3 Design & Architecture

The architecture of the basic prototype is shown in Figure 4.14, with the components constructed during this thesis marked in blue. The core components of the basic prototype were the isuggest Controller, the isuggest Interface and the Duine Toolkit.

The isuggest Controller managed the isuggest system, allowing users to log in, submit ratings, set preferences and receive recommendations. It submitted any ratings and preferences to the Duine Toolkit and decided when a user's recommendations needed to be updated. Such an update was required whenever a user changed their preferences or had submitted a certain number of new ratings to the Duine Toolkit (a small sketch of this decision logic appears at the end of this overview).

The isuggest Interface managed all of the user interaction for the isuggest system. This component was built using the Processing graphical toolkit. The basic isuggest Interface incorporated List Based presentation screens that enabled users to rate items and view recommendations. The isuggest Interface submitted the users' ratings and preferences to the isuggest Controller, and it received new recommendations from the isuggest Controller whenever the user's recommendations were updated.

The Duine Toolkit received ratings and preferences from the isuggest Controller and used these, along with a Ratings Database, to generate recommendations when required.

FIGURE 4.14: Architecture Of The Basic Prototype, With Components Constructed During This Thesis Marked In Blue
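To illustrate the Controller's update rule described above, the sketch below triggers a recommendation refresh on any preference change, or once a threshold of new ratings has accumulated. The threshold value and all names are invented for this example; the thesis states only that "a certain number" of new ratings forced an update.

    // Sketch of the isuggest Controller's update decision. The threshold
    // of 5 new ratings is an invented value; the thesis does not state it.
    class UpdatePolicy {
        private static final int NEW_RATINGS_THRESHOLD = 5; // assumption
        private int newRatingsSinceUpdate = 0;

        // Returns true when the user's recommendations must be recomputed.
        boolean onRatingSubmitted() {
            newRatingsSinceUpdate++;
            if (newRatingsSinceUpdate >= NEW_RATINGS_THRESHOLD) {
                newRatingsSinceUpdate = 0;
                return true;
            }
            return false;
        }

        // Any preference change (e.g. a genre slider) forces an update.
        boolean onPreferenceChanged() {
            newRatingsSinceUpdate = 0;
            return true;
        }
    }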

isuggest-usability

isuggest-usability extended the basic prototype by adding scrutability and control features. This version of the prototype made movie recommendations, based upon the MovieLens standard data set. Figure 4.15 shows the architecture of isuggest-usability, with the components constructed during this thesis marked in blue.

FIGURE 4.15: Architecture Of isuggest-usability, With Components Constructed During This Thesis Marked In Blue

The additional features included in this version of the prototype were:

Map Based Presentation Screens: These presentation screens made use of the traer.physics and traer.animation libraries. The traer.physics library was used to create a simulated particle system, in which all particles repel one another while links hold particular particles close together. This particle system was used to determine the positions of items in the Map Based presentation. The Full Map and Top 100 Map began by placing all of the system's movie genres onto the map as particles. Items were then placed one-by-one onto the map, with each item linked to the genres that it belonged to. In this way, each item was repelled by all other items in the system, but stayed close to the genres that it belonged to. The Similar Items Map used a different method to position items: it calculated the correlation between each movie and all other movies in the database, in terms of the ratings that users had given them, and then displayed a single focus item encircled by all of the movies that had a high level of correlation with the focus item (a sketch of this correlation calculation appears after this feature list).

Similar Users Screen: This screen made use of a list of similar users that was output from the Social Filtering algorithm. It displayed the users who were most similar to the current user (to a maximum of 9 similar users).

Control Features: These features received input from the user regarding their preferences and forwarded this information to the isuggest Controller. The isuggest Controller then set these preferences in the Duine Toolkit and updated the user's recommendations.

Modified Recommendation Algorithms: The Social Filtering, Genre Based, Learn By Example and Most Popular algorithms were all modified so that they attached extensive explanation information to each recommendation that was made. This allowed the Explanation Screens to fully explain each of the recommendations. The Social Filtering and Genre Based algorithms were also modified to make use of the user preferences that were set using the control features.

Explanation Screens: These screens took the explanation information that was attached to each recommendation and displayed it in a way that the user should be able to understand.
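As referenced above, here is a minimal sketch of the item-to-item correlation underlying the Similar Items Map. The thesis does not name the exact coefficient used, so Pearson correlation over the users who rated both movies is assumed.

    // Sketch: correlation between two movies based on the ratings of
    // users who rated both. Pearson correlation is an assumption; the
    // thesis only says "correlation ... in terms of the ratings".
    import java.util.Map;

    class ItemCorrelation {
        // ratingsA / ratingsB map userId -> that user's rating of movie A / B.
        static double pearson(Map<String, Double> ratingsA,
                              Map<String, Double> ratingsB) {
            double n = 0, sumA = 0, sumB = 0, sumAA = 0, sumBB = 0, sumAB = 0;
            for (Map.Entry<String, Double> e : ratingsA.entrySet()) {
                Double b = ratingsB.get(e.getKey());
                if (b == null) continue; // only users who rated both movies
                double a = e.getValue();
                n++; sumA += a; sumB += b;
                sumAA += a * a; sumBB += b * b; sumAB += a * b;
            }
            if (n < 2) return 0; // not enough overlap to correlate
            double cov = sumAB - sumA * sumB / n;
            double varA = sumAA - sumA * sumA / n;
            double varB = sumBB - sumB * sumB / n;
            if (varA == 0 || varB == 0) return 0;
            return cov / Math.sqrt(varA * varB);
        }
    }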

isuggest-unobtrusive

isuggest-unobtrusive extended the basic prototype by adding the ability to automatically generate a user's ratings from the play-counts stored on their iPod. This version of the prototype made music recommendations based upon the last.fm dataset. The architecture of isuggest-unobtrusive is shown in Figure 4.16, with the components constructed during this thesis marked in blue.

FIGURE 4.16: Architecture Of isuggest-unobtrusive, With Components Constructed During This Thesis Marked In Blue

The additional features included in this version of the prototype were:

Ratings Generation Algorithm. This algorithm needed to be both accurate at generating ratings from a user's play-counts and easy to explain to users. The algorithm that was chosen to generate ratings worked in the following way:
