Applying a linguistic operator for aggregating movie preferences

Similar documents
A Constrained Spreading Activation Approach to Collaborative Filtering

Comparison of Recommender System Algorithms focusing on the New-Item and User-Bias Problem

Collaborative Filtering based on User Trends

Collaborative Filtering using a Spreading Activation Approach

A Time-based Recommender System using Implicit Feedback

A novel supervised learning algorithm and its use for Spam Detection in Social Bookmarking Systems

Project Report. An Introduction to Collaborative Filtering

A Scalable, Accurate Hybrid Recommender System

Part 11: Collaborative Filtering. Francesco Ricci

Performance Comparison of Algorithms for Movie Rating Estimation

Research on Applications of Data Mining in Electronic Commerce. Xiuping YANG 1, a

Available online at ScienceDirect. Procedia Technology 17 (2014 )

Robustness and Accuracy Tradeoffs for Recommender Systems Under Attack

Two Collaborative Filtering Recommender Systems Based on Sparse Dictionary Coding

Movie Recommender System - Hybrid Filtering Approach

Interest-based Recommendation in Digital Library

Content-based Dimensionality Reduction for Recommender Systems

Comparing State-of-the-Art Collaborative Filtering Systems

Hybrid Recommendation System Using Clustering and Collaborative Filtering

Michele Gorgoglione Politecnico di Bari Viale Japigia, Bari (Italy)

ECLT 5810 Evaluation of Classification Quality

Towards a hybrid approach to Netflix Challenge

Justified Recommendations based on Content and Rating Data

Weka ( )

Recommendation System with Location, Item and Location & Item Mechanisms

Collaborative Filtering using Euclidean Distance in Recommendation Engine

Collaborative Filtering: A Comparison of Graph-Based Semi-Supervised Learning Methods and Memory-Based Methods

An Empirical Study of Lazy Multilabel Classification Algorithms

New user profile learning for extremely sparse data sets

Towards Time-Aware Semantic enriched Recommender Systems for movies

A PERSONALIZED RECOMMENDER SYSTEM FOR TELECOM PRODUCTS AND SERVICES

Feature-weighted User Model for Recommender Systems

A System for Identifying Voyage Package Using Different Recommendations Techniques

A Recommender System Based on Improvised K- Means Clustering Algorithm

The Effect of Diversity Implementation on Precision in Multicriteria Collaborative Filtering

Information Integration of Partially Labeled Data

Collaborative Filtering Based on Iterative Principal Component Analysis. Dohyun Kim and Bong-Jin Yum*

Efficient Image Compression of Medical Images Using the Wavelet Transform and Fuzzy c-means Clustering on Regions of Interest.

Influence in Ratings-Based Recommender Systems: An Algorithm-Independent Approach

Collaborative Filtering using Weighted BiPartite Graph Projection A Recommendation System for Yelp

Solving the Sparsity Problem in Recommender Systems Using Association Retrieval

System For Product Recommendation In E-Commerce Applications

CHAPTER 4 FUZZY LOGIC, K-MEANS, FUZZY C-MEANS AND BAYESIAN METHODS

Estimating Missing Attribute Values Using Dynamically-Ordered Attribute Trees

Review on Techniques of Collaborative Tagging

Recommender Systems: Attack Types and Strategies

Advances in Natural and Applied Sciences. Information Retrieval Using Collaborative Filtering and Item Based Recommendation

Improving the Efficiency of Fast Using Semantic Similarity Algorithm

Image retrieval based on bag of images

KNOW At The Social Book Search Lab 2016 Suggestion Track

Individualized Error Estimation for Classification and Regression Models

Extension Study on Item-Based P-Tree Collaborative Filtering Algorithm for Netflix Prize

Clustering of Data with Mixed Attributes based on Unified Similarity Metric

A PROPOSED HYBRID BOOK RECOMMENDER SYSTEM

BordaRank: A Ranking Aggregation Based Approach to Collaborative Filtering

Recommender System using Collaborative Filtering Methods: A Performance Evaluation

Predictive Analysis: Evaluation and Experimentation. Heejun Kim

INF4820, Algorithms for AI and NLP: Evaluating Classifiers Clustering

People Recommendation Based on Aggregated Bidirectional Intentions in Social Network Site

A Recursive Prediction Algorithm for Collaborative Filtering Recommender Systems

Recommendation Based on Co-clustring Algorithm, Co-dissimilarity and Spanning Tree

CHAPTER 3 MAINTENANCE STRATEGY SELECTION USING AHP AND FAHP

Towards QoS Prediction for Web Services based on Adjusted Euclidean Distances

A Bagging Method using Decision Trees in the Role of Base Classifiers

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS

Evaluation Measures. Sebastian Pölsterl. April 28, Computer Aided Medical Procedures Technische Universität München

How the Distribution of the Number of Items Rated per User Influences the Quality of Recommendations

CS435 Introduction to Big Data Spring 2018 Colorado State University. 3/21/2018 Week 10-B Sangmi Lee Pallickara. FAQs. Collaborative filtering

Selection of Best Web Site by Applying COPRAS-G method Bindu Madhuri.Ch #1, Anand Chandulal.J #2, Padmaja.M #3

Noise-based Feature Perturbation as a Selection Method for Microarray Data

A Classifier with the Function-based Decision Tree

The Tourism Recommendation of Jingdezhen Based on Unifying User-based and Item-based Collaborative filtering

Similarity Measures of Pentagonal Fuzzy Numbers

Retrieval Evaluation. Hongning Wang

Improving Results and Performance of Collaborative Filtering-based Recommender Systems using Cuckoo Optimization Algorithm

The Travelling Salesman Problem. in Fuzzy Membership Functions 1. Abstract

IJREAT International Journal of Research in Engineering & Advanced Technology, Volume 1, Issue 5, Oct-Nov, ISSN:

Best First and Greedy Search Based CFS and Naïve Bayes Algorithms for Hepatitis Diagnosis

arxiv: v1 [cs.ir] 1 Jul 2016

Recurrent Neural Network Models for improved (Pseudo) Random Number Generation in computer security applications

Ordered Weighted Average Based Fuzzy Rough Sets

Semantic feedback for hybrid recommendations in Recommendz

Recommender Systems 6CCS3WSN-7CCSMWAL

The Comparative Study of Machine Learning Algorithms in Text Data Classification*

A Simulation Based Comparative Study of Normalization Procedures in Multiattribute Decision Making

An Improved Switching Hybrid Recommender System Using Naive Bayes Classifier and Collaborative Filtering

Chapter 2 Review of Previous Work Related to Recommender Systems

LEARNING WEIGHTS OF FUZZY RULES BY USING GRAVITATIONAL SEARCH ALGORITHM

Image Quality Assessment Techniques: An Overview

Proposing a New Metric for Collaborative Filtering

Multi Attribute Decision Making Approach for Solving Intuitionistic Fuzzy Soft Matrix

A Comparative Study of Selected Classification Algorithms of Data Mining

CHAPTER 5 CLUSTERING USING MUST LINK AND CANNOT LINK ALGORITHM

Survey on Collaborative Filtering Technique in Recommendation System

Explaining Recommendations: Satisfaction vs. Promotion

Top-N Recommendations from Implicit Feedback Leveraging Linked Open Data

Evaluating Classifiers

Applying a linguistic operator for aggregating movie preferences

Ioannis Patiniotakis (1), Dimitris Apostolou (2), and Gregoris Mentzas (1)

(1) National Technical Univ. of Athens, Zografou 157 80, Athens, Greece
(2) Dept. of Informatics, University of Piraeus, Piraeus 185 34, Greece

Abstract

Most modern recommender systems rely on algorithms that treat user ratings as numeric values. However, this approach is not grounded on a sound theoretical basis: most rating schemes used today are in fact limited, ordered option sets, and therefore plain arithmetic is not appropriate. In this paper we explore the suitability of the Linguistic Ordered Weighted Average (LOWA) operator as a possible solution to this problem. Our experiments indicate that the LOWA operator performs as well as the weighted sum used today.

Keywords: Collaborative filtering recommender systems, LOWA operator, rating aggregation

1. Introduction

A considerable amount of content is created and distributed over the Internet. Recommender systems help online users avoid being overwhelmed by helping them locate content or items that correspond to their needs and preferences [Adomavicius, Kwon (2007)], [Hanani et al. (2001)], based on the opinions of a community of users [Resnick, Varian (1997)]. As [Adomavicius, Tuzhilin (2005)] point out, recommender systems remain a highly active research area because of the many open issues. A common classification of recommender systems, based on their recommendation approach, is given by [Balabanovic, Shoham (1997)]: content-based recommender systems recommend items based on past user choices, collaborative filtering recommenders suggest items based on similar users' choices, and hybrid recommenders combine both approaches. Another classification is based on the algorithmic technique [Breese et al. (1998)]: memory-based algorithms calculate recommendations using all the latest data, whereas model-based algorithms precompute a recommendation model and then apply it to produce new recommendations. Over the past years a wide variety of methods have been developed to implement each of these recommendation approaches.

Contemporary methods for collaborative filtering make use of ratings in which users have expressed their preferences about content items. Ratings are often denoted on Likert scales (e.g. 1-5 or 1-7) or other equivalent schemes such as stars or linguistic labels (e.g. good, very good), which map directly to numeric values.

Many popular websites (e.g. Amazon) use stars or linguistic labels instead of numeric values. The numeric rating values, either provided directly or derived indirectly, are used to produce recommendations for a specific item to a specific user, considering an aggregation (e.g. a weighted sum) of all known ratings that similar users gave to the same item in the past. The weights are the similarities between the users who provided ratings in the past and the user for whom a recommendation needs to be made. Calculations often employ arithmetic aggregation techniques and algorithms, for instance cosine-based similarity, Pearson correlation, weighted sum, etc. [Dunham (2002)].

An issue here is that arithmetic calculations such as addition and subtraction have no meaning when performed on star- or linguistic-based ratings [Fenton, Pfleeger (1997)]. In principle such scales cannot express a continuous interval of preferences in which the distance between numbers also expresses the distance between user preferences in a meaningful and coherent manner. Scales of this type are called ordinal because they are used to denote an ordering of items with respect to an attribute. Although arithmetic operations on ordinal data are often performed in disciplines such as the behavioral, management and computer sciences, there exist more appropriate techniques for working with such data.

Driven by this theoretical concern, we set out to explore the possibility of using alternative techniques for aggregating ratings, suitable for ratings on an ordinal scale. A potential solution to the problem could be the use of a linguistic aggregation method. We apply such a method to memory-based collaborative filtering recommender systems and perform subsequent analyses and comparisons. More specifically, we have selected to investigate the usability of the Linguistic Ordered Weighted Average (LOWA) method ([Herrera, Verdegay (1993)], [Delgado et al. (1993a)], [Delgado et al. (1993b)]) as an evaluation aggregation operator. According to the results of our experiments, LOWA performs as well as the weighted sum method used today. Of course, this is not a conclusive work on the suitability of linguistic aggregation techniques in recommender systems. Our research aspires to shed light on the potential of these methods for use in real-world implementations.

The remaining sections of the paper are organized as follows. Section II presents the LOWA operator and gives a short example of its usage. Section III explains how the LOWA operator can be used as an aggregation method. Section IV describes the experiments we have conducted to compare the linguistic aggregation approach with a classic collaborative filtering recommendation approach. Section V summarizes the contributions of this work and suggests directions for future research.

2. The LOWA operator

LOWA has been proposed by [Herrera et al. (1996)], [Herrera, Herrera-Viedma (1997)]. It is based on the concept of the Ordered Weighted Average (OWA) proposed by [Yager (1988)] and on the convex combination of linguistic labels defined by [Delgado et al. (1993b)].

It is an operator that acts on linguistic variables, such as words or sentences in a natural or artificial language, and produces linguistic output too. LOWA is a tool used mainly in multicriteria decision making, for instance in the selection of a solution from a group of alternatives. As [Ben-Arieh, Chen (2006)] argue, in the real world the uncertainty, the constraints, and even the vague knowledge of the experts imply that decision makers cannot provide exact numbers to express their opinions; the use of linguistic labels makes expert judgment more reliable and consistent.

LOWA operates on the linguistic evaluations given to alternatives by a decision maker against a set of criteria. Linguistic evaluations take values from an ordered set of linguistic terms, for instance Very Low, Low, Medium, High, and Very High. Each term has a position in the set, i.e. Very Low is at position 1 and Very High at position 5. LOWA produces an overall assessment for each alternative. Typically, the LOWA operator (Φ) accepts as input a series of linguistic evaluations along with a series of weights; each weight is associated with a corresponding criterion. The output of the LOWA operator is again a single, aggregated linguistic evaluation. An example of LOWA inputs and outputs, assuming equal criteria weights, is shown in Table 1.

Table 1. A LOWA example

          Crit-1   Crit-2   Crit-3   Crit-4   LOWA
Weights   0.25     0.25     0.25     0.25
Alt. 1    VL       M        H        H        M
Alt. 2    M        L        VL       H        M
Alt. 3    M        VL       M        L        L
Alt. 4    L        H        M        L        M

The operator works by first sorting the inputs in descending order based on their positions and then aggregating them in a pairwise manner: the two lowest ratings are merged into a new, aggregated rating, which is then merged with the next rating, and the procedure goes on until all input ratings have been used. For example:

\Phi(H, H, M, VL) = H \otimes \Phi(H, M, VL) = H \otimes (H \otimes \Phi(M, VL)) = H \otimes (H \otimes (M \otimes VL))

The pairwise aggregation is achieved using a fuzzy technique, i.e. calculating the distance between the (ordered) positions of the two evaluations in the pair and then estimating where in between the new, aggregated evaluation should lie, using the corresponding weights. Let X, Y be two linguistic terms to be aggregated and let s_X and s_Y be their positions in the ordered set of linguistic terms. Then the result of the pairwise aggregation with the LOWA operator is given by the following formula.

X \otimes Y = s_k, \qquad k = \min\{\, s_{\max},\; s_Y + \mathrm{round}(\beta \cdot (s_X - s_Y)) \,\}, \qquad s_X \ge s_Y

where s_max is the position of the highest term in the set and β is the weight used in the combination. For example, M's position is 3 and VL's position is 1, hence the distance of their positions is 3 - 1 = 2. Therefore,

M \otimes VL = s_{\min\{5,\; 1 + \mathrm{round}(0.33 \cdot (3-1))\}} = s_2 = L
H \otimes L  = s_{\min\{5,\; 2 + \mathrm{round}(0.33 \cdot (4-2))\}} = s_3 = M
H \otimes M  = s_{\min\{5,\; 3 + \mathrm{round}(0.33 \cdot (4-3))\}} = s_3 = M

The weights used (0.33, here 0.25 / 0.75) result from the following formula:

\beta_h = \frac{w_h}{\sum_{k=2}^{m} w_k}, \qquad h = 2, \dots, m

The LOWA operator is thoroughly detailed in [Herrera, Herrera-Viedma (1997)].

3. Applying the LOWA operator in Collaborative Filtering

In order to investigate the suitability of the LOWA operator for collaborative filtering recommender systems, we have developed a new recommendation method. Our method is a modification of the typical memory-based, user-based collaborative filtering approach. The difference lies in the last step, where the aggregation of the users' evaluations takes place.

Let us consider that I is the set of items available for review and evaluation in the data set, and U is the set of users in the data set who have submitted evaluations for items in I. The evaluation that user u has submitted for item i is expressed as R(u, i). Evaluations can take any value from a predetermined set of options R0; they can be either discrete or continuous. The goal of each recommender system is to estimate the recommendation function R: U x I -> R0.

Typically, the recommendation process starts with calculating the similarity of user u to all other users in the set U. Several techniques have been proposed in the literature to compute similarity. In our method we have opted to use two popular ones: cosine-based similarity and correlation-based similarity (see [Breese et al. (1998)], [Karypis (2000)] and [Sarwar et al. (2000)]). Cosine-based similarity can be calculated as follows:

\mathrm{sim}(u, u') = \cos\big(\vec{R}_u, \vec{R}_{u'}\big) = \frac{\sum_{i \in I(u,u')} R(u,i)\, R(u',i)}{\sqrt{\sum_{i \in I(u,u')} R(u,i)^2}\; \sqrt{\sum_{i \in I(u,u')} R(u',i)^2}}

where I(u,u') is the set of items that both u and u' have evaluated.
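
To make the pairwise combination rule concrete, the following Python sketch reproduces the worked example and the LOWA column of Table 1. It follows the simplified exposition above, in which the same weight β (0.33 for four equal criterion weights of 0.25) is used at every pairwise step; the function and variable names are ours, not the authors'.

```python
# Minimal sketch of the LOWA aggregation described above (names are ours).
TERMS = ["VL", "L", "M", "H", "VH"]            # ordered label set, positions 1..5
POS = {t: k + 1 for k, t in enumerate(TERMS)}  # e.g. POS["M"] == 3
S_MAX = len(TERMS)                             # position of the highest term

def combine(x, y, beta):
    """Pairwise combination of two labels (the pairwise rule above), in any input order."""
    s_hi, s_lo = max(POS[x], POS[y]), min(POS[x], POS[y])
    k = min(S_MAX, s_lo + round(beta * (s_hi - s_lo)))
    return TERMS[k - 1]

def lowa(labels, beta):
    """Sort descending by position, then merge pairwise from the lowest pair upwards."""
    ordered = sorted(labels, key=POS.get, reverse=True)
    agg = ordered[-1]
    for label in reversed(ordered[:-1]):
        agg = combine(label, agg, beta)
    return agg

if __name__ == "__main__":
    beta = 0.25 / 0.75  # ~0.33 for four equal weights of 0.25 (Table 1)
    print(combine("M", "VL", 0.33))           # L, as in the worked example
    print(lowa(["VL", "M", "H", "H"], beta))  # M  (Table 1, Alt. 1)
    print(lowa(["M", "L", "VL", "H"], beta))  # M  (Alt. 2)
    print(lowa(["M", "VL", "M", "L"], beta))  # L  (Alt. 3)
    print(lowa(["L", "H", "M", "L"], beta))   # M  (Alt. 4)
```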

Correlation-based (or Pearson) similarity can be calculated using the Pearson correlation function:

\mathrm{sim}(u, u') = \frac{E\big[(R(u) - \mu_u)\,(R(u') - \mu_{u'})\big]}{\sqrt{E[R(u)^2] - E[R(u)]^2}\; \sqrt{E[R(u')^2] - E[R(u')]^2}}
              = \frac{n \sum_i R(u,i)\, R(u',i) - \sum_i R(u,i) \sum_i R(u',i)}{\sqrt{n \sum_i R(u,i)^2 - \big(\sum_i R(u,i)\big)^2}\; \sqrt{n \sum_i R(u',i)^2 - \big(\sum_i R(u',i)\big)^2}}

where the sums run over i in I(u,u'), n is the size of I(u,u'), i.e. the number of items evaluated by both users, and E[X] = μ_X is the mean value of X.

The recommendation process continues with the selection of the top k users most similar to user u who have evaluated item i when user u has not. Let us call this set of users N(u). Obviously N(u) may vary for different items. Also, k can range from 1 up to all users in the data set; we have decided to use k = 3 in our experiments.

The last step of the recommendation process is the aggregation of the evaluations of the users u' in N(u) for item i. The result of this aggregation is the recommendation R(u, i). Normally, this would be achieved using a weighted sum approach:

R(u, i) = z \sum_{u' \in N(u)} \mathrm{sim}(u, u')\, R(u', i)

or an adjusted weighted sum approach:

R(u, i) = E[R(u)] + z \sum_{u' \in N(u)} \mathrm{sim}(u, u')\, \big( R(u', i) - E[R(u')] \big)

where

z = \frac{1}{\sum_{u' \in N(u)} |\mathrm{sim}(u, u')|}

However, as we have already mentioned, in our approach we have chosen to use the LOWA operator (Φ) for evaluation aggregation, instead of a weighted sum technique. The LOWA operator is normally used to produce a single aggregate linguistic evaluation from the evaluations given to an alternative on a set of predefined criteria, each criterion being associated with a weight. In our context we treat items as alternatives, we aggregate item ratings over users instead of over criteria, and we compute the LOWA weights from the user similarities instead of from criteria weights. Aggregation using LOWA can be achieved as follows:

R(u, i) = \Phi\big( \{\, (R(u', i),\ w_{u'}) : u' \in N(u) \,\} \big)

meaning that the recommendation is the result of applying the LOWA operator to the set of evaluations of the users u' in N(u), using as weights their similarities, normalized so that they sum to one:

w_{u'} = z\, \mathrm{sim}(u, u'), \qquad \sum_{u' \in N(u)} w_{u'} = 1
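
As an illustration of the overall prediction step (our sketch, not the authors' code), the snippet below computes cosine and Pearson similarities over co-rated items, picks the k = 3 most similar neighbours that rated the target item, and aggregates their ratings with the plain weighted sum; in the paper's method this final aggregation is replaced by the LOWA operator shown above. The ratings dictionary is toy data and all names are assumptions.

```python
from math import sqrt

# ratings: {user: {item: rating}} - toy data, not the MovieLens set used in the paper
ratings = {
    "u1": {"a": 5, "b": 3, "c": 4},
    "u2": {"a": 4, "b": 3, "c": 5, "d": 2},
    "u3": {"a": 2, "b": 1, "c": 2, "d": 5},
    "u4": {"b": 4, "c": 5, "d": 3},
}

def co_rated(u, v):
    """Items evaluated by both users, i.e. I(u, v)."""
    return [i for i in ratings[u] if i in ratings[v]]

def cosine_sim(u, v):
    items = co_rated(u, v)
    if not items:
        return 0.0
    num = sum(ratings[u][i] * ratings[v][i] for i in items)
    den = (sqrt(sum(ratings[u][i] ** 2 for i in items))
           * sqrt(sum(ratings[v][i] ** 2 for i in items)))
    return num / den if den else 0.0

def pearson_sim(u, v):
    items = co_rated(u, v)
    n = len(items)
    if n == 0:
        return 0.0
    sx = sum(ratings[u][i] for i in items)
    sy = sum(ratings[v][i] for i in items)
    sxy = sum(ratings[u][i] * ratings[v][i] for i in items)
    sxx = sum(ratings[u][i] ** 2 for i in items)
    syy = sum(ratings[v][i] ** 2 for i in items)
    den = sqrt(n * sxx - sx ** 2) * sqrt(n * syy - sy ** 2)
    return (n * sxy - sx * sy) / den if den else 0.0

def predict(u, i, sim=cosine_sim, k=3):
    """Weighted-sum prediction of R(u, i) from the k most similar users that rated i."""
    candidates = [(sim(u, v), v) for v in ratings if v != u and i in ratings[v]]
    neighbours = sorted(candidates, reverse=True)[:k]           # N(u) for item i
    z = sum(abs(s) for s, _ in neighbours)
    if z == 0:
        return None
    return sum(s * ratings[v][i] for s, v in neighbours) / z    # LOWA would replace this step

print(predict("u1", "d"))                    # cosine similarity, weighted sum
print(predict("u1", "d", sim=pearson_sim))   # Pearson similarity, weighted sum
```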

4. The experiment

In order to evaluate the suitability of our method for collaborative filtering recommender systems and compare it to typical recommendation methods, we have devised and conducted a series of experiments. As input data set we use one of the well-known MovieLens data sets provided by the GroupLens research project (http://www.grouplens.org/). The specific dataset includes 100,000 movie ratings from 943 users for 1,682 different movies. Each user has evaluated at least 20 items, using a rating from 1 to 5. Movie ratings in MovieLens are expressed as stars with the following meanings: 1 star means Awful, 2 stars Fairly bad, 3 stars It's ok, 4 stars Will enjoy, and 5 stars Must see. The linguistic term order is 1 for 1 star up to 5 for 5 stars.

The data have been partitioned using a 10-fold cross-validation approach, where each fold contains a disjoint subset of the data, about 10% of the total evaluations in the data set; [Duda et al. (2000)], [Witten, Frank (2005)]. It is therefore possible to run each experiment 10 times, using one fold as the test set and the remaining nine as the training set. The density of the data set, i.e. the ratio of non-zero ratings to the total number of possible ratings, is estimated at 6.3% and its sparsity at 93.7% (see [Sarwar et al. (2001)]).

We have designed a series of four tests in order to compare a widely used collaborative filtering method, based on a weighted sum for rating aggregation, with the proposed linguistic aggregation method. Furthermore, we have opted to use two variations of each method, namely one using cosine-based similarity and one using Pearson similarity, hoping to obtain a better idea of how these methods compare. Both methods use the evaluations of the top 3 most similar users to produce their recommendations. Each setup has been executed 10 times, once for each test set, using the remaining nine folds as the training set. In each run, the ratings of the test set have been ignored.

4.1 Comparison of methods using the MAE error metric

In order to compare the performance of the methods we use four metrics: one error metric, namely the Mean Absolute Error (MAE), and three classification metrics, namely Precision, Recall and F-measure. MAE computes the deviation of the generated recommendations from the actual ratings and thus gives an approximation of how well each method performs. MAE is a simple and easy to comprehend metric, and therefore it is very popular [Cremonesi et al. (2008)]. We have selected MAE as a performance metric in our work because it is less sensitive to outliers than other error metrics such as the Mean Square Error (MSE) and the Root MSE. MAE is not suitable for classification tasks, e.g. top-N item recommendations [Cremonesi et al. (2008)].
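
As a rough illustration of this evaluation protocol (our sketch, not the authors' code), the snippet below splits a list of (user, item, rating) triples into 10 disjoint folds and computes the MAE of a prediction function on each held-out fold; the predict argument stands in for any of the four tested setups, and all names are assumptions.

```python
import random

def ten_fold_mae(triples, predict, n_folds=10, seed=0):
    """Average MAE over an n-fold cross-validation of (user, item, rating) triples.

    `predict(train, user, item)` stands in for any of the four setups compared in
    the paper (cosine/Pearson similarity with weighted-sum or LOWA aggregation).
    """
    data = list(triples)
    random.Random(seed).shuffle(data)
    folds = [data[k::n_folds] for k in range(n_folds)]   # 10 disjoint ~10% subsets
    maes = []
    for k in range(n_folds):
        test = folds[k]
        train = [t for j, fold in enumerate(folds) if j != k for t in fold]
        errors = []
        for user, item, actual in test:
            predicted = predict(train, user, item)
            if predicted is not None:                    # skip items with no neighbours
                errors.append(abs(predicted - actual))
        if errors:
            maes.append(sum(errors) / len(errors))
    return sum(maes) / len(maes) if maes else float("nan")
```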

Table 2 and Figure 1 present, in a comparative manner, the error distribution of each test setup. As expected, the errors seem to have a normal distribution around zero.

Table 2. Methods sorted by MAE

Method            MAE    Count
Cosine, w.sum     0.70   98,91
Cosine, LOWA      0.72   98,91
Pearson, w.sum    0.73   98,39
Pearson, LOWA     0.73   98,318

Figure 1. Recommendation error distributions

A comparison of the examined methods based on MAE indicates that the weighted sum rating aggregation method coupled with cosine similarity slightly outperforms LOWA; however, the difference is marginal. It therefore seems that both methods perform equally well.

4.2 Comparison of methods using Precision-Recall metrics

According to [Cremonesi et al. (2008)], Precision and Recall are the most popular metrics in the information retrieval field. These metrics are suitable for tasks such as top-N item recommendation. One important note is that precision and recall should not be interpreted as absolute measures, but should only be used to compare different methods on the same dataset [Herlocker et al. (2004)].

A technique to compute precision and recall in top-N recommendations involves grouping items into two categories, the high-rated and the lower-rated items [Basu et al. (1998)]. In our case, we consider as high-rated movies those with a 5 rating. By classifying an item as high-rated or not high-rated, a recommendation can be a True Positive (TP), meaning that a truly high-rated item is classified as high-rated, a False Positive (FP), i.e. an item is falsely classified as high-rated, a True Negative (TN), i.e. a low-rated item is correctly classified as low-rated, or a False Negative (FN), i.e. a truly high-rated item is falsely classified as low-rated. To calculate the precision and recall metrics we have used the following formulas; based on them we can also calculate the F-measure, which combines both precision and recall in one metric.

\mathrm{precision} = \frac{TP}{TP + FP}, \qquad \mathrm{recall} = \frac{TP}{TP + FN}, \qquad F\text{-}measure = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}

Based on these formulas we get the results shown in Table 3. The data set used contained 21,201 high-rated movies, which is 21% of the available evaluations. We observe that the LOWA-based methods slightly outperform the weighted sum methods as far as the F-measure is concerned.

Table 3. Precision, Recall and F-measure metrics

Method            Precision   Recall    F-measure
Cosine, LOWA      0.47916     0.44190   0.45977
Cosine, w.sum     0.49497     0.41114   0.44918
Pearson, LOWA     0.48769     0.3981    0.43837
Pearson, w.sum    0.50491     0.36046   0.42063

Overall, LOWA as a rating aggregation method performs similarly to the collaborative filtering methods implemented in today's recommender systems. Whether using the precision, recall and F-measure classification metrics or the popular MAE error metric, the results indicate that both methods are almost equivalent.
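
For reference, the sketch below shows one way these classification metrics can be computed from predicted and actual ratings, using the "rating of 5 means high-rated" convention described above. It is our illustration, not the authors' code: the function name is ours, and since the paper does not state how fractional predictions were thresholded, the sketch simply counts a prediction as high-rated when it rounds to 5.

```python
def classification_metrics(pairs, high=5):
    """Precision, recall and F-measure over (predicted, actual) rating pairs.

    Actual ratings equal to 5 are treated as high-rated, following the paper;
    the thresholding of (possibly fractional) predicted ratings is an assumption.
    """
    tp = fp = fn = 0
    for predicted, actual in pairs:
        pred_high = round(predicted) >= high
        act_high = actual >= high
        if pred_high and act_high:
            tp += 1
        elif pred_high and not act_high:
            fp += 1
        elif act_high and not pred_high:
            fn += 1
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if (precision + recall) else 0.0)
    return precision, recall, f_measure

# Example: four predictions against actual ratings
print(classification_metrics([(4.6, 5), (3.2, 5), (4.9, 3), (2.0, 1)]))
```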

5. Conclusions and Future Work

We have set out to explore the suitability of linguistic aggregation methods, and more specifically of LOWA, for collaborative filtering recommender systems. For this purpose we have designed an experiment to compare LOWA with a popular aggregation technique implemented in existing recommender systems, the weighted sum. We have developed two implementations for each aggregation method, one using a cosine-based user similarity calculation and one using a Pearson-based similarity calculation. Therefore, four collaborative filtering methods have been implemented and tested.

The experiment outcomes led us to the conclusion that LOWA can be an alternative rating aggregation method which is also in line with the related theories about the operations that can be performed on data of ordinal scale. It is interesting to check whether it can also be exploited in more ways, or whether additional linguistic aggregation methods can be used in recommender systems. As already mentioned, it is necessary to conduct more tests using more data sets of various sizes and densities. More test setups can be designed, including additional testing variables and options. Another path of exploration would be the use of different linguistic aggregation operators. Furthermore, it would be extremely interesting to implement and compare multicriteria versions of the methods with their single-criterion counterparts.

Our goal was not to develop a superior collaborative filtering recommendation method but to explore whether linguistic aggregation operators such as LOWA work as well as their numeric counterparts and yield reliable results. According to our experimental results, it seems this is true.

References

Adomavicius G., Tuzhilin A. (2005). Towards the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions. IEEE Trans. Knowl. Data Engin., 17(6), pp. 734-749.
Adomavicius G., Kwon Y. (2007). New Recommendation Techniques for Multicriteria Rating Systems. IEEE Intelligent Systems, pp. 48-55.
Balabanovic M., Shoham Y. (1997). Fab: Content-Based, Collaborative Recommendation. Comm. ACM, 40(3), pp. 66-72.
Basu C., Hirsh H., Cohen W. (1998). Recommendation as classification: Using social and content-based information in recommendation. Proceedings of the Fifteenth National Conference on Artificial Intelligence, pp. 714-720.
Ben-Arieh D., Chen Z. (2006). Linguistic-Labels Aggregation and Consensus Measure for Autocratic Decision Making Using Group Recommendations. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 36(3), pp. 558-568.
Breese J.S., Heckerman D., Kadie C. (1998). Empirical Analysis of Predictive Algorithms for Collaborative Filtering. Proc. 14th Conf. on Uncertainty in Artificial Intelligence, Morgan Kaufmann, pp. 43-52.
Cremonesi P., Turrin R., Lentini E., Matteucci M. (2008). An Evaluation Methodology for Collaborative Recommender Systems. Intl. Conf. on Automated Solutions for Cross Media Content and Multi-channel Distribution, AXMEDIS '08.

Delgado M., Verdegay J.L., Vila M.A. (1993a). Linguistic Decision Making Models. International Journal of Intelligent Systems, 7, pp. 479-492.
Delgado M., Verdegay J.L., Vila M.A. (1993b). On Aggregation Operations of Linguistic Labels. International Journal of Intelligent Systems, 8, pp. 351-370.
Duda R., Hart P., Stork D. (2000). Pattern Classification. Wiley-Interscience, 2nd edition.
Dunham M.H. (2002). Data Mining: Introductory and Advanced Topics. Prentice Hall.
Fenton E., Pfleeger S.L. (1997). Software Metrics: A Rigorous and Practical Approach. PWS Publishing, p. 48.
Hanani U., Shapira B., Shoval P. (2001). Information filtering: overview of issues, research and systems. User Modeling and User-Adapted Interaction, 11(3), pp. 203-259.
Herlocker J., Konstan J., Terveen L., Riedl J. (2004). Evaluating Collaborative Filtering Recommender Systems. ACM Trans. on Inform. Syst., 22(1), pp. 5-53.
Herrera F., Verdegay J.L. (1993). Linguistic assessments in group decision. Proc. of the First European Congress on Fuzzy and Intelligent Technologies, Aachen.
Herrera F., Herrera-Viedma E., Verdegay J.L. (1996). Direct approach processes in group decision making using linguistic OWA operators. Fuzzy Sets and Systems, 79, pp. 175-190.
Herrera F., Herrera-Viedma E. (1997). Aggregation operators for linguistic weighted information. IEEE Trans. Systems, Man and Cybernetics, 27, pp. 646-656.
Karypis G. (2000). SUGGEST: Top-N Recommendation Engine [Online]. Available: http://www-users.cs.umn.edu/~karypis/ or http://glaros.dtc.umn.edu/gkhome/suggest/overview
Resnick P., Varian H.R. (1997). Recommender systems. Comm. ACM, 40(3), pp. 56-58.
Sarwar B.M., Karypis G., Konstan J.A., Riedl J. (2000). Analysis of Recommendation Algorithms for E-commerce. ACM Conference on Electronic Commerce, pp. 158-167.
Sarwar B.M., Karypis G., Konstan J., Riedl J. (2001). Item-based collaborative filtering recommendation algorithms. WWW '01: Proceedings of the 10th International Conference on World Wide Web, ACM Press, pp. 285-295.
Witten I., Frank E. (2005). Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann.
Yager R.R. (1988). On ordered weighted averaging aggregation operators in multicriteria decision making. IEEE Trans. Systems, Man and Cybernetics, 18, pp. 183-190.