Recommendation with Differential Context Weighting

1 Recommendation with Differential Context Weighting. Yong Zheng, Robin Burke, Bamshad Mobasher. Center for Web Intelligence, DePaul University, Chicago, IL, USA. UMAP 2013 Conference, June 12, 2013.

2 Overview: Introduction (RS and Context-aware RS); Sparsity of Contexts and Relevant Solutions; Differential Context Relaxation & Weighting; Experimental Results; Conclusion and Future Work.

3 Introduction Recommender Systems Context-aware Recommender Systems

4 Recommender Systems (RS) Information Overload Recommendations

5 Context-aware RS (CARS)
Traditional RS: Users x Items -> Ratings
Context-aware RS: Users x Items x Contexts (e.g. time, location, companion) -> Ratings
Examples of contexts in different domains:
Food: time (lunch, dinner), occasion (business lunch, family dinner)
Movie: time (weekend, weekday), location (home, cinema), etc.
Music: time (morning, evening), activity (study, sports, party), etc.
Book: a book as a gift for kids or for mother, etc.
Recommendation cannot stand alone: contexts have to be taken into account.

6 Research Problems Sparsity of Contexts Relevant Solutions

7 Sparsity of Contexts
Assumption of context-aware RS: it is better to use preferences expressed in the same contexts when making predictions.
The same contexts? What about multiple context dimensions and their sparsity? An example in the movie domain:

User  Movie    Time     Location  Companion   Rating
U1    Titanic  Weekend  Home      Girlfriend  4
U2    Titanic  Weekday  Home      Girlfriend  5
U3    Titanic  Weekday  Cinema    Sister      4
U1    Titanic  Weekday  Home      Sister      ?

Are there any rating profiles in the contexts <Weekday, Home, Sister>?
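
To make the sparsity problem concrete, here is a minimal sketch (illustrative data layout and field order mirroring the example table above, not the authors' code) showing that filtering by the full target context <Weekday, Home, Sister> leaves no usable rating profiles:

    # Each rating is (user, item, time, location, companion, rating); toy data from the table.
    ratings = [
        ("U1", "Titanic", "Weekend", "Home",   "Girlfriend", 4),
        ("U2", "Titanic", "Weekday", "Home",   "Girlfriend", 5),
        ("U3", "Titanic", "Weekday", "Cinema", "Sister",     4),
    ]

    target = ("Weekday", "Home", "Sister")   # context of the prediction for U1

    def exact_context_matches(ratings, target):
        """Keep only ratings whose full context equals the target context."""
        return [r for r in ratings if (r[2], r[3], r[4]) == target]

    print(exact_context_matches(ratings, target))  # [] -> no usable profiles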

8 Relevant Solutions
(Same example table as in slide 7; the target context is <Weekday, Home, Sister>.)
Context matching: are there ratings in exactly the same contexts <Weekday, Home, Sister>?
1. Context selection: use only the influential dimensions.
2. Context relaxation: use a relaxed set of dimensions, e.g. time only.
3. Context weighting: use all dimensions, but measure how similar the contexts are (to be continued later).
Differences between context selection and context relaxation: context selection is conducted via surveys or statistics, whereas context relaxation is optimized directly for prediction accuracy; finding the optimal context relaxation/weighting is a learning process.

9 DCR and DCW: Differential Context Relaxation (DCR); Differential Context Weighting (DCW); Particle Swarm Optimization as the Optimizer.

10 Differential Context Relaxation
Differential Context Relaxation (DCR) is our first attempt to alleviate the sparsity of contexts; differential context weighting (DCW) is a finer-grained improvement over DCR.
There are two notions in DCR.
Differential part (algorithm decomposition): separate one algorithm into different functional components; apply appropriate context constraints to each component; maximize the global contextual effects together.
Relaxation part (context relaxation): use a relaxed set of dimensions instead of all of them.
References:
Y. Zheng, R. Burke, B. Mobasher. "Differential Context Relaxation for Context-aware Travel Recommendation". In EC-WEB, 2012.
Y. Zheng, R. Burke, B. Mobasher. "Optimal Feature Selection for Context-Aware Recommendation using Differential Relaxation". In RecSys Workshop on CARS, 2012.

11 DCR Algorithm Decomposition
Take user-based collaborative filtering (UBCF) as an example, with a small rating matrix over items such as Pirates of the Caribbean 4, Kung Fu Panda 2, Harry Potter 6, and Harry Potter 7, and one unknown rating to predict.
Standard process in UBCF (top-K UserKNN, K=1 for example):
1) Find neighbors based on user-user similarity.
2) Aggregate the neighbors' contributions.
3) Make the final prediction.
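
For reference, a minimal context-free user-based kNN sketch (toy data; Pearson similarity and mean-centered aggregation are assumed here as common choices, not necessarily the exact variant used in the talk):

    import math

    # ratings[user][item] = rating; purely illustrative data
    ratings = {
        "U1": {"Pirates4": 4, "KungFuPanda2": 5, "HarryPotter6": 3},
        "U2": {"Pirates4": 5, "KungFuPanda2": 4, "HarryPotter7": 4},
        "U3": {"KungFuPanda2": 2, "HarryPotter6": 2, "HarryPotter7": 3},
    }

    def mean(u):
        vals = ratings[u].values()
        return sum(vals) / len(vals)              # user baseline component

    def sim(u, v):
        """Pearson correlation over co-rated items (user similarity component)."""
        common = set(ratings[u]) & set(ratings[v])
        if not common:
            return 0.0
        mu, mv = mean(u), mean(v)
        num = sum((ratings[u][i] - mu) * (ratings[v][i] - mv) for i in common)
        den = (math.sqrt(sum((ratings[u][i] - mu) ** 2 for i in common))
               * math.sqrt(sum((ratings[v][i] - mv) ** 2 for i in common)))
        return num / den if den else 0.0

    def predict(u, i, k=1):
        # 1) neighbor selection: users who rated item i, ranked by similarity
        neighbors = sorted((v for v in ratings if v != u and i in ratings[v]),
                           key=lambda v: sim(u, v), reverse=True)[:k]
        if not neighbors:
            return mean(u)
        # 2) neighbor contribution: mean-centered deviations weighted by similarity
        num = sum(sim(u, v) * (ratings[v][i] - mean(v)) for v in neighbors)
        den = sum(abs(sim(u, v)) for v in neighbors)
        # 3) final prediction: user baseline plus aggregated contribution
        return mean(u) + (num / den if den else 0.0)

    print(round(predict("U1", "HarryPotter7", k=1), 2))

DCR applies a separate contextual constraint to each of these components rather than to the algorithm as a whole.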

12 DCR Algorithm Decomposition
The UBCF prediction formula decomposes into four components:
1. Neighbor selection
2. Neighbor contribution
3. User baseline
4. User similarity
All components contribute to the final prediction, and we assume that appropriate contextual constraints on each component can leverage the contextual effect in that component, e.g. select only neighbors who rated the item in the same contexts.

13 DCR Context Relaxation
(Same example table as in slide 7; the target context is <Weekday, Home, Sister>.)
Notion of context relaxation:
Use {Time, Location, Companion}: 0 records matched.
Use {Time, Location}: 1 record matched.
Use {Time}: 2 records matched.
In DCR, we choose an appropriate context relaxation for each component, balancing the number of matched ratings: best performance with the least noise.
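
A small sketch of the relaxation idea (hypothetical helper over the same example data): matching on fewer context dimensions admits more ratings, at the cost of potentially noisier ones.

    # Each contextual rating: (user, item, {dimension: value}, rating); illustrative data.
    records = [
        ("U1", "Titanic", {"Time": "Weekend", "Location": "Home",   "Companion": "Girlfriend"}, 4),
        ("U2", "Titanic", {"Time": "Weekday", "Location": "Home",   "Companion": "Girlfriend"}, 5),
        ("U3", "Titanic", {"Time": "Weekday", "Location": "Cinema", "Companion": "Sister"},     4),
    ]

    target = {"Time": "Weekday", "Location": "Home", "Companion": "Sister"}

    def matches(relaxed_dims):
        """Ratings whose context agrees with the target on every relaxed dimension."""
        return [r for r in records if all(r[2][d] == target[d] for d in relaxed_dims)]

    for dims in [["Time", "Location", "Companion"], ["Time", "Location"], ["Time"]]:
        print(dims, "->", len(matches(dims)), "matched record(s)")
    # {T,L,C} -> 0, {T,L} -> 1, {T} -> 2, as on the slide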

14 DCR Context Relaxation
The four components again: 1. Neighbor selection, 2. Neighbor contribution, 3. User baseline, 4. User similarity.
c is the original context, e.g. <Weekday, Home, Sister>; C1, C2, C3, C4 are the relaxed contexts applied to the four components.
Each selection is modeled by a binary vector, e.g. <1, 0, 0> denotes that only the first context dimension (Time) is selected.
Take neighbor selection as an example: originally, neighbors are users who rated the same item; DCR further filters those neighbors by the contextual constraint C1. With C1 = <1,0,0> and Time = Weekday, a neighbor u must have rated item i on a weekday.
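
The per-component relaxation can thus be encoded as a binary vector over the context dimensions; a brief illustrative sketch (dimension order and helper name are assumptions):

    DIMS = ["Time", "Location", "Companion"]

    def passes(rating_context, target_context, selection):
        """selection is a binary vector over DIMS, e.g. [1, 0, 0] keeps only Time."""
        return all(rating_context[d] == target_context[d]
                   for d, bit in zip(DIMS, selection) if bit)

    # C1 = <1,0,0>: a neighbor's rating is admissible only if it was given at the
    # same Time as the target context (here, Weekday); Location and Companion are relaxed.
    print(passes({"Time": "Weekday", "Location": "Cinema", "Companion": "Sister"},
                 {"Time": "Weekday", "Location": "Home", "Companion": "Sister"},
                 [1, 0, 0]))   # True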

15 DCR Drawbacks
(Components: 1. Neighbor selection, 2. Neighbor contribution, 3. User baseline, 4. User similarity.)
1. Context relaxation is still strict, especially when the data are sparse.
2. The components are dependent on each other. For example, neighbor contribution depends on neighbor selection: if neighbors are selected by C1 (Location = Cinema), it is not guaranteed that a neighbor also has ratings under the contexts required by C2 (Time = Weekend).
A finer-grained solution is required: differential context weighting.

16 Differential Context Weighting
(Same example table as in slide 7.)
Goal: use all context dimensions, but measure the similarity of contexts.
Assumption: the more similar two contexts are, the more useful the corresponding ratings are for the prediction.
The similarity of contexts is measured by weighted Jaccard similarity: J(c, d, σ) is the sum of the weights of the matched dimensions divided by the sum of all weights, where c and d are two contexts (e.g. the contexts of U1's and U2's ratings, the two highlighted rows of the table) and σ = <w1, w2, w3> is the weighting vector over the three dimensions.
With equal weights w1 = w2 = w3 = 1, J(c, d, σ) = (number of matched dimensions) / (number of all dimensions) = 2/3.
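
A sketch of the weighted Jaccard similarity just described (assuming σ holds one non-negative weight per dimension; with equal weights it reduces to matched dimensions over total dimensions, i.e. 2/3 for the two rows mentioned above):

    def weighted_jaccard(c, d, sigma):
        """Similarity of two contexts c, d: sum of weights on matched dimensions
        over the sum of all weights (contexts and weights are aligned sequences)."""
        matched = sum(w for cv, dv, w in zip(c, d, sigma) if cv == dv)
        total = sum(sigma)
        return matched / total if total else 0.0

    c = ("Weekend", "Home", "Girlfriend")   # context of U1's rating
    d = ("Weekday", "Home", "Girlfriend")   # context of U2's rating
    print(weighted_jaccard(c, d, [1, 1, 1]))  # 2/3 with equal weights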

17 Differential Context Weighting
(Components: 1. Neighbor selection, 2. Neighbor contribution, 3. User baseline, 4. User similarity.)
1. Differential part: the components are all the same as in DCR.
2. Context weighting part (for each individual component): σ is the weighting vector and ε is a threshold on the similarity of contexts, i.e. only records whose contexts are similar enough (similarity >= ε) are included.
3. In the calculations, the context similarities act as weights. For example, in the neighbor-contribution component a neighbor's ratings are averaged using their context similarities as weights; the calculation is analogous for the other components.
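
A sketch of how context similarity serves as the weight in the neighbor-contribution component (hypothetical helper; ε drops records whose contexts are not similar enough):

    def weighted_jaccard(c, d, sigma):
        matched = sum(w for cv, dv, w in zip(c, d, sigma) if cv == dv)
        return matched / sum(sigma) if sum(sigma) else 0.0

    def neighbor_contribution(neighbor_ratings, target_context, sigma, epsilon):
        """neighbor_ratings: (rating, context) pairs from one neighbor. Returns the
        similarity-weighted average over records whose context similarity to the
        target context reaches the threshold epsilon, or None if none qualify."""
        scored = [(weighted_jaccard(ctx, target_context, sigma), r)
                  for r, ctx in neighbor_ratings]
        kept = [(s, r) for s, r in scored if s >= epsilon]
        if not kept:
            return None
        return sum(s * r for s, r in kept) / sum(s for s, _ in kept)

    # Target context <Weekday, Home, Sister>; only the first record is similar enough.
    print(neighbor_contribution(
        [(5, ("Weekday", "Home", "Girlfriend")), (4, ("Weekend", "Cinema", "Sister"))],
        ("Weekday", "Home", "Sister"), sigma=[1, 1, 1], epsilon=0.5))   # -> 5.0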

18 Particle Swarm Optimization (PSO)
The remaining work is to find the optimal context relaxation vectors for DCR and the optimal context weighting vectors for DCW.
PSO is derived from swarm intelligence, where a goal is achieved through collaboration, as in schools of fish, flocks of birds, and swarms of bees.
Why PSO?
1) It is easy to implement as a non-linear optimizer.
2) It has been used in weighted CF before and was shown to work better than other non-linear optimizers, e.g. genetic algorithms.
3) Our previous work successfully applied Binary PSO (BPSO) for DCR.

19 Particle Swarm Optimization (PSO)
The analogy: swarm = a group of birds; particle = each bird, i.e. one candidate solution in the algorithm; vector = a bird's position in the space, i.e. the vectors we need; goal = the location of the pizza, i.e. a lower prediction error.
So how does the swarm find the goal?
1. Looking for the pizza: assume a machine can tell each bird its distance to the pizza.
2. Each iteration is an attempt or a move.
3. Cognitive learning, from the particle itself: "Am I closer to the pizza compared with my best locations in previous history?"
4. Social learning, from the swarm: "Hey, my distance is 1 mile, it is the closest, follow me!" Then the other birds move towards that position.
DCR is feature selection, modeled by binary vectors: Binary PSO. DCW is feature weighting, modeled by real-number vectors: standard PSO.
How it works, taking DCR and Binary PSO as an example: assume there are 4 components and 3 contextual dimensions, hence 4 binary vectors, one per component. We merge the vectors into a single one of size 3 * 4 = 12, and this merged vector is the particle's position vector in the PSO process.
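
A minimal generic PSO loop for intuition (toy fitness function standing in for the prediction error produced by a candidate DCW weighting vector; the inertia and learning coefficients are common default choices, not the paper's settings). For DCR, the positions would instead be mapped to bits, as in Binary PSO.

    import random

    def pso(fitness, dim, n_particles=3, iters=100, w=0.7, c1=1.5, c2=1.5):
        """Minimize fitness over [0, 1]^dim with a standard particle swarm."""
        pos = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                      # each particle's best (cognitive memory)
        pbest_val = [fitness(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm's best (social memory)
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive learning
                                 + c2 * r2 * (gbest[d] - pos[i][d]))     # social learning
                    pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
                val = fitness(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # Toy fitness standing in for "prediction error of DCW with this 3-dimensional weight vector".
    toy_target = [0.9, 0.1, 0.4]
    best, err = pso(lambda v: sum((a - b) ** 2 for a, b in zip(v, toy_target)), dim=3)
    print([round(x, 2) for x in best], round(err, 4))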

20 Experimental Results Data Sets Predictive Performance Performance of Optimizer

21 Context-aware Data Sets
Context-aware data sets are usually difficult to obtain; these two were collected via surveys. (The slide's table also reports the numbers of ratings, users, items, and contexts for each data set.)
AIST Food Data: contexts: real hunger (full/normal/hungry), virtual hunger; other features: user gender, food genre, food style, food stuff; density: dense.
Movie Data: contexts: time (weekend, weekday), location (home, cinema), companions (friends, alone, etc.); other features: user gender, year of the movie; density: sparse.

22 Evaluation Protocols
Metrics: root-mean-square error (RMSE), and coverage, which denotes the percentage of predictions for which neighbors can be found.
Our goal: improve RMSE (i.e. lower error) while keeping a decent coverage. We allow a decline in coverage, because applying contextual constraints usually lowers coverage (the sparsity of contexts again).
Baselines: context-free CF, i.e. the original UBCF; and contextual pre-filtering CF, which applies the contextual constraints only to the neighbor-selection component and to no other components of DCR and DCW.
Other settings in DCR and DCW: K = 10 for UserKNN; evaluated with 5-fold cross-validation; T = 100 as the maximum number of iterations in the PSO process; weights are constrained to [0, 1]; in DCW we use the same similarity threshold for each component, iterated from 0.0 to 1.0 in 0.1 increments.
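
For concreteness, a small sketch of the two metrics (coverage counted as the fraction of test cases for which a prediction could be made; purely illustrative):

    import math

    def rmse_and_coverage(pairs):
        """pairs: list of (predicted, actual); predicted is None when no neighbors
        could be found under the contextual constraints."""
        made = [(p, a) for p, a in pairs if p is not None]
        coverage = len(made) / len(pairs) if pairs else 0.0
        rmse = (math.sqrt(sum((p - a) ** 2 for p, a in made) / len(made))
                if made else float("nan"))
        return rmse, coverage

    print(rmse_and_coverage([(4.2, 4), (None, 5), (3.0, 4)]))  # -> (~0.72, 0.67)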

23 Predictive Performance
(In the figure, blue bars are RMSE values and red lines are coverage curves.)
Findings:
1) DCW works better than DCR and the two baselines.
2) Significance t-tests show that DCW's improvement is significant on the movie data, whereas DCR is not significantly better than the two baselines; DCW can further alleviate the sparsity of contexts and compensate for DCR.
3) DCW offers better coverage than the baselines.

24 Performance of the Optimizer
Running time is measured in seconds. Using 3 particles is the best configuration for both data sets here.
Factors influencing the running time:
More particles give quicker convergence but probably higher cost per iteration.
Number of contextual variables: more contexts, probably slower.
Density of the data set: the denser the data, the more calculations in DCW.
Typically DCW costs more than DCR, because it uses all contextual dimensions and the calculation of context similarities is time-consuming, especially for dense data such as the Food data.

25 Other Results (Optional)
1. The optimal threshold for the similarity of contexts: 0.6 for the Food data set, 0.1 for the Movie data set.
2. The optimal weighting vectors (e.g. for the Movie data); in the figure, darker cells denote smaller weights and lighter cells denote larger weights.

26 It is gonna end: Conclusions, Future Work

27 Conclusions
We propose DCW, a finer-grained improvement over DCR; it further improves predictive accuracy while keeping a decent coverage. PSO is demonstrated to be an efficient optimizer, and we identify the underlying factors that influence its running time.
Stay tuned: DCR and DCW are general frameworks (DCM, differential context modeling, is the name of this framework family), and they can be applied to any recommendation algorithm that can be decomposed into multiple components. We have successfully extended them to item-based collaborative filtering and the Slope One recommender.
Reference: Y. Zheng, R. Burke, B. Mobasher. "Differential Context Modeling in Collaborative Filtering". In SOCRS-2013, Chicago, IL, USA, 2013.

28 Future Work
Try other context similarity measures instead of the simple Jaccard one.
Introduce semantics into the similarity of contexts to further alleviate their sparsity, e.g. Rome is closer to Florence than to Paris.
Parallelize PSO, or run it on MapReduce, to speed up the optimizer.
Acknowledgement: student travel support from the US NSF (UMAP Platinum Sponsor).
See you later at the 19th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), Chicago, IL, USA, Aug 11-14, 2013.

29 Thank You! Center for Web Intelligence, DePaul University, Chicago, IL USA
