CS246: Mining Massive Datasets Jure Leskovec, Stanford University http://cs246.stanford.edu

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 2
High dim. data: Locality sensitive hashing, Clustering, Dimensionality reduction
Graph data: PageRank, SimRank; Community Detection; Spam Detection
Infinite data: Filtering data streams, Web advertising, Queries on streams
Machine learning: SVM, Decision Trees, Perceptron, kNN
Apps: Recommender systems, Association Rules, Duplicate document detection

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 3
3 announcements:
- Thanks for filling out the HW1 poll
- HW2 is due today 5pm (scans must be readable)
- HW3 will be posted today

Example (nodes y, a, m; m is a spider trap; β = 0.8):
M = [ 1/2 1/2 0
      1/2 0   0
      0   1/2 1 ]
A = β M + (1-β) (1/n) 1·1^T = 0.8 M + 0.2 [1/3]_{3x3} = [ 7/15 7/15 1/15
                                                          7/15 1/15 1/15
                                                          1/15 7/15 13/15 ]
For example, the y→y entry is 0.8·(1/2) + 0.2·(1/3) = 7/15, and the m→m entry is 0.8·1 + 0.2·(1/3) = 13/15.
Starting from r = [1/3, 1/3, 1/3], the iterates A·r are:
[0.33, 0.20, 0.46] → [0.24, 0.20, 0.52] → [0.26, 0.18, 0.56] → ... → [7/33, 5/33, 21/33]
2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 4

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 5
Key step is matrix-vector multiplication: r_new = A·r_old
Easy if we have enough main memory to hold A, r_old, r_new
Say N = 1 billion pages; we need 4 bytes for each entry (say)
2 billion entries for the two vectors, approx 8GB
Matrix A has N^2 entries: 10^18 is a large number!
A = β M + (1-β) [1/N]_{NxN}
For the 3-node example: A = 0.8 M + 0.2 [1/3]_{3x3}, the matrix shown on the previous slide.

Suppose there are N pages. Consider page i, with d_i out-links.
We have M_ji = 1/d_i when i → j, and M_ji = 0 otherwise.
The random teleport is equivalent to:
  Adding a teleport link from i to every other page and setting the transition probability to (1-β)/N
  Reducing the probability of following each out-link from 1/d_i to β/d_i
Equivalent: tax each page a fraction (1-β) of its score and redistribute it evenly
2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 6

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 7
r = A·r, where A_ji = β M_ji + (1-β)/N
r_j = Σ_{i=1..N} A_ji r_i
    = Σ_{i=1..N} [β M_ji + (1-β)/N] r_i
    = Σ_{i=1..N} β M_ji r_i + (1-β)/N Σ_{i=1..N} r_i
    = Σ_{i=1..N} β M_ji r_i + (1-β)/N,   since Σ_i r_i = 1
So we get: r = β M·r + [(1-β)/N]_N
Note: Here we assumed M has no dead-ends.
[x]_N denotes a vector of length N with all entries equal to x.

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 8
We just rearranged the PageRank equation: r = β M·r + [(1-β)/N]_N
where [(1-β)/N]_N is a vector with all N entries equal to (1-β)/N
M is a sparse matrix! (with no dead-ends)
  ~10 links per node, approx 10N entries
So in each iteration, we need to:
  Compute r_new = β M·r_old
  Add a constant value (1-β)/N to each entry in r_new
Note: if M contains dead-ends then Σ_j r_j^new < 1 and we also have to renormalize r_new so that it sums to 1

Input: Graph G and parameter β
  Directed graph G with spider traps and dead ends
  Parameter β
Output: PageRank vector r
Set: r_j^(0) = 1/N, t = 1
do:
  ∀j: r'_j^(t) = Σ_{i→j} β r_i^(t-1) / d_i
      (r'_j^(t) = 0 if the in-degree of j is 0)
  Now re-insert the leaked PageRank:
  ∀j: r_j^(t) = r'_j^(t) + (1-S)/N, where S = Σ_j r'_j^(t)
  t = t + 1
while Σ_j |r_j^(t) - r_j^(t-1)| > ε
If the graph has no dead-ends then the amount of leaked PageRank is 1-β. But since we have dead-ends the amount of leaked PageRank may be larger. We have to explicitly account for it by computing S. 9
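A minimal sketch of this loop in Python (the dict-of-out-neighbor-lists graph format, the function name, and the tolerance are illustrative assumptions, not from the slides):

# Power iteration with dead-end handling: push beta*r_i/d_i along each out-link,
# then re-insert the leaked PageRank (1 - S) uniformly over all N nodes.
def pagerank(graph, beta=0.8, eps=1e-8, max_iter=100):
    nodes = list(graph)
    N = len(nodes)
    r = {v: 1.0 / N for v in nodes}                # r_j^(0) = 1/N
    for _ in range(max_iter):
        r_new = {v: 0.0 for v in nodes}            # r'_j = 0 if nothing points to j
        for i in nodes:
            out = graph[i]                         # dead ends simply leak their mass
            for j in out:
                r_new[j] += beta * r[i] / len(out)
        S = sum(r_new.values())                    # leaked mass is 1 - S (>= 1 - beta)
        for v in nodes:
            r_new[v] += (1.0 - S) / N              # re-insert it uniformly
        if sum(abs(r_new[v] - r[v]) for v in nodes) < eps:
            return r_new
        r = r_new
    return r

# The y/a/m example from the earlier slide (m is a spider trap):
print(pagerank({'y': ['y', 'a'], 'a': ['y', 'm'], 'm': ['m']}))
# converges to roughly y = 7/33, a = 5/33, m = 21/33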

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 10
Encode the sparse matrix using only its nonzero entries
Space proportional roughly to the number of links
Say 10N, or 4*10*1 billion = 40GB
Still won't fit in memory, but will fit on disk
source node | degree | destination nodes
0 | 3 | 1, 5, 7
1 | 5 | 17, 64, 113, 117, 245
2 | 2 | 13, 23

Assume enough RAM to fit r_new into memory
Store r_old and matrix M on disk
1 step of power-iteration is:
  Initialize all entries of r_new = (1-β)/N
  For each page i (of out-degree d_i):
    Read into memory: i, d_i, dest_1, ..., dest_{d_i}, r_old(i)
    For j = 1 ... d_i: r_new(dest_j) += β r_old(i) / d_i
source | degree | destination
0 | 3 | 1, 5, 6
1 | 4 | 17, 64, 113, 117
2 | 2 | 13, 23
2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 11
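A sketch of one such pass, assuming M is stored on disk with one line per source node in the "src degree dest1 dest2 ..." layout of the table above and that r_old is an in-memory list (the file format and names are assumptions):

# One power-iteration step that streams the sparse matrix M from disk.
# Assumes no dead ends; otherwise renormalize r_new afterwards as on slide 9.
def one_pass(matrix_path, r_old, beta=0.8):
    N = len(r_old)
    r_new = [(1.0 - beta) / N] * N                 # start with the teleport term
    with open(matrix_path) as f:
        for line in f:
            fields = line.split()
            i, d_i = int(fields[0]), int(fields[1])
            share = beta * r_old[i] / d_i          # each out-link gets beta*r_old(i)/d_i
            for j in fields[2:2 + d_i]:
                r_new[int(j)] += share
    return r_new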

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 12
Assume enough RAM to fit r_new into memory
Store r_old and matrix M on disk
In each iteration, we have to:
  Read r_old and M
  Write r_new back to disk
Cost per iteration of the Power method: = 2|r| + |M|
Question: What if we could not even fit r_new in memory?

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 13
Break r_new into k blocks that fit in memory
Scan M and r_old once for each block
src | degree | destination
0 | 4 | 0, 1, 3, 5
1 | 2 | 0, 5
2 | 2 | 3, 4
(In the figure, r_new on 6 nodes is split into blocks 0-1, 2-3, 4-5.)

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 14
Similar to nested-loop join in databases
Break r_new into k blocks that fit in memory
Scan M and r_old once for each block
Total cost: k scans of M and r_old
Cost per iteration of the Power method: k(|M| + |r|) + |r| = k|M| + (k+1)|r|
Can we do better?
Hint: M is much bigger than r (approx 10-20x), so we must avoid reading it k times per iteration

Break M into stripes! Each stripe contains only destination nodes in the corresponding block of r_new
Stripe for block {0, 1}:
  src | degree | destination
  0 | 4 | 0, 1
  1 | 3 | 0
  2 | 2 | 1
Stripe for block {2, 3}:
  0 | 4 | 3
  2 | 2 | 3
Stripe for block {4, 5}:
  0 | 4 | 5
  1 | 3 | 5
  2 | 2 | 4
2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 15

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 16
Break M into stripes
Each stripe contains only destination nodes in the corresponding block of r_new
Some additional overhead per stripe
But it is usually worth it
Cost per iteration of the Power method: = |M|(1+ε) + (k+1)|r|
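A sketch of the block-stripe pass, assuming each stripe is a separate file in the same "src degree destinations" layout, that stripe k lists only the destinations falling in block k, and that the stored degree is still the full out-degree of the source (file layout and names are assumptions):

# r_new is built one block at a time; only the matching stripe of M is scanned.
# stripe_paths[k]: file for stripe k; blocks[k]: iterable of node ids in block k.
def block_stripe_pass(stripe_paths, blocks, r_old, beta=0.8):
    N = len(r_old)
    r_new = [0.0] * N
    for k, stripe_path in enumerate(stripe_paths):
        for j in blocks[k]:
            r_new[j] = (1.0 - beta) / N            # initialize this block only
        with open(stripe_path) as f:
            for line in f:
                fields = line.split()
                i, d_i = int(fields[0]), int(fields[1])
                # d_i is the full out-degree of i, even though this stripe
                # lists only the destinations that land in block k
                for j in fields[2:]:
                    r_new[int(j)] += beta * r_old[i] / d_i
    return r_new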

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 17
Measures generic popularity of a page
  Biased against topic-specific authorities
  Solution: Topic-Specific PageRank (next)
Uses a single measure of importance
  Other models of importance
  Solution: Hubs-and-Authorities
Susceptible to link spam
  Artificial link topologies created in order to boost page rank
  Solution: TrustRank

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 18

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 19
Instead of generic popularity, can we measure popularity within a topic?
Goal: Evaluate Web pages not just according to their popularity, but by how close they are to a particular topic, e.g. sports or history
Allows search queries to be answered based on the interests of the user
Example: The query "Trojan" wants different pages depending on whether you are interested in sports, history, or computer security

The random walker has a small probability of teleporting at any step
Teleport can go to:
  Standard PageRank: any page with equal probability
    (to avoid dead-end and spider-trap problems)
  Topic-Specific PageRank: a topic-specific set of relevant pages (teleport set)
Idea: Bias the random walk
  When the walker teleports, she picks a page from a set S
  S contains only pages that are relevant to the topic
    E.g., Open Directory (DMOZ) pages for a given topic/query
  For each teleport set S, we get a different vector r_S
2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 20

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 21
To make this work all we need is to update the teleportation part of the PageRank formulation:
A_ij = β M_ij + (1-β)/|S|   if i ∈ S
A_ij = β M_ij               otherwise
A is stochastic!
We have weighted all pages in the teleport set S equally
  Could also assign different weights to pages!
Compute as for regular PageRank:
  Multiply by M, then add a vector
  Maintains sparseness
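A sketch of the topic-specific computation, reusing the power-iteration structure of the earlier pagerank sketch but spreading the teleport (and any leaked) mass only over S (graph format and names are again illustrative assumptions):

def topic_specific_pagerank(graph, S, beta=0.8, eps=1e-8, max_iter=100):
    # Same power iteration as before, except the (1-beta) teleport mass plus any
    # dead-end leakage is redistributed over the teleport set S, not over all N.
    nodes = list(graph)
    r = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(max_iter):
        r_new = {v: 0.0 for v in nodes}
        for i in nodes:
            for j in graph[i]:
                r_new[j] += beta * r[i] / len(graph[i])
        leaked = 1.0 - sum(r_new.values())         # = 1 - beta if there are no dead ends
        for v in S:
            r_new[v] += leaked / len(S)
        if sum(abs(r_new[v] - r[v]) for v in nodes) < eps:
            return r_new
        r = r_new
    return r

With S equal to the set of all pages, this reduces to ordinary PageRank.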

Suppose S = {1}, β = 0.8 (example graph on nodes {1, 2, 3, 4}; edge weights as in the figure)
Node | Iteration 0 | 1 | 2 | ... | stable
1 | 0.25 | 0.4 | 0.28 | ... | 0.294
2 | 0.25 | 0.1 | 0.16 | ... | 0.118
3 | 0.25 | 0.3 | 0.32 | ... | 0.327
4 | 0.25 | 0.2 | 0.24 | ... | 0.261
Varying β with S = {1}:
  S={1}, β=0.90: r=[0.17, 0.07, 0.40, 0.36]
  S={1}, β=0.80: r=[0.29, 0.11, 0.32, 0.26]
  S={1}, β=0.70: r=[0.39, 0.14, 0.27, 0.19]
Varying S with β = 0.8:
  S={1,2,3,4}, β=0.8: r=[0.13, 0.10, 0.39, 0.36]
  S={1,2,3}, β=0.8: r=[0.17, 0.13, 0.38, 0.30]
  S={1,2}, β=0.8: r=[0.26, 0.20, 0.29, 0.23]
  S={1}, β=0.8: r=[0.29, 0.11, 0.32, 0.26]
2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 22

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 23
Create different PageRanks for different topics
The 16 DMOZ top-level categories: arts, business, sports, ...
Which topic ranking to use?
  User can pick from a menu
  Classify the query into a topic
  Can use the context of the query
    E.g., the query is launched from a web page talking about a known topic
    History of queries, e.g., "basketball" followed by "Jordan"
  User context, e.g., the user's bookmarks, ...

SimRank: Random walks from a fixed node on k-partite graphs
Setting: k-partite graph with k types of nodes
  Example: picture nodes and tag nodes
Do a Random Walk with Restarts from node u
  i.e., teleport set S = {u}
The resulting scores measure similarity/proximity to node u
Problem: Must be done once for each node u
Suitable for sub-web-scale applications
2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 25
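Random Walk with Restarts is then just the previous sketch with a single-node teleport set; for example (the toy bipartite graph and names are assumptions):

# Proximity of every node to node u = Topic-Specific PageRank with S = {u}.
# Toy picture-tag graph; undirected edges are stored in both directions.
toy = {
    'pic1': ['cat', 'cute'], 'pic2': ['cat'], 'pic3': ['dog', 'cute'],
    'cat': ['pic1', 'pic2'], 'cute': ['pic1', 'pic3'], 'dog': ['pic3'],
}
scores = topic_specific_pagerank(toy, S={'pic1'}, beta=0.8)
print(sorted(scores.items(), key=lambda kv: -kv[1]))   # nodes closest to pic1 first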

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 26
Q: What is the most related conference to ICDM?
(Figure: bipartite conference-author graph; conference nodes include IJCAI, KDD, ICDM, SDM, AAAI, NIPS; author nodes include Philip S. Yu, Ning Zhong, R. Ramakrishnan, M. Jordan.)

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 27
A: (Figure: random-walk-with-restart scores from ICDM to related conferences KDD, CIKM, PKDD, SDM, PAKDD, ICML, ICDE, ECML, SIGMOD, DMKD, with scores in the range 0.004-0.011.)

3 announcements:
- We will hold office hours via Google hangout: Sat, Sun 10am Pacific time
- HW2 is due
- HW3 is out

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 29
Spamming: Any deliberate action to boost a web page's position in search engine results, incommensurate with the page's real value
Spam: Web pages that are the result of spamming
This is a very broad definition
  The SEO industry might disagree!
  SEO = search engine optimization
Approximately 10-15% of web pages are spam

Early search engines: Crawl the Web Index pages by the words they contained Respond to search queries (lists of words) with the pages containing those words Early page ranking: Attempt to order pages matching a search query by importance First search engines considered: (1) Number of times query words appeared (2) Prominence of word position, e.g. title, header 2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 30

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 31 As people began to use search engines to find things on the Web, those with commercial interests tried to exploit search engines to bring people to their own site whether they wanted to be there or not Example: Shirt-seller might pretend to be about movies Techniques for achieving high relevance/importance for a web page

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 32 How do you make your page appear to be about movies? (1) Add the word movie 1,000 times to your page Set text color to the background color, so only search engines would see it (2) Or, run the query movie on your target search engine See what page came first in the listings Copy it into your page, make it invisible These and similar techniques are term spam

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 33 Believe what people say about you, rather than what you say about yourself Use words in the anchor text (words that appear underlined to represent the link) and its surrounding text PageRank as a tool to measure the importance of Web pages

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 34
Our hypothetical shirt-seller loses
  Saying he is about movies doesn't help, because others don't say he is about movies
  His page isn't very important, so it won't be ranked high for shirts or movies
Example: Shirt-seller creates 1,000 pages, each linking to his page with "movie" in the anchor text
  These pages have no links in, so they get little PageRank
  So the shirt-seller can't beat truly important movie pages, like IMDB

SPAM FARMING 2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 35

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 36 Once Google became the dominant search engine, spammers began to work out ways to fool Google Spam farms were developed to concentrate PageRank on a single page Link spam: Creating link structures that boost PageRank of a particular page

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 37
Three kinds of web pages from a spammer's point of view:
Inaccessible pages
Accessible pages
  e.g., blog comment pages: the spammer can post links to his pages
Own pages
  Completely controlled by the spammer
  May span multiple domain names

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 38
Spammer's goal: Maximize the PageRank of target page t
Technique: Get as many links from accessible pages as possible to target page t
Construct a link farm to get the PageRank multiplier effect

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 39
One of the most common and effective organizations for a link farm:
(Figure: inaccessible pages, accessible pages, and the spammer's own pages; accessible pages link to target page t, which links to and from M farm pages. Millions of farm pages.)

x: PageRank contributed by accessible pages
y: PageRank of target page t
N: # pages on the web, M: # of pages the spammer owns
Rank of each farm page = β y/M + (1-β)/N
y = x + β M [β y/M + (1-β)/N] + (1-β)/N
  = x + β² y + β(1-β) M/N + (1-β)/N
The last term (1-β)/N is very small; ignore it. Now we solve for y:
y = x/(1-β²) + c·M/N, where c = β/(1+β)
2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 40

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 41
N: # pages on the web, M: # of pages the spammer owns
y = x/(1-β²) + c·M/N, where c = β/(1+β)
For β = 0.85, 1/(1-β²) = 3.6
  Multiplier effect for "acquired" PageRank x
By making M large, we can make y as large as we want
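A quick numeric check of the multiplier, with assumed values for x, M, and N (the numbers are illustrative, not from the slides):

# y = x / (1 - beta^2) + (beta / (1 + beta)) * M / N
beta, N = 0.85, 1e9            # assumed size of the web
x, M = 1e-6, 1e6               # assumed accessible-page contribution and farm size
multiplier = 1.0 / (1.0 - beta ** 2)                   # about 3.6 for beta = 0.85
y = multiplier * x + (beta / (1.0 + beta)) * M / N     # farm adds about 0.46*M/N on top
print(multiplier, y)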

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 43
Combating term spam
  Analyze text using statistical methods
  Similar to email spam filtering
  Also useful: detecting approximate duplicate pages
Combating link spam
  Detection and blacklisting of structures that look like spam farms
  Leads to another war: hiding and detecting spam farms
TrustRank = topic-specific PageRank with a teleport set of trusted pages
  Example: .edu domains, similar domains for non-US schools

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 44 Basic principle: Approximate isolation It is rare for a good page to point to a bad (spam) page Sample a set of seed pages from the web Have an oracle (human) to identify the good pages and the spam pages in the seed set Expensive task, so we must make seed set as small as possible

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 45 Call the subset of seed pages that are identified as good the trusted pages Perform a topic-sensitive PageRank with teleport set = trusted pages Propagate trust through links: Each page gets a trust value between 0 and 1 Use a threshold value and mark all pages below the trust threshold as spam
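In these terms, TrustRank can be sketched as the topic-specific computation from earlier with the trusted pages as the teleport set (the threshold value and all names are assumptions):

# Trust = Topic-Specific PageRank with teleport set = trusted seed pages;
# pages whose trust falls below a chosen threshold are flagged as likely spam.
def trustrank(graph, trusted_pages, beta=0.8, threshold=1e-4):
    trust = topic_specific_pagerank(graph, S=set(trusted_pages), beta=beta)
    spam = {v for v, t in trust.items() if t < threshold}
    return trust, spam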

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 47 Trust attenuation: The degree of trust conferred by a trusted page decreases with the distance in the graph Trust splitting: The larger the number of out-links from a page, the less scrutiny the page author gives each outlink Trust is split across out-links

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 48
Two conflicting considerations:
  A human has to inspect each seed page, so the seed set must be as small as possible
  We must ensure every good page gets an adequate trust rank, so we need to make all good pages reachable from the seed set by short paths

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 49
Suppose we want to pick a seed set of k pages. How to do that?
(1) PageRank: Pick the top k pages by PageRank
  The theory is that you can't get a bad page's rank really high
(2) Use trusted domains whose membership is controlled, like .edu, .mil, .gov

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 50
In the TrustRank model, we start with good pages and propagate trust
Complementary view: What fraction of a page's PageRank comes from spam pages?
In practice, we don't know all the spam pages, so we need to estimate

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 51
r_p  = PageRank of page p
r_p+ = PageRank of p with teleport into trusted pages only
Then the part of p's PageRank that comes from spam pages is estimated as:
r_p- = r_p - r_p+
Spam mass of p = r_p- / r_p
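A sketch of the spam-mass estimate built from the two PageRank sketches above (names are assumptions):

# Spam mass of p = (r_p - r_p+) / r_p, where r_p+ uses teleports restricted
# to the trusted pages only.
def spam_mass(graph, trusted_pages, beta=0.8):
    r = pagerank(graph, beta=beta)
    r_plus = topic_specific_pagerank(graph, S=set(trusted_pages), beta=beta)
    return {p: (r[p] - r_plus[p]) / r[p] for p in r if r[p] > 0}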

HITS (Hypertext-Induced Topic Selection)
  A measure of importance of pages or documents, similar to PageRank
  Proposed at around the same time as PageRank ('98)
Goal: Imagine we want to find good newspapers
  Don't just find newspapers. Find "experts": people who link in a coordinated way to good newspapers
Idea: Links as votes
  A page is more important if it has more links
  In-coming links? Out-going links?
2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 54

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 55
Hubs and Authorities: each page has 2 scores:
  Quality as an expert (hub): total sum of votes of the pages it points to
  Quality as content (authority): total sum of votes coming from experts
Principle of repeated improvement
(Figure: example votes NYT: 10, Ebay: 3, Yahoo: 3, CNN: 8, WSJ: 9)

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 56
Interesting pages fall into two classes:
1. Authorities are pages containing useful information
  Newspaper home pages
  Course home pages
  Home pages of auto manufacturers
2. Hubs are pages that link to authorities
  Lists of newspapers
  Course bulletin
  List of US auto manufacturers
(Figure: same example votes as above)

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 57 Each page starts with hub score 1 Authorities collect their votes (Note this is idealized example. In reality graph is not bipartite and each page has both the hub and authority score)

2/7/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 58 Hubs collect authority scores (Note this is an idealized example. In reality the graph is not bipartite and each page has both a hub and an authority score)

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 59 Authorities collect hub scores (Note this is idealized example. In reality graph is not bipartite and each page has both the hub and authority score)

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 60 A good hub links to many good authorities A good authority is linked from many good hubs Model using two scores for each node: Hub score and Authority score Represented as vectors h and a

[Kleinberg '98] 2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 61
Each page i has 2 scores:
  Authority score: a_i
  Hub score: h_i
HITS algorithm:
  Initialize: a_i = 1, h_i = 1
  Then keep iterating:
    ∀i: Authority: a_i = Σ_{j→i} h_j
    ∀i: Hub: h_i = Σ_{i→j} a_j
    ∀i: Normalize so that Σ_j a_j = 1, Σ_j h_j = 1

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets [Kleinberg '98] 62
HITS converges to a single stable point
Slightly change the notation:
  Vectors a = (a_1, ..., a_n), h = (h_1, ..., h_n)
  Adjacency matrix (n x n): A_ij = 1 if i → j, 0 otherwise
Then: h_i = Σ_{i→j} a_j = Σ_j A_ij a_j, so h = A·a
And likewise: a_i = Σ_{j→i} h_j = Σ_j A_ji h_j, so a = A^T·h

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 63
The hub score of page i is proportional to the sum of the authority scores of the pages it links to: h = λ A·a
  Constant λ is a scale factor, λ = 1/Σ_i h_i
The authority score of page i is proportional to the sum of the hub scores of the pages it is linked from: a = μ A^T·h
  Constant μ is a scale factor, μ = 1/Σ_i a_i

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 64
The HITS algorithm:
  Initialize h, a to all 1's
  Repeat:
    h = A·a
    Scale h so that it sums to 1.0
    a = A^T·h
    Scale a so that it sums to 1.0
  Until h, a converge (i.e., change very little)
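A minimal numpy sketch of this loop, scaling by the sum as in the algorithm above (the convention A[i, j] = 1 if page i links to page j, and all names, are assumptions):

import numpy as np

def hits(A, eps=1e-8, max_iter=100):
    # Alternate h = A·a and a = A^T·h, rescaling each so it sums to 1.
    n = A.shape[0]
    h = np.ones(n) / n
    a = np.ones(n) / n
    for _ in range(max_iter):
        h_new = A @ a
        h_new /= h_new.sum()
        a_new = A.T @ h_new
        a_new /= a_new.sum()
        if np.abs(h_new - h).sum() + np.abs(a_new - a).sum() < eps:
            return h_new, a_new
        h, a = h_new, a_new
    return h, a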

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 65
Example (pages Yahoo, Amazon, M'soft):
A = [ 1 1 1        A^T = [ 1 1 0
      1 0 1                1 0 1
      0 1 0 ]              1 1 0 ]
Authority scores (scaled so the largest component is 1):
a(Yahoo)  = 1   1    1     1    ...  1
a(Amazon) = 1   1   4/5   0.75  ...  0.732
a(M'soft) = 1   1    1     1    ...  1
Hub scores (scaled so the largest component is 1):
h(Yahoo)  = 1   1     1     1    ...  1.000
h(Amazon) = 1  2/3  0.71  0.73   ...  0.732
h(M'soft) = 1  1/3  0.29  0.27   ...  0.268

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 66
HITS algorithm in new notation:
  Set: a = h = 1_n (the all-ones vector)
  Repeat: h^new = A·a, a^new = A^T·h^new, then normalize
Then: a^new = A^T (A·a)
  a is being updated (in 2 steps): A^T(A·a) = (A^T A)·a
  h is updated (in 2 steps): A(A^T·h) = (A A^T)·h
Thus, in 2k steps:
  a = (A^T A)^k · a
  h = (A A^T)^k · h
Repeated matrix powering

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 67
h = λ A·a,   a = μ A^T·h,   with λ = 1/Σ_i h_i and μ = 1/Σ_i a_i
So: h = λμ (A A^T)·h   and   a = λμ (A^T A)·a
Under reasonable assumptions about A, the HITS iterative algorithm converges to vectors h* and a*:
  h* is the principal eigenvector of matrix A A^T
  a* is the principal eigenvector of matrix A^T A
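A quick numpy check of this claim on the 3-page example above (illustrative only):

import numpy as np

A = np.array([[1, 1, 1], [1, 0, 1], [0, 1, 0]], dtype=float)   # Yahoo, Amazon, M'soft
# eigh returns eigenvalues in ascending order, so the last column of the
# eigenvector matrix is the principal eigenvector of each symmetric matrix.
_, vecs_h = np.linalg.eigh(A @ A.T)
_, vecs_a = np.linalg.eigh(A.T @ A)
h_star = np.abs(vecs_h[:, -1])
a_star = np.abs(vecs_a[:, -1])
print(h_star / h_star.max())   # about [1.0, 0.732, 0.268], matching HITS' hub scores
print(a_star / a_star.max())   # about [1.0, 0.732, 1.0], matching HITS' authority scores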

2/6/2013 Jure Leskovec, Stanford C246: Mining Massive Datasets 68 PageRank and HITS are two solutions to the same problem: What is the value of an in-link from u to v? In the PageRank model, the value of the link depends on the links into u In the HITS model, it depends on the value of the other links out of u The destinies of PageRank and HITS post-1998 were very different