CS345a: Data Mining Jure Leskovec and Anand Rajaraman Stanford University

Instead of generic popularity, can we measure popularity within a topic? E.g., computer science, health. Bias the random walk: when the random walker teleports, he picks a page from a set S of web pages. S contains only pages that are relevant to the topic, e.g., Open Directory (DMOZ) pages for a given topic (www.dmoz.org). For each teleport set S, we get a different rank vector r_S.

Let:

A_ik = β·M_ik + (1−β)/|S|   if i ∈ S
A_ik = β·M_ik               otherwise

A is stochastic! We have weighted all pages in the teleport set S equally. Could also assign different weights to pages.
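
A minimal sketch of this biased power iteration (my own illustration, not from the slides; the function name and dense-matrix representation are assumptions):

```python
import numpy as np

def topic_specific_pagerank(M, S, beta=0.8, n_iter=100):
    # M: n x n column-stochastic link matrix, M[j, i] = 1/out_deg(i) if i -> j
    # S: indices of the teleport set (pages relevant to the topic)
    n = M.shape[0]
    v = np.zeros(n)
    v[S] = 1.0 / len(S)          # teleport uniformly into S only
    r = v.copy()                 # initialize to the teleport distribution
    for _ in range(n_iter):
        # equivalent to r = A r with A as defined above (r stays a distribution)
        r = beta * (M @ r) + (1 - beta) * v
    return r
```

Each choice of S yields a different rank vector r_S, as the slide notes.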

[Diagram: example graph on nodes 1-4 with weighted transition edges.] Suppose S = {1}, β = 0.8:

Node | Iter 0 | Iter 1 | Iter 2 | ... | stable
1    | 1.0    | 0.2    | 0.52   | ... | 0.294
2    | 0      | 0.4    | 0.08   | ... | 0.118
3    | 0      | 0.4    | 0.08   | ... | 0.327
4    | 0      | 0      | 0.32   | ... | 0.261

Note how we initialize the PageRank vector differently from the unbiased PageRank case.

Experimental results [Haveliwala 2000]: Picked 16 topics; teleport sets determined using DMOZ (e.g., arts, business, sports, ...). Blind study using volunteers: 35 test queries; results ranked using PageRank and the TSPR of the most closely related topic (e.g., "bicycling" using the Sports ranking). In most cases volunteers preferred the TSPR ranking.

Which topic to use for a given query? The user can pick from a menu. Use Naïve Bayes to classify the query into a topic. Can use the context of the query: e.g., the query is launched from a web page talking about a known topic; the history of queries, e.g., "basketball" followed by "Jordan"; user context, e.g., the user's My Yahoo settings, bookmarks, ...

Goal: Don't just find newspapers, but also find experts: people who link in a coordinated way to many good newspapers. Idea: link voting. Quality as an expert (hub): total sum of votes of pages pointed to. Quality as content (authority): total sum of votes of experts. Principle of repeated improvement. [Figure: newspaper pages with vote counts, e.g., NYT: 10, CNN: 8, WSJ: 9, Ebay: 3, Yahoo: 3.]


Interesting documents fall into two classes:
1. Authorities are pages containing useful information: newspaper home pages, course home pages, home pages of auto manufacturers.
2. Hubs are pages that link to authorities: a list of newspapers, a course bulletin, a list of US auto manufacturers.

A good hub links to many good authorities. A good authority is linked from many good hubs. Model using two scores for each node: a hub score and an authority score, represented as vectors h and a.

Each page i has 2 kinds of scores: hub score h_i and authority score a_i.

Algorithm:
Initialize: a_i = h_i = 1
Then keep iterating:
Authority: a_j = Σ_{i→j} h_i
Hub: h_i = Σ_{i→j} a_j
Normalize: Σ_i a_i = 1, Σ_i h_i = 1

HITS uses the adjacency matrix A: A[i, j] = 1 if page i links to page j, 0 otherwise. A^T, the transpose of A, is similar to the PageRank matrix M, but A^T has 1s where M has fractions.

Example graph: Yahoo (y), Amazon (a), M'soft (m).

         y  a  m
A =  y [ 1  1  1 ]
     a [ 1  0  1 ]
     m [ 0  1  0 ]

Notation: vectors a = (a_1, ..., a_n) and h = (h_1, ..., h_n); adjacency matrix A (n × n) with A_ij = 1 if i→j.

Then: h_i = Σ_j A_ij a_j, so h = A·a.
Likewise: a_j = Σ_i A_ij h_i, so a = A^T·h.

The hub score of page i is proportional to the sum of the authority scores of the pages it links to: h = λ·A·a, where the constant λ is a scale factor, λ = 1/Σ_i h_i. The authority score of page i is proportional to the sum of the hub scores of the pages it is linked from: a = μ·A^T·h, where the constant μ is a scale factor, μ = 1/Σ_i a_i.

The HITS algorithm:
Initialize h, a to all 1's.
Repeat:
h = A·a; scale h so that it sums to 1.0.
a = A^T·h; scale a so that it sums to 1.0.
Until h, a converge (i.e., change very little).
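
A short sketch of this loop (my own illustration; the function name and dense adjacency matrix are assumptions), run on the Yahoo/Amazon/M'soft example from the next slide:

```python
import numpy as np

def hits(A, n_iter=100):
    # A: adjacency matrix, A[i, j] = 1 if page i links to page j
    n = A.shape[0]
    h = np.ones(n)               # initialize h, a to all 1's
    a = np.ones(n)
    for _ in range(n_iter):
        h = A @ a                # hub score: sum of authority scores linked to
        h /= h.sum()             # scale h so that it sums to 1.0
        a = A.T @ h              # authority score: sum of hub scores linking in
        a /= a.sum()             # scale a so that it sums to 1.0
    return h, a

# Yahoo / Amazon / M'soft example:
A = np.array([[1, 1, 1],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
h, a = hits(A)   # h ~ (0.5, 0.366, 0.134), i.e., (1, 0.732, 0.268) up to scaling
```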

Example (Yahoo, Amazon, M'soft):

        [ 1 1 1 ]            [ 1 1 0 ]
    A = [ 1 0 1 ]      A^T = [ 1 0 1 ]
        [ 0 1 0 ]            [ 1 1 0 ]

a(Yahoo)  = 1    1    1     1     ...  1
a(Amazon) = 1    1    4/5   0.75  ...  0.732
a(M'soft) = 1    1    1     1     ...  1

h(Yahoo)  = 1    1    1     1     ...  1.000
h(Amazon) = 1    2/3  0.71  0.73  ...  0.732
h(M'soft) = 1    1/3  0.29  0.27  ...  0.268

Algorithm:
Set: a = h = 1_n (the all-ones vector).
Repeat: h = A·a, a = A^T·h, normalize.

Then: a = A^T·(A·a). Thus, in 2k steps:
a = (A^T·A)^k·a
h = (A·A^T)^k·h

a is being updated (in 2 steps): A^T·(A·a) = (A^T·A)·a
h is being updated (in 2 steps): A·(A^T·h) = (A·A^T)·h

Repeated matrix powering!

h = λ·A·a
a = μ·A^T·h
h = λμ·A·A^T·h
a = λμ·A^T·A·a

Under reasonable assumptions about A, the HITS iterative algorithm converges to vectors h* and a*:
h* is the principal eigenvector of the matrix A·A^T
a* is the principal eigenvector of the matrix A^T·A
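
A quick numerical check of this claim on the example graph (my own illustration): the iteration's fixed points match the principal eigenvectors of A·A^T and A^T·A.

```python
import numpy as np

A = np.array([[1, 1, 1],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

h, a = np.ones(3), np.ones(3)
for _ in range(100):                 # HITS iterations as on the previous slides
    h = A @ a; h /= h.sum()
    a = A.T @ h; a /= a.sum()

# A A^T and A^T A are symmetric, so use eigh; eigenvalues come back in
# ascending order, so the principal eigenvector is the last column.
h_star = np.abs(np.linalg.eigh(A @ A.T)[1][:, -1])
a_star = np.abs(np.linalg.eigh(A.T @ A)[1][:, -1])

print(np.allclose(h, h_star / h_star.sum()))   # True
print(np.allclose(a, a_star / a_star.sum()))   # True
```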

[Diagram: hubs on one side, authorities on the other, forming bipartite cores; the most densely connected core is the primary core, and a less densely connected core is a secondary core.]

A single topic can have many bipartite cores, corresponding to different meanings or points of view: abortion: pro-choice, pro-life; evolution: Darwinian, intelligent design; jaguar: auto, Mac, NFL team, Panthera onca. How do we find such secondary cores?

Once we find the primary core, we can remove its links from the graph. Repeat the HITS algorithm on the residual graph to find the next bipartite core. Roughly, these correspond to non-principal eigenvectors of A·A^T and A^T·A.

We need a well-connected graph of pages for HITS to work well.

PageRank and HITS are two solutions to the same problem: what is the value of an in-link from u to v? In the PageRank model, the value of the link depends on the links into u; in the HITS model, it depends on the value of the other links out of u. The destinies of PageRank and HITS post-1998 were very different.

Search is the default gateway to the web. There is a very high premium to appear on the first page of search results: e-commerce sites, advertising-driven sites.

Spamming: any deliberate action to boost a web page's position in search engine results, incommensurate with the page's real value. Spam: web pages that are the result of spamming. This is a very broad definition; the SEO industry might disagree! (SEO = search engine optimization.) Approximately 10-15% of web pages are spam.

The treatment by Gyöngyi & Garcia-Molina:
Boosting techniques: techniques for achieving high relevance/importance for a web page.
Hiding techniques: techniques to hide the use of boosting, from humans and web crawlers.

Term spamming: manipulating the text of web pages in order to appear relevant to queries.
Link spamming: creating link structures that boost PageRank or hubs-and-authorities scores.

Repetition: of one or a few specific terms, e.g., free, cheap, viagra. The goal is to subvert TF-IDF ranking schemes.
Dumping: of a large number of unrelated terms, e.g., copy entire dictionaries.
Weaving: copy legitimate pages and insert spam terms at random positions.
Phrase stitching: glue together sentences and phrases from different sources.

Three kinds of web pages from a spammer's point of view:
Inaccessible pages.
Accessible pages: e.g., blog comment pages; the spammer can post links to his pages.
Own pages: completely controlled by the spammer; may span multiple domain names.

Spammer's goal: maximize the PageRank of target page t. Technique: get as many links from accessible pages as possible to the target page t, and construct a link farm to get a PageRank multiplier effect.

[Diagram: link farm. An accessible page links to the target page t among the spammer's own pages; t links to M farm pages 1, 2, ..., M, each of which links back to t.] One of the most common and effective organizations for a link farm.

Suppose the rank contributed by accessible pages = x, and let the PageRank of the target page = y. The rank of each farm page = β·y/M + (1−β)/N. Then:

y = x + β·M·[β·y/M + (1−β)/N] + (1−β)/N
  = x + β²·y + β(1−β)·M/N + (1−β)/N

The last term (1−β)/N is very small; ignore it. Solving for y:

y = x/(1−β²) + c·M/N, where c = β/(1+β)

(N = # pages on the web, M = # of pages the spammer owns.)

y = x/(1−β²) + c·M/N, where c = β/(1+β)

For β = 0.85, 1/(1−β²) = 3.6: a multiplier effect for acquired PageRank. By making M large, we can make y as large as we want.
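
Plugging in numbers (a sketch; the values of x, M, and N below are made up purely for illustration):

```python
beta = 0.85
multiplier = 1 / (1 - beta**2)     # = 3.6
c = beta / (1 + beta)              # ~ 0.46

x = 1e-6                           # rank collected from accessible pages (assumed)
M = 10_000                         # farm pages the spammer owns (assumed)
N = 10**10                         # pages on the web (assumed)

y = x * multiplier + c * M / N     # target page's PageRank
print(multiplier, y)               # the farm adds c*M/N on top of 3.6x
```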

Detecting term spamming: analyze text using statistical methods, e.g., Naïve Bayes, logistic regression; similar to email spam filtering. Also useful: detecting approximate duplicate pages.
Detecting link spamming: an open research area. One approach: TrustRank.

Basic principle: approximate isolation. It is rare for a good page to point to a bad (spam) page. Sample a set of seed pages from the web, and have an oracle (a human) identify the good pages and the spam pages in the seed set. This is an expensive task, so we must make the seed set as small as possible.

Call the subset of seed pages that are identified as good the trusted pages. Set the trust of each trusted page to 1. Propagate trust through links: each page gets a trust value between 0 and 1. Use a threshold value and mark all pages below the trust threshold as spam.

Trust attenuation: the degree of trust conferred by a trusted page decreases with distance.
Trust splitting: the larger the number of out-links from a page, the less scrutiny the page author gives each out-link; trust is split across out-links.

Suppose the trust of page p is t_p, and its set of out-links is o_p. For each q ∈ o_p, p confers the trust β·t_p/|o_p|, for 0 < β < 1. Trust is additive: the trust of p is the sum of the trust conferred on p by all its in-linking pages. Note the similarity to Topic-Specific PageRank: within a scaling factor, TrustRank = PageRank with the trusted pages as the teleport set, as in the sketch below.
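
Since (up to scaling) TrustRank is PageRank with the trusted pages as the teleport set, the biased power iteration from the Topic-Specific PageRank sketch earlier carries over directly (again my own illustration; the threshold is a tunable assumption):

```python
import numpy as np

def trustrank(M, trusted, beta=0.85, n_iter=100):
    # M: column-stochastic link matrix; trusted: indices of trusted seed pages
    n = M.shape[0]
    v = np.zeros(n)
    v[trusted] = 1.0 / len(trusted)      # teleport only into trusted pages
    t = v.copy()
    for _ in range(n_iter):
        t = beta * (M @ t) + (1 - beta) * v
    return t

# Mark pages below a trust threshold as spam (threshold choice is an assumption):
# t = trustrank(M, trusted); spam = np.where(t < threshold)[0]
```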

Two conflicting considerations: a human has to inspect each seed page, so the seed set must be as small as possible; but we must ensure every good page gets adequate trust rank, so we need to make all good pages reachable from the seed set by short paths.

Suppose we want to pick a seed set of k pages. PageRank approach: pick the top k pages by PageRank, assuming that high-PageRank pages are close to other highly ranked pages. We care more about high-PageRank good pages.

Pick the pages with the maximum number of out-links. Can make it recursive: pick pages that link to pages with many out-links. Formalize as inverse PageRank: construct a graph G' by reversing the edges in G; PageRank in G' is inverse PageRank in G. Pick the top k pages by inverse PageRank, as in the sketch below.
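
A sketch of seed selection by inverse PageRank (my own illustration; helper names are assumptions): reverse the edges, run ordinary PageRank, and take the top k.

```python
import numpy as np

def to_stochastic(adj):
    # Column-stochastic matrix from an adjacency matrix (adj[i, j] = 1 if i -> j)
    out_deg = adj.sum(axis=1)
    return (adj / np.maximum(out_deg, 1)[:, None]).T

def pagerank(M, beta=0.85, n_iter=100):
    n = M.shape[0]
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = beta * (M @ r) + (1 - beta) / n
    return r

def inverse_pagerank_seeds(adj, k):
    inv_pr = pagerank(to_stochastic(adj.T))   # reversing edges = transposing adj
    return np.argsort(-inv_pr)[:k]            # top-k pages by inverse PageRank
```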

In the TrustRank model, we start with good pages and propagate trust. Complementary view: what fraction of a page's PageRank comes from spam pages? In practice, we don't know all the spam pages, so we need to estimate.

r(p) = PageRank of page p
r+(p) = PageRank of p with teleport into good pages only
Then: r−(p) = r(p) − r+(p)
Spam mass of p = r−(p)/r(p)
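
A sketch of the spam-mass computation (my own illustration): two biased PageRank runs, one with the uniform teleport and one teleporting only into an assumed set of known-good pages.

```python
import numpy as np

def biased_pagerank(M, v, beta=0.85, n_iter=100):
    # Power iteration with teleport distribution v
    r = v.copy()
    for _ in range(n_iter):
        r = beta * (M @ r) + (1 - beta) * v
    return r

def spam_mass(M, good):
    # good: indices of the (estimated) good pages
    n = M.shape[0]
    v_good = np.zeros(n)
    v_good[good] = 1.0 / len(good)
    r = biased_pagerank(M, np.full(n, 1.0 / n))   # r(p): ordinary PageRank
    r_plus = biased_pagerank(M, v_good)           # r+(p): teleport into good only
    return (r - r_plus) / r                       # spam mass = r−(p)/r(p)
```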

For spam mass, we need a large set of good pages: we need not be as careful about the quality of individual pages as with TrustRank. One reasonable approach: .edu sites, .gov sites, .mil sites.

Backflow from known spam pages: a course project from last year's edition of this course. Still an open area of research.

Project write-up is due Mon, Feb 1 midnight:
What is the problem you are solving?
What data will you use (where will you get it)?
How will you do it? What algorithms/techniques will you use?
How will you evaluate, i.e., measure success?
What do you expect to submit at the end of the quarter?
Homework is due on Tue, Feb 2 midnight.