Big Data Infrastructure CS 489/698 Big Data Infrastructure (Winter 2017)

Similar documents
Big Data Infrastructure CS 489/698 Big Data Infrastructure (Winter 2016)

Data-Intensive Computing with MapReduce

Big Data Infrastructure CS 489/698 Big Data Infrastructure (Winter 2017)

Near Neighbor Search in High Dimensional Data (1) Dr. Anwar Alhenshiri

Big Data Infrastructure CS 489/698 Big Data Infrastructure (Winter 2017)

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Topic: Duplicate Detection and Similarity Computing

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Finding Similar Items:Nearest Neighbor Search

CSE 5243 INTRO. TO DATA MINING

Big Data Infrastructure CS 489/698 Big Data Infrastructure (Winter 2016)

Finding Similar Sets. Applications Shingling Minhashing Locality-Sensitive Hashing

Data-Intensive Distributed Computing

Big Data Infrastructure CS 489/698 Big Data Infrastructure (Winter 2017)

Finding Similar Sets

Data-Intensive Computing with MapReduce

Big Data Infrastructure CS 489/698 Big Data Infrastructure (Winter 2017)

Big Data Infrastructure CS 489/698 Big Data Infrastructure (Winter 2016)

University of Maryland. Tuesday, March 2, 2010

Big Data Infrastructure CS 489/698 Big Data Infrastructure (Winter 2017)

Data-Intensive Distributed Computing

Data-Intensive Distributed Computing

US Patent 6,658,423. William Pugh

Data-Intensive Computing with MapReduce

Big Data Infrastructure CS 489/698 Big Data Infrastructure (Winter 2017)

Locality-Sensitive Hashing

Shingling Minhashing Locality-Sensitive Hashing. Jeffrey D. Ullman Stanford University

CPSC 340: Machine Learning and Data Mining. Finding Similar Items Fall 2017

Data-Intensive Distributed Computing

MapReduce Locality Sensitive Hashing GPUs. NLP ML Web Andrew Rosenberg

Jordan Boyd-Graber University of Maryland. Thursday, March 3, 2011

What is Unsupervised Learning?

Scalable Techniques for Similarity Search

Web Data Mining. Lecture 3 - Clustering and Classification

Homework 3: Map-Reduce, Frequent Itemsets, LSH, Streams (due March 16 th, 9:30am in class hard-copy please)

CPSC 340: Machine Learning and Data Mining. Non-Parametric Models Fall 2016

Task Description: Finding Similar Documents. Document Retrieval. Case Study 2: Document Retrieval

CSE547: Machine Learning for Big Data Spring Problem Set 1. Please read the homework submission policies.

VK Multimedia Information Systems

COSC160: Data Structures Hashing Structures. Jeremy Bolton, PhD Assistant Teaching Professor

CPSC 340: Machine Learning and Data Mining

Graph Algorithms. Revised based on the slides by Ruoming Kent State

Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University Infinite data. Filtering data streams

B490 Mining the Big Data. 5. Models for Big Data

Problem 1: Complexity of Update Rules for Logistic Regression

Fast Indexing Strategies for Robust Image Hashes

Association Pattern Mining. Lijun Zhang

CS5112: Algorithms and Data Structures for Applications

Knowledge Discovery and Data Mining 1 (VO) ( )

Today s lecture. Basic crawler operation. Crawling picture. What any crawler must do. Simple picture complications

Chapter 2 Basic Structure of High-Dimensional Spaces

BIG DATA ANALYSIS A CASE STUDY. Du, Haoran Supervisor: Dr. Wang, Qing. COMP 8790: Software Engineering Project AUSTRALIAN NATIONAL UNIVERSITY

Nearest neighbors. Focus on tree-based methods. Clément Jamin, GUDHI project, Inria March 2017

Today s lecture. Information Retrieval. Basic crawler operation. Crawling picture. What any crawler must do. Simple picture complications

Overview of Clustering

An Overview of Search Engine. Hai-Yang Xu Dev Lead of Search Technology Center Microsoft Research Asia

Shortest path problems

Information Retrieval (IR) Introduction to Information Retrieval. Lecture Overview. Why do we need IR? Basics of an IR system.

CPSC 340: Machine Learning and Data Mining. Multi-Dimensional Scaling Fall 2017

Collection Building on the Web. Basic Algorithm

CS290N Summary Tao Yang

Clustering. Distance Measures Hierarchical Clustering. k -Means Algorithms

Coding for Random Projects

Leveraging Set Relations in Exact Set Similarity Join

General Instructions. Questions

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Sergei Silvestrov, Christopher Engström. January 29, 2013

Map-Reduce Applications: Counting, Graph Shortest Paths

Thanks to Jure Leskovec, Anand Rajaraman, Jeff Ullman

Cake and Grief Counseling Will be Available: Using Artificial Intelligence for Forensics Without Jeopardizing Humanity.

Information Retrieval. Lecture 9 - Web search basics

Introduction to Information Retrieval

Mining Web Data. Lijun Zhang

CPSC 340: Machine Learning and Data Mining. Non-Parametric Models Fall 2016

Predicting Popular Xbox games based on Search Queries of Users

Vector Space Models: Theory and Applications

Going nonparametric: Nearest neighbor methods for regression and classification

An efficient approach in Near Duplicate Document Detection using Shingling-MD5

2997 Yarmouth Greenway Drive, Madison, WI Phone: (608) Web:

Textual Similarity Analysis Using Locality Sensitive Hashing

Some Interesting Applications of Theory. PageRank Minhashing Locality-Sensitive Hashing

Data-Intensive Distributed Computing

CS122 Lecture 3 Winter Term,

Machine Learning for NLP

CS 1675 Introduction to Machine Learning Lecture 18. Clustering. Clustering. Groups together similar instances in the data sample

Lecture 6: Unsupervised Machine Learning Dagmar Gromann International Center For Computational Logic

Data Analytics with HPC. Data Streaming

Finding dense clusters in web graph

Missing Data. Where did it go?

1 Metric spaces. d(x, x) = 0 for all x ∈ M, d(x, y) = d(y, x) for all x, y ∈ M,

Evaluating Classifiers

9881 Project Deliverable #1

Compressing Social Networks

University of Maryland. Tuesday, March 23, 2010

Clustering. (Part 2)

Hashing and sketching

On Compressing Social Networks. Ravi Kumar. Yahoo! Research, Sunnyvale, CA. Jun 30, 2009 KDD 1

Non-trivial extraction of implicit, previously unknown and potentially useful information from data

Indexing and Query Processing. What will we cover?

Outline. Database Management and Tuning. Outline. Join Strategies Running Example. Index Tuning. Johann Gamper. Unit 6 April 12, 2012

Transcription:

Big Data Infrastructure CS 489/698 Big Data Infrastructure (Winter 2017) Week 9: Data Mining (3/4) March 7, 2017 Jimmy Lin David R. Cheriton School of Computer Science University of Waterloo These slides are available at http://lintool.github.io/bigdata-2017w/ This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License; see http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details

Structure of the Course Analyzing Text Analyzing Graphs Analyzing Relational Data Data Mining Core framework features and algorithm design

Theme: Similarity How similar are two items? How close are two items? Equivalent formulations: large distance = low similarity Lots of applications! Problem: find similar items Offline variant: extract all similar pairs of objects from a large collection Online variant: is this object similar to something I've seen before? Problem: arrange similar items into clusters Offline variant: the entire static collection is available at once Online variant: objects become available incrementally

Literature Note Many communities have tackled similar problems: Theoretical computer science Information retrieval Data mining Databases Issues: slightly different terminology, and results that are not easy to compare

Four Steps Specify distance metric Jaccard, Euclidean, cosine, etc. Compute representation Shingling, tf.idf, etc. Project Minhash, random projections, etc. Extract Bucketing, sliding windows, etc.

Source: www.flickr.com/photos/thiagoalmeida/250190676/ Distance Metrics

Distance Metrics 1. Non-negativity: d(x, y) ≥ 0 2. Identity: d(x, y) = 0 ⟺ x = y 3. Symmetry: d(x, y) = d(y, x) 4. Triangle inequality: d(x, y) ≤ d(x, z) + d(z, y)

Distance: Jaccard Given two sets A, B Jaccard similarity: J(A, B) = |A ∩ B| / |A ∪ B| Jaccard distance: d(A, B) = 1 − J(A, B)

Distance: Norms Given x = [x_1, x_2, ..., x_n], y = [y_1, y_2, ..., y_n] Euclidean distance (L2 norm): d(x, y) = sqrt( Σ_{i=1}^{n} (x_i − y_i)² ) Manhattan distance (L1 norm): d(x, y) = Σ_{i=1}^{n} |x_i − y_i| Lr norm: d(x, y) = [ Σ_{i=1}^{n} |x_i − y_i|^r ]^{1/r}

Distance: Cosine Given x = [x_1, x_2, ..., x_n], y = [y_1, y_2, ..., y_n] Idea: measure the angle between the vectors cos θ = (x · y) / (‖x‖ ‖y‖) Thus: sim(x, y) = cos θ = Σ_{i=1}^{n} x_i y_i / ( sqrt(Σ_{i=1}^{n} x_i²) · sqrt(Σ_{i=1}^{n} y_i²) ) d(x, y) = 1 − sim(x, y)

Distance: Hamming Given two bit vectors Hamming distance: the number of positions in which they differ
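To make these distances concrete, here is a minimal Python sketch of all of them (illustrative code, not part of the original slides; the function names are my own):

```python
import math

def jaccard_distance(a: set, b: set) -> float:
    """d(A, B) = 1 - |A intersect B| / |A union B|."""
    return 1.0 - len(a & b) / len(a | b)

def euclidean_distance(x, y) -> float:
    """L2 norm of the difference vector."""
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def manhattan_distance(x, y) -> float:
    """L1 norm of the difference vector."""
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

def cosine_distance(x, y) -> float:
    """1 - cos(theta) between x and y."""
    dot = sum(xi * yi for xi, yi in zip(x, y))
    norm_x = math.sqrt(sum(xi * xi for xi in x))
    norm_y = math.sqrt(sum(yi * yi for yi in y))
    return 1.0 - dot / (norm_x * norm_y)

def hamming_distance(x, y) -> int:
    """Number of positions in which two bit vectors differ."""
    return sum(1 for xi, yi in zip(x, y) if xi != yi)
```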

Representations

Representations (Text) Unigrams (i.e., words) Shingles = n-grams, at the word level or at the character level Feature weights: boolean, tf.idf, BM25
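As a quick illustration, a hedged sketch of shingling in Python (the function name and the default shingle length of 9 characters are my own choices, not prescribed by the slides):

```python
def shingles(text: str, n: int = 9, level: str = "char") -> set:
    """Set of n-gram shingles: character-level windows by default,
    or word-level windows over whitespace-delimited tokens."""
    if level == "word":
        tokens = text.split()
        return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    return {text[i:i + n] for i in range(len(text) - n + 1)}

# Near-duplicates share most of their shingles:
a = shingles("the quick brown fox jumps over the lazy dog")
b = shingles("the quick brown fox jumped over the lazy dog")
print(len(a & b) / len(a | b))  # Jaccard similarity: high, but < 1.0
```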

Representations (Beyond Text) For recommender systems: Items as features for users Users as features for items For graphs: Adjacency lists as features for vertices For log data: Behaviors (clicks) as features

Minhash Source: www.flickr.com/photos/rheinitz/6158837748/

Near-Duplicate Detection of Webpages What's the source of the problem? Mirror pages (legit) Spam farms (non-legit) Additional complications (e.g., nav bars) Naïve algorithm: Compute a cryptographic hash for each webpage (e.g., MD5) Insert the hash values into a big hash table Compute the hash for each new webpage: a collision implies a duplicate What's the issue? Exact hashes are brittle: changing a single byte yields a completely different hash Intuition: the hash function needs to be tolerant of minor differences; high similarity should imply a higher probability of hash collision

Minhash Seminal algorithm for near-duplicate detection of webpages Used by AltaVista Setup: Documents (HTML pages) represented by shingles (n-grams) Jaccard similarity: duplicates are pairs with high similarity Naïve approach: N² comparisons. Can we do better?

Preliminaries: Representation Sets: A = {e1, e3, e7} B = {e3, e5, e7} Can be equivalently expressed as matrices:

Element  A  B
e1       1  0
e2       0  0
e3       1  1
e4       0  0
e5       0  1
e6       0  0
e7       1  1

Preliminaries: Jaccard

Element  A  B
e1       1  0
e2       0  0
e3       1  1
e4       0  0
e5       0  1
e6       0  0
e7       1  1

Let: M00 = # rows where both elements are 0 M11 = # rows where both elements are 1 M01 = # rows where A = 0, B = 1 M10 = # rows where A = 1, B = 0 Then: J(A, B) = M11 / (M01 + M10 + M11)

Minhash Computing the minhash: Start with the matrix representation of the set Randomly permute the rows of the matrix The minhash is the first row, in permuted order, with a 1 in that set's column

Original matrix:        After a random permutation:
Element  A  B           Element  A  B
e1       1  0           e6       0  0
e2       0  0           e2       0  0
e3       1  1           e5       0  1
e4       0  0           e3       1  1
e5       0  1           e7       1  1
e6       0  0           e4       0  0
e7       1  1           e1       1  0

Example: h(A) = e3, h(B) = e5

Minhash and Jaccard

Element  A  B   row type
e6       0  0   M00
e2       0  0   M00
e5       0  1   M01
e3       1  1   M11
e7       1  1   M11
e4       0  0   M00
e1       1  0   M10

P[h(A) = h(B)] = M11 / (M01 + M10 + M11) = J(A, B) Why: M00 rows never determine a minhash; under a random permutation, the first non-M00 row is equally likely to be any of the M01 + M10 + M11 rows, and h(A) = h(B) exactly when that row is an M11 row.

To Permute or Not to Permute? Problem: permutations are expensive Solution: interpret hash values as the permutation: apply a random hash function to the row indices and treat each row's hash value as its position in the permuted order Only need to keep track of the minimum hash value over the rows where the set contains a 1 Can keep track of multiple minhash values (one per hash function) at once, as in the sketch below
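A minimal sketch of this hash-based trick in Python; the linear hash family h(x) = (a·x + b) mod p is a standard choice and an assumption here, not the specific implementation the slides refer to:

```python
import random

P = (1 << 61) - 1  # a large Mersenne prime

def make_hash_fns(k: int, seed: int = 42):
    """k random linear hash functions h(x) = (a*x + b) mod P."""
    rng = random.Random(seed)
    return [(rng.randrange(1, P), rng.randrange(0, P)) for _ in range(k)]

def minhash_signature(shingle_set, fns):
    """One minhash value per hash function: the minimum over all elements."""
    elems = [hash(s) & 0xFFFFFFFF for s in shingle_set]
    return tuple(min((a * e + b) % P for e in elems) for a, b in fns)

# The fraction of matching positions estimates Jaccard similarity:
fns = make_hash_fns(200)
sig_a = minhash_signature({"a", "b", "c", "d"}, fns)
sig_b = minhash_signature({"b", "c", "d", "e"}, fns)
print(sum(x == y for x, y in zip(sig_a, sig_b)) / len(fns))  # ~ 3/5 = 0.6
```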

Extracting Similar Pairs Task: discover all pairs with similarity greater than s Naïve approach: N² comparisons. Can we do better? Tradeoffs: False positives: discovered pairs that have similarity less than s False negatives: pairs with similarity greater than s that go undiscovered

Extracting Similar Pairs (LSH) We know: P[h(A) = h(B)] = J(A, B) Task: discover all pairs with similarity greater than s Algorithm: For each object, compute its minhash value Group objects by their hash values Output all pairs within each group Analysis: if J(A, B) = s, then the probability we detect the pair is s

[Figure: probability of detection vs. Jaccard similarity for a single minhash; the detection probability equals the similarity, P = s, a straight diagonal line.]

[Figure: probability of detection vs. Jaccard similarity for a single minhash, threshold = 0.8; detected pairs below the threshold are false positives, missed pairs above it are false negatives.]

2 Minhash Signatures We know: P[h(A) = h(B)] = J(A, B) Task: discover all pairs with similarity greater than s Algorithm: For each object, compute 2 minhash values and concatenate them = signature Group objects by their signatures Output all pairs within each group Analysis: if J(A, B) = s, then the probability we detect the pair is s^2

3 Minhash Signatures We know: P[h(A) = h(B)] = J(A, B) Task: discover all pairs with similarity greater than s Algorithm: For each object, compute 3 minhash values and concatenate them = signature Group objects by their signatures Output all pairs within each group Analysis: if J(A, B) = s, then the probability we detect the pair is s^3

k Minhash Signatures We know: P[h(A) = h(B)] = J(A, B) Task: discover all pairs with similarity greater than s Algorithm: For each object, compute k minhash values and concatenate them = signature Group objects by their signatures Output all pairs within each group Analysis: if J(A, B) = s, then the probability we detect the pair is s^k

[Figure: probability of detection vs. Jaccard similarity for k minhash values concatenated together, threshold = 0.8; the s^k curve suppresses false positives but inflates false negatives.]

n different k Minhash Signatures We know: P[h(A) = h(B)] = J(A, B) Task: discover all pairs with similarity greater than s Algorithm: For each object, compute n sets of k minhash values For each set, concatenate the k minhash values together into a signature In each set: group objects by signature, output all pairs in each group De-dup the pairs (a sketch of this grouping appears below) Analysis: if J(A, B) = s, then P(none of the n signatures collide) = (1 − s^k)^n, so the probability we detect the pair is 1 − (1 − s^k)^n
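A compact sketch of this grouping step in Python, assuming each object already carries an (n · k)-value minhash signature like the one sketched earlier:

```python
from collections import defaultdict
from itertools import combinations

def lsh_candidate_pairs(signatures: dict, n: int, k: int) -> set:
    """signatures: object id -> tuple of n*k minhash values.
    Set p uses values [p*k, (p+1)*k); objects sharing a set's
    signature become candidate pairs, de-duped across sets."""
    pairs = set()
    for p in range(n):
        buckets = defaultdict(list)
        for obj_id, sig in signatures.items():
            buckets[sig[p * k:(p + 1) * k]].append(obj_id)
        for group in buckets.values():
            pairs.update(combinations(sorted(group), 2))
    return pairs
```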

[Figure: probability of detection vs. Jaccard similarity for 6 minhash values concatenated together, threshold = 0.8.]

[Figure: probability of detection vs. Jaccard similarity for n different sets of 6 minhash values, threshold = 0.8; combining sets yields an S-shaped curve with few false positives and few false negatives.]

n different k Minhash Signatures Example: J(A, B) = 0.8, 10 sets of 6 minhash values P(all 6 minhash values match in a given set) = (0.8)^6 = 0.262 P(the signatures match in none of the 10 sets) = (1 − (0.8)^6)^10 = 0.0478 Thus, we should find 1 − (1 − (0.8)^6)^10 = 0.952 of all such pairs Example: J(A, B) = 0.4, 10 sets of 6 minhash values P(all 6 minhash values match in a given set) = (0.4)^6 = 0.0041 P(the signatures match in none of the 10 sets) = (1 − (0.4)^6)^10 = 0.9598 Thus, we should find only 1 − (1 − (0.4)^6)^10 = 0.040 of all such pairs

n different k Minhash Signatures Detection probability 1 − (1 − s^6)^10 as a function of similarity s:

s    1 − (1 − s^6)^10
0.2  0.0006
0.3  0.0073
0.4  0.040
0.5  0.146
0.6  0.380
0.7  0.714
0.8  0.952
0.9  0.999
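The table values can be reproduced with a few lines of Python (a quick sanity check, not from the slides):

```python
def detection_probability(s: float, k: int = 6, n: int = 10) -> float:
    """P(at least one of n k-minhash signatures collides) = 1 - (1 - s^k)^n."""
    return 1.0 - (1.0 - s ** k) ** n

for s in (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9):
    print(f"{s:.1f}  {detection_probability(s):.4f}")
```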

Practical Notes Common implementation: generate M minhash values, then select k of them n times Reduces the number of hash computations needed Determining the authoritative version among near-duplicates is non-trivial

MapReduce/Spark Implementation Map over objects: Generate M minhash values, select k of them n times Each draw yields a signature, emit: key = (p, signature), where p ∈ [1..n], and value = object id Shuffle/Sort Reduce: Receive all object ids with the same (p, signature), emit clusters Second pass to de-dup and group clusters (Optional) Third pass to eliminate false positives
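As one possible realization, a hedged PySpark sketch of this pipeline; the RDD wiring and the minhash_signature helper (from the earlier sketch) are my own assumptions, not an API the slides prescribe:

```python
def minhash_lsh_pairs(objects, n: int, k: int, fns):
    """objects: RDD of (object_id, shingle_set); returns an RDD of
    candidate pairs. fns holds M = n * k hash functions, and
    minhash_signature(shingle_set, fns) is the helper sketched earlier."""

    def emit_bands(record):
        obj_id, shingle_set = record
        sig = minhash_signature(shingle_set, fns)
        for p in range(n):  # key = (p, signature), value = object id
            yield ((p, sig[p * k:(p + 1) * k]), obj_id)

    def pairs_in_group(kv):
        ids = sorted(kv[1])  # all object ids sharing (p, signature)
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                yield (ids[i], ids[j])

    return (objects.flatMap(emit_bands)    # map
                   .groupByKey()           # shuffle/sort
                   .flatMap(pairs_in_group)
                   .distinct())            # second pass: de-dup pairs
```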

Offline Extraction vs. Online Querying Batch formulation of the problem: Discover all pairs with similarity greater than s Useful for post-hoc batch processing of a web crawl Online formulation of the problem: Given a new webpage, is it similar to one I've seen before? Useful for incremental web crawl processing

Online Similarity Querying Preparing the existing collection: For each object, compute n sets of k minhash values For each set, concatenate the k minhash values together Keep each signature in a hash table (in memory) Note: can parallelize across multiple machines Querying and updating: For each new webpage, compute its signatures and check for collisions Collisions imply duplicates (determine which version to keep) Update the hash tables
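A minimal single-machine sketch of the online variant (the class and method names are hypothetical, and it reuses the minhash_signature helper from earlier; a production system would shard the tables across machines as the slide notes):

```python
from collections import defaultdict

class OnlineDuplicateIndex:
    """n in-memory hash tables, one per signature set."""

    def __init__(self, n: int, k: int, fns):
        self.n, self.k, self.fns = n, k, fns
        self.tables = [defaultdict(list) for _ in range(n)]

    def _signatures(self, shingle_set):
        sig = minhash_signature(shingle_set, self.fns)  # helper from earlier sketch
        return [sig[p * self.k:(p + 1) * self.k] for p in range(self.n)]

    def query_and_add(self, obj_id, shingle_set):
        """Return ids of likely duplicates seen so far, then index this object."""
        sigs = self._signatures(shingle_set)
        hits = {dup for p, s in enumerate(sigs) for dup in self.tables[p][s]}
        for p, s in enumerate(sigs):
            self.tables[p][s].append(obj_id)
        return hits
```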

Random Projections Source: www.flickr.com/photos/roj/4179478228/

Limitations of Minhash Minhash is great for near-duplicate detection: set a high threshold for Jaccard similarity Limitations: Jaccard similarity only Set-based representation, no way to assign weights to features Random projections: Work with arbitrary vectors using cosine similarity Same basic idea, but the details differ Slower but more accurate: no free lunch!

Random Projection Hashing Generate a random vector r of unit length: Draw each component from a univariate Gaussian Normalize the length Define: h_r(u) = 1 if r · u ≥ 0, 0 if r · u < 0 Physical intuition? r defines a random hyperplane through the origin, and the hash records which side of it u falls on
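A small sketch of this hash family in Python (the function names are my own, for illustration):

```python
import math
import random

def random_unit_vector(dim: int, rng: random.Random):
    """Draw each component from a univariate Gaussian, then normalize."""
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def rp_hash(r, u) -> int:
    """h_r(u) = 1 if r . u >= 0, else 0."""
    return 1 if sum(ri * ui for ri, ui in zip(r, u)) >= 0 else 0
```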

RP Hash Collisions It can be shown that: P[h_r(u) = h_r(v)] = 1 − θ(u, v)/π Proof in Goemans and Williamson (1995) Thus: cos(θ(u, v)) = cos((1 − P[h_r(u) = h_r(v)]) · π) Physical intuition? the two hashes disagree exactly when the random hyperplane falls between u and v, which happens with probability θ(u, v)/π

Random Projection Signature Given D random vectors: [r_1, r_2, r_3, ..., r_D] Convert each object u into a D-bit signature: u → [h_{r_1}(u), h_{r_2}(u), h_{r_3}(u), ..., h_{r_D}(u)] Since cos(θ(u, v)) = cos((1 − P[h_r(u) = h_r(v)]) · π), and hamming(s_u, s_v)/D estimates the disagreement probability, we can derive: cos(θ(u, v)) ≈ cos(hamming(s_u, s_v)/D · π) Insight: similarity boils down to a comparison of hamming distances between signatures
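Putting the pieces together, a sketch of signature generation and the hamming-based cosine estimate (assumes the random_unit_vector and rp_hash helpers from the previous sketch):

```python
import math
import random

def rp_signature(u, vectors):
    """D-bit signature: one rp_hash bit per random vector."""
    return [rp_hash(r, u) for r in vectors]

def estimated_cosine(sig_u, sig_v) -> float:
    """cos(theta) ~ cos(pi * hamming(s_u, s_v) / D)."""
    d = len(sig_u)
    ham = sum(1 for a, b in zip(sig_u, sig_v) if a != b)
    return math.cos(math.pi * ham / d)

rng = random.Random(0)
vectors = [random_unit_vector(3, rng) for _ in range(1024)]
u, v = [1.0, 2.0, 3.0], [1.0, 2.0, 3.5]
print(estimated_cosine(rp_signature(u, vectors), rp_signature(v, vectors)))
# true cosine similarity is ~0.997; the estimate converges as D grows
```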

One-RP Signature Task: discover all pairs with cosine similarity greater than s Algorithm: Compute the D-bit RP signature for every object Take the first bit and bucket objects into two sets Perform brute-force pairwise (hamming distance) comparison in each bucket, retaining those below the hamming distance threshold Analysis: Probability we will discover a pair with similarity s: 1 − cos⁻¹(s)/π * Efficiency: N² vs. 2 · (N/2)² * Note: this is actually a simplification; see Ture et al. (SIGIR 2011) for details.

Two-RP Signature Task: discover all pairs with cosine similarity greater than s Algorithm: Compute the D-bit RP signature for every object Take the first two bits and bucket objects into four sets Perform brute-force pairwise (hamming distance) comparison in each bucket, retaining those below the hamming distance threshold Analysis: Probability we will discover a pair with similarity s: [1 − cos⁻¹(s)/π]² Efficiency: N² vs. 4 · (N/4)²

k-rp Signature Task: discover all pairs with cosine similarity greater than s Algorithm: Compute D-bit RP signature for every object Take first k bits, bucket objects into 2 k sets Perform brute force pairwise (hamming distance) comparison in each bucket, retain those below hamming distance threshold Analysis: Probability we will discover all pairs: apple cos 1 k (s) 1 Efficiency N 2 vs. 2 k N 2 k 2

m Sets of k-rp Signature Task: discover all pairs with cosine similarity greater than s Algorithm: Compute D-bit RP signature for every object Choose m sets of k bits; for each, use k selected bits to bucket objects into 2 k sets Perform brute force pairwise (hamming distance) comparison in each bucket, retain those below hamming distance threshold Analysis: Probability " we will discover all pairs: apple # cos 1 k m (s) 1 1 1 Efficiency 2 N N 2 vs. m 2 k 2 k

MapReduce/Spark Implementation Map over objects: Compute the D-bit RP signature for every object Choose m sets of k bits and use them to bucket; for each, emit: key = (p, k bits), where p ∈ [1..m], value = (object id, rest of the signature bits) Shuffle/Sort Reduce: Receive all objects with the same (p, k bits) Perform brute-force pairwise (hamming distance) comparison for each key, retaining those below the hamming distance threshold Second pass to de-dup and group clusters (Optional) Third pass to eliminate false positives

Online Querying Preparing the existing collection: Compute the D-bit RP signature for every object Choose m sets of k bits and use them to bucket Store the signatures in memory (across multiple machines) Querying: Compute the D-bit signature of the query object, choose the m sets of k bits in the same way Perform a brute-force scan of the matching bucket in each set (in parallel)

Additional Issues to Consider Emphasis on recall, not precision Two sources of error: from the LSH itself, and from using hamming distance as a proxy for cosine similarity Load imbalance Parameter tuning

Sliding Window Algorithm Compute the D-bit RP signature for every object For each object, permute the bit signature m times For each permutation, sort the bit signatures Apply a sliding window of width B over the sorted order Compute hamming distances between bit signatures within the window
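A single-machine sketch of the sliding window pass (the permutation and window bookkeeping are my own illustration of the algorithm, under the assumption that the signatures fit in memory):

```python
import random

def sliding_window_pairs(signatures: dict, m: int, width: int,
                         max_hamming: int, seed: int = 0) -> set:
    """signatures: object id -> list of D bits. For each of m random bit
    permutations, sort the permuted signatures lexicographically and
    compare each one against the previous width-1 signatures."""
    rng = random.Random(seed)
    d = len(next(iter(signatures.values())))
    pairs = set()
    for _ in range(m):
        perm = list(range(d))
        rng.shuffle(perm)
        ordered = sorted((tuple(sig[i] for i in perm), obj_id)
                         for obj_id, sig in signatures.items())
        for i, (sig_i, id_i) in enumerate(ordered):
            for sig_j, id_j in ordered[max(0, i - width + 1):i]:
                if sum(a != b for a, b in zip(sig_i, sig_j)) <= max_hamming:
                    pairs.add(tuple(sorted((id_i, id_j))))
    return pairs
```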

MapReduce/Spark Implementation Mapper: Compute the D-bit RP signature for every object Permute it m times; for each permutation, emit: key = (p, permuted signature), where p ∈ [1..m], value = object id Shuffle/Sort: within each p, signatures arrive in sorted order Reduce: Keep a FIFO queue of B bit signatures For each new bit signature, compute the hamming distance with respect to all signatures in the queue Add the new bit signature to the end of the queue, displacing the oldest

Four Steps to Finding Similar Items Specify distance metric Jaccard, Euclidean, cosine, etc. Compute representation Shingling, tf.idf, etc. Project Minhash, random projections, etc. Extract Bucketing, sliding windows, etc.

Questions? Source: Wikipedia (Japanese rock garden)