Case Study 2: Document Retrieval - Clustering Documents
Machine Learning for Big Data, CSE547/STAT548, University of Washington
Sham Kakade, April 2017

Document Retrieval
- Goal: retrieve documents of interest
- Challenges:
  - Tons of articles out there
  - How should we measure similarity?

Task 1: Find Similar Documents
- So far:
  - Input: query article
  - Output: set of k similar articles

Task 2: Cluster Documents
- Now: cluster documents based on topic

Some Data
(scatter plot of example data points to be clustered)

K-means (a minimal code sketch of one iteration follows this list)
1. Ask the user how many clusters they'd like (e.g., k = 5).
2. Randomly guess k cluster center locations.
3. Each datapoint finds out which center it is closest to (thus each center "owns" a set of datapoints).
4. Each center finds the centroid of the points it owns...
5. ...and jumps there.
6. Repeat until terminated!
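
The following is a minimal single-machine sketch of steps 2-5 above in Java (illustrative only, not the lecture's code; the double[][] point layout and the handling of empty clusters are assumptions made for this sketch):

// One K-means iteration on a single machine (illustrative sketch): steps 3-5
// above. Points and centers are rows of double[][]; an empty cluster keeps its
// old center (a choice made for this sketch, not specified in the slides).
public class KMeansStep {
  static double[][] iterate(double[][] points, double[][] centers) {
    int k = centers.length, dim = centers[0].length;
    double[][] sums = new double[k][dim];
    int[] counts = new int[k];

    // Step 3: each datapoint finds the center it is closest to.
    for (double[] x : points) {
      int best = 0;
      double bestDist = Double.MAX_VALUE;
      for (int i = 0; i < k; i++) {
        double dist = 0;
        for (int d = 0; d < dim; d++) {
          double diff = centers[i][d] - x[d];
          dist += diff * diff;
        }
        if (dist < bestDist) { bestDist = dist; best = i; }
      }
      counts[best]++;
      for (int d = 0; d < dim; d++) sums[best][d] += x[d];
    }

    // Steps 4-5: each center jumps to the centroid of the points it owns.
    double[][] next = new double[k][dim];
    for (int i = 0; i < k; i++) {
      for (int d = 0; d < dim; d++) {
        next[i][d] = counts[i] > 0 ? sums[i][d] / counts[i] : centers[i][d];
      }
    }
    return next;  // Step 6: the caller repeats until terminated.
  }
}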

K-means
- Randomly initialize k centers: $\mu^{(0)} = \mu_1^{(0)}, \dots, \mu_k^{(0)}$
- Classify: assign each point $j \in \{1, \dots, N\}$ to the nearest center:
    $z_j \leftarrow \arg\min_i \|\mu_i - x_j\|_2^2$
- Recenter: $\mu_i$ becomes the centroid of its points:
    $\mu_i^{(t+1)} \leftarrow \arg\min_\mu \sum_{j : z_j = i} \|\mu - x_j\|_2^2$
  Equivalent to $\mu_i$ being the average of its points!

Case Study 2: Document Retrieval - Parallel Programming: Map-Reduce
Machine Learning for Big Data, CSE547/STAT548, University of Washington
Sham Kakade, April 2017

Needless to Say, We Need Machine Learning for Big Data
- 6 billion Flickr photos
- 28 million Wikipedia pages
- 1 billion Facebook users
- 72 hours of YouTube video uploaded every minute
- "Data [is] a new class of economic asset, like currency or gold."

CPUs Stopped Getting Faster
(plot: processor clock speed in GHz vs. release date, 1988-2010, on a log scale; speeds rise steadily and then flatten to roughly constant in the mid-2000s)

ML in the Context of Parallel Architectures
- GPUs, multicore, clusters, clouds, supercomputers
- But scalable ML in these systems is hard, especially in terms of:
  1. Programmability
  2. Data distribution
  3. Failures

SGD for LR: Programmability Challenge 1: Designing Parallel Programs
For each data point $x^{(t)}$:
    $w_i^{(t+1)} \leftarrow w_i^{(t)} + \eta_t \left\{ -\lambda w_i^{(t)} + \phi_i(x^{(t)})\,\big[y^{(t)} - P(Y = 1 \mid x^{(t)}, w^{(t)})\big] \right\}$
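
As a reference point for the parallelization discussion, here is a minimal single-threaded sketch of that update in Java (illustrative only, not the lecture's code; the feature map is taken to be the raw features, and the method and parameter names are assumptions):

// Sequential SGD update for logistic regression on one data point (x, y), y in {0, 1}.
public class LogisticSgd {
  static void sgdStep(double[] w, double[] x, int y, double eta, double lambda) {
    // P(Y = 1 | x, w) with phi_i(x) = x_i
    double dot = 0;
    for (int i = 0; i < w.length; i++) dot += w[i] * x[i];
    double p = 1.0 / (1.0 + Math.exp(-dot));

    // w_i <- w_i + eta * ( -lambda * w_i + x_i * (y - p) )
    for (int i = 0; i < w.length; i++) {
      w[i] += eta * (-lambda * w[i] + x[i] * (y - p));
    }
  }
}

Every such update reads and writes the shared weight vector w, which is what makes naive parallelization tricky and leads into the race-condition discussion below.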

Programmability Challenge 2: Race Conditions
- We are used to sequential programs: read data, think, write data; read data, think, write data; ...
- But in parallel you can have non-deterministic effects: one machine reading data while another is writing.
- This is called a race condition: very annoying, and one of the hardest problems to debug in practice, because the non-determinism makes bugs hard to reproduce.

Data Distribution Challenge
- Accessing data:
  - Main memory reference: 100 ns (10^-7 s)
  - Round trip within a data center: 500,000 ns (5 x 10^-4 s)
  - Disk seek: 10,000,000 ns (10^-2 s)
- Reading 1 MB sequentially:
  - Local memory: 250,000 ns (2.5 x 10^-4 s)
  - Network: 10,000,000 ns (10^-2 s)
  - Disk: 30,000,000 ns (3 x 10^-2 s)
- Conclusion: reading data from local memory is much faster, so we must have data locality:
  - A good data partitioning strategy is fundamental!
  - Bring computation to the data (rather than moving data around).

Robustness to Failures Challenge
From Google's Jeff Dean, about their clusters of 1,800 servers, in the first year of operation:
- 1,000 individual machine failures
- thousands of hard drive failures
- one power distribution unit will fail, bringing down 500 to 1,000 machines for about 6 hours
- 20 racks will fail, each time causing 40 to 80 machines to vanish from the network
- 5 racks will "go wonky," with half their network packets missing in action
- the cluster will have to be rewired once, affecting 5 percent of the machines at any given moment over a 2-day span
- there is a 50% chance the cluster will overheat, taking down most of the servers in less than 5 minutes and taking 1 to 2 days to recover

How do we design distributed algorithms and systems robust to failures?
It is not enough to say "run it, and if there is a failure, do it again," because you may never finish.

Move Towards Higher-Level Abstraction
- Distributed computing challenges are hard and annoying!
  1. Programmability
  2. Data distribution
  3. Failures
- High-level abstractions try to simplify distributed programming by hiding these challenges: they provide different levels of robustness to failures, optimize data movement and communication, and protect against race conditions.
- Generally, you are still on your own with respect to designing parallel algorithms.
- Some common parallel abstractions:
  - Lower-level:
    - Pthreads: abstraction for threads on a single machine
    - MPI: abstraction for distributed communication in a cluster of computers
  - Higher-level:
    - Map-Reduce (Hadoop is the open-source version): mostly data-parallel problems
    - GraphLab: for graph-structured distributed problems

Simplest Type of Parallelism: Data Parallel Problems
- You have already learned a classifier. What's the test error? You have 10B labeled documents and 1,000 machines.
- Problems that can be broken into independent subproblems are called data-parallel (or embarrassingly parallel).
- Map-Reduce is a great tool for this: the focus of today's lecture, but first a simple example.

Data Parallelism (MapReduce)
(diagram: the data is split into blocks handed to CPUs 1-4, each of which works on its blocks independently)
Solve a huge number of independent subproblems, e.g., extract features in images.

Counting Words on a Single Processor
(This is the "Hello World!" of Map-Reduce.)
- Suppose you have 10B documents and 1 machine.
- You want to count the number of appearances of each word in this corpus.
- Similar ideas are useful for, e.g., building Naïve Bayes classifiers and computing TF-IDF.
- Code: (a sketch is given after the next slide)

Naïve Parallel Word Counting
- Simple data parallelism approach: split the documents across machines and build a local word-count hash table on each.
- Merging the resulting hash tables: annoying, and potentially not parallel -> no gain from parallelism???
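
A minimal sketch of the single-processor word count referenced by the "Code:" bullet above (illustrative; the lecture's own code was sketched on the slide):

// Word count on a single processor: one pass over the corpus, one hash table.
import java.util.HashMap;
import java.util.Map;

public class WordCountSerial {
  static Map<String, Integer> countWords(Iterable<String> documents) {
    Map<String, Integer> counts = new HashMap<>();
    for (String doc : documents) {
      for (String word : doc.split("\\s+")) {
        if (!word.isEmpty()) {
          counts.merge(word, 1, Integer::sum);  // counts[word] += 1
        }
      }
    }
    return counts;
  }
}

The "annoying" step in the naive parallel version is merging many such hash tables into one; Map-Reduce, introduced next, parallelizes exactly that merge by key.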

Counting Words in Parallel & Merging Hash Tables in Parallel
- Generate pairs (word, count).
- Merge the counts for each word in parallel.
- Thus: parallel merging of hash tables.

Map-Reduce Abstraction
- Map:
  - Data-parallel over elements, e.g., documents
  - Generates (key, value) pairs; the value can be any data type
- Reduce:
  - Aggregates the values for each key
  - Must be a commutative-associative operation
  - Data-parallel over keys
  - Generates (key, value) pairs
- Map-Reduce has a long history in functional programming, but was popularized by Google and subsequently by the open-source Hadoop implementation from Yahoo!

Map Code (Hadoop): Word Count

public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();

  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String line = value.toString();
    StringTokenizer tokenizer = new StringTokenizer(line);
    while (tokenizer.hasMoreTokens()) {
      word.set(tokenizer.nextToken());
      context.write(word, one);   // emit (word, 1) for every token
    }
  }
}

Reduce Code (Hadoop): Word Count

public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable val : values) {
      sum += val.get();           // add up all counts emitted for this word
    }
    context.write(key, new IntWritable(sum));
  }
}

Map-Reduce Parallel Execution
(diagram of the word-count example executing in parallel)

Map-Reduce Execution Overview
(diagram)
- Map phase: the big dataset is split across machines M1, ..., M1000; each machine runs Map on its split and emits (key, value) pairs.
- Shuffle phase: each tuple (k_i, v_i) is assigned to machine h[k_i], i.e., keys are routed by a hash function.
- Reduce phase: each machine runs Reduce over the keys it owns, e.g., M2 receives (k_3, v_3) and (k_4, v_4).

Map-Reduce Robustness to Failures 1: Protecting Data: Save to Disk Constantly
(same execution diagram as above; the (key, value) pairs produced in each phase are constantly saved to disk)

Distributed File Systems
- Saving to disk locally is not enough: if the disk or machine fails, all data is lost.
- Replicate data among multiple machines!
- Distributed File System (DFS):
  - Write a file from anywhere -> automatically replicated
  - Read a file from anywhere -> read from the closest copy; if there is a failure, try the next closest copy
- Common implementations:
  - Google File System (GFS)
  - Hadoop Distributed File System (HDFS)
- Important practical considerations:
  - Write large files: many small files become way too slow.
  - Typically, files can't be modified, just replaced -> makes robustness much simpler.
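
As a small illustration of the "write from anywhere, read from anywhere" interface (not from the lecture), a client program using Hadoop's FileSystem API might look roughly like this; the path and payload are made up for the example:

// Writing and reading a replicated file through HDFS (illustrative sketch).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DfsExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());  // picks up the cluster config
    Path path = new Path("/tmp/example.txt");              // hypothetical path

    try (FSDataOutputStream out = fs.create(path)) {        // replication handled by HDFS
      out.writeUTF("hello, distributed world");
    }
    try (FSDataInputStream in = fs.open(path)) {            // served from a close replica
      System.out.println(in.readUTF());
    }
  }
}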

Map-Reduce Robustness to Failures 2: Recovering from Failures: Read from DFS
(same execution diagram as above)
- Communication in the initial distribution & shuffle phases is automatic: done by the DFS.
- If there is a failure, don't restart everything (otherwise you may never finish); only restart the Map/Reduce jobs on the dead machines.

Improving Performance: Combiners
- A naive implementation of Map-Reduce is very wasteful in communication during the shuffle.
- Combiner: a simple solution; perform the reduce locally before communicating for the global reduce.
- Works because reduce is a commutative-associative operation.
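
For the word-count example, the reducer itself can serve as the combiner, since summing counts is commutative-associative. A minimal driver sketch (the enclosing WordCount class name and job wiring are assumptions for illustration, not the lecture's code):

// Minimal word-count driver sketch: wiring the Map and Reduce classes above
// into a job, and reusing Reduce as the combiner.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  // (The Map and Reduce classes from the earlier slides are assumed to be nested here.)
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(Map.class);
    job.setCombinerClass(Reduce.class);  // local reduce on each mapper's machine, before the shuffle
    job.setReducerClass(Reduce.class);   // global reduce after the shuffle
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

With the combiner in place, each machine pre-sums its own (word, 1) pairs, so only one (word, local count) pair per distinct word has to cross the network during the shuffle.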

(A Few of the) Limitations of Map-Reduce
(shown over the same execution diagram)
- Too much synchrony: e.g., reducers don't start until all mappers are done.
- Too much robustness: writing to disk all the time.
- Not all problems fit in Map-Reduce: e.g., you can't communicate between mappers.
- Oblivious to structure in the data: e.g., if the data is a graph, one can be much more efficient (for example, there is no need to shuffle nearly as much).
- Nonetheless, extremely useful; the industry standard for Big Data, though many companies are moving away from Map-Reduce (Hadoop).

What you need to know about Map-Reduce
- Distributed computing challenges are hard and annoying!
  1. Programmability
  2. Data distribution
  3. Failures
- High-level abstractions help a lot!
- Data-parallel problems & Map-Reduce:
  - Map: data-parallel transformation of the data (parallel over data points)
  - Reduce: data-parallel aggregation of the data (parallel over keys)
  - The combiner helps reduce communication.
- Distributed execution of Map-Reduce: map, shuffle, reduce.
- Robustness to failure by writing to disk; Distributed File Systems.

Case Study 2: Document Retrieval - Parallel K-Means on Map-Reduce
Machine Learning for Big Data, CSE547/STAT548, University of Washington
Sham Kakade, April 2017

Map-Reducing One Iteration of K-Means
- Classify: assign each point $j \in \{1, \dots, N\}$ to the nearest center:
    $z_j \leftarrow \arg\min_i \|\mu_i - x_j\|_2^2$
  Map: the classification step (next slide)
- Recenter: $\mu_i$ becomes the centroid of its points:
    $\mu_i^{(t+1)} \leftarrow \arg\min_\mu \sum_{j : z_j = i} \|\mu - x_j\|_2^2$
  Equivalent to $\mu_i$ being the average of its points!
  Reduce: the recenter step (next slide)

Classification Step as Map
- Classify: assign each point $j \in \{1, \dots, N\}$ to the nearest center:
    $z_j \leftarrow \arg\min_i \|\mu_i - x_j\|_2^2$
- Map: for each data point $x_j$, compute its nearest center $z_j$ and emit the pair $(z_j, x_j)$; data-parallel over data points.

Recenter Step as Reduce
- Recenter: $\mu_i$ becomes the centroid of its points:
    $\mu_i^{(t+1)} \leftarrow \arg\min_\mu \sum_{j : z_j = i} \|\mu - x_j\|_2^2$
  Equivalent to $\mu_i$ being the average of its points!
- Reduce: for each center $i$, average the points assigned to it; data-parallel over centers.
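
A minimal Hadoop-style sketch of these two steps (illustrative only, not the lecture's code). It assumes points arrive one per line as comma-separated coordinates and that the current centers have somehow been made available to each mapper (loading code omitted), which is exactly the practical issue raised on the next slide:

// One K-means iteration as Map-Reduce (illustrative sketch).
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class KMeansIteration {

  // Map (classification step): emit (nearest center id, point).
  public static class ClassifyMapper extends Mapper<LongWritable, Text, IntWritable, Text> {
    private double[][] centers;  // current centers mu_1, ..., mu_k (assumed loaded in setup())

    public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] parts = value.toString().split(",");
      double[] x = new double[parts.length];
      for (int d = 0; d < parts.length; d++) x[d] = Double.parseDouble(parts[d]);

      int best = 0;                       // z_j <- argmin_i ||mu_i - x_j||^2
      double bestDist = Double.MAX_VALUE;
      for (int i = 0; i < centers.length; i++) {
        double dist = 0;
        for (int d = 0; d < x.length; d++) {
          double diff = centers[i][d] - x[d];
          dist += diff * diff;
        }
        if (dist < bestDist) { bestDist = dist; best = i; }
      }
      context.write(new IntWritable(best), value);
    }
  }

  // Reduce (recenter step): average the points owned by each center.
  public static class RecenterReducer extends Reducer<IntWritable, Text, IntWritable, Text> {
    public void reduce(IntWritable clusterId, Iterable<Text> points, Context context)
        throws IOException, InterruptedException {
      double[] sum = null;
      long count = 0;
      for (Text p : points) {
        String[] parts = p.toString().split(",");
        if (sum == null) sum = new double[parts.length];
        for (int d = 0; d < parts.length; d++) sum[d] += Double.parseDouble(parts[d]);
        count++;
      }
      StringBuilder center = new StringBuilder();   // mu_i <- average of its points
      for (int d = 0; d < sum.length; d++) {
        if (d > 0) center.append(',');
        center.append(sum[d] / count);
      }
      context.write(clusterId, new Text(center.toString()));
    }
  }
}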

Some Practical Considerations
- K-means needs an iterative version of Map-Reduce, which is not the standard formulation.
- The mapper needs to get a data point and all of the centers: a lot of data!
- Better implementation: each mapper gets many data points.

What you need to know about Parallel K-Means on Map-Reduce
- Map: classification step; data-parallel over data points.
- Reduce: recompute the means; data-parallel over centers.