Distributed Computing with Spark and MapReduce

Distributed Computing with Spark and MapReduce Reza Zadeh @Reza_Zadeh http://reza-zadeh.com

Traditional Network Programming
Message-passing between nodes (e.g. MPI). Very difficult to do at scale:
» How to split the problem across nodes? Must consider network & data locality
» How to deal with failures? (inevitable at scale)
» Even worse: stragglers (node not failed, but slow)
» Ethernet networking not fast
» Have to write programs for each machine
Rarely used in commodity datacenters

Data Flow Models
Restrict the programming interface so that the system can do more automatically. Express jobs as graphs of high-level operators:
» System picks how to split each operator into tasks and where to run each task
» Runs parts twice for fault recovery
Biggest example: MapReduce
(Diagram: several Map tasks feeding into Reduce tasks)
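To make the operator graph concrete, here is a minimal word-count sketch in the MapReduce style, written as a plain local Scala simulation (a sketch of the programming model, not the Hadoop API): map emits (word, 1) pairs, the system groups by key, and reduce sums per key.

// Local simulation of the MapReduce word-count pattern (a sketch,
// not the Hadoop API).
object WordCountSketch {
  // Map phase: each input line emits (word, 1) pairs
  def mapPhase(line: String): Seq[(String, Int)] =
    line.split("\\s+").filter(_.nonEmpty).map(w => (w, 1)).toSeq

  // Reduce phase: sum all counts that arrived for one key
  def reducePhase(word: String, counts: Seq[Int]): (String, Int) =
    (word, counts.sum)

  def main(args: Array[String]): Unit = {
    val input = Seq("the quick brown fox", "the lazy dog")
    val mapped = input.flatMap(mapPhase)            // all (word, 1) pairs
    val grouped = mapped.groupBy(_._1)              // "shuffle": group by key
    val counts = grouped.map { case (w, ps) => reducePhase(w, ps.map(_._2)) }
    counts.foreach(println)                         // (the,2), (fox,1), ...
  }
}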

MapReduce + GFS
Most of early Google infrastructure; tremendously successful
» Replicate disk content 3 times, sometimes 8
» Rewrite algorithms for MapReduce

Diagram of typical cluster http://insightdataengineering.com/blog/pipeline_map.html

Example MapReduce Algorithms
» Matrix-vector multiplication
» Power iteration (e.g. PageRank)
» Gradient descent methods
» Stochastic SVD
» Tall skinny QR
» Many others!
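For instance, sparse matrix-vector multiplication fits the model directly: map each entry A(i, j) to a partial product, then reduce by row. A minimal Spark sketch under assumed inputs (the matrix as an RDD of (row, col, value) entries, a small dense vector x broadcast to every node):

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: y = A * x in MapReduce style, on made-up data.
object MatVecSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("MatVec").setMaster("local[*]"))
    val entries = sc.parallelize(Seq((0, 0, 2.0), (0, 1, 1.0), (1, 1, 3.0)))
    val x = sc.broadcast(Array(1.0, 4.0))            // assumes x fits in memory

    val y = entries
      .map { case (i, j, v) => (i, v * x.value(j)) } // map: partial products
      .reduceByKey(_ + _)                            // reduce: sum per row

    y.collect().foreach(println)                     // (0,6.0), (1,12.0)
    sc.stop()
  }
}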

Why Use a Data Flow Engine?
Ease of programming
» High-level functions instead of message passing
Wide deployment
» More common than MPI, especially near data
Scalability to the very largest clusters
» Even the HPC world is now concerned about resilience
Examples: Pig, Hive, Scalding

Limitations of MapReduce

Limitations of MapReduce
MapReduce is great at one-pass computation, but inefficient for multi-pass algorithms. No efficient primitives for data sharing:
» State between steps goes to the distributed file system
» Slow due to replication & disk storage

Example: Iterative Apps
(Diagram: each iteration reads its input from the file system and writes its state back: file system read, iter. 1, file system write, file system read, iter. 2, ...; likewise each of query 1, query 2, query 3 re-reads the input to produce its result)
Commonly spend 90% of time doing I/O

Example: PageRank
Repeatedly multiply a sparse matrix and a vector. Requires repeatedly hashing together the page adjacency lists (Neighbors: (id, edges)) and the rank vector (Ranks: (id, rank)).
(Diagram: the same Neighbors file is grouped with Ranks over and over in iteration 1, iteration 2, iteration 3)
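A minimal Spark sketch of this loop, to make the repeated join explicit (the three-page graph is made up; 0.85 is the usual damping factor). In plain MapReduce, each iteration would re-read and re-shuffle the adjacency lists from disk:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: iterative PageRank, joining adjacency lists with ranks each round.
object PageRankSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("PageRank").setMaster("local[*]"))
    val links = sc.parallelize(Seq(
      ("a", Seq("b", "c")), ("b", Seq("c")), ("c", Seq("a"))
    )).cache()                                   // keep adjacency lists in memory
    var ranks = links.mapValues(_ => 1.0)

    for (_ <- 1 to 10) {
      val contribs = links.join(ranks).flatMap { // hash neighbors and ranks together
        case (_, (neighbors, rank)) =>
          neighbors.map(n => (n, rank / neighbors.size))
      }
      ranks = contribs.reduceByKey(_ + _).mapValues(v => 0.15 + 0.85 * v)
    }
    ranks.collect().foreach(println)
    sc.stop()
  }
}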

Result While MapReduce is simple, it can require asymptotically more communication or I/O

Verdict
MapReduce algorithms research doesn't go to waste; it just gets sped up and made easier to use. Still useful to study as an algorithmic framework, but silly to use directly.

Spark computing engine

Spark Computing Engine
Extends a programming language with a distributed collection data structure
» Resilient Distributed Datasets (RDDs)
Open source at Apache
» Most active community in big data, with 50+ companies contributing
Clean APIs in Java, Scala, Python, R

Resilient Distributed Datasets (RDDs)
Main idea: Resilient Distributed Datasets
» Immutable collections of objects, spread across a cluster
» Statically typed: RDD[T] has objects of type T

val sc = new SparkContext()
val lines = sc.textFile("log.txt") // RDD[String]

// Transform using standard collection operations
// (transformations are lazily evaluated)
val errors = lines.filter(_.startsWith("ERROR"))
val messages = errors.map(_.split("\t")(2))

// saveAsTextFile is an action: it kicks off the computation
messages.saveAsTextFile("errors.txt")

Key Idea
Resilient Distributed Datasets (RDDs)
» Collections of objects across a cluster with user-controlled partitioning & storage (memory, disk, ...)
» Built via parallel transformations (map, filter, ...)
» The world only lets you make RDDs such that they can be: automatically rebuilt on failure
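A small sketch of that user control, continuing with the sc from the previous slide (the storage level and partitioner shown are standard Spark API; the file name is made up):

import org.apache.spark.HashPartitioner
import org.apache.spark.storage.StorageLevel

// Sketch: choosing the partitioning and storage of an RDD.
val pairs = sc.textFile("log.txt").map(line => (line.split("\t")(0), 1))
val partitioned = pairs.partitionBy(new HashPartitioner(8)) // user-chosen partitioning
partitioned.persist(StorageLevel.MEMORY_AND_DISK)           // keep in memory, spill to disk
partitioned.count()                                         // first action materializes it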

Python, Java, Scala, R

// Scala:
val lines = sc.textFile(...)
lines.filter(x => x.contains("ERROR")).count()

// Java (better in Java 8!):
JavaRDD<String> lines = sc.textFile(...);
lines.filter(new Function<String, Boolean>() {
  Boolean call(String s) { return s.contains("ERROR"); }
}).count();

Fault Tolerance
RDDs track lineage info to rebuild lost data

file.map(lambda rec: (rec.type, 1)) \
    .reduceByKey(lambda x, y: x + y) \
    .filter(lambda type_count: type_count[1] > 10)

(Diagram: lineage graph Input file → map → reduce → filter)

Partitioning
RDDs know their partitioning functions

file.map(lambda rec: (rec.type, 1)) \
    .reduceByKey(lambda x, y: x + y) \
    .filter(lambda type_count: type_count[1] > 10)

# The reduceByKey output is known to be hash-partitioned;
# the filter output's partitioning is also known.

(Diagram: Input file → map → reduce → filter)
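The same fact in a runnable Scala sketch (continuing with the sc from earlier; the file name is made up): a shuffle operation installs a partitioner, and later operations on the same keys can reuse it instead of shuffling again.

// Sketch: inspecting and reusing a known partitioner.
val counts = sc.textFile("log.txt")
  .map(line => (line.split("\t")(0), 1))
  .reduceByKey(_ + _)           // result carries a HashPartitioner
println(counts.partitioner)     // Some(org.apache.spark.HashPartitioner@...)
counts.join(counts).count()     // co-partitioned join: no extra shuffle of counts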

State of the Spark ecosystem

Spark Community
Most active open source community in big data: 200+ developers, 50+ companies contributing
(Chart: contributors in the past year; Spark far ahead of Giraph and Storm)

Project Activity
(Charts: commits and lines of code changed over the past 6 months; Spark leads MapReduce, YARN, HDFS, and Storm on both measures)

Continuing Growth
(Chart: contributors per month to Spark; source: ohloh.net)

Built-in libraries

Standard Library for Big Data
Big data apps lack libraries of common algorithms. Spark's generality + support for multiple languages make it suitable to offer this.
(Diagram: Python, Scala, Java, and R bindings over SQL, ML, and graph libraries, all on Spark Core)
Much of future activity will be in these libraries

A General Platform
Standard libraries included with Spark:
» Spark SQL (structured data)
» Spark Streaming (real-time)
» GraphX (graph)
» MLlib (machine learning)
all built on Spark Core

Machine Learning Library (MLlib)

points = context.sql("select latitude, longitude from tweets")
model = KMeans.train(points, 10)

40 contributors in the past year

MLlib algorithms
» classification: logistic regression, linear SVM, naïve Bayes, classification tree
» regression: generalized linear models (GLMs), regression tree
» collaborative filtering: alternating least squares (ALS), non-negative matrix factorization (NMF)
» clustering: k-means
» decomposition: SVD, PCA
» optimization: stochastic gradient descent, L-BFGS
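As a taste of the API, a minimal sketch of training one of these models with MLlib's RDD-based interface (the two-point dataset is made up purely for illustration):

import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

// Sketch: train and apply a logistic regression classifier.
val data = sc.parallelize(Seq(
  LabeledPoint(0.0, Vectors.dense(0.0, 1.0)),
  LabeledPoint(1.0, Vectors.dense(2.0, 0.5))
))
val model = new LogisticRegressionWithLBFGS().setNumClasses(2).run(data)
println(model.predict(Vectors.dense(1.5, 0.4))) // predicted class label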

GraphX

GraphX
General graph processing library
» Build graph using RDDs of nodes and edges
» Large library of graph algorithms with composable steps
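A minimal sketch of that pattern (the tiny three-node graph is made up; Graph and pageRank are the standard GraphX API):

import org.apache.spark.graphx.{Edge, Graph}

// Sketch: build a graph from RDDs of vertices and edges,
// then run a built-in algorithm.
val vertices = sc.parallelize(Seq((1L, "a"), (2L, "b"), (3L, "c")))
val edges = sc.parallelize(Seq(Edge(1L, 2L, 1), Edge(2L, 3L, 1), Edge(3L, 1L, 1)))
val graph = Graph(vertices, edges)
val ranks = graph.pageRank(0.001).vertices  // built-in PageRank, tolerance 0.001
ranks.collect().foreach(println)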

Spark Streaming
Run a streaming computation as a series of very small, deterministic batch jobs
» Chop up the live stream into batches of X seconds
» Spark treats each batch of data as RDDs and processes them using RDD operations
» Finally, the processed results of the RDD operations are returned in batches
(Diagram: live data stream → Spark Streaming → batches of X seconds → Spark → processed results)

Spark Streaming
Run a streaming computation as a series of very small, deterministic batch jobs
» Batch sizes as low as ½ second, latency ~1 second
» Potential for combining batch processing and streaming processing in the same system
(Diagram: live data stream → Spark Streaming → batches of X seconds → Spark → processed results)
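A minimal DStream sketch of this model, the classic socket word count (the host and port are placeholders, e.g. fed by nc -lk 9999):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Sketch: word count over 1-second micro-batches.
val conf = new SparkConf().setAppName("StreamingSketch").setMaster("local[2]")
val ssc = new StreamingContext(conf, Seconds(1))    // batch size = 1 second
val lines = ssc.socketTextStream("localhost", 9999) // placeholder text source
val counts = lines.flatMap(_.split(" ")).map(w => (w, 1)).reduceByKey(_ + _)
counts.print()                                      // each batch is handled as RDDs
ssc.start()
ssc.awaitTermination()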

Spark SQL

// Run SQL statements
val teenagers = context.sql(
  "SELECT name FROM people WHERE age >= 13 AND age <= 19")

// The results of SQL queries are RDDs of Row objects
val names = teenagers.map(t => "Name: " + t(0)).collect()

Spark SQL
Enables loading & querying structured data in Spark

From Hive:
c = HiveContext(sc)
rows = c.sql("select text, year from hivetable")
rows.filter(lambda r: r.year > 2013).collect()

From JSON:
c.jsonFile("tweets.json").registerAsTable("tweets")
c.sql("select text, user.name from tweets")

tweets.json:
{"text": "hi", "user": {"name": "matei", "id": 123}}