Distributed Machine Learning on Spark


Distributed Machine Learning on Spark Reza Zadeh @Reza_Zadeh http://reza-zadeh.com

Outline Data flow vs. traditional network programming Spark computing engine Optimization Example Matrix Computations MLlib + {Streaming, GraphX, SQL} Future of MLlib

Data Flow Models Restrict the programming interface so that the system can do more automatically Express jobs as graphs of high-level operators» System picks how to split each operator into tasks and where to run each task» Runs parts twice for fault recovery Biggest example: MapReduce (diagram: several Map tasks feeding several Reduce tasks)
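The data-flow idea can be sketched locally: the programmer supplies only the high-level map and reduce operators, and the system is then free to split them into tasks and re-run pieces on failure. A toy, single-machine word count in Python (function names invented for illustration):

```python
from collections import defaultdict

def map_phase(lines):
    # Map: each line emits (word, 1) pairs independently of every other
    # line, which is what lets the system split this step across tasks.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Reduce: group pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["spark is fast", "spark is general"]
counts = reduce_phase(map_phase(lines))  # {"spark": 2, "is": 2, ...}
```

Because neither phase depends on where its inputs live, the same two functions could run on one machine or a thousand.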

Spark Computing Engine Extends a programming language with a distributed collection data-structure» Resilient distributed datasets (RDD) Open source at Apache» Most active community in big data, with 50+ companies contributing Clean APIs in Java, Scala, Python Community: SparkR

Key Idea Resilient Distributed Datasets (RDDs)» Collections of objects across a cluster with user controlled partitioning & storage (memory, disk,...)» Built via parallel transformations (map, filter, ...)» The world only lets you make RDDs such that they can be: Automatically rebuilt on failure
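A minimal single-machine sketch of that idea (plain Python, no Spark; the ToyRDD class is invented for illustration): each dataset records only its parent and the transformation that produced it, so a lost result can be rebuilt by replaying the lineage rather than by replicating data.

```python
class ToyRDD:
    """Toy dataset that records its lineage (parent + transformation)."""
    def __init__(self, source=None, parent=None, fn=None):
        self.source, self.parent, self.fn = source, parent, fn

    def map(self, f):
        # Transformations don't compute anything; they extend the lineage.
        return ToyRDD(parent=self, fn=lambda data: [f(x) for x in data])

    def filter(self, pred):
        return ToyRDD(parent=self, fn=lambda data: [x for x in data if pred(x)])

    def compute(self):
        # Replay the lineage from the original source; this replay is what
        # makes automatic rebuild-on-failure possible.
        if self.parent is None:
            return list(self.source)
        return self.fn(self.parent.compute())

base = ToyRDD(source=[1, 2, 3, 4])
result = base.map(lambda x: x * 10).filter(lambda x: x > 15)
values = result.compute()  # replayed from source: [20, 30, 40]
```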

MLlib History MLlib is a Spark subproject providing machine learning primitives Initial contribution from AMPLab, UC Berkeley Shipped with Spark since Sept 2013

MLlib: Available algorithms
classification: logistic regression, linear SVM, naïve Bayes, least squares, classification tree
regression: generalized linear models (GLMs), regression tree
collaborative filtering: alternating least squares (ALS), non-negative matrix factorization (NMF)
clustering: k-means
decomposition: SVD, PCA
optimization: stochastic gradient descent, L-BFGS
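Of the listed algorithms, clustering is the easiest to sketch by hand. Below is a plain-NumPy version of Lloyd's k-means iterations, purely illustrative and unrelated to MLlib's actual implementation; data and names are invented:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centers by picking k distinct points at random.
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers, labels = kmeans(pts, k=2)
```

The assignment step is embarrassingly parallel over points, which is why k-means distributes so naturally over an RDD.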

Optimization At least two large classes of optimization problems humans can solve:» Convex Programs» Spectral Problems

Optimization Example

Logistic Regression

data = sc.textFile(...).map(readPoint).cache()
w = numpy.random.rand(D)
for i in range(iterations):
    gradient = data.map(
        lambda p: (1 / (1 + exp(-p.y * w.dot(p.x))) - 1) * p.y * p.x
    ).reduce(lambda a, b: a + b)
    w -= gradient
print("Final w: %s" % w)
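The same per-point gradient term can be checked locally with NumPy, no Spark needed. The toy data and dimensions below are invented for illustration, and the gradient is averaged (with a unit step size) for stable convergence on a single machine:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 3
X = rng.normal(size=(200, D))
true_w = np.array([1.0, -2.0, 0.5])
y = np.where(X @ true_w > 0, 1.0, -1.0)   # labels in {-1, +1}

w = rng.random(D)
for i in range(100):
    # Same per-point term as the Spark job: (1/(1+exp(-y w.x)) - 1) * y * x,
    # here averaged over the points instead of summed.
    margins = y * (X @ w)
    coeff = (1.0 / (1.0 + np.exp(-margins)) - 1.0) * y
    gradient = (coeff[:, None] * X).mean(axis=0)
    w -= gradient

accuracy = float(np.mean(np.sign(X @ w) == y))
```

In the Spark version the map computes each point's term and the reduce sums them; the driver applies the update, exactly as the loop above does in miniature.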

Logistic Regression Results (chart: running time vs. number of iterations, 1 to 30): Hadoop takes roughly 110 s per iteration; Spark takes 80 s for the first iteration and about 1 s for each further iteration. 100 GB of data on 50 m1.xlarge EC2 machines.

Behavior with Less RAM (chart): iteration time falls from 68.8 s with 0% of the working set in memory, to 58.1 s at 25%, 40.7 s at 50%, 29.7 s at 75%, and 11.5 s at 100%.

Distributing Matrix Computations

Distributing Matrices How to distribute a matrix across machines?» By Entries (CoordinateMatrix)» By Rows (RowMatrix)» By Blocks (BlockMatrix) As of version 1.3. All of linear algebra to be rebuilt using these partitioning schemes

Distributing Matrices Even the simplest operations require thinking about communication e.g. multiplication How many different matrix multiplies needed?» At least one per pair of {Coordinate, Row, Block, LocalDense, LocalSparse} = 10» More, because matrix multiplication is not commutative (operand order matters)
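A local sketch of the block-partitioned case (NumPy, illustrative only): each machine would hold one block, and every output block is assembled from block-level products, which is where the communication cost lives.

```python
import numpy as np

def block_multiply(A_blocks, B_blocks):
    """Multiply matrices stored as dicts mapping (block_row, block_col) -> array."""
    n_i = 1 + max(i for i, _ in A_blocks)
    n_j = 1 + max(j for _, j in A_blocks)
    n_k = 1 + max(k for _, k in B_blocks)
    # Each output block sums several block products; in a cluster that
    # sum is the shuffle-and-reduce step across machines.
    return {(i, k): sum(A_blocks[i, j] @ B_blocks[j, k] for j in range(n_j))
            for i in range(n_i) for k in range(n_k)}

rng = np.random.default_rng(0)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
split = lambda M: {(i, j): M[2*i:2*i+2, 2*j:2*j+2]
                   for i in range(2) for j in range(2)}
C_blocks = block_multiply(split(A), split(B))
C = np.block([[C_blocks[0, 0], C_blocks[0, 1]],
              [C_blocks[1, 0], C_blocks[1, 1]]])
```

Multiplying a row-partitioned matrix by an entry-partitioned one needs a different data movement pattern entirely, which is why each pairing of schemes needs its own multiply.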

Singular Value Decomposition on Spark

Singular Value Decomposition

Singular Value Decomposition Two cases» Tall and Skinny» Short and Fat (not really)» Roughly Square. The SVD method on RowMatrix takes care of which one to call.

Tall and Skinny SVD

Tall and Skinny SVD Gets us V and the singular values Gets us U by one matrix multiplication
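The tall-and-skinny scheme can be checked locally: form the small d×d Gram matrix AᵀA (in Spark, one reduce over the rows), eigendecompose it to get V and the singular values, then recover U with one matrix multiply. A NumPy sketch with invented dimensions, assuming A has full column rank:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 4))   # tall and skinny: n = 1000 rows >> d = 4 columns

# The d x d Gram matrix is tiny and fits on the driver.
gram = A.T @ A
eigvals, V = np.linalg.eigh(gram)
order = np.argsort(eigvals)[::-1]     # sort eigenpairs largest-first
sigma = np.sqrt(eigvals[order])       # singular values of A
V = V[:, order]

# One (distributed, in Spark) matrix multiplication recovers U.
U = (A @ V) / sigma
```

The only large object ever multiplied is A itself, once, which is exactly the "Gets us U by one matrix multiplication" step above.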

Square SVD ARPACK: Very mature Fortran77 package for computing eigenvalue decompositions. JNI interface available via netlib-java. Distributed using Spark how?

Square SVD via ARPACK Only interfaces with distributed matrix via matrix-vector multiplies The result of matrix-vector multiply is small. The multiplication can be distributed.
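That property can be illustrated with power iteration, a simpler relative of what ARPACK does: the solver only ever calls a matrix-vector product (here a plain function standing in for the distributed multiply) and only ever sees small vectors. A NumPy sketch with invented dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 6))

def matvec(v):
    # The only "distributed" step: each row partition would compute its
    # rows @ v locally, and the small 6-dimensional results are summed.
    return A.T @ (A @ v)   # implicitly multiplies by the 6 x 6 Gram matrix

v = rng.normal(size=6)
for _ in range(1000):       # power iteration: repeated matvec + normalize
    v = matvec(v)
    v /= np.linalg.norm(v)

top_singular_value = np.sqrt(v @ matvec(v))   # Rayleigh quotient of A^T A
```

The driver never materializes AᵀA or A; it just issues matvec calls, which is exactly the interface ARPACK requires.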

Square SVD With 68 executors and 8GB memory in each, looking for the top 5 singular vectors

Communication-Efficient All pairs similarity on Spark (DIMSUM)

All pairs Similarity All pairs of cosine scores between n vectors» Don't want to brute force: (n choose 2) m work» Essentially computes the normalized entries of AᵀA» Compute via DIMSUM» Dimension Independent Similarity Computation using MapReduce

Intuition Sample columns that have many non-zeros with lower probability. On the flip side, columns that have fewer non-zeros are sampled with higher probability. Results provably correct and independent of larger dimension, m.
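For reference, the target quantity, every pairwise column cosine, is just the normalized Gram matrix. The NumPy brute force below (toy data, invented dimensions) shows what DIMSUM estimates; the sampling scheme itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))   # m = 100 rows (the large dimension), n = 5 columns

# Brute force: cosine similarity of every column pair, i.e. each entry of
# A^T A divided by the product of the two column norms.
norms = np.linalg.norm(A, axis=0)
cosines = (A.T @ A) / np.outer(norms, norms)
```

DIMSUM recovers this small n×n matrix approximately while emitting only a sampled subset of entry pairs per row, with an error bound that does not depend on m.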

Spark implementation

MLlib + {Streaming, GraphX, SQL}

A General Platform Standard libraries included with Spark Spark SQL structured Spark Streaming real-time GraphX graph MLlib machine learning Spark Core

Benefit for Users Same engine performs data extraction, model training and interactive queries (diagram): with separate engines, each stage (parse, train, query) reads from DFS and writes back to DFS; with Spark, a single DFS read feeds parse, train and query in memory.

MLlib + Streaming As of Spark 1.1, you can train linear models in a streaming fashion, and k-means as of 1.2 Model weights are updated via SGD, thus amenable to streaming More work needed for decision trees
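The streaming update described above, model weights revised by SGD on each arriving mini-batch, can be sketched without Spark; the batch stream is simulated with a plain loop and all names and data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # invented ground-truth weights
w = np.zeros(2)                  # model weights, revised as batches arrive
lr = 0.1

def next_minibatch(size=32):
    # Stand-in for a streaming mini-batch: noisy linear observations.
    X = rng.normal(size=(size, 2))
    y = X @ true_w + 0.01 * rng.normal(size=size)
    return X, y

for step in range(300):
    X, y = next_minibatch()
    grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient on this batch
    w -= lr * grad                       # one SGD step per arriving batch

error = float(np.linalg.norm(w - true_w))
```

Because each update touches only the current batch, linear models fit the streaming setting naturally; tree ensembles have no such incremental update, hence the extra work noted above.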

MLlib + SQL

points = context.sql("select latitude, longitude from tweets")
model = KMeans.train(points, 10)

DataFrames coming in Spark 1.3 (March 2015)

MLlib + GraphX

Future of MLlib

Research Goal: General Distributed Optimization Distribute CVX by backing CVXPY with PySpark Easy-to-express distributable convex programs Need to know less math to optimize complicated objectives

Spark Community Most active open source community in big data 200+ developers, 50+ companies contributing (chart: contributors in the past year, with Spark far ahead of Giraph and Storm)

Continuing Growth Contributors per month to Spark (source: ohloh.net)

Spark and ML Spark has all its roots in research, so we hope to keep incorporating new ideas!