Big Data Analytics. Map-Reduce and Spark. Slides by: Joseph E. Gonzalez Guest Lecturer: Vikram Sreekanti
2 From SQL to Big Data (with SQL) Ø A few weeks ago Ø Databases Ø (Relational) Database Management Systems Ø SQL: Structured Query Language Ø Today Ø More on databases and database design Ø Enterprise data management and the data lake Ø Introduction to distributed data storage and processing Ø Spark
3 Data in the Organization A little bit of buzzword bingo!
4 Inventory How we like to think of data in the organization
5 The reality Sales (Asia) Inventory Sales (US) Advertising
6 Operational Data Stores (Sales (Asia), Sales (US), Inventory, Advertising) Ø Capture the now Ø Many different databases across an organization Ø Mission critical: be careful! Ø Serving live, ongoing business operations: managing inventory Ø Different formats (e.g., currency) Ø Different schemas (acquisitions...) Ø Live systems often don't maintain history We would like a consolidated, clean, historical snapshot of the data.
7 Sales (Asia) Sales (US) ETL ETL Data Warehouse Collects and organizes historical data from multiple sources Inventory Advertising ETL ETL Data is periodically ETLed into the data warehouse: Ø Extracted from remote sources Ø Transformed to standard schemas Ø Loaded into the (typically) relational (SQL) data system
8 Extract → Transform → Load (ETL) Extract & Load: provides a snapshot of operational data Ø Historical snapshot Ø Data in a single system Ø Isolates analytics queries (e.g., Deep Learning) from business-critical services (e.g., processing user purchases) Ø Easy! Transform: clean and prepare data for analytics in a unified representation Ø Difficult → often requires specialized code and tools Ø Different schemas, encodings, granularities
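A toy sketch of the Transform step: unifying two regional sales feeds with different currencies and date formats into one schema. The field names and the exchange rate here are hypothetical, for illustration only:

```python
# Hedged sketch: unify two regional sales feeds into one warehouse schema.
# Field names and the KRW->USD rate are illustrative assumptions.
KRW_PER_USD = 1300.0

def transform_asia(rec):
    # Asia feed: amounts in KRW, dates as "YYYY/M/D"
    y, m, d = rec["date"].split("/")
    return {"sale_usd": round(rec["amount_krw"] / KRW_PER_USD, 2),
            "date": f"{m}/{d}/{y[2:]}"}   # normalize to M/D/YY

def transform_us(rec):
    # US feed: already in USD and M/D/YY; pass through unchanged
    return {"sale_usd": rec["amount_usd"], "date": rec["date"]}

warehouse = ([transform_asia(r) for r in [{"amount_krw": 13000.0, "date": "2016/3/30"}]] +
             [transform_us(r) for r in [{"amount_usd": 25.0, "date": "3/31/16"}]])
```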
9 Sales (Asia) Sales (US) ETL ETL Data Warehouse Collects and organizes historical data from multiple sources Inventory Advertising ETL ETL How is data organized in the Data Warehouse?
10 Example Sales Data
pname   | category | price | qty | date    | day  | city  | state | country
Corn    | Food     | …     | …   | 3/30/16 | Wed. | Omaha | NE    | USA
Corn    | Food     | …     | …   | 3/31/16 | Thu. | Omaha | NE    | USA
Corn    | Food     | …     | …   | 4/1/16  | Fri. | Omaha | NE    | USA
Galaxy  | Phones   | …     | …   | 3/30/16 | Wed. | Omaha | NE    | USA
Galaxy  | Phones   | 18    | 20  | 3/31/16 | Thu. | Omaha | NE    | USA
Galaxy  | Phones   | …     | …   | 4/1/16  | Fri. | Omaha | NE    | USA
Peanuts | Food     | …     | …   | 3/31/16 | Thu. | Seoul |       | Korea
Galaxy  | Phones   | …     | …   | 4/1/16  | Fri. | Seoul |       | Korea
Ø Big table: many columns and rows
Ø Substantial redundancy → expensive to store and access
Ø Easy to make mistakes while updating
Ø Could we organize the data more efficiently?
11 Multidimensional Data Model
Sales Fact Table: pid | timeid | locid | sales
Dimension Tables:
Locations: locid | city | state | country: (1, Omaha, Nebraska, USA), (2, Seoul, , Korea), (5, Richmond, Virginia, USA)
Products: pid | pname | category | price: (11, Corn, Food, …), (…, Galaxy 1, Phones, …), (…, Peanuts, Food, …)
Time: timeid | Date | Day: (1, 3/30/16, Wed.), (2, 3/31/16, Thu.), (3, 4/1/16, Fri.)
Ø Multidimensional Cube of data: Product Id × Location Id × Time Id
12 Multidimensional Data Model (same fact table and dimension tables as slide 11)
Ø Fact Table: minimizes redundant info; reduces data errors
Ø Dimensions: easy to manage and summarize; Rename: Galaxy1 → Phablet
Ø Normalized Representation
Ø How do we do analysis? Joins!
13 The Star Schema: Products (pid, pname, category, price); Time (timeid, Date, Day); Sales Fact Table (pid, timeid, locid, sales); Locations (locid, city, state, country) ← This looks like a star
14 The Star Schema ← This looks like a star
15 The Star Schema ← This looks like a star
16 The Snowflake Schema ← This looks like a snowflake?
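A minimal sketch of querying a star schema, using Python's sqlite3 with the slides' tables. The sales amounts and the Galaxy 1 product id used below are illustrative assumptions, not values from the slides:

```python
import sqlite3

# Star schema from the slides; numeric values here are made up for illustration.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE sales (pid INT, timeid INT, locid INT, sales INT);
CREATE TABLE products (pid INT, pname TEXT, category TEXT);
CREATE TABLE locations (locid INT, city TEXT, country TEXT);
CREATE TABLE time (timeid INT, date TEXT, day TEXT);
INSERT INTO products VALUES (11, 'Corn', 'Food'), (12, 'Galaxy 1', 'Phones');
INSERT INTO locations VALUES (1, 'Omaha', 'USA'), (2, 'Seoul', 'Korea');
INSERT INTO time VALUES (1, '3/30/16', 'Wed.'), (2, '3/31/16', 'Thu.');
INSERT INTO sales VALUES (11, 1, 1, 25), (12, 2, 2, 30), (11, 2, 1, 8);
""")

# Analysis means joining the fact table back to each dimension table.
rows = db.execute("""
SELECT p.pname, l.city, t.day, SUM(s.sales)
FROM sales s
JOIN products p ON s.pid = p.pid
JOIN locations l ON s.locid = l.locid
JOIN time t ON s.timeid = t.timeid
GROUP BY p.pname, l.city, t.day
ORDER BY p.pname
""").fetchall()
```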
17 Which schema illustration would best organize this data? Data(calid, student_name, year, major, major_grade_req, asg1_name, asg1_pts, asg1_score, asg1_grader_name asg2_name, asg2_pts, asg2_score, asg2_grader_name avg_grade) Grades Students Graders Students Graders Majors Graders Grades Grades Majors (A) (B) (C) Assignments Assignments Students
18 Data(calid, student_name, year, major, major_grade_req, asg1_name, asg1_pts, asg1_score, asg1_grader_name asg2_name, asg2_pts, asg2_score, asg2_grader_name avg_grade) Grades(calid, asg_name, grader_id, score) Graders(grader_id, grader_name) Student(calid, name, year, major_name, avg_grade) Majors(major_name, grade_req) Assignments(asg_name, asg_pts) Grades Students Graders Students Graders Majors Graders Grades Grades Majors (A) (B) (C) Assignments Assignments Students
19 Online Analytics Processing (OLAP) Users interact with multidimensional data: Ø Constructing ad-hoc and often complex SQL queries Ø Using graphical tools to construct queries Ø Sharing views that summarize data across important dimensions
20 Cross Tabulation (Pivot Tables)
Item | Color | Quantity: (Desk, Blue, 2), (Desk, Red, 3), (Sofa, Blue, 4), (Sofa, Red, 5)
Pivot (SUM of Quantity):
      | Desk | Sofa | Sum
Blue  |  2   |  4   |  6
Red   |  3   |  5   |  8
Sum   |  5   |  9   | 14
Ø Aggregate data across pairs of dimensions
Ø Pivot Tables: graphical interface to select dimensions and an aggregation function (e.g., SUM, MAX, MEAN)
Ø GROUP BY queries
Ø Related to contingency tables and marginalization in statistics
Ø What about many dimensions?
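The cross-tabulation above amounts to a GROUP BY plus marginal sums. A minimal plain-Python sketch over the same furniture table:

```python
from collections import defaultdict

# Cross-tabulate the furniture table: SUM(Quantity) by (Item, Color),
# plus the row, column, and grand-total marginals shown on the slide.
data = [("Desk", "Blue", 2), ("Desk", "Red", 3),
        ("Sofa", "Blue", 4), ("Sofa", "Red", 5)]

pivot = defaultdict(int)    # (item, color) -> sum, like GROUP BY Item, Color
row_sum = defaultdict(int)  # marginal over colors, per item
col_sum = defaultdict(int)  # marginal over items, per color
total = 0
for item, color, qty in data:
    pivot[(item, color)] += qty
    row_sum[item] += qty
    col_sum[color] += qty
    total += qty
```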
21 Cube Operator
Ø Generalizes cross-tabulation to higher dimensions.
Ø In SQL:
SELECT Item, Color, SUM(Quantity) AS QtySum
FROM Furniture
GROUP BY CUBE (Item, Color);
Input: Item | Color | Quantity: (Desk, Blue, 2), (Desk, Red, 3), (Sofa, Blue, 4), (Sofa, Red, 5)
Output:
Item | Color | QtySum
Desk | Blue  |  2
Desk | Red   |  3
Desk | *     |  5
Sofa | Blue  |  4
Sofa | Red   |  5
Sofa | *     |  9
*    | *     | 14
*    | Blue  |  6
*    | Red   |  8
[Cube diagram: Product Id × Location Id × Time Id. What is in the * cells?]
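GROUP BY CUBE is equivalent to running a GROUP BY for every subset of the dimensions (the grouping sets). Since SQLite has no CUBE operator, this plain-Python sketch emulates it directly; '*' marks an aggregated-away dimension as on the slide:

```python
from collections import defaultdict
from itertools import product

# CUBE(Item, Color) = a GROUP BY over every subset of {Item, Color}.
data = [("Desk", "Blue", 2), ("Desk", "Red", 3),
        ("Sofa", "Blue", 4), ("Sofa", "Red", 5)]

cube = defaultdict(int)
for item, color, qty in data:
    # each record contributes to all 4 grouping sets: (Item,Color),
    # (Item,*), (*,Color), and (*,*)
    for keep_item, keep_color in product([True, False], repeat=2):
        key = (item if keep_item else "*", color if keep_color else "*")
        cube[key] += qty
```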
22 OLAP Queries Ø Slicing: selecting a value for a dimension Ø Dicing: selecting a range of values in multiple dimensions
23 OLAP Queries Ø Rollup: aggregating along a dimension Ø Drill-Down: de-aggregating along a dimension [figure: aggregating and de-aggregating the Day dimension]
24 Reporting and Business Intelligence (BI) Ø Analysts use high-level tools to interact with their data: Ø Automatically generate SQL queries Ø Queries can get big! Ø Common!
25 Sales (Asia) Sales (US) ETL ETL Data Warehouse Collects and organizes historical data from multiple sources Inventory Advertising ETL ETL So far Ø Star Schemas Ø Data cubes Ø OLAP Queries
26 Sales (Asia) Sales (US) Inventory Advertising Text/Log Data ETL ETL ETL ETL ETL? Photos & Videos It is Terrible! Data Warehouse Collects and organizes historical data from multiple sources Ø How do we deal with semistructured and unstructured data? Ø Do we really want to force a schema on load?
27 Data Warehouse: collects and organizes historical data from multiple sources (Sales (Asia) ETL, Sales (US) ETL, Inventory ETL, Advertising ETL, Text/Log Data ETL?, Photos & Videos ETL? It is Terrible!)
Ø How do we clean and organize this data? Depends on use...
Ø How do we load and process this data in a relational system? Can be difficult... Requires thought...
Ø How do we deal with semi-structured and unstructured data?
Ø Do we really want to force a schema on load?
28 Data Lake* Store a copy of all the data Ø in one place Ø in its original natural form (Sources: Sales (Asia), Sales (US), Inventory, Advertising, Text/Log Data, Photos & Videos) Enable data consumers to choose how to transform and use data. Ø Schema on Read What could go wrong? *Still being defined [Buzzword Disclaimer]
29 The Dark Side of Data Lakes Ø Cultural shift: Curate → Save Everything! Ø Noise begins to dominate signal Ø Limited data governance and planning Example: hdfs://important/joseph_big_file3.csv_with_json Ø What does it contain? Ø When and who created it? Ø No cleaning and verification → lots of dirty data Ø New tools are more complex and old tools no longer work Enter the data scientist
30 A Brighter Future for Data Lakes Enter the data scientist Ø Data scientists bring new skills Ø Distributed data processing and cleaning Ø Machine learning, computer vision, and statistical sampling Ø Technologies are improving Ø SQL over large files Ø Self describing file formats & catalog managers Ø Organizations are evolving Ø Tracking data usage and file permissions Ø New job title: data engineers
31 How do we store and compute on large unstructured datasets? Ø Requirements: Ø Handle very large files spanning multiple computers Ø Use cheap commodity devices that fail frequently Ø Process distributed data quickly and easily Ø Solutions: Ø Distributed file systems → spread data over multiple machines Ø Assume machine failure is common → redundancy Ø Distributed computing → load and process files on multiple machines concurrently Ø Assume machine failure is common → redundancy Ø Functional programming computational pattern → parallelism
32 Distributed File Systems Storing very large files
33 Fault Tolerant Distributed File Systems Big File How do we store and access very large files across cheap commodity devices? [Ghemawat et al., SOSP 03]
34 Fault Tolerant Distributed File Systems Big File Machine1 Machine 2 Machine 3 Machine 4 [Ghemawat et al., SOSP 03]
35 Fault Tolerant Distributed File Systems A B Big File C D Ø Split the file into smaller parts. Ø How? Ø Ideally at record boundaries Ø What if records are big? Machine1 Machine 2 Machine 3 Machine 4 [Ghemawat et al., SOSP 03]
36 Fault Tolerant Distributed File Systems B Big File C D Machine1 Machine 2 Machine 3 Machine 4 A A A [Ghemawat et al., SOSP 03]
37 Fault Tolerant Distributed File Systems Big File C D Machine1 Machine 2 Machine 3 Machine 4 B A B A A B [Ghemawat et al., SOSP 03]
38 Fault Tolerant Distributed File Systems Big File D Machine1 Machine 2 Machine 3 Machine 4 B C A B A C A B C [Ghemawat et al., SOSP 03]
39 Fault Tolerant Distributed File Systems Big File Machine1 Machine 2 Machine 3 Machine 4 B C A B A C A B D C D D [Ghemawat et al., SOSP 03]
40 Fault Tolerant Distributed File Systems Ø Split large files over multiple machines Ø Easily support massive files spanning machines Big File Ø Read parts of file in parallel Ø Fast reads of large files Ø Often built using cheap commodity storage devices Cheap commodity storage devices will fail! Machine1 B C D Machine 3 A C D Machine 2 A B C Machine 4 A B D
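The splitting-and-replication scheme above can be illustrated as follows. The round-robin placement policy and the replication factor of 3 are assumptions for illustration, not GFS's actual placement algorithm:

```python
# Sketch of replicated chunk placement, GFS-style.
# Chunk names and machine names mirror the slides' figures.
def place_chunks(chunks, machines, replication=3):
    """Assign each chunk to `replication` distinct machines."""
    placement = {c: [] for c in chunks}
    for i, chunk in enumerate(chunks):
        for r in range(replication):
            # round-robin with an offset so replicas land on distinct machines
            placement[chunk].append(machines[(i + r) % len(machines)])
    return placement

machines = ["Machine 1", "Machine 2", "Machine 3", "Machine 4"]
placement = place_chunks(["A", "B", "C", "D"], machines)
```

With 4 machines and replication factor 3, any single machine failure leaves at least two copies of every chunk, so the file can still be read and re-replicated.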
41 Fault Tolerant Distributed File Systems Failure Event Big File Machine1 Machine 2 Machine 3 Machine 4 B C A B A C A B D C D D [Ghemawat et al., SOSP 03]
42 Fault Tolerant Distributed File Systems Failure Event Big File Machine1 B C D Machine 2 Machine 3 Machine 4 A B A C A B C D D [Ghemawat et al., SOSP 03]
43 Fault Tolerant Distributed File Systems Failure Event Big File Machine 2 Machine 4 A B A B C D [Ghemawat et al., SOSP 03]
44 Fault Tolerant Distributed File Systems Failure Event A Big File C B D Machine 2 Machine 4 A B A B C D [Ghemawat et al., SOSP 03]
45 Map-Reduce: Distributed Aggregation. Computing over very large files
46 How would you compute the number of occurrences of each word in all the books using a team of people?
47 Simple Solution
48 Simple Solution 1) Divide Books Across Individuals
49 Simple Solution 1) Divide Books Across Individuals 2) Compute Counts Locally Word Count Apple 2 Bird 7 Word Count Apple 0 Bird 1
50 Simple Solution 1) Divide Books Across Individuals 2) Compute Counts Locally 3) Aggregate Tables: (Apple 2, Bird 7) + (Apple 0, Bird 1) → (Apple 2, Bird 8)
51 The Map Reduce Abstraction
Record → Map → (Key, Value) pairs; values grouped by Key → Reduce → (Key, Value)
Example: Word-Count
Map(docRecord) {
  for (word in docRecord) {
    emit(word, 1)
  }
}
Reduce(word, counts) {
  emit(word, SUM(counts))
}
Map: Deterministic. Reduce: Commutative and Associative. [Dean & Ghemawat, OSDI 04]
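A runnable simulation of the word-count Map and Reduce above, with an explicit shuffle step that groups emitted values by key. The documents are made up so the final counts match the slides' running example:

```python
from collections import defaultdict

# Minimal simulation of map-reduce word count.
def map_fn(doc):
    for word in doc.split():
        yield (word, 1)            # emit (word, 1)

def reduce_fn(word, counts):
    return (word, sum(counts))     # SUM is commutative and associative

docs = ["apple cat the", "big cat the cat", "the cat dog", "big cat dog"]

# shuffle: group all emitted values by key
grouped = defaultdict(list)
for doc in docs:
    for word, one in map_fn(doc):
        grouped[word].append(one)

counts = dict(reduce_fn(w, cs) for w, cs in grouped.items())
```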
52 Key Properties of Map and Reduce Ø Deterministic Map: allows for re-execution on failure Ø If some computation is lost we can always re-compute Ø Issues with samples? Ø Commutative Reduce: allows for re-ordering of operations Ø Reduce(A, B) = Reduce(B, A) Ø Example (addition): A + B = B + A Ø Is floating point math commutative? Ø Associative Reduce: allows for regrouping of operations Ø Reduce(Reduce(A, B), C) = Reduce(A, Reduce(B, C)) Ø Example (max): Max(Max(A, B), C) = Max(A, Max(B, C))
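A quick check of the floating-point question above: addition of floats is commutative, but regrouping it (associativity) can change the result slightly, which matters when a Reduce reorders and regroups partial sums:

```python
# Floating-point addition is commutative but not exactly associative,
# so different Reduce groupings can give slightly different sums.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c      # 0.6000000000000001
right = a + (b + c)     # 0.6
commutative = (a + b) == (b + a)
```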
53 Executing Map Reduce Record Map Key Key Value Value
54 Machine1 B C D Executing Map Reduce Machine 2 A B C Machine 3 A C D Machine 4 A B D Map Distributing the Map Function
55 Machine1 B C D Machine 2 A B C Machine 3 A C D Machine 4 A B D Map Map Map Map Executing Map Reduce Distributing the Map Function
56 Machine1 B C D Machine 2 A B C Machine 3 A C D Map Map Map apple cat the big cat the cat dog Executing Map Reduce The map function applied to a local part of the big file. Run in Parallel. Machine 4 A B D Map big dog 1 1 Output is cached for fast recovery on node failure
57 Machine1 B C D Machine 2 A B C Map Map apple cat the big cat the Executing Map Reduce Machine 3 A C D Map cat dog 1 1 Key Value Value Reduce Key Value Machine 4 A B D Map big dog 1 1 Reduce function can be run on many machines
58 Machine1 B C D Machine 2 A B C Machine 3 A C D Map Map Map Run in Parallel apple 1 Reduce apple 1 the big Reduce the 3 cat Reduce cat 5 Reduce big 2 Machine 4 A B D Map dog 1 1 Reduce dog 2
59 Machine 1 (B C D), Machine 2 (A B C), Machine 3 (A C D), Machine 4 (A B D): Map → Reduce → Output File: apple 1, the 3, cat 5, big 2, dog 2
60 Machine1 B C D Machine 2 A B C Machine 3 A C D Map Map Map apple the cat big Reduce Reduce Reduce Reduce Output File apple the cat big dog If part of the file or any intermediate computation is lost we can simply recompute it without recomputing everything. Machine 4 A B D Map dog 1 1 Reduce
61 Interacting with Scale Map-Reduce
62 Interacting With the Data Good for smaller datasets: Ø Faster, more natural interaction Ø Lots of tools! [Request Data Sample → Sample of Data → Analysis Algorithm → Compute Locally: M_{r ∈ Data} f(r)] Can we send the computation to the data? Yes!
63 Statistical Query Pattern Common Machine Learning Pattern Ø Computing aggregates of user-defined functions Ø Data-parallel computation: Response = M_{r ∈ Data} f(r), where f is a user-defined function [UDF] and M is a user-defined aggregate [UDA]. The iterative algorithm sends Query: f and receives the Response. D. Caragea et al., A Framework for Learning from Distributed Data Using Sufficient Statistics and Its Application to Learning Decision Trees. Int. J. Hybrid Intell. Syst. 2004
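A sketch of the statistical query pattern: with f(r) = (r, 1) and M = componentwise sum, the driver can recover a mean from purely local aggregates. The partitioning and the data below are illustrative assumptions:

```python
# Statistical query pattern sketch: ship a UDF f and a UDA M to the data.
partitions = [[1.0, 2.0, 3.0], [4.0, 5.0]]   # data spread over "machines"

def f(r):
    # user-defined function, applied per record: carry value and count
    return (r, 1)

def M(pairs):
    # user-defined aggregate: componentwise sum over (value, count) pairs
    total = count = 0
    for v, c in pairs:
        total += v
        count += c
    return (total, count)

# each partition computes M over f(r) locally; the driver combines results
local = [M(f(r) for r in part) for part in partitions]
total, count = M(local)
mean = total / count
```

Because M is commutative and associative, the per-partition results can be combined in any order, which is exactly what makes the pattern data-parallel.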
64 [Diagram build: the iterative algorithm repeatedly sends Query: f; each machine computes M_{r ∈ Data} f(r) over its local data and returns the Response.]
65 Interacting With the Data Good for smaller datasets (faster, more natural interaction; lots of tools!): Request Data Sample → Compute Locally. Good for bigger datasets and compute-intensive tasks: send Query: f to Cluster Compute, receive Response: M_{r ∈ Data} f(r)
66 Map Reduce Technologies
67 Hadoop Ø First open-source map-reduce software Ø Managed by the Apache foundation Ø Based on Google's Ø Google File System Ø MapReduce Ø Companies formed around Hadoop: Ø Cloudera Ø Hortonworks Ø MapR
68 Hadoop Ø Very active open source ecosystem Ø Several key technologies Ø HDFS: Hadoop File System Ø MapReduce: map-reduce compute framework Ø YARN: Yet another resource negotiator Ø Hive: SQL queries over MapReduce Ø
69 Spark: In-Memory Dataflow System. Developed at the UC Berkeley AMPLab. M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica. Spark: Cluster Computing with Working Sets. HotCloud '10. M. Zaharia, M. Chowdhury, T. Das, A. Dave, J. Ma, M. McCauley, M. J. Franklin, S. Shenker, I. Stoica. Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing. NSDI 2012.
70 What Is Spark? Ø Parallel execution engine for big data processing Ø General: efficient support for multiple workloads Ø Easy to use: 2-5x less code than Hadoop MR Ø High-level APIs in Python, Java, and Scala Ø Fast: up to 100x faster than Hadoop MR Ø Can exploit in-memory data when available Ø Low-overhead scheduling, optimized engine
71 Spark Programming Abstraction Ø Write programs in terms of transformations on distributed datasets Ø Resilient Distributed Datasets (RDDs) Ø Distributed collections of objects that can be stored in memory or on disk Ø Built via parallel transformations (map, filter, ...) Ø Automatically rebuilt on failure Slide provided by M. Zaharia
72 RDD: Resilient Distributed Datasets Ø Collections of objects partitioned & distributed across a cluster Ø Stored in RAM or on Disk Ø Resilient to failures Ø Operations Ø Transformations Ø Actions
73 Operations on RDDs Ø Transformations: f(RDD) => RDD Lazy (not computed immediately) E.g., map, filter, groupBy Ø Actions: trigger computation E.g., count, collect, saveAsTextFile
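The transformation/action split can be mimicked with Python generators. This sketch is an analogy, not Spark's implementation: no element is computed until the "action" runs:

```python
# Lazy transformations vs. actions, illustrated with generators.
log = []

def numbers():
    for i in range(5):
        log.append(i)          # records when elements are actually computed
        yield i

# "transformation": building the pipeline does no work yet
evens = (x for x in numbers() if x % 2 == 0)
assert log == []               # nothing has been computed so far

# "action": forces the whole pipeline to execute
result = sum(evens)
```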
74 Example: Log Mining Load error messages from a log into memory, then interactively search for various patterns
75 Example: Log Mining Load error messages from a log into memory, then interactively search for various patterns Driver
76 Example: Log Mining. Load error messages from a log into memory, then interactively search for various patterns. [Driver] lines = spark.textFile("hdfs://file.txt")
77 Example: Log Mining. Base RDD: lines = spark.textFile("hdfs://file.txt")
78 Example: Log Mining. errors = lines.filter(lambda s: s.startswith("ERROR"))
79 Example: Log Mining. Transformed RDD: errors = lines.filter(lambda s: s.startswith("ERROR"))
80 Example: Log Mining. messages = errors.map(lambda s: s.split("\t")[2]); messages.cache(); messages.filter(lambda s: "mysql" in s).count()
81 Example: Log Mining. messages.filter(lambda s: "mysql" in s).count() is an Action.
82 Example: Log Mining. The data is split across workers as Partition 1, Partition 2, Partition 3.
83 Example: Log Mining. The Driver sends tasks to each partition.
84 Example: Log Mining. Each worker reads its HDFS partition (Read HDFS Partition, on each of the three workers).
85 Example: Log Mining. Each worker processes and caches its data (Cache 1 / Partition 1, Cache 2 / Partition 2, Cache 3 / Partition 3).
86 Example: Log Mining. Each worker returns its results to the Driver.
87 Example: Log Mining. A second query: messages.filter(lambda s: "php" in s).count()
88 Example: Log Mining. The Driver again sends tasks to each partition.
89 Example: Log Mining. Each worker now processes from its cache.
90 Example: Log Mining. Results return to the Driver.
91 Example: Log Mining. Cache your data → Faster Results. Full-text search of Wikipedia: 60 GB on 20 EC2 machines; 0.5 sec from memory vs. 20 s on disk.
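The log-mining pipeline above can be sketched in plain Python. The log lines here are made up for illustration, and Spark's lazy, partitioned RDDs are replaced by eager lists:

```python
# Pure-Python analog of the log-mining pipeline (illustrative log lines).
raw = ["INFO\tstartup\tok",
       "ERROR\tdb\tmysql connection refused",
       "ERROR\tweb\tphp fatal error",
       "ERROR\tdb\tmysql timeout"]

errors = [s for s in raw if s.startswith("ERROR")]   # filter transformation
messages = [s.split("\t")[2] for s in errors]        # map transformation
cache = messages                                     # "cache": reused below

# two "actions" over the same cached messages
mysql_count = sum(1 for s in cache if "mysql" in s)
php_count = sum(1 for s in cache if "php" in s)
```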
92 Abstraction: Dataflow Operators map, filter, groupBy, sort, union, join, leftOuterJoin, rightOuterJoin, reduce, count, fold, reduceByKey, groupByKey, cogroup, cross, zip, sample, take, first, partitionBy, mapWith, pipe, save, ...
93 Abstraction: Dataflow Operators map, filter, groupBy, sort, union, join, leftOuterJoin, rightOuterJoin, reduce, count, fold, reduceByKey, groupByKey, cogroup, cross, zip, sample, take, first, partitionBy, mapWith, pipe, save, ...
94 Language Support
Python:
lines = sc.textFile(...)
lines.filter(lambda s: "ERROR" in s).count()
Scala:
val lines = sc.textFile(...)
lines.filter(x => x.contains("ERROR")).count()
Java:
JavaRDD<String> lines = sc.textFile(...);
lines.filter(new Function<String, Boolean>() {
  Boolean call(String s) { return s.contains("error"); }
}).count();
Standalone Programs: Python, Scala, & Java. Interactive Shells: Python & Scala. Performance: Java & Scala are faster due to static typing.
More informationCS Spark. Slides from Matei Zaharia and Databricks
CS 5450 Spark Slides from Matei Zaharia and Databricks Goals uextend the MapReduce model to better support two common classes of analytics apps Iterative algorithms (machine learning, graphs) Interactive
More informationData Warehousing and Decision Support (mostly using Relational Databases) CS634 Class 20
Data Warehousing and Decision Support (mostly using Relational Databases) CS634 Class 20 Slides based on Database Management Systems 3 rd ed, Ramakrishnan and Gehrke, Chapter 25 Introduction Increasingly,
More information2/4/2019 Week 3- A Sangmi Lee Pallickara
Week 3-A-0 2/4/2019 Colorado State University, Spring 2019 Week 3-A-1 CS535 BIG DATA FAQs PART A. BIG DATA TECHNOLOGY 3. DISTRIBUTED COMPUTING MODELS FOR SCALABLE BATCH COMPUTING SECTION 1: MAPREDUCE PA1
More informationOverview. Prerequisites. Course Outline. Course Outline :: Apache Spark Development::
Title Duration : Apache Spark Development : 4 days Overview Spark is a fast and general cluster computing system for Big Data. It provides high-level APIs in Scala, Java, Python, and R, and an optimized
More informationData processing in Apache Spark
Data processing in Apache Spark Pelle Jakovits 5 October, 2015, Tartu Outline Introduction to Spark Resilient Distributed Datasets (RDD) Data operations RDD transformations Examples Fault tolerance Frameworks
More informationHadoop 2.x Core: YARN, Tez, and Spark. Hortonworks Inc All Rights Reserved
Hadoop 2.x Core: YARN, Tez, and Spark YARN Hadoop Machine Types top-of-rack switches core switch client machines have client-side software used to access a cluster to process data master nodes run Hadoop
More informationLightning Fast Cluster Computing. Michael Armbrust Reflections Projections 2015 Michast
Lightning Fast Cluster Computing Michael Armbrust - @michaelarmbrust Reflections Projections 2015 Michast What is Apache? 2 What is Apache? Fast and general computing engine for clusters created by students
More informationCS435 Introduction to Big Data FALL 2018 Colorado State University. 10/22/2018 Week 10-A Sangmi Lee Pallickara. FAQs.
10/22/2018 - FALL 2018 W10.A.0.0 10/22/2018 - FALL 2018 W10.A.1 FAQs Term project: Proposal 5:00PM October 23, 2018 PART 1. LARGE SCALE DATA ANALYTICS IN-MEMORY CLUSTER COMPUTING Computer Science, Colorado
More informationCSE 544 Principles of Database Management Systems. Alvin Cheung Fall 2015 Lecture 10 Parallel Programming Models: Map Reduce and Spark
CSE 544 Principles of Database Management Systems Alvin Cheung Fall 2015 Lecture 10 Parallel Programming Models: Map Reduce and Spark Announcements HW2 due this Thursday AWS accounts Any success? Feel
More informationCloud, Big Data & Linear Algebra
Cloud, Big Data & Linear Algebra Shelly Garion IBM Research -- Haifa 2014 IBM Corporation What is Big Data? 2 Global Data Volume in Exabytes What is Big Data? 2005 2012 2017 3 Global Data Volume in Exabytes
More informationCompSci 516: Database Systems
CompSci 516 Database Systems Lecture 12 Map-Reduce and Spark Instructor: Sudeepa Roy Duke CS, Fall 2017 CompSci 516: Database Systems 1 Announcements Practice midterm posted on sakai First prepare and
More informationDistributed Computing with Spark and MapReduce
Distributed Computing with Spark and MapReduce Reza Zadeh @Reza_Zadeh http://reza-zadeh.com Traditional Network Programming Message-passing between nodes (e.g. MPI) Very difficult to do at scale:» How
More informationCloud Computing 2. CSCI 4850/5850 High-Performance Computing Spring 2018
Cloud Computing 2 CSCI 4850/5850 High-Performance Computing Spring 2018 Tae-Hyuk (Ted) Ahn Department of Computer Science Program of Bioinformatics and Computational Biology Saint Louis University Learning
More informationData processing in Apache Spark
Data processing in Apache Spark Pelle Jakovits 8 October, 2014, Tartu Outline Introduction to Spark Resilient Distributed Data (RDD) Available data operations Examples Advantages and Disadvantages Frameworks
More informationL3: Spark & RDD. CDS Department of Computational and Data Sciences. Department of Computational and Data Sciences
Indian Institute of Science Bangalore, India भ रत य व ज ञ न स स थ न ब गल र, भ रत Department of Computational and Data Sciences L3: Spark & RDD Department of Computational and Data Science, IISc, 2016 This
More informationCloud Computing 3. CSCI 4850/5850 High-Performance Computing Spring 2018
Cloud Computing 3 CSCI 4850/5850 High-Performance Computing Spring 2018 Tae-Hyuk (Ted) Ahn Department of Computer Science Program of Bioinformatics and Computational Biology Saint Louis University Learning
More informationCloud Computing & Visualization
Cloud Computing & Visualization Workflows Distributed Computation with Spark Data Warehousing with Redshift Visualization with Tableau #FIUSCIS School of Computing & Information Sciences, Florida International
More informationTwitter data Analytics using Distributed Computing
Twitter data Analytics using Distributed Computing Uma Narayanan Athrira Unnikrishnan Dr. Varghese Paul Dr. Shelbi Joseph Research Scholar M.tech Student Professor Assistant Professor Dept. of IT, SOE
More informationPrincipal Software Engineer Red Hat Emerging Technology June 24, 2015
USING APACHE SPARK FOR ANALYTICS IN THE CLOUD William C. Benton Principal Software Engineer Red Hat Emerging Technology June 24, 2015 ABOUT ME Distributed systems and data science in Red Hat's Emerging
More informationAnnouncements. Reading Material. Map Reduce. The Map-Reduce Framework 10/3/17. Big Data. CompSci 516: Database Systems
Announcements CompSci 516 Database Systems Lecture 12 - and Spark Practice midterm posted on sakai First prepare and then attempt! Midterm next Wednesday 10/11 in class Closed book/notes, no electronic
More informationData processing in Apache Spark
Data processing in Apache Spark Pelle Jakovits 21 October, 2015, Tartu Outline Introduction to Spark Resilient Distributed Datasets (RDD) Data operations RDD transformations Examples Fault tolerance Streaming
More informationApplied Spark. From Concepts to Bitcoin Analytics. Andrew F.
Applied Spark From Concepts to Bitcoin Analytics Andrew F. Hart ahart@apache.org @andrewfhart My Day Job CTO, Pogoseat Upgrade technology for live events 3/28/16 QCON-SP Andrew Hart 2 Additionally Member,
More informationData Warehousing and Decision Support
Data Warehousing and Decision Support Chapter 23, Part A Database Management Systems, 2 nd Edition. R. Ramakrishnan and J. Gehrke 1 Introduction Increasingly, organizations are analyzing current and historical
More informationSpark Overview. Professor Sasu Tarkoma.
Spark Overview 2015 Professor Sasu Tarkoma www.cs.helsinki.fi Apache Spark Spark is a general-purpose computing framework for iterative tasks API is provided for Java, Scala and Python The model is based
More informationSpark, Shark and Spark Streaming Introduction
Spark, Shark and Spark Streaming Introduction Tushar Kale tusharkale@in.ibm.com June 2015 This Talk Introduction to Shark, Spark and Spark Streaming Architecture Deployment Methodology Performance References
More informationMapReduce Spark. Some slides are adapted from those of Jeff Dean and Matei Zaharia
MapReduce Spark Some slides are adapted from those of Jeff Dean and Matei Zaharia What have we learnt so far? Distributed storage systems consistency semantics protocols for fault tolerance Paxos, Raft,
More informationData Warehousing and Decision Support
Data Warehousing and Decision Support [R&G] Chapter 23, Part A CS 4320 1 Introduction Increasingly, organizations are analyzing current and historical data to identify useful patterns and support business
More informationChapter 4: Apache Spark
Chapter 4: Apache Spark Lecture Notes Winter semester 2016 / 2017 Ludwig-Maximilians-University Munich PD Dr. Matthias Renz 2015, Based on lectures by Donald Kossmann (ETH Zürich), as well as Jure Leskovec,
More informationEvolution of Database Systems
Evolution of Database Systems Krzysztof Dembczyński Intelligent Decision Support Systems Laboratory (IDSS) Poznań University of Technology, Poland Intelligent Decision Support Systems Master studies, second
More informationAn Introduction to Big Data Analysis using Spark
An Introduction to Big Data Analysis using Spark Mohamad Jaber American University of Beirut - Faculty of Arts & Sciences - Department of Computer Science May 17, 2017 Mohamad Jaber (AUB) Spark May 17,
More informationTopics. Big Data Analytics What is and Why Hadoop? Comparison to other technologies Hadoop architecture Hadoop ecosystem Hadoop usage examples
Hadoop Introduction 1 Topics Big Data Analytics What is and Why Hadoop? Comparison to other technologies Hadoop architecture Hadoop ecosystem Hadoop usage examples 2 Big Data Analytics What is Big Data?
More information15.1 Data flow vs. traditional network programming
CME 323: Distributed Algorithms and Optimization, Spring 2017 http://stanford.edu/~rezab/dao. Instructor: Reza Zadeh, Matroid and Stanford. Lecture 15, 5/22/2017. Scribed by D. Penner, A. Shoemaker, and
More informationAnalytic Cloud with. Shelly Garion. IBM Research -- Haifa IBM Corporation
Analytic Cloud with Shelly Garion IBM Research -- Haifa 2014 IBM Corporation Why Spark? Apache Spark is a fast and general open-source cluster computing engine for big data processing Speed: Spark is capable
More informationData Platforms and Pattern Mining
Morteza Zihayat Data Platforms and Pattern Mining IBM Corporation About Myself IBM Software Group Big Data Scientist 4Platform Computing, IBM (2014 Now) PhD Candidate (2011 Now) 4Lassonde School of Engineering,
More informationMapReduce & Resilient Distributed Datasets. Yiqing Hua, Mengqi(Mandy) Xia
MapReduce & Resilient Distributed Datasets Yiqing Hua, Mengqi(Mandy) Xia Outline - MapReduce: - - Resilient Distributed Datasets (RDD) - - Motivation Examples The Design and How it Works Performance Motivation
More informationData-Intensive Distributed Computing
Data-Intensive Distributed Computing CS 451/651 431/631 (Winter 2018) Part 5: Analyzing Relational Data (1/3) February 8, 2018 Jimmy Lin David R. Cheriton School of Computer Science University of Waterloo
More informationRESILIENT DISTRIBUTED DATASETS: A FAULT-TOLERANT ABSTRACTION FOR IN-MEMORY CLUSTER COMPUTING
RESILIENT DISTRIBUTED DATASETS: A FAULT-TOLERANT ABSTRACTION FOR IN-MEMORY CLUSTER COMPUTING Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin,
More informationDecision Support. Chapter 25. CS 286, UC Berkeley, Spring 2007, R. Ramakrishnan 1
Decision Support Chapter 25 CS 286, UC Berkeley, Spring 2007, R. Ramakrishnan 1 Introduction Increasingly, organizations are analyzing current and historical data to identify useful patterns and support
More informationData Warehouses. Yanlei Diao. Slides Courtesy of R. Ramakrishnan and J. Gehrke
Data Warehouses Yanlei Diao Slides Courtesy of R. Ramakrishnan and J. Gehrke Introduction v In the late 80s and early 90s, companies began to use their DBMSs for complex, interactive, exploratory analysis
More informationData Warehousing and Decision Support. Introduction. Three Complementary Trends. [R&G] Chapter 23, Part A
Data Warehousing and Decision Support [R&G] Chapter 23, Part A CS 432 1 Introduction Increasingly, organizations are analyzing current and historical data to identify useful patterns and support business
More informationAn Introduction to Apache Spark Big Data Madison: 29 July William Red Hat, Inc.
An Introduction to Apache Spark Big Data Madison: 29 July 2014 William Benton @willb Red Hat, Inc. About me At Red Hat for almost 6 years, working on distributed computing Currently contributing to Spark,
More informationModern Data Warehouse The New Approach to Azure BI
Modern Data Warehouse The New Approach to Azure BI History On-Premise SQL Server Big Data Solutions Technical Barriers Modern Analytics Platform On-Premise SQL Server Big Data Solutions Modern Analytics
More informationDatabases and Big Data Today. CS634 Class 22
Databases and Big Data Today CS634 Class 22 Current types of Databases SQL using relational tables: still very important! NoSQL, i.e., not using relational tables: term NoSQL popular since about 2007.
More informationMODERN BIG DATA DESIGN PATTERNS CASE DRIVEN DESINGS
MODERN BIG DATA DESIGN PATTERNS CASE DRIVEN DESINGS SUJEE MANIYAM FOUNDER / PRINCIPAL @ ELEPHANT SCALE www.elephantscale.com sujee@elephantscale.com HI, I M SUJEE MANIYAM Founder / Principal @ ElephantScale
More informationSTATS Data Analysis using Python. Lecture 7: the MapReduce framework Some slides adapted from C. Budak and R. Burns
STATS 700-002 Data Analysis using Python Lecture 7: the MapReduce framework Some slides adapted from C. Budak and R. Burns Unit 3: parallel processing and big data The next few lectures will focus on big
More informationUnifying Big Data Workloads in Apache Spark
Unifying Big Data Workloads in Apache Spark Hossein Falaki @mhfalaki Outline What s Apache Spark Why Unification Evolution of Unification Apache Spark + Databricks Q & A What s Apache Spark What is Apache
More informationDatabases 2 (VU) ( / )
Databases 2 (VU) (706.711 / 707.030) MapReduce (Part 3) Mark Kröll ISDS, TU Graz Nov. 27, 2017 Mark Kröll (ISDS, TU Graz) MapReduce Nov. 27, 2017 1 / 42 Outline 1 Problems Suited for Map-Reduce 2 MapReduce:
More informationIBM Data Science Experience White paper. SparkR. Transforming R into a tool for big data analytics
IBM Data Science Experience White paper R Transforming R into a tool for big data analytics 2 R Executive summary This white paper introduces R, a package for the R statistical programming language that
More informationSurvey on Incremental MapReduce for Data Mining
Survey on Incremental MapReduce for Data Mining Trupti M. Shinde 1, Prof.S.V.Chobe 2 1 Research Scholar, Computer Engineering Dept., Dr. D. Y. Patil Institute of Engineering &Technology, 2 Associate Professor,
More informationMachine Learning for Large-Scale Data Analysis and Decision Making A. Distributed Machine Learning Week #9
Machine Learning for Large-Scale Data Analysis and Decision Making 80-629-17A Distributed Machine Learning Week #9 Today Distributed computing for machine learning Background MapReduce/Hadoop & Spark Theory
More informationData Warehousing and OLAP
Data Warehousing and OLAP INFO 330 Slides courtesy of Mirek Riedewald Motivation Large retailer Several databases: inventory, personnel, sales etc. High volume of updates Management requirements Efficient
More informationCERTIFICATE IN SOFTWARE DEVELOPMENT LIFE CYCLE IN BIG DATA AND BUSINESS INTELLIGENCE (SDLC-BD & BI)
CERTIFICATE IN SOFTWARE DEVELOPMENT LIFE CYCLE IN BIG DATA AND BUSINESS INTELLIGENCE (SDLC-BD & BI) The Certificate in Software Development Life Cycle in BIGDATA, Business Intelligence and Tableau program
More informationBlended Learning Outline: Cloudera Data Analyst Training (171219a)
Blended Learning Outline: Cloudera Data Analyst Training (171219a) Cloudera Univeristy s data analyst training course will teach you to apply traditional data analytics and business intelligence skills
More informationBig Data Hadoop Stack
Big Data Hadoop Stack Lecture #1 Hadoop Beginnings What is Hadoop? Apache Hadoop is an open source software framework for storage and large scale processing of data-sets on clusters of commodity hardware
More informationHive and Shark. Amir H. Payberah. Amirkabir University of Technology (Tehran Polytechnic)
Hive and Shark Amir H. Payberah amir@sics.se Amirkabir University of Technology (Tehran Polytechnic) Amir H. Payberah (Tehran Polytechnic) Hive and Shark 1393/8/19 1 / 45 Motivation MapReduce is hard to
More informationHadoop. Introduction / Overview
Hadoop Introduction / Overview Preface We will use these PowerPoint slides to guide us through our topic. Expect 15 minute segments of lecture Expect 1-4 hour lab segments Expect minimal pretty pictures
More informationBacktesting with Spark
Backtesting with Spark Patrick Angeles, Cloudera Sandy Ryza, Cloudera Rick Carlin, Intel Sheetal Parade, Intel 1 Traditional Grid Shared storage Storage and compute scale independently Bottleneck on I/O
More informationShark: Hive (SQL) on Spark
Shark: Hive (SQL) on Spark Reynold Xin UC Berkeley AMP Camp Aug 21, 2012 UC BERKELEY SELECT page_name, SUM(page_views) views FROM wikistats GROUP BY page_name ORDER BY views DESC LIMIT 10; Stage 0: Map-Shuffle-Reduce
More informationWebinar Series TMIP VISION
Webinar Series TMIP VISION TMIP provides technical support and promotes knowledge and information exchange in the transportation planning and modeling community. Today s Goals To Consider: Parallel Processing
More informationMassive Online Analysis - Storm,Spark
Massive Online Analysis - Storm,Spark presentation by R. Kishore Kumar Research Scholar Department of Computer Science & Engineering Indian Institute of Technology, Kharagpur Kharagpur-721302, India (R
More informationAn Overview of Apache Spark
An Overview of Apache Spark CIS 612 Sunnie Chung 2014 MapR Technologies 1 MapReduce Processing Model MapReduce, the parallel data processing paradigm, greatly simplified the analysis of big data using
More informationAsanka Padmakumara. ETL 2.0: Data Engineering with Azure Databricks
Asanka Padmakumara ETL 2.0: Data Engineering with Azure Databricks Who am I? Asanka Padmakumara Business Intelligence Consultant, More than 8 years in BI and Data Warehousing A regular speaker in data
More informationLecture 11 Hadoop & Spark
Lecture 11 Hadoop & Spark Dr. Wilson Rivera ICOM 6025: High Performance Computing Electrical and Computer Engineering Department University of Puerto Rico Outline Distributed File Systems Hadoop Ecosystem
More informationWhere We Are. Review: Parallel DBMS. Parallel DBMS. Introduction to Data Management CSE 344
Where We Are Introduction to Data Management CSE 344 Lecture 22: MapReduce We are talking about parallel query processing There exist two main types of engines: Parallel DBMSs (last lecture + quick review)
More information