Introduction to Hadoop
Scott Seighman, Systems Engineer, Sun Microsystems


1. Introduction to Hadoop
Scott Seighman, Systems Engineer, Sun Microsystems

2. Agenda
Identify the Problem
Hadoop Overview
Target Workloads
Hadoop Architecture
Major Components
> HDFS
> Map/Reduce
Demo
Resources

3. Solving a Problem
How do you scale up applications?
> 100s of terabytes of data
> Takes 11 days to read on 1 computer
Need lots of cheap computers
> Fixes the speed problem (15 minutes on 1,000 computers), but
> Introduces reliability problems: in large clusters, computers fail every day, and cluster size is not fixed
Need common infrastructure
> Must be efficient and reliable
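As a back-of-envelope check of those numbers (a sketch; the ~100 MB/s sequential-read rate is my assumption, not the slide's):

public class ScanTime {
    public static void main(String[] args) {
        double dataMB = 100.0 * 1024 * 1024;      // 100 TB expressed in MB
        double mbPerSec = 100.0;                  // assumed single-disk sequential read rate
        double seconds = dataMB / mbPerSec;
        System.out.printf("1 machine:      %.1f days%n", seconds / 86400);
        System.out.printf("1000 machines:  %.1f minutes%n", seconds / 1000 / 60);
    }
}

This prints roughly 12 days and 17 minutes, the same ballpark as the slide's 11 days and 15 minutes.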

4. A Solution: Hadoop
Open-source Apache project
Hadoop is named after Cutting's son's stuffed elephant
Hadoop Core includes:
> Distributed File System: distributes the data
> Map/Reduce: distributes the application
Written in Java
Runs on:
> Linux, Mac OS X, Windows, and Solaris
> Commodity hardware

5. A Solution: Hadoop
Hadoop Distributed File System
> Modeled on GFS
Distributed processing framework
> Using the Map/Reduce metaphor

6. A Solution: Hadoop
It's a framework for large-scale data processing:
> Inspired by Google's architecture: MapReduce and GFS
> A top-level Apache project
> Hadoop is open source
> Written in Java, plus a few shell scripts

7. A Solution: Hadoop
The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing. Hadoop includes:
> Hadoop Common: shared utilities
> Avro: a data serialization system that integrates with scripting languages
> Chukwa: a data collection system for managing large distributed systems
> HBase: a scalable, distributed database for large tables
> HDFS: a distributed file system
> Hive: data summarization and ad hoc querying (SQL-like)
> MapReduce: distributed processing on compute clusters
> Pig: a high-level data-flow language for parallel computation
> ZooKeeper: a coordination service for distributed applications

8. A Solution: Hadoop
Fault-tolerant hardware is expensive
Hadoop is designed to run on cheap commodity hardware
It automatically handles data replication and node failure
It does the hard work, so you can focus on processing data

9. Hadoop History
(diagram not transcribed)

10. Who is Using Hadoop?
(diagram not transcribed)

11. Workload Targets for Hadoop
When you must process lots of unstructured data
When your processing can easily be made parallel
When running batch jobs is acceptable
When you have access to lots of cheap hardware

12. Avoid Hadoop if...
The application involves intense calculation with little or no data
Your processing cannot easily be made parallel
Your data is not self-contained
You need interactive results
You own stock in supercomputer companies

13. Workload Examples/Anti-Examples
A good choice for...
> Indexing log files
> Sorting vast amounts of data
> Image analysis
Hadoop would be a poor choice for...
> Computing pi to 1,000,000 digits
> Calculating Fibonacci sequences
> A general RDBMS replacement

14. Hadoop Architecture
(diagram not transcribed)

15. Hadoop Architecture
(diagram not transcribed)

16. Hadoop Components: Name Node
There is only one (active) name node per cluster
It manages the filesystem namespace and metadata
This is the one place to spend $$$ on good hardware

17. Hadoop Components: Job Tracker
There is exactly one job tracker per cluster
Receives job requests submitted by clients
Schedules and monitors MR jobs on task trackers

18. Hadoop Components: Task Tracker
There are typically many task trackers
Responsible for executing MR operations
Reads blocks from data nodes

19. Hadoop Components: Data Nodes
There are typically many data nodes
They manage data blocks and serve them to clients
Data is replicated, so failure is no big deal

20. Hadoop Distributed File System (HDFS)

21. Hadoop Distributed File System
HDFS is perhaps Hadoop's most interesting feature
HDFS is a userspace filesystem
Inspired by the Google File System (GFS)
High aggregate throughput for streaming large files
Replication and locality

22. Hadoop Distributed File System
Single namespace for the entire cluster
> Managed by a single namenode
> Hierarchical directories
> Optimized for streaming reads of large files
Files are broken into large blocks
> Typically 64 or 128 MB
> Replicated to several datanodes for reliability
> Clients can find the locations of blocks
Clients talk to both the namenode and datanodes
> Data is not sent through the namenode

23. Hadoop Distributed File System
An API + implementation for working with Map/Reduce
More importantly, it provides infrastructure:
> Job configuration and efficient scheduling
> Browser-based monitoring of important cluster stats
> Handling failures in both computation and data nodes
> A distributed FS optimized for HUGE amounts of data

24. How HDFS Works
Data copied into HDFS is split into blocks
Typical block size: UNIX = 4 KB vs. HDFS = 64/128 MB

25. Distributed Workloads
The user submits a Map/Reduce job to the JobTracker
The system:
> Splits the job into lots of tasks
> Schedules tasks on nodes close to the data
> Monitors tasks
> Kills and restarts tasks if they fail, hang, or disappear
Pluggable file systems for input/output
> Local file system for testing, debugging, etc.

26. How HDFS Replication Works
Each data block is replicated to multiple machines
Allows for node failure without data loss

27. Hadoop Distributed File System
A single petabyte-scale file system for the entire cluster
> Managed by a single namenode
> Files are written, read, renamed, and deleted, but append-only
> Optimized for streaming reads of large files
Files are broken into large blocks
> Transparent to the client
> Blocks are typically 128 MB
> Replicated to several datanodes for reliability
Clients talk to both the namenode and datanodes
> Data is not sent through the namenode
> Throughput of the file system scales nearly linearly
Access from Java, C, or the command line
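For example, Java access goes through the org.apache.hadoop.fs.FileSystem API. A minimal sketch against the old (0.19/0.20-era) API; the file path is a placeholder, and the Configuration is assumed to pick up your cluster settings from the classpath:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsHello {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();  // reads hadoop-site.xml from the classpath
        FileSystem fs = FileSystem.get(conf);      // metadata operations go to the namenode

        Path file = new Path("/tmp/hello.txt");    // placeholder path
        FSDataOutputStream out = fs.create(file);  // block data streams directly to datanodes
        out.writeBytes("hello, hdfs\n");
        out.close();

        BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(file)));
        System.out.println(in.readLine());         // prints: hello, hdfs
        in.close();
    }
}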

28. Hadoop Modes of Operation
Hadoop supports three modes of operation:
> Standalone
> Pseudo-distributed
> Fully distributed

29. Map/Reduce

30. Map/Reduce
Map/Reduce is a programming model for efficient distributed computing
It works like a Unix pipeline:
> cat input | grep | sort | uniq -c | cat > output
> Input | Map | Shuffle & Sort | Reduce | Output
Efficiency comes from:
> Streaming through data, reducing seeks
> Pipelining
A good fit for a lot of applications:
> Log processing
> Web index building
> Data mining and machine learning

31. Map/Reduce Features
Java, C++, and text-based APIs
> In Java you use Objects; in C++, bytes
> Text-based (streaming) is great for scripting or legacy apps
> Higher-level interfaces: Pig, Hive, Jaql
Automatic re-execution on failure
> In a large cluster, some nodes are always slow or flaky
> The framework re-executes failed tasks
Locality optimizations
> With large data, bandwidth to the data is a problem
> Map/Reduce queries HDFS for the locations of input data
> Map tasks are scheduled close to the inputs when possible

32. Map/Reduce Features
Abstracts a very common pattern (munge, regroup, munge)
Designed for:
> Building or updating offline databases (e.g., indexes)
> Computing statistics (e.g., query log analysis)
A software framework
> Frozen part: distributed sort, and reliability via re-execution
> Hot parts: input, map, partition, compare, reduce, and output

33. Map/Reduce Features
Data is a stream of keys and values
Mapper
> Input: a key1, value1 pair
> Output: key2, value2 pairs
Reducer
> Called once per key, in sorted order
> Input: key2 and a stream of value2
> Output: key3, value3 pairs
Launching program
> Creates a JobConf to define the job
> Submits the JobConf and waits for completion

34. Map/Reduce Optimizations
Overlap of maps, shuffle, and sort
Mapper locality
> Schedule mappers close to the data
Combiner
> Mappers may generate duplicate keys
> A side-effect-free reducer is run on the mapper node
> Minimizes the data size before transfer
> The reducer is still run afterwards
Speculative execution
> Some nodes may be slower
> Run a duplicate task on another node

35. Simple Application

36. Writing a Basic Application
To write a distributed word count program:
Mapper: given a line of text, break it into words and output each word with a count of 1:
> "hi Apache bye Apache" ->
> ("hi", 1), ("Apache", 1), ("bye", 1), ("Apache", 1)
Combiner/Reducer: given a word and a set of counts, output the word and the sum:
> ("Apache", [1, 1]) -> ("Apache", 2)
Launcher: builds the configuration and submits the job

37. Word Count Example: Mapper

public class WCMap extends MapReduceBase implements Mapper {
  private static final IntWritable ONE = new IntWritable(1);

  public void map(WritableComparable key, Writable value,
                  OutputCollector output, Reporter reporter) throws IOException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      output.collect(new Text(itr.nextToken()), ONE);
    }
  }
}

38. Word Count Example: Reduce

public class WCReduce extends MapReduceBase implements Reducer {
  public void reduce(WritableComparable key, Iterator values,
                     OutputCollector output, Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += ((IntWritable) values.next()).get();
    }
    output.collect(key, new IntWritable(sum));
  }
}

39. Word Count Example: Launcher

public static void main(String[] args) throws IOException {
  JobConf conf = new JobConf(WordCount.class);
  conf.setJobName("wordcount");
  conf.setOutputKeyClass(Text.class);
  conf.setOutputValueClass(IntWritable.class);
  conf.setMapperClass(WCMap.class);
  conf.setCombinerClass(WCReduce.class);
  conf.setReducerClass(WCReduce.class);
  conf.setInputPath(new Path(args[0]));
  conf.setOutputPath(new Path(args[1]));
  JobClient.runJob(conf);
}

(Reusing WCReduce as the combiner works here because summing counts is associative and commutative.)

40. Running a Hadoop Job: Basic Steps
Compile your job into a JAR file
Copy the input data into HDFS
Execute bin/hadoop jar with the relevant arguments
Monitor tasks via the web interface (optional)
Examine the output when the job is complete
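Concretely, those steps might look like the following (the jar name, class name, and paths are placeholders, and the Hadoop core jar name varies by release):

javac -classpath hadoop-core.jar -d classes WordCount.java
jar cf wordcount.jar -C classes .
bin/hadoop fs -put local-input input
bin/hadoop jar wordcount.jar WordCount input output
bin/hadoop fs -cat output/part-00000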

41. Running on Amazon EC2/S3
Amazon sells cluster services
> EC2: $0.10 per CPU-hour
> S3: $0.20 per gigabyte-month
Hadoop supports:
> EC2: cluster management scripts included
> S3: file system implementation included
Tested on a 400-node cluster
This combination is used by several startups

42. Block Placement
The default is 3 replicas, but this is configurable
Blocks are placed (writes are pipelined):
> On the same node
> On a different rack
> On the other rack
Clients read from the closest replica
If the replication for a block drops below the target, it is automatically re-replicated
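The replication factor can also be changed per file from the Java API; a small sketch (the path is a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BumpReplication {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Raise one file from the default of 3 replicas to 5; the
        // namenode schedules the extra copies in the background.
        fs.setReplication(new Path("/data/important.log"), (short) 5);
    }
}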

43. Data Validation
Data is checked with CRC32
File creation:
> The client computes a checksum per 512 bytes
> The DataNode stores the checksums
File access:
> The client retrieves the data and checksums from the DataNode
> If validation fails, the client tries other replicas
Periodic validation by the DataNode

44. Installing Hadoop

45. Installing Hadoop
The installation process, for the distributed modes:
> Requirements: Java 1.6, sshd, rsync
> Configure SSH for password-free authentication
> Unpack the Hadoop distribution
> Edit a few configuration files
> Format the DFS on the name node
> Start all the daemon processes
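The password-free SSH step, as described in the Hadoop Quickstart of that era, looks roughly like this (single-user setup assumed):

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh localhost   # should now succeed without a password prompt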

46. Installing Hadoop
Modify hadoop-site.xml to set directories and master hostnames
Create a slaves file that lists the worker machines, one per line
Run bin/start-dfs.sh on the namenode
Run bin/start-mapred.sh on the jobtracker

47. Demo

48. Resources
For more information:
> Website:
> Mailing lists:
> IRC: #hadoop on irc.freenode.org

49. Thanks!
Scott Seighman

50. Job Launch: Client
The client program creates a JobConf
Identify classes implementing the Mapper and Reducer interfaces
> setMapperClass(), setReducerClass()
Specify inputs and outputs
> setInputPath(), setOutputPath()
Optionally, other options too:
> setNumReduceTasks(), setOutputFormat()
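Picking up the WordCount launcher from slide 39, the optional calls might look like this (the values are illustrative, not recommendations):

conf.setNumReduceTasks(4);                     // illustrative; the default is 1
conf.setOutputFormat(TextOutputFormat.class);  // the default output format, shown explicitly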

51. Appendix

52. Job Launch: JobClient
Pass the JobConf to:
> JobClient.runJob() // blocks
> JobClient.submitJob() // does not block
JobClient:
> Determines the proper division of input into InputSplits
> Sends the job data to the master JobTracker server

53. Job Launch: JobTracker
JobTracker:
> Inserts the jar and JobConf (serialized to XML) in a shared location
> Posts a JobInProgress to its run queue

54. Job Launch: TaskTracker
TaskTrackers running on slave nodes periodically query the JobTracker for work
Retrieve the job-specific jar and config
Launch the task in a separate instance of Java
> main() is provided by Hadoop

55. Job Launch: Task
TaskTracker.Child.main():
> Sets up the child TaskInProgress attempt
> Reads the XML configuration
> Connects back to the necessary MapReduce components via RPC
> Uses TaskRunner to launch the user process

56. Job Launch: TaskRunner
TaskRunner, MapTaskRunner, and MapRunner work in a daisy chain to launch the Mapper
The Task knows ahead of time which InputSplits it should be mapping
It calls the Mapper once for each record retrieved from the InputSplit
Running the Reducer is much the same

57. Creating the Mapper
Your Mapper should extend MapReduceBase
One instance of your Mapper is initialized by the MapTaskRunner for a TaskInProgress
It exists in a separate process from all other Mapper instances: no data sharing!

58. HDFS Limitations
Almost GFS (Google FS), but:
> No file update options (record append, etc.); all files are write-once
> Does not implement demand replication
> Designed for streaming: random seeks devastate performance

59. NameNode
Head interface to the HDFS cluster
Records all global metadata

60. Secondary NameNode
Not a failover NameNode!
Records metadata snapshots from the real NameNode
Can merge update logs in flight
Can upload snapshots back to the primary

61. NameNode Death
No new requests can be served while the NameNode is down
The Secondary will not fail over as the new primary
So why have a Secondary at all?

62. NameNode Death, cont'd
If the NameNode dies from a software glitch, just reboot
But if the machine is hosed, the metadata for the cluster is irretrievable!

63. Bringing the Cluster Back
If the original NameNode can be restored, the Secondary can re-establish the most current metadata snapshot
If not, create a new NameNode, use the Secondary to copy metadata to the new primary, and restart the whole cluster
Is there another way?

64. Keeping the Cluster Up
Problem: DataNodes fix the address of the NameNode in memory and can't switch in flight
Solution: bring a new NameNode up, but use DNS to make the cluster believe it's the original one

65. Further Reliability Measures
The Namenode can output multiple copies of its metadata files to different directories
> Including an NFS-mounted one
May degrade performance; watch for NFS locks

66. Making Hadoop Work
Basic configuration involves pointing nodes at the master machines:
> mapred.job.tracker
> fs.default.name
> dfs.data.dir, dfs.name.dir
> hadoop.tmp.dir
> mapred.system.dir
See the Hadoop Quickstart in the online documentation
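A sketch of what those properties look like in conf/hadoop-site.xml (hostnames, ports, and paths are placeholders):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode-host:9000</value>        <!-- placeholder hostname -->
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker-host:9001</value>             <!-- placeholder hostname -->
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/disk1/dfs/name,/remote/dfs/name</value> <!-- comma-separated copies (see slide 65) -->
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/disk1/dfs/data</value>
  </property>
</configuration>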

67. Configuring for Performance
Configuring Hadoop is done through the base JobConf, in conf/hadoop-site.xml
It contains 3 different categories of settings:
> Settings that make Hadoop work
> Settings for performance
> Optional flags, bells & whistles

68. Configuring for Performance
> mapred.child.java.opts: -Xmx512m
> dfs.block.size
> mapred.reduce.parallel.copies
> dfs.datanode.du.reserved
> io.sort.factor: 100
> io.file.buffer.size: 32K-128K
> io.sort.mb
> tasktracker.http.threads

69. Number of Tasks
Controlled by two parameters:
> mapred.tasktracker.map.tasks.maximum
> mapred.tasktracker.reduce.tasks.maximum
Two degrees of freedom in mapper run time: the number of tasks per node, and the size of the InputSplits
Current conventional wisdom: 2 map tasks per core, fewer for reducers

70. Dead Tasks
Student jobs would run away; admin restarts were needed
Very often they were stuck in a huge shuffle process
Students did not know about the Partitioner class and may have had a non-uniform distribution
They did not use many Reducer tasks
Lesson: design algorithms to use Combiners where possible
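For reference, a custom Partitioner in the old API is small. A sketch that hashes keys across reducers (essentially what the default HashPartitioner already does; the class name and type choices are illustrative):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class WordPartitioner implements Partitioner<Text, IntWritable> {
    public void configure(JobConf job) { }  // nothing to configure

    // Map each key to a reducer; a skewed implementation here is what
    // produces the huge, stuck shuffles described above.
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

It is wired into a job with conf.setPartitionerClass(WordPartitioner.class).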

71. Working With the Scheduler
Remember: Hadoop has a FIFO job scheduler
> No notion of fairness or round-robin
Design your tasks to play well with one another
Decompose long tasks into several smaller ones that can be interleaved at the job level
