Pentaho MapReduce

Pentaho Data Integration, or PDI, is a comprehensive data integration platform that allows you to access, prepare and derive value from both traditional and big data sources. During this lesson, you will be introduced to Pentaho MapReduce, a powerful alternative for authoring MapReduce jobs that reduces the technical skills required to use MapReduce, improves productivity, and has demonstrable performance advantages over alternative authoring approaches such as Hive, Pig and hand-coded MapReduce jobs. Pentaho MapReduce jobs are designed from the ground up using Pentaho Data Integration's easy-to-use graphical designer, and the resulting jobs leverage the Data Integration engine running in-cluster for maximum performance.

In this lesson, we will re-create the standard Word Count MapReduce example using Pentaho MapReduce.

Begin by creating a new job and adding the Start entry onto the canvas. Jobs in Pentaho Data Integration are used to orchestrate events such as moving files, checking conditions like whether or not a target database table exists, or calling other jobs and transformations. The first thing we want to do in our job is copy the input files containing the words we want to count from our local file system into HDFS.

1. In PDI, click File > New > Job
2. Expand the General tab and drag a Start entry onto the canvas
3. Expand the Big Data tab and drag a Hadoop Copy Files entry onto the canvas
4. Draw a hop from Start to Hadoop Copy Files

The Hadoop Copy Files entry makes browsing HDFS as simple as working with your local file system. We'll first specify the folder on our local file system from which we'll copy our Word Count input files. Next, we'll specify the target copy location in HDFS. Finally, we'll define a wildcard expression to grab all of the files in our source directory that contain the word README, then click Add and OK.

1. Double-click on the Hadoop Copy Files entry to open the job entry editor
2. Enter a job entry name of Copy Files to HDFS
3. For File/Folder source, enter file:///home/cloudera/pentaho/design-tools/data-integration, or the source of the data for your implementation
4. For File/Folder destination, enter hdfs://cloudera:cloudera@localhost:8020/wordcount/input, or the HDFS directory you are using for your implementation. If you are unsure, contact your Hadoop systems administrator.
5. Enter .*README.* in the Wildcard (RegExp) field
6. Click OK to exit the job entry editor
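For context, the Hadoop Copy Files entry configured above does through the designer roughly what the short Java sketch below does with the Hadoop FileSystem API: copy every local file whose name contains README into /wordcount/input in HDFS. The local directory and NameNode URI are the example values from this lesson and should be adjusted for your environment; this is an illustration for comparison only, not code that the job entry generates.

// Sketch: copy local files matching .*README.* into HDFS, mirroring the
// Hadoop Copy Files job entry configured in this lesson.
import java.io.File;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyReadmesToHdfs {
    public static void main(String[] args) throws Exception {
        String localDir = "/home/cloudera/pentaho/design-tools/data-integration"; // File/Folder source
        String hdfsUri  = "hdfs://localhost:8020";                                // example NameNode URI
        String destDir  = "/wordcount/input";                                     // File/Folder destination

        FileSystem fs = FileSystem.get(new URI(hdfsUri), new Configuration());
        fs.mkdirs(new Path(destDir));

        File[] files = new File(localDir).listFiles();
        if (files != null) {
            for (File f : files) {
                // Same idea as the Wildcard (RegExp) field: match names containing README
                if (f.isFile() && f.getName().matches(".*README.*")) {
                    fs.copyFromLocalFile(new Path(f.getAbsolutePath()),
                                         new Path(destDir, f.getName()));
                }
            }
        }
        fs.close();
    }
}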

In the next steps we'll add the Pentaho MapReduce job entry into the flow. Opening the entry editor, you can see that configuring a Pentaho MapReduce job entry is as simple as specifying a Pentaho Data Integration transformation to use as the Mapper, one for the Reducer, and optionally choosing one to act as a Combiner. Since we have not designed our Mapper and Reducer transformations yet, we'll save our job here and pause to go create them.

1. From the Big Data tab, drag a Pentaho MapReduce job entry onto the canvas and draw a hop from the Copy Files to HDFS entry to it
2. Click the Save button, enter wordcount-example as the File name, then click Save

Next, we will design the transformation to be used as a Mapper in our Word Count example. Any transformation that will be used as a Mapper, Combiner, or Reducer needs to begin with the MapReduce Input step and end with the MapReduce Output step. These steps are how Hadoop feeds key-value pairs into our transformation for processing, and how the transformation passes key-value pairs back to Hadoop.

1. Click File > New > Transformation
2. Expand the Big Data folder in the design palette and drag a MapReduce Input step onto the canvas
3. Drag a MapReduce Output step onto the canvas
4. Double-click the MapReduce Input step and change the Type for both Key field and Value field to String

As key-value pairs are fed into this transformation, the first thing we want to do is split the strings coming in as values into the individual words we can then use for counting. For this, we'll use the Split fields to rows step. The Field to split will be our value field, we'll split the words on a SPACE character, and we'll name the resulting field containing the individual words word. This new word field will be used later as our key field for the Reducer part of the Pentaho MapReduce job.

1. Expand the Transform folder in the design palette and drag a Split fields to rows step onto the canvas
2. Draw a hop from MapReduce Input to the Split fields to rows step
3. Double-click on the Split fields to rows step to open the step editor
4. In the Field to split drop-down, select value
5. In the Delimiter field, replace the default semicolon with a SPACE character
6. Enter a value of word in the New field name input field
7. Click OK to exit the step editor
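In the next steps we'll attach a constant 1 to each word and wire the results to the MapReduce Output step, which completes the mapper's logic. For comparison only, here is a minimal sketch of the classic hand-coded WordCount mapper that does the same split-and-emit work using the standard Hadoop API; Pentaho MapReduce does not generate code like this, it runs the transformation itself in-cluster.

// Sketch: hand-coded equivalent of the mapper transformation being built here.
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1); // the Add constants value
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Split the incoming value on spaces (Split fields to rows), then emit
        // each word with a constant 1 (Add constants + MapReduce Output).
        for (String token : value.toString().split(" ")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }
}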

Next in our transformation flow, we'll want to add a new field containing a value of 1 for each individual word extracted. This field will represent our values during the Reducer phase of the Pentaho MapReduce job, used to aggregate the individual instances of a word from our input files. For this, we'll use the Add constants step to create a new field called count, with a data type of Integer and a constant value of 1.

1. From the Transform folder in the design palette, drag an Add constants step onto the canvas
2. Draw a hop from Split fields to rows to the Add constants step
3. Double-click on the Add constants step to open the step editor
4. On line one, enter count as the Name, select Integer as the Type, and enter 1 as the Value
5. Click OK to exit the step editor

Finally, we'll connect the Add constants step to the MapReduce Output step and configure it to use the new fields word and count as the keys and values to be passed back to Hadoop.

1. Draw a hop from Add constants to MapReduce Output
2. Double-click on MapReduce Output to open the step editor
3. Select word as the Key field
4. Select count as the Value field
5. Click OK to exit the step editor
6. Click Save
7. Enter wordcount-mapper as the File name, then click Save

We've now completed a Mapper transformation that reads in key-value pairs, splits the incoming values into individual words to be used as our new keys, and adds a constant of 1 for each individual word found, which will be used as our values to aggregate during the Reducer phase. With the transformation saved, let's move on to designing the Reducer transformation.

The Reducer transformation will be very simple. We'll create a new transformation and add our MapReduce Input and MapReduce Output steps. The only business logic we need to add is to aggregate the values for each key being passed in. For this, we will use the Group by step. Our Group field in this case will be our key field. We'll name the field that will contain our aggregates sum, the source column (or Subject) will be the value field, and the type of aggregation we'll perform is Sum. This step is then configured to group all of the common keys together and create a field called sum containing the total of all values for each instance of a given key, resulting in the total count of how many times a given word appeared in our input files.

1. Click File > New > Transformation
2. Expand the Big Data folder in the design palette and drag a MapReduce Input and a MapReduce Output step onto the canvas
3. Expand the Statistics folder in the design palette and drag a Group by step onto the canvas between the Input and Output steps
4. Draw a hop from the MapReduce Input step to the Group by step
5. Draw a hop from the Group by step to the MapReduce Output step
6. Double-click on the Group by step to open the step editor
7. In row 1 of The fields that make up the group: table, select key as the Group field
8. In row 1 of the Aggregates: table, enter sum as the Name, select value as the Subject, and select Sum as the Type, then click OK to exit the step editor
9. Double-click on MapReduce Output to open the step editor
10. Select key as the Key field
11. Select sum as the Value field
12. Click OK to exit the step editor
13. Click Save, enter wordcount-reducer as the File name, then click Save
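For comparison, the Group by step configured above performs the same aggregation as the classic hand-coded WordCount reducer, sketched below with the standard Hadoop API. As with the mapper sketch, this is illustrative only; Pentaho MapReduce runs the reducer transformation itself in-cluster rather than generating Java code.

// Sketch: hand-coded equivalent of the reducer transformation built above.
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Equivalent of the Group by step: sum the constant 1s for each word (key).
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum)); // MapReduce Output: key = word, value = sum
    }
}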

The Reducer transformation is now complete, so we'll return to our Pentaho MapReduce job and point it at our newly created Mapper and Reducer transformations. Re-open the Pentaho MapReduce job entry and provide a name for the Hadoop job. We will point the Mapper at our wordcount-mapper transformation by selecting the transformation and specifying the steps within it that represent the Hadoop input and output steps.

1. Click on the wordcount-example (job) tab
2. Double-click on the Pentaho MapReduce job entry
3. Enter Pentaho MapReduce wordcount as the name for the Hadoop job
4. Click on the Mapper tab (it may already be selected)
5. Click Browse, then navigate to and select the wordcount-mapper.ktr transformation
6. Enter MapReduce Input as the Mapper Input
7. Enter MapReduce Output as the Mapper Output

Next, point the Reducer at the wordcount-reducer transformation.

1. Click on the Reducer tab
2. Click Browse, then navigate to and select the wordcount-reducer.ktr transformation
3. Enter MapReduce Input as the Reducer Input
4. Enter MapReduce Output as the Reducer Output

Next, we'll configure the input and output directories for the Hadoop job on the Job Setup tab. The Input Path will be the HDFS path we copied our input files to in the Hadoop Copy Files entry, and we'll use /wordcount/output as the location for our results. Pentaho MapReduce supports all common formats for input and output data. For this example, we will use the simple TextInputFormat and TextOutputFormat and select the option to clean the output path before execution.

1. Click on the Job Setup tab
2. Enter /wordcount/input in the Input Path field
3. Enter /wordcount/output in the Output Path field
4. Enter org.apache.hadoop.mapred.TextInputFormat in the Input Format field
5. Enter org.apache.hadoop.mapred.TextOutputFormat in the Output Format field
6. Tick the option to Clean output path before execution
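For readers used to hand-coded Hadoop jobs, the Job Setup choices above (input and output paths, text input and output formats, and cleaning the output path) correspond roughly to the driver sketched below. It assumes the WordCountMapper and WordCountReducer classes from the earlier sketches and uses the newer org.apache.hadoop.mapreduce API rather than the org.apache.hadoop.mapred class names entered in the job entry; again, this is a comparison sketch, not something Pentaho MapReduce produces.

// Sketch: a driver equivalent to the Job Setup configuration in this lesson.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path input  = new Path("/wordcount/input");   // Input Path
        Path output = new Path("/wordcount/output");  // Output Path

        // "Clean output path before execution"
        FileSystem fs = FileSystem.get(conf);
        fs.delete(output, true);

        Job job = Job.getInstance(conf, "Pentaho MapReduce wordcount");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setInputFormatClass(TextInputFormat.class);   // text input format
        job.setOutputFormatClass(TextOutputFormat.class); // text output format

        FileInputFormat.addInputPath(job, input);
        FileOutputFormat.setOutputPath(job, output);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}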

Finally, we need to point to the Hadoop cluster on which we want to run the Pentaho MapReduce job. The next step is to enter the cluster details on the Cluster tab. These depend on how you have set up your environment; if you are unsure of the cluster details, contact your Hadoop systems administrator.

Now we're ready to save our finished job, run it, and view the results. We'll run this job locally, and as the job executes, we can view its status through feedback gathered from the Hadoop cluster.

1. Click Save to save your job
2. Click the Play button, then click Launch
3. Click on the Logging tab in the Execution Results section and follow the log as the job runs

Congratulations, you've created your first Pentaho MapReduce job! To find out more about the powerful Pentaho platform, try another lesson or start your free proof of concept with the expertise of a Pentaho sales engineer. Contact Pentaho at http://www.pentaho.com/contact/.

Hitachi Vantara
Corporate Headquarters
2845 Lafayette Street
Santa Clara, CA 95050-2639 USA
www.hitachivantara.com | community.hitachivantara.com

Regional Contact Information
Americas: +1 866 374 5822 or info@hitachivantara.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@hitachivantara.com
Asia Pacific: +852 3189 7900 or info.marketing.apac@hitachivantara.com

HITACHI is a registered trademark of Hitachi, Ltd. VSP is a trademark or registered trademark of Hitachi Vantara Corporation. IBM, FICON, GDPS, HyperSwap, zHyperWrite and FlashCopy are trademarks or registered trademarks of International Business Machines Corporation. Microsoft, Azure and Windows are trademarks or registered trademarks of Microsoft Corporation. All other trademarks, service marks and company names are properties of their respective owners.

P-040-A DS January 2019