Running various Bigtop components

Running Hadoop Components

One of the advantages of Bigtop is the ease of installation of the different Hadoop components, without having to hunt for a specific component distribution and match it with a specific Hadoop version.

Running Pig

Install Pig:

sudo apt-get install pig

Create a tab-delimited text file using your favorite editor, for example:

1	A
2	B
3	C

Import the file into HDFS under your user directory /user/$USER; by default Pig will look there for your file (the example below loads it from /pigdata/pigtesta.txt). Start the Pig shell and verify that a load and a dump work. Make sure you have a space on both sides of the = sign. The clause using PigStorage('\t') tells Pig that the columns in the text file are delimited by tabs.

$ pig
grunt> A = load '/pigdata/pigtesta.txt' using PigStorage('\t');
grunt> dump A;
2013-07-06 07:22:56,272 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
2013-07-06 07:22:56,276 [main] WARN  org.apache.hadoop.conf.Configuration - fs.default.name is deprecated. Instead, use fs.defaultFS
2013-07-06 07:22:56,295 [main] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
(1,A)
(2,B)
(3,C)
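The file creation and HDFS import described above can be scripted. A minimal sketch, matching the /pigdata/pigtesta.txt path used in the load statement (the local file name is arbitrary):

printf '1\tA\n2\tB\n3\tC\n' > pigtesta.txt         # three rows, columns separated by real tabs
hadoop fs -mkdir /pigdata                          # target directory used by the load statement
hadoop fs -put pigtesta.txt /pigdata/pigtesta.txt  # copy the file into HDFS
hadoop fs -cat /pigdata/pigtesta.txt               # confirm the contents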

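Once the load and dump work, you can try a simple transformation in the same grunt session. This is a sketch rather than part of the original walkthrough; B is just another alias:

grunt> B = foreach A generate $1;
grunt> dump B;

The dump should print (A), (B) and (C), i.e. just the letter column.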
Running HBase

Install HBase:

sudo apt-get install hbase\*

Depending on the Bigtop release you may need to uncomment and set JAVA_HOME in /etc/hbase/conf/hbase-env.sh; on newer releases this shouldn't be necessary because JAVA_HOME is auto-detected.

sudo service hbase-master start
hbase shell

Test the HBase shell by creating an HBase table named t2 with 3 column families f1, f2 and f3, then verify the table exists in HBase:

hbase(main):001:0> create 't2','f1','f2','f3'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hbase/lib/slf4j-log4j12-5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
0 row(s) in 4390 seconds

hbase(main):002:0> list
TABLE
t2
2 row(s) in 0.0220 seconds

hbase(main):003:0>

You should see confirmation from HBase that the table exists: the table name t2 should appear in the output of list.
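As an additional sanity check, you can write and read back a cell in the new table from the same shell; the row key r1 and the value used here are arbitrary:

hbase(main):003:0> put 't2', 'r1', 'f1:c1', 'hello'
hbase(main):004:0> get 't2', 'r1'
hbase(main):005:0> scan 't2'

get should return a single cell f1:c1 with the value hello, and scan should show the same row.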

Running Hive

This is for Bigtop releases where hadoop-hive, hadoop-hive-server, and hadoop-hive-metastore are installed automatically because the Hive service names start with the word hadoop. In later releases the Hive daemon names changed, so sudo apt-get install hadoop\* will not pull in the Hive components; instead you will have to run:

sudo apt-get install hive hive-server hive-metastore

Create the HDFS directories Hive needs. The Hive post-install scripts should create the /tmp and /user/hive/warehouse directories; if they don't exist, create them in HDFS. The post-install script can't create these directories itself because HDFS is not up and running during the deb file installation: JAVA_HOME is buried in hadoop-env.sh, so HDFS can't start to allow the directories to be created.

hadoop fs -mkdir /tmp
hadoop fs -mkdir /user/hive/warehouse
hadoop fs -chmod g+x /tmp
hadoop fs -chmod g+x /user/hive/warehouse

If the post-install scripts didn't create the directories /var/run/hive and /var/lock/subsys, create them:

sudo mkdir /var/run/hive
sudo mkdir /var/lock/subsys

Start the Hive server:

sudo /etc/init.d/hive-server start

Create a table in Hive and verify it is there:

ubuntu@ip-10-101-53-136:~$ hive
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Hive history file=/tmp/ubuntu/hive_job_log_ubuntu_201203202331_281981807.txt
hive> create table doh(id int);
OK
Time taken: 1458 seconds
hive> show tables;
OK
doh
Time taken: 0.283 seconds
hive>
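Going one step further than the empty doh table, a minimal sketch of loading the tab-delimited test file from the Pig section into Hive; the table name pigtest is arbitrary, and the example assumes pigtesta.txt is in the directory you started hive from:

hive> create table pigtest (id int, letter string) row format delimited fields terminated by '\t';
hive> load data local inpath 'pigtesta.txt' into table pigtest;
hive> select * from pigtest;

The select should return the three rows (1 A, 2 B, 3 C) without launching a MapReduce job, since it is a plain fetch.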

Running Mahout

1. Set the bash environment variables HADOOP_HOME=/usr/lib/hadoop and HADOOP_CONF_DIR=$HADOOP_HOME/conf.
2. Install Mahout: sudo apt-get install mahout
3. Go to /usr/share/doc/mahout/examples/bin and unzip cluster-reuters.sh.gz
4. Export the variables in your shell:

export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/conf

5. Modify the contents of cluster-reuters.sh: replace MAHOUT="../../bin/mahout" with MAHOUT="/usr/lib/mahout/bin/mahout"
6. Make sure the Hadoop file system is running and the "curl" command is available on your system. ./cluster-reuters.sh will display a menu selection:

ubuntu@ip-10-224-109-199:/usr/share/doc/mahout/examples/bin$ ./cluster-reuters.sh
Please select a number to choose the corresponding clustering algorithm
1. kmeans clustering
2. fuzzykmeans clustering
3. lda clustering
4. dirichlet clustering
5. minhash clustering
Enter your choice : 1
ok. You chose 1 and we'll use kmeans Clustering
creating work directory at /tmp/mahout-work-ubuntu
Downloading Reuters-21578
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 7959k  100 7959k    0     0   346k      0  0:00:22  0:00:22 --:--:--  356k
Extracting...

AFTER WAITING 1/2 HR...

Inter-Cluster Density: 0.8080922658756075
Intra-Cluster Density: 0.6978329770855537
CDbw Inter-Cluster Density: 0.0
CDbw Intra-Cluster Density: 89.38857003754612
CDbw Separation: 304892272989769
12/03/29 03:42:56 INFO clustering.ClusterDumper: Wrote 19 clusters
12/03/29 03:42:56 INFO driver.MahoutDriver: Program took 261107 ms (Minutes: 351783333333334)

7. Run classify-20newsgroups.sh. First change ../bin/mahout to /usr/lib/mahout/bin/mahout; do a find and replace using your favorite editor (or see the sed sketch after this list). There are several instances of ../bin/mahout which need to be replaced by /usr/lib/mahout/bin/mahout.
8. Run the rest of the examples under this directory, except the Netflix data set, which is no longer officially available.
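A possible sed one-liner for the path replacement in step 7, assuming the scripts live in /usr/share/doc/mahout/examples/bin and use the ../bin/mahout form (adjust the pattern if a script uses ../../bin/mahout):

cd /usr/share/doc/mahout/examples/bin
sudo sed -i 's|\.\./bin/mahout|/usr/lib/mahout/bin/mahout|g' classify-20newsgroups.sh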

Running Whirr

1. Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in .bashrc according to the values under your AWS account. Verify with echo $AWS_ACCESS_KEY_ID that this is valid before proceeding.
2. Run the zookeeper recipe as below:

~/whirr-0.7.1: bin/whirr launch-cluster --config recipes/hadoop-ecproperties

3. If you get an error message like the following, apply Whirr patch WHIRR-459: https://issues.apache.org/jira/browse/whirr-459

Unable to start the cluster. Terminating all nodes.
org.apache.whirr.net.DnsException: java.net.ConnectException: Connection refused
    at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:83)
    at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:40)
    at org.apache.whirr.Cluster$Instance.getPublicHostName(Cluster.java:112)
    at org.apache.whirr.Cluster$Instance.getPublicAddress(Cluster.java:94)
    at org.apache.whirr.service.hadoop.HadoopNameNodeClusterActionHandler.doBeforeConfigure(HadoopNameNodeClusterActionHandler.java:58)
    at org.apache.whirr.service.hadoop.HadoopClusterActionHandler.beforeConfigure(HadoopClusterActionHandler.java:87)
    at org.apache.whirr.service.ClusterActionHandlerSupport.beforeAction(ClusterActionHandlerSupport.java:53)
    at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:100)
    at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:109)
    at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:63)
    at org.apache.whirr.cli.Main.run(Main.java:64)
    at org.apache.whirr.cli.Main.main(Main.java:97)

4. When Whirr is finished launching the cluster, you will see an entry under ~/.whirr confirming the cluster is running. Cat out the hadoop-proxy.sh command to find the EC2 instance address, or cat out the instance file; both will give you the Hadoop namenode address even though you started the mahout service using Whirr.
5. ssh into the instance to verify you can log in. Note: this login is different from a normal EC2 instance login; the ssh key is id_rsa and there is no user name in front of the instance address.

~/.whirr/mahout: ssh -i ~/.ssh/id_rsa ec2-50-16-85-59.compute-amazonaws.com

6. Verify you can access the HDFS file system from the instance:

dc@ip-10-70-18-203:~$ hadoop fs -ls /
Found 3 items
drwxr-xr-x   - hadoop supergroup          0 2012-03-30 23:44 /hadoop
drwxrwxrwx   - hadoop supergroup          0 2012-03-30 23:44 /tmp
drwxrwxrwx   - hadoop supergroup          0 2012-03-30 23:44 /user

Running Oozie

1. Stop the Oozie daemons: find them with ps -ef | grep oozie, then sudo kill the pid reported by the ps -ef command.
2. Stopping the Oozie daemons may not remove the oozie.pid file, which tells the system an Oozie process is running; you may have to remove the pid file manually with sudo rm -rf /var/run/oozie/oozie.pid
3. cd into /usr/lib/oozie and set up the Oozie environment variables using bin/oozie-env.sh
4. Download ext-js from http://incubator.apache.org/oozie/quickstart.html
5. Install ext-js using bin/oozie-setup.sh -hadoop 0.1 ${HADOOP_HOME} -extjs ext-zip
6. You will get an error message; change the above to the highest Hadoop version available: sudo bin/oozie-setup.sh -hadoop 0.20.200 ${HADOOP_HOME} -extjs ext-zip
7. Start Oozie: sudo bin/oozie-start.sh
8. Run Oozie: sudo bin/oozie-run.sh (you will get a lot of error messages; this is OK)
9. Go to the EC2 public DNS address on port 11000 under /oozie; my address looked like http://ec2-67-202-18-159.compute-amazonaws.com:11000/oozie/
10. Go to the Oozie Apache page and run the Oozie examples.

Running Zookeeper

Zookeeper is installed as part of HBase. A quick echo check is sketched below.
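A minimal echo check, assuming ZooKeeper is listening on its default client port 2181 on localhost and that nc (netcat) is installed:

echo ruok | nc localhost 2181   # ZooKeeper replies "imok" when it is serving requests
echo stat | nc localhost 2181   # prints version, mode (standalone/leader/follower) and client connections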

Running Sqoop

Install Sqoop using:

[redhat@ip-10-28-189-235 ~]$ sudo yum install sqoop\*

You should see:

Loaded plugins: amazon-id, rhui-lb, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package sqoop.noarch 0:1-fc16 will be installed
---> Package sqoop-metastore.noarch 0:1-fc16 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

 Package           Arch     Version   Repository              Size
 Installing:
 sqoop             noarch   1-fc16    bigtop-0.0-incubating   4 M
 sqoop-metastore   noarch   1-fc16    bigtop-0.0-incubating   9 k

Transaction Summary
Install 2 Package(s)

Total download size: 4 M
Installed size: 9 M
Is this ok [y/n]: y
Downloading Packages:
(1/2): sqoop-1-fc16.noarch.rpm                       4 MB 00:01
(2/2): sqoop-metastore-1-fc16.noarch.rpm             9 kB 00:00
----------------------------------------------------------------------
Total                                 0 MB/s | 4 MB 00:01
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : sqoop-1-fc16.noarch              1/2
  Installing : sqoop-metastore-1-fc16.noarch    2/2

Installed:
  sqoop.noarch 0:1-fc16    sqoop-metastore.noarch 0:1-fc16

Complete!

To test that Sqoop is running, run the CLI; a minimal check is sketched below.
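A minimal smoke test for the Sqoop CLI (these are standard Sqoop subcommands and need no database connection):

sqoop version   # prints the installed Sqoop version
sqoop help      # lists the available Sqoop tools (import, export, list-databases, ...)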

Running Flume/FlumeNG