Cloud Computing II. Exercises


Exercise 1: Creating a Private Cloud

Overview
In this exercise, you will install and configure a private cloud using OpenStack. This will be accomplished using a single-node deployment, which is adequate for development purposes.

Objectives
At the conclusion of this exercise, you should be able to:
- Install and configure the OpenStack cloud platform using the provided image
- Create users and projects within the platform
- Start and stop instances
- Verify the status of cloud resources

Step 1: Overview and VirtualBox Install
In this exercise, you will obtain, install, and configure OpenStack by setting up a virtual machine running a CentOS distribution. Required for this exercise:
1. The VirtualBox virtualization tool
2. The provided CentOS .ova file

Obtain VirtualBox from http://www.virtualbox.org/wiki/downloads. If you have already completed this task (from the pre-install instructions), you may skip this step. Accepting all the defaults should give you a working installation.

A note about this exercise: due to differences in platforms (OS, versions, and hardware), this exercise may not work for everyone. The provided image is 64-bit and will not run on 32-bit operating systems, which includes most Windows XP versions and some OS X versions. See the additional notes at the end of this exercise.

Step 2: Import the Pre-built CentOS VM
In this step you will import the VM for launching. This VM is a CentOS 7 machine with OpenStack already installed. The installation process is fairly involved and time-consuming, so we have provided it pre-installed.

Import the VM and verify its settings: launch VirtualBox, click the File > Import Appliance menu option, browse to the location of the unzipped class files, and select CentOS7sm.ova.

Leave all the defaults on the Appliance Settings screen. After a minute or so the appliance should be imported into VirtualBox. Next, verify that the network settings, specifically the port forwarding rules, are still set up: select the CentOS7 VM, click the Settings button, and on the Settings page select Network.

Click the Port Forwarding button and verify that your forwarding rules match the following diagram. We cannot verify that these ports are free on every student computer, so we may need to come back here and change the ports later; for now, leave the three ports as the diagram shows. Once your port forwards match, close the windows and press the Start button for the VM.

Step 3: Access OpenStack
Your VM should have started, and you should have a console with a login prompt. Leave this window running. We will not be using this window directly, but it needs to stay running. (Cut and paste into this console is awkward; when we need a shell, we will use a separate terminal tool instead.)

Open a browser on your local computer and access:

http://localhost:8082/dashboard

Log in with the username student and the password password.

Step 4: Explore the Dashboard
Open the System Information tab and look at the status of all the OpenStack modules. It should show that all the components of OpenStack are up and running. Then select the Resource Usage tab to see what each of the OpenStack modules is consuming.

Step 5: Create Users and Projects
On the left side of Horizon, select the Identity menu. You will see Projects and Users. A project is the management unit for OpenStack resources, typically corresponding to a team; when we create instances and volumes, they will be tied to a particular project.

Select the Users item and you should see the users that were created when OpenStack was set up. Then select the Projects menu item (not the Projects tab above it) and choose the Create Project button. Enter the name MyProject and give a simple description, such as "This is the OpenStack lab project." Click Create Project. You should now have a project.

Next, create a user and associate it with the project. Select the Users menu item and then the Create User button. Enter a username, an email address, and a password (confirming it), and associate the user with the Primary Project you just created in the last step (MyProject). Leave the role as member. Sign out of Horizon and sign back in as the new user.
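The same project and user can also be created programmatically against the Identity (Keystone) API. The following is a minimal sketch using the python-keystoneclient v2.0 bindings; the admin credentials, auth URL, and the sample username are assumptions that must be adapted to your installation:

    # Sketch: create a project (tenant) and a user via the Keystone v2.0 API.
    # Assumes python-keystoneclient is installed on the VM; the credentials
    # and auth URL below are assumptions, not values from this lab.
    from keystoneclient.v2_0 import client

    keystone = client.Client(
        username='admin',                        # assumed admin account
        password='password',                     # assumed admin password
        tenant_name='admin',
        auth_url='http://localhost:5000/v2.0')   # assumed Keystone endpoint

    # Equivalent of Identity > Projects > Create Project in Horizon.
    project = keystone.tenants.create(
        tenant_name='MyProject',
        description='This is the OpenStack lab project.',
        enabled=True)

    # Equivalent of Identity > Users > Create User in Horizon.
    user = keystone.users.create(
        name='student2',                         # hypothetical username
        password='password',
        email='student2@example.com',
        tenant_id=project.id)

    print('Created project %s and user %s' % (project.name, user.name))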

Step 6: Launch an Instance
Launch an instance of the image registered earlier. Do this by selecting Instances and then Launch Instance. Specify:

Instance Name: instance1
Flavor: m1.tiny
Instance Count: 1
Instance Boot Source: Image
Image Name: Cirros (12.6MB)

Select the Network tab and drag private from the Available Networks to the Selected Networks, then press the Launch button. After a short period, you can select Instances & Volumes and see your instance running. You may also view logs and create an instance snapshot.
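The same launch can be scripted through the Compute (Nova) API. Below is a sketch using the python-novaclient v1.1 bindings; the credentials and endpoint are assumptions that must match your installation:

    # Sketch: boot the same Cirros instance via the Nova API.
    # Assumes python-novaclient is installed; the credentials and auth URL
    # are assumptions, not values guaranteed by this lab.
    from novaclient.v1_1 import client

    nova = client.Client(
        'student', 'password', 'MyProject',      # user, password, project
        'http://localhost:5000/v2.0')            # assumed Keystone endpoint

    flavor = nova.flavors.find(name='m1.tiny')
    image = nova.images.find(name='Cirros')      # name as listed in Horizon
    network = nova.networks.find(label='private')

    server = nova.servers.create(
        name='instance1',
        image=image.id,
        flavor=flavor.id,
        nics=[{'net-id': network.id}])

    print('Requested %s, status: %s' % (server.name, server.status))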

Step 7: Connect to the Instance
In your instance view, select the Actions pull-down and choose Console. The console should display in the current window. If you don't see a login prompt, you may need to wait a few minutes. Log in as cirros with the password cubswin:) and type exit when finished.

Exercise 2: OpenStack Swift

Overview
In this exercise you will use your virtual CentOS installation to run the OpenStack Swift object store.

Objectives
At the conclusion of this exercise, you should be able to:
- Add resources to a Swift installation
- Access those hosted files using the RESTful APIs

1. Relaunch VirtualBox and start your VM from Exercise 1. It should resume from the state you left it in.

2. On your local computer, create a text file named data.txt with the following contents:

I am just a text file. Hello from Swift!!

3. Log into the OpenStack dashboard in your web browser: http://localhost:8082/dashboard. Remember that the dashboard is part of OpenStack, which is running on the CentOS VM. Log in as student/password. The "real" port is actually 80, but we mapped it to 8082 to avoid conflicts that frequently come up on students' local computers.

4. Navigate to Project > Object Store > Containers. You should see all the containers that have been defined (currently none). A container is like a folder on a hard drive; we use containers to organize the objects we upload.

5. Create a container and upload a file. Select the "Create Container" button and name the new container datafolder. Set "Container Access" to Public. After you have created the container, click the "Upload Object" button, then browse to and select the data.txt file you created earlier. You should see your data.txt on the right side of the Containers screen.
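The upload can also be scripted against the Swift API. A sketch using the python-swiftclient library follows; the credentials and auth URL are assumptions that must match your installation:

    # Sketch: create the container and upload data.txt via the Swift API.
    # Assumes python-swiftclient is installed; the credentials and auth URL
    # are assumptions, not values guaranteed by this lab.
    from swiftclient import client

    conn = client.Connection(
        authurl='http://localhost:5000/v2.0',    # assumed Keystone endpoint
        user='student',
        key='password',
        tenant_name='MyProject',
        auth_version='2.0')

    conn.put_container('datafolder')             # same name as in Horizon
    with open('data.txt', 'rb') as f:
        conn.put_object('datafolder', 'data.txt',
                        contents=f, content_type='text/plain')

    # Confirm the object landed; prints its metadata headers.
    print(conn.head_object('datafolder', 'data.txt'))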

6. Obtain the URL of the container by selecting "View Details" for your datafolder.

7. Download your file from the URL. Copy the container portion of the URL and append /data.txt to the end of it; this is your URL. In the example on this page, the full URL is:

http://127.0.0.1:8080/v1/auth_163195f417b24fafb5c6d0339e598050/datafolder/data.txt

From a PuTTY shell connected to your CentOS machine, run the following command, replacing YOURURL with your URL:

wget YOURURL

Hint: if you are on a PC and don't have PuTTY, download it from www.putty.org and then connect to the CentOS machine. On a Mac, try:

ssh localhost -p 2022 -l student

You should see the data.txt file downloaded to the current directory. You have downloaded a file from the OpenStack object store.
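Because the container is public, the same download takes only a few lines of Python run on the VM (a sketch in Python 2, matching the Python version used elsewhere in these labs; substitute the URL from your own View Details page):

    # Sketch: fetch the public Swift object over plain HTTP.
    # Replace the URL with the one shown for your own container.
    import urllib2

    url = ('http://127.0.0.1:8080/v1/'
           'auth_163195f417b24fafb5c6d0339e598050/datafolder/data.txt')
    data = urllib2.urlopen(url).read()
    print(data)  # should print the contents of data.txt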

Exercise 3: Apache Hadoop

Overview
In this exercise you will use your virtual CentOS installation to run the Apache Hadoop framework within a Hadoop server. First you will configure the Hadoop server, and then you will test it out.

Objectives
At the conclusion of this exercise, you should be able to:
- Configure and test an application within the Apache Hadoop framework

1. Relaunch VirtualBox and start your VM from Exercise 1. It should resume from the state you left it in.

2. Create a hadoop user. Open a new shell window if the one from the earlier lab is no longer available:

sudo -s
groupadd hadoop
useradd -g hadoop hadoop
passwd hadoop (specify hadoop for the password)

3. Generate keys for SSH communications:

su hadoop

ssh-keygen -t rsa -P "" (hit Enter to accept the default file name)
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
chmod 600 $HOME/.ssh/authorized_keys

4. Obtain Hadoop:

exit (returns you to root)
cd /usr/local
wget http://archive.apache.org/dist/hadoop/core/hadoop-1.0.4/hadoop-1.0.4.tar.gz

(Note: this is a mirror and it changes occasionally; see www.apache.org/dyn/closer.cgi/hadoop/common for other mirrors if this one is invalid.)

5. Extract Hadoop, rename the directory, and change its owner:

tar xzf hadoop-1.0.4.tar.gz
mv hadoop-1.0.4 hadoop
chown -R hadoop:hadoop hadoop

6. Obtain the Hadoop config files core-site.xml, hdfs-site.xml, and mapred-site.xml:

cd /usr/local/hadoop/conf
wget -N --trust-server-names http://db.tt/DAE1bjWU
wget -N --trust-server-names http://db.tt/z4hqzy1c
wget -N --trust-server-names http://db.tt/bwrvpvpv

If your mapred-site.xml, core-site.xml, and hdfs-site.xml are untouched, you may need to manually rename the downloads:

DAE1bjWU to core-site.xml
z4hqzy1c to hdfs-site.xml
bwrvpvpv to mapred-site.xml

(If the short links above are no longer available, typical minimal single-node versions of these files are sketched below.)
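The exact files served by those links are not reproduced here, but a standard single-node Hadoop 1.x setup needs only a few properties in each file. The following are typical minimal versions, an assumption based on the stock Hadoop 1.0 single-node configuration rather than the exact lab files; paths and port numbers may differ on your installation:

    <!-- core-site.xml: default filesystem and temp directory -->
    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
      </property>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:54310</value>
      </property>
    </configuration>

    <!-- hdfs-site.xml: single node, so one replica -->
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>

    <!-- mapred-site.xml: JobTracker address -->
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:54311</value>
      </property>
    </configuration>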

7. Set environment variables:

vi ./hadoop-env.sh

Add the following lines to the top of the file:

export JAVA_HOME=/opt/jdk1.7.0_79
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin

8. Format the storage system and start the server (in a new terminal window):

su hadoop
ssh localhost
cd /usr/local/hadoop/bin
./hadoop namenode -format
./start-all.sh
/opt/jdk1.7.0_79/bin/jps (to check it)

Hadoop should now be running!

9. Obtain a test file (Mark Twain's Huckleberry Finn) and copy it into Hadoop:

cd /usr/local/hadoop
wget -N --trust-server-names http://db.tt/q4pnfi2p (obtains twain.txt from a remote location)

If twain.txt is not displayed in your folder, you may need to manually rename q4pnfi2p to twain.txt.

cd bin
./hadoop fs -put /usr/local/hadoop/twain.txt /usr/local/hadoop/tmp
./hadoop fs -ls /usr/local/hadoop/tmp

(Copies twain.txt into HDFS and verifies it.)

ON ONE LINE, run:

./hadoop jar ../hadoop-examples-1.0.4.jar wordcount /usr/local/hadoop/tmp/twain.txt /usr/local/hadoop/outtmp

(Runs the wordcount job, directing output into the outtmp directory.)

./hadoop fs -ls /usr/local/hadoop/outtmp (verifies the contents of the outtmp directory)
./hadoop fs -get /usr/local/hadoop/outtmp/part-r-00000 /usr/local/hadoop/results.txt (copies the results out of HDFS to the local file system)

10. List the files within HDFS and the running jobs:

./hadoop fs -ls /
./hadoop job -list

The job command above, which queries the JobTracker, can sometimes hang. You can also use the Hadoop server's web-based interface to view the jobs managed by the JobTracker in your browser. You may need to add a port forward from your VirtualBox host to the guest CentOS. If the Hadoop servers are running, try browsing to:

http://localhost:50030/jobtracker.jsp

11. Stop the servers:

./stop-all.sh

12. End your Hadoop session:

exit
exit
cd /usr/local/hadoop
gedit ./results.txt

(Exits the SSH session, returns to root, and views the results.txt document.)
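As an aside, the wordcount job you ran comes from the Java examples jar, but the same algorithm is easy to express in Python using Hadoop Streaming, which pipes records through ordinary scripts over stdin/stdout. A sketch of the two scripts follows; the streaming jar path is an assumption for the Hadoop 1.0.4 tarball and may differ on your installation:

    # mapper.py -- emit "word<TAB>1" for every word read from stdin
    import sys

    for line in sys.stdin:
        for word in line.split():
            sys.stdout.write('%s\t1\n' % word)

    # reducer.py -- sum the counts; Streaming delivers keys already sorted
    import sys

    current, count = None, 0
    for line in sys.stdin:
        word, _, value = line.rstrip('\n').partition('\t')
        if word == current:
            count += int(value)
        else:
            if current is not None:
                sys.stdout.write('%s\t%d\n' % (current, count))
            current, count = word, int(value)
    if current is not None:
        sys.stdout.write('%s\t%d\n' % (current, count))

A hypothetical invocation, run from /usr/local/hadoop/bin ON ONE LINE (the output directory streamout is an example name):

./hadoop jar ../contrib/streaming/hadoop-streaming-1.0.4.jar -input /usr/local/hadoop/tmp/twain.txt -output /usr/local/hadoop/streamout -mapper 'python mapper.py' -reducer 'python reducer.py' -file mapper.py -file reducer.py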

Exercise 4: OpenShift PaaS

Overview
In this exercise you will deploy an app to OpenShift.

Objectives
At the conclusion of this exercise, you should be able to:
- Deploy an application to OpenShift
- View the application in a browser
- View the administrative settings and disable the application

Step 1: Create an OpenShift Account
This exercise requires a free OpenShift cloud account. Go to Openshift.com and create a new free account.

Step 2: Install the OpenShift Client Tools
OpenShift Online provides a set of client tools that let developers create and manage applications without always using the web interface. In a PuTTY shell connected to your CentOS VM, run:

sudo gem install rhc

This will take a minute or so; let it run. When the rhc tools have finished installing, configure them by running:

rhc setup

Leave the server name as the default. When prompted for credentials, enter your OpenShift account credentials. When prompted, generate a credential and upload it to the server. When setup completes, you should see a message telling you that the tools are now configured. We are ready to interact with the OpenShift host.

Step 3: Create a Python Application
The client tools let us interact with the OpenShift-hosted cloud from our local box. We can create applications graphically in the cloud, or we can create them locally with the client tools. We will do the local version.

Run the following command in your CentOS PuTTY shell to create a Python 2.7 application and upload it to the cloud server:

rhc app create mypythonapp python-2.7

This process creates a local Git repository, installs a default application page, and then uploads it to the server; you should see a confirmation screen. Make a note of your application URL; you will need it. You can always get it back from the web console for OpenShift (www.openshift.com), but we will manage from here for now. The project is now running on the cloud PaaS provider, and we can begin deploying changes and versioning the application. In a web browser, hit your application URL.

The default application was created and uploaded to OpenShift. The application is in your home directory, and the Git project has been initialized. Change into your mypythonapp folder; this is the Git project that contains the default application:

wsgi.py is the application
setup.py declares the dependencies for the application

Edit the wsgi.py file. Strip out everything between the two <body> tags and change the content to look like this:

<body>
<h1>hello world!</h1>
</body>

Save your changes. The file you edited is local to your machine; to deploy it, the server needs your code.
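The generated wsgi.py varies by cartridge version, but its essential shape is a standard WSGI callable that returns the HTML page. A minimal equivalent sketch, for reference only (an illustration, not the exact OpenShift template):

    # Minimal WSGI application, equivalent in spirit to the edited template.
    # This is a sketch; the file OpenShift generates will look different.
    def application(environ, start_response):
        body = '<html><body><h1>hello world!</h1></body></html>'
        start_response('200 OK',
                       [('Content-Type', 'text/html'),
                        ('Content-Length', str(len(body)))])
        return [body]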

Run the following commands from inside the mypythonapp folder to update Git and push the project up to OpenShift:

git add --all
git commit -m "Rewrote wsgi.py for the class"
git push

In a web browser, hit your application URL again to see the change.

Exercise 5: OpenShift and a Python Web Framework

Overview
In this exercise you will continue with the OpenShift Python development platform. We will create a new Python project using a web library and then deploy it to OpenShift.

Objectives
At the conclusion of this exercise, you should be able to:
- Use a library-based application in OpenShift PaaS
- Use classes in Python
- Stop an application instance

Step 1: Add a Dependency to the Current Application
The application we created in the last exercise was too rudimentary for any real experience with a PaaS. We (hopefully) got a feel for interacting with a hosted platform, but that was about it. The strength of PaaS comes from the flexibility in what we can deploy, as well as the provided infrastructure.

Python has a number of different web frameworks. Django is probably the most common and the largest; OpenShift can support Django, but using it takes more than we are able to do in this class. We will use a lighter alternative to Django called Flask. Create a new application in your OpenShift account:

rhc app create myflaskapp python-2.7

The project needs to know that it requires an additional library. There are several ways to specify the dependency; here, add the Flask dependency to the top of requirements.txt:

Flask==0.10.1

Create a new file named flaskapp.py in the root of the repository (in the myflaskapp folder) and add the following code:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    app.run()

At this point the wsgi.py file still contains the entire default application. The file has a few important lines that we need to keep: the first 10 lines (all the way to the pass statement) must stay; the remaining lines can all be removed. Open wsgi.py and delete all the lines that follow the pass statement on line 10, leaving the file mostly empty. Then add the following line at the bottom of the file to point the entry point at the Flask application:

from flaskapp import app as application

Save your work, then run the following commands to perform the necessary Git commit and push:

git add --all
git commit -m "Adding Flask application"
git push

Your app should be updated, and you should be able to access it at the project URL that was displayed when you created the application. If you do not remember the URL, you can find it in the Openshift.com console.