Table of Contents HOL-1730-USE-2


1 Table of Contents
Lab Overview - Cloud Native Apps With Photon Platform ... 2
Lab Guidance ... 3
Module 1 - What is Photon Platform (15 minutes) ... 9
    Introduction
    What is Photon Platform - How Is It Different From vsphere?
    Cloud Administration - Multi-Tenancy and Resource Management ... 13
    Cloud Administration - Images and Flavors
    Conclusion
Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)
    Introduction
    Multi-Tenancy and Resource Management in Photon Platform
    Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks
    Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts
    Monitor and Troubleshoot Photon Platform
    Conclusion
Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)
    Introduction
    Container Orchestration With Kubernetes on Photon Platform
    Container Orchestration With Docker Machine Using Rancher on Photon Platform
    Conclusion
Page 1

2 Lab Overview - Cloud Native Apps With Photon Platform Page 2

3 Lab Guidance Note: It will take more than 90 minutes to complete this lab. You should expect to only finish 2-3 of the modules during your time. The modules are independent of each other so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing. The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual. Photon Platform is a distributed, multi-tenant host controller optimized for containers. The Photon Platform delivers: API-first Model: A user-experience focused on the automation of infrastructure consumption and operations using simple RESTful APIs, SDKs and CLI tooling, all fully multi-tenant. Allows a small automation-savvy devops team to efficiently leverage fleets of servers. Fast, Scale-out Control Plane: A built-from-scratch infrastructure control plane optimized for massive scale and speed, allowing the creation of 1000s of new VM-isolated workloads per minute, and supporting 100,000s of total simultaneous workloads. Native Container Support: Developer teams consuming infrastructure get their choice of open container orchestration frameworks (e.g. Kubernetes, Docker Swarm, Pivotal CF / Lattice, and Mesos). The Photon Controller is built for large environments to run workloads designed for cloud-native (distributed) apps. Examples include modern scale-out SaaS/mobile-backend apps, highly dynamic continuous integration or simulation environments, sizable data analytics clusters (e.g., Hadoop/Spark), or large-scale platform-as-a-service deployments (e.g., Cloud Foundry). The objective of this lab is to provide an introduction to Photon Platform constructs and architecture, then deep dive into how to consume Infrastructure as a Service (IaaS) using this platform. Finally, the user will learn how to deploy OpenSource frameworks and applications onto Photon Platform using standard deployment methods for the frameworks. Lab Module List: Module 1 - What is Photon Platform (15 minutes) (Basic) Walk through control plane mgmt. layout. Intro to images, flavors, tenants, resource pools, projects. Mostly viewing an existing setup. Module 2 - Photon Platform IaaS Deep Dive (45 minutes) (Advanced) From the start create tenant, resource ticket, project, image, flavors, vm, persistent disk, network, mgmt. UI, attach/detach. Review troubleshooting through logs. Page 3

4 Module 3 - Container Frameworks with Photon Platform (30 minutes) (Advanced) Create Kubernetes and Docker Machine clusters with standard opensource methods and deploy apps on each. Lab Captains: Module 1 - Michael West, Technical Architect Cloud Native Applications, USA. Module 2 - Randy Carson, Senior Systems Engineer, USA. This lab manual can be downloaded from the Hands-on Labs Document site found here: [ This lab may be available in other languages. To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process: Page 4

5 Location of the Main Console 1. The area in the RED box contains the Main Console. The Lab Manual is on the tab to the Right of the Main Console. 2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed. 3. Your lab starts with 90 minutes on the timer. The lab cannot be saved. All your work must be done during the lab session. But you can click the EXTEND button to increase your time. If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes. Each click gives you an additional 15 minutes. Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes. Each click gives you an additional hour. Activation Prompt or Watermark When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated. One of the major benefits of virtualization is that virtual machines can be moved and run on any platform. The Hands-on Labs utilizes this benefit and we are able to run the labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet. Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation. Page 5

6 Without full access to the Internet, this automated process fails and you see this watermark. This cosmetic issue has no effect on your lab. Alternate Methods of Keyboard Data Entry During this module, you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods of entering data which make it easier to enter complex data. Click and Drag Lab Manual Content Into Console Active Window You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console. Accessing the Online International Keyboard You can also use the Online International Keyboard found in the Main Console. Page 6

7 1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar. Click once in the active console window. In this example, you will use the Online Keyboard to enter the "@" sign used in email addresses. The "@" sign is Shift-2 on US keyboard layouts. 1. Click once in the active console window. 2. Click on the Shift key. 3. Click on the "@" key. 4. Notice the "@" sign entered in the active console window. Page 7

8 Look at the lower right portion of the screen Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", please ask for assistance. Page 8

9 Module 1 - What is Photon Platform (15 minutes) Page 9

10 Introduction This module will introduce you to the new operational model for cloud native apps. You will walk through the Photon Platform control plane management architecture and will get a guided introduction to image management, resource management and multi-tenancy. You will use a combination of the Management UI and CLI to become familiar with Photon Platform. For a detailed dive into the platform, proceed to Module 2 - Cloud Admin Operations. 1) What is Photon Platform and what is the architecture 2) Cloud Administration - Multi-Tenancy and Resource Management in Photon Platform 3) Cloud Administration - Images and Flavors. Page 10

11 What is Photon Platform - How Is It Different From vsphere? The VMware Photon Platform is a new infrastructure stack optimized for cloud-native applications. It consists of Photon Machine and the Photon Controller, a distributed, API-driven, multi-tenant control plane that is designed for extremely high scale and churn. Photon Platform has been open sourced so we could engage directly with developers, customers and partners. If you are a developer interested in forking and building the code or just want to try it out, go to vmware.github.com. Photon Platform differs from vsphere in that it has been architected from the ground up to provide consumption of infrastructure through programmatic methods. Though we provide a Management UI, the primary consumption model for DevOps will be through the Rest API directly or the CLI built on top of it. The platform has a native multi-tenancy model that allows the admin to abstract and pool physical resources and allocate them into multiple Tenant and Project tiers. Base images used for VM and Disk creation are centrally managed and workload placement is optimized through the use of Linked Clone (Copy On Write) technology. The Control Plane itself is architected as a highly available, redundant set of services that facilitates large numbers of simultaneous placement requests and prevents loss of service. Photon Platform is not a replacement for vcenter. It is designed for a specific class of applications that require support for the services described above. It is not feature compatible with vcenter, and does not implement things like vmotion, HA, FT - which are either not a requirement for Cloud Native Applications, or are generally implemented by the application framework itself. The High Level architecture of the Photon Controller is shown on the next page. Page 11

12 Photon Platform Overview - High Level Architecture (Developer Frameworks Represent a Roadmap. Not all are implemented in the Pre-GA Release) Page 12

13 Cloud Administration - Multi-Tenancy and Resource Management Administration at cloud scale requires new paradigms. Bespoke VMs nurtured through months or years are not the norm. Transient workloads that may live for hours, or even minutes, are the order of the day. DevOps processes that create continuous integration pipelines need programmatic access to infrastructure and resource allocation models that are dynamic, multi-tenant, and do not require manual admin intervention. Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant. Let's dive in and explore multi-tenancy and resource management in Photon Platform. Connect To Photon Platform Management UI 1. From the Windows Desktop, Launch a Chrome or Firefox Web Browser Page 13

14 Photon Controller Management UI 1. Select the Photon Controller Management Bookmark from the Toolbar or enter in the browser. Page 14

15 The Control Plane Resources The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for Control Plane VMs. Resources designated as Cloud are used for Tenants that will be running applications on the cloud. In our simple Lab deployment we have 2 ESXi hosts and 1 Datastore and we have designated that all of the resources can be used as Management and Cloud. In a Production Cloud, you would tend to separate them. Our management Plane also only consists of a single node. Again, in a production cloud, you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure and to provide high availability. 1. Click on Management Note1: We are seeing some race conditions in our lab startup. If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM. Details are in the next step. Note2: If the browser does not show the management panel on the left, then change the Zoom to 75%. Click on the 3-bar icon on the upper right and find the Zoom. Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen From the Windows Desktop: 1. Click on the Putty Icon 2. Select PhotonControllerCLI connection 3. Click Open - You are now in the PhotonControllerCLI VM Page 15

16 4. ssh into the PhotonController Management VM. Execute: ssh Password is vmware 5. You must change to the root user. Execute: su Password is vmware 6. Reboot the VM. Execute: reboot This should take about 2 minutes to complete. Page 16

17 Control Plane Services The Photon Platform Control Plane runs as a set of Java Services deployed in Docker Containers that are running in a MGMT VM. Each MGMT VM will run a copy of these services and all meta-data is automatically synced between the Cloud_Store service running in each VM to provide Availability. 1. Click on Cloud Page 17

18 Cloud Resources This screen shows the resources that have been allocated for use by applications running on this cloud: 1. Two hosts have been allocated as available to place application workloads. 2. One Tenant has been created. (We will drill further into this in a minute.) 3. We have set no resource limit on vcpu or Storage, but we have created a Resource-Ticket with a limit of 1000GB of RAM and Allocated all 1000GB to individual projects. ( You will see the details in a minute) Page 18

19 Tenants 1. Click on Tenants Page 19

20 Our Kubernetes Tenant We have created a Single Tenant that has been used to create a Kubernetes Cluster (You will use this in Module 3). You can see that a limit has been placed on Memory resource for this tenant and 100% of that resource has been allocated to Projects within the Tenant. 1. Click on Kube-Tenant Kube-Tenant Detail You can see a little more detail on what has been allocated to the tenant. The User Interface is still a prototype. We will use the CLI in module 2 to drill into how these resources are really allocated. Notice that the Project within the Kube-Tenant is using only 1% of the total Memory allocated to it. You may have to scroll to the bottom of the screen to see this. 1. Click on Kube-Project Page 20

21 Kube-Project Detail At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations. We have deployed a Kubernetes Cluster, which contains a Master and 2 worker node VMs. You will immediately notice that this model is about allocating large pools and managing consumption rather than providing a mechanism for management of individual VMs. (Note: These VMs will be used in Module 3. If you delete them, you will have to restart the lab environment in order to take that module.) Page 21

22 Kube Tenant Resource-Ticket Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets. Each Resource Ticket can be carved up into individual projects. Let's add a Resource-Ticket to Kube-Tenant. 1. Click on Kube-Tenant and Scroll the screen to the bottom Page 22

23 Create Resource-Ticket 1. Click on Resource Ticket 2. Click on the + sign 3. Enter Resource Ticket Name (No Spaces in the Name) 4. Enter numeric values for each field 5. Click OK 6. Optionally, Click on Projects and follow the Tenant Create steps to Create a New project to allocate the Resource Ticket to. You have now made additional resources available to Kube Tenant and can allocate them to a new Project. Check the Tenant Details page to see the updated totals. You can create a new project if you want, but we will not be using it in the other modules. To do that, click on Projects. Page 23
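If you prefer the command line, the same Tenant / Resource-Ticket / Project hierarchy can be built with the Photon CLI. You will run exactly these commands in Module 2; they are shown here only as a preview, using the Module 2 names:
# create and select a tenant
photon tenant create lab-tenant
photon tenant set lab-tenant
# give the tenant a pool of resources
photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"
# carve part of that pool into a project and select it
photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"
photon project set lab-project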

24 Cloud Administration - Images and Flavors Continuing on the theme from the previous lesson, Cloud automation requires abstractions for consumption of allocated resources as well as centralized management of images used for VM and Disk creation. In this lesson, you will see how Images and Flavors are used as part of the operational model to create Cloud workloads. Images Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create both VMs and disks within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM or disk, a linked clone is created from the base image to provide the new object. This copy on write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed. This is referred to as an EAGER or ON_DEMAND image in Photon Platform. 1. Click on the gear in the upper right of the screen and then Images. Kube-Image You will notice that we have a few images in our system. The Photon-management image is the image that was used to create the Control Plane management VMs mentioned in the Page 24

25 earlier steps, and the kube image that was used for the Kubernetes Cluster VMs you also saw earlier. You will use the PhotonOS and Ubuntu images in a later module. 1. Click the X to close the panel. Flavors 1. Click on the gear again and then Click Flavors. When you are done, close the images panel so that you can see the gear icon again. Kube-Flavor Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create and their lifecycle is tied to the VM. Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached, then if the VM dies, the disk could be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage Page 25

26 that will be used for ephemeral (Boot) disks and persistent storage volumes. You will specify the vm and disk flavors as part of the VM or Disk creation command. 1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node vms. Notice that the Master node Flavor will create a larger VM than the other Flavors. 2. Click on Ephemeral Disks Page 26

27 Ephemeral Disk Flavors Notice that we have four Ephemeral Disk Flavors in our environment. We haven't done much with them here, but there are two primary use cases for Disk flavors. The first is to associate a Cost with the storage you are deploying, in order to facilitate Chargeback or Showback. The second use case is Storage Profiles. Datastores can be tagged based on whatever criteria may be needed (Availability/Performance/Cost/Local/Shared, etc.) and the flavor can specify that tag. The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk. Persistent disks work the same way. Though we haven't yet created a persistent disk, we will do so in module 2. Page 27
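As a preview of Module 2, a disk Flavor that carries a simple cost key is created like this (this is the same command you will run later in the lab; the "1.0 COUNT" value is just an arbitrary unit that a Chargeback process could interpret):
# ephemeral disk flavor whose only key is a count, usable for cost accounting
photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 1.0 COUNT"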

28 Persistent Disk Flavors 1. Click on Persistent Disks We have a single persistent disk flavor for you. It is used in our Kubernetes Cluster. You will create another Flavor when you create persistent disks in Module 2. Page 28

29 Conclusion Cloud Scale administration requires a different way of operating. Administrators do not have the luxury of meticulously caring for individual VMs. There are just too many of them, and they tend to have short lifetimes. Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together and then allocating parts of the pools to entities that consume them through programmatic interfaces. You now have a basic understanding of what Photon Platform is - and how it is different from vsphere. You have seen that the operational model for administrators is very different from what you might be used to with UI driven management through vcenter. You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors. In Module 2, you will deep dive into the Infrastructure As A Service components of Photon Platform. You've finished Module 1 Congratulations on completing Module 1. If you are looking for additional information on Photon Platform, use your smart device to scan the QR code. Proceed to any module below which interests you most. Module 2 - Cloud Admin Operations With Photon Platform (IAAS Deep Dive) (60 minutes) (Advanced) Module 3 - Container Orchestration Frameworks With Photon Platform (45 minutes) (Advanced) Page 29

30 How to End Lab To end your lab click on the END button. Page 30

31 Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes) Page 31

32 Introduction This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API. You will learn how to define tenant resources, create images, flavors, VMs, and networks. You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts. You will use both the CLI and management UI in performing these tasks. Finally, you will build an application with Nginx to display a web page with port mapping to show some basic networking capabilities. Basic troubleshooting and Monitoring through LogInsight and Grafana will also be performed. 1) Multi-tenancy and Resource management in Photon Platform You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI. 2) Set up Cloud VM operational elements through definition of base images, flavors, networks and disks Photon Platform includes centralized management of base images used for VM and Disk creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking is used in this lab, however NSX support is also available) 3) Persistent disks enable container restart across hosts Persistent Disks are different from standard vsphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM. You will combine this with Docker Volumes to allow container data to persist across hosts. 4) Monitor and Troubleshoot Applications running on Photon Platform See how Photon Platform integration with LogInsight and Graphite/Grafana simplifies Troubleshooting and Monitoring of applications across distributed infrastructure. Page 32

33 Multi-Tenancy and Resource Management in Photon Platform You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets and carve those resources into individual projects. This lesson will also provide you with a basic overview of working with the CLI. Login To CLI VM Photon Platform CLI is available for MAC, Linux and Windows. For this lab, the CLI is installed in a Linux VM. From the Windows Desktop: 1. Click on the Putty Icon 2. Select PhotonControllerCLI connection 3. Click Open Authentication should be done through SSH keys, however if you are prompted for a password use vmware Page 33

34 Verify Photon CLI Target The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use. 1. Execute the following command: photon target show It should point to the endpoint referenced in the image. If it does not then execute: photon target set Note: If you are seeing strange HTTP: 500 errors when executing photon CLI commands, then execute the next step. We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services. Page 34
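For reference, the target commands follow the pattern below. The address shown is a hypothetical placeholder, not the endpoint used in this lab; always use the scheme, address and port referenced in the image above.
# show the API endpoint the CLI currently points at
photon target show
# point the CLI at a Control Plane API endpoint (placeholder address)
photon target set http://<api-endpoint-ip>:<port>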

35 Execute This Step Only If You Had photon HTTP Errors In The Previous Step 1. ssh into the PhotonController Management VM. Execute: ssh Password is vmware 2. You must change to the root user. Execute: su Password is vmware 3. Reboot the VM. Execute: reboot This should take about 2 minutes to complete. 4. Now return to the previous step that caused the HTTP: 500 error and try it again. Page 35

36 Photon CLI Overview The Photon CLI has a straightforward syntax. It is the keyword "photon", followed by the type of object you want to work on (vm, disk, tenant, project, etc.) and then a list of arguments. We will be using this CLI extensively in the module. Context sensitive help is available by appending -h or --help onto any command. 1. Execute: photon -h Note: If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the Command prompt: Type: clear and hit Return to move the prompt to the top of the screen Photon CLI Context Help From that list we might want to take action on a VM. So let's see the command arguments for VMs. 1. Execute: Page 36

37 photon vm -h As we go through the module, use the help command to see details of the actual commands you are executing. Create Tenant Photon Platform implements a hierarchical tenant model. Tenants represent a segmentation between companies, business units or teams. Cloud resources are allocated to Tenants using a set of Resource Tickets. Allocated resources can be further carved up into individual projects within the Tenant. Let's start by creating a new Tenant for our module. 1. Execute the following command: photon tenant create lab-tenant Hit Return on the Security Group Prompt. Photon Platform can be deployed using external authentication. In that case you would specify the Admin Group for this Tenant. We have deployed with no authentication to make the lab a little easier. Page 37

38 Once you have created the Tenant, you must set the CLI to execute as that Tenant. You can do this or refer to the Tenant with CLI command line switches. There is an option to enable Authentication using Lightwave, the Open Source Identity Management Platform from VMware. We have not done that in this lab. 1. Execute the following command: photon tenant set lab-tenant Create Resource Ticket Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure. 1. Execute the following command: photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT" 2. To view your Resource Tickets, Execute the following command: photon resource-ticket list We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant. Other resources are unlimited because we have not specified a Limit. 3. Also note the Entity UUID printed after the command completes. You will use UUIDs to manipulate objects in the system and they can always be found by using "photon Page 38

39 entity-type list" commands. "Entity-type" can be one of many types, like: vm, image, resource-ticket, cluster, flavor, etc. Page 39
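For example, each of the following uses that same pattern and prints the UUIDs you will need later in this module:
photon tenant list
photon image list
photon flavor list
photon vm list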

40 Create Project Tenants can have many Projects. In our case, we are going to create a single project within the lab-tenant Tenant. This project will only be allocated a subset of the resources already allocated to the Tenant. Notice that the Tenant has a limit of 200GB and 1000 VMs, but the project can only use 100GB and create 500 VMs. 1. To create the Project, Execute the following command: photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT" 2. To view your Projects, Execute the following command: photon project list Notice that you can see the Limit that was set and the actual Usage of the allocated resources. 3. To Set the CLI to the Project, Execute the following command: photon project set lab-project Now we have a Tenant with resources allocated to it and a Project that can consume those resources. Next we will move on to create objects within the Project. Page 40

41 Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks Photon Platform includes centralized management of base images used for VM creation. You will be introduced to managing those images. VM and disk profiles are abstracted through a concept called Flavors. You will see how to define those flavors, as well as use them to create VMs and Persistent disks. You will create a network and combine it with a Flavor and Image to create a VM. (Note: ESXi Standard networking used in this lab, however NSX support is also available) View Images Photon Platform provides a centralized image management system. Base images are uploaded into the system and can then be used to create VMs within the environment. Users can upload either an OVA or VMDK file. Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository. The image repository is a set of Datastores defined by the Administrator. Datastores can be local or shared storage. When a user creates a VM, a linked clone is created from the base image to provide the new object. This copy on write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed. 1. To see the images already uploaded, execute the following command: photon image list Do not upload an image in this environment because of bandwidth constraints, however the command to do it is: photon image create filename -name PhotonOS Notice that your photon image list command shows several images that have been uploaded for you. 1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future. 2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3 3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment. You will use this image later in this module. Each image has a Replication Type; EAGER or ON_DEMAND. EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the Page 41

42 expense of storing many copies of the image. ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement at the time of the placement. The creation takes longer, but storage usage is more efficient. 2. To see more detail on a particular image, execute the following command: photon image show "UUID of image" UUID of the image is in the photon image list command results. Page 42
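Assuming the standard shell tools in the CLI VM, a quick way to pick out a single image UUID is to filter the list output (you will use the same grep pattern with flavors later in this module):
photon image list | grep PhotonOS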

43 View Flavors Flavors need a bit of explanation. There are three kinds of Flavors in Photon Platform; VM, Ephemeral Disk and Persistent disk Flavors. Ephemeral disks are what you are used to with your current ESXi environment. They are created as part of the VM create and their lifecycle is tied to the VM. Persistent disks can be created independently from any VM and then subsequently attached/detached. A VM can be created, a persistent disk attached, then if the VM dies, the disk could be attached to another VM. Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes. You will specify the vm and disk flavors as part of the VM or Disk creation command. 1. To view existing Flavors, Execute the following command: photon flavor list In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node vms. Notice that the Master node Flavor will create a larger VM than the other Flavors. Create New Flavors We are going to create 1 of each type of Flavor to be used in this module: 1. Execute: photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT,vm.memory 1 GB" Page 43

44 VMs created with this Flavor will have 1 vcpu and 1 GB of RAM. 2. Execute: photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 1.0 COUNT" This Flavor could have been tagged to match tags on Datastores so that storage Profiles are part of the Disk placement. In this case we have simply added a COUNT. This could be used as a mechanism for capturing Cost as part of a Chargeback process. 3. Execute: photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 1.0 COUNT" 4. To easily see the Flavors you just created, execute: photon flavor list | grep my- Create Networks By default Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement. To limit the scope of this discovery, you can create a network object and reference it when creating a vm or cluster. This network object is also the basis for creating logical networks with NSX. That functionality will be available shortly after VMworld. In our lab environment, there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality. We have already created this network for you. 1. If you needed to create a network you would issue the following command: photon network create -n lab-network -p "VM Network" -d "My cloud Network" The -p option is a list of the portgroups that you want to be used for VM placement. It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM. The -d option is just a description of your network. Page 44

45 2. To easily see the Network we have created, execute: photon network list Page 45

46 Create VM We are now ready to create a VM using the elements we have gone through in the previous steps. 1. Execute the following command: photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image Note: You can get the UUID of your network with the command: photon network list and the UUID of your image with the command: photon image list Let's break down the elements of this command. --name is obvious; it's the name of the VM. --flavor says to use the my-vm flavor you defined above to size the RAM and vcpu count. --disks is a little confusing. disk-1 is the name of the ephemeral disk that is created. It will be created using the my-eph-disk flavor you created earlier. We didn't do much with that flavor definition, however it could have defined a Cost for Chargeback, or been tagged with a storage profile. The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement. Boot=true means that this is the boot disk for this VM. -w is optional and contains the UUID of the network you just created. -i is the UUID of the Image that you want to use. In this case, we want to use the PhotonOS image. To get the UUID of the image, execute: photon image list Create a Second VM This VM will be used later in the lab, but it's very easy to create now. 2. Execute the following command: photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image Page 46

47 Note: The easiest way to create this is to hit Up Arrow on your keyboard to get to the previous photon vm create command. Then hit left arrow key until you get to the name and change the 1 to a 2. Finally hit Return to execute. Start VM The VMs were created, but not powered on. We want to power on the first VM only. The second VM needs to remain powered off for now. 1. To start the VM, execute: photon vm start UUID of lab-vm1 The UUID of the VM is at the end of the Create VM command output. You can also get it by executing photon vm list. Page 47
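For reference, a fully expanded version of the create command used above looks like the sketch below. The UUIDs here are hypothetical placeholders for illustration only; substitute the values returned by photon network list and photon image list in your lab.
photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w 9d2c1b3a-0000-0000-0000-000000000000 -i 4f8e7d6c-0000-0000-0000-000000000000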

48 Show VM details More information about the VM can be found using the show command. 1. To show VM details, execute: photon vm show UUID of lab-vm1 Notice that you can see the disk information and the Network IP. The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vsphere Client. Page 48

49 Stop VM We are going to shutdown the VM in order to attach a Persistent Disk to it. Our boot image is not configured to support hot add of storage so we will shut the VM down first. 1. To Stop the VM, Execute: photon vm stop UUID of lab-vm1 Page 49

50 Persistent Disks So far we have created a VM with a single Ephemeral disk. If we delete the VM, the disk is deleted as well. In a Cloud environment, there is the need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data. Persistent Disks are VMDKs that live independently of individual Virtual Machines. They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM. We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM. Let's create a persistent disk. 1. To Create a persistent disk, Execute: photon disk create --name disk-2 --flavor my-pers-disk --capacitygb 2 Let's look at the details: --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacitygb sets the size of the disk to 2 GB. 2. More information about the disk can be found using: photon disk show UUID of the Disk Notice that the disk is DETACHED, meaning it is not associated with any VM. Let's ATTACH it to our VM. Attach Persistent Disk To VM Now we will attach that newly created persistent disk to the VM we created previously. Page 50

51 1. To find the VM UUID, Execute: photon vm list 2. To find the Disk UUID, Execute: photon disk list 3. To attach the disk to the VM, Execute: photon vm attach-disk uuid of lab-vm1 --disk uuid of disk Page 51
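As a worked sketch with hypothetical UUIDs (yours will differ):
photon vm list        # note the UUID of lab-vm1
photon disk list      # note the UUID of disk-2
photon vm attach-disk 1111aaaa-0000-0000-0000-000000000000 --disk 2222bbbb-0000-0000-0000-000000000000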

52 Show VM Details Now we will see the attached Disk using the VM Show command again. 1. To Show VM details, execute: photon vm show UUID of lab-vm1 Notice that you can see the disk information and both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM. Page 52

53 Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts Persistent Disks are different from standard vsphere ephemeral disks in that they are not tied to the lifecycle of a VM. You will use your previously created persistent disk to store Web content for Nginx. Web content stored in an individual container is static. It must be manually updated or files must be copied into each container that might present it. Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk, so it can be changed in one place and made available wherever we present it. We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host. The website on that host will reflect the changed content. Docker volumes provide the ability to persist disks across containers. Photon Platform persistent disks extend that capability across Docker hosts. Page 53
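Condensed into the commands involved, the flow you are about to walk through looks like this (every command appears, with a full explanation, in the following steps):
photon vm start UUID of lab-vm1      # power on the Docker host
ssh root@IP of lab-vm1               # connect to it
./mount-disk-lab-vm1.sh              # format and mount the persistent disk at /mnt/dockervolume
docker run -v /mnt/dockervolume:/volume -d -p 80:80 <registry IP>:5000/nginx   # serve content from the disk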

54 Deploy Nginx Web Server We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises. Let's start the VM and get the IP address for lab-vm1. 1. To find the vm UUID, Execute: photon vm list 2. To start lab-vm1, Execute: photon vm start UUID of lab-vm1 3. To find the vm IP for lab-vm1, Execute: photon vm networks UUID of lab-vm1 Note: It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command. Keep trying, or log into vcenter and grab the IP from there. Page 54

55 Connect to lab-vm1 1. From the CLI, execute: ssh root@ip of lab-vm1 password is VMware1! Page 55

56 Setup filesystem The storage device is attached to the VM, however we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. 1. To set up the filesystem, Execute: ./mount-disk-lab-vm1.sh 2. You will see that the device /dev/sdb is mounted at /mnt/dockervolume. This is the Persistent disk you previously created. Create The Nginx Container With Docker Volume We will now create an Nginx container on our Docker host (lab-vm1). The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host. This means that any changes to /volume from the container will be persisted on our physical persistent disk. Page 56

57 1. To create the nginx container, Execute: docker run -v /mnt/dockervolume:/volume -d -p 80:80 <registry IP>:5000/nginx Let's look at this command: docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached (in the background) until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. Notice that the image is specified as IP:port/image. This is because we are using a local Docker registry and have tagged the image with the ip address and port of the registry. Page 57
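For the curious: we have not shown the contents of the mount-disk-lab-vm1.sh script you ran earlier, but a script that does this job is typically only a few lines. A minimal sketch, assuming the disk appears as /dev/sdb and an ext4 filesystem is acceptable (this is not the lab's actual script):
# create a filesystem on the persistent disk and mount it
mkfs -t ext4 /dev/sdb
mkdir -p /mnt/dockervolume
mount /dev/sdb /mnt/dockervolume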

58 Verify Webserver Is Running 1. Open one of the Web Browsers on the desktop 2. Enter the IP address of lab-vm1. The IP may be different from the one in the image above. It is the same IP you used in the previous ssh command from the CLI. The default http port is 80, so you do not need to enter it. You should see the Nginx homepage. Modify Nginx Home Page We will copy the Nginx default home page to our Docker volume and modify it. Once we have done that, we will move the disk to a new VM, create a new container with a Docker Volume and verify that the changes we made have persisted. 1. Connect to your running container. From the CLI, you should still have an ssh connection to lab-vm1. Execute: docker exec -it first3charsofcontainerid bash This command says to connect to the container through an interactive terminal and run a bash shell. You should see a command prompt within the container. If you cannot find your containerid, Execute: docker ps to find it. 2. To see the filesystem inside the container and verify your Docker volume (/volume), Execute: Page 58

59 df 3. We want to copy the Nginx home page to our Persistent disk. Execute: cp /usr/share/nginx/html/index.html /volume 4. To Exit the container, Execute: exit Edit The Index.html You will use the vi editor to make a change to the index.html page. If you are comfortable with vi and html, then make whatever modifications you want. These are the steps for a very simple modification. 1. Execute: vi /mnt/dockervolume/index.html 2. Press the down arrow until you get to line 14 with "Welcome To Nginx" 3. Press the right arrow until you are at the character "N" in "Nginx" 4. Press the "cw" keys to change word and type "the Hands On Lab At VMWORLD 2016" 5. Press the "esc" key and then the ":" key 6. At the ":" prompt, enter "wq" to save changes and exit vi Page 59

60 7. At the Linux Prompt: Type "exit" to close the ssh session. You are now back in the Photon CLI. Detach The Persistent Disk We now want to remove this disk from the VM. Remember that detaching the disk does not delete it. Detach the Persistent Disk from lab-vm1 1. To get the UUID of the lab-vm1, Execute: photon vm list 2. To get the UUID of the Persistent Disk, Execute: photon disk list 3. Execute: photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2 Page 60

61 Reminder that you can get the UUID of the VM with photon vm list and the UUID of the disk with photon disk list commands. Attach The Persistent Disk To New VM You will attach the persistent disk to the lab-vm2 VM you created earlier. 1. To get the UUID of lab-vm2, Execute: photon vm list 2. To attach the disk to lab-vm2, Execute: photon vm attach-disk uuid of lab-vm2 --disk uuid of disk Start and Connect to lab-vm2 1. To start the VM lab-vm2, Execute: photon vm start UUID lab-vm2 2. To get the network IP of lab-vm2, Execute: photon vm networks UUID lab-vm2 Page 61

62 Note: You may have to wait a minute or two for the IP to appear. If you are impatient you can open the vsphere client and get it there. 3. From the CLI, execute: ssh root@ip of lab-vm2 password is VMware1! Page 62

63 Setup Filesystem The storage device is attached to the VM, however we still need to format the disk and mount the filesystem. We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this vm. mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made. 1. To set up the filesystem, Execute: ./mount-disk-lab-vm2.sh You will see that the device /dev/sdb is mounted at /mnt/dockervolume. Create The New Nginx Container We will now create a new Nginx container on our second Docker host (lab-vm2). This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host. Nginx uses /usr/share/nginx/html as the default path for its web content, so our changed home page on the persistent disk will be used as the default page. 1. To create the nginx container, Execute: docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 <registry IP>:5000/nginx To return to the Photon CLI, type: exit Page 63

64 Let's look at this command: docker run creates a container. The -v says to create a Docker volume in the container that is mounted on /mnt/dockervolume from the host. The -d runs the container detached (in the background) until it is explicitly stopped. The -p maps container port 80 to port 80 on the host, so you will be able to access the Nginx Web Server on port 80 from your browser. Lastly, nginx is the Docker image to use for container creation. It resides on a local Docker Registry we created on port 5000. Extra Credit: From the CLI, Execute: docker ps and you will see the Docker Registry we are using. Page 64

65 Verify That Our New Webserver Reflects Our Changes You should see the new Nginx homepage on the IP of lab-vm2. 1. Open one of the Web Browsers on the desktop 2. Enter the IP address of lab-vm2. The default http port is 80, so you do not need to enter it. You should see the modified Nginx homepage. Clean Up VMs Our lab resources are very constrained. In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab. 1. To delete a VM, Execute: photon vm list and note the UUIDs of the two VMs 2. Execute: photon vm stop UUID of lab-vm2 3. Execute: Page 65

66 photon vm detach-disk UUID of lab-vm2 --disk UUID of disk 4. Execute: photon vm delete UUID of lab-vm2 5. Repeat steps 2 and 4 for lab-vm1 Page 66

67 Monitor and Troubleshoot Photon Platform Photon Platform can be configured to push logs to any syslog server endpoint. We have configured this deployment for LogInsight. You will troubleshoot a failure in VM deployment using LogInsight and will monitor your infrastructure through integration with Graphite and Grafana. Page 67

68 Enabling Statistics and Log Collection Photon platform provides the capability to push log files to any Syslog server. Infrastructure statistics can also be captured and pushed to a monitoring endpoint. Both of these are enabled during control plane deployment. In this example, we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs. Our Syslog server in this lab is LogInsight. Monitoring Photon Platform With Graphite Server Let's start by seeing what statistics are available from Photon. In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time. Page 68

69 1. Connect to the Graphite Server by opening a browser 2. Select the Graphite Browser Bookmark from the Toolbar. Page 69

70 Expand To View Available Metrics Expand the Metrics folder and then select the Photon Folder. You can see two ESXi Hosts and statistics for CPU, Memory, Storage and Networking. 1. Expand cpu and select usage 2. Expand mem and select usage If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed. Perform the following step only if no data is displayed in Graphite. No Performance Data in Graphite If you saw performance data in Graphite, then skip to the step "View Graphite Data Through Grafana". You will ssh into our two esxi hosts and restart the photon controller agent process. If you are seeing performance data from only one host, then only restart that host's agent. Page 70

71 1. Login to the PhotonControllerCLI through Putty. 2. From the PhotonControllerCLI, Execute: ssh password is VMware1! 3. Execute: /etc/init.d/photon-controller-agent restart 4. Execute: exit 5) Repeat steps 2-4 for the other host. It will take a couple of minutes for the stats to begin showing up in the browser. You may need to refresh the page. You may also want to jump to the LogInsight Section of the lab and come back here if you don't want to wait for the stats to collect. Page 71

72 View Graphite Data Through Grafana Graphite can also act as a sink for other visualization tools. In this case, we will take the data from Graphite and create a couple of charts in Grafana. 1. From your browser, Select the Grafana Bookmark from the toolbar. Graphite Data Source For Grafana We have previously set up Graphite as the source for Data used by Grafana. To see this setup 1. Click on Data Sources. We simply pointed to our Graphite Server Endpoint. Create Grafana Dashboard Grafana has the capability to create a lot of interesting graphics. That is beyond the scope of this lab, but feel free to play and create whatever you want. We will create a simple Dashboard to show CPU and Mem metrics that we viewed previously in Graphite. Page 72

73 1. Click on Dashboards 2. Click on Home 3. Click on New Page 73

74 Add A Panel 1. Select the Green tab 2. Add Panel 3. Graph Open Metrics Panel This is not intuitive, but you must click where it says "Click Here" and then Click Edit to add metrics Add Metrics To Panel 1. Select "Select Metrics" and select photon. Page 74

75 2. Select "Select Metrics" again and select one of the esxi hosts (This is the same Hierarchy you saw in Graphite). Continue selecting until your metrics look like this. This is a pretty straightforward way to monitor performance of Photon Platform resources. Page 75

76 Page 76

77 Troubleshooting Photon Platform With LogInsight We will try to create a VM that needs more resources than are available in our environment. The create task will error out. Rather than search through individual log files, we will use LogInsight to see more information. 1. Execute the following command: photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your Network -i UUID of your PhotonOS image The cluster-master-vm flavor will try to create a VM with 8GB of Memory. We do not have that available on our Cloud hosts, so it will fail. The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs. 2. Note the Task ID from the Create command. We are going to use that in a LogInsight Query. Page 77
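If you would like to inspect the failed task from the CLI before moving on to LogInsight, tasks can be listed and shown like other entities. Treat the commands below as an assumption about this CLI build and fall back to LogInsight if they are not available:
photon task list
photon task show Task ID from the Create command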

78 Connect To Loginsight 1. From Your browser, select the LogInsight Bookmark from the toolbar and login as User: admin password: VMware1! Query For The Create Task Once you Login, you will see the Dashboard screen 1. Click on Interactive Analytics 2. Paste the Task ID into Filter Field 3. Change the Time Range to Last Hour of Data 4. Click the Search Icon You can look through these task results to find an error. More interesting is looking through RequestIDs 5. In Photon Platform, every Request through the API gets a requestid. There could be many ReqIDs that are relevant to a task. It takes a little work to see the right entries to drill into. For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task. So you see the Create VM task itself was in Page 78

79 error, but the RequestID is for a request that was successful (querying the task info). So we need to scroll for a more interesting request. Page 79

80 Browse The Logs For Interesting Task Error, Then Find RequestID 1. Scroll down in the Log and look for RESERVE_RESOURCE. 2. Find the RequestID and Paste it into the Filter Field Your log files will be slightly different, but you should see something similar. Page 80

81 Search The RequestID For RESERVE_RESOURCE Once you click on the Search Icon, you will see log hits for that RequestID. These are actual requests made by the Photon Controller Agent running on the ESXi hosts. In this case the Agent Request Errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true. In many instances, the requestid will provide new data to root cause the initial Task Failure. This is especially useful as the scale of your system grows. Page 81

82 Conclusion The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 environments. The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads. The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure. The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional Admin Interfaces. In this module, you have been introduced to Photon Platform Multi-tenancy and its associated model for managing resources at scale. You have also seen the API, consumed in this instance through the Command Line Interface. You have also seen how storage persistence in the infrastructure can add value to Microservice applications that take advantage of Docker containers. Finally, you have been exposed to monitoring and troubleshooting of this distributed environment. Page 82

83 Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes) Page 83

84 Introduction This module provides an introduction to the operational model for developers of cloud native applications. Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as up/down scaling of application instances. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform. You will build and deploy a simple web application using Opensource Kubernetes and Docker. You will also see how orchestration at scale can be administered through a tool like Rancher. 1) Container Orchestration With Kubernetes on Photon Platform. We have provided a small Kubernetes cluster, deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx Webserver application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. 2) Container Orchestration with Rancher on Photon Platform Rancher is another Opensource Container management platform. You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform and will then deploy an Nginx Webserver onto the Docker hosts. Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform. Page 84

Container Orchestration With Kubernetes on Photon Platform
We have provided a small Kubernetes cluster, deployed on Photon Platform. You will see the process for deploying Opensource Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not create it as part of the lab. You will deploy the Nginx/Redis application (manually deployed in Module Two) via Kubernetes. You will verify that multiple instances have been deployed and see how to scale additional instances. You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you. You will also troubleshoot the outage via LogInsight.
Kubernetes Deployment On Photon Platform
Photon Platform provides two methods for deploying Kubernetes Clusters. The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment. We will briefly look at the CLI commands that support this.
1) From the Windows Desktop, login to the PhotonControllerCLI VM. SSH key login has been enabled, but if you have a problem, the password is vmware. Page 85

Photon Cluster Create Command
The CLI supports a Cluster Create command. This command allows you to specify the cluster type (Kubernetes, Mesos and Swarm are currently supported) and the size of the cluster. You will also provide additional IP configuration information. Photon Platform will create the Master and Worker node VMs, configure the services (Kubernetes in this example), set up the internal networking and provide a running environment with a single command. We are not going to use this method in the lab. If you try to create a Cluster, you will get an error because there is not enough resource available to create more VMs.
Example: photon cluster create -n Kube5 -k KUBERNETES --dns dns-server --gateway Gateway --netmask Netmask --master-ip KubeMasterIP --container-network KubernetesContainerNetwork --etcd1 StaticIP -w demo-network-uuid -s 5
With this command we are creating a cluster called Kube5 of type Kubernetes. We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel internal to Kubernetes). The Worker node VMs will receive IPs from DHCP. You specify the network on which to place these VMs through the -w option (the uuid of the demo network), and -s is the number of Worker nodes in the cluster. The Kubernetes container network is a private network that is used by Flannel to connect Containers within the Cluster.
1. To see the command syntax, Execute: photon cluster create -h Page 86
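Purely as an illustration of the command's shape, the same example with hypothetical values substituted for the placeholders might look like the sketch below. The IP addresses, network uuid and verification subcommands are assumptions for this sketch and will not work in the lab environment.

# Illustrative only - placeholder values, do not run in the lab
photon cluster create -n Kube5 -k KUBERNETES \
  --dns 192.168.100.2 --gateway 192.168.100.1 --netmask 255.255.255.0 \
  --master-ip 192.168.100.50 --container-network 10.2.0.0/16 \
  --etcd1 192.168.100.51 -w a1b2c3d4-e5f6-7890-abcd-ef1234567890 -s 5
# Inspecting the result afterwards (these subcommands are assumed to exist in this CLI build)
photon cluster list
photon cluster show <cluster-id>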

Kube-Up On Photon Platform
You just saw the Photon Cluster Create command. This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale up as needed. That is great for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided on the command line. What if you want a different version of Kubernetes or Docker within the VMs? How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes? These are not easily done with Cluster Create at this point. We have provided a second option for creating the cluster. We have modified Open Source Kubernetes directly to support Photon Platform. Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts. This allows you complete freedom to configure the cluster however you want. A rough sketch of that workflow follows this section.
Our Lab Kubernetes Cluster Details
We have created a Kubernetes Cluster with one Master and 2 Worker nodes. You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photoncontroller/ You can look through the config-default and config-common files to see how some of the configuration is done.
1. Let's take a look at the VMs that make up our cluster. Execute: photon tenant set kube-tenant This points to the kube tenant that we created for our cluster. For details on tenants and projects, return to Module 1.
2. To set our kube project, Execute: photon project set kube-project
3. To see our VMs, Execute: photon vm list Page 87
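For reference only, a kube-up deployment of the customized cluster would look roughly like the sketch below. This is not a step in the lab; the provider name and build target are assumptions based on the description above, so check the scripts under cluster/photon-controller/ in your tree before relying on them.

# Clone and build the modified Kubernetes tree (illustrative; not run in this lab)
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make quick-release
# Tell kube-up to use the Photon Platform deployment scripts
# (provider name is an assumption; verify against your cluster/ directory)
export KUBERNETES_PROVIDER=photon-controller
./cluster/kube-up.sh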

You can see that our cluster consists of one Master VM and 2 Worker VMs. Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs. Page 88

Basic Introduction To Kubernetes Application Components
Before we deploy the app, let's get a little familiarity with Kubernetes concepts. This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application. A Node represents a Worker node in our Kubernetes Cluster. Kubernetes has a basic unit of work called a Pod. A Pod is a group of related containers that will be deployed to a single Node. You can generally think of a Pod as the set of containers that make up an application. You can also define a Service that acts as a Load Balancer across a set of containers. Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod. In our application, you will deploy 3 replicated copies of the Nginx Webserver, with a frontend Service. The command line utility for managing Kubernetes is called kubectl. Let's start by looking at the nodes.
1. From the CLI VM, Execute: kubectl get nodes
You will see the two Worker nodes associated with our cluster. This is slightly different from seeing the VMs that the nodes run on, as you did previously.
Deploying An Application On Kubernetes Cluster
Our application is defined through 3 yaml files: one each for the Pod, Replication Controller and Service. These files provide the configuration Kubernetes uses to deploy and maintain the application. To look at these configuration files:
1. Execute: Page 89

cat ~/demo-nginx/nginx-pod.yaml
2. Execute: cat ~/demo-nginx/nginx-service.yaml
3. Execute: cat ~/demo-nginx/nginx-rc.yaml Page 90
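The authoritative contents of these files are what you see in your terminal. Purely as an illustration of the shape of such definitions, a minimal Replication Controller and Service for an nginx image might look like the sketch below; the names, labels and ports here are invented and do not necessarily match the lab's files.

# Illustrative Replication Controller sketch (not the lab's nginx-rc.yaml)
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
# Illustrative Service sketch (not the lab's nginx-service.yaml)
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort
  selector:
    app: nginx-demo
  ports:
  - port: 80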

Kubectl To Deploy The App
We are now going to deploy the application. From the CLI VM:
1. To deploy the pod, Execute: kubectl create -f ~/demo-nginx/nginx-pod.yaml
2. To deploy the service, Execute: kubectl create -f ~/demo-nginx/nginx-service.yaml
3. To deploy the Replication Controller, Execute: kubectl create -f ~/demo-nginx/nginx-rc.yaml Page 91
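If you want to confirm the deployment from the CLI before moving to the UI, the standard kubectl queries below will show what was created. The Replication Controller name used in the scaling example is an assumption; substitute the name defined in nginx-rc.yaml.

# List the pods, service and replication controller that were just created
kubectl get pods -o wide
kubectl get svc
kubectl get rc
# Scaling example (hypothetical RC name; use the one from nginx-rc.yaml)
kubectl scale rc nginx-demo --replicas=4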

Kubernetes UI Shows Our Running Application
After you have deployed your application, you can view it through the Kubernetes UI.
1. Open your Web Browser and enter the Kubernetes UI address. If you are prompted for a username and password, they are admin/4hjyqnfzk4tntbuz. Sorry about the randomly generated password. You may get an invalid certificate authority error; click on Advanced and proceed to the site. nginx-demo is your application.
2. Note the port number for the External endpoint. We will use it in a couple of steps. Page 92

Application Details
1. Click on the 3 dots and select "View Details" to see what you have deployed. Page 93

Your Running Pods
You can see the Replication Controller is maintaining 3 Replicas. They each have their own internal IP and are running on the 2 Nodes. 3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid. Explore the logs if you are interested. We can connect to the application directly through the Node IP and the port number we saw earlier. Page 94
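The module introduction mentioned killing a webserver instance and watching Kubernetes replace it. One simple way to try that from the CLI VM is sketched below; the pod name is a placeholder, so copy an actual name from the kubectl get pods output.

# Watch the Replication Controller recreate a deleted pod
kubectl get pods
# Delete one of the nginx pods (name below is a placeholder)
kubectl delete pod nginx-demo-abc12
# Within a few seconds a replacement pod appears, keeping the replica count at 3
kubectl get pods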

Connect To Your Application Web Page
Now let's see what our application does. We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage. It's just a simple dump of the application configuration info.
1. From your browser, connect to the node IP and port. Note that your port number may be different from the lab manual port number; the IP will be the same. Page 95

Container Orchestration With Docker Machine Using Rancher on Photon Platform
Rancher is another Opensource Container management platform. You will use the Rancher UI to provision Docker-Machine nodes on Photon Platform and deploy a Microservice application onto the newly created Docker hosts. Rancher provides that higher-level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.
Login To PhotonControllerCLI VM
1. Open Putty from the desktop and click on the PhotonControllerCLI link
2. Click on Open Page 96

Deploy Rancher Server
You will first deploy a new version of the Rancher Server container into our environment. Before that you need to delete the existing container.
1. Execute: docker ps | grep rancher/server to see the running container. Find the Container ID for the rancher/server container. That is the one we want to remove.
2. Execute: docker kill "ContainerID" This will remove the existing Rancher Server container.
3. Execute: !885 This will execute command number 885 stored in Linux history. It will create a new Docker container. Note that your new container image is tagged with :5000. This is the local Docker Registry that is used to serve our lab's images. Page 97
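For context, the history entry replayed by !885 is a docker run of the Rancher Server image from the local registry. A hedged sketch of what such a command could look like is below; the registry address, restart policy and port mapping are assumptions, so rely on the history entry in the lab rather than this example.

# Hypothetical example of running Rancher Server from a local registry
# (registry address is a placeholder for illustration)
docker run -d --restart=unless-stopped -p 8080:8080 <registry-ip>:5000/rancher/server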

Clean Up Rancher Host
The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.
1. Execute: ssh root@ The password is: vmware
2. Execute: rm -rf /var/lib/rancher/state
3. Execute: docker rm -vf rancher-agent
4. Execute: docker rm -vf rancher-agent-state Page 98

Connect To Rancher UI
Now we can add a Rancher host. Rancher Server is running in a container, and you can connect to it from your browser. Rancher hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section. We will first add a Rancher host. The host is a VM that we previously created for you.
1. From your browser, connect to the Rancher Server address and then click Add Host
2. If you get this page, just click Save Page 99

Page 100

Add Rancher Host
Rancher has several options for adding hosts. There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine plugin for Photon Controller available. In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.
1. Note that the Custom icon is selected
2. Copy the pre-formed Docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the "Copy to Clipboard" icon at the right of the box. Page 101

Paste In The Docker Run Command To Start Rancher Agent
Go back to the Putty session. You should still be connected to your Rancher Host VM. You will now paste in the Docker Run command you captured from the Rancher UI. Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line. Note: You must copy/paste the command from the Rancher UI and not use the command in the image. The registration numbers are specific to your host.
1. Execute: either right-click the mouse or press Ctrl-V, then hit Return
View the Agent Container
To view your running container:
1. Execute: docker ps Page 102
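The pasted command follows the general shape of the Rancher Agent bootstrap. The sketch below only illustrates that shape; the server address, agent version tag and registration token are all hypothetical, so always use the exact command generated by your Rancher UI.

# Illustrative shape of a Rancher Agent registration command (do not run as-is;
# the URL, version tag and token are placeholders)
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:v1.0.1 http://<rancher-server>:8080/v1/scripts/<registration-token>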

Verify New Host Has Been Added
To view your new host, return to the Rancher UI in your browser.
1. Click the Close button
2. Click on Infrastructure and Hosts
3. This is your host Page 103

Page 104

Deploy Nginx Webserver
To deploy our application, we are going to create an Nginx Container Service. Services in Rancher can be a group of containers, but in this case we will be deploying a single-container application.
1. Click on Containers
2. Click on Add Container
Configure Container Info
We need to define the container we want to deploy.
1. Enter a Name for your container
2. Specify the Docker Image that you will run. This image is in a local Registry, so the name is in the form IP:port/image-name. Enter :5000/nginx
3. This image is already cached locally on this VM, so uncheck the box to Pull the latest image Page 105

4. We now want to map the container port to the host port that will be used to access the Webserver. Nginx by default is listening on Port 80. We will map it to Host port 2000. Note that you might have to click on the + Portmap sign to see these fields.
5. Click on the Create Button. It may take a minute or so for the container to come up. It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page. Page 106
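For comparison, the port mapping you just configured in the Rancher UI is roughly equivalent to what you would do by hand with Docker on the host. A minimal sketch, assuming the same local registry image and the 2000-to-80 mapping (the registry address and container name are placeholders, and Rancher additionally attaches its managed network):

# Roughly what this container configuration amounts to on the Docker host
docker run -d -p 2000:80 --name nginx-demo <registry-ip>:5000/nginx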

Container Information
1. Once your container is running, check out the performance charts
2. Note that you can see the container status and its internal IP address - this is a Rancher-managed network that containers communicate on.
Open Your Webserver
From your Browser, enter the IP address of the Rancher Host VM and the Port you mapped.
1. From your Internet Browser, enter :2000 to view the default Nginx webpage Page 107

Rancher Catalogs
Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors. Browse through some of the available applications. You will not be able to deploy them because the lab does not have an external internet connection. Page 108
