Designing the Stable Infrastructure for Kernel-based Virtual Machine using VPN-tunneled VNC
Presented by: Berkah I. Santoso, Informatics, Bakrie University
International Conference on Computer Science 2014
December 5th, 2014, Grand Clarion Hotel & Convention, Makassar
Table of Contents
- Abstract (slide 3)
- Introduction (slide 4)
- Related Works (slide 7)
- Proposed System (slide 9)
- Implementation and Diagnosis (slide 14)
- Conclusion (slide 29)
- Preview: list of running VMs (slide 32)
- Preview: log-on form of VNC (slide 33)
- Preview: successful VNC for VM (slide 34)
Abstract
The key aspect of cloud computing is virtualization, which aims to achieve stability, scalability and flexibility in a cloud infrastructure. Shared computing resources such as processor, memory, storage and network play important roles in operating the cloud infrastructure, so they must be managed and monitored carefully. Virtual machine management (VMM) tools play an important role in giving real computing resource information to the system administrator. The combined VMM, VPN and VNC approach can help users obtain stable, secure cloud services. The models function stably and securely under high VM computational resource consumption. The critical event for providing the shared computing resources is the VMs' boot process. An increase in the number of VMs does not affect the working of the physical host, which indicates its practicability in a flexible environment.
Introduction
There are two different forms of virtualization architecture. In bare-metal virtualization, no host O/S exists because the VMM sits directly above the underlying physical hardware. In hosted virtualization, the VMM sits on top of the host O/S and runs as an application [4].
Introduction
Virtual Network Computing (VNC) is a desktop sharing system which uses the Remote Frame Buffer (RFB) protocol to remotely control another computer. It transmits user events from one computer to another and relays screen updates back in the other direction, over a network, using buffered I/O streams [8]. A Virtual Private Network (VPN) is a computer network in which some of the links between nodes are carried by open connections or virtual circuits within some larger network, such as the Internet, as opposed to running across a single private network. The link-layer protocols of the virtual network are said to be tunneled through the transport network. A VPN provides private network connections over a publicly accessible shared network [9].
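As a minimal sketch (not from the paper), an RFB session opens with a fixed 12-byte ProtocolVersion handshake such as `b"RFB 003.008\n"`; the following hypothetical helper shows how a client could parse it:

```python
def parse_rfb_version(handshake: bytes) -> tuple:
    """Parse the 12-byte RFB ProtocolVersion message, e.g. b'RFB 003.008\\n'."""
    if len(handshake) != 12 or not handshake.startswith(b"RFB "):
        raise ValueError("not an RFB ProtocolVersion message")
    # Bytes 4..10 hold the zero-padded "xxx.yyy" version string.
    major, minor = handshake[4:11].decode("ascii").split(".")
    return int(major), int(minor)

print(parse_rfb_version(b"RFB 003.008\n"))
```

RFB 3.8 is the version commonly negotiated by modern VNC servers; the parser rejects anything that is not a well-formed handshake.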
Introduction
We present stable and secure hosted virtualization and a case study of deploying a private virtualized infrastructure using KVM. KVM runs on Red Hat Enterprise Linux v6.5. We installed Linux and Unix variants on the VMs with the minimal installation option. We also kept the default boot loader configuration for each VM's O/S.
Related Works
- Papaya [2]: a management platform based on Eucalyptus. Centralized monitoring and management, using a web interface for managing and monitoring VMs.
- Snooze [11]: a private cloud management platform for monitoring and managing resources. Self-organizing, hierarchical and distributed cloud management, built in Java. Its fault-tolerance features do not impact application performance, and the system remained highly scalable with increasing amounts of resources.
Related Works
Our work proposes a combination of:
- VM manager
- Distributed cloud management
- VM migration
- Built using Python
- An intuitive graphical user interface for VM monitoring
Proposed System
We propose the design, implementation and evaluation of stable and secure private hosted virtualization using the following components:
- KVM (qemu-kvm v0.12.1.2-2)
- VM service manager (libvirt v0.10.2-29)
- Red Hat Enterprise Linux v6.5 (kernel 2.6.32-431.el6.x86_64, stable)
- OpenVPN v2.1.32-2
- DHCP v4.1.1
- VM manager (virt-manager-0.9.0-19.el6.x86_64)
Proposed System
The hardware consists of:
- A tower server for the hosted-virtualization KVM: quad-core Intel Xeon CPU E3-1220 V2 @ 3.10 GHz, 8 GB RAM, 1 TB RAID 10 local disk, dual 1 Gbps Network Interface Cards.
- A tower server for the VPN and Dynamic Host Configuration Protocol (DHCP) services: quad-core Intel Xeon CPU E3-1220 V2 @ 3.10 GHz, 8 GB RAM, 1 TB RAID 10 local disk, single 1 Gbps Network Interface Card.
- A Cisco Catalyst 2960 1 Gbps 24-port Layer-2 switch, with Virtual LANs (VLANs) configured.
- A Cisco 2801 router connecting the internal network to external networks, with an access control list configured.
- Desktop PCs (HP Compaq 5700) for accessing the VM services.
Proposed System
The bridge technique separates the private network for VM system administration from the service network for the VMs. The VMs are configured as follows:
- VM2: FreeBSD 9.0 (Unix), IP address 192.168.122.3/24.
- VM3: Fedora Linux 18, IP address 192.168.122.4/24.
- VM4: Ubuntu 12.04 Long Term Support (LTS) Server, IP address 192.168.122.5/24.
- VM5: Debian Linux 7.5, IP address 192.168.122.6/24.
- VM6: OpenBSD 5.5 (Unix), IP address 192.168.122.7/24.
Each VM is assigned a VNC port for its administration service. Graphical monitoring is shown for every VM: the VM's CPU usage, host CPU usage, disk I/O and network I/O.
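The VM-to-VNC-port assignment above can be sketched as follows; the display numbers are illustrative assumptions (the paper does not list them), while the port arithmetic follows the standard VNC convention that display N listens on TCP port 5900 + N:

```python
# Hypothetical VM table; OS names and IPs come from the slides,
# the VNC display numbers are assumed for illustration only.
VMS = {
    "VM2": {"os": "FreeBSD 9.0",             "ip": "192.168.122.3", "display": 2},
    "VM3": {"os": "Fedora Linux 18",         "ip": "192.168.122.4", "display": 3},
    "VM4": {"os": "Ubuntu 12.04 LTS Server", "ip": "192.168.122.5", "display": 4},
    "VM5": {"os": "Debian Linux 7.5",        "ip": "192.168.122.6", "display": 5},
    "VM6": {"os": "OpenBSD 5.5",             "ip": "192.168.122.7", "display": 6},
}

def vnc_port(display: int) -> int:
    # Standard VNC convention: display N is reachable on TCP port 5900 + N.
    return 5900 + display

for name, vm in VMS.items():
    print(f"{name}: {vm['os']} at {vm['ip']}, VNC port {vnc_port(vm['display'])}")
```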
Proposed System
Security-Enhanced Linux (SELinux) secures (masquerades) the physical host and VMs when they are connected to trusted or untrusted networks. The router manages inbound and outbound traffic using its access control list (ACL) and Network Address Translation (NAT) configuration, and establishes secure connections for the VMs' users. Whenever a VM user requests a private connection, the authentication system challenges the user with a log-on page. We added the SHA1-with-RSA signature algorithm to the established OpenVPN connection. The client certificate is generated by the second tower (authentication) server, connecting the router to the client and their authorized VM.
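A minimal OpenVPN server configuration along these lines might look like the sketch below; the file names, subnet and cipher are assumptions for illustration, not the paper's actual configuration:

```conf
# Sketch of an OpenVPN server config (assumed values, not the paper's)
port 1194
proto udp
dev tun
ca   ca.crt            # CA issued by the second tower (authentication) server
cert server.crt        # certificate signed with SHA1-with-RSA
key  server.key
dh   dh2048.pem
server 10.8.0.0 255.255.255.0   # assumed VPN subnet for VM users
auth SHA1              # HMAC digest on the tunnel packets
keepalive 10 120
```

Clients then carry their generated certificate/key pair and connect through the router's ACL/NAT path to reach their authorized VM's VNC port over the tunnel.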
Implementation & Diagnosis
The libvirt daemon acts as a service which runs at start-up. The virtual machine manager monitors and manages the VMs in the cloud environment. We ran the five VMs using the virtual machine manager's queue mechanism, starting VM2 through VM6. We recorded the relevant measurements for both experiments until the log-on form of every O/S was displayed.
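A sketch of such a start-up queue, assuming the libvirt Python bindings (`libvirt.open`, `listAllDomains`, `isActive`, `create`); the function only relies on that interface, so it can be exercised without a hypervisor:

```python
def start_all_vms(conn):
    """Start every inactive domain on the connection, in listing order."""
    started = []
    for dom in conn.listAllDomains():
        if not dom.isActive():
            dom.create()  # boots the VM, equivalent to `virsh start <name>`
            started.append(dom.name())
    return started

# Against a real KVM host this would be used as:
#   import libvirt
#   conn = libvirt.open("qemu:///system")
#   print(start_all_vms(conn))
```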
Implementation & Diagnosis
We recorded the execution time, processor utilization and memory utilization of every VM for both experiments, along with the physical host's average load over 1 minute, 5 minutes and 15 minutes. We obtained the physical CPU and memory usage; the CPU and memory usage of the libvirtd process; the CPU and memory usage of every qemu-kvm process belonging to a virtual machine; and the CPU and memory usage of the physical host versus the virtual machines.
Implementation & Diagnosis
The measurement of the physical host's average load ran from 14:28:54 to 15:20:18. The average load increased when the virtual machines started up (14:29:24 to 14:30:00). The peak 1-minute average load was 5.91 at 14:44:29, the peak 5-minute load was 5.4 at 14:46:14, and the peak 15-minute load was 3.97 at 14:48:53.
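On a Linux host these load averages can be sampled from `/proc/loadavg`; a small parsing sketch (the sample line uses the slides' peak values for illustration):

```python
def parse_loadavg(line: str) -> tuple:
    """Parse a /proc/loadavg line into (1-min, 5-min, 15-min) load averages."""
    # Format: "<1min> <5min> <15min> <running/total procs> <last pid>"
    one, five, fifteen = line.split()[:3]
    return float(one), float(five), float(fifteen)

# On the host itself the sample would come from:
#   with open("/proc/loadavg") as f:
#       avg1, avg5, avg15 = parse_loadavg(f.read())
print(parse_loadavg("5.91 5.40 3.97 2/345 6789"))
```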
Implementation & Diagnosis
[Figure: the average load of the physical host against execution time, 14:28:54 to 15:19:24. Series: Load AVG1 (last 1-minute load), Load AVG2 (last 5-minute load), Load AVG3 (last 15-minute load)]
Implementation & Diagnosis
[Figure: the physical CPU and memory usage (%)]
Implementation & Diagnosis
[Figure: the CPU and memory usage (%) for libvirtd]
Implementation & Diagnosis
[Figure: the CPU and memory usage (%) for virtual machine 2 (Unix FreeBSD v9 installed)]
Implementation & Diagnosis
[Figure: the CPU and memory usage (%) for virtual machine 3 (Fedora Linux v18 installed)]
Implementation & Diagnosis
[Figure: the CPU and memory usage (%) for virtual machine 4 (Ubuntu Linux v12.04 installed)]
Implementation & Diagnosis
[Figure: the CPU and memory usage (%) for virtual machine 5 (Debian Linux v7.5 installed)]
Implementation & Diagnosis
[Figure: the CPU and memory usage (%) for virtual machine 6 (OpenBSD v5.5 installed)]
Implementation & Diagnosis
[Figure: the memory usage (%) of the physical host, libvirtd and virtual machines VM2-VM6, 14:28:54 to 15:20:18]
Implementation & Diagnosis
[Figure: the CPU usage (%) of the physical host, libvirtd and virtual machines VM2-VM6, 14:28:54 to 15:20:18]
Implementation & Diagnosis
VM2 (Unix FreeBSD v9 installed) consumed the most memory (15.2%) during its boot-up process. VM5 (Debian Linux v7.5 installed) consumed the least memory (6.9%). VM3 (Fedora Linux v18 installed) consumed the most CPU (100.5%) during its boot-up process. VM6 (Unix OpenBSD v5.5 installed) consumed the least CPU (99.8%).
Implementation & Diagnosis
The fastest boot-up process belonged to VM6 (Unix OpenBSD v5.5 installed), which took from 14:28:54 to 14:44:29, or 15 minutes and 35 seconds. The slowest boot-up process belonged to VM3 (Fedora Linux v18 installed), which took from 14:28:54 to 15:12:32, or 43 minutes and 38 seconds.
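The boot durations above follow directly from the measured timestamps, as this small check illustrates:

```python
from datetime import datetime

def boot_duration(start: str, end: str) -> str:
    """Duration between two same-day HH:MM:SS timestamps."""
    fmt = "%H:%M:%S"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    minutes, seconds = divmod(int(delta.total_seconds()), 60)
    return f"{minutes} minutes and {seconds} seconds"

print(boot_duration("14:28:54", "14:44:29"))  # VM6, fastest
print(boot_duration("14:28:54", "15:12:32"))  # VM3, slowest
```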
Conclusion
The computing resources of the physical host remain stable after the VMs' boot processes. Deployment of stable and secure private hosted virtualization can be done effectively where users are concerned with monitoring and virtual machine management. The physical host's processor and memory resources are affected by the booting process of several VMs. However, the physical host's resource consumption continued to decrease once the VMs finished booting to the O/S log-on form.
ICCS 2014, STMIK Handayani, Makassar
Conclusion
The physical host's memory usage was higher than the VMs' memory usage during the VMs' boot-up processes. The CPU and memory usage of libvirtd remained stable throughout the VMs' boot-up (CPU usage: 0.3% to 1.7%; memory usage: 0.2% to 0.4%). The VPN-tunneled VNC was designed to protect the VMs' resources. Thus, the main goal of the presented prototype is to manage technological needs effectively using an open-source virtualization platform.
Future Work
In future work, we will analyze security testing of the private cloud environment and its impact on computing resource usage. We will also analyze its security vulnerabilities related to computing resource monitoring and management.
Preview
[Screenshot: the list of running VMs]
Preview
[Screenshot: the log-on form of VNC for VM3]
Preview
[Screenshot: the successful VNC session for VM3 (Fedora Linux 18)]
Preview
[Screenshot: the log-on form of VNC for VM5]
Preview
[Screenshot: the successful VNC session for VM5 (Debian Linux 7.5)]
Thank You!