Xen Project 4.4: Features and Futures. Russell Pavlicek, Xen Project Evangelist, Citrix Systems

About This Release
Xen Project 4.4.0 was released on March 10, 2014. This release is the work of 8 months of development, with 1193 changesets. Xen Project 4.4 is our first release made with an attempt at a 6-month development cycle. Between Christmas and a few important blockers, we missed that target by about 6 weeks, but that is still not too bad overall.

Xen Project 101: Basics

Hypervisor Architectures
Type 1 (bare-metal hypervisor): a pure hypervisor that runs directly on the hardware and hosts guest OSs. Provides partition isolation and reliability, with higher security.
[Diagram: guest VMs (guest OS and apps) running on a hypervisor that contains the device drivers/models, scheduler and MMU, directly on the host hardware (I/O, memory, CPUs).]

Hypervisor Architectures
Type 1 (bare-metal hypervisor): a pure hypervisor that runs directly on the hardware and hosts guest OSs. Provides partition isolation and reliability, with higher security.
Type 2 (OS-hosted hypervisor): a hypervisor that runs within a host OS and hosts guest OSs inside of it, using the host OS services to provide the virtual environment. Low cost, no additional drivers, ease of use and installation.
[Diagram: side-by-side stacks. Type 1: hypervisor with device drivers/models, scheduler and MMU directly on the host hardware. Type 2: a user-level VMM with device models plus a ring-0 VM monitor kernel inside the host OS, which runs on the host hardware.]

Xen Project: Type 1 with a Twist
[Diagram: the classic Type 1 bare-metal picture, with device drivers/models, scheduler and MMU all inside the hypervisor, hosting guest VMs on the host hardware.]

Xen Project: Type 1 with a Twist
[Diagram: the classic Type 1 picture next to the Xen architecture. In Xen, the hypervisor contains only the scheduler and MMU; the device drivers and device models are not part of the hypervisor.]

Xen Project: Type 1 with a Twist
[Diagram: the classic Type 1 picture next to the Xen architecture. In Xen, a control domain (dom0) running Linux or BSD hosts the device drivers and device models, while the hypervisor itself contains only the scheduler and MMU.]

Basic Xen Project Concepts
- Console: interface to the outside world
- Control Domain (aka Dom0): dom0 kernel with drivers, plus the Xen management toolstack
- Guest Domains: your apps
- Driver/Stub/Service Domain(s): a driver, device model or control service in a box; de-privileged and isolated; lifetime: start, stop, kill
[Diagram: the control domain (dom0) and guest VMs running on the hypervisor (scheduler, MMU, XSM) and host hardware; the hypervisor and dom0 form the trusted computing base.]

Basic Xen Project Concepts: Toolstack+
- Console: interface to the outside world
- Control Domain (aka Dom0): dom0 kernel with drivers, plus the Xen management toolstack
- Guest Domains: your apps
- Driver/Stub/Service Domain(s): a driver, device model or control service in a box; de-privileged and isolated; lifetime: start, stop, kill
[Diagram: as above, now highlighting the toolstack and console running inside the control domain (dom0).]

Basic Xen Project Concepts: Disaggregation
- Console: interface to the outside world
- Control Domain (aka Dom0): dom0 kernel with drivers, plus the Xen management toolstack
- Guest Domains: your apps
- Driver/Stub/Service Domain(s): a driver, device model or control service in a box; de-privileged and isolated; lifetime: start, stop, kill
[Diagram: as above, with one or more driver, stub or service domains running alongside the control domain (dom0) and the guest VMs.]

Xen Project 4.4 Features

Improved Event Channel Scalability
- Event channels are paravirtualized interrupts
- Previously limited to either 1024 or 4096 channels per domain
- Domain 0 needs several event channels for each guest VM (for network/disk backends, QEMU, etc.)
- This imposed a practical limit on the total number of VMs of around 300-500 (depending on VM configuration)

Improved Event Channel Scalability (2)
- The new FIFO-based event channel ABI allows for over 100,000 event channels
- Improved fairness
- Allows for multiple priorities
- The increased limit allows for more VMs, which benefits large systems and cloud operating systems such as MirageOS, ErlangOnXen, OSv and HalVM
- Also useful for VDI applications

Experimental PVH Guest Support
- PVH mode combines the best elements of HVM and PV
- PVH takes advantage of many of the hardware virtualization features that exist in contemporary hardware
- Potential for significantly increased efficiency and performance
- Reduced implementation footprint in Linux and FreeBSD
- Enable with "pvh=1" in your guest config (see the sketch below)
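
As a hedged illustration (only pvh=1 comes from the slide; everything else is an invented example value), a PV-style xl guest config with PVH enabled might look like this:

    # Sketch of an xl guest config with experimental PVH enabled (Xen 4.4).
    # Only pvh=1 comes from the slide; all other values are illustrative.
    name    = "pvh-guest"
    pvh     = 1                         # run this PV guest inside an HVM container
    kernel  = "/boot/vmlinuz-guest"     # PVH uses PV-style direct kernel boot
    ramdisk = "/boot/initrd-guest.img"
    extra   = "root=/dev/xvda1 console=hvc0"
    memory  = 1024
    vcpus   = 2
    disk    = [ 'phy:/dev/vg0/pvh-guest,xvda,w' ]
    vif     = [ 'bridge=xenbr0' ]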

Xen Project Virtualization Vocabulary
- PV (Paravirtualization): the hypervisor provides an API used by the OS of the guest VM; the guest OS needs to be modified to use that API
- HVM (Hardware-assisted Virtual Machine): uses CPU VM extensions to handle guest requests; no modifications to the guest OS, but the CPU must provide the VM extensions
- FV (Full Virtualization): another name for HVM

Xen Project Virtualization Vocabulary
- PVHVM (PV-on-HVM drivers): allows hardware-virtualized guests to use PV disk and I/O drivers; no modifications to the guest OS; better performance than straight HVM
- PVH (PV in an HVM container, new in 4.4): almost fully PV, but uses hardware extensions to eliminate the PV MMU; possibly the best mode for CPUs with virtualization extensions (see the mode-selection sketch below)
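
How each mode is selected in an xl guest config, as a hedged sketch: pvh=1 is from these slides, while builder="hvm" and the PVHVM note reflect common usage and should be treated as assumptions:

    # PV guest (the default): direct kernel boot, no builder= needed
    kernel = "/boot/vmlinuz-guest"

    # HVM / FV guest: emulated platform via the HVM domain builder
    builder = "hvm"

    # PVHVM: an HVM guest whose kernel loads PV disk/network drivers;
    # chosen by the guest OS rather than by a dedicated xl option

    # PVH (new in 4.4): a PV guest run inside an HVM container
    pvh = 1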

The Virtualization Spectrum
[Chart: for guest modes ranging from HVM mode/domain through PVH (new in 4.4) to PV mode/domain, shows whether disk and network, interrupts and timers, the emulated motherboard and legacy boot, and privileged instructions and page tables are virtualized in software (VS), virtualized in hardware (VH) or paravirtualized (P).]

The Virtualization Spectrum
[Chart: the same spectrum with each cell rated for performance (optimal performance, scope for improvement, or poor performance) across the same rows and the same range of modes from HVM mode/domain through PVH (4.4) to PV mode/domain.]

Improved Disk Driver Domains
- Linux driver domains used to rely on udev events in order to launch backends for guests
- The dependency on udev has been replaced with a custom daemon built on top of libxl
- Now feature-complete and consistent between Linux and non-Linux guests
- Provides greater flexibility to run user-space backends inside driver domains
- Example of the new capability: driver domains can now use Qdisk backends, which was not possible with udev (see the disk sketch below)
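
As a hedged example (the domain name and image path are invented; the key names follow the xl disk configuration syntax), a guest disk served by a Qdisk backend running in a driver domain might be written as:

    # Sketch: attach a qcow2 image through the QEMU-based Qdisk backend
    # running in the driver domain "storage-dom" (names are illustrative).
    disk = [ 'backendtype=qdisk,backend=storage-dom,format=qcow2,vdev=xvda,access=rw,target=/srv/images/guest.qcow2' ]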

Improved Support for SPICE
- SPICE is a protocol for virtual desktops which allows a much richer connection than display-only protocols like VNC
- Added support for additional SPICE functionality, including vdagent, clipboard sharing and USB redirection (see the config sketch below)
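
A hedged xl config fragment for these SPICE features might look like the following; the option names are assumptions to verify against the xl.cfg documentation for your release:

    # Sketch: enable SPICE with vdagent, clipboard sharing and USB
    # redirection for an HVM guest (option names assumed, values illustrative).
    spice = 1
    spicehost = '0.0.0.0'
    spiceport = 6000
    spicedisable_ticketing = 1       # or set spicepasswd instead
    spicevdagent = 1                 # required for clipboard sharing
    spice_clipboard_sharing = 1
    spiceusbredirection = 4          # number of USB redirection channels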

GRUB 2 Support for Xen Project PV Images
- In the past, Xen Project software required a custom implementation of GRUB called pvgrub
- The upstream GRUB 2 project now has a build target which will construct a bootable PV Xen Project image (see the build sketch below)
- This ensures 100% GRUB 2 compatibility for pvgrub going forward
- Delivered in the upcoming GRUB 2 release (v2.02?)
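
As a rough sketch (configure flags, image names and install paths may differ between GRUB versions and are assumptions here), building and using the Xen platform target could look like this:

    # Build GRUB 2 with the Xen PV platform
    ./configure --with-platform=xen --target=x86_64
    make && make install

    # Package a standalone PV GRUB image with an embedded grub.cfg
    grub-mkstandalone -O x86_64-xen -o grub-x86_64-xen.bin \
        "boot/grub/grub.cfg=./grub.cfg"

    # Then point the guest at it in the xl config:
    #   kernel = "/usr/local/lib/grub-x86_64-xen.bin"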

Indirect Descriptors for Block PV Protocol
- Modern storage devices work much better with larger chunks of data
- Indirect descriptors have allowed the size of each individual request to triple, greatly improving I/O performance when running on fast storage technologies like SSD and RAID
- This support is available in any guest running Linux 3.11 or higher (regardless of Xen Project version); a way to check for it is sketched below
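
As an assumption not stated on the slide, a block backend that supports indirect descriptors advertises a feature-max-indirect-segments key in xenstore, so from dom0 you could check roughly like this:

    # Hedged check from dom0 (xenstore path layout assumed):
    xenstore-ls -f /local/domain/0/backend/vbd | grep feature-max-indirect-segments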

Improved kexec Support
- kexec allows a running Xen Project host to be replaced with another OS without rebooting
- Primarily used to execute a crash environment to collect information on a Xen Project hypervisor or dom0 crash
- The existing functionality has been extended to: allow tools to load images without requiring dom0 kernel support (which does not exist in upstream kernels); improve reliability when used from a 32-bit dom0
- kexec-tools 2.0.5 or later is required (a typical setup is sketched below)
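
A minimal, hedged sketch of the usual crash-kernel setup; the reservation size, kernel paths and append options are placeholders:

    # 1) Reserve memory for the crash environment on the Xen command line,
    #    e.g. in the hypervisor's GRUB entry:
    #       crashkernel=256M
    #
    # 2) From dom0, load a panic kernel with kexec-tools 2.0.5 or later:
    kexec -p /boot/vmlinuz-crash \
          --initrd=/boot/initrd-crash.img \
          --append="console=hvc0 irqpoll maxcpus=1"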

Improved XAPI and Mirage OS Support
- XAPI and Mirage OS are sub-projects within the Xen Project written in OCaml
- Both are also used in XenServer (http://xenserver.org) and rely on the Xen Project OCaml language bindings to operate well
- These language bindings have had a major overhaul, which produces much better compatibility between XAPI, Mirage OS and Linux distributions going forward

Tech Preview of Nested Virtualization
- Nested virtualization provides virtualized hardware virtualization extensions to HVM guests
- You can now run Xen Project, KVM, VMware or Hyper-V inside of a guest for debugging or deployment testing (only 64-bit hypervisors currently)
- Also allows Windows 7 "XP Compatibility Mode"
- The Tech Preview is not yet ready for production use, but has made significant gains in functionality and reliability
- Enable with "hap=1" and "nestedhvm=1" (see the sketch below)
- More information on nested virtualization: http://wiki.xenproject.org/wiki/xen_nested
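
As a sketch grounded in the two options named above (the remaining values are illustrative), an HVM guest that can itself run a 64-bit hypervisor would be configured roughly as:

    # Sketch of an HVM guest with nested virtualization enabled (Xen 4.4).
    # hap=1 and nestedhvm=1 come from the slide; the other values are examples.
    name      = "nested-hvm"
    builder   = "hvm"
    hap       = 1          # hardware-assisted paging
    nestedhvm = 1          # expose virtualization extensions to the guest
    memory    = 4096
    vcpus     = 4
    disk      = [ 'phy:/dev/vg0/nested-hvm,xvda,w' ]
    vif       = [ 'bridge=xenbr0' ]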

Experimental Support for Guest EFI Boot
- EFI is the new booting standard that is replacing BIOS
- Some operating systems only boot with EFI
- Some features, like SecureBoot, only work with EFI
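
Guest EFI boot is provided through the OVMF firmware; as an assumption (the option name is not on the slide, so verify it against xl.cfg for your build), an HVM guest would select it roughly like this:

    # Sketch: boot an HVM guest with EFI (OVMF) firmware instead of BIOS.
    # The bios= value is an assumption; Xen must be built with OVMF support.
    builder = "hvm"
    bios    = "ovmf"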

Improved Integration With GlusterFS
- You can find a blog post about setting up an iSCSI target on the Gluster blog: http://www.gluster.org/2013/11/a-gluster-block-interface-performance-and-configuration/

Improved ARM Support
A number of new features have been implemented:
- 64-bit Xen on ARM now supports booting guests
- Physical disk partitions and LVM volumes can now be used to store guest images using xen-blkback (the PV block backend)
- Significant stability improvements across the board
- ARM/multiboot booting protocol design and implementation
- PSCI support

Improved ARM Support (2)
- Some DMA in dom0 is possible even with no hardware IOMMU
- The ARM and ARM64 ABIs are declared stable and will be maintained for backwards compatibility
- Significant usability improvements, such as automatic creation of guest device trees and improved handling of host DTBs

Improved ARM Support (3)
- Adding new hardware platforms to Xen Project on ARM has been vastly improved, making it easier for hardware vendors and embedded vendors to port to their boards
- Added support for the Arndale board, Calxeda ECX-2000 (aka Midway), Applied Micro X-Gene Storm, TI OMAP5 and Allwinner A20/A31 boards
- ARM server-class hardware (Calxeda Midway) has been introduced into the Xen Project OSSTest automated testing framework

Early Microcode Loading
- The hypervisor can now update CPU microcode in the early phase of boot
- The microcode binary blob can be provided either as a standalone multiboot payload or as part of the initial ramdisk (initrd) of the initial (dom0) kernel
- To take advantage of this, use the latest version of dracut with the --early-microcode parameter and specify ucode=scan on the Xen Project command line (see the sketch below)
- For details, see the dracut manpage and http://xenbits.xenproject.org/docs/unstable/misc/xen-command-line.html
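
Putting the two steps from the slide together, a hedged example (the paths and the GRUB entry layout are illustrative):

    # 1) Rebuild the dom0 initramfs with an early-microcode image prepended:
    dracut --early-microcode --force /boot/initramfs-$(uname -r).img $(uname -r)

    # 2) Have the hypervisor scan the initrd for the microcode blob by adding
    #    ucode=scan to the Xen command line, e.g. in the GRUB entry:
    #       multiboot /boot/xen.gz ucode=scan ...
    #       module    /boot/vmlinuz-<version> ...
    #       module    /boot/initramfs-<version>.img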

Xen Project Futures

More Fun to Come
- Xen Automotive: Xen Project in the entertainment center of your car?
- XenGT: virtualized GPU support
- Even more ARM support: on your server, in your phone, wherever
- PVH stability and performance: the new hypervisor mode will get harder and faster, with Domain 0 support and AMD support

And Still More Fun to Come
- Native support of the VMware VMDK format
- Better distribution integration (CentOS, Ubuntu)
- Improvements in NUMA performance and support
- Additional libvirt support
- Automated testing system: http://blog.xenproject.org/index.php/2014/02/21/xen-project-automatic-testing-on-community-infrastructure/
- General performance enhancements

Questions? Russell.Pavlicek@XenProject.org Twitter: @RCPavlicek