TA7750 Understanding Virtualization Memory Management Concepts. Kit Colbert, Principal Engineer, VMware, Inc. Fei Guo, Sr. MTS, VMware, Inc.

TA7750 Understanding Virtualization Memory Management Concepts Kit Colbert, Principal Engineer, VMware, Inc. Fei Guo, Sr. MTS, VMware, Inc.

Disclaimer This session may contain product features that are currently under development. This session/overview of the new technology represents no commitment from VMware to deliver these features in any generally available product. Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind. Technical feasibility and market demand will affect final delivery. Pricing and packaging for any new technologies or features discussed or presented have not been determined. THESE FEATURES ARE REPRESENTATIVE OF FEATURE AREAS UNDER DEVELOPMENT. FEATURE COMMITMENTS ARE SUBJECT TO CHANGE, AND MUST NOT BE INCLUDED IN CONTRACTS, PURCHASE ORDERS, OR SALES AGREEMENTS OF ANY KIND. TECHNICAL FEASIBILITY AND MARKET DEMAND WILL AFFECT FINAL DELIVERY. 2

Agenda Motivation Concepts Statistics Best practices Summary Q & A 3

4 Motivation

Lots of Questions How to understand differences between memory metrics? Consumed vs. granted, shared vs. shared common? What memory metrics are important? Which ones should I be monitoring? What memory metrics should I use to determine if an ESX host is at its memory capacity? How do I tell if a VM is suffering due to memory contention? 5

6 Concepts

Discussion of Concepts Focus on high-level concepts, not implementation details Simpler and easier to understand Keep the fundamental ideas and drop the unimportant ones Other presentations may present this information differently May discuss more implementation than concept That's OK, different viewpoints on the same thing! 7

Virtual Memory Creates a uniform memory address space Operating system maps application virtual addresses to physical addresses Gives the operating system memory management abilities that are transparent to the application Hypervisor adds an extra level of indirection Maps the guest's physical addresses to machine addresses Gives the hypervisor memory management abilities that are transparent to the guest (memory levels: virtual memory, guest physical memory, machine memory) 8
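
To make the two levels of indirection concrete, here is a minimal Python sketch (not how ESX implements it) in which plain dictionaries stand in for the guest's and the hypervisor's page tables; the page numbers and addresses are made up for illustration:

```python
# Two levels of translation: the guest OS maps virtual pages to guest physical
# pages, and the hypervisor maps guest physical pages to machine pages.
# Neither layer is aware of the other's tables.
PAGE_SIZE = 4096

guest_page_table = {0: 7, 1: 3, 2: 9}            # virtual page -> guest physical page
hypervisor_page_table = {7: 120, 3: 45, 9: 200}  # guest physical page -> machine page

def translate(virtual_addr):
    """Resolve an application virtual address all the way to a machine address."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    gppn = guest_page_table[vpn]       # guest OS translation (transparent to the app)
    mpn = hypervisor_page_table[gppn]  # hypervisor translation (transparent to the guest)
    return mpn * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # virtual page 1 -> guest physical page 3 -> machine page 45
```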

Virtual Memory (diagram): the application's virtual memory is mapped by the operating system onto guest physical memory, which the hypervisor in turn maps onto machine memory. 9

Application Memory Management Starts with no memory Allocates memory through syscalls to the operating system Often frees memory voluntarily through syscalls Explicit memory allocation interface with the operating system 10

Operating System Memory Management Assumes it owns all physical memory No memory allocation interface with hardware Does not explicitly allocate or free physical memory Defines the semantics of allocated and free memory Maintains free and allocated lists of physical memory Memory is free or allocated depending on which list it resides on 11

How Allocated and Free Lists Work (diagram: allocated list, free list) Memory becomes allocated or free based on which list has a pointer to it 12
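
A toy version of the allocated/free list idea, assuming memory is tracked in fixed-size pages; at this level "allocating" or "freeing" a page is nothing more than moving its entry between the two lists, which is why the data in a freed page is left untouched:

```python
# The OS tracks every physical page on exactly one of two lists. A page is
# "allocated" or "free" purely by which list points to it; its contents are
# not cleared or handed back to anyone when it moves between lists.
free_list = list(range(16))   # page numbers 0..15 start out free
allocated_list = []

def allocate_page():
    page = free_list.pop()
    allocated_list.append(page)
    return page

def free_page(page):
    allocated_list.remove(page)
    free_list.append(page)    # data in the page stays where it is

p = allocate_page()
free_page(p)                  # the hypervisor never sees this transition
```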

VM Memory Allocation VM starts with no physical memory allocated to it Physical memory is allocated on demand Guest OS will not explicitly allocate it Allocated on first VM access to memory (read or write) 13

VM Memory Reclamation Guest physical memory is not freed in the typical sense Guest OS moves memory to its free list Data in freed memory may not have been modified Hypervisor isn't aware when the guest frees memory Freed memory state unchanged No access to the guest's free list Unsure when to reclaim freed guest memory 14

VM Memory Reclamation Cont'd Guest OS (inside the VM): allocates and frees, and allocates and frees, and allocates and frees VM (as seen from outside): allocates, and allocates, and allocates Hypervisor can't reclaim memory through guest frees! 15

What to Do About VM Memory Reclamation? 16

What to Do? Nothing: Hypervisor is unable to reclaim VM memory All VMs must be pre-allocated their configured memory size May lead to inefficient use of physical RAM Something: Hypervisor uses special techniques to reclaim VM memory VM memory doesn't need to be pre-allocated Much more efficient use of physical RAM Let's explore these techniques 17

18 VM Memory Reclamation Techniques

Transparent Page Sharing (TPS) Simple idea: why maintain many copies of the same thing? If 4 Windows VMs are running, there are 4 copies of Windows code Only one copy is needed Share memory between VMs when possible Background hypervisor thread identifies identical sets of memory Points all VMs at one set of memory, frees the others VMs are unaware of the change 19
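
A rough sketch of how a background sharing scan can work: hash page contents, and when two pages hash (and then compare) equal, remap them to a single machine page and free the duplicates. The structures and names below are invented for illustration and are not the ESX implementation, which also marks shared pages copy-on-write:

```python
import hashlib
from collections import defaultdict

machine_pages = {0: b"windows-code", 1: b"windows-code", 2: b"app-data"}  # mpn -> contents
mappings = {("vm1", 10): 0, ("vm2", 10): 1, ("vm1", 11): 2}               # (vm, gppn) -> mpn

def share_identical_pages():
    """One pass of a TPS-like scan: collapse identical machine pages."""
    by_hash = defaultdict(list)
    for mpn, data in machine_pages.items():
        by_hash[hashlib.sha1(data).hexdigest()].append(mpn)
    freed = 0
    for candidates in by_hash.values():
        keep, *dupes = candidates
        for dup in dupes:
            if machine_pages[dup] == machine_pages[keep]:  # full compare guards against collisions
                for key, mpn in mappings.items():
                    if mpn == dup:
                        mappings[key] = keep               # VMs now share one read-only copy
                del machine_pages[dup]                     # duplicate machine page reclaimed
                freed += 1
    return freed

print(share_identical_pages(), "machine page(s) freed")    # -> 1 machine page(s) freed
```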

Ballooning Hypervisor wants to reclaim memory Guest OS is not aware of this Thinks it owns all physical memory Sits inside its own box, unaware it's running in a VM or that other VMs are running Goal: make the guest aware so it frees up some of its memory Solution: artificially create memory pressure inside the VM Push memory pressure from the hypervisor into the VM Use a balloon driver inside the VM to create memory pressure 20

Inflating Balloon 1. Balloon driver allocates memory 2. Balloon driver pins allocated memory 3. Guest may reclaim other memory 4. Balloon driver tells hypervisor what memory it allocated 5. Hypervisor frees machine memory backing the memory allocated by the balloon driver 6. Hypervisor now has more free physical memory (diagram: guest sees 36 pieces of memory allocated; hypervisor sees 85 pieces allocated) 21
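
The six steps above, written out as a small sketch. The GuestOS, BalloonDriver, and Hypervisor classes and the balloon_notify call are invented for illustration; the real balloon driver (part of VMware Tools) talks to the hypervisor over a private channel rather than a Python method call:

```python
class GuestOS:
    """Stand-in for the guest kernel's page allocator."""
    def __init__(self, n_pages):
        self.free = list(range(n_pages))
    def allocate_page(self):
        return self.free.pop()
    def pin(self, gppn):
        pass  # a real OS would lock the page so it cannot be paged out or reused

class Hypervisor:
    def __init__(self, backing):
        self.backing = dict(backing)      # gppn -> machine page backing this VM
        self.free_machine_pages = 0
    def balloon_notify(self, gppns):
        for gppn in gppns:                # steps 5-6: free the machine memory behind ballooned pages
            if self.backing.pop(gppn, None) is not None:
                self.free_machine_pages += 1

class BalloonDriver:
    def __init__(self, guest_os, hypervisor):
        self.guest_os, self.hypervisor, self.pinned = guest_os, hypervisor, []
    def inflate(self, n_pages):
        for _ in range(n_pages):
            gppn = self.guest_os.allocate_page()   # step 1: allocate through the normal guest interface
            self.guest_os.pin(gppn)                # step 2: pin it so the guest won't touch it
            self.pinned.append(gppn)
        # step 3 is a side effect: the guest may have reclaimed other memory to satisfy the allocations
        self.hypervisor.balloon_notify(self.pinned)  # step 4: report the allocated pages

hv = Hypervisor({g: 100 + g for g in range(8)})    # hypothetical VM with 8 backed pages
BalloonDriver(GuestOS(8), hv).inflate(3)
print(hv.free_machine_pages)                       # -> 3
```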

Inflating Balloon Cont'd Why can the hypervisor reclaim ballooned memory? What in the VM assumes the memory has a specific value? 1. The balloon driver doesn't, because of a contract with the hypervisor 2. The OS doesn't, because the memory is allocated to an app (the balloon driver) 3. Nothing in the VM relies on the memory's value for correctness Hypervisor can safely reclaim ballooned memory because the VM does not rely on a particular value for that memory. 22

Inflating Balloon Cont'd Guest OS swapping is a possible side effect of ballooning Two possibilities for guest free memory: 1. VM has lots of free memory: no swapping necessary 2. VM doesn't have much free memory: the guest needs to swap Guest OS chooses whether to swap or not! 23

Swapping Hypervisor can swap VM memory Swaps out to a per-VM swap file Transparent to the VM Hypervisor swapping is a last resort Both TPS and ballooning are preferred Low overhead for TPS Guest swapping due to ballooning performs better than hypervisor swapping But both take time Hypervisor swapping quickly reclaims memory, but is more expensive overall 24

Compression (new in vSphere 4.1!) Another simple idea: attempt to compress memory to save space When needing to reclaim memory through swapping, try to compress first If memory compresses well (by at least 50%), keep the compressed data If it doesn't compress well, fall back to swapping Decompression is up to 100x faster than swap-in! Compression is only used when memory would otherwise be swapped Thus it's not like TPS, which runs all the time 25
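
A sketch of the compress-or-swap decision, using zlib as a stand-in compressor and the 50% threshold from the slide; the compression_cache and swap_file dictionaries are illustrative placeholders for the per-VM compression cache and swap file:

```python
import os
import zlib

compression_cache = {}   # gppn -> compressed bytes kept in memory
swap_file = {}           # gppn -> raw bytes written to the per-VM swap file

def reclaim_page(gppn, data):
    """Called only when this page would otherwise be swapped out by the hypervisor."""
    compressed = zlib.compress(data)
    if len(compressed) <= len(data) // 2:      # compresses by at least 50%: keep it compressed
        compression_cache[gppn] = compressed   # later access decompresses, far cheaper than swap-in
    else:
        swap_file[gppn] = data                 # poor compression: fall back to swapping

reclaim_page(1, b"A" * 4096)                   # highly compressible -> compression cache
reclaim_page(2, os.urandom(4096))              # random data barely compresses -> swap file
print(sorted(compression_cache), sorted(swap_file))   # -> [1] [2]
```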

26 When to Reclaim Memory

Quick Review Physical memory is not reclaimed when the guest frees it A VM may accrue lots of physical memory Transparent page sharing is always running May or may not help reduce a VM's physical memory consumption Ballooning, compression, or swapping are the only other ways to reclaim memory All have performance overhead Want to use them only when necessary Question: when to invoke ballooning or swapping? 27

Memory Overcommitment (diagram: VM 1, VM 2, VM 3 on one hypervisor) Aggregate guest memory is greater than host memory Not enough physical memory to satisfy all VM memory needs Eventually a VM may want physical memory when none is left 28

When To Reclaim Memory* Not memory overcommitted: never reclaim memory from the guest through ballooning or swapping No need to! (Transparent page sharing is always running) Memory overcommitted: only reclaim once free physical memory drops below a threshold Start ballooning when free memory starts to fall toward the threshold Start swapping as free memory nears/passes the threshold * Assumes no VM or resource pool memory limits are set! 29
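
That policy can be summarized in a few lines; the percentages below are invented for illustration (the actual ESX free-memory thresholds and states differ and are version-specific):

```python
def choose_reclamation(host_free_pct, overcommitted):
    """Pick a reclamation technique from how close free host memory is to the threshold."""
    if not overcommitted or host_free_pct > 8:   # plenty of free memory (illustrative value)
        return "none"                            # page sharing alone keeps running
    if host_free_pct > 4:                        # falling toward the threshold: balloon first
        return "balloon"
    return "swap"                                # at or past the threshold: hypervisor swapping

print(choose_reclamation(20, overcommitted=True))   # -> none
print(choose_reclamation(6, overcommitted=True))    # -> balloon
print(choose_reclamation(2, overcommitted=True))    # -> swap
```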

30 Statistics

Differences Between Memory Statistics Biggest difference is physical memory vs. machine memory Accounting is very different between the two layers! Physical memory statistics: Active, Balloon, Granted, Shared, Swapped, Usage Machine memory statistics: Consumed, Overhead, Shared Common 31

Memory Shared vs. Shared Common Memory Shared: amount of physical memory whose mapped machine memory has multiple pieces of physical memory mapped to it (in the diagram, 6 pieces of memory across VM 1 & 2) Memory Shared Common: amount of machine memory with multiple pieces of physical memory mapped to it (3 pieces of memory) 32

Memory Granted vs. Consumed Memory Granted: amount of physical memory mapped to machine memory (in the diagram, 9 pieces of memory across VM 1 & 2) Memory Consumed: amount of machine memory that has physical memory mapped to it (6 pieces of memory) Difference is due to page sharing! 33
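
The two definitions above (and the shared vs. shared common pair on the previous slide) can be condensed into a few lines: given each VM's mapping of guest physical pages to machine pages, granted and shared count guest physical pages, while consumed and shared common count machine pages. The mapping below is a made-up example, not the one in the diagrams:

```python
from collections import Counter

mapping = {                      # (vm, guest physical page) -> machine page
    ("vm1", 0): 100, ("vm1", 1): 101, ("vm1", 2): 102,
    ("vm2", 0): 100, ("vm2", 1): 101, ("vm2", 2): 103,
}
refs = Counter(mapping.values())  # machine page -> number of guest pages mapped to it

granted = len(mapping)                                         # guest physical pages backed by machine memory
consumed = len(refs)                                           # machine pages actually in use
shared = sum(1 for mpn in mapping.values() if refs[mpn] > 1)   # guest-side view of sharing
shared_common = sum(1 for n in refs.values() if n > 1)         # machine-side view of sharing

print(granted, consumed, shared, shared_common)   # -> 6 4 4 2 (granted - consumed = sharing savings)
```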

Memory Active vs. Host Memory Memory Active/Granted/Shared: all measure physical memory Host Memory: total machine memory on the host Be careful not to mismatch physical and machine statistics! Guest physical memory can/will be greater than machine memory due to memory overcommitment and page sharing 34

Memory Metric Diagram* (figure, not to scale): shows how VM stats (memsize, vmmemctl/ballooned, swapped, zipped, zipsaved, granted, active, active write, shared, shared savings, overhead, consumed) partition guest physical and host physical memory, and how host stats (sysusage, reserved, unreserved, consumed, shared common, service console, clusterservices.effectivemem aggregated over all hosts in the cluster) partition host physical memory; unallocated regions and memory used by other VMs have no stat. 35

Important VM Memory Statistics mem.consumed How much machine memory is allocated to the VM esxtop: SZTGT How much machine memory the VM is entitled to use mem.active Estimate of how much guest physical memory the VM is actively using mem.swapinrate How much memory is being swapped in for the VM (by the hypervisor) 36

Important VM Memory Statistics Cont'd cpu.swapwait How much time the VM is blocked waiting for memory to be swapped in (by the hypervisor) The larger the number, the larger the impact on VM performance How to use If cpu.swapwait or mem.swapinrate is consistently greater than zero or higher than normal Is mem.active greater than normal? If yes, there is a VM workload spike Is SZTGT lower than normal? If yes, there is contention with other VMs May need to increase the reservation or move the VM off the host 37
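
That checklist can be written as a small helper; the stat names follow the slide, but the stats/baseline dictionaries and the exact comparisons are hypothetical, meant only to show the order of the checks:

```python
def diagnose_vm_memory(stats, baseline):
    """Apply the slide's checklist to one VM's current stats vs. its normal baseline."""
    swapping = (stats["mem.swapinrate"] > 0
                or stats["cpu.swapwait"] > baseline["cpu.swapwait"])
    if not swapping:
        return "no hypervisor swapping impact"
    if stats["mem.active"] > baseline["mem.active"]:
        return "VM workload spike: active memory above normal"
    if stats["SZTGT"] < baseline["SZTGT"]:
        return "contention with other VMs: consider raising the reservation or moving the VM"
    return "hypervisor swapping present: investigate further"

current = {"mem.swapinrate": 12, "cpu.swapwait": 300, "mem.active": 900, "SZTGT": 1024}
normal = {"cpu.swapwait": 0, "mem.active": 1500, "SZTGT": 2048}
print(diagnose_vm_memory(current, normal))   # -> contention with other VMs: ...
```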

Important VM Memory Statistics Cont'd virtualdisk.read* & virtualdisk.write* For the virtual disk that has the VM's swap partition The larger the numbers, the more in-guest swapping is happening and the larger the impact on VM performance How to use Put the guest's swap file/partition on a separate vdisk If virtualdisk.read/write is high or higher than normal, check the guest applications and/or inform the VM owner May need to increase VM memory size or increase VM memory reservation if in-guest swapping is due to ballooning * Only available in vSphere 4.1 38

Important Host Memory Statistics mem.consumed How much machine memory is allocated to running VMs and system services Derived stat: machine active How much total machine memory is actively used by running VMs mem.active = sum of mem.active over all VMs on the host machine active = mem.active * mem.sharedcommon / mem.shared mem.reservedcapacity How much machine memory has been reserved for use by VMs or system services Important to always leave a little extra memory unreserved for VM overhead memory growth 39
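
The derived "machine active" stat as one function, with the counters named as on the slide; the sample numbers are invented:

```python
def machine_active(mem_active, mem_sharedcommon, mem_shared):
    """Estimate machine memory actively used by all running VMs on a host.

    mem_active is the sum of mem.active over all VMs; per the slide's formula,
    scaling by mem.sharedcommon / mem.shared discounts guest pages that share
    a single machine page.
    """
    if mem_shared == 0:
        return mem_active                # no page sharing: 1:1 mapping
    return mem_active * mem_sharedcommon / mem_shared

print(machine_active(mem_active=24_576, mem_sharedcommon=2_048, mem_shared=6_144))  # -> 8192.0 (MB)
```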

Important Host Memory Statistics Cont'd mem.swapinrate How much memory is being swapped in for all VMs (by the hypervisor) How to use If mem.swapinrate is consistently greater than zero or higher than normal, more than likely the aggregate working set of all VMs is too high for the amount of host machine memory In those cases, you should VMotion some VMs off the host 40

Balloon Statistics mem.vmmemctl [VM, host, resource pool, cluster] Amount of memory ballooned (i.e. reclaimed) from the guest mem.vmmemctltarget [VM] Desired amount of memory to be ballooned from the guest virtualdisk.read* & virtualdisk.write* [VM] Amount of I/O traffic to a vdisk If that vdisk contains only the guest's swap file/partition, then it's the amount of swap activity by the guest * Only available in vSphere 4.1 41

Swap Statistics mem.swapped [VM] Amount of memory currently swapped out for the VM mem.swaptarget [VM] Desired amount of memory to be swapped from the VM mem.swapin & mem.swapout [VM, host] Total amount of memory swapped in/out to/from the VM (or all VMs on the host) since the VM was last powered on or VMotioned mem.swapinrate & mem.swapoutrate [VM, host] Current amount of memory actively being swapped in/out cpu.swapwait [VM] Amount of time the VM is blocked waiting for memory to be swapped in 42

Compression Statistics mem.compressed* [VM, host] Amount of memory currently compressed for the VM or host mem.compressionrate* & mem.decompressionrate* [VM, host] Rate of memory compression/decompression for the VM or host mem.zipped* [VM] Amount of memory currently compressed for the VM mem.zipsaved* [VM] Amount of memory saved through compression for the VM * Only available in vSphere 4.1 43

44 Best Practices

VM Memory Best Practices TPS always running Do I enable ballooning or just rely on hypervisor swapping? 45

Memory Terminology memory size: total amount of memory presented to a guest allocated memory: memory assigned to applications free memory: memory not assigned active memory: allocated memory recently accessed or used by applications idle memory: allocated memory not recently accessed or used 46

Memory Reclamation with Ballooning (diagram: free, idle, and active memory) Ballooning preferentially selects free or idle VM memory rather than active memory Because the guest OS allocates from free memory 47

Memory Reclamation with ESX Swapping (diagram: free, idle, and active memory) Swapping randomly selects VM memory to reclaim, potentially including a VM's active memory 48

Ballooning vs. Swapping Performance (chart): Swingbench with 4 GB of VM memory; normalized throughput vs. reclaimed memory (0 to 2304 MB) for balloon-only and swapping-only reclamation. Takeaway: for memory reclamation, ballooning generally outperforms swapping. 49

Ballooning Best Practices Install VMware Tools and enable ballooning on ALL VMs Memory reclamation through ballooning is much better for performance than through swapping Provide sufficient swap space inside the guest Guest must have enough space to swap as a result of ballooning Rule of thumb: the guest OS's swap file should be at least as large as the VM's configured memory size Tip: be sure to update guest swap space when changing VM memory size! Place the guest's swap file/partition on a separate vdisk Allows you to monitor guest swap activity through virtualdisk stats 50

Memory Reclamation with Ballooning Cont'd (diagram: free, idle, and active memory) Ballooning preferentially selects free or idle VM memory rather than active memory BUT if asked to reclaim too much, ballooning will eventually start reclaiming active memory! 51

Ballooning vs. Swapping Performance Cont'd (the same chart, repeated on slides 52 through 54): Swingbench with 4 GB of VM memory; normalized throughput vs. reclaimed memory (MB) for balloon-only and swapping-only reclamation. Takeaway: to maximize VM performance, it's vital to keep the VM's active memory in physical RAM!

VM Memory Sizing Best Practices Setting the right VM memory size VM memory size should be larger than the VM's highest level of active memory during peak loads Ensures the guest OS does not swap heavily Setting the right VM memory reservation VM memory reservation should be set slightly above the average VM active memory size Ensures ESX does not balloon or swap the VM's active memory, maximizing performance Tip: to track a VM's average and maximum active memory usage, use the charts in the Performance tab Tip: use CapacityIQ to help determine VM memory size 55

Host Memory Best Practices Q: How many VMs can I put on a host? A: As many as will fit. :) A2: As many whose active memory will fit in physical RAM, while leaving some room for memory spikes. 56

Memory Overcommit Redux Two types of memory overcommitment Configured memory overcommitment = (sum of VMs' configured memory) / (host memory available for VMs) This is what is usually meant by memory overcommitment Active memory overcommitment = (sum of VMs' machine active memory) / (host memory available for VMs) Equations Sum of VMs' machine active memory (computed for a host): mem.active * mem.sharedcommon / mem.shared, using stats from a host with powered-on VMs Host memory available for VMs: total host memory - mem.sysusage (roughly total host memory * 0.06), using stats from a host without any powered-on VMs 57
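
The two ratios spelled out as code; the VM and host numbers are invented, and the 6% figure is the slide's rule-of-thumb approximation for mem.sysusage when it has not been measured:

```python
def overcommit_ratios(vm_configured_mb, vm_active_mb, host_total_mb, host_sysusage_mb=None):
    """Compute configured and active memory overcommitment for one host."""
    if host_sysusage_mb is None:
        host_sysusage_mb = host_total_mb * 0.06        # slide's rough approximation
    available = host_total_mb - host_sysusage_mb       # host memory available for VMs
    configured = sum(vm_configured_mb) / available
    active = sum(vm_active_mb) / available             # vm_active_mb: per-VM machine active memory
    return configured, active

# Hypothetical host: 96 GB RAM, ten 12 GB VMs, each actively touching about 3 GB of machine memory.
cfg, act = overcommit_ratios([12_288] * 10, [3_072] * 10, host_total_mb=98_304)
print(round(cfg, 2), round(act, 2))   # -> 1.33 0.33 (configured > 1 is fine; active stays well under 1)
```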

Memory Overcommit Redux Cont'd Impact of overcommitment Configured memory overcommitment > 1 = zero to negligible VM performance degradation Active memory overcommitment >= 1 = very high likelihood of VM performance degradation! 58

Configured Memory Overcommitment (diagram: VM 1, VM 2, VM 3, each with free, idle, and active memory) Parts of idle and free memory are not in physical RAM All VMs' active memory stays resident in physical RAM, allowing for maximum VM performance 59

Active Memory Overcommitment (diagram: VM 1, VM 2, VM 3, active memory only) No idle or free memory left in physical RAM Some VM active memory is not in physical RAM, which will lead to VM performance degradation! 60

Memory Overcommitment Takeaways Configured Memory Overcommitment > 1 = Good You're maximizing the value of physical RAM by keeping VM active memory resident in it and ballooning away idle and free memory Active Memory Overcommitment >= 1 = Not recommended You may have pushed overcommitment too far and now VM performance may suffer because of it Monitor overcommitment by employing the equations defined on the previous slide If you hit this state, you should VMotion some VMs off the host! Remember: it is vital to keep active memory in physical RAM for best performance! 61

A Note on Compression Performance (chart): Swingbench with 16 VMs and 80 GB total VM memory, page sharing disabled; normalized throughput vs. host memory size (96, 80, 70, 60, 50 GB) with memory compression enabled (MemZip) and disabled (No-MemZip). Both configurations stay near 1.0 at 96 and 80 GB, then diverge sharply as the host becomes more overcommitted (roughly 0.80 and 0.70 vs. 0.66 and 0.42 normalized throughput at 60 and 50 GB). 62

A Note on Compression Performance Cont'd Compression performs much better than swapping Especially on heavily memory overcommitted hosts Impact of high active memory overcommitment is lowered Compression can quickly resolve memory contention This allows you to increase configured memory overcommitment and thus get closer to active memory overcommitment = 1 63

Avoid Active Memory Overcommitment Q: How many VMs can I put on a host? Q: How do you ensure you don't hit active memory overcommitment >= 1? A (for both): Use VM memory reservations! By setting VM memory reservations to each VM's average active memory, you ensure the host won't become overcommitted on active memory DRS and HA also adhere to VM memory reservation requirements, ensuring DRS-invoked VMotions and HA relocations won't impact VM performance Tip: be sure to leave a little extra unreserved memory on each host to accommodate memory usage spikes Tip: use CapacityIQ to help estimate the desired reservation 64

65 Summary

Concepts Summary Guest doesn't free memory in the typical sense Hypervisor isn't aware of memory a guest has freed Hypervisor uses transparent page sharing, ballooning, compression, and swapping to reclaim memory Ballooning, compression, and swapping are used only when the host is memory overcommitted! 66

Statistics Summary Difference between statistics that measure guest physical memory and statistics that measure machine memory Be careful if you try to compare them! Useful VM stats mem.consumed, SZTGT (esxtop), mem.active, mem.swapinrate, cpu.swapwait, virtualdisk.read, virtualdisk.write Useful host stats mem.consumed, mem.reservedcapacity, mem.swapinrate, machine active (derived: mem.active * mem.sharedcommon / mem.shared) 67

Best Practices Summary VM Memory Install VMware Tools and enable ballooning on ALL VMs Provide sufficient swap space inside the guest Place the guest's swap file/partition on a separate vdisk VM memory size should be larger than the VM's highest level of active memory during peak loads VM memory reservation should be set slightly above the average VM active memory size Host Memory Keep active memory overcommitment high, but under 1 Use VM memory reservations to enforce this Configured memory overcommitment > 1 is OK though! 68

Plan Your Upgrade to ESXi Today Future-proof your VMware deployments The VMware ESXi architecture will be the only hypervisor in future vSphere releases after vSphere 4.1 For more information: Visit the ESXi and ESX Info Center: www.vmware.com/go/esxiinfocenter Read VMware ESXi: Planning, Implementation, and Security by Dave Mishchenko (release date: September 2010, list price $49.99) Register for VMware training "Transitioning to ESXi": www.vmware.com/go/esxi/education 69

70 Q&A

72 Backup Slides

Why Not Just Watch the Free List? Idea Hypervisor could identify the memory used by the guest for its free list Watch that memory area for changes and free the physical memory that backs guest memory added to the free list Problem Free list may be hard to identify Depends on guest type and may change from release to release Free is not always free Free memory may be used in the buffer cache or elsewhere by the guest Guest may not free memory when the hypervisor wants it to 73

Why Is Memory Overcommitment Important? Ensures physical memory is actively used as much as possible Guest VMs may have lots of unused or inactive memory No reason to back one VM's unused/inactive memory with physical memory if other VMs can use it Hypervisor can reclaim unused/inactive memory and redistribute it to other VMs that will actively use it Increases the VM-to-host consolidation ratio Each VM has a smaller physical memory footprint since only its active memory is resident You can fit more VMs in the same amount of physical memory 74

Measuring Active Guest Memory Guest self-measurement won't work Each guest uses a different method to estimate active memory Can't compare activeness data from a Windows guest with a Linux guest A comparable estimate is important for making allocation decisions Statistical sampling Select a subset of memory at random Compute the percentage of pages accessed in a minute The average percentage gives an estimate of active guest memory 75
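
A sketch of the statistical sampling idea: pick pages at random, watch them for a sampling period, and average the fraction that was touched. The touched() callback is a stand-in for how the hypervisor really detects accesses (for example by temporarily invalidating mappings or reading access bits):

```python
import random

def estimate_active_fraction(total_pages, sample_size, touched, periods=3):
    """Estimate the fraction of guest memory that is active."""
    estimates = []
    for _ in range(periods):
        sample = random.sample(range(total_pages), sample_size)   # random subset of guest pages
        accessed = sum(1 for page in sample if touched(page))     # pages accessed during the period
        estimates.append(accessed / sample_size)
    return sum(estimates) / len(estimates)                        # averaging smooths out noise

# Hypothetical guest where roughly a quarter of the pages form the working set.
working_set = set(range(0, 262_144, 4))
print(round(estimate_active_fraction(262_144, sample_size=100, touched=lambda p: p in working_set), 2))
```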

Host/Guest Memory Usage Different Inside Guest Why is host/guest memory usage different from what I see inside the guest OS? Guest memory The guest has better visibility while estimating active memory The ESX active memory estimation technique can take time to converge Host memory Host memory usage doesn't correspond to any memory metric within the guest Host memory usage is based on a VM's relative priority on the physical host and memory usage by the guest Again, this is expected! 76