The Power of PowerVM: Power Systems Virtualization
Eyal Rubinstein, deyal@il.ibm.com
The Power Equation: Power = i + p

- System i: i515, i525, i550, i570, i595
- System p: p5-520, p5-550, p5-570, p5-575, p5-595
- IBM BladeCenter: BladeCenter JS12/22/23 Express, BladeCenter JS43 Express
- IBM Power Systems: Power 520 Express, Power 550 Express, Power 560 Express, Power 570, Power 575, Power 595
IBM Power System Blades

- IBM JS12: blade; POWER6; 2 cores at 3.8 GHz; no L3 cache; 4 to 64 GB DDR2 memory; 73 GB to 600 GB internal storage; maximum rperf 14.71; 1 PCIe and 1 PCI-X slot; I/O drawers N/A; up to 40 micro-partitions(1); IBM i 5.4 & 6.1; AIX 5.3, 6.1; RHEL 4.6/5.1, SLES 10/11
- IBM JS22: blade; POWER6; 4 cores at 4.0 GHz; no L3 cache; 4 to 32 GB DDR2 memory; 73 GB to 300 GB internal storage; maximum rperf 30.26; 1 PCIe and 1 PCI-X slot; I/O drawers N/A; up to 40 micro-partitions(1); IBM i 5.4 & 6.1; AIX 5.3, 6.1; RHEL 4.6/5.1, SLES 10/11
- IBM JS23: blade; POWER6+; 4 cores at 4.2 GHz; 32 MB L3 cache; 4 to 64 GB DDR2 memory; 69 GB to 300 GB internal storage; maximum rperf 36.28; 2 PCIe slots; I/O drawers N/A; up to 40 micro-partitions(1); IBM i 6.1; AIX 5.3, 6.1; RHEL 4.6/5.1, SLES 10/11
- IBM JS43: blade; POWER6+; 8 cores at 4.2 GHz; 32 MB L3 cache; 8 to 128 GB DDR2 memory; 69 GB to 600 GB internal storage; maximum rperf 68.2; 4 PCIe slots; I/O drawers N/A; up to 80 micro-partitions(1); IBM i 6.1; AIX 5.3, 6.1; RHEL 4.6/5.1, SLES 10/11

(1) Requires purchase of an optional feature to support micro-partitions.
IBM Power System Servers

- Power 520: 19-inch 4U rack or deskside; POWER6/POWER6+; 1, 2, or 4 cores at 4.2/4.7 GHz; 2 to 64 GB DDR2 memory; 73 GB to 30.6 TB internal storage*; maximum rperf 39.73; 3 to 42 PCIe slots; 0 to 56 PCI-X slots; 2 to 50 PCI-X 266 slots; 2 GX bus slots; max I/O drawers 8 (PCI-X) / 4 (PCIe); up to 40 micro-partitions(1); IBM i 5.4 & 6.1; AIX 5.3, 6.1; RHEL 4.5/5.1, SLES 10/11
- Power 550: 19-inch 4U rack or deskside; POWER6/POWER6+; 2, 4, 6, or 8 cores at 3.5/4.2/5.0 GHz; 2 to 256 GB DDR2 memory; 73 GB to 30.6 TB internal storage*; maximum rperf 78.6; 3 to 42 PCIe slots; 0 to 56 PCI-X slots; 2 to 50 PCI-X 266 slots; 2 GX bus slots; max I/O drawers 8 (PCI-X) / 4 (PCIe); up to 80 micro-partitions(1); IBM i 5.4 & 6.1; AIX 5.3, 6.1; RHEL 4.5/5.1, SLES 10/11
- Power 560: 19-inch 4U rack; POWER6+; 4, 8, or 16 cores at 3.6 GHz; 2 to 384 GB DDR2 memory; 73 GB to 68.4 TB internal storage*; maximum rperf 100.3; 4 to 38 PCIe slots; 0 to 126 PCI-X slots; 2 to 76 PCI-X 266 slots; 2 to 4 GX bus slots; max I/O drawers 12 (12X) / 18 (RIO-2) / 6 (PCIe); up to 160 micro-partitions(1); IBM i 5.4 & 6.1; AIX 5.3, 6.1; RHEL 4.5/5.1, SLES 10/11

(1) Requires purchase of an optional feature to support micro-partitions.
* With maximum I/O drawers.
IBM Power System Servers

- Power 570/16: 19-inch 4U rack; POWER6/POWER6+; 2, 4, 8, 12, or 16 cores at 4.4/5.0 GHz; 2 to 768 GB DDR2 memory; 73 GB to 180 TB internal storage*; maximum rperf 141.21; 4 to 16 PCIe slots; 0 to 140 PCI-X slots; 2 to 200 PCI-X 266 slots; 2 to 8 GX bus slots; max I/O drawers 32 (12X) / 48 (RIO-2) / 16 (PCIe); up to 160 micro-partitions(1); IBM i 5.4 & 6.1; AIX 5.3, 6.1; RHEL 4.5/5.1, SLES 10/11
- Power 570/32: 19-inch 4U rack; POWER6+; 4, 8, 16, 24, or 32 cores at 4.2 GHz; 2 to 768 GB DDR2 memory; 73 GB to 180 TB internal storage*; maximum rperf 193.25; 4 to 16 PCIe slots; 0 to 140 PCI-X slots; 2 to 200 PCI-X 266 slots; 2 to 8 GX bus slots; max I/O drawers 32 (12X) / 48 (RIO-2) / 16 (PCIe); up to 160 micro-partitions(1); IBM i 5.4 & 6.1; AIX 5.3, 6.1; RHEL 4.5/5.1, SLES 10/11
- Power 575: 24-inch frame; POWER6; 32 cores at 4.7 GHz; 32 to 256 GB DDR2 memory; 146.8 GB to 5.1 TB internal storage*; maximum rperf N/A; 0 to 4 PCIe slots; 0 to 20 PCI-X slots; 0 to 16 PCI-X 266 slots; 2 GX bus slots; max 1 I/O drawer; up to 254 micro-partitions(1); IBM i N/A; AIX 5.3, 6.1; RHEL 4.5/5.1, SLES 10/11
- Power 595: 24-inch frame; POWER6; 8 to 64 cores at 4.2/5.0 GHz; 16 GB to 4 TB DDR2 memory; 146.8 GB to 5.1 TB internal storage*; maximum rperf 553; 0 PCIe slots; 0 to 240/180 PCI-X slots; 0 to 420 PCI-X 266 slots; 4 to 32 GX bus slots; max I/O drawers 30 (RIO-2) / 30 (12X) / 30 (PCIe); up to 254 micro-partitions(1); IBM i 5.4 & 6.1; AIX 5.3, 6.1; RHEL 4.5/5.1, SLES 10/11

(1) Requires purchase of an optional feature to support micro-partitions.
* With maximum I/O drawers.
IBM's History of Virtualization Leadership
A 40-year tradition continues with PowerVM

- 1967: IBM develops the hypervisor that would become VM on the mainframe
- 1973: IBM announces the first machines to do physical partitioning
- 1987: IBM announces LPAR on the mainframe
- 1999: IBM announces LPAR on POWER
- 2004: IBM introduces the POWER Hypervisor for System p and System i
- 2007: IBM announces POWER6, the first UNIX servers with Live Partition Mobility
- 2008: IBM announces PowerVM
Optimizing IT with Industrial-Strength Virtualization

- Introduced in 1999
- 100,000s of partitions
- 65% of Power servers*

* Percentage of POWER6 processor-based servers shipped with PowerVM in 2008
PowerVM: Virtualization for Power Systems
Building a foundation for a Dynamic Infrastructure

Industrial-strength virtualization
- Unified offerings for AIX, IBM i, and Linux
- Share processor, memory, and I/O across operating environments

Reduce cost with consolidation
- Reduce hardware, software, and energy footprints with micro-partitioning supporting up to 10 partitions per core

Improve service with virtualization
- Respond to changes in workload demands with automatic movement of processor and memory resources
- Enhance IT infrastructure flexibility with I/O virtualization

Reduce risk with mobility
- Eliminate planned outages and balance workloads across systems with Live Partition Mobility

65% of Power Systems shipped with PowerVM in 2008
Partitioning Evolution

- No partitioning (multiple 4-way servers): 8 servers with 4 cores each; dedicated resources
- POWER4 partitioning: a single 4-core server divided into partition servers
- POWER5/6 partitioning: a single 4-core server with up to 40 partitions; multiple partitions dynamically dispatched on shared CPU resources
Power Systems Virtualization with PowerVM

Micro-Partitioning feature
- Share processors across multiple partitions
- Minimum partition: 1/10th of a core
- 254-partition maximum
- AIX V5.3/6.1, Linux, and IBM i
- Managed via HMC or IVM

Virtual I/O Server
- Shared Ethernet
- Shared SCSI and Fibre Channel-attached disk subsystems

Benefits
- Fewer processors and adapters
- Reduced environmental cost
- Rapid service provisioning

(Diagram: dynamically resizable partitions running AIX V5.3/V6.1, Linux, and IBM i 6.1 connect over virtual I/O paths and a virtual LAN to a VIOS partition on the POWER Hypervisor; the Integrated Virtualization Manager is reached from a Web browser over the network.)
Logical Partitioning Can Reduce Cost
Improve the total cost of IT infrastructure while successfully addressing mounting economic pressures and service-delivery expectations

Power Systems support partitioning
- Core(s) dedicated to partitions
- Up to 64 partitions

PowerVM adds micro-partitioning
- Up to 10 partitions per core
- Granularity of 1/100th of a core
- Up to 254 partitions

Reducing hardware, software, energy, and management costs
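The capacity rules above are simple enough to sketch in a few lines. The snippet below is illustrative only (the function names are made up, not any IBM API): entitlements come in hundredths of a core with a 1/10-core minimum, so a core carries at most 10 partitions, and a system tops out at 254.

```python
# Illustrative sketch of the micro-partitioning limits described above.
# Function names are hypothetical; they model the rules, not an IBM API.

SYSTEM_MAX_PARTITIONS = 254   # PowerVM system-wide partition cap
PARTITIONS_PER_CORE = 10      # minimum entitlement is 1/10 core
GRANULARITY = 0.01            # entitlements are allocated in 1/100-core units

def max_micro_partitions(cores: int) -> int:
    """Upper bound on micro-partitions for a server with `cores` cores."""
    return min(PARTITIONS_PER_CORE * cores, SYSTEM_MAX_PARTITIONS)

def is_valid_entitlement(entitlement: float) -> bool:
    """An entitlement must be a multiple of 1/100 core and at least 1/10 core."""
    hundredths = round(entitlement / GRANULARITY)
    return entitlement >= 0.10 and abs(hundredths * GRANULARITY - entitlement) < 1e-9

print(max_micro_partitions(4))    # a 4-core server supports up to 40 partitions
print(max_micro_partitions(64))   # a large server hits the 254-partition cap
```

This matches the figures elsewhere in the deck: a 4-core POWER5/6 server supports up to 40 partitions, and the largest servers are bounded by the 254-partition system maximum.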
Shared Processor Partitions (Micro-Partitions)

- Share a pool of processors: all licensed, unallocated processors form the shared pool
- Up to 254 micro-partitions, configured via the HMC or IVM
- Entitled capacity in units of 1/100 of a CPU, with a minimum of 1/10 of a CPU
- Capped or uncapped partitions; uncapped partitions receive a variable-weight share (priority) of surplus capacity

(Diagram: dynamic LPARs with whole dedicated processors alongside AIX, Linux, and IBM i micro-partitions sharing a processor pool of 6 CPUs on the hypervisor.)
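How uncapped partitions divide surplus capacity can be illustrated with a toy proportional-share model. This is a sketch of the steady-state outcome under the weight rule described above, not the hypervisor's actual time-sliced dispatch algorithm, and the partition names and function are hypothetical:

```python
def share_surplus(surplus_cores: float, weights: dict) -> dict:
    """Split surplus pool capacity among uncapped partitions in proportion
    to their variable weights (higher weight = higher priority)."""
    total_weight = sum(weights.values())
    return {name: surplus_cores * w / total_weight
            for name, w in weights.items()}

# Two idle cores of surplus, shared by a weight-128 and a weight-64 partition:
extra = share_surplus(2.0, {"web": 128, "batch": 64})
# "web" receives twice the surplus of "batch"
```

A capped partition would simply be excluded from the weight map: it never receives more than its entitled capacity.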
VIOS I/O Virtualization Can Improve Service
Respond quickly and flexibly to business opportunities and customer demands; align physical and IT assets to the business to enable rapid, agile response to changing business circumstances

Power Systems support dedicated I/O
- I/O resources assigned to partitions
- Adapters can be moved between partitions

PowerVM adds I/O virtualization
- Virtual I/O Server (VIOS) enables sharing of I/O resources among partitions
- NPIV support simplifies SAN management*
- Multiple VIOS partitions provide redundancy*

Reduce costs while improving IT infrastructure flexibility

VIOS hosting of IBM i 6.1 partitions requires POWER6 processor-based servers
* Support planned for IBM i
IBM i I/O Virtualization Can Improve Service
Respond quickly and flexibly to business opportunities and customer demands; align physical and IT assets to the business to enable rapid, agile response to changing business circumstances

Power Systems support dedicated I/O
- I/O resources assigned to partitions
- Adapters can be moved between partitions

IBM i supports I/O virtualization
- IBM i can host I/O for i 6.1, AIX, and Linux partitions

Reduce costs while improving IT infrastructure flexibility

IBM i 6.1 hosting of IBM i 6.1 partitions requires POWER6 processor-based servers
Virtual I/O Server (VIOS)

- Allows sharing of network and storage devices
- Physical and virtual resources can be mixed in the same partition
- Vital for shared processor partitions
- Overcomes the potential limit of adapter slots due to the high number of possible micro-partitions
- Allows the creation of logical partitions without the need for additional physical resources
- Allows attachment of previously unsupported solutions in selected OS clients (e.g., Linux, IBM i)

(Diagram: two VIOS partitions, each providing virtual SCSI and virtual Ethernet functions, serve AIX and IBM i client partitions through the POWER Hypervisor and bridge to physical Ethernet adapters.)
Virtual SCSI

- Allows sharing of storage devices
- Vital for shared processor partitions
- Overcomes the potential limit of adapter slots due to Micro-Partitioning
- Allows the creation of logical partitions without the need for additional physical resources

(Diagram: micro-partitions running AIX 5.3, Linux, AIX 6.1, and IBM i reach external storage through the VIOS's shared Fibre Channel and SCSI adapters, over virtual SCSI and virtual LAN connections on the POWER Hypervisor.)
Virtual SCSI Basic Architecture

- Virtual SCSI is based on a client/server relationship: a vSCSI client adapter in the client partition (AIX, Linux, IBM i) pairs, through the POWER Hypervisor, with a vSCSI server adapter and target device in the Virtual I/O Server partition
- Virtual SCSI enables sharing of SCSI and Fibre Channel disk drives as well as optical devices (DVD-ROM and DVD-RAM)
- Virtual disks are defined as physical volumes (PVs) or logical volumes (LVs) in the Virtual I/O Server partition
- Virtual disks appear as generic SCSI disks in the hosted partition
- Virtual optical devices appear as SCSI optical devices in the hosted partition
PowerVM Virtual Tape Support

- Low-function SAS tape devices with a SCSI (SAS) interface; no support for tape robotics
- Only one partition has control of the tape device at a time
- Tape handling (eject, etc.) is provided by the OS of the controlling partition
- Requires VIOS 2.1 shared SCSI
- Operating systems: AIX, IBM i, Linux
NPIV: N_Port ID Virtualization

- N_Port ID Virtualization (NPIV) provides direct Fibre Channel connections from client partitions to SAN resources, simplifying SAN management
- The Fibre Channel host bus adapter is owned by the VIOS partition
- The VIOS Fibre Channel adapter supports multiple World Wide Port Names / source identifiers
- The physical adapter appears as multiple virtual adapters to the SAN / end-point device
- A virtual adapter can be assigned to each of multiple operating systems sharing the physical adapter
- LPARs have direct visibility on the SAN (zoning/masking)
- I/O virtualization configuration effort is dramatically reduced
- Tape library support

* Statement of Direction for IBM i and Linux support
Virtual SCSI Model vs. N_Port ID Virtualization

- Virtual SCSI model (POWER5 or POWER6): the VIOS owns the shared FC adapters and virtualizes the disks; the AIX client sees generic SCSI disks regardless of the backing SAN storage (DS8000, EMC)
- N_Port ID Virtualization (POWER6): the AIX client connects through a virtual FC adapter paired with an FC adapter in the VIOS; the client sees the actual SAN storage device types (DS8000, EMC)
Redundant VIOS I/O Virtualization

Redundant VIOS partitions provide two paths to attached SAN storage
- AIX and Linux partitions
- One set of disks
- Client partitions use MPIO

Redundant VIOS partitions provide access to mirrored SAN storage
- AIX, i, and Linux partitions
- Mirrored set of disks
- Mirroring done by the client partitions (e.g., IBM i)

Note: Redundant VIOS partitions are not supported on BladeCenter JS12, JS22, JS23, and JS43
Virtual Ethernet

- Memory-based inter-partition LAN: physical network adapters are not needed for inter-partition communication
- VLAN technology implementation: partitions can only access data directed to them
- The virtual Ethernet switch is provided by the POWER Hypervisor
- Virtual Ethernet adapters appear to the OS as physical adapters; the MAC address is generated by the HMC
- Two methods for connecting a virtual Ethernet to an external network:
  - Routing via a partition that owns a physical Ethernet adapter
  - Bridging via a Shared Ethernet Adapter, a VIOS capability
Shared Ethernet Adapter (SEA)

- The Virtual I/O Server is configured with at least one physical Ethernet adapter
- SEA is a VIOS service that acts as a layer-2 network switch
- It securely bridges network traffic from a virtual Ethernet adapter to a real network adapter
- One Shared Ethernet Adapter can be shared by multiple VLANs, so multiple subnets can connect using a single adapter on the Virtual I/O Server

(Diagram: AIX and Linux partitions on VLAN 1 and VLAN 2 bridge through the SEA and physical adapter ent0 to external servers at 10.1.1.14 and 10.1.2.15.)
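The bridging behavior can be pictured with a toy model. This is a deliberately simplified sketch with hypothetical names, not VIOS code: frames on any VLAN the SEA bridges leave through the single physical adapter, while other VLANs stay internal to the hypervisor's virtual switch.

```python
# Toy model of SEA layer-2 bridging. Class and method names are
# hypothetical, for illustration only.

class SharedEthernetAdapter:
    def __init__(self, physical_adapter: str, bridged_vlans):
        self.physical_adapter = physical_adapter
        self.bridged_vlans = set(bridged_vlans)

    def egress_port(self, vlan_id: int) -> str:
        """Where a frame tagged with vlan_id is forwarded."""
        if vlan_id in self.bridged_vlans:
            return self.physical_adapter   # bridged out to the real network
        return "virtual-switch-only"       # stays on the internal VLAN

# One SEA on ent0 carrying both VLANs from the slide:
sea = SharedEthernetAdapter("ent0", bridged_vlans=[1, 2])
print(sea.egress_port(1))    # ent0
print(sea.egress_port(99))   # virtual-switch-only
```

The point of the model is the many-to-one mapping: several VLANs (and therefore several subnets) share one physical port, which is why a single adapter on the VIOS suffices.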
Active Memory Sharing Enables Higher Memory Utilization

Partitions with dedicated memory
- Memory is allocated to partitions
- As workload demands change, memory remains dedicated
- Memory allocation is not optimized to the workload

Partitions with shared memory
- Memory is allocated to a shared pool
- Memory is used by the partition that needs it, enabling more throughput
- Higher memory utilization

(Charts: memory allocation vs. memory requirements over time for three partitions, dedicated vs. shared.)
PowerVM Active Memory Sharing
PowerVM Active Memory Sharing intelligently flows memory from one partition to another for increased utilization and flexibility of memory usage

Memory virtualization enhancement for Power Systems
- Partitions share a pool of memory
- Memory is dynamically allocated based on each partition's workload demands
- Supports over-commitment of logical memory; overflow is managed by VIOS paging devices
- Two VIOS partitions can be used for redundancy
- Compatible with Live Partition Mobility

Designed for partitions with variable memory requirements
- Workloads that peak at different times across the partitions (e.g., around-the-world usage across Asia, Europe, and the Americas)
- Mixed workloads with different time-of-day peaks (e.g., CRM by day, batch at night)
- Workloads with low average memory requirements (infrequent use)

Available with PowerVM Enterprise Edition
- Supports AIX 6.1, i 6.1, and SUSE Linux Enterprise Server 11
- Partitions must use VIOS and shared processors
- POWER6 processor-based systems
Dedicated vs. Active Memory Sharing Environment
(Chart: with dedicated memory, the USA, Asia, and Europe partitions each hold their peak allocation around the clock; with Active Memory Sharing, memory flows between them as their peaks rotate through the day, so the same workloads fit in a smaller total.)
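The saving that Active Memory Sharing promises can be quantified: dedicated sizing must cover the sum of every partition's individual peak, while a shared pool only needs the largest combined demand at any one moment. The numbers below are made up, in the spirit of the around-the-world chart, purely to show the arithmetic.

```python
# Made-up hourly memory demand (GB) for three regional partitions whose
# peaks rotate through the day, as in the around-the-world scenario.
demand = {
    "Asia":   [12, 12, 4, 4, 4, 4],
    "Europe": [4, 4, 12, 12, 4, 4],
    "USA":    [4, 4, 4, 4, 12, 12],
}

# Dedicated memory: each partition is sized for its own peak, all the time.
dedicated_gb = sum(max(series) for series in demand.values())

# Active Memory Sharing: the pool only needs the largest combined demand
# seen at any single point in time.
shared_pool_gb = max(sum(vals) for vals in zip(*demand.values()))

print(dedicated_gb, shared_pool_gb)   # 36 vs 20
```

The gap between the two figures is exactly the utilization headroom the deck attributes to AMS; it shrinks to zero when all partitions peak simultaneously, which is why AMS targets workloads whose peaks do not coincide.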
Resource Movement Can Improve Service
To respond to opportunities and challenges with agility and speed, an organization must have business-driven service management that scales dynamically

Power Systems support dynamic resource movement (without partition reboot)
- Add or remove whole processor cores
- Add or remove memory
- Add or remove I/O devices

PowerVM adds automatic movement
- Processor resources are automatically moved among partitions with uncapped partitions
- Memory resources are automatically moved among partitions with Active Memory Sharing
- Adds support for dynamic movement of 1/100th of a processor

Quickly and easily respond to changing workload demands
PowerVM Live Partition Mobility
Move running AIX and Linux operating system workloads from one POWER6 processor-based server to another!

- Improves availability: helps eliminate many planned outages
- Balances workloads: during peaks and to address spikes in workload
- Included with PowerVM Enterprise Edition
- Supports AIX and Linux partitions with VIOS on Power servers
- Requires a virtualized SAN and network infrastructure
PowerVM Flexibility: Multiple Shared Processor Pools

- Up to 64 shared processor pools on POWER6 processor-based servers
- Partitions are grouped into subsets called pools; processor resources can be managed at the subset level
- AIX, Linux, and IBM i 6.1 partitions
- Caps can be assigned at the group level
- Provides the ability to balance processing resources between partitions assigned to the shared pools
- Enables processor sharing by multiple LPARs while potentially reducing processor-based software licensing costs
- Pools can be organized by OS, business unit, dev/test/prod, or 3-tier architecture (e.g., database vs. Web server pools, prod vs. dev pools, BU1 vs. BU2 pools, AIX vs. Linux pools)
- Helps optimize use of processor cycles
- Partition Mobility supported
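The licensing argument above rests on a simple bound: per-core software inside a capped pool can never use more cores at once than the pool's cap, regardless of how many partitions share it. The sketch below uses hypothetical numbers to illustrate the mechanism only; actual license terms vary by software vendor and must be checked against each vendor's rules.

```python
# Hypothetical illustration of pool-level capping and per-core licensing.
# Without a pool cap, per-core software in each uncapped partition may have
# to be licensed for that partition's maximum; with a capped pool, the cap
# bounds the cores the software can ever use simultaneously.

partition_max_cores = {"db1": 4, "db2": 4, "db3": 4}  # uncapped maximums
pool_cap_cores = 6                                    # cap on the shared pool

licenses_without_pool = sum(partition_max_cores.values())
licenses_with_pool = min(sum(partition_max_cores.values()), pool_cap_cores)

print(licenses_without_pool, licenses_with_pool)   # 12 vs 6
```

Grouping the three database partitions into one capped pool halves the core count the software can ever consume, which is the kind of reduction the slide alludes to.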
PowerVM Lx86
Run x86 Linux applications on Power Systems along with your AIX, i, and Linux applications

- Creates an x86 Linux application environment running on Linux for Power Systems
- Dynamically translates and maps x86 Linux instructions to POWER
- Reduces costs: simplifies migration of Linux-on-x86 applications, enabling customers to realize the energy and administration savings of consolidation
- Strengthens the application portfolio: run most existing 32-bit x86 Linux applications* with no application changes
- Included with all PowerVM Editions; runs in a Linux partition

* Visit http://ibm.com/systems/p/linux/qual.html for detailed qualifications.
Power Virtualization Management

- Hardware Management Console (HMC): dedicated console for server and virtualization management
- Integrated Virtualization Manager (IVM): browser-based management tool for Power Express servers and blades; runs in the VIOS partition
Power Systems Hardware Management Console (HMC)

- Models: 7042-CRx / 7310-CRx (rack) and 7042-C0x / 7310-C0x (desktop), with the 7316 display
- The HMC is dedicated to the console function
- Required on Power Systems to create/change partitions or to use Capacity on Demand
- Ethernet connection to the managed servers
- Graphical user interface for configuring and operating Power servers
- Supports a maximum of 254 partitions per HMC across up to 32 servers, and a maximum of two HMCs per server
HMC V7: Web-based user interface with remote browser access
PowerVM Integrated Virtualization Manager (IVM)
A virtualization solution for small and mid-size companies

- Simplifies management: browser-based tool for creating and managing partitions
- Reduces costs: eliminates the need to purchase a dedicated hardware console
- Included with all PowerVM Editions
- Runs in the Virtual I/O Server partition
IVM Support for i Partitions
PowerVM Integrated Virtualization Manager provides an easier-to-use, lower-cost-of-entry virtualization solution

- Supports virtualization without an HMC
- Provided with PowerVM Express, Standard, and Enterprise Editions
- i 6.1 partitions are supported with IVM on BladeCenter JS12/22/23/43 and Power 520 (8203) and Power 550 (8204) systems
- The VIOS partition owns the disk, DVD, and Ethernet hardware resources; IBM i runs as a purely virtual partition
- PowerVM Express Edition is available for i clients as an entry solution supporting up to 3 partitions (VIOS plus 2 others)
PowerVM Editions

Express Edition
- Servers supported: Power 520 / Power 550
- Operating systems: AIX / Linux / i
- Max LPARs: 1 VIOS + 2 LPARs
- Management: IVM
- Includes: VIOS, PowerVM Lx86

Standard Edition
- Servers supported: POWER6 blades, Power Systems (POWER6)
- Operating systems: AIX / Linux / i
- Max LPARs: 10 per core
- Management: IVM & HMC
- Includes: VIOS, PowerVM Lx86, Multiple Shared Processor Pools

Enterprise Edition
- Servers supported: POWER6 blades, Power Systems (POWER6)
- Operating systems: AIX / Linux / i
- Max LPARs: 10 per core
- Management: IVM & HMC
- Includes: VIOS, PowerVM Lx86, Multiple Shared Processor Pools, Live Partition Mobility, Active Memory Sharing
IBM PowerVM Reduce Costs Improve Service Manage Risk The leading virtualization platform for UNIX, i and Linux clients
AIX 6 Workload Partitions (WPARs)

- Separate regions of application space within a single AIX image
- Improved administrative efficiency by reducing the number of AIX images to maintain
- Software-partitioned system capacity: each workload partition obtains a regulated share of system resources
- Each workload partition can have unique network, filesystems, and security
- Two types of workload partitions: system partitions and application partitions
- Separate administrative control: each system workload partition is a separate administrative and security domain
- All WPARs share the underlying system resources: operating system, I/O, processor, memory
AIX Workload Partitions (WPARs) Can Be Used Inside LPARs
(Diagram: dedicated-processor LPARs (Finance, Planning) and shared-processor-pool LPARs (Americas, Asia, EMEA) each host multiple WPARs — e.g., Bus Dev, Test, MFG, Planning, email, Billing — alongside a VIO Server on the POWER Hypervisor.)
Two WPAR AIX Offerings

AIX 6 Workload Partitions (WPAR)
- Included in AIX 6
- Single-system management

Workload Partitions Manager (WPAR Manager)
- Cross-system management for workload partitions
- Enablement for Live Application Mobility
- Automated, policy-based application mobility
- Part of the IBM Systems Director family
Workload Partitions Manager
Browser-based management of WPARs across multiple systems

Single console for:
- Graphical interface
- Lifecycle operations: create and remove, start and stop, checkpoint and restart
- Monitoring and reporting
- Manual relocation
- Automated, policy-driven relocation: infrastructure optimization, load balancing

(Diagram: the Workload Partition Manager communicates via Web services with a WPAR agent on each managed server hosting system/application WPARs.)
Graphical WPAR Manager & Application Mobility
(Screenshot of the Workload Partition Manager.)
AIX Live Application Mobility
Move a running workload partition from one server to another for outage avoidance and multi-system workload balancing

- Relocation is driven by Workload Partitions Manager policy
- Works on any hardware supported by AIX 6, including POWER5 and POWER4

(Diagram: WPARs — App Server, e-mail, Data Mining, Web, Dev, Billing, QA — spread across two AIX systems, with one partition relocating under Workload Partitions Manager policy.)
Trademarks and disclaimers
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce. ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office. UNIX is a registered trademark of The Open Group in the United States and other countries. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others. Information is provided "AS IS" without warranty of any kind. The customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer. Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products.
Questions on the capability of non-IBM products should be addressed to the supplier of those products. All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning. Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here. Prices are suggested U.S. list prices and are subject to change without notice. Starting price may not include a hard drive, operating system or other features. Contact your IBM representative or Business Partner for the most current pricing in your geography. Photographs shown may be engineering prototypes. Changes may be incorporated in production models. © IBM Corporation 1994-2009. All rights reserved. References in this document to IBM products or services do not imply that IBM intends to make them available in every country. Trademarks of International Business Machines Corporation in the United States, other countries, or both can be found on the World Wide Web at http://www.ibm.com/legal/copytrade.shtml.