PowerVM Single Root I/O Virtualization: Fundamentals, Configuration, and Advanced Topics
1 PowerVM Single Root I/O Virtualization (SR-IOV): Fundamentals, Configuration, and Advanced Topics. Allyn Walsh, Consulting IT Specialist. Many contributions from Chuck Graham, STSM, Lead SR-IOV Architect, and Alexander Paul, Senior Systems and Network Engineer. 2016 IBM Corporation
2 Topics: Overview; Performance; vNIC and Live Partition Mobility; Fault-Tolerant Configurations; vNIC Failover; Performance Monitor & Topology; Maintenance; Additional Information
3 Network Technologies on POWER Systems. Dedicated adapters: best possible performance; exclusively bound to a particular partition, no resource sharing. Virtual Ethernet: hypervisor internal switching. VIOS Shared Ethernet Adapter: hypervisor switch uplink through the VIOS; options for high availability include SEA failover, SEA failover with load sharing, and NIB. Single Root I/O Virtualization (SR-IOV) and vNIC: SR-IOV is a PCIe standard for hardware resource sharing; vNIC is a new virtual adapter type, announced 5th October 2015. Host Ethernet Adapter (HEA): virtualization technology not available on POWER7+ and POWER8 servers.
4 Power Systems SR-IOV Solutions. PowerVM SR-IOV technology: sharing by up to 64 partitions per PCIe adapter; Direct Access I/O performance provides CPU utilization and latency characteristics similar to a dedicated adapter; function gives the partition access to advanced adapter features (e.g. RSS, LSO); logical port (VF) resource provisioning (e.g. desired bandwidth). Flexible deployment models: single partition; multi-partition without VIOS using Direct Access I/O; multi-partition through VIOS; multi-partition mix of through-VIOS and Direct Access I/O. PowerVM virtual Network Interface Controller (vNIC) technology: leverages SR-IOV technology; advanced virtualization (e.g. LPM) capable; sharing by up to 64 partitions per adapter; logical port (VF) resource provisioning (e.g. desired bandwidth); requires VIOS.
5 SR-IOV HW/SW minimums for POWER8 - June 2015 GA. IBM Power System E870 (9119-MME), E880 (9119-MHE), E850 (8408-E8E); IBM Power System S824 (8286-42A), S814 (8286-41A), S822 (8284-22A), S824L (8247-42L), S822L (8247-22L), S812L (8247-21L). SR-IOV support for the PCIe Gen3 I/O expansion drawer. An HMC is required, as is server firmware 830. PowerVM standard or enterprise edition is required; PowerVM express edition allows only one partition to use the logical ports per adapter. Minimum client operating systems: AIX 6.1 TL9 SP5 with APAR IV68443, or later; AIX 7.1 TL3 SP5 with APAR IV68444, or later; IBM i 7.1 TR10, or later; IBM i 7.2 TR2, or later; Red Hat Enterprise Linux 6.5, or later; Red Hat Enterprise Linux 7, or later; SUSE Linux Enterprise Server 11 SP3, or later; SUSE Linux Enterprise Server 12, or later; Ubuntu 15.04, or later. SR-IOV logical ports assigned to a VIOS require a minimum VIOS service level.
6 Systems with SR-IOV Support. 4/2014 GA: 9117-MMD (IBM Power 770), 9179-MHD (IBM Power 780), 8412-EAD (IBM Power ESE) system node PCIe slots. 3/2015 GA: 9119-MME (IBM Power System E870), 9119-MHE (IBM Power System E880) system node PCIe slots. 6/2015 GA: POWER8 scale-out servers, expanded options for the Power E870 and E880, the Power E850, and the PCIe Gen3 I/O expansion drawer. The following POWER8 PCIe slots are SR-IOV capable: all Power E870/E880 and Power E850 system node slots; slots C6, C7, C10, and C12 of a Power S814 (1S 4U) or S812L (1S 2U) server; slots C2, C3, C4, C5, C6, C7, C10, and C12 of an S824 or S824L server (2-socket, 4U) with both sockets populated (if only one socket is populated, then C6, C7, C10, and C12); slots C2, C3, C5, C6, C7, C10, and C12 of an S822 or S822L server (2-socket, 2U) with both sockets populated (if only one socket is populated, then C6, C7, C10, and C12); slots C1 and C4 of the 6-slot fan-out module in a PCIe Gen3 I/O drawer (if system memory is <128 GB, only slot C1 of a fan-out module is capable). 12/2015 GA: vNIC support for POWER8 scale-out servers, Power E870 and E880, and the PCIe Gen3 I/O drawer.
7 SR-IOV Capable Slots in POWER8 Scale-out Models such as the S824. SR-IOV placement in scale-out models varies depending on 1-socket or 2-socket configuration and may have other considerations such as installed memory; adapter placement also varies between 2U and 4U models (S824 example below; see the URL to the Knowledge Center). For platforms with less than 64 GB of total system memory, SR-IOV should not be configured in slots C2, C4, C10, and C12, as performance may be severely impacted. Two slot positions per fan-out module are SR-IOV capable; see the adapter placement rules. SR-IOV capability varies in slots P1-C4 and P2-C4 of the EMX0 PCIe3 expansion drawer based on the amount of system memory: if the drawer is connected to a system with a total amount of physical memory greater than or equal to 128 GB, slots P1-C4 and P2-C4 are SR-IOV capable.
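The memory-dependent fan-out-module rule above can be captured in a minimal sketch. This is an illustrative helper, not an IBM tool; the function name is an invention, and only the slot names and the 128 GB threshold come from the slides.

```python
def fanout_sriov_capable_slots(total_memory_gb: int) -> list:
    """Return the EMX0 fan-out-module slots that are SR-IOV capable."""
    # Slot C1 of each fan-out module is always SR-IOV capable;
    # slot C4 becomes capable only at >= 128 GB of total system memory.
    slots = ["P1-C1", "P2-C1"]
    if total_memory_gb >= 128:
        slots += ["P1-C4", "P2-C4"]
    return sorted(slots)

print(fanout_sriov_capable_slots(64))   # ['P1-C1', 'P2-C1']
print(fanout_sriov_capable_slots(256))  # ['P1-C1', 'P1-C4', 'P2-C1', 'P2-C4']
```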
8 SR-IOV adapters: 2-port 10GbE CNA & 2-port 1GbE. FC #EN0J: 10GbE Optical SR, Low Profile; FC #EN0H: 10GbE Optical SR, Full High; FC #EN0L: 10GbE Active Copper Twinax, Low Profile; FC #EN0K: 10GbE Active Copper Twinax, Full High; FC #EN0N: 10GbE Optical LR, Low Profile; FC #EN0M: 10GbE Optical LR, Full High. 20 VFs per 10 GbE SR port + 4 VFs per 1 GbE 1GBASE-T port = 48 VFs per adapter. Announced/GA: EN0H, EN0K SR-IOV support for the Power 770/780/ESE system node announced April 8, 2014, GA April 2014; EN0J, EN0L, EN0N support for the Power E870/E880 system node announced Feb. 24, 2015, GA March 2015; EN0H, EN0K, EN0M support for other POWER8 systems and the PCIe Gen3 I/O expansion drawer announced April 28, 2015, GA June 2015.
9 New PCIe Gen3 SR-IOV adapters: PCIe3 4-port 10Gb Optical SR or Active Copper Twinax. #EN15: 10GbE Optical SR, Full Height; #EN16: 10GbE Optical SR, Low Profile; #EN17: 10GbE Active Copper Twinax, Full Height; #EN18: 10GbE Active Copper Twinax, Low Profile. 16 VFs per port x 4 ports 10GbE CNA = 64 VFs per adapter. Announced April 28, 2015; GA June 2015. Supported in the PCIe Gen3 I/O drawer, the 4U scale-out system units, and the Power E850/E870/E880 system node slots.
10 Adapters with SR-IOV Support (logical ports per adapter, and per port):
PCIe3 4-port (2x10GbE+2x1GbE) SR optical fiber and RJ45 - 48 (20/20/4/4) - low profile multi-OS #EN0J (note 1); full high multi-OS #EN0H (note 3); low profile Linux-only PowerVM #EL38; full high Linux-only PowerVM #EL56
PCIe3 4-port (2x10GbE+2x1GbE) copper twinax and RJ45 - 48 (20/20/4/4) - #EN0L (note 1); #EN0K (note 3); #EL3C; #EL57
PCIe3 4-port (2x10GbE+2x1GbE) LR optical fiber and RJ45 - 48 (20/20/4/4) - #EN0N; #EN0M; Linux-only n/a
PCIe3 4-port 10GbE SR optical fiber - 64 (16/16/16/16) - #EN16 (note 2); #EN15; Linux-only n/a
PCIe3 4-port 10GbE copper twinax - 64 (16/16/16/16) - #EN18 (note 2); #EN17; Linux-only n/a
Notes: 1. Announced February 2015 for the Power E870/E880 system node; now available in other POWER8 servers. 2. Low profile is only available in the Power E870/E880 system node, not the 2U servers. 3. Announced April 2014 for the Power 770/780/ESE system node; with the April 2015 announcement, available in POWER8 servers.
11 Power Systems SR-IOV Capable PCIe Slots. IBM Power 770 (9117-MMD), IBM Power 780 (9179-MHD), or Power ESE (8412-EAD): all PCIe slots within the system units are SR-IOV capable; PCIe slots in the I/O expansion drawers are not. POWER8 systems: consult the IBM Knowledge Center PCIe adapter placement rules for the specific system or I/O expansion drawer of interest; in some cases total system memory determines whether a PCIe slot is SR-IOV capable. For example, for the 8408-E8E and PCIe Gen3 I/O expansion drawer: E8E/p8eab/p8eab_85x_slot_details.htm?cp=8408-E8E%2F MME/p8eab/p8eab_87x_88x_slot_details.htm
12 SR-IOV Architecture - Internal Switching in Conjunction with SEA. (Diagram: Partitions A and B connect through virtual Ethernet adapters and an SEA in the Virtual I/O Server; Partition C uses an SR-IOV logical port directly, here with 10% capacity. The 4-port 10GbE CNA & 1GbE adapter provides 2x 10 GbE SR ports with 20 virtual functions (VFs) each and 2x 1 GbE copper ports with 4 VFs each, alongside the virtual hypervisor switch.)
13 Flexible Deployment. Single partition: all adapter resources available to a single partition, either in dedicated mode (the device driver owns the physical function, PF) or through virtual functions (VFs). Multi-partition without VIOS: direct access to adapter features; capacity per logical port; fewer adapters for redundant adapter configurations. (Diagram: each LPAR's device driver attaches to one or more VFs on the SR-IOV adapter's virtual fabric. VF = Virtual Function; PF = Physical Function.)
14 Flexible Deployment. Multi-partition through VIOS: SR-IOV adapters are shared by VIOS partitions; fewer adapters for redundancy; VIOS client partitions are eligible for Live Partition Mobility. Multi-partition mix of VIOS and non-VIOS: for VIOS partitions the behavior is the same as multi-partition through VIOS; direct access partitions get path length and latency comparable to a dedicated adapter, direct access to adapter features, and entitled capacity per logical port. (Diagram: two VIOS LPARs bridge virtual adapters to SR-IOV VFs for clients, while other LPARs attach VFs directly. VF = Virtual Function.)
15 PowerVM Single Root I/O Virtualization: Fundamentals, Configuration, and Advanced Topics - Performance
16 Traditional Virtual Ethernet Performance. With MTU 1500, maximum out-of-the-box Virtual Ethernet throughput is ~2.9 Gbit/s. (Chart: throughput in Gbps vs. CPU units for 9117-MMB default, 8202-E4C default, 9117-MMD default, and POWER8 9119-MME.)
17 SR-IOV Internal Switching Setup on a POWER8 S824. Benchmark with 8 parallel TCP sessions. Client LPAR: Power S824, AIX 7.1 TL3 SP3, capped, 4 VPs, MTU size 1500 bytes. Server LPAR: same Power S824, AIX 7.1 TL3 SP3, EC=3.0 units, uncapped, 4 VPs, MTU size 1500 bytes.
18 POWER8 SR-IOV Internal Switching on a POWER8 S824. POWER8 provides access to adapter line speed with fewer CPU units than POWER7+. (Chart: throughput in Gbit/s vs. processor units, SR-IOV internal switching vs. Virtual Ethernet; series: VENT MTU 1500, VENT MTU 9000, P7+ SR-IOV MTU 1500, P8 SR-IOV.)
19 POWER8 SR-IOV External Switching on a POWER8 S824. (Chart: throughput in Gbit/s, 0 to 12, vs. processor units, 0.40 to 1.40, at MTU 1500 bytes, comparing P8 SR-IOV internal and P8 SR-IOV external switching.)
20 CPU Units Consumption: SEA / SR-IOV. (Chart, CPU consumption by configuration and throughput: E870 SEA default at 2.5 Gbit/s - VIOS Rx 5.8, VIOS Tx 1.65, Server 1.25, Client 1.23; E870 SEA + LSO (large send offload) at 5 Gbit/s - VIOS Rx 1.0, VIOS Tx 0.85, Server 0.59, Client 0.37; S824 SR-IOV at 5 Gbit/s - Server 0.43, Client 0.40; S824 SR-IOV at 10 Gbit/s - Server 0.81, Client 0.80.)
21 Case Study: Transaction Rate Performance. A customer migrated SAP from POWER7 to POWER8; the expectation for SAP ERP was high network transaction rate performance. Issue: benchmarks on POWER7 with Virtual Ethernet showed limited TPS performance for small packets, and better results were expected from the new POWER8 systems. The customer additionally evaluated SR-IOV for transaction rate tuning. Systems: POWER7 (old) / POWER8 E870 (new).
22 Transaction Rate Performance. SAP ERP sizing for high network transaction rates; a packet size of 700 bytes is assumed. Systems: POWER8 E870 / POWER7.
23 SR-IOV & vNIC Capacity. Capacity controls adapter and system resource levels, including desired minimum bandwidth. Capacity is the desired minimum percent of physical port resources; unallocated or unused bandwidth is available for logical ports with more demand than their minimum. Actual consumable bandwidth is an approximation: configured logical ports reserve a small amount of bandwidth even when they have no demand, and LSO traffic on a logical port with a small capacity value may overshoot the minimum bandwidth. This is especially visible on 1 Gbps links.
24 SR-IOV & vNIC Capacity. The capacity value cannot be changed dynamically. To change the capacity value of a logical port, either dynamically remove the logical port and then dynamically add a logical port with the new capacity value, or update the profile with the new value and activate the profile. It may be desirable to leave some capacity available for new logical ports. The capacity setting must be a multiple of the default (2%).
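The rules above can be summarized in a small validation sketch. This is an assumed helper for illustration, not an HMC interface: capacity is a desired minimum share of the physical port, must be a positive multiple of the 2% default, and the total across all logical ports on a physical port cannot exceed 100%.

```python
DEFAULT_GRANULARITY = 2  # percent; the default capacity increment

def validate_capacity(new_pct: int, existing_pcts: list) -> None:
    """Raise ValueError if a proposed logical-port capacity is invalid."""
    if new_pct <= 0 or new_pct % DEFAULT_GRANULARITY != 0:
        raise ValueError(
            f"capacity must be a positive multiple of {DEFAULT_GRANULARITY}%")
    if sum(existing_pcts) + new_pct > 100:
        raise ValueError("total capacity on the physical port would exceed 100%")

validate_capacity(20, [2, 2])  # OK: leaves 76% for future logical ports
# validate_capacity(5, [])     # would raise: 5 is not a multiple of 2%
# validate_capacity(60, [50])  # would raise: 110% total on the port
```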
25 SR-IOV & vNIC Desired Bandwidth. (Chart: an SR-IOV logical port first uses the physical port exclusively; after about 10 seconds it enters a competitive situation with another logical port configured with 2% capacity.)
26 PowerVM Single Root I/O Virtualization: Fundamentals, Configuration, and Advanced Topics - vNIC and Live Partition Mobility
27 Live Partition Mobility Options with the Virtual Network Interface Controller (vNIC). vNIC is a new virtual adapter type that leverages SR-IOV to provide a performance-optimized virtual NIC solution and enables advanced virtualization features, such as Live Partition Mobility, with adapter sharing. It leverages the SR-IOV capacity value for resource provisioning (e.g. minimum bandwidth). December 2015 GA for AIX & IBM i; Linux is in progress, and each distro will certify. Power E850 support GA on 3/4/2016 with a firmware SP. Prerequisites: AIX 7.1 TL4 or later, or AIX 7.2 or later; IBM i 7.1 TR10 or later, or 7.2 TR3 or later; VIOS 2.2.4, or later; firmware 840 or later; HMC V8R8.4.0 or later.
28 vNIC Architecture. (Diagram: the client partition's vNIC client adapter is served by a vNIC server in the Virtual I/O Server, backed by an SR-IOV logical port - here with 30% capacity - on a 4-port 10GbE CNA/FCoE & 1GbE adapter with 2x 10 GbE SR and 2x 1 GbE copper ports; data buffers are shared through the hypervisor.)
29 vNIC Architecture. (Same diagram, highlighting partition mobility: the client remains LPM-capable because the SR-IOV logical port is owned by the VIOS, not the client.)
30 Comparison of Virtual Ethernet & vNIC. Virtual Ethernet/SEA (current): multiple copies of data; many-to-one relationship between virtual adapters and the physical adapter; QoS based on VLAN tag PCP bits (i.e. 8 traffic classes). vNIC with advanced virtualization features (e.g. LPM): improved performance - eliminates data copies, optimized control flow with no overhead from the vswitch or SEA, multiple queue support; efficient - lower CPU and memory usage (no data copy) and leverages adapter offload for LPAR-to-LPAR communication; deterministic QoS - the one-to-one relationship between a vNIC client adapter and an SR-IOV logical port extends logical port QoS. (Diagram: control and data flow through SEA and the vswitch vs. direct vNIC server/client data flow to VFs.)
31 Live Partition Mobility/Remote Restart with vNIC. Target system: must support vNIC; must have at least one running SR-IOV adapter in shared mode; must have at least one running VIOS that supports vNIC. Target physical port: the user can select any physical port on the target for each vNIC on the LPAR to be migrated. If a physical port is not selected, the HMC will map the physical port by port label and port switch mode (VEB/VEPA); an empty string is a valid label. The target physical port must have sufficient available capacity and available logical port count.
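The mapping rule above can be sketched as follows. This is a hedged illustration of the described HMC behavior, not HMC source: match by port label and switch mode, then require enough spare capacity and a free logical port. The class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class PhysPort:
    label: str             # port label; an empty string is a valid label
    switch_mode: str       # "VEB" or "VEPA"
    free_capacity_pct: int # unallocated capacity on the physical port
    free_logical_ports: int

def pick_target_port(src_label, src_mode, needed_pct, candidates):
    """Return the first candidate port matching label/mode with room, else None."""
    for p in candidates:
        if (p.label == src_label and p.switch_mode == src_mode
                and p.free_capacity_pct >= needed_pct
                and p.free_logical_ports > 0):
            return p
    return None

ports = [PhysPort("prod", "VEB", 10, 0),   # right label, no free logical port
         PhysPort("prod", "VEB", 50, 4),   # eligible target
         PhysPort("", "VEPA", 100, 8)]     # wrong label and mode
print(pick_target_port("prod", "VEB", 20, ports))  # the second port
```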
32 Miscellaneous vNIC Notes. vNIC backing-device logical ports owned by VIOSs are not captured in system templates, and they do not appear in the VIOS profile even when "sync last activated profile with current configuration" is on. vNIC backing-device logical ports cannot be modified. Deleting a vNIC backing-device logical port is blocked by the HMC unless its associated vNIC is already gone. Activating a client partition profile with vNICs requires the specified hosting VIOSs to be in running state with an RMC connection.
33 vNIC / SR-IOV: Throughput / CPU Comparison. (Chart: throughput in Gbps vs. CPU units for vNIC and native SR-IOV with external switching; about a 4% difference in maximum throughput.)
34 Total CPU Consumption: SEA / SR-IOV / vNIC. (Chart, CPU consumption by configuration and throughput: E870 SEA default at 2.5 Gbit/s - VIOS Rx 5.8, VIOS Tx 1.65, Server 1.25, Client 1.23; E870 SEA + LSO (large send offload) at 5 Gbit/s - VIOS Rx 1.0, VIOS Tx 0.85, Server 0.59, Client 0.37; S824 SR-IOV at 5 Gbit/s - Server 0.43, Client 0.40; S824 SR-IOV at 10 Gbit/s - Server 0.81, Client 0.80; E870 vNIC at 5 Gbit/s - VIOS Rx 0.89, VIOS Tx 0.48, Server 0.7, Client 0.37; E870 vNIC at 10 Gbit/s - VIOS Rx 1.47, VIOS Tx 1.00, Server 1.54, Client 0.92.)
35 vNIC / SR-IOV: Maximum TPS with Small Packets. (Chart: transactions per second with external switching for SEA 1 Gigabit, SEA 10 Gigabit, vNIC 10 Gigabit, and SR-IOV 10 Gigabit.)
36 Live Partition Mobility Options with SR-IOV - Virtual Ethernet Configuration. Use the current Virtual Ethernet support with SR-IOV logical ports as the Shared Ethernet Adapter (SEA) physical connections to the network. Client partitions do not receive the performance benefits provided with Direct Access SR-IOV, and there is no client partition resource allocation (e.g. desired bandwidth). Benefits: LPM capability, and adapter/port sharing to reduce the number of adapters. (Diagram: two VIOS LPARs host SEAs backed by SR-IOV VFs; clients use plain virtual Ethernet through the vswitch.)
37 Live Partition Mobility Options with SR-IOV - Active-Backup Configuration. Configure an SR-IOV logical port as the active connection and a Virtual Ethernet adapter or vNIC client virtual adapter as the backup. Prior to migration, use a dynamic LPAR operation to remove the SR-IOV logical port; the Virtual Ethernet adapter becomes the active connection. Migrate the partition, then on the target system configure an SR-IOV logical port as the active connection again. This option applies to AIX and Linux: physical I/O cannot be assigned (even temporarily) to IBM i LPM-capable partitions. (Diagram: normal active-backup configuration.)
38 PowerVM Single Root I/O Virtualization: Fundamentals, Configuration, and Advanced Topics - Fault-Tolerant Configurations
39 Link Aggregation. AIX: EtherChannel or static link aggregation; IEEE 802.3ad/802.1ax Link Aggregation Control Protocol (LACP); Network Interface Backup (NIB). IBM i: EtherChannel or static link aggregation; IEEE 802.3ad/802.1ax LACP; Virtual IP Address (VIPA). Linux: several bonding/port trunking modes, including LACP and active-backup.
40 Link Aggregation Using LACP - Issue. Link aggregation (LACP) will not function properly with multiple logical ports using the same physical port: the switch expects a single partner (MAC physical layer) on a link, and multiple logical ports on the same physical port create multiple partners on the link. (Diagram: an invalid link aggregation configuration with two logical ports assigned to one physical port. VF = Virtual Function.)
41 EtherChannel or Static Link Aggregation - Issue. SR-IOV logical ports may go down while the physical link remains up. Switch port failover occurs only when the physical link goes down; the switch does not recognize a logical port going down and will continue to send traffic on the physical port. EtherChannel is therefore not recommended for an SR-IOV configuration. (Diagram: a failed logical port in an EtherChannel; the switch will not detect the logical link failure.)
42 Link Aggregation Recommendations. Do you require bandwidth greater than a single link's bandwidth, with link failover? Use link aggregation (LACP) with one logical port per physical port. This provides greater bandwidth than a single link, with failover, and other adapter ports may be shared or used in a LACP configuration by other partitions. Best practice: assign 100% capacity to each logical port in the link aggregation group to prevent accidental assignment of another SR-IOV logical port to the same physical port. (Diagram: link aggregation with one logical port assigned to each physical port.)
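The best practice above reduces to two checks, sketched here as an illustrative (invented) helper: every LACP member sits on its own physical port, and each is configured at 100% capacity so no other logical port can land on that port.

```python
def lacp_config_ok(members) -> bool:
    """members: list of (physical_port_id, capacity_pct) tuples in the LACP group."""
    ports = [pp for pp, _ in members]
    one_logical_port_per_phys = len(ports) == len(set(ports))
    full_capacity = all(pct == 100 for _, pct in members)
    return one_logical_port_per_phys and full_capacity

print(lacp_config_ok([("P0", 100), ("P1", 100)]))  # True: recommended layout
print(lacp_config_ok([("P0", 50), ("P0", 50)]))    # False: shared physical port
print(lacp_config_ok([("P0", 100), ("P1", 50)]))   # False: port P1 left open
```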
43 Link Aggregation Recommendations. Do you require bandwidth less than a single link's bandwidth, plus failover? Use an active-backup approach (e.g. AIX NIB, IBM i VIPA, or the Linux bonding driver in active-backup mode); for Linux, the fail_over_mac parameter must be set to "active" (1) or "follow" (2). This allows sharing of the physical port by multiple partitions. When configured for active-backup, you should also configure the capability to detect when to fail over: on AIX, configure a backup adapter and an IP address to ping; on IBM i with VIPA, options for detecting network failures besides link failures include Routing Information Protocol (RIP), Open Shortest Path First (OSPF), or a customer monitor script; on Linux, use the bonding support to configure monitoring to detect network failures. (Diagram: an active-backup configuration, requiring no switch configuration, that allows sharing of the physical port.)
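For the Linux case above, a minimal bonding configuration sketch follows. The parameters (mode, fail_over_mac, arp_interval, arp_ip_target) are standard bonding driver options, but the file placement, values, and monitor address are illustrative assumptions, not a tested configuration.

```
# /etc/modprobe.d/bonding.conf (illustrative sketch)
# active-backup over two SR-IOV logical ports; fail_over_mac=active so the
# bond takes the active slave's MAC, as required for SR-IOV logical ports;
# ARP monitoring (instead of miimon) detects network failures, not just
# link failures. 192.0.2.1 is a placeholder gateway to probe.
options bonding mode=active-backup fail_over_mac=active arp_interval=1000 arp_ip_target=192.0.2.1
```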
44 PowerVM Single Root I/O Virtualization: Fundamentals, Configuration, and Advanced Topics - vNIC Failover
45 vNIC Failover. vNIC with vNIC server-side redundancy (analogous to SEA failover): multiple backing devices (up to 6) per vNIC client, one active and the others as inactive standby. Backing-device configuration includes selection of the VIOS, the SR-IOV physical port, failover priority, and capacity, providing flexible deployment and load-balancing options. The hypervisor health-checks active and inactive backing devices and manages failover based on operational state and failover priority. Backing devices can be dynamically added and removed.
46 vNIC Failover Architecture. (Diagram: three Virtual I/O Servers each host a vNIC server for one client partition - standby at failover priority 100, standby at priority 50, active at priority 1 - with data buffers shared through the hypervisor.) vNIC failover configuration: up to 6 backing devices per vNIC client; select the VIOS and adapter physical port for each backing device; set a failover priority for each backing device; Auto Priority Failover: enabled or disabled.
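The selection rule described on these slides can be sketched as follows. This is an assumed illustration of the documented behavior, not PowerVM source: among operational backing devices the lowest failover-priority number is most favored, and with Auto Priority Failover disabled the current active device keeps the connection as long as it stays operational.

```python
def choose_active(devices, current=None, auto_priority_failover=True):
    """devices: dict of name -> (failover_priority, operational_bool)."""
    up = {name: prio for name, (prio, ok) in devices.items() if ok}
    if not up:
        return None  # no operational backing device at all
    if not auto_priority_failover and current in up:
        return current  # stick with the active device while it is healthy
    return min(up, key=lambda name: up[name])  # lowest priority number wins

devs = {"vios1": (100, True), "vios2": (50, True), "vios3": (1, False)}
print(choose_active(devs))                       # 'vios2': vios3 is down
print(choose_active(devs, current="vios1",
                    auto_priority_failover=False))  # 'vios1': no failback
```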
47 vNIC Failover Architecture. (Diagram: the priority-1 backing device is Not Operational; the priority-50 device is now active, with priority 100 on standby. Same configuration options as on the previous slide.) Failover triggers: no VIOS heartbeat.
48 vNIC Failover Architecture. (Diagram: priority 100 is active, priority 50 has its link down, priority 1 is Not Operational.) Failover triggers: no VIOS heartbeat, or adapter link failure.
49 vNIC Failover Architecture. (Diagram: priority 1 recovers and becomes active; priority 100 returns to standby, priority 50 still has its link down.) Failover triggers: no VIOS heartbeat or adapter link failure, plus automatic failback to the most favored device when Auto Priority Failover is enabled.
50 vNIC Failover Architecture. (Diagram: priority 100 is active, priority 50 has its link down, priority 1 is on standby.) Failover triggers: no VIOS heartbeat, adapter link failure, Auto Priority Failover if enabled, or an HMC user-initiated failover (which sets Auto Priority Failover to disabled).
51 vNIC Failover Configuration - Partition Properties -> Virtual NICs. The new Virtual NICs interface includes support for multiple backing devices. To add a vNIC client, click Add Virtual NIC.
52 vNIC Failover Configuration - Add vNIC. Select the desired SR-IOV physical port.
53 vNIC Failover Configuration - Add vNIC. The physical port information is updated. Select the hosting partition (VIOS); optionally set the capacity % and the failover priority (a lower number is more favored).
54 vNIC Failover Configuration - Add vNIC. Click Advanced Virtual NIC Settings for additional options.
55 vNIC Failover Configuration - Additional Backing Devices. Create additional backing devices per vNIC client, up to 6 backing devices total. Click the Add Entry button to add a new backing device.
56 vNIC Failover Configuration - Add Entry. A new row appears for the new backing device's information.
57 vNIC Failover Configuration - Add Entry. Select the physical port, hosting partition (VIOS), capacity value, and failover priority. The physical port selection list is limited to previously unselected physical ports; the capacity % may differ for each backing device.
58 vNIC Failover Configuration - Add Entry. Select the vNIC Allow Auto Priority Failover option. Enabled: always fail over to a more favored (lower failover priority number) backing device. Disabled: only fail over when the current backing device is not operational. Click OK to create the vNIC with its backing devices.
59 vNIC Failover Configuration - vNIC List. Device Name indicates the device name in the client partition, and the vNIC Auto Priority Failover state is shown. The other columns are associated with backing devices: Backing Device State indicates the operational state of the backing devices and which one is the active backing device.
60 vNIC Failover Configuration - Modify Backing Devices. Click the Action button for a list of actions you can perform on the vNIC, then click Modify Backing Device.
61 vNIC Failover Configuration - Modify Backing Device. Modify the vNIC Auto Priority Failover options, or click the Add Backing Devices button to configure additional backing devices.
62 vNIC Failover Configuration - Add vNIC Backing Devices. Select a physical port, VIOS, capacity %, and failover priority.
63 vNIC Failover Configuration - Add Backing Device. Click Add Entry to configure additional backing devices; click OK when done.
64 vNIC Failover Configuration - Live Partition Migration. The Partition Migration wizard provides a proposed mapping from each source backing device to a destination backing device; the destination backing device for a source backing device can be modified.
65 vNIC Failover Configuration - Live Partition Migration. Modifying the destination backing device allows the following changes: the physical adapter, the physical port, the host VIOS, and the capacity (%).
66 vNIC Failover. Target 4Q2016 GA, except the E850 (8408-E8E), which is targeted for 1H2017. Prerequisites (current targets): AIX 7.1 TL4 or later, or AIX 7.2 or later; IBM i 7.1 TR11 or later, or 7.2 TR3 or later; Linux not supported at GA (phased implementation via the Linux community in 2017); VIOS 2.2.5, or later; firmware FW860, or later; HMC V8R8.6.0 with mandatory PTF, or later.
67 PowerVM Network Virtualization Comparison.
Technology | Live Partition Mobility | Quality of service (QoS) | Direct access perf. | Link Aggregation | Server-side failover | Requires VIOS
SR-IOV | No (1) | Yes | Yes | Yes (2) | No | No
vNIC | Yes | Yes | No (3) | Yes (2) | vNIC failover (4) | Yes
SEA/vEth | Yes | No | No | Yes | SEA failover | Yes
Notes: 1. SR-IOV can optionally be combined with VIOS and virtual Ethernet to use higher-level virtualization functions like Live Partition Mobility (LPM); however, the client partition will not receive the performance or QoS benefit. 2. Some limitations apply; for SR-IOV and vNIC see link aggregation support. 3. Generally better performance, and requires fewer system resources, than SEA/virtual Ethernet. 4. Available 2H2016.
68 PowerVM Single Root I/O Virtualization: Fundamentals, Configuration, and Advanced Topics - Performance Monitor & Topology
69 Performance Monitor for SR-IOV. From the performance monitor screen, click Network Utilization Trend -> More Graphs -> SR-IOV adapters. The breakdown by physical ports shows how heavily utilized a physical port is and can be used to determine whether there is additional bandwidth available.
70 Performance Monitor for SR-IOV. The breakdown by partitions shows each logical port individually and which LPAR owns it; this can be used to determine which logical ports are using the physical port's bandwidth.
71 Physical and Logical Port Counters. Physical and logical port counters are available via the HMC GUI or CLI.
72 New HMC GUI: SR-IOV and vNIC Diagram
73 New HMC GUI: SR-IOV and vNIC Diagram
74 New HMC GUI: SR-IOV and vNIC Diagram
75 New HMC GUI: SR-IOV and vNIC Diagram
76 PowerVM Single Root I/O Virtualization: Fundamentals, Configuration, and Advanced Topics - Maintenance
77 Firmware Update. There are two pieces of SR-IOV firmware built into system firmware: the SR-IOV driver firmware (the driver code that configures the adapter and logical ports) and the adapter firmware (the firmware that runs on the adapter). Both levels are automatically updated to the levels included in the active system firmware in the following cases: system boot/reboot, an adapter transitioned into SR-IOV shared mode, or adapter-level concurrent maintenance. The firmware level description indicates whether a new version of adapter firmware is included.
78 Firmware Update. When system firmware is updated concurrently, the levels on currently configured SR-IOV adapters are not automatically updated, because updating the levels causes a temporary network outage on the logical ports of the affected adapter. Starting with system firmware level FW830, the SR-IOV firmware levels can be viewed and updated using the HMC GUI: on the HMC enhanced+ GUI, select the Server -> Actions -> SR-IOV Firmware Update.
79 Firmware Update. Each adapter in SR-IOV shared mode displays the active adapter driver and adapter firmware levels; if an update is available, the Update Available column indicates Yes. Select the set of adapters to update, then right-click to launch the context menu. Update Driver Firmware updates only the adapter driver code, not the adapter firmware; this results in a shorter network outage on the logical ports, but another update may still be available for the adapter firmware afterwards. Update Driver and Firmware updates both levels. Adapters are updated serially to ensure that both devices in a multipath setup are not affected at the same time, which means the total time to update can be significant if many adapters are selected.
80 Physical Adapter Replacement. SR-IOV adapters can be added, removed, and replaced without disrupting the system or shutting down the partitions. For adapter replacement, all the logical ports must first be de-configured. The HMC provides a GUI for adapter concurrent maintenance operations (Serviceability -> Hardware MES Tasks -> Exchange FRU). The new adapter must have the same capabilities (same type/feature code). When the new adapter is plugged into the same slot as the original adapter, the hypervisor automatically associates the old adapter's configuration with the new adapter. If the new adapter is plugged into a different slot, the chhwres command is needed to associate the original adapter configuration with the new adapter:
$ chhwres -m Server1 -r sriov --rsubtype adapter -o m -a \
  slot_id= b,target_slot_id=
81 Known SR-IOV / vNIC Issues. AIX NIB configuration issue: AIX APARs IV77944, IV80034, IV80127, IV82254, IV82479. IBM i SR-IOV logical port / vNIC VLAN restrictions issue: V7R1 - resolution is not required, as OS-generated VLAN tags are not supported in V7R1; V7R2 - for SR-IOV apply PTFs MF62338, MF62348, MF62349, and for vNIC PTF MF62676; V7R3 - for SR-IOV apply PTFs MF62340, MF62350, MF62351, and for vNIC PTF MF62703. PVID issue: fixed at a later FW level, and in FW830.xx (target availability 12/2016). Transmit hang issue: fixed at later FW levels.
82 Links and Additional Information. IBM Power Systems SR-IOV: Technical Overview and Introduction Redpaper. LinkedIn PowerVM group. SR-IOV FAQs blog and wiki (Introduction to SR-IOV FAQs; SR-IOV Frequently Asked Questions). vNIC FAQs blog and wiki (Introduction to vNIC FAQs; vNIC Frequently Asked Questions). Introducing New PowerVM Virtual Networking Technology. Knowledge Center links for firmware update on prior releases: 01.ibm.com/support/knowledgecenter/POWER7/p7hb1/p7hb1_updating_sriov_firmware.htm?cp=POWER7%2F Our contact info: Allyn Walsh awalsh@us.ibm.com or Chuck Graham csg@us.ibm.com
More information