FICON Advice and Best Practices Guide


DATA CENTER

FICON Advice and Best Practices Guide, Version 2.0

The FICON Advice and Best Practices Guide provides practical information and occasional tool commands to assist our partners and customers in deploying Brocade switching devices within FICON environments. This guide focuses primarily on pure FICON environments but occasionally references material that is applicable to Protocol Intermix Mode (PIM) environments. The guide was compiled from a broad spectrum of materials, with the help of a working group of subject matter experts from Brocade Headquarters, Brocade Support, and the Brocade field organization. This update to the previous FICON Advice and Best Practices Guide v1 responds to the availability of Fabric OS (FOS) v7.1 and FOS v7.2 and their many new and updated capabilities.

CONTENTS BY SECTION TITLE

Addressing
Auto-Negotiation
Brocade Gen5 8510 and 6510 Platforms Overview
Buffer Credit Recovery
Buffer Credits
Cabling
Channel Path Performance
Connectivity Blades
Creating Five-9s High Availability Fabrics
Diagnostic Port (D_Port)
Domain ID
ESCON
FCIP Enhancements Starting at FOS v7.x
FICON Hops
FICON Read and Write Tape Processing Using FCIP Emulation
FICON-Centric FOS Features Matrix
Fill Words, an 8 Gbps Consideration
Forward Error Correction
Frame Data Encoding, Transmitter Training and Retimers
IBM TS7700 Virtual Tape Solution
Integrated ISL Compression and Encryption
Inter-Chassis Links (ICLs)
Inter-Switch Links (ISLs)
ISLs and FCIP Links
ISLs with Concurrent DASD, Tape and/or FCP I/O Traffic
Miscellaneous Changes, Improvements and Considerations at FOS v7.x
Missing Interrupt Handler Primary Time Out Value (MIHPTO)
Node Port ID Virtualization (NPIV)
Notes about Transaction Processing Facility (TPF)
Operating Mode
Optics (SFPs)
Optionally Licensed Software (from Brocade FOS v7.1 Release Notes)
Port Decommissioning
Port Nicknames (Aliases)
Prohibit Dynamic Connectivity Mask (PDCM) and Port Blocking
Protocol Intermix Mode (PIM)
Resource Management Facility (RMF), Systems Automation (SA), and Control Unit Port (CUP)
Switch ID and Switch Address
Switch ID
Switch Software
Switching Device Error Detect Time Out Values (E_D_TOV)
Switching Device Resource Allocation Time Out Values (R_A_TOV)
Switching Device Time Synchronization
Switching Devices
Teradata and FCIP Extension
Trunking
Two-Byte Link Addressing
Using Local Switching
Vendor Switching Device Cross-Reference List
Virtual Channels
Virtual Fabrics
Zoning Names
Zoning

CONTENTS BY PAGE NUMBER

Brocade Gen5 8510 and 6510 Platforms Overview
Cabling
Auto-Negotiation
Optics (SFPs)
Addressing
Connectivity Blades
Operating Mode
Switch Software
Protocol Intermix Mode (PIM)
Missing Interrupt Handler Primary Time Out Value (MIHPTO)
Switching Device Resource Allocation Time Out Values (R_A_TOV)
Switching Device Error Detect Time Out Values (E_D_TOV)
Resource Management Facility (RMF), Systems Automation (SA), and Control Unit Port (CUP)
Switching Devices
Creating Five-9s High Availability Fabrics
Domain ID
Switch ID
Switch ID and Switch Address
Port Nicknames (Aliases)
Zoning
Zoning Names
Using Local Switching
Prohibit Dynamic Connectivity Mask (PDCM) and Port Blocking
Switching Device Time Synchronization
Fill Words, an 8 Gbps Consideration
Inter-Chassis Links (ICLs)
Inter-Switch Links (ISLs)
ISLs and FCIP Links
ISLs with Concurrent Disk, Tape and/or FCP I/O Traffic
Diagnostic Port (D_Port)
Port Decommissioning
FCIP Enhancements Starting at Brocade FOS v7.x
FICON Read and Write Tape Processing Using FCIP Emulation
Two-Byte Link Addressing
Trunking
FICON Hops
Frame Data Encoding, Transmitter Training and Retimers
Forward Error Correction
Buffer Credits
Integrated ISL Compression and Encryption
Buffer Credit Recovery
IBM TS7700 Virtual Tape Solution
Channel Path Performance
Node Port ID Virtualization (NPIV)
Virtual Fabrics
Virtual Channels
ESCON
Teradata and FCIP Extension
Notes about Transaction Processing Facility (TPF)
Miscellaneous Improvements at FOS v7.x
Optionally Licensed Software (from Brocade FOS v7.x Release Notes)
Vendor Switching Device Cross-Reference List
FICON-Centric FOS Features Matrix

FIGURES

Figure 1: Multimode cable distances
Figure 2: OM3 and OM4 cabling identification
Figure 3: Slotshow CLI command example
Figure 4: Techniques for installing blades into Directors
Figure 5: Chassisshow CLI command example
Figure 6: Path Group Redundancy
Figure 7: Tape Fan In Fan Out
Figure 8: DASD Fan In Fan Out
Figure 9: Configuring Insistent Domain ID
Figure 10: Switch= parameter of the CHPID macro in IOCP
Figure 11: Setting Domain ID on McDATA 6140 using EFCM or DCFM
Figure 12: The Allow/Prohibit Matrix and PDCMs
Figure 13: Gen4 Inter-Chassis Link (ICL) connections
Figure 14: Gen5 Inter-Chassis Link (ICL) connections
Figure 15: Portdporttest CLI command example
Figure 16: Redundant tape path configuration for Tape Pipelining
Figure 17: Porttrunkarea CLI command example
Figure 18: Framelog CLI command example
Figure 19: Porterrshow CLI command example
Figure 20: PID is used for Virtual Channel selection
Figure 21: 32-port blade and Virtual Channel selection
Figure 22: 48-port blade and Virtual Channel selection
Figure 23: Cascaded FICON fabric using Virtual Channels
Figure 24: Typical deployment of CHPID and Storage ports on a chassis
Figure 25: Configuring CHPIDs and Storage to make the best use of Virtual Channels
Figure 26: Configuring CHPIDs and Storage to make use of Virtual Channels and Logical Switching
Figure 27: Vendor cross-reference table for Brocade switching devices
Figure 28: FICON-centric FOS feature matrix

BROCADE GEN5 8510 AND 6510 PLATFORMS OVERVIEW

New naming conventions have been created and are utilized beginning with the release of FOS 7.1: Gen1 means 1 Gbps switching devices, such as the early SilkWorm switches. Gen2 means 2 Gbps switching devices, also SilkWorm models. Gen3 means 4 Gbps switching devices, such as the Director and the 5000 switch. Gen4 means 8 Gbps switching devices, such as the DCX and DCX-4S Directors and the 5100 switch. Gen5 means 16 Gbps switching devices, such as the 8510 Director family and the 6510 switch.

The Brocade Gen5 8510 Directors and the Brocade Gen5 6510 switch use the new 16 Gbps Condor3 ASIC. The Brocade 6510 Gen5 switch became supported for FICON at Brocade FOS 7.0.0d. Up to four logical switches can be provisioned on the device at this release level. Control Unit Port (CUP) was not certified or supported at that release; CUP became supported on the Brocade 6510 Gen5 switch for FICON at Brocade FOS 7.1.

A Gen5 8510 chassis does not support any 16-port blades. There are no new 16 Gbps 16-port blades, and customers cannot use 8 Gbps 16-port blades in a 16 Gbps Brocade DCX 8510 chassis. The Gen5 Directors and Gen5 switches can use 8, 10, or 16 Gbps FC Brocade-branded optics. There is no support for 1 Gbps devices on a Gen5 8510 chassis or on a 6510 switch. 4 Gbps SFPs are not supported on the Gen5 8510 chassis or on 6510 switches. 8 Gbps SFPs are supported on the Gen5 8510 chassis and on 6510 switches. 16 Gbps ports, with the proper SFPs, are designed to auto-negotiate back to 4 Gbps and 8 Gbps.

New Brocade-branded 16 Gbps FC SFPs: 16GFC SWL SFP+ and 16GFC LWL 10 km SFP+. A new 16 Gbps, ELWL, 25 km, Brocade-branded optic can be used for FICON starting at FOS 7.2. New Brocade-branded 10 Gbps FC SFPs: 10GFC SWL SFP+ and 10GFC LWL 10 km SFP+. At FOS 7.0 there is a Brocade-branded 64 Gbps FC QSFP (the ICL universal cable supports lengths of 50 meters [164 ft] or less, increasing to 100 meters at a later FOS release).

16 Gbps ports can support 10 Gbps data rates. The 10 Gbps feature requires a 10 Gbps FC slot-based license for each blade in a Brocade Gen5 8510 Director that has 10 Gbps FC ports, and a 10 Gbps FC license on Brocade Gen5 6510 switches. Users must also deploy 10 Gbps SFP+ optics on those ports in order to provision 10 Gbps connectivity.

4 Gbps Brocade FR4-18i FCIP blades cannot be hosted on a Brocade DCX/DCX-4S chassis that is running Brocade FOS 7.0 or higher; 8 Gbps Brocade FX8-24 blades continue to be supported there (see the Connectivity Blades section).

Buffer Credits (BCs) have been doubled to 8K BCs per Condor3 ASIC.

An 8-slot DCX 8510 chassis fully loaded with 16 Gbps port blades (384 ports total) should be supplied with four power supplies connected to VAC lines, even if it is using 8G rather than 16G optics. Customers often purchase only two power supplies (PS), but if a PS fails, a single PS cannot power all of the 48-port blades that could potentially populate a fully loaded chassis.

CABLING

Back when Fibre Channel was fresh and new and ran at 1 Gbps, the common multi-mode fiber cable in use had a glass core that was 62.5 microns in diameter. This became known as OM1 type fiber cable. The industry rapidly switched to 50 micron cores because users could get a reliable signal across a longer distance, say 500 meters maximum rather than 300 meters. The 50 micron cable became known as OM2 type cable. Since then, Fibre Channel speeds have moved from 1 Gbps to 2 Gbps to 4 Gbps to 8 Gbps to 16 Gbps. This is exciting stuff, but with every increase in speed there is a decrease in maximum distance. This means that something else needs to change...
and that something is the quality of the cables, or more specifically, the modal bandwidth (the signaling rate per distance unit). With the evolution of 10 Gbps Ethernet, the industry produced a new standard of fiber cable which the fibre channel world can happily use. It is called laser optimized cable, or more correctly: OM3. Since then, OM3 has been joined by an even higher standard known as OM4. Let us look at the distances we can achieve with different cable types. Users can see in the table below that the modal bandwidth improves as we move to higher quality glass. Users can also see that single mode fiber (with the 9 micron core) has not suffered the same issue with decreasing maximum distances as speeds have increased: FICON Advice and Best Practices 5 of 77

Figure 1

Cable colors are not standardized, although orange and aqua for multi-mode (MM) and yellow for single-mode (SM) are de facto standards. So how can a user tell what sort of multi-mode cable they have? They will need to read the printing on the cable. Cables that are 50 micron (50μ) and orange are almost certainly OM2; they will have 50/125 Optical Cable printed on them. Cables that are 50μ and aqua (blue) in color are either OM3 or OM4. In the example below it is easy to tell which cable is OM3.

Figure 2

OM4 fiber has been on the market since 2005, first sold as premium OM3 or OM3+ fiber. The OM4 designation standardizes the nomenclature across all manufacturers so that a user has a better idea of the product that they are buying. OM4 is completely backwards compatible with OM3 fiber and shares the same distinctive aqua-colored jacket.

For user requirements at 8 Gigabit FC and beyond: use OM3/OM4 MM cables and/or OS1 SM cables. OM1 and OM2 multimode cables have very limited distance when used at 8 Gbps and beyond. Most mainframe shops have converted to using all long wave SFP+ optics combined with OS1 single-mode cables for connectivity. Long wave and short wave FICON channel cards are the same cost. Long wave is more expensive to deploy on switch and storage ports. Long wave provides the most flexibility and the most investment protection.

The Fibre Optic Industry Association (FIA) has published information detailing that OS1 single-mode cabling was defined by a very old specification in 1995 and delivered in 2002 for transmission at 1310 nm and 1550 nm. It was designed, as was multi-mode, to be an indoor cable used for in-premises purposes. In comparison, OS2, introduced in 2006, requires the optical fibre to be compliant with updated specifications to support transmission at 1310 nm, 1550 nm (the same as OS1) and 1383 nm. The purpose of OS2 was to be different from OS1. It was born within the industrial premises standards to support 5 km and 10 km channels, which by definition call for outdoor cable suitable for duct installation and direct burial. More importantly, the low attenuation values of OS2 are only realistic in loose-tube cables in which the original optical fibre performance is almost unaltered by the cabling process. The FIA suggests that there is a slight problem of guaranteed interoperability between OS1 and OS2. An OS1 cable is not simply an indoor version of an OS2 cable. The performance of OS1 cables is not directly compatible with OS2 cables. As indoor cables tend to be of a buffered, tight construction, the low attenuation coefficients of an OS2 are unlikely to be maintained if connected across the same link as OS1.

It might be in a user's best interest to try to avoid cabling paths (i.e. patch panels) where a link would traverse both OS1 and OS2 cabling, in order to avoid any interoperability issues the tolerance mismatches in these cables might create.

It is a truly bad practice to have fibre cables hanging in the breeze in a data center with no dust covers, their precious glass connectors exposed to the world. Maybe even worse are a user's fiber patch panels and cables hanging around without dust covers. When new equipment arrives, every CHPID, SFP, switch, patch panel port and fiber optic cable will have a dust cover protecting the optic. So what to do with these little guys once a user removes them? Keep them! When a user unplugs a cable later they need to immediately re-install those precious covers, both onto the cable and into the CHPID, switch, patch-panel or storage port, to protect the fibre optics from contamination. Consider storing these easy-to-lose dust covers in sealed plastic bags, preferably kept in the relevant rack so they are close to hand.

AUTO-NEGOTIATION

Some device optics support more than one speed. A device optic that supports multiple speeds and/or duplexes needs a mechanism to decide what speed and duplex to link at, and auto-negotiation is that mechanism. Auto-negotiation occurs as ports log into a fabric and allows all ports (CHPID, switch, storage) to auto-negotiate to the highest common supported data rate, avoiding the need to set these data rates manually. Optics manufacturers provide for only three data rates per SFP: the highest and the two previous data rates. At 8 Gbps, optics cannot auto-negotiate (and cannot be set to manually transfer at) 1 Gbps. 1 Gbps storage requires the use of 4 Gbps optics (Small Form-Factor Pluggable, or SFP), and 4 Gbps SFPs can be inserted into 8 Gbps ports but not into 16 Gbps ports. At 16 Gbps, optics cannot auto-negotiate (and cannot be set to manually transfer at) 2 Gbps. 2 Gbps storage requires the use of 8 Gbps optics, and 8 Gbps SFPs can be inserted into 16 Gbps ports. It is not possible to attach 1 Gbps storage to Gen5 Brocade 16 Gbps switching devices.

OPTICS (SFPS)

The Small Form-factor Pluggable (SFP) specification provides capabilities for data rates up to 4.25 Gbps (4 Gbps), while the updated SFP+ specification supports data rates up to 16 Gbps today. Multi-mode cables, attached to short wave length SFP+ optics, are designed to carry light waves for only short distances. Single-mode cables, attached to long wave length SFP optics, are designed to carry light waves for much longer distances. There are three things that control a user's ability to reliably send data over distances longer than 984 feet (300 meters): The intensity (brightness) of the transmitter (e.g. lasers are better at long distance compared to LEDs). Comparing a 10 km longwave SFP versus a 25 km longwave SFP, the main difference between them is that the latter uses a more intense (brighter) transmit signal than the former. The transmit value is measured in dBm. There is a concept called a link budget, arrived at by adding up the losses from the number of joins across the end-to-end link and the total length of the link, to determine whether the Tx (transmit) value will fall below the minimum Rx (receive) value of the SFP at the receiving side. If it does, the user will have an optical quality issue that they need to solve before using the link.
On the other hand, if an SFP is too bright the user will need a device called an optical attenuator, or they will need to add meters of additional cable length attached to the SFP, in order to dim the light and make it useable. The wavelength of the laser. Traditional longwave is 1310 nm (that value effectively being the 'color' of the light). For very long haul, devices like CWDM and DWDM use SFPs in the nm range. FICON Advice and Best Practices 7 of 77

8 SFPs at each end of a link need to use the same wavelength or they will not be able to communicate with each other. The number of buffer to buffer credits (BCs) available to the link, especially if this is a cross site Inter Switch Link (ISL). This is not a big issue for Director-class switches, but can be a major problem if the user has small/midrange switches. If the user has Brocade switches, and they do not have the Extended Fabrics license on each switch, they probably will not have enough BCs to keep the link 100% utilized if their cross site link is more than 10km long. The bad news is that this license is not free. The good news is that Brocade can provide an evaluation license so users can test to see if purchasing that license will really help them bridge that distance. Follow this link to read a great article about FICON and buffer credits: BB Credits and FICON Consistent with the IBM recommendation, the best practice, in a mainframe environment, is to use single-mode fiber and long wave optics for FICON I/O connectivity. Only Brocade branded optics (or SmartOptics) can be used in Brocade Gen4 or Gen5 switching devices. 4 Gbps Brocade branded optics can be used in Gen4 8 Gbps ports; however, 8 Gbps optics cannot be used in Gen3 4 Gbps ports. 8 Gbps Brocade branded optics can be used in Gen5 16 Gbps ports; however, 16 Gbps optics cannot be used in Gen4 8 Gbps ports. Order 9 u cables (OS1) for single-mode, long wave ports. Order 50u 2000 mhz cables (OM3) or 50u 4700 mhz cables (OM4) for multimode, short wave ports. Try to not mix SM and MM optics on the same line card. This is in case users have to swap ports and they do not think to check to see if the optics and cables are compatible. SmartOptics are available for FCP connections but have not been certified for use with FICON links. It is a best practice to avoid the use of 10 Gbps blades on the Gen3, 4 Gbps Director and Gen4, 8 Gbps DCX Directors if possible; these links can only be used for Inter-Switch Links (ISLs). Gen5 16 Gbps 8510 Directors do not support the FC10-8, 10 Gbps blades. Gen5 Director blade ports can be enabled to use 10 Gbps SFP+ optics. The first eight ports on the FC16-32 or FC16-48 blades can be enabled to accept 10 Gbps, SFP+ optics. That would provide a 10 Gbps, single speed transfer rate for use with metro-based DWDM or as native ISL links. Note: Brocade 10 Gbps FC ports will not interoperate with Brocades 10 Gbps GbE ports. 10 Gbps speed for FC16-xx blades requires that the 10G license be enabled on each slot using 10 Gbps SFPs. 10 Gbps speed on FC10-6 blades in Gen3 and Gen4 Directors do not require any license in order to be used. 10 GbE ports on any FX8-24 blade housed in a Gen5 chassis also requires a 10G license to be enabled on the slot housing the FX8-24 blade in order to be used. 10 GbE ports on any FX8-24 blade housed in a Gen4 chassis do not require a 10G license in order to be enabled and used. The 10 Gigabit FCIP/Fibre Channel license (10G license) is slot-based when applied to a Gen5 Brocade Director. It is chassis-based when applied to a Brocade Gen switch. For each slot on a Gen that houses a 10 Gbps SFP+ optic (FC or FCIP), a license must be obtained and enabled. Gen5, 16 Gbps Brocade 8510/6510 ports used at 10 GbFC can connect to other Gen5, 16 Gbps Brocade 8510/6510 ports used at 10 GbFC but cannot connect to any other type of McDATA or Brocade 10 Gbps port. Brocade Fabric Watch has been enhanced to monitor the Gen5 16 Gbps SFP+, 10 Gbps SFP+, and 4 x 16 Gbps QSFP. 
It also allows thresholds to be configured based on the type of SFP. In comparison to long-distance fiber-optic links between Brocade Condor3 based switches, which can run natively at 16 Gbps, the ability to run ports at 10 Gbps might not seem like a benefit. However, in the event that the physical link between the FC fabrics is provided through alternate service providers, this capability can allow fabric architects the required flexibility in designing a metro-area fabric architecture by providing compatibility with other wide-area network technology. Most enterprises discover that their bandwidth costs can be substantially reduced through their DWDM networks when they utilize one or a few 10 Gbps FC links rather than numerous 2 Gbps or 4 Gbps FC links. Most xwdm technology does not currently support 16 Gbps rates. Rather than having to throttle down to either 8 Gbps or 4 Gbps line rates, and waste additional lambda circuits to support required bandwidth, the new Brocade Gen5, Condor3 switches can drive a given lambda circuit at a 10 Gbps line rate, optimizing the link. Brocade has successfully tested this configuration with DWDM solutions: FICON Advice and Best Practices 8 of 77

9 Adva, in the form of the Adva Optical FSP 3000 Ciena, in the form of the Ciena ActivSpan 4200 Brocade will continue to test additional DWDM solutions in the future, in order to ensure compatibility with a wide variety of DWDM technology providers. IBM typically uses GDPS as its guideline for qualifying DWDM, probably because it provides very strict deployment rules. Depending upon vendor, features and speed, from 100km to 300km of distance can be supported for DWDM. A user should check with each storage vendor that is attached to a DWDM device in order to stay within their support guidelines. For example: For SRDF/S the EMC stated support for DWDM is 200km. It has been said that EMC will supported up to 500km, under specific conditions, using DWDM devices. If users need more than 200km synchronous I/O but less than 300km (per the IBM limit), users would use the EMC RPQ process. Each and every storage vendor has qualifications just like EMC states above. Check with the storage vendors so that the enterprise can deploy DWDM correctly and successfully. If an optic begins to fail or flap, customers can use the Port Auto Disable capabilities to cause these intermittently failing ports to become disabled when they encounter an event that would cause them to reinitialize. This negates their ability to cause a performance and/or availability issue since error recovery does not have to try to recover the port. The Port Auto Disable feature is enabled through use of the portcfgautodisable CLI command. If an optic begins to fail or flap, then it is a best practice to disable (vary offline) that port prior to switching out the SFP and replacing it. Users do not want any of the traffic to be dropped in the wire. Although it might be small, there is almost always some data being transferred across a fiber cable. By disabling the port, users will allow for a soft stop of traffic, rather than an unexpected (and potentially disruptive) stop. A note of warning: Over time, a tiny dust ring builds up around the light source at each end of the Fibre Channel (FC) cables. Undisturbed, these cables work fine as the light passes through the clear area in the center of the dust ring. But if disturbed, this fragile dust ring dissipates into the optical carrying area of the fiber, and these dust particles often start causing intermittent and/or hard cable (bit) errors. When upgrading FICON Channel cards, switching ports or storage ports, and/or upgrading a user s FC cabling, all of the fiber optic cable ends (including patch panel connections) must be cleaned by hand and reseated into their receiving ports. In any migration plans, be sure to allot time for cable cleaning, as there are usually hundreds or thousands of cables to be cleaned. Testing Optics on the Gen4 and Gen5-families of switching devices: Customers working with old McDATA Directors and switches often used what was called a Wrap Test which allowed them to check the physical port and the SFP on that port. The Gen5 switching devices can often utilize the Diagnostic Port (D_Port) Capabilities new at FOS 7. The Gen4 and Gen5 switching devices can utilize the portloopbacktest and spinfab CLI commands: The portloopbacktest is the closest to the McDATA 6140 wrap test. The spinfab test is for testing ISLs. The Gen5 switching devices can also utilize the Diagnostic Port (D_Port) Capabilities for ISL links. 
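To make the port-maintenance workflow above concrete, the sequence below is a rough sketch of how a flapping optic might be fenced off and how a port or ISL can be exercised from the FOS CLI. The port number 1/5 is only an example, and command options differ between FOS releases and platforms, so verify the exact syntax in the Fabric OS Command Reference before using it.

    portcfgautodisable --enable 1/5    (arm Port Auto Disable so a port that keeps reinitializing is fenced automatically)
    portdisable 1/5                    (vary the port out of service for a soft stop of traffic before swapping its SFP)
    portenable 1/5                     (return the port to service after the SFP is replaced and the cable ends are cleaned)
    portloopbacktest                   (offline diagnostic, the closest equivalent to the old McDATA wrap test)
    portcfgdport --enable 1/5          (configure a 16 Gbps ISL port as a Diagnostic Port; both ends of the link must be D_Ports)
    portdporttest --start 1/5          (run the D_Port loopback and link measurement tests on that port)
    spinfab                            (exercise ISL links between switches)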
ADDRESSING When the FICON Management Server (FMS) license is installed and the Control Unit Port (CUP) is enabled, the CUP utilizes an internal embedded port 0xFE for in-band communications: Physical ports 0xFE and 0xFF become unavailable to be used for FICON connectivity. Physical port 0xFF can still be used when users need a vacant port to do a Port Swap activity. Consider using Brocade Virtual Fabrics if there is a need to utilize all of the ports for connectivity. Brocade Virtual Fabrics allow users to create a Logical Switch that has only ports x 00-FD (0 254) and another Logical Switch that contains the remainder of the ports. Since the VFs do not have any physical port FE or FF defined, all ports can now be used. If a port gets assigned a port address of 0xFE in an Open System logical switch, an RSCN will be sent by FOS on behalf of port address 0xFE during CP failover or firmware download. This may result in an IFCC for any FCP channel running traffic to port address 0xFE. FICON Advice and Best Practices 9 of 77

Avoid having any physical or logical switch with a physical port 0xFE available.

When creating a CNTLUNIT entry for CUP in the I/O Configuration Program (IOCP), all FICON switches, regardless of model or manufacturer, use a device type of 2032. That is how IOCP knows that a user is identifying a switching device that has the FMS license installed on it, so that Systems Automation and the Resource Management Facility (RMF) can monitor and manage this device. For example:

CNTLUNIT CUNUMBR=00C0,PATH=((CSS(0),AA,AB)),UNIT=2032,UNITADD=((00)), LINK=((CSS(0),05FE,05FE))

Addressing Modes: 10-bit addressing mode is not supported in a FICON environment. FOS 7.0 does not allow addressing mode 1; if it is set, Control Unit Port (CUP) will not function.

Considerations about FICON and blade support on logical switches in DCX family systems: On default logical switches (and non-Virtual-Fabrics switches), FICON is not supported if address mode 1 (dynamic address mode) is enabled. Address mode 1 is not supported if FICON CUP is enabled on the default logical switch. On default logical switches with an address mode other than mode 1, any 48-port and 64-port blades are disabled.

CONNECTIVITY BLADES

In addition to 16 Gbps speed, the Condor3 ASIC, used in the Gen5 8510 bladed Directors and the 6510 switch, includes more bandwidth (768 Gbps), faster I/O performance (420 million frames switched per second), more functionality (including D_Port, in-flight encryption and compression, and Forward Error Correction (FEC)), and higher energy efficiency (less than 1 watt/Gbps). The relatively old FC10-6 blades (six 10 Gbps ports) are not supported on Gen5 Directors, and FC10-6 ports deployed on Gen3 and Gen4 Directors cannot be connected to a 10 Gbps SFP+ on a Gen5 Director or a Gen5 switch.

FC16-32 (32-port) and FC16-48 (48-port) blades may be used, and even intermixed, in any Brocade Gen5 8510 chassis. 48-port blades are supported for FICON but with restrictions: 48-port blades can only be utilized for FICON in an all B-Series fabric, only on a Brocade DCX, DCX-4S, or 8510 Backbone chassis. 48-port blades require Interop Mode 0 (IM=0), which is the only mode supported at FOS 7 and higher. Eight slots filled with 48-port blades would put 384 ports into a single Domain ID. To power all of these blades in the event of a single power supply (PS) failure requires that 4 PSs be installed in the chassis (n + 1 characteristics for high availability). IBM z/OS only allows up to 256 ports to be within a single Domain ID. This creates a requirement for the use of zero-based addressing and Brocade Virtual Fabrics on a Brocade DCX. Therefore, on an 8-slot Brocade DCX or 8510 family device, zero-based addressing requires the use of Brocade Virtual Fabrics (VF), thereby making VF a requirement. On a 4-slot Brocade DCX-4S or 8510 family device, the chassis has only four slots; filled with 48-port blades, it would put 192 ports into a single Domain ID. Since z/OS allows up to 256 ports to be within a single Domain ID, the Brocade DCX-4S and the 4-slot 8510 do not have to be run using Brocade Virtual Fabrics (although VF can still be deployed if desired). Brocade FOS does not allow FMS to be enabled (CUP support) if there is a 48-port card in the chassis that does not conform to the supported configuration rules.

At Brocade FOS 7.0.0c and higher, 64-port blades can be used on the 8 Gbps Gen4 Brocade DCX and DCX-4S chassis that are supporting FICON connectivity. However, the 64-port blade can only be used for FCP traffic and not FICON traffic.
The 64-port blade has to be in a logical switch that is not used for FICON, but can be in the same chassis where other logical switches are used for FICON. At Brocade FOS 7.0.0c and higher, 64-port blades cannot be used on any 16 Gbps Gen5 Brocade DCX or DCX chassis that is also running FICON traffic. 8 Gbps Brocade FX8-24 FCIP Extension Blades can be deployed in a Gen4 Brocade DCX family chassis or Gen5 Brocade 8510 family chassis and used for FCIP long-distance connectivity. The old FR4-18i FCIP blade (4 Gbps) is not supported in the Gen5 Brocade 8510 family chassis. FICON Advice and Best Practices 10 of 77

The easiest way to tell which blades are installed in a Brocade DCX is to issue the FOS command slotshow -m.

Figure 3

IBM is not qualifying the FC8-32E or FC8-48E blades for use with FICON fabrics. Mainframe customers generally do not deploy reduced-function equipment, and these enhanced 8G blades for the Brocade DCX 8510 16 Gbps chassis fit that description. The Brocade FC8-32E and FC8-48E are enhanced 8 Gbps blades that deliver enhanced fabric resiliency and application uptime through advanced features enabled by the Condor3 ASIC, including increased buffers and no oversubscription for traffic across the backplane. But these blades do not support native 10 Gbps Fibre Channel, in-flight encryption and compression, or diagnostic ports.

If there are not enough blades to fill up an 8-slot Gen4 or Gen5 chassis (one that will be utilized without Virtual Fabrics), do not put a blade in the right-most slot (slot 12) when using 32-port and/or 48-port blades. If CUP is going to be used to monitor and manage the FICON switching devices, users will lose two ports on the right-most blade (0xFE and 0xFF) of the 8-slot DCX chassis. Since there are other empty slots in the chassis, this would be a needless waste.

When there are fewer blades than slots in a chassis, several methodologies are used to deploy the blades within a chassis (see Figure 4): Populating the blades from left to right (slots 1 and 2 first) is most common. Others like to populate blades within the chassis as if it were divided in two, either in a mirrored fashion (such as slots 1 and 12, which could be bad as discussed above) or in a symmetric way (such as slots 1, 2 and 9, 10). Cable management usually becomes a factor when deciding how to distribute the blades in a chassis.

12 Figure 4 If virtual fabrics are in use, it is possible to change the port addressing so that physical ports 0xFE and 0xFF never exist at all. See the Virtual Fabrics section for more information about the VF capability. Gen5 16 Gbps port blades (FC16-32, FC16-48, CR16-8, and CR16-4) support real-time measurement of power being consumed by those blades: The real-time power consumption of 16 Gbps blades (along with some other interesting variables) can be displayed using the chassisshow CLI command. Figure 5 Some practical considerations about deploying 48-port blades for FICON connectivity: In a mainframe environment, if a blade has a physical problem (bad physical port, or ASIC, etc.) then that blade will have to be removed from the Director, hopefully non-disruptively. Just one more good reason to have at least two FICON fabrics so that when one device or blade or port is unavailable, IOS can utilize a device or blade or port across a different fabric to complete its I/O requests non-disruptively. In order to remove a physical blade the user will have to vary all of the ports on the failing blade offline on every LPAR, on every CEC in their system. Those vary offline commands could range from ~300 to over 63,000 (to remove just one failing 48 port blade) depending upon how large their mainframe environment really is. Then once a user replaces the failing blade they will have to vary all of the ports on the new blade online on every LPAR, on every CEC in their system. Those vary online commands could also range from ~300 to over 63,000. Easy to imagine that for larger shops it could take many hours of activity to replace a blade non-disruptively. Luckily there are z/os commands (e.g. Route) to make this a little easier to accomplish but someone must verify that all of the vary offline commands executed correctly and then that all of the vary online commands also executed correctly. FICON Advice and Best Practices 12 of 77

The point is that operationally it becomes very labor intensive to replace even a single large-port-count blade when a rare blade failure does occur in a mainframe environment. This is a mainframe-only problem because the IOCP ties each switch port to a specific CHPID or storage port, and these connections remain static unless a port swap is accomplished or the IOCP is changed. FCP does not have this problem, as it uses the switch-centric name server(s) rather than IOCP. The use of Virtual Fabrics just exacerbates the problem. See the Virtual Fabrics section for more detail. It should be noted that System z Discovery and Auto-Configuration (zDAC) is capable of discovering cabling changes and then updating system information so that CHPID and/or storage connectivity is maintained and I/O operations can resume. This can possibly save many hours of manual labor. For mainframe FICON, because of the enormous operational difficulty of replacing a failing or failed blade in a virtualized or non-virtualized environment, most enterprises deploy 32-port blades to minimize their operational exposure while also minimizing their total cost of acquisition.

OPERATING MODE

Interoperability modes have been used by vendors for years to let switching devices know what kind of fabric environment they are expected to participate in. At FOS 7 and higher, only IM=0 is available; users can no longer connect IM=0 devices to McDATA or Cisco fabrics. At Brocade FOS 7.0 and higher, IM=2 is eliminated as an interop mode option. At FOS 7 and higher, for Brocade Gen4 and Gen5 fabric environments, the chassis is automatically configured to IM=0. If a user is currently using Interop mode 2 or Interop mode 3 on the Gen4 Brocade DCX/DCX-4S, then a best practice is to change the Interop mode to IM=0 before, or at the time that, the upgrade to Brocade FOS 7.0 or higher is scheduled. Changing interop mode requires the switch to be offline, so that action is disruptive. A word of caution: if an enterprise is not using Integrated Routing (IR) (and FICON cannot use IR), when the user changes the Interop mode on a Brocade FOS device, the zoning database is erased. Ensure the shop has good backups, because the user will need them to restore any zoning that they have created.

SWITCH SOFTWARE

Many of the preventable issues that occur in a FICON and/or SAN fabric can be avoided by using the right management and monitoring software suite. Brocade Network Advisor (e.g. IBM Network Advisor, EMC Connectrix Manager Converged Network Edition (CMCNE)) is the successor product to Data Center Fabric Manager (DCFM). Brocade Network Advisor actually has its heritage in a great product called EFCM (Enterprise Fabric Connectivity Manager), which Brocade picked up when they bought McDATA. When Brocade purchased McDATA they combined EFCM with their own Fabric Manager to create DCFM. That has since been combined with other network switch management software to create Network Advisor, bringing things to a whole new level. Brocade Network Advisor provides the industry's first unified network management solution for data, storage, and converged networks. It supports Fibre Channel Storage Area Networks (SANs), Fibre Channel over Ethernet (FCoE) networks, Layer 2/3 IP switching and routing networks, wireless networks, application delivery networks, and Multiprotocol Label Switching (MPLS) networks. Now the first thing a user may be wondering is: OK, so this software sounds great, but how much will it cost?
The good news is that trying it out will not cost a user anything. It is free to download and trial for 75 days. Users can find more information here: To demo it, a user can spin up a Windows 2008 guest from a template in their favorite Hypervisor. This means that the user does not even need to request additional hardware in order to do this trial. So what benefits should a user expect to see? Well, a user might be able to prevent issues like these: Mistakes made when performing zoning updates Failure to create regular configuration backups (which especially hurts after a switch failure) FICON Advice and Best Practices 13 of 77

14 Difficulties upgrading firmware or simply too many upgrades to get through Poor (or no) switch and performance monitoring Poor (or no) error notification (including notification back to IBM and other partners) Difficulty collecting log data Lack of report creation software It is a best practice to migrate away from the old Brocade Enterprise Fabric Connectivity Manager (EFCM) and Brocade Data Center Fabric Manager (DCFM) management software and into the newer and more robust Brocade Network Advisor product. Users are strongly cautioned against trying to discover switching devices with both DCFM and BNA as it can cause a number of serious issues. The Brocade 8-slot Gen4 and Gen5 DCX and 8510 Directors typically come packaged with an Enterprise Bundle : Extended Fabric, Fabric Watch, Trunking, Advanced Performance Monitoring, and Adaptive Networking May vary by OEM The Enterprise Bundle is optional for the 4-slot Gen4 and Gen5 Brocade DCX-4S and 8510 Directors (again, it varies by OEM). Individual licenses can be ordered for those devices. The 4-slot Brocade Directors can ship with or without the Enterprise Bundle. Inter-Chassis Link (ICL), FICON Management Server (FMS), and Integrated Routing (not used with FICON) licenses are different for the 4-slot Brocade Directors than for the 8-slot Brocade Directors. It is a best practice that the Brocade FOS firmware on all of the switching devices in a fabric should be at the same firmware level. Director-class switching devices always provide for non-disruptive firmware upgrades and can host a FICON infrastructure with five-9s of availability. But motherboard-based fixed-port switching devices can create fabric disruptions during firmware upgrades, so at best they can host a FICON infrastructure with four-9s of availability. There is a blind period that occurs during a firmware upgrade. It happens when the CP is rebooting the OS and re-initializing Brocade FOS. On director-class products (those with dual CP systems), this blind period lasts a short time from 2 to 4 seconds. The blind period lasts only as long as it takes for mastership to change (hence, a few seconds) and does not cause fabric disruption. Single CP systems (such as the Brocade 5100, 5300 and 6510 fixed-port switches) have a blind period that can last 1 to 2 minutes, which can potentially cause fabric disruptions. These switches allow data traffic to continue flowing during the firmware upgrade CP blind period, unless a response from the CP is required. In the mainframe world, this can occur more frequently, since FICON implementations utilize Extended Link Services (ELSs) for path and device validation. In addition, FICON uses class-2 for delivery of certain sequences, which increases the chances that the CP needs to become involved. Even when redundant fabrics have been deployed on motherboard-based switching fabrics, if a failover occurs to the redundant fabric(s), users might have problems arise. These fabrics are not as tough and resilient as Director-based fabrics and must be monitored much more closely to be sure that when a failing fabric failovers to the remaining fabric(s), the workload can be fully absorbed by the remaining elements. When deploying motherboard-based switch fabrics it is a strong recommendation that a minimum of three (3) of these fabrics be utilized. If one fails then there are at least two more fabrics to pick up the additional workload. 
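Before any firmware change it is also prudent to capture the switch configuration and support data so that a problem during the upgrade can be recovered from. The following FOS CLI sketch illustrates that preparation; prompts and transfer options (FTP/SCP server, path, credentials) vary by FOS release, so treat it as illustrative rather than exact syntax.

    firmwareshow      (confirm the FOS level currently running on both CPs)
    configupload      (save the switch configuration to an external host before the change)
    supportsave       (optionally collect RASlog and diagnostic data as a baseline)
    firmwaredownload  (load the new FOS level; on dual-CP Directors this is a non-disruptive download)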
And if CUP is active on any FICON switching device (it should be taken offline before upgrading firmware), then customers may experience recoverable Interface Control Checks (IFCCs) when CUP requests cannot be serviced in a timely fashion. During firmware upgrades it is necessary to consider the effect of that upgrade on the FICON CUP capability: In the past it has been a best practice to vary the CUP port (0xFE) on a switching device offline, from every LPAR on every CEC, before starting a switch firmware download and install procedure. z/os provided some help with this in the form of the Route command. Varying the CUP offline kept it from becoming boxed and then issuing an IFCC while new firmware was being implemented. After the firmware was successfully updated, then the CUP had to be varied back online to every LPAR on every CEC. Now there is a new solution, to this decades old problem, that has been engineered into FOS 7 and higher. When a switching device begins a firmware download, the CUP sends an unsolicited status back to IOS to indicate that it will be non-responsive for an unknown period of time (i.e. long busy ). When the firmware download is complete, CUP automatically sends another unsolicited status to IOS to indicate that it is responsive again (i.e. ready ). FICON Advice and Best Practices 14 of 77

15 If by chance the CUP receives a command while it is in Long Busy, it will send a reject that says it is still not able to accept work (i.e. did you not hear me, I am on a long busy break ). All of this is done under within the FOS firmware and does not require the user to do anything special. This is just a way of tightening up the integration between device and system functions in order to reduce the probability that the CUP could get boxed during a firmware download. In order to maintain a five-9s high-availability environment, it is necessary to provide call home functionality. Users should implement Brocade Network Advisor to help configure and manage their FICON environment. It is a best practice to enable the Fabric Watch feature in any FICON switching environment. Fabric Watch enables each switch to constantly monitor the fabrics for potential faults and automatically alert network managers to problems before they become costly failures. It is a best practice to download and utilize the Brocade SAN Health free software. It provides users with a host of reports and a storage network diagram, and can point out inconsistencies between the deployed FICON networks and the IOCP. When users purchase a licensed feature they get a Transaction Key. Users then go to the Brocade Web site and convert that Transaction Key to a License Key to install on a switching device. The License Key is specific to a unique switching device. The system should prevent users from creating another License Key from the same Transaction Key, to be installed on a different switch. Also, a switching device should reject a License Key generated for a different switch if users try to apply it. A customer is not allowed to move a License Key from one switch to another. PROTOCOL INTERMIX MODE (PIM) General Statement: Brocade and IBM fully support FICON and FCP protocol intermix on virtualized and non-virtualized Brocade switch chassis s. FICON and FCP I/O must not be allowed to communicate with each other on the fabric but that can be accomplished in several ways: One way to accomplish this is with Virtual Fabrics. VF can completely isolate FICON I/O traffic from FCP I/O traffic including ISL links. It is always recommended that customers deploy VF on their chassis. Just do not run the FICON/FCP ports in the Default Switch (created when users enable VF). Create one Logical Switch for FICON and then move all of the FICON ports from the Default Switch to the FICON Logical Switch. Create one or more Logical Switches for open systems/replication and then move all of various protocols ports (AIX to one VF; Unix to a different VF; Linux to a different VF; replication to a different VF; etc.) from the Default Switch to the FCP Logical Switches. Can run up to 4 CUPs on a virtualized DCX/DCX-4S chassis. Managing multiple VF on a chassis does require a little more effort but it provides the most complete and safest port isolation that can be deployed. Another way to accomplish Protocol Intermix is through normal and standard protocol mechanisms: z/os IOCP will keep the FICON traffic from trying to use any FCP ports. Proper and accurate Zoning will be used to keep the FCP I/O traffic from being able to connect to FICON ports and vice versa. Both FICON and FCP must be zoned so they cannot communicate together. It is recommended that if users have any ports on the chassis without a cable attached then use PDCM prohibits or a CLI command to disable those ports to avoid future cabling and connectivity mistakes. 
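As a minimal sketch of the two isolation steps just described, the commands below show a FICON logical switch being created and an uncabled port being persistently disabled. The FID value, slot and port numbers are examples only; enabling Virtual Fabrics is disruptive and requires a reboot, and the exact lscfg options should be confirmed in the Fabric OS Command Reference for the release in use.

    fosconfig --enable vf                  (enable Virtual Fabrics on the chassis)
    lscfg --create 20                      (create a logical switch with Fabric ID 20 for the FICON ports)
    lscfg --config 20 -slot 3 -port 0-15   (move the FICON ports out of the Default Switch into FID 20)
    portcfgpersistentdisable 4/7           (persistently disable an uncabled port so it cannot be used by mistake)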
Either of the above implementations of Protocol Intermix on a Brocade Director chassis is approved, qualified and supported. Many, many customers have deployed FICON and FCP on the same Brocade chassis and this has IBM s blessing. Details: PIM is typically in use with Linux on the mainframe, data replication activities, and/or storage area network (SAN) sharing of a common I/O infrastructure. FICON Advice and Best Practices 15 of 77

16 It is a best practice to place FICON ports and FCP ports into different zones. It is a best practice to have at least one specific zone for FICON and then at least one specific zone for each non- FICON protocol that is in use on the infrastructure (for instance, Windows, AIX, UNIX, and so forth). Additional specific zones may be implemented to meet a customer s configuration requirements. Customers wanting to provide isolation between their FICON and FCP environments should consider creating a Virtual Fabric (logical switch) for FICON and one or more additional Virtual Fabrics to host their FCP environment(s). This will not keep someone from inadvertently plugging an FCP host/storage cable into the FICON virtual fabric or vice versa. FICON cannot and will not utilize any host or storage ports that are not identified in HCD so FCP is always safe from FICON intrusion FCP, if zoned to do so, will utilize any host or storage port to which it has access. FCP can create problems for the FICON environment, so be cautious when deploying FCP links onto protocol intermixed switching devices. For example, if a Windows server accidently gets connected into a FICON zone it will write a signature file on each and every volume owned by z/os. This could create a situation where all of the mainframe online storage has to be recovered. Customers might consider utilizing Prohibit Dynamic Connectivity Mask (PDCM) on switching devices to prohibit FICON and FCP F_Ports from being able to exchange frames with each other. This can provide added security and isolation for customers deploying PIM environments. See the Prohibit Dynamic Connectivity Mask (PDCM) and Port Blocking section of this document for additional information. Use either B-Series Traffic Isolation Zones or Virtual Fabrics (or both) to influence which I/O frames will traverse a specific ISL link. To maximize the performance of ISL links it is a best practice to: Keep DASD traffic and real, physical, standalone tape traffic separated and on their own ISLs. Tape I/O exchanges can reduce DASD I/O performance. Keep data replication traffic that is utilizing switching devices on its own ISLs. Data replication implies synchronization of consistency groups and users would not want other heavy I/O traffic to delay that synchronization. Refer to the section on ISLs with Concurrent Disk, Tape and/or FCP I/O Traffic. MISSING INTERRUPT HANDLER PRIMARY TIME OUT VALUE (MIHPTO) The Missing Interrupt Handler (MIH) detects missing I/O interrupt conditions. If the MIH detects a missing interrupt, processing will be done that is dependent on the detected condition. The MIHPTO is the amount of time host programming should wait before timing out an I/O request. The value is reported when a logical path with the CUP is established and host programming issues the Read Configuration Data CCW to the CUP. The purpose of the MIHPTO is to enable the device to tell host programming how long to wait after issuing a request before discarding that request and retrying the I/O. When specifying time intervals, consider the following: The MIH detects a missing interrupt condition within 1 second of the time interval that users specify. If the time interval is too short, a false missing interrupt can occur and cause early termination of the channel program. For example, if a 30-second interval is specified for a tape drive, a rewind might not complete before the MIH detects a missing interrupt. 
A best practice is to be sure that the MIHPTO for the CUP on switching devices has been set to 3 min (180 sec). Since the release of Brocade FOS 6.1.0c, the default for MIHPTO, and sent from the factory, has been 180 seconds (3 min). However, the MIHPTO was not changed on a switching device if an older version of Brocade FOS (prior to 6.1.0) was upgraded in place to Brocade FOS 6.1.0c or higher versions of Brocade FOS. If this is true in your enterprise, then please check and possibly change the MIHPTO setting. The Missing Interrupt Handler (MIH) timers should be the same on both the switch and z/os operating system. The MIHPTO setting persists across reboots/por(power-on Reset)/failovers, and so forth. The IBM specification on MIHPTO requires that the CUP rounds down to the nearest 10 seconds for any MIHPTO values greater than 63 seconds. This is taken care of by Brocade FOS on a B-Series device. FICON Advice and Best Practices 16 of 77
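Once FMS is enabled, the switch-side value can be checked and adjusted from the FOS CLI, and the host side can be confirmed with a z/OS operator command. This is a sketch based on the FOS FICON CUP administration commands and a placeholder device number; verify the exact parameter spelling against the Fabric OS Command Reference and your IECIOSxx settings.

    ficoncupshow fmsmode      (confirm that FMS/CUP is enabled on the switching device)
    ficoncupset MIHPTO 180    (set the CUP MIHPTO to 180 seconds; values above 63 seconds are rounded down to a 10-second boundary)
    D IOS,MIH,DEV=xxxx        (z/OS command to display the MIH time in effect for a device; xxxx is a placeholder device number)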

Therefore, users might enter 187 seconds, but what is set is 180 seconds, and that is what is displayed back.

SWITCHING DEVICE RESOURCE ALLOCATION TIME OUT VALUES (R_A_TOV)

R_A_TOV is the amount of time given to devices to allocate the resources needed to process received frames. In practice, this may be the time for recalculation of routing tables in network devices. A best practice is to check that the R_A_TOV on Gen4 and Gen5 devices has been set correctly. Gen4 and Gen5 switching devices set their R_A_TOV in units of thousandths of a second. To set an R_A_TOV value of 10 seconds, the recommended default value, users would set a value of 10000 on the Gen4 or Gen5 switching device. See the Brocade Fabric OS Administrator's Guide for more information.

SWITCHING DEVICE ERROR DETECT TIME OUT VALUES (E_D_TOV)

E_D_TOV is the basic error timeout used for all Fibre Channel error detection. A best practice is to check that the E_D_TOV on Gen4 and Gen5 devices has been set correctly. Gen4 and Gen5 switching devices set their E_D_TOV in units of thousandths of a second. To set an E_D_TOV value of 2 seconds, the recommended default value, users would set a value of 2000 on the Gen4 or Gen5 switching device. See the Brocade Fabric OS Administrator's Guide for more information.

RESOURCE MANAGEMENT FACILITY (RMF), SYSTEMS AUTOMATION (SA), AND CONTROL UNIT PORT (CUP)

Control Unit Port (CUP) is an available feature for the following devices:
Gen5 DCX 8510-8: maximum of 4 CUPs on the chassis, from FOS 7.0 including 7.1
Gen5 DCX 8510-4: maximum of 4 CUPs on the chassis, from FOS 7.0 including 7.1
Gen5 6510: maximum of 2 CUPs on the chassis, from FOS 7.1
Gen4 DCX: maximum of 4 CUPs on the chassis
Gen4 DCX-4S: maximum of 4 CUPs on the chassis
Gen: maximum of 2 CUPs on the chassis
Gen: maximum of 2 CUPs on the chassis
Gen: maximum of 2 CUPs on the chassis
Gen: maximum of 1 CUP on the chassis
Gen: maximum of 1 CUP on the chassis
Gen2 Mi10K: maximum of 4 CUPs on the chassis
Gen2 M6140: maximum of 1 CUP on the chassis

It is a best practice to purchase and install the FICON Management Server (FMS) key on each FICON switching device so that the Control Unit Port (CUP) can assist RMF in providing a FICON Director Activity Report for each FICON device in the environment. FMS must be enabled on the local switch (e.g. the one connected to the System z) in order for a remote CUP (accessed through ICLs or ISLs) to work.

Here are some reasons why users should always utilize CUP on their FICON switching devices: RMF can have in-band access to the FICON switching devices. Systems Automation for z/OS, with the I/O Operations (IOOPs) module implemented, can have in-band access to the FICON switching devices. FICON Dynamic Channel Management (DCM) can dynamically add and remove channel resources at the Workload Manager's discretion. When CUP is enabled, z/OS will use Coordinated Universal Time (UTC) to update the time/date on all of the CUP-enabled switching devices; z/OS keeps everything in sync and all logs are also synchronized. The Allow/Prohibit Addressing Matrix becomes available (discussed in other sections).

18 The Active=Saved condition becomes user configurable (discussed in other sections). Once FMS is enabled on their switching device, users should check the FICON CUP settings to be sure that Active=Saved is enabled, so that any changes to the switching environment will persist across PORs. If an FMS License is installed and CUP is enabled, and a port is "blocked," that port sets a persistent Offline Sequence (OLS), and the light on that port is turned off. However, OLS is sent for a period of at least 5 ms before the light is turned off, in order to ensure orderly shutdown of the port. It is a best practice to have only 1 Logical Partition (LPAR) polling the CUP for statistics. That LPAR then should distribute the information to other systems in the sysplex as required. The first reference for this was IBM Authorized Program Analysis Report (APAR) 0A02187 many years ago. It is a best practice that in the Control Unit statement for the switch in the Input/Output Configuration Data Set (IOCDS) and Hardware Configuration Definition (HCD), customers should configure two channels, and only two channels, as paths to each CUP in the enterprise. This limitation reduces the number of possible logical paths that the channel subsystem can potentially establish to any CUP (1 logical path per CHPID per LPAR). This limitation helps to prevent stress on any given CUP in situations where the switching device is busy. Option FCD in ERBRMFxx parmlib member and STATS=YES in IECIOSnn tell RMF to produce the 74-7 records that RMF then uses to produce the FICON Director Activity Report on every RMF interval for every switch (or domain ID if a physical chassis is virtualized) using CUP. Prior to Brocade FOS 7.0.0c, a maximum of two CUPs could be enabled on a chassis utilizing virtual fabrics. At Brocade FOS 7.0.0c and higher, a maximum of four CUPs can be enabled on a Gen4 or Gen5 Director chassis utilizing virtual fabrics. On any physical chassis (Director or modular switch) that is not utilizing virtual fabrics, only a single CUP can be enabled to report on that chassis. CUP and FCIP: CUP can be run on the 7800 extension device. CUP can be run on a chassis that houses an FCIP blade(s). But CUP is only useful for the fibre channel ports and not the IP ports on those devices/blades. The IP ports or any of the associated virtual ports to a GbE interface are not managed through CUP. CUP does not monitor any of the virtual ports (VE) associated with the GbE port (the FCIP tunnels). RMF, system automation, and z/os incident reporting, when for an FCIP device, is for the FC ports only. Be aware that there is a difference between Link Incident Reporting (which does not directly involve CUP/FMS), and Asynchronous Error Reporting, which does: Link incident reporting occurs independently of the FMS license and CUP. All FICON channels register for link incidents by sending the Link Incident Registration Request (LIRR) extended link service request upon logging into a switch. In the LIRR payload, FICON channels set the Registration Function to conditionally receive. Link incident reporting is defined in FC-FS and in FC-SB standards. Asynchronous Error Reporting is an optional feature on a FICON switching device and should not be changed if the user does not understand what this feature does. Asynchronous Error Reporting is distinct from link incident reporting, and it does directly involve the FMS license and CUP. 
It is the mechanism by which a FICON switch, with FMS enabled, reports internal switch errors back to the mainframe. In contrast to link incident reporting, Asynchronous Error Reporting requires that the switch CUP address must be defined in IOCP, FMS must be enabled, and the mainframe must have at least one logical path to the CUP. Asynchronous Error Reporting is a function of the CUP and is defined only by the IBM FICON Director Programming Interface. For additional information, refer to the Brocade Tech Note: FICON B-type Switch I/O Configuration Quick Reference, available on the internet.
If Systems Automation is using its I/O Operations (IOOPs) application to poll remote CUPs across ISL links, it is using small frames to do this polling. The IOOPs module polling can use up all of an ISL link's buffer credits (BCs), so be alert to this possibility. It is a best practice to have the Systems Automation IOOPs module use a dedicated ISL link when possible. Otherwise, be sure that sufficient BCs are allocated on the ISLs to handle the very small average frame size.
Business and technical reasons for deploying CUP on FICON switching devices:
System z sets the FICON switch's time and date through the CUP so that all logs and devices in a sysplex are using the same time/date for troubleshooting; without CUP this synchronization cannot be done to exactly emulate z/OS times on the switching devices.

Systems Automation, through the I/O Operations (IOOPs) module, allows customers to manage and control FICON switches through their HMC consoles.
RMF can report on the usage of all FICON I/O ports, including ISL links, and on whether buffer credits might be causing any performance problems.
For switched-FICON fabrics, since z/OS v1.11, the mainframe allows pools of FICON CHPIDs to be unassigned (Dynamic Channel Management) so that Workload Manager can use them in goal mode.
CUP is required if it is important for the customer to get the Service Information Messages (SIM) up to the z/OS console for any FICON device hardware failures (asynchronous error reporting).
There is a z/OS IOS Health Check that monitors and reports on inconsistent initial Command Response Time amongst paths to a control unit. IBM added this IOS Health Checking function to z/OS to detect when FICON fabric issues were causing unexpected delays. Brocade provides visibility of its fabrics, and its ISL link Bottleneck Detection mechanism, to z/OS host programming through enhancements to the CUP capability.
Setting Quality of Service (QoS) for I/O has been useful for SAN (high-medium-low I/O priorities) but not so useful for FICON until now. But QoS will soon be supported on System z. On an LPAR, if important work is missing its goals due to I/O contention on channels shared with other work, it will be given a higher channel subsystem I/O priority (by Workload Manager) than the less important work. This function works together with Dynamic Channel Management, which requires CUP to function. As additional channels are moved to the partition running the important work, channel subsystem priority queuing is designed so the important work that really needs it receives the additional I/O resource.
When used on a chassis that also houses FCP ports for replication, Linux on System z, or SAN, CUP can report on the average frame size, bandwidth utilization, and buffer credit usage on the FCP ports as well as the FICON ports. This is information that has never been practically available to SAN storage managers before. These SAN port statistics are reported up through CUP to RMF just like the FICON port usage statistics.

SWITCHING DEVICES
The following switching devices (last 3 generations) can be utilized with FICON:
Gen from FOS 7.0 and higher
Gen from FOS 7.0 and higher
Gen from FOS 7.0.0d and higher
Gen4 DCX from FOS 6.0 and higher
Gen4 DCX-4S from FOS 6.0 and higher
Gen from FOS 6.0 and higher
Gen from FOS 6.0 and higher
Gen from FOS 6.0 and higher
Gen from FOS 5.0 and higher
It is a best practice to utilize FICON switching devices rather than deploying FICON with direct-attached connectivity. Many of the new z/OS features, zDAC and DCM for example, require switched FICON to function. Switching devices can provide many long-distance buffer credits that are no longer available on 8 Gbps CHPIDs.
There are motherboard-based switching devices and redundant component-based switching devices. Motherboard-based switching devices (such as the Brocade 6510 Switch) are best used for physical tape drive connectivity (since tape is a two-9s device). For very scalable FCIP long-distance connectivity there is the motherboard-based Brocade 7800 FCIP switch. Redundant component-based switching devices (Brocade Gen4 DCX family and Gen family) are best used for core FICON fabrics and to provide Brocade FX8-24 blades with 1 GbE and 10 GbE FCIP long-distance connectivity.
It is a best practice to populate a 42U cabinet with only 2 Brocade 8-slot Director chassis or 2 8-slot Director chassis and 1 4-slot Director chassis. The Brocade DCX Hardware Install Guide indicates an ability to install 3 Brocade 8-slot Directors in a 42U cabinet, but usually 2 proves to be more practical. Each OEM has its own recommendations, so users should ask the OEM.
Users need to be aware of the method that Brocade uses to manage the firmware releases on switching devices.

All Gen4 and Gen5 Brocade switching devices use a Linux-based firmware which Brocade calls Fabric Operating System, or FOS. Updates to this firmware are released in families. This started with version 4, then version 5, then version 6, and now version 7. Each family has had a series of updates:
Version 5.0.x went to 5.1.x, 5.2.x and 5.3.x.
Version 6.0.x went to 6.1.x, 6.2.x, 6.3.x and then 6.4.x.
Version 7.0.0c went to 7.1.
Each major version of firmware is typically in response to a need to support faster line rates on the ports of new devices, but it also supports the line rate of the previous generation of devices:
Version 5 and version 6 support 4 Gbps devices.
Version 6 and version 7 support 8 Gbps devices.
Version 7 supports 16 Gbps devices.
The good news is that a user can almost always non-disruptively update firmware on Brocade switches, so users can move to higher releases without an outage (but always read the FOS release notes to be sure). However, users need to be aware of a rule regarding the from and to versions: Since FOS Brocade has had a one-release migration policy to allow more reliable and robust migrations for customers. By having fewer major changes in internal databases, configurations, and subsystems, the system is able to perform the upgrade more efficiently, taking less time and ensuring a truly seamless and non-disruptive process for the fabric. The one-release migration policy also reduces the large number of upgrade/downgrade permutations that must be tested, allowing Brocade to spend more effort ensuring the supported migration paths are thoroughly and completely verified. Disruptive upgrades are allowed, but only for a two-level migration (for instance from 6.4 to 7.1, skipping 7.0).
So why should users care? Their upgrade philosophy may be: If it is not broken, then do not fix it. Or they may have the policy: We do fix-on-fail; apart from that, we do not update firmware at all. Unfortunately, when a user finally does perform an update, they may find themselves having to do many upgrade iterations, with much time and effort involved in those upgrades. Here is a possible upgrade iteration from an old release (11/2009) to what is currently the newest release (2013) for IBM FICON qualified and supported Fabric Operating System (FOS) firmware:
6.3.0b -> 6.3.0d -> 6.4.0a -> 6.4.0c -> 6.4.2a -> 7.0.0c -> 7.0.0d -> 7.1.x
As can be seen from the steps above, a user might need a very long change window if they choose not to perform updates on a regular basis. As a user updates firmware, there can be many caveats and restrictions based on the hardware of the switch they are running. It is very important for users to consult the FOS release notes before doing firmware upgrades.

CREATING FIVE-9S HIGH AVAILABILITY FABRICS
Single switches (6510 and so forth) that host all of a user's FICON connectivity create an environment with two-9s of availability. Redundant switches in redundant fabrics, with path groups utilizing both devices, potentially create an environment with four-9s of availability. Users must also consider the impact of losing 50 percent of their bandwidth and connectivity.
Single directors (DCX, DCX-4S, 8510 and so forth) that host all of the FICON connectivity create an environment with four-9s of availability. Dual directors in dual fabrics, with path groups utilizing both devices, potentially create an environment with five-9s of availability.
Users must also consider the impact of losing a fabric and some percentage of their bandwidth and connectivity.
Many mainframe customers ensure a five-9s high-availability environment by deploying four or eight redundant fabrics, shared across a path group, so that the bandwidth loss in the event of a fabric failure is not disruptive to operations.
It is a best practice that the redundant fabrics be split up across two or more cabinets. It is also optimal that those cabinets be located some distance apart, so that a physical misfortune cannot disrupt all of the FICON fabric cabinets and their switching devices at the same time.

Complete and non-disruptive firmware upgrading is supported on director-class switches (Brocade DCX 8510-8, DCX 8510-4, DCX, and DCX-4S) that are not using Brocade FX8-24 blades.
Comprehensive non-disruptive firmware upgrading is not supported on the Brocade DCX 8510-8, DCX 8510-4, DCX, or DCX-4S with Brocade FX8-24 blades, or on the Brocade 7800 Extension Switch, since the FCIP tunnels will go down for 10 to 15 seconds and all traffic in the tunnels will be disrupted.
When upgrading firmware on the Brocade 6510 fixed-port switch, customers should perform upgrades during scheduled maintenance windows where traffic is minimized, to avoid fabric disruption.
A Deeper Look into Path Groups:
The System z operating system has a built-in capability known as Path Group to balance and provide performance-oriented I/O. On the mainframe a user can group up to 8 of their physical connections between the Channel Path IDs (CHPIDs), which are the mainframe I/O ports, out to connected storage ports. Those Path Group links should also be deployed from 8 different FICON Express channel features, if at all possible, to improve redundancy at the FICON Express card level. It is the mainframe channel subsystem that decides which path in the path group will be used, by determining which path is least busy, which paths are operational, and so on.
Path Groups allow I/O to be automatically spread evenly and fairly across a number of physical channel paths without over-subscribing any given I/O path. Path Groups provide instantaneous failover to operational links if a path group link fails. Each of the mainframe's Path Groups (PG) should be redundantly spread across each of the switching device fabrics. The most common FICON fabric deployment worldwide is four FICON fabrics, which allows 2 PG links to each switch and provides excellent redundancy. Some very large users will deploy one path group link across eight FICON fabrics to reduce the amount of bandwidth loss they will incur if a fabric were to fail.
Figure 6
A deeper look into Fan In Fan Out for Disk and Tape connections:
DASD is typically 90% read and 10% write, while tape is just the opposite.
Let us discuss real, physical tape first (this is not a discussion pertinent to virtualized tape): Tape really cannot have any Fan In Fan Out, since a tape drive has a compression chip (assume 2:1) that handles the data before it is passed across the heads and onto tape. Also, an 8 Gbps CHPID can only do a maximum of 620 MBps when the data is large blocks of sequential read and write data. (Cathy Cronin testing of large sequential read and write data in the IBM lab.) Also, the most modern tape drives have a head speed of between 180 and 190 MBps. But since data is compressed before going to the read/write head, one must send at least twice the head throughput or the tape will stop-start and really hurt performance.

This means that one must send at least 2x the head rate (180 x 2 = 360 MBps; 190 x 2 = 380 MBps) to the tape device. Since the CHPID can only handle 620 MBps, then: 620/360 = 1.72 tape drives' worth of bandwidth (one tape runs at full speed, the other stops and starts). And it is a bit worse for the 190 MBps head rate tape drives. So, once again, no Fan In Fan Out for tape drives. That means it takes one CHPID to run one tape drive. Also, tape cannot participate in High Performance FICON (zHPF) at the time this document was written.
Figure 7
Now let us talk about DASD: First, DASD does participate in zHPF, and if lots of zHPF is occurring in the shop, then channel link (CHPID to storage port) connections can be very busy. zHPF acts more like FCP than like FICON. With heavy zHPF usage, customers need to have a minimum of Fan In Fan Out. Secondly, DASD command mode I/O is sporadic and bursty. It is small-block data for the most part, and there are normally gaps of time (little utilization on the link) between I/O exchanges. Also, since DASD is 90% read, the major consideration is that the CHPIDs be equal to or faster in line rate than the DASD.
Regarding DASD ports doing a majority of command mode FICON, here is a bit more of a breakdown:
If the CHPID is 4 Gbps and the DASD is 4 Gbps, then the flow between them should be OK.
If the CHPID is 8 Gbps and the DASD is 4 Gbps, then the flow between them should be OK.
If the CHPID is 4 Gbps and the DASD is 8 Gbps, then the flow between them could cause congestion and backpressure within the fabric.
It is a best practice that the CHPIDs be equal to or greater in line rate than the DASD interface ports. There are mainframe customers doing 4:1, 8:1 and even 12:1 link consolidation ratios in FICON (versus what was used with ESCON) and working exceptionally well, even with zHPF. But every vendor who supports a FICON fabric infrastructure has rules about how much consolidation they will support. Users must check with their FICON infrastructure vendor for the details.
Figure 8
DOMAIN ID
The Domain ID defines the FICON Director address (switch address in the System z environment). It is specified in the FICON Director as a decimal number and must be converted to a hexadecimal value for use in the System z server. Always make domain IDs unique and insistent. Insistent domain IDs are always required for 2-byte addressing (FICON cascading). As of FOS 7.1, the Domain ID can be set in hex instead of just decimal notation. There is no need for a domain ID to ever change in a FICON environment; fabrics come up faster after recovering from a failure if the domain ID is insistent.

Each switch needs its own Switch ID in the IOCP so, assuming users are making the Switch ID = Switch Address, each domain ID must be unique. Note that the switch address is based on the domain ID. This allows the directors to be cascaded in the future without having to take any director offline. Make sure that the domain offset (normally x60) is the same on all of the devices in all of your FICON fabrics.
IMPORTANT NOTE: When setting insistent domain ID on a switch using the FICON Configuration Wizard, the switch will always be taken offline and insistent domain ID set, even if insistent domain ID is already set.
Figure 9
SWITCH ID
The switch ID must be assigned by the user, and it must be unique within the scope of the definitions (HCD, HCM, or IOCP). The switch ID in the CHPID statement is basically used as an identifier or label. Although the switch ID can be different from the switch address or Domain ID, we recommend that you use the same value as the switch address and the Domain ID when referring to a FICON Director. Set the switch ID in the IOCP to be the same as the switch address (on Brocade DCX switches this is also the Domain ID) of the FICON switching device.
The SWITCH keyword (Switch ID) in the CHPID macro of the IOCP is only for the purposes of documentation. The number can be from 0x00 to 0xFF. It can be specified however you desire. It is a number that is never checked for accuracy by HCD and the IOCP process. The best practice is to make it the same as the Domain ID, but there is no requirement that you do so.
It is a best practice for customers to set the switch ID to the hex equivalent of the switch domain ID. There are two reasons for this:
When entering a 2-byte link address, the operator is less likely to make mistakes because the first byte is always the switch ID.
When in debugging mode, the first byte of the link address and FC address is the switch ID, so there are no translations to be done.
In Interop Mode 0 (this is the typical mode and is used for Gen4 and Gen5 fabric switches) the switch address and the domain ID are the same thing. Note that channel statements in the IOCP are associated with a switch ID; however, the link statements, when using 2-byte addressing, use the switch address. This avoids any confusion as to which switch the users are using.
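As a hedged illustration of how these values line up, the following IOCP fragment sketches a FICON channel and a cascaded control unit where the Switch ID, switch address, and Domain ID are all kept at x'6A' and the destination port is x'08'. The CHPID number, PCHID, device type, and continuation formatting are hypothetical and simplified, so the actual statements should be built in HCD/HCM for the real configuration:

    CHPID PATH=(CSS(0),50),SHARED,SWITCH=6A,PCHID=1C0,TYPE=FC
    CNTLUNIT CUNUMBR=8000,PATH=((CSS(0),50)),UNIT=2107,
        LINK=((CSS(0),6A08)),UNITADD=((00,256)),CUADD=0

The SWITCH keyword on the CHPID statement carries the Switch ID (documentation only, as noted above), while the LINK keyword on the CNTLUNIT statement carries the 2-byte link address: switch address x'6A' plus port address x'08'. Keeping all three values identical is what makes the first byte of every link address immediately recognizable during debugging.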

Figure 10
SWITCH ID AND SWITCH ADDRESS
The switch address is a hexadecimal number used in defining the FICON Director in the System z environment. The Domain ID and the switch address must be the same when referring to a Director in a System z environment.
The Brocade Gen3, Gen4, and Gen5 Directors and switches support the full range of standard domain IDs, 1 to 239, and setting this full range is recommended for these products. There is no offset when the full domain ID range is set. For these devices the Switch ID and the Switch Address are exactly the same thing.
Brocade FOS v7.0 and higher supports a new fabric naming feature that allows users to assign a user-friendly name to identify and manage a logical fabric.
Figure 11
PORT NICKNAMES (ALIASES)
Brocade FOS v7.0 and higher allows users to assign a port name up to a maximum of 128 characters, an increase from the maximum of 32 characters in FOS v6.x and below. FOS 7.0 and higher creates Default Port Names when configuring switching devices. Be aware that these default port names are shown in decimal and not hexadecimal. Once established, they can be modified by the user.
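A minimal sketch of replacing a default port name with a descriptive nickname from the FOS CLI follows; the slot/port and the name are hypothetical, and the exact syntax should be verified in the Fabric OS Command Reference for the release in use:
Example: portname 1/4, "DASD_CU7100_P04" - assigns a descriptive nickname to slot 1, port 4.
Example: portshow 1/4 - displays the port details, including the currently assigned port name.
Descriptive nicknames of this kind make RMF reports, Brocade Network Advisor views, and troubleshooting output much easier to correlate with the IOCP definitions.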

ZONING
Zoning is an important FICON and SAN fabric capability that is too often overlooked or glossed over when deploying a FICON infrastructure. There is traditional zoning, to allow ports to communicate with each other in a fabric, and, for Brocade, there is also Traffic Isolation Zoning (TIZ), which helps manage the flow of traffic across ISL links. This discussion is about traditional zoning.
All too often mainframe architects believe that zoning is not something to concern themselves with in their System z homogeneous environments. Unfortunately, that is too narrow a viewpoint of what a mainframe I/O infrastructure really consists of in our enterprise environments today. One result of disregarding zoning best practices is that a mainframe I/O deployment can very likely suffer from simple to severe issues that could have been avoided. For example, a port that is not zoned cannot communicate (send frames) to any other port.
There is a myth that FICON does not have to be zoned. That is not correct. The Fibre Channel protocol requires that ports be zoned together in order for a port to send frames to other ports in its zone. Vendors ship switches from the factory with a default setting that allows all ports to communicate with each other. No user action is required because of this default setting. That has led vendors and users to believe that zoning is not required for FICON. Since zoning allows ports to communicate with each other, it is very important for mainframe professionals to understand how to control and manage that communications capability.
First of all, what is zoning? At a high level, zoning is a fabric management service that can be used to create logical subsets of ports (devices) within a FC fabric and enable partitioning of resources for management and access control purposes. Zoning allows only members of a zone to communicate with each other within that zone. All others attempting to access from outside the zone are rejected, hence zoning also provides a security function.
A zone is composed of a collection of CHPID (initiator) and storage (target) ports within the I/O environment. The ports in a zone can only communicate with other ports in that zone. However, ports can be members of more than one zone. Zoning provides a software control facility at the Node World Wide Name (nWWN) level which is then assisted by the name server of a switching device. Brocade also supports Domain/Port zoning or, what was called on old McDATA switching devices, port zoning.
Zoning allows a user to view the zone information currently active in the fabric, create and modify zones and zone sets in the software zone library, activate a zone set in order to publish the zone information in the selected fabric, deactivate the current active zone set, configure zoning policies in the selected fabric, and generate zoning reports for the fabric.
In a single-director fabric with just FICON, it is a common practice to place all ports in a single specific zone, as sketched below. If supporting multiple operating systems from LPARs on the same director, such as z/OS and z/TPF, create separate zones for each operating system from the LPAR. In intermixed environments, place the FICON ports in a specific FICON port zone. Then use standard open systems best practices for creating World Wide Name (WWN) zones for the FCP traffic on that same chassis. It is a bad practice, which could lead to future outages, to mix FICON and FCP ports within the same zone.
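As a hedged sketch of what that single FICON zone might look like from the FOS CLI, using Domain,Index members (the domain number, port indexes, and object names below are hypothetical):

    zonecreate "FICON_ALL_Z", "10,0; 10,1; 10,4; 10,5"
    cfgcreate "FICON_CFG", "FICON_ALL_Z"
    cfgsave
    cfgenable "FICON_CFG"

A port added to the switch later does not inherit connectivity; it must be explicitly added to the zone (for example with zoneadd) and the configuration re-enabled before it can communicate, which is the added protection over default all-access zoning discussed later in this section.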
In cascaded environments, you might consider placing the paths that use 2-byte addressing to access remote devices in zones that are separate from non-cascaded paths.
There are two types of zoning identification: port World Wide Name (pWWN) and Domain,Port (D,P). There is a near-myth about hard and soft zoning that needs to be discussed. The terms hard and soft zoning persist today with the incorrect belief that using the Domain/Port identification will be more secure than using the pWWN identification. Today, all zoning should be viewed as a security mechanism for fabrics with two identification options and three enforcement methods. Enforcement can be:
Software based: Software enforcement occurs when the Name Server service in the fabric masks the Name Server entries that a host should not access. However, this is not a secure method of enforcement.
Hardware based: Hardware enforcement is performed by the Application-Specific Integrated Circuits (ASICs) in fabric switches. Unlike software enforcement, hardware enforcement is a proactive security mechanism. Every port has a filter that allows only the traffic defined by the zoning configuration to pass through. If traffic disallowed by the zoning configuration is initiated, the ASIC will discard the traffic.
Session based: This occurs when zones contain several zoning formats (e.g. pWWN; D,P) or there is an overlapping zone. Regardless, hardware enforcement is performed, but enforcement is session-based and not frame-based.

For Brocade, since 2 Gbps was released, all zoning is hardware enforced.
An important note: Do not put the E_Ports (ISL ports) in a zone. An ISL link is the most likely link to go up and down, and there is no need or capability for anyone to directly address those ports. If the ISL link goes down and then up (flapping), it will generate Registered State Change Notifications (RSCNs) within the zone, so why have those RSCNs disturbing that fabric?
Zoning for Gen3, Gen4 and Gen5 switching devices:
Classic Brocade switching devices can have a default zoning mode and/or specific zones on the same chassis. We recommend that a user not use the Default Zoning Mode (all access) for FICON, but rather define port zones (domain/index) for FICON and WWN zones for FCP if using protocol intermix. Brocade generally recommends that customers simply place all FICON devices into one large zone, which achieves the same behavior as the default zoning mode's all access, but provides the added protection that any new device is not allowed to talk to any other device until it is explicitly added to the zone.
By activating specific port zones, users ensure that if they utilize Prohibit Dynamic Connectivity Matrix (PDCM) entries, those entries are honored and enforced. Since default zoning mode all access is not an activated zoning configuration, it cannot enforce any PDCM prohibits.
Zoning Registered State Change Notifications (RSCNs), one of many types of RSCNs, are used to notify switching devices when port connectivity has changed in the fabric. RSCNs produce additional traffic on the FC links that is not accounted for by user I/O activity and not monitored by RMF. Since FICON uses its IOCP to determine I/O connectivity, and really does not count on zoning for connectivity, it would be a great feature to be able to turn Fabric Format (zoning) RSCNs off, but they cannot be suppressed or ignored on Brocade switching devices.
Brocade FOS v7.0 and higher allows a switch using a Default Zoning Mode of no access, and with no zoning configuration, to merge with a fabric that has an active zone configuration.
For Traffic Isolation Zones (TIZ) with FA=ENABLED and with lossless enabled: When activating the TI zone there should not be any data disruption if connections or exchanges are moved to the preferred path in the TI zone(s) as a result of the activation.
For Traffic Isolation Zones with FA=DISABLED, regardless of whether lossless is enabled or not: When activating the TIZ a user could suffer data loss. This occurs when there are static routes hanging off one of the end-point nodes of the TIZ that could have allowed traffic to pass through under non-TIZ circumstances. Those routes might now get eliminated with the TIZ activation, which means loading a revised table into hardware. Some of these could be backend routes the user does not even know about. Unfortunately, at least at FOS 7.0.0d and below, the FA=DISABLED setting DOES NOT utilize the lossless feature to ensure no data loss, so an enterprise could see a disruption occur.
As discussed above, under certain circumstances, enabling multiple TIZs with failover disabled may cause some frames to be dropped due to the timing of when paths are re-routed while the zones are being implemented. To avoid this as much as possible, all Traffic Isolation Zones should be enabled with failover enabled first, so that all desired routes are established. Then change the TIZ to failover disabled.
It has to do with synchronization of pathing in the nodes, and this process prevents the TIZ process from getting hung up on one side or the other.

ZONING NAMES
M-Series zoning allowed dashes (-) in zone names. In B-Series, dashes are illegal in zone names. Here is probably the easiest way to convert zone names over to a Gen4 or Gen5 Brocade switching device if dashes have been used in zone names:
Using Brocade EFCM/DCFM, export the M-Series saved zone set to an XML file.
Edit that file with MS Word, and do a global search/replace of dashes to underscores.
Import the zone set back into Brocade DCFM/Brocade Network Advisor and then activate it to the desired fabric.

USING LOCAL SWITCHING
Brocade Local Switching is a performance enhancement which provides two capabilities:
Local switching reduces the traffic across the backplane of the chassis, which reduces any oversubscription across that backplane for other I/O traffic flows.
Local switching provides the fastest possible performance through a switching device by significantly reducing the latency of a frame passing through the locally switched ports.
Brocade FOS-based director and backbone platforms provide Local Switching capability. Brocade's Local Switching occurs when two ports are switched within a single Application Specific Integrated Circuit (ASIC). With Local Switching, no I/O traffic traverses the backplane (the core blades) of a director, so core blade switching capacities are unaffected. The ASIC recognizes that it controls both the ingress and egress port and simply moves the frame from the ingress to the egress port without ever having the frame move over the backplane. Local Switching always occurs at the full speed negotiated by the switching ports (gated by the SFP) regardless of any backplane oversubscription ratio for the port card.
Backplane switching on Gen4 and Gen5 Brocade Directors is sustained at approximately 2.1 µsec (microseconds) per I/O. Local switching on Gen4 and Gen5 Brocade Directors is sustained at approximately 700 ns (nanoseconds) per I/O, about 3x faster than backplane switching. Local Switching always occurs at full speed in an ASIC as long as the SFP supports the highest data rate that the port can sustain.
Example: a 32-port blade in slot 1, where the left column of ports is 00 to 0F and the right column of ports is 80 to 8F (this assumes that no logical switches have been created and users deploy the standard slot addressing):
Port 00 and port 8F (each on a different ASIC) on the FC8-32 or FC16-32 blade communicate over the backplane and use the core blade's back-end switching ASICs. Each frame's latency is 2.1 µs.
For an FC8-32 or FC16-32 blade, port 00 can switch locally at full speed with ports 01 to 07 and with ports 80 to 87 (all on the same ASIC), and port 08 can switch locally at full speed with ports 09 to 0F and with ports 88 to 8F. Each frame's latency is ~700 ns.
These port groups are called Local Switching groups, since they each share the same port blade ASIC. 48-port blades also have two ASICs and 2 local switching groups, but the port numbers referenced within each local switching group are different ports than those of the 32-port blade.
To maximize the performance of specific systems and/or applications, users can architect into their fabric design the use of Local Switching for its I/O connectivity and superior performance. Also use Local Switching to minimize the oversubscription of blades such as the 48-port blade. A Gen4 Director slot can sustain 512 Gbps of full duplex bandwidth. A Gen5 Director slot can sustain 1,024 Gbps of full duplex bandwidth. If a Gen5 Director slot hosts a 48-port blade, and all ports want to communicate at full line rate simultaneously for both send and receive, it would require 1,536 Gbps of bandwidth (16 Gbps x 48 ports, full duplex). The blade can theoretically be oversubscribed at 1.5:1. But if some of the I/O traffic is architected to utilize local switching, then that bandwidth never uses the backplane's bandwidth, freeing it up for other ports to use while lowering that blade's oversubscription ratio.
PROHIBIT DYNAMIC CONNECTIVITY MASK (PDCM) AND PORT BLOCKING
Historically a mainframe capability, PDCM controls whether communication between a pair of ports in an ESCON and/or FICON switch is prohibited or not. There are connectivity attributes that control whether all communication is blocked for a port.
At FOS 7.0 and above, the FMS enable setting can only be activated when an FMS license has been installed on the chassis to allow in-band management through CUP. Since the FMS control must be enabled in order to utilize the PDCMs of the Allow/Prohibit Addressing Matrix, PDCMs will only be available to a customer who implements CUP on that chassis.

It is a best practice to use PDCMs to prohibit the use of ports that do not have any cables attached to an SFP. This keeps accidental mis-cabling of FCP attachments into a FICON zone and/or VF from becoming a problem for the data center. If PDCM is not available to the customer (e.g. FOS 7.0 or higher but no FMS license installed on the chassis), consider using port blocking or persistentportdisable (see the sketch below) to keep accidental mis-cabling of FCP attachments into a FICON zone and/or VF from becoming a problem for the data center.
It is a best practice to use PDCMs to prohibit FCP ports from communicating with FICON ports. FCP SCSI I/O and FICON CCW I/O are incompatible and must be kept apart. Because FICON is address-centric and will not communicate with any ports that are not defined in IOCP, FICON acts in a very responsible, mature fashion when executing in a Protocol Intermix environment. But SCSI is discovery-oriented and can find and try to use FICON storage devices. This could cause serious issues for the mainframe users. FCP I/O must be restricted to devices suitable for SCSI, and PDCMs can be used to limit FCP ports from discovering FICON ports.
The Allow/Prohibit Addressing Matrix provides users with two functions that affect ports. Port blocking/unblocking is one function, and PDCM is the other function. PDCMs are fundamentally different from the block/unblock state and should not be confused:
The block/unblock state pertains to whether a port is disabled/enabled, with its light either turned on or turned off.
PDCMs pertain only to connectivity between specific port pairs within a switch, and the light on the port is kept on even if connectivity is prohibited.
A port's PDCMs have no meaning if a port is blocked, because a blocked port cannot connect to anything.
On a port, if a blocked state is set, it should be persistent even through a POR. To ensure that it is persistent across a POR, the port must be configured as blocked in the special Initial Program Load (IPL) file of the switching device. This is accomplished through the Active=Saved (ASM mode) control. ASM mode is found under the FICON CUP tab in Brocade DCFM/Brocade Network Advisor. It should be checked so that it is enabled.
Figure 12
If Active=Saved is enabled, the IPL file is in sync with the active configuration on the switching device. But if for any reason ASM is not enabled, then the port states in the active configuration may differ from those in the IPL file of the switch, and the switch's IPL file governs how the port is configured (its state) following any POR. It is a best practice to make sure that ASM mode is enabled in FICON environments.
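For environments without an FMS license, where the persistentportdisable approach mentioned above is used instead of PDCM-based port blocking, a minimal sketch of the FOS CLI follows; the slot/port is hypothetical, and as noted later in this section these commands are not valid once FMS is enabled:
Example: portcfgpersistentdisable 1/12 - persistently disables an uncabled port so that it remains disabled across reboots and PORs.
Example: portcfgpersistentenable 1/12 - re-enables the port when it is eventually cabled and put into service.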

When blocking a port on a Brocade Gen4 or Gen5 switching device, an amber LED illuminates; this does not occur on most of the earlier switching devices.
Brocade switching device PDCMs are enforced by the zoning module. That is why Brocade FOS firmware requires that a switch have an active zoning configuration in order for PDCM settings to be useful. If only the default zoning mode (all access) has been implemented in the fabric, then there is no active zoning configuration because no zones/zone-sets have been activated, and PDCM changes can be set but they are not enforced. For hardware-level zone enforcement (and for PDCMs to work) customers need to be sure to have either HARD PORT or HARD WWN zoning enforcement. Customers can use the portzoneshow CLI command to display their zoning enforcement.
PDCMs are not supported by session-based zoning enforcement on a port, which occurs when there are overlapping Domain,Index (D,I) and WWN zones being enforced on the port. It can also occur in some other circumstances, such as in very large zoning configurations where all available zoning filters on an ASIC have been consumed.
PDCMs provide allowed/prohibited port-to-port connectivity at the hardware level. This is the strongest form of port isolation within a domain ID. Utilizing PDCM can make troubleshooting connectivity problems more difficult. When there is a connectivity problem, it is seldom that a user will consider that a PDCM prohibit might have been set somewhere. There are just so many other things that could also keep connectivity from occurring. Gen3, Gen4 and Gen5 devices allow F_Ports to be allowed/prohibited from other F_Ports, but prohibiting cannot be used on E_Ports.
When FMS is enabled, the CLI commands portcfgpersistentdisable and portcfgpersistentenable become invalid commands. The commands portdisable and portenable are the proper CLI commands to use with FMS. With FMS enabled, the port state following a POR must be determined by the IPL file on the switching device. In Brocade FOS, this is basically implemented via the Persistent state in the port configuration (Portcfg), the state being either OFF (unblocked in the IPL file) or ON (blocked in the IPL file). So if the IPL file has a port state set as blocked, Brocade FOS must set the Persistent Disable state in the Portcfg. Use the portcfgdefault command to turn off the persistent enable/disable attribute.

SWITCHING DEVICE TIME SYNCHRONIZATION
With Brocade device element managers, when users go to Configure > Operating Parameters, they have an option to set Date/Time parameters. If CUP is not being utilized on a switching device, then users should synchronize Date/Time using the element manager. If CUP has been implemented and is being utilized on a switching device, then users must not synchronize Date/Time using the element manager.
In the device element manager, if users check the Periodic Date/Time Synchronization box, this causes the date and time to be periodically updated with the date and time of the Network Time Protocol (NTP) server. Users do not want to do this when FMS is enabled and CUP is being used to manage the director. When a mainframe controls the director through the CUP, it periodically sets the director's clock, using its Coordinated Universal Time (UTC) service, to GMT. This ensures that all mainframe logs accurately reflect the same time and date settings.
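Where CUP is not in use and an external time source is preferred, a minimal sketch of pointing the switch at an NTP server from the FOS CLI follows (the IP address is hypothetical; the command and its fabric-wide behavior are summarized in the NTP notes below):
Example: tsclockserver "10.20.30.40" - configures the switch (and, from the principal switch, the fabric) to synchronize its clock from that NTP server.
Example: tsclockserver - with no argument, displays the currently configured clock server; LOCL indicates that the local switch clock is in use.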
A summary of the NTP behavior for Brocade devices follows:
If NTP is to be used, it has to be configured with the tsclockserver command in the CLI to point to the IP address of the NTP server.
When NTP is enabled:
The NTP configuration is pushed to the entire fabric when the configuration is changed on the principal switch. Initially, the NTP configuration, including the IP address, and the time distribution are sent. Thereafter, the time distribution occurs every 64 seconds from the principal switch.
The NTP configuration is distributed only at initial activation or when it is updated. This allows the other switches to assume control of the NTP function if the fabric should segment, or if a new switch becomes the principal switch.
FMS behavior when NTP is enabled:

The CUP code does not act on the Set Time-stamp Clock CCW sent from the host. The CCW is accepted and processed, but the call to actually set the time is not executed.
FMS behavior when NTP is disabled (when it is not configured and the local clock on the switch is used):
The time-stamp from the host is sent in UTC format, and the switch is required to use it as the official time.
Display of the time is implementation-specific and can adjust based on the time zone. FOS allows the time zone offset to be configured, which is applied to the UTC value sent by the host. The time zone offset is unique to each switch.

FILL WORDS AN 8 GBPS CONSIDERATION
One of the interesting byplays that occurred with the advent of 8 Gbps Fibre Channel is that it required a change to the way a switching device handles its idle time, the quiet time when no one is speaking and nothing is said across a link. In these periods of quiet contemplation, an FC switch will send a primitive character called Idles. When the speed of the link increased from 4 Gbps to 8 Gbps, the bit pattern used for these Idles proved to be unsuitable, so a different fill pattern was adopted, known as an ARB.
All of this came to intrude on our lives when it became apparent that some 8 Gbps storage devices, as well as the System z 8 Gbps CHPIDs, were having trouble connecting to 8 Gbps capable switching devices. This led to two things:
1. Brocade changed our firmware to better handle this situation.
2. IBM and other Brocade partners released several alerts regarding how to handle the connection of 8 Gbps capable devices to 8 Gbps capable Fibre Channel switches.
Prior to 8 Gbps, the fill word on switching devices was always Idle and had been for decades. When 8 Gbps was released, copper wire EMI emissions were too high; using ARBs instead of Idles helps to lower emissions. This really had no effect on fiber cables, just copper cables. There were also problems with synchronizing the signal during port login, which changing the fill word helped overcome.
The FC specifications were maturing at the time that Brocade, and other vendors, implemented the fill word for Gen4, 8 Gbps switching devices. The original FC-PI (Physical Interface) specification did not articulate what to do during initialization. As a result, different vendors implemented different solutions. IBM ships 8 Gbps host and storage ports defaulted to mode 1 (ARBs). There are other OEMs that ship 8 Gbps defaulted to mode 0 (Idles). At Brocade FOS and above, Brocade now offers four mode settings. Use the Command Line Interface (CLI) portcfgfillword to set the mode for the port login Link Initialization and as the primitive spacing character (fill word) between frames on a link:
Mode 0 - Idle/Idle - Sets IDLE in the Link Init and IDLE as the fill word
Mode 1 - Arb/Arb - Sets ARB(ff) in the Link Init and ARB(ff) as the fill word
Mode 2 - Idle/Arb - Sets IDLE in the Link Init and ARB(ff) as the fill word
Mode 3 - Try mode 1 first, then try mode 2
Example: portcfgfillword 1/8, 0 - changes the fill word for Slot 1, Port 8 to Idles.
Example: portcfgfillword 1/8, 1 - changes the fill word for Slot 1, Port 8 to ARB(FF).
HDS storage supports only mode 0 or 2. HDS recommends mode 2 for their 8 Gbps devices. Obviously, if a vendor asks for a specific mode, users should try that mode. It is important to note that the portcfgfillword CLI command is only for 8 Gbps switching devices and is not supported on any 16 Gbps port card, regardless of the SFP installed (8 or 16 Gbps).
The fill word still needs to be set on 8 Gbps products (Condor2 ASIC). This includes ports on the Brocade FX8-24 Extension Blade, even if it is installed in a Brocade DCX 8510.
Fill words are not required to be set on ports controlled by Condor3 ASICs (16 Gbps platforms). The 16 Gbps ASIC auto-detects which fill word it should use, and therefore there is no need for the CLI portcfgfillword command to be used. The Condor3 auto-sets the correct primitives to be used. Keep in mind that the FC8-32E and FC8-48E blades are Condor3 ASIC-based blades, although they are not allowed to be used for FICON connectivity.

When FX8-24 blades are used in Brocade DCX 8510 platforms, the 8 Gbps FC ports will have to have their fill word set through use of the portcfgfillword CLI command, as the FX8-24 uses the Condor2 ASIC.

INTER-CHASSIS LINKS (ICLS)
First-generation Inter-Chassis Link (ICL) connectivity is a unique Brocade Gen4 DCX and DCX-4S feature that provides short-distance connectivity between two Director chassis, a good option for customers who want to build a powerful core without sacrificing device ports for Inter-Switch Link (ISL) connectivity:
Inter-Chassis Links connect Brocade DCX and DCX-4S Backbones together with special ICL copper cables connected to dedicated ICL ports.
Each ICL connects core routing blades of two Brocade DCX chassis and provides the equivalent of 16 x 8 Gbps links in a Brocade DCX and 8 x 8 Gbps links in a Brocade DCX-4S without taking up chassis slots.
The Inter-Chassis Link (ICL) ports for the Gen4 Directors are special ICL ports on the Core Blades, so they use from two to four special ICL connectors and cables. The maximum cable distance for these high-speed, copper ICL links is 2 meters.
Figure 13
Now in its second generation, the Brocade optical ICLs based on Quad Small Form Factor Pluggable (QSFP) technology replace the original copper cables with MPO cables and connect the core routing blades of two Brocade DCX 8510 chassis:
The second-generation Inter-Chassis Link (ICL) ports for the Gen5 Directors are quad-ports, so they use MPO cables and a Quad-SFP (QSFP) rather than a standard SFP+.
Each QSFP-based ICL port combines four 16 Gbps links, providing up to 64 Gbps of throughput within a single cable.
Available with Brocade FOS v7.0 and later, Brocade offers up to 32 QSFP ICL ports on the Brocade DCX 8510-8 and up to 16 QSFP ICL ports on the DCX 8510-4.
The optical form factor of the Gen5 Brocade QSFP-based ICL technology offers several advantages over the original copper-based ICL design in the Gen4 Brocade DCX platforms: First, Brocade has increased the supported ICL cable distance from 2 meters (FOS 6.0) to 50 meters (FOS 7.0) and now (FOS 7.1+) to 100 meters, providing greater architectural design flexibility. The 100-meter ICL is supported when using 100-meter-capable QSFPs over MPO cable only. Second, the combination of four cables into a single QSFP provides incredible flexibility for deploying a variety of different topologies.
In addition to these significant advances in ICL technology, the Brocade DCX 8510 ICL capability still provides a dramatic reduction in the number of ISL cables required, a four-to-one reduction compared to traditional ISLs with the same amount of interconnect bandwidth. And since the QSFP-based ICL connections reside on the core routing blades instead of consuming traditional ports on the port blades, up to 33 percent more FC ports are available for server and storage connectivity.
Figure 14

A license is required to deploy ICLs. Check with the Brocade and/or storage vendor sales team.
When deploying ICLs in a 2- or 3-node configuration, there is no requirement that all of the Brocade Gen4 and Gen5 Director chassis must be on the same version of firmware. It is a best practice to keep all switches in a fabric no more than one major release level apart.
For FICON, at FOS 7.0 or earlier, ICLs can only be chained together in series; they cannot be arranged in a mesh or a ring. For FICON, at FOS 7.1 or later, ICLs can be arranged in a three-Director ring or in a three-Director series.
Gen4, 8 Gbps Directors can be ICL'd together but cannot be ICL'd to Gen5, 16 Gbps Directors. The ICL connections and cables are different.
Brocade DCX 8510 ICL cables connect ICL ports over optical cables in the following manner: Brocade DCX 8510 16 Gbps Fibre Channel QSFPs require MPO 1x12 ribbon cable connectors and MPO ribbon fiber cable, limited to 50 meters at FOS 7.0 and 100 meters at FOS 7.1. Although the connectors have 12 lanes in a row, the 4x16 Gbps Fibre Channel QSFP uses only the center eight lanes. The remaining four lanes are unused. Plug orientation does not matter; because the plug is polarized, it takes care of itself, just like RJ-45 does. Cables are available from Molex (PN M, M, and M) and Corning.

INTER-SWITCH LINKS (ISL)
There are three common terms that are used when doing switch-to-switch connections: E_Port (Expansion Port), ISL (Inter-Switch Link) and Cascaded Link (Cascading). They are all the same. An ISL is a link joining two Fibre Channel switches through E_Ports.
For FOS 7.1 or higher environments, diagnostic port (D_Port) can be configured within a FICON logical switch, and can be used to validate an ISL in any logical switch (FICON or FCP). Testing an ISL link as a D_Port connection before deploying that link as an ISL is a best practice. For additional information see the Diagnostic Port (D_Port) section of this document.
For ISLs, an Extended Fabrics license is required on each chassis participating in the long-distance connection if it is 10 km or greater. For dark fiber connections, users need to deploy long wave optics and single-mode cabling.
A best practice is to use the Brocade DCFM/Brocade Network Advisor Cascaded FICON Fabric Merge wizard to create cascaded fabrics. The merging of two FICON fabrics into a cascaded FICON fabric may be disruptive to current I/O operations in both fabrics, as the wizard needs to disable and enable the switches in both fabrics.
Make sure that Fabric Binding is enabled. Make sure that a security policy (SCC) is in effect with strict tolerance.
On M-Series, Rerouting Delay must be set to disabled. On B-Series, In-Order-Delivery (IOD) must be enabled. Issue the iodset CLI command or use Brocade Network Advisor to enable IOD. The default behavior when In-Order Delivery (IOD) is enabled is to delete routes when a route change occurs, wait for 500 microseconds (ms), then update the route definition and re-enable routes. This behavior results in frame discards for frames that arrive between the time the routes are deleted and

the routes are re-enabled. The old McDATA ReRoute Delay behaved similarly but not exactly the same; however, the end result is the same: frames are discarded and device-level recovery is required.
The Lossless mode for Dynamic Load Sharing (DLS) was created to improve fabric behavior when fabric events occurred that required updates to the ISL route programming. Thus when Lossless is enabled, three modifications to the FOS behavior are added to the process. First, ingress frame flow is paused by holding off the return of R_RDY or VC_RDY. Second, the delay between deleting routes and re-enabling routes became 10 ms. Third, ingress frame flow is unpaused after the routes are re-enabled. It is the pause/unpause behavior that prevents frame loss, and it is the delay that ensures in-order delivery.
When IOD and Lossless DLS are both enabled, the Lossless behavior takes precedence. That is, when routes are updated, the process is to pause the ingress, delete routes, delay 10 ms, update routes, re-enable routes, and unpause the ingress. Effectively, the IOD setting becomes superfluous. However, it is still recommended to enable IOD for FICON environments. The reason for maintaining both the Lossless mode of DLS and IOD enabled configuration settings is to support potential changes in FOS firmware implementation in the future.
Make sure every switch has a different Domain ID. Make sure that all Domain IDs are insistent.
Port Identifiers (called PIDs) are used by the routing and zoning services in FC fabrics to identify ports in the network. All devices in a fabric must use the same PID format, so keep in mind that when someone adds new equipment to a FICON fabric, they might need to change the PID format if it is legacy equipment. The PID is a 24-bit address built from the following three 8-bit fields: Domain, Area_ID, and AL_PA. Make sure the PID format is consistent (the same) across all switches in the fabric. Brocade FOS v6.0 and higher supports only PID format 1 (Core PID), which supports up to 256 ports per switch. (See the Brocade Fabric OS Administrator's Guide.) McDATA and Brocade used different PID formats.
It is a best practice to not mix tape I/O and Direct Access Storage Device (DASD) I/O across the same ISL links, as the tape I/O often uses the full capacity of a link. This leaves very little for the DASD I/O, which can become very sluggish in performance. For additional information see the ISLs with Concurrent Disk, Tape and/or FCP I/O Traffic section of this document. For M-Series devices, use the Preferred Path feature, or its PDCM allow/prohibit, to influence which I/O frames are allowed to traverse a specific ISL link. For B-Series devices, use Traffic Isolation Zones to influence which I/O frames will traverse a specific ISL link.
Use the CLI command portcfgshow slot/port to verify port settings.
If all of the switches in a fabric are Brocade, then users should normally set R_RDY mode to off, and use VC_RDY buffer credit acknowledgements, so that the enterprise can take advantage of the virtual channel technology within their ISL links. If the switch on the other end of your ISL is non-Brocade (or McDATA), then the user will need to be sure that R_RDY mode is ON and VC_RDY mode is off. The only way to turn off VC_RDYs is to start with Quality of Service (QoS) OFF and then turn on ISL R_RDY mode. VC_INIT is part of the portcfglongdistance CLI command. If users do not start with QoS OFF, the switch might act like it turned on ISL R_RDY mode, while in reality it still is using VC_RDYs.
Users might want to set each of their ISL links as shown below:
Hard-code all E_Ports to the same speed: portcfgspeed 1/0, 1
Use the following CLI command to disable QoS: portcfgqos disable
Use the following CLI command to disable BC recovery: portcfgcreditrecovery disable
Use the following CLI command to enable R_RDY acknowledgments: portcfgislmode 1/0, 1
Disable trunking on the port: portcfgtrunkport 1/0, 0
To disable R_RDY mode (because all the switches in a fabric are Brocade), use the CLI to do so: portcfgislmode 1/0, 0 - disables R_RDY acknowledgments.
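Where ISL buffer credits are a concern (for example, the small-frame CUP polling discussed earlier, or compressed frames on 10/16 Gbps links), one way to review what has actually been allocated and consumed is sketched below; the slot number is hypothetical and the output columns vary by FOS release:
Example: portbuffershow 1 - displays, for the ports in that slot's port group, the configured buffer allocation, buffers in use, and the buffers remaining in the group.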

There is one deployment where a user might need R_RDY mode even between two Brocade switches. If an enterprise is using a WDM device for ISL distance extension, some WDM devices take an active role in port buffer handling. These devices offload this task from the Brocade switch, and they will require the R_RDY setting to be on. See the discussion about TDM, CWDM and DWDM below. Users must have the same settings on both ends of their ISL (whether R_RDY or VC_RDY).
It is not a best practice to cascade between M-Series and B-Series, but sometimes customers must do this. It is supported by IBM for FICON. It is only supported at Brocade FOS v6.4.2 and below; Brocade FOS v7.0 does not support this interoperability. The fabrics must be using R_RDY buffer credit acknowledgements as described above. If users do this, then they need to change the FCID address on the DCX to have an AL_PA of 13; otherwise they need to change the M-Series to have an AL_PA of 00. One of these must be done. The AL_PA field, until NPIV, was used as a vendor code; McDATA was 13 and Brocade was 00. Both must be either 13 or 00 in order for the directors to cascade and successfully merge together.
From FOS 7.0 to FOS 7.0.0d, when using 10 Gbps or 16 Gbps ISL links, any FCP frames traversing ISL links can be compressed and encrypted; FICON ISL compression and encryption (IPSec) had not been certified at that time. At FOS 7.1 or higher, when using 10 Gbps or 16 Gbps ISL links, any FICON or FCP frames traversing ISL links can be compressed and encrypted. Compression/encryption will cause the average frame size of both FCP and FICON frames to become smaller and therefore might require adjustment to the number of buffer credits allocated for servicing ISL links. This new compression/encryption functionality is only available for links that are attached to Condor3 ASICs (Brocade 8510 and 6510 switches). For additional information see the Integrated ISL Compression and Encryption section of this document.
Considerations when using TDM, CWDM and DWDM:
For xWDM (Wavelength Division Multiplexer) and FCIP long-distance connections, users can deploy either long wave or short wave connections to the xWDM/FCIP devices that are located in the local and also in the remote locations.
From an FC perspective, multiplexers like TDM, CWDM or DWDM are either transparent or non-transparent. Transparent in this context means that:
1. They don't appear as a device or switch in the fabric.
2. Everything that enters the multiplexer on one site will come out of the (de-)multiplexer on the other site in exactly the same way.
While the first point is true for most of the solutions, the second point is the crux of the matter. The word "everything" implies that all the traffic comes out in an orderly fashion, not only the frames, but also the ordered sets. So it should be really the same. Bit by bit, exactly the same. This means that if the multiplexing solution can guarantee the transfer of only the frames (and not the ordered sets), it is non-transparent.
This could become a significant problem for the user: An ISL does not only transport "user frames" (CCWs and data over FC frames from actual I/O between a CHPID and a device) but also a lot of control primitives (the ordered sets) and administrative communications to maintain the fabric and distribute configuration changes (RSCNs).
In addition, there are techniques like Virtual Channels and/or QoS (Quality of Service) to minimize the influence of different I/O types, as well as techniques to maintain the link in a good condition, like fill words for synchronization and/or buffer credit recovery. All these techniques rely on a transparent connection between the switches. If users don't have a transparent multiplexer, they have to ensure that these techniques are disabled and, of course, they cannot benefit from these advantages.
Problems start when a user tries to deploy these techniques but their multiplexer doesn't meet the requirements. What can happen?
First example: Buffer credit recovery cannot work if IDLEs are used as a fill word on a link. If a multiplexer cuts out all the fill words and just inserts IDLEs at the other site (some TDMs do that), or if the link is configured to use IDLEs, it will start toggling the link, with probable disastrous impact for the I/O across the whole fabric.

A second example is less obvious, and it has to do with Virtual Channels (VCs). When a physical ISL link is virtualized (its buffer credits segmented) into several logical links, Brocade calls them Virtual Channels. FC frames still pass across the ISL one by one, but the buffer management is actually attached to the VCs. Each VC has its own buffer-to-buffer credits (BCs). The multiple VCs on a physical link provide various services: there are VCs solely used for administrative communication, like VC0 for Class_F (Fabric Class) traffic, and then there are several VCs dedicated to "user traffic". Which VC is used by a certain frame is determined by the destination address in its header. A modulo operation calculates the correct VC over which to send the I/O frame. The advantage of that is to minimize head-of-line blocking. For example, a slow draining device should not completely block an ISL just because no BC acknowledgements are sent back to enable the switch to send the next frame over to the other side. On a typical N_Port-to-F_Port attachment (e.g. a CHPID attached to a switch port), BCs are returned to the transmitter as R_RDYs. If VCs are in use, the BCs are returned to the transmitter as VC_RDY acknowledgments. If a multiplexer does not support VC_RDY acknowledgments, as well as ARB(ff) fill words (because it's not transparent), it cannot utilize VCs, and R_RDYs will be used instead to acknowledge BCs back to the transmitter. The result will be that the user will actually have a non-virtualized physical link upon which Class_F and "user frames" (Class_3 and Class_2) will share the same BCs, and the switches will always prioritize Class_F higher than user traffic. If, for any reason, a user's fabric sustains fabric state changes or has one or more slow draining devices, performance and throughput will begin to suffer for all of the users of the ISL, since these diverse types of I/O traffic will interfere with each other:
- BCs drop to zero (tim_txcrd_z is greater than zero in the portstatsshow CLI command).
- I/O traffic gets stalled.
- Frames will be delayed and then dropped after the 500 ms ASIC hold time (er_rx_c3_timeout is greater than zero in the portstatsshow command).
- Error recovery will generate even more traffic and will have an impact on the applications, which might become visible as additional frame timeouts.
- The user can suffer performance degradation, lost paths and even access losses.
It is probably an even worse scenario for the user when the multiplexer in use actually is transparent but the wrong settings have been enabled. So if a user encounters such problems or other similar issues and they use a multiplexer on the affected paths, check whether the multiplexer is transparent (with the compatibility matrices below) and whether the correct configuration has been applied:
- Refer to the FOS Administrator's Guide that discusses the currently deployed firmware.
- Check one of the compatibility matrices below: the FOS 7.x Compatibility Matrix or the FOS 6.4 Compatibility Matrix.
For DWDM that is bit-transparent (e.g. ADVA Enterprise modules without Time Division Multiplexing), users can set the Brocade switch so that it sends VC_RDY acknowledgments through the use of CLI commands such as the following, assuming a 50 km (31 mile) distance:
- Configure an ISL port for buffer credits and acknowledgments: portcfglongdistance 1/0 LS
- Configure the port for VC_RDY mode: portcfgislmode 1/0, 0; this enables VC_RDY acknowledgments.
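As a purely illustrative sketch of the modulo assignment described above (the actual Condor ASIC mapping of destination addresses to Virtual Channels is internal and is not documented here, so the VC count and mapping below are assumptions), the snippet shows how a modulo of the destination address can spread traffic across a set of data VCs so that a single slow-draining destination only consumes the credits of its own VC.

```python
# Purely illustrative: spreading frames across Virtual Channels by taking a
# modulo of the destination address. VC0 is reserved for Class_F traffic;
# the data-VC count and mapping used here are assumptions, not the real ASIC logic.
NUM_DATA_VCS = 4
FIRST_DATA_VC = 2

def pick_vc(destination_id: int) -> int:
    # Deterministic mapping: a given destination always lands on the same VC.
    return FIRST_DATA_VC + (destination_id % NUM_DATA_VCS)

# Hypothetical destination FC addresses (D_IDs), for illustration only.
for d_id in (0x010401, 0x010402, 0x010403, 0x010404):
    print(hex(d_id), "-> VC", pick_vc(d_id))
# Each destination maps to exactly one VC, so a slow-draining destination
# exhausts only that VC's buffer credits instead of the whole link's.
```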
In most cases, if the customer is using a DWDM device (such as older Nortel or Huawei equipment) that uses TDM cards (which split the laser signal), then those ISL links cannot trunk together. However, Adva is a highly qualified DWDM vendor whose Enterprise TDM cards have been engineered, and qualified, to work well with Brocade Trunking. When connecting to an extension device that does not support ARB primitives (such as some TDM products), the following must be considered: most TDM requires that a user configure their DWDM links to use R_RDYs and not VC_RDYs.

The only way to turn off VC_RDYs is to start with Quality of Service (QoS) OFF and then turn on ISL R_RDY mode. VC_INIT is part of the portcfglongdistance CLI command. If users do not start with QoS OFF, the switch might act like it turned on ISL R_RDY mode, while in reality it is still using VC_RDYs. Users might want to set each of their ISL links as shown below:
1. Hard-code all E_Ports to the same speed: portcfgspeed 1/0, 1
2. Use the following CLI command: portcfgqos disable, to disable QoS.
3. Use the following CLI command: portcfgcreditrecovery disable, to disable BC recovery.
4. Use the following CLI command: portcfgislmode 1/0, 1, which enables R_RDY acknowledgment.
5. Disable trunking on the port: portcfgtrunkport 1/0, 0
If users want to disable R_RDY mode, then use the CLI to do so: portcfgislmode 1/0, 0 disables R_RDY acknowledgments.
Common problems when trying to establish switch-to-switch connections: A user might have a valid configuration but is unable to establish one or more ISL links between two switching devices:
- The pathinfo CLI command can be used to troubleshoot that issue.
- The portdporttest CLI command can be used to troubleshoot this issue.
- Brocade Network Advisor 11.3 and higher can run D_Port tests from its FC Troubleshooting menu.
Some customers attempt to deploy cascading after they have established a local switching environment, and occasionally these users, and others, cannot seem to establish their switch-to-switch connectivity. Below are the issues that usually keep that cascading connectivity from occurring:
- All connectivity ports must be zoned, but E_Ports (ISLs) should not be in a zone.
- There are CLI commands that can disable ports, so users should make sure that all of the ports are enabled and ready for I/O traffic. Go to the Command Line Interface (CLI) and issue a switchshow command to see how ports are configured: switch:admin> switchshow
- Users must be coding two-byte link addressing in their IOCP for any path that is going to utilize a cascaded fabric (ISL links). Check this coding over thoroughly, as this is an area in which it is easy to code in a mistake or two.
- The switch must have Insistent Domain ID set: go to Brocade Network Advisor; double-click on a switching device; click on Switch Admin in the upper left of the Element Manager screen; choose the CONFIGURE tab; make sure Insistent Domain ID is checked.
- The switch must have fabric binding set: go to Brocade Network Advisor; double-click on a switching device; click on Switch Admin in the upper left of the Element Manager screen; choose the SECURITY POLICIES tab; make sure that there is an SCC policy active. The SCC policy must be in STRICT mode.
- If the user has a zone for local FICON ports (1-byte addressing) and a different zone for cascaded FICON ports (2-byte addressing), then be sure that all ports are zoned correctly.
- If the ISL distance is 10 km or more, then users must be sure to add the Extended Fabrics license to each switch and then add buffer credits to optimize those links. Use the portcfglongdistance CLI command, using LS mode, to configure enough buffer credits. At FOS 7.0.0d and below, triple the actual distance of the ISL links in this command to be sure that links acquire enough buffer credits. At FOS 7.1 and higher, use the distance and framesize options to be sure that users optimize the buffer credits on the ISL links.
Long Distance ISL improvements at FOS 7.1: Enhancements have been made to allow users to optimize performance on FC long distance ISLs:
- Two new options for the portcfglongdistance CLI command: one option to configure the number of buffers, and another option to configure the frame size for LD and LS modes.
- Display of a port's average buffer usage and average frame size in portbuffershow CLI output.
- A new CLI command, portbuffercalc, to calculate the number of buffer credits required on a long distance port, by providing the distance, speed and average frame size.
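As a hedged illustration of the arithmetic behind these options (the portbuffercalc command remains the authoritative tool), the sketch below estimates how many buffer credits a long-distance ISL needs, given the link distance, the link data rate and the average frame size; the 5 µs/km fiber latency figure quoted later in this guide and the rule that outstanding credits must cover a full round trip are the stated assumptions.

```python
# Hedged sketch: estimate buffer credits for a long-distance ISL.
# Assumptions: ~5 us of latency per km of fiber (one way), and enough credits
# must be outstanding to cover a full round trip so the transmitter never
# stalls while waiting for acknowledgments to return.

def estimate_bb_credits(distance_km: float, data_rate_mb_s: float,
                        avg_frame_bytes: float) -> int:
    round_trip_s = 2 * distance_km * 5e-6                  # ~5 us per km, out and back
    frame_serialization_s = avg_frame_bytes / (data_rate_mb_s * 1e6)
    # Credits needed = number of frames "in flight" during one round trip.
    return int(round_trip_s / frame_serialization_s) + 1

# Example: a 50 km ISL at ~800 MBps (8 Gbps) with full-size 2,112-byte frames.
print(estimate_bb_credits(50, 800, 2112))   # roughly 190 credits
# Same link, but compression has halved the average frame size on the wire:
print(estimate_bb_credits(50, 800, 1056))   # roughly twice as many credits
```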

ISLS AND FCIP LINKS
Fibre Channel over IP (FCIP or FC/IP, also known as Fibre Channel tunneling or storage tunneling) is an Internet Protocol-based transport. An FCIP entity functions to encapsulate Fibre Channel frames and forward them over an IP network. FCIP entities are peers that communicate using TCP/IP. Some would say FCIP was incorrectly named and should have been FC over TCP, since it is the TCP protocol that ensures that frames arrive as expected over the IP link without being discarded for some reason. FCIP technology overcomes the distance limitations of native Fibre Channel, enabling geographically distributed storage area networks to be connected using existing IP infrastructure, while keeping fabric services intact. The TCP over IP protocol architecture uses completely different mechanisms than the Fibre Channel architecture to transport frames across a link. Within those differences lies the ability of TCP to transport frames over very long distances. When using FCIP for frame transport across a link, the Fibre Channel fabric, and its devices, remain unaware of the presence of the IP network. For FCIP 1GbE and 10GbE tunnels the upper limit for FICON extension is around 200 milliseconds (ms) round-trip time. If the round-trip time is greater than that, and the IP links are not 100 percent clean, users will have issues getting the FICON long-distance paths online. The speed of light, c, is about 3 x 10^8 m/s in a vacuum. In fibre it is about 2/3 c, or 2 x 10^8 m/s. That comes out to approximately 5 µs of latency per km of distance when using optical fiber cabling. If using this number for latency calculations, remember to calculate the round-trip distance and not just the one-way distance. For FCIP, customers can choose their preferred compression methodology. Brocade and its representatives do not quote compression ratios other than Brocade lab testing numbers and the statement, "Your mileage will vary."
- Compression Mode 0: No compression.
- Compression Mode 1: Static Huffman Encoded LZ at about 2:1 compression; it can accommodate the full line ingress rate into a blaster, which provides 20 Gbps on both the Brocade FX8-24 and the Brocade 7800.
- Compression Mode 2: Dynamic Huffman Encoded LZ at about 2.5:1 compression; it can accommodate approximately 8 Gbps into the Cavium processor.
- Compression Mode 3: Deflate at about 4:1 compression; it can accommodate approximately 2.5 Gbps into the Cavium processor.
It is a best practice to use the Brocade 7800 and Brocade FX8-24 FCIP Trunking features to overcome the limitation of one Ethernet interface, one IP address, and one FCIP tunnel. In Brocade FOS 6.3 and later, an FCIP tunnel is created with multiple FCIP circuits over different IP interfaces, to provide WAN load balancing and failover recovery in the event of a limited WAN outage. This provides a highly redundant WAN configuration for all FICON or FCP emulation technologies with Brocade FOS. See the section on Buffer Credit Recovery, which is important if ISL link performance is to be maintained. The Brocade USD-X and Edge M3000 FCIP extension devices had the common property of connecting geographically dispersed fabrics together without merging those fabrics into a single entity. They allowed each geographically dispersed data center to be isolated from the other. Brocade FX8-24 FCIP blades and the Brocade 7800 FCIP Extension Switches do not provide this isolation. When replacing USDx/M3000 with Brocade FX8-24/Brocade 7800, the two data center I/O fabrics are merged into a single common fabric.
Since the fabrics are merging, they must both share commonalities, such as zone sets, in order for the merge to be successful. Careful planning and preparation for this fabric merging is required. If a customer wants to provide as much isolation as possible between geographically separated data centers while providing data traffic connectivity between sites, they can place the Brocade FX8-24 blades in their own virtual fabric within each chassis in the connected data centers. The virtual fabrics containing the Brocade FX8-24 blades will create a single, geographically broad fabric, but the other virtual fabrics will remain isolated from each other. This also allows different routing policies to be defined on the virtual fabrics that contain the Brocade FX8-24 blades.
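To make the distance arithmetic above concrete, here is a hedged sketch (using the ~5 µs/km one-way fiber latency figure quoted earlier) that converts a one-way link distance into round-trip propagation delay; the example distances are illustrative only and ignore any latency added by the FCIP/WAN equipment itself.

```python
# Hedged sketch: round-trip propagation delay over optical fiber,
# using the ~5 us per km one-way latency figure quoted in this guide.
US_PER_KM_ONE_WAY = 5.0

def round_trip_ms(distance_km: float) -> float:
    # Round trip = out and back, so double the one-way distance.
    return 2 * distance_km * US_PER_KM_ONE_WAY / 1000.0

for km in (100, 300, 700):                     # illustrative distances only
    print(f"{km} km one-way -> {round_trip_ms(km):.1f} ms round trip")
# 100 km -> 1.0 ms, 300 km -> 3.0 ms, 700 km -> 7.0 ms of fiber propagation
# delay, before any FCIP processing or WAN queuing is added.
```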

ISLS WITH CONCURRENT DASD, TAPE AND/OR FCP I/O TRAFFIC
Tape and disk (DASD) use data differently, which requires careful consideration about how users deploy disk and tape I/O across ISL links. Disk is typically stored using small block sizes (6-8K) so that a minimum of cache will be used when records are accessed via the control unit. Disk I/O is usually very important to normal applications and transactions and would probably be viewed as high priority I/O by most shops. Disk is bursty: it does a little work, then stops; does a little work, then stops. There are lots of gaps in its utilization of an FC link. That is why Fan In-Fan Out is so useful for disk host and storage links. A single disk device seldom uses up much of the bandwidth of a link. If data blocks are stored at 8K on average on disk, then it will take approximately seven frames to pass the data and send the status. That is only averaging about 1,170 bytes of data per frame payload. One would think it would take only four or five frames to pass 8K of data, but because of how the FICON protocol works it takes more than that. Tape is typically stored using large block sizes (up to 128K, for example) so that the tape read and write heads will be continuously streaming and not stopping and starting. Tape I/O is sometimes viewed as important I/O, but it is often being created just for disaster recovery/business continuance purposes and so does not affect day-to-day operations. Physical tape drives stream data and are not bursty. A mainframe FICON Express8 or FICON Express8S CHPID for tape, using Command Mode FICON, can only push a maximum of 620 MBps, which means that Fan In-Fan Out does not work for tape. Tape drives are just about the only device that, by itself, can use up most of the bandwidth capability of a link. Tape drives have an IDRC chip to compress the data before it gets written to the tape drive. Compression is usually about 2:1, but is often better. This means that users must stream at least 2X the data down the link to the IDRC chip, which then compresses the data and places it on the tape media. If a tape drive has a sustained rate of 185 MBps, then a user would want to stream 370 MBps of data to the drive down the link to account for the eventual compression. It might take as many as 185 frames of data every millisecond (370 MBps is roughly 370 x 1024 bytes per millisecond; divided by 2,048 bytes per frame, that is 185 frames) to keep the tape drive streaming. So each time a read or write is done there are an enormous number of tape data frames that are concatenated together and sent in a very short period of time down a link and down ISL links. When a user attempts to combine disk data traffic and tape data traffic over an ISL link, it is the disk frame I/O traffic that is going to suffer performance degradation. Since its nature is to be bursty, almost every time it tries to place a frame on the link it is going to find that tape frames are in the way, and it will have to queue up the disk frames behind possibly hundreds of tape frames before getting processed. And because Fan In-Fan Out is used for disk links, there will be many disk I/O processes attempting to get on that ISL link with the tape data. FCP I/O traffic might also need to traverse ISL links. Can this data traffic share ISL links with FICON? FCP I/O traffic from, for example, Linux on System z using NPIV, and FCP replication I/O traffic can be very different. FCP I/O traffic using NPIV-enabled links can be very robust and bandwidth intensive if properly deployed.
Typical SAN FCP traffic, from open systems servers, can be heavy if optimized fan in-fan out is deployed. Replication I/O traffic can be heavy if many changes are being made to local disk data sets. In any of the cases above it would be wise to utilize independent ISL links rather than mixing this high intensity I/O traffic with typical FICON disk I/O traffic across the same ISL links. At FOS 7.1 and higher, Brocade provides 16 Gbps ISL links that can also be deployed with compression and encryption for FCP and FICON. At publication, 10 Gbps links did not yet support compression/encryption. But 16 Gbps ISL link compression might help, just a little bit, to get the disk frames passed sooner. A good amount of disk traffic can already be compressed (HSM and DB2, for example) before it gets to the I/O links, so ISL compression/encryption will not affect those frames much. Tape data is not compressed (except on the device), so it will be uncompressed data when it gets to the I/O links. Compressed/encrypted tape data will require fewer frames. Disk frames should be able to sneak onto ISL links just a little bit quicker between the blocks of tape data that are being sent.
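The frame-count arithmetic behind this disk-versus-tape comparison can be sketched as follows; the 2,048-byte payload per frame, the ~8 KB disk block taking about seven frames, and the 185 MBps tape drive with 2:1 IDRC compression are the figures used in this guide, and the numbers are illustrative rather than exact.

```python
# Hedged sketch of the disk-vs-tape frame arithmetic used in this section.
FRAME_PAYLOAD_BYTES = 2048          # approximate usable data payload per FC frame

# Disk: an 8 KB block that, per the guide, takes about seven frames once
# command and status traffic is included, i.e. ~1,170 bytes of data per frame.
disk_block_bytes = 8 * 1024
frames_per_disk_block = 7
print(disk_block_bytes / frames_per_disk_block)      # ~1170 bytes per frame payload

# Tape: a 185 MBps drive with ~2:1 IDRC compression wants ~370 MBps of
# uncompressed data streamed at it, which is roughly 370 KB every millisecond.
tape_native_mbps = 185
idrc_ratio = 2
stream_mbps = tape_native_mbps * idrc_ratio          # ~370 MBps down the link
bytes_per_millisecond = stream_mbps * 1024           # ~370 KB per millisecond
print(bytes_per_millisecond / FRAME_PAYLOAD_BYTES)   # ~185 full frames per millisecond
```

The contrast in these two numbers is the point: a handful of disk frames arrive in bursts, while tape pushes a continuous wall of full-size frames, so shared ISLs queue the disk frames behind the tape stream.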

Active disk I/O is still going to suffer performance problems if fighting active tape I/O for room on an ISL link! Most customers are very concerned about the performance of their disk I/O. Anytime that is the case, the user needs to keep disk I/O and tape I/O away from each other on ISL links. Now, if the user only has tape active for a few hours each day across the ISL links, they might find that they can combine these data flows with only minimal disk I/O performance degradation. That is just not usually the way data centers work. DWDM links (and other network links) are expensive. It is a balancing act. Can a user afford a potential decrease in overall disk performance across the enterprise in order to save money on long distance links? For financial institutions this will often be a no: ATMs and tellers need very fast response to provide great customer service, and financial brokers do trillions of dollars per day and cannot afford any I/O delay. For insurance, transportation, energy, government, etc., each knows how their market works and whether the cost of additional long distance links will be worth it to preserve performance. Regardless of what financial decision a user makes regarding deploying additional DWDM (or other network links), it is, and will probably always be, a best practice (regardless of FICON or FCP) to keep disk I/O and tape I/O away from each other across long distance links.

DIAGNOSTIC PORT (D_PORT)
It has been said that 50% of fabric problems are related to bad SFPs and/or bad cables. D_Port helps catch these problems before they affect production traffic. At FOS 7.0 and higher, Brocade Condor3 based switches deliver enhanced diagnostic capabilities in the form of a new port type, called the Diagnostic Port (D_Port). FOS v7.0.0a and later support the execution of D_Port tests concurrently on up to 8 ports on the switch. FOS 7.0.0d and lower does not support D_Port for FICON fabrics, but D_Port is supported on FCP-only ISL links. FOS 7.1 and higher does support D_Port for FICON fabrics, FCP fabrics and Protocol Intermixed fabrics. These diagnostic and recovery features will enable smooth operation of metro-connected FICON and FCP fabrics. Initially supported only for ISLs, the D_Port is able to perform electrical as well as optical loopback tests, and is also able to perform link-distance measurement and link saturation testing:
- Identify and isolate SFP and cable problems.
- Reduce the time to successfully deploy an extended fabric.
- Non-intrusively verify transceiver and cable health.
- Ensure predictable I/O traffic performance over ISL links.
- Can test 10 Gbps or 16 Gbps SFPs that will be deployed as ISL connections.
D_Port can test the saturation of 16 Gbps ISL links, but when doing these tests it does not compress any of the test frames. So this test is of no help when users are trying to determine the number of buffer credits they need on 16 Gbps FICON ISLs when compression/encryption is enabled. The D_Port does not provide any I/O rates or percentage of path utilization; it only scores a PASS or FAIL. D_Ports can also be used to test active ISL links. However, the link must first be taken down in order to enable the D_Port configuration and tests. Brocade 16 Gbps SFP+ optics support all D_Port tests, including loopback and link tests. The accuracy of the 16 Gbps SFP+ link measurement is within 5 meters. At the time of publication, Brocade 10 Gbps SFP+ optics do not support the loopback tests, but do support the link measurement as well as link saturation tests.
These 10 Gbps optics provide link measurement accuracy to within 50 meters. For FOS 7.0.0d or lower environments, a diagnostic port (D_Port) cannot be configured within a FICON logical switch, but it can be used to validate an ISL in other logical switches. FICON customers can move the ISL port to a non-FICON logical switch and validate the link in that logical switch before moving it back to the FICON logical switch. Once a future ISL link has passed its D_Port tests, that port can be reconfigured as an E_Port.

Figure 15

D_Port Improvements at FOS 7.1:
- D_Port testing on the optical ICLs of the DCX 8510 platforms. This support, however, does not include electrical and optical loopback functionality, as these are not supported by the QSFPs used on the optical ICL ports.
- New D_Port test options: users can specify the number of frames, frame size, test duration, etc.
- Support of D_Port is extended to R_RDY flow control mode. The R_RDY mode is useful for active DWDM links that do not work in VC_RDY or EXT_VC_RDY flow control modes. A new sub-option, -dwdm, is added to the portcfgdport --enable CLI to configure D_Port over active DWDM links. The -dwdm option will not execute the optical loopback test while performing D_Port tests, as active DWDM links do not provide the necessary support to run optical loopback tests.

PORT DECOMMISSIONING
At FOS 7.0 the Port Decommissioning feature became available for use on Brocade Gen4 and Gen5 switching devices. This function provides an ability to gently remove the traffic from an ISL link, reroute that I/O traffic onto another link, and then bring that ISL link offline. This is all done non-disruptively. At this FOS level Port Decommissioning is done only through CLI commands. Use the CLI portdecom command to allow traffic on an ISL to complete before taking that ISL port offline. At FOS 7.1 and higher, Port Decommissioning can be accomplished either through CLI commands or by using Brocade Network Advisor 12.0 or higher. This feature can automatically coordinate the decommissioning of ports in a switch, ensuring that ports are gracefully taken out of service without the unnecessary interruption of service and triggering of automatic recovery functionality that may occur during manual decommissioning of ports. Port Decommissioning provides an automated mechanism to remove an E_Port from use (decommission) and to put it back in use (recommission). In the future Brocade plans to offer Port Decommissioning for F_Ports as well as E_Ports. This feature identifies the target port and communicates the intention to decommission or recommission the port to those systems within the fabric affected by the action. Each affected system can agree or disagree with the action, and these responses are automatically collected before a port is decommissioned or recommissioned. The local switch and the remote switch on the other end of the E_Port or F_Port must both be running a Fabric OS level that supports this feature. There are a few restrictions to take note of when trying to utilize Port Decommissioning: Users must enable Lossless DLS on both the source and destination switches before they can decommission an E_Port.

Port commissioning is not supported on links configured for encryption or compression. Port commissioning is not supported on ports with DWDM, CWDM, or TDM connections. Where ISL link aggregation (trunking) is provided for in the fabric, decommissioning of a port in a link aggregation group may be performed if there are other operational links in the group. Decommissioning of a non-aggregated port (or the last operational port in a link aggregation group) may be performed if there is a redundant path available.
Decommissioning an E_Port using Brocade Network Advisor:
- E_Port decommissioning requires that the lossless feature is enabled on both the local switch and the remote switch.
- Fabric tracking must be enabled to maintain the decommissioned port details (such as port type, device port WWN, and so on). Do not accept changes in the Management application client.
- Using Brocade Network Advisor, select the E_Port, then select Configure > Port Commissioning > Decommission > Port.
- While decommissioning is in progress, a down arrow icon displays next to the port icon in the Product List. Users can view the port commissioning results in the deployment reports. When the decommission is complete, an application event displays in the Master Log detailing success or failure.
Recommissioning an E_Port using Brocade Network Advisor:
- Users do not need to enable Lossless DLS before they recommission an E_Port, but having Lossless DLS enabled is always a best practice.
- Using Brocade Network Advisor, select the E_Port, then select Configure > Port Commissioning > Recommission > Port.
- While recommissioning is in progress, an up arrow icon displays next to the port icon in the Product List. Users can view the port commissioning results in the deployment reports. When the recommission is complete, an application event displays in the Master Log detailing success or failure.
Non-disruptive decommissioning and recommissioning of E_Ports proved to be so successful that IBM decided to support this capability for F_Ports as well, which now allows the non-disruptive removal of CHPID and/or storage ports from a fabric. Fabric OS 7.1 and later provides F_Port decommissioning and recommissioning using Brocade Network Advisor 12.0 and later or CLI commands. There are also z/OS requirements in order to utilize this capability. With that in mind, rather than providing information here, the user should reference the Brocade Network Advisor SAN User Manual supporting Network Advisor 12.0 or later for the most current requirements and information about using the F_Port decommissioning/recommissioning capability.

FCIP ENHANCEMENTS STARTING AT FOS V7.X
See the Brocade FOS v7.0 release notes for a full list of functionality.
- A 10 GigE lossless failover feature allows users to configure a new set of IP addresses and allow a tunnel to fail over to the secondary 10 GigE port. For the above, there are two types of configurations that are supported: active/active and active/passive.
- Multi-gigabit circuits on the Brocade FX8-24 allow more than 1 Gbps to be configured on a single FCIP circuit. The multi-gigabit circuit implementation has the following capabilities: support for circuits with a rate configured at more than 1 Gbps; allowance for up to a 10 Gbps minimum rate circuit using a single IP address pair; and support for 10 GbE to 10 GbE connections only on multi-gigabit circuits. This feature is supported on the Brocade FX8-24 blades only in 10 Gbps mode.

The 10 GbE Adaptive Rate Limiting (ARL) feature allows users to configure minimum and maximum rates for each circuit of a tunnel that is using xge0 or xge1 (the 10 GbE ports) on a Brocade FX8-24 blade. The 10 GbE ARL feature provides the following capabilities: support for ARL on tunnels over the 10 GbE ports; a maximum guaranteed rate of 10 Gbps combined for all tunnels over a single 10 GigE port; and a maximum rate of 10 Gbps for any single circuit.
The configurable QoS option allows a user to override the default percentages of bandwidth assigned to each of the QoS classes within an FCIP tunnel. The default values will be the same as those defined prior to Brocade FOS v7.0: 50% for high, 30% for medium, and 20% for low priority traffic. This feature is supported on the Brocade FX8-24 and 7800.
Auto-mode option for compression: Starting with Brocade FOS v7.0, a new compression mode called auto-mode is supported on the Brocade FX8-24 and 7800. This feature will adjust the compression mode (1, 2 or 3) based on the maximum configured tunnel bandwidth. In auto mode, the best compression mode is selected based on the configured maximum bandwidth of the tunnel and advanced compression bandwidth usage in the system.
XISL support on the Brocade FX8-24 provides the ability to use VE ports as XISLs: This feature enables multiple logical fabrics to share a single base fabric while providing fabric-level isolation in a virtual fabric environment. Specifically, it enables logical connectivity over FCIP between otherwise disconnected segments of a fabric. This feature is supported only on Brocade FX8-24 blades, in both 1 and 10 GbE modes.
FCIP FICON Acceleration Enhancements: There are several new capabilities added for FICON acceleration in Brocade FOS v7.0, including: support for Optica Technologies' Prizm (FICON-to-ESCON converter switch) connected to 3480, 3490 and 3590 ESCON tape control units; support for Optica's Prizm and their Bus/Tag Interface Module (ESBT) connected to 3480 Bus and Tag tape control units; and new FICON Acceleration support for Teradata controllers.
Increase in number of circuits on 1 GbE ports: In Brocade FOS v7.0 and higher, the number of circuits supported per tunnel on 1 GbE ports has been increased as follows: up to 6 circuits per tunnel on the Brocade 7800, and up to 10 circuits per tunnel on the Brocade FX8-24.
When 8 Gbps FX8-24 blades are used in Brocade Gen5 Directors, the 8 Gbps FC ports must have their fill word updated through use of the portcfgfillword CLI command, since the FX8-24 uses the Condor2 ASIC.
FCIP improvements at FOS 7.1:
- Virtual Fabrics (VF) support on the Brocade 7800: Adds Virtual Fabrics (VF) support on the 7800 platform with support for up to four logical switches. However, the 7800 VF configuration does not support Base Switch functionality or XISL usage. FICON support is limited to only two (2) Virtual Fabrics per 7800.
- Enable IP Security on XGE0 of an FX8-24 blade: Adds IPsec support for the XGE0 port of an FX8-24 blade. This enables creation of IP Security enabled FCIP tunnels on VE ports of an FX8-24 blade. Please note that this functionality requires the new FX8-24 hardware SKU that supports it.
- FCIP Tunnel TCP Statistics Monitoring Enhancements: Added two new sub-options (--reset and --lifetime) to the portshow fciptunnel and portshow fcipcircuit CLI commands.
The --reset option allows the user to start a new statistical checkpoint for the tunnel, circuits and the TCP connections, while the --lifetime option allows the user to see statistics related to the lifetime of the tunnel, circuits and TCP connections. These enhancements allow users to see statistics that represent a specific time period for tunnel(s), circuit(s) and TCP connections.
FCIP enforcement of E_D_TOV (Error Detect Time Out Value) for FC frames: FCIP now enforces an internal queue time limit (typically 2 seconds) for all FCIP-received FC frames to address issues in congested networks. This ensures that no old FC sequences will be forwarded to the destination if the age (queue time) of the FC frame on an FCIP FC send queue exceeds 2 seconds.
CLI to Display GE Port Errors:

A new CLI command, geporterrshow, shows GE ports and their statistics.
FCIP RASlog Enhancements: New FCIP RASlog enhancements include:
- Improved FCIP RASlog messages in a Virtual Fabrics environment.
- FCIP RASlogs logged against the appropriate Logical Switch or chassis.
- Minimized creation of FICON Emulation RASlog messages (applies to the FICN-XXXX RASlog messages).

FICON READ AND WRITE TAPE PROCESSING USING FCIP EMULATION
Brocade Advanced Accelerator for FICON (Brocade Advanced Accelerator) is an optional software license for the Gen4 Brocade 7800 Extension Switch and Gen4 Brocade FX8-24 Extension Blade. This software uses advanced networking technologies, data management techniques, and protocol intelligence to accelerate FICON disk and tape read and write operations over distance, while maintaining the integrity of command and acknowledgement sequences. It supports Tape Pipelining for FICON tape and virtual tape as well as emulation for IBM z/OS Global Mirror (formerly Extended Remote Copy, or XRC). Brocade Advanced Accelerator provides unprecedented application performance across IP WANs over distances that would otherwise be impossible, impractical, or too expensive with standard Fibre Channel connections. Brocade's FICON tape and virtual tape offerings perform a function that is called tape pipelining: pipelining allows storage vendor tape offerings to run at significantly improved performance when compared to the expected performance for typical ISL connections between the sites. FICON performance, like ESCON, does droop when there is distance between the z/OS channel and the tape controller. Brocade's FICON Pipelining software (emulation) minimizes the impact of that distance on single tape device performance.
FICON Tape Write Pipelining
FICON Write Pipelining operations enhance the performance of channel-extended tape and tape-like facilities by pre-acknowledging write chains for a specific device, up to a Brocade-configured number of write commands. The default mode of operation for Brocade FICON Tape Pipelining operates with a configurable number of outstanding write CCW commands to a device at any one time. The control of this processing is located in a single local Brocade FICON interface that is directly connected to the z/OS LPAR that is creating the tape data. Processing within the single Brocade local FICON node will ensure that all writes to the device are completed in z/OS order. The design of Brocade Tape Pipelining restricts these tape devices to a single online and active path from one LPAR to one device. In other words, a single Brocade FICON extended device can only have one online path from one z/OS system LPAR through the Brocade network. This does NOT mean that the device can only be online to one system. It can be online to a control-unit-limited number of LPARs, but Brocade's processing limits it to one active path per LPAR to a device.
FICON Tape Read Pipelining
Brocade's FICON Read Pipelining operations enhance the performance of channel-extended tape and tape-like facilities by pre-reading tape responses from a remote device before the z/OS channel has requested that data. The read pipelining logic attempts to pre-read a default number of read blocks ahead of the channel when monitoring the read sequences to and from a tape device. The default mode of operation for Brocade FICON Tape Read Pipelining operates with a configurable number of outstanding read CCW commands to a device at any one time.
The control of this processing is located in a single local Brocade FICON interface that is directly connected to the z/OS LPAR that is reading the tape data. Processing within the single Brocade local FICON node will ensure that all reads from the device are completed and presented to the z/OS channel in the correct order. Normal z/OS tape processing allows multiple paths online between an LPAR and a single tape controller and its devices.

Without Brocade in between the z/OS system and the tape controller, z/OS can effectively and safely utilize multiple paths to that single tape device. Brocade's tape pipelining operations CANNOT be controlled over multiple paths and ensure in-sequence delivery to or from the tape. Therefore Brocade pipelining to a device, over multiple FICON (or ESCON) paths, is not supported. z/OS professionals are accustomed to IOS being able to handle multiple data streams across multiple links. But since tape pipelining is done between one Brocade device and another (outside of IOS's control), this restriction allows these devices to ensure in-order delivery of frames just as IOS would have done if it were in control. If multiple online paths to a device were allowed and Tape Write Pipelining operations were performed on multiple paths to the same device over multiple FICON connections, there would be the possibility of data being written to the device out of host (write) order. This same issue could occur in a read situation, but the problem there is presentation of the read data to the channel out of tape block order. Therefore, there are safety mechanisms in place within the Brocade processing to ensure that only one Brocade path to a device or set of devices is online to a single z/OS system at a time. Path activation failures (Assigned Elsewhere errors) will occur if multiple Brocade FICON paths are attempted to be activated at the same time.
Designing Brocade Configurations for Redundant Tape Paths
In order to get ESCON or FICON path redundancy in a Brocade configuration to a tape controller, the Brocade network can be configured to have primary and secondary paths to the tape devices on the controller. The diagram below shows how two paths can be configured in a Brocade network to redundantly provide two paths to a set of 16 tape devices.

Figure 16

If CHPID C1 experiences a hardware error or the network between Brocade10 and Brocade30 fails, then the secondary path via CHPID C2 can be utilized to access the devices. In a Brocade FICON configuration, the customer operations personnel only need to vary the devices offline, vary path C1 offline, and then vary the paths online to CHPID C2. Multiple paths are available via the Brocade network; they just cannot be online at the same time.

TWO-BYTE LINK ADDRESSING
Unless users are certain that their FICON fabrics will never need to be cascaded, it is a best practice to always utilize two-byte link addressing in their IOCP. Two-byte link addressing on the Brocade Gen4 and Gen5 devices requires, before activating it, that users have a security policy set, that users are using insistent domain IDs, and that users have fabric binding enabled. A mainframe channel with two-byte link addressing queries the switch for the proper security settings (Query Security Attributes, or QSA) when it logs in. If the security attributes are changed (for instance, fabric binding or insistent domain ID) on that channel after login, then nothing happens. However, if that channel logs out for any reason, it is not able to log back in. QSA is initiated across CHPID paths when two-byte link addressing is processed in the IOCP. QSA checks to be sure that there is a high integrity fabric before it allows the CHPID links to become active in a cascaded fabric. Insistent Domain ID, fabric binding (SCC policy) and identical time out value (TOV) settings must be accomplished on each switch of the cascaded link.
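As a hedged, purely illustrative aside (a sketch, not an IOCP reference), the snippet below splits a two-byte FICON link address into its two component bytes, assuming the conventional layout of the destination switch address in the high-order byte and the port address in the low-order byte.

```python
# Hedged sketch: decompose a two-byte FICON link address into the
# destination switch address (high byte) and the port address (low byte).
def split_link_address(link_address: int) -> tuple[int, int]:
    switch_address = (link_address >> 8) & 0xFF   # the cascaded switch's address
    port_address = link_address & 0xFF            # the port on that switch
    return switch_address, port_address

# Illustrative value only: link address 0x6504 -> switch 0x65, port 0x04.
print([hex(b) for b in split_link_address(0x6504)])   # ['0x65', '0x4']
```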

If the QSA check fails, the channel goes into an Invalid Attach state, resulting in un-modulated light to the switching device. This un-modulated light can cause the switch port to get inundated with interrupts in an attempt to assist with speed negotiation. If enough ports are in this condition, the interrupts can overwhelm the switch's processing capability, resulting in an unresponsive switch. At FOS 7.1 or higher (as of the publication date of this document), if any port gets into this situation, that port is throttled. What that means is that if a port causes too many interrupts, it will be disabled for a short period of time before automatically being re-enabled. This prevents the switch's processor from becoming overwhelmed, and since ports are automatically re-enabled, no user action will be required once the channel becomes active. The security policy (SCC_Policy) for cascaded FICON should be strict. In unusual cases the policy might have to be set to tolerant. It is always best to get an RPQ (Request for Price Quotation) from IBM if the SCC_Policy must be set to tolerant. If integrated routing is going to be used on the chassis, integrated routing requires a tolerant SCC policy. This conflicts with FICON's requirement to have a strict SCC policy on the chassis. There is a special command, ficonmanualcascading, that allows a Brocade Gen4 or Gen5 device with a tolerant SCC_Policy to answer the QSA command acceptably. Unlike the M-Series, where fabric binding only has to include all the Switch WWNs, B-Series switch binding requires that all Switch WWNs be in the list and that they also be in the same order in each list, so that the fabrics can successfully merge.

TRUNKING
Brocade's Trunk Group feature (known as Brocade Trunking, or frame-based routing) allows a user to deploy multiple high-speed load-sharing ISL links between two Brocade switching devices: the user can configure from 2 ports up to 8 ports within an ASIC-based trunk group, where each link in the trunk supports transfer rates of up to 8 Gbps (Gen4) or 16 Gbps (Gen5) of bi-directional traffic. The ports in a trunk group make a single logical link (fat pipe). Therefore, all the ports in a trunk group must be connected to the same device-type at the other end. In addition to enabling load sharing of traffic, trunk groups provide redundant, alternate paths for traffic if any of the segments fail. M-Series used a licensed software facility called Brocade OpenTrunking to react to congestion on ISL links. B-Series does not have a reactive, Brocade OpenTrunking-like mechanism. Rather, the B-Series attempts to proactively manage its ISL links by assigning traffic flows so that congestion does not occur (to the best of its ability). The Fibre Channel protocol has a function called Fabric Shortest Path First (FSPF) to assign ingress ports to ISL links via a costing mechanism. Brocade preempts FSPF with its own capability, Dynamic Load Sharing (DLS), to evenly assign ingress ports to egress ISL links at port login (PLOGI) time. DLS is valid for FICON environments regardless of the routing policy setting. Thus, DLS with Lossless mode enabled should be used for all FICON environments, whether port-based, device-based or exchange-based routing is deployed. To be clear, for FICON, Lossless DLS is required to be enabled when utilizing Exchange-based routing or Device-based routing.
If users want to enable Lossless DLS at Brocade FOS 6.4 and higher, then issue these CLI commands:
iodset
dlsset --enable -lossless
portcfgcreditrecovery disable slot/port
If users want to disable DLS (not recommended), then issue these CLI commands:
dlsreset
dlsshow
Frame-based Trunking (aka Brocade Trunking) is supported in all cascaded FICON fabric implementations. I/O traffic load sharing is accomplished very efficiently by the hardware ASIC-pair to which the ISL links are connected.

Exchange-based routing (once known as Dynamic Path Selection [DPS]) would be the preferred method of frame load sharing within a cascaded FICON environment. It is compatible with Brocade Trunking, and both can run on the same chassis. Exchange-based routing (EBR) is used to load share the traffic flow across all of the ISL links for FICON traffic, using the SID/DID/OXID metrics to assign traffic flows to an ISL link. IBM has discovered a bug in its z/OS I/O code and is now testing the fix. Until the new code is released, IBM is not supporting exchange-based routing (aka DPS) in FICON environments. Do not use EBR for FICON until IBM has once again certified it for use with FICON. Currently, EBR can be used to load share the traffic flow across all of the ISL links for FCP traffic only. Device-based routing (DBR), starting at FOS 7.1, is used to load share the traffic flow across all of the ISL links for FICON traffic, using the SID/DID metrics to assign traffic flows to an ISL link. Device-based routing, at the time of publication of this document, is the preferred method of frame load sharing within a cascaded FICON environment. It is compatible with Brocade Trunking, and both can run on the same chassis. DBR, unlike EBR, does not reserve an ISL link for each and every active F_Port. F_Ports are only assigned to an ISL link when the system must flow traffic to a device that is attached to those ISL links. DBR is supported for FICON environments only; DBR is not available for use with FCP I/O. When using Brocade Trunking, ISL trunking skew is a condition that results from the differences in the path length of the links (cable lengths) in a trunk or trunks. It is a best practice to configure trunks where the cable length of each trunked link is roughly equal to the cable length of all of the other links in that trunk. Trunks are compatible with both short wavelength and long wavelength fiber optic cables and transceivers. The cable difference between all ports in a trunking group must be less than 400 m (1312 feet) and, for optimal performance and bandwidth use, should be 30 m (98 feet) or less within a trunk. Hardware primitives are sent out on the link to measure the skew on each of the ISL links. Since FC is full duplex, each switch port has two paths, one for Transmit (TX) and one for Receive (RX). Each port transmits the hardware primitives to the neighbor port that receives it. So skew is the time difference between the traffic traveling over each ISL other than the shortest ISL in the group, and the traffic traveling over the shortest ISL. Use the porttrunkarea --show trunk CLI command to display trunk information, including de-skew information. The de-skew number corresponds to nanoseconds divided by 10. The firmware automatically sets the minimum de-skew value of the shortest ISL to 15. The higher the de-skew counter value, the longer the cable length difference actually is. Both connected switches do this, since there can be different skew values on the two paths in the link. The switch on the RX side decides if the skew variance for all the ISL links is within the acceptable range. ISL paths that pass the skew test are automatically added to a Brocade ISL Trunk if trunking is supported. For example, trunking is not supported on DWDM devices utilizing TDM modules, except when using ADVA enterprise modules.

Figure 17

Each of the ISLs in a trunk has the appropriate amount of de-skew applied to eliminate the skew variance.
The longer the skew in the ISL links, the worse the performance problem becomes. At 80 m (262 feet) of skew and beyond, it is often better to disable trunking and just utilize individual ISLs. In this case, the ISLs are used on a round-robin basis, and some might be heavily used and others hardly used at all.
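The relationship between the de-skew counter described above and the physical cable-length difference can be sketched as follows; the 10 ns-per-unit counter granularity and the minimum value of 15 come from the text, while the ~5 ns-per-meter fiber propagation figure is an assumption derived from the 5 µs/km latency quoted elsewhere in this guide.

```python
# Hedged sketch: convert a trunk member's de-skew counter into an
# approximate cable-length difference versus the shortest ISL in the trunk.
NS_PER_DESKEW_UNIT = 10     # the de-skew counter is in units of 10 ns (per the text)
NS_PER_METER_FIBER = 5      # ~5 us per km of fiber => ~5 ns per meter (assumed)
MIN_DESKEW = 15             # the firmware assigns 15 to the shortest ISL

def extra_cable_meters(deskew_value: int) -> float:
    extra_ns = (deskew_value - MIN_DESKEW) * NS_PER_DESKEW_UNIT
    return extra_ns / NS_PER_METER_FIBER

# Illustrative counter values only.
print(extra_cable_meters(30))   # ~30 m longer than the shortest link
print(extra_cable_meters(55))   # ~80 m longer; per the guide, consider
                                # dropping trunking at around this point
```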

Although a poor practice, an ISL trunk will form with up to 400 meters (1312 feet) of skew. However, if a customer configures more than 30 m (98 feet) of skew, then trunk performance begins to degrade and can become very poor. Under such unusual cable distance skewing circumstances, a customer is better off using discrete ISLs rather than trying to create an ISL trunk. On Brocade switching devices, even though the routing policy may be different in different logical switches on the same chassis, a customer cannot mix port-based routing, exchange-based routing and device-based routing on the same ASIC, since that has not yet been tested by IBM. The CLI command trunkdebug can help a user to debug a trunk link failure. See the FOS 7.0 or 7.1 Command Reference Guide for more information. The FCIP blade, the Brocade FX8-24, supports exchange-based routing, but FICON device emulation (IBM Extended Remote Copy [XRC] and/or FICON Tape Pipelining) must use ASIC-based Brocade Trunking only. FCP protocol (Fast Write and Open Systems Tape Pipelining) supports exchange-based routing. There is a FICON Accelerator License (FAL) that allows FICON emulation (FICON Acceleration) to be used on Brocade FCIP links. The license is a slot license: each slot doing emulation (acceleration) requires a license. IBM certifies IBM Global Mirror (XRC) with our non-accelerated (no FAL) FCIP products at up to 300 km (186 mi). At Brocade FOS 6.0.4c and higher, IBM certifies IBM Global Mirror (XRC) with FICON Accelerated (FAL installed) FCIP products at up to 700 km (435 mi) when using the Brocade 7800 and/or Brocade FX8-24. For IBM TS7700 tape, 100 km (62 miles) is the maximum distance supported from switching devices. Emulation support for FCIP Tape Pipelining for the IBM TS7700 is supported for up to a 3,000 mi (4,828 km) distance. Switching/extension devices should be running firmware level FOS 6.1.1a or 6.4.0c or higher. Use of FOS 6.4.0c has a co-requisite Channel Fix for 2097/2098 System z models, which is included in Driver 79F Bundle 37a.
Trunking and high latency (slow drain) devices: Consider two Gen4 or Gen5 Directors with 8 equal-cost ISLs between them, running at 50% aggregate utilization of the ISL links. Which (if any) of the below is most vulnerable to experiencing or spreading the ill effects of a high-latency device (a slow draining device using up the available buffer credits) connected to either director?
a. One 8-port trunk.
b. Two 4-port trunks with device-based load balancing.
c. Four 2-port trunks with device-based load balancing.
d. No trunking, device-based load balancing across all 8 links.
e. No trunking, no device-based load balancing, just static routing accomplished by DLS.
Many users would assume that a. or b. above would provide them with the most benefit, as it creates the fattest logical pipe and/or provides redundancy. But more thought needs to go into creating this fabric and avoiding any high-latency device link clogging. A user might discover that they should use multiple, small trunks or even no trunks at all. Let us understand why that could be so. When creating a trunking environment, the system is designed to fill up each link of a trunk, and not move off of that link until its utilization meets a certain high threshold. Round-robin rotation of I/O across trunk links was only used at 2 Gbps.
If a trunk link gets clogged due to a high-latency (slow draining) device, then the whole trunk is now at risk of becoming crippled, because flows to the high-latency device are either still trying to be put down the same link, or moved to another link in the trunk that eventually also clogs up. At best, all the trunk did was move the backpressure around and maybe let some other I/O flows through for a little while. When a member (link) of a trunk group gets into an extended buffer credit starvation situation (there are frames to send but no buffer credits to send them), that path becomes tagged as high-latency at the ASIC level. Subsequent I/O traffic, destined to the same end-device port, is programmatically moved by the ASIC to the next member link of the trunk group: the ASIC is attempting to contain this bad performance characteristic to as few trunk links as possible. That action is acceptable as long as nothing in the fabric is really wrong and/or there are not too many slow draining devices and/or there are not too many fabric ports attempting to access just a few slow draining device ports. The big problem occurs when something really is wrong and then the high latency condition gets replicated to other trunk links.

Ultimately all member links of the trunk might become clogged, because the trunk's programming simply marched the feeding of the high-latency devices across all of the trunk links, and now all of the link members in the trunk are clogged and providing poor performance and throughput. This is a situation that could be short lived (intermittent) or last for a long time (sustained). Resetting a link to regain buffer credits is only a temporary solution, as slow draining devices will quickly reduce I/O flow once again. A user might think that device-based routing (DBR) or, eventually, exchange-based routing (EBR) will minimize this situation by making better use of the available physical trunks, but it will not. For example, EBR and DBR statically spread the I/O exchanges across all of the available paths, which means that they will also spread the high-latency exchanges across all available paths as well. Thus, the same effect that users get when they have one ISL or a big trunk occurs with EBR/DBR when a particularly persistent, high-latency device is introduced into the fabric. Furthermore, a high-latency device can also be a device that is not speed matched to the rest of the devices, such as a 2G device in a fabric of 8G CHPIDs and devices. Lastly, it doesn't matter if this scenario is played out on an open systems or a FICON fabric; it still results in the same bad behavior by the trunk(s). The solution to this tricky problem is to find a way to discover and isolate the high-latency devices:
a. Use bottleneckmon: this CLI feature will help a user determine which ports are hosting high-latency devices.
b. Use Traffic Isolation Zones (TIZs): add traffic isolation zones and isolate the ISLs used by the high-latency device(s). The TIZ approach is the most effective, but is also the most administratively complex, since all the devices that access the high-latency device must be included in the zone and an ISL dedicated to the high-latency devices must be allocated.
c. Use Virtual Fabrics: define a high-latency virtual fabric for all of the 2G and other high-latency devices. The VF approach provides an option if isolation of the high-latency devices is practical and the rest of the devices don't need to access the high-latency devices.
d. Use Device-based Routing: DBR (where the choice of routing path is based on the FC address of the source device [S_ID] and the destination device [D_ID]) is a partial solution, since it does not completely isolate the high-latency device. However, it has the nice side effect of always using the same path through the fabric for the traffic to the high-latency device. Thus, if there are only a few high-latency devices, they will only affect a few ISL paths and allow the rest of the system to run unobstructed (see the illustrative sketch after this list). However, any normal, low-latency device that gets paired with the high-latency device is going to suffer performance degradation; that exposure is solved by TIZs.
e. Use Port-based Routing: PBR (where the choice of routing path is based on the incoming port [ingress] and the destination domain), like DBR, reduces the exposure to only the path to which the high-latency device is assigned; that exposure is solved by TIZs.
Detection before harm is the key element here: if users know that they have devices that could cause slow-drain situations, then they must make every effort to remove or reduce the impact upon their fabric(s).
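As a hedged, purely illustrative sketch of the routing behavior referenced in points d. and e. above: the hash functions below are simplified stand-ins for the real ASIC route selection (which is not public), but they show how EBR spreads a single device pair's exchanges across every available path, while DBR pins that pair to one path.

```python
# Illustrative only: simplified stand-ins for route selection under
# exchange-based routing (EBR) and device-based routing (DBR).
NUM_ISLS = 4

def ebr_path(s_id: int, d_id: int, ox_id: int) -> int:
    # EBR hashes SID/DID/OXID, so each new exchange may land on a different ISL.
    return (s_id + d_id + ox_id) % NUM_ISLS

def dbr_path(s_id: int, d_id: int) -> int:
    # DBR hashes only SID/DID, so a given device pair always uses the same ISL.
    return (s_id + d_id) % NUM_ISLS

chpid = 0x011C00          # hypothetical S_ID of a CHPID
slow_device = 0x013A00    # hypothetical D_ID of a slow-draining storage port
for ox_id in range(4):    # four exchanges to the same slow device
    print("EBR path:", ebr_path(chpid, slow_device, ox_id),
          " DBR path:", dbr_path(chpid, slow_device))
# EBR touches a different ISL for each exchange (spreading the backpressure);
# DBR keeps every exchange for this device pair on one ISL, containing it.
```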
When the flow of data from end to end is identical, or almost identical, then high latency will not happen:
- 8 Gbps CHPIDs to 8/16 Gbps switch ports to 8/10/16 Gbps ISLs to 8/16 Gbps switch ports to 8 Gbps DASD storage.
- 8 Gbps CHPIDs to 8/16 Gbps switch ports to 8/10/16 Gbps ISLs to 8/16 Gbps switch ports to 4 Gbps tape storage.
When the flow of data from end to end is not at the same rate, then high latency can happen:
- 8 Gbps CHPIDs to 8 Gbps switch ports to 8/10/16 Gbps ISLs to 8 Gbps switch ports to 2 Gbps DASD storage.
- 4 Gbps CHPIDs to 8/16 Gbps switch ports to 8/10/16 Gbps ISLs to 8/16 Gbps switch ports to 8 Gbps DASD storage.
A best practice is to deploy multiple 2-port ISL Trunks, assuming they're 8 Gbps links or faster. If users need four ISL links, they would be deployed as two 2-port Trunks. If users need eight ISL links, they would be deployed as four 2-port Trunks.

This also works very well when users deploy compression and/or encryption on ISL links, since only two 16 Gbps ports per ASIC on a Gen5 switch blade can perform those functions concurrently. In real-world operation, a single big trunk (fat pipe) really is not better than multiple smaller ones, although they should behave similarly. And multiple, smaller trunks help provide the redundancy necessary to maintain a five-9s highly available environment within and between locations of the enterprise.

FICON HOPS
FICON continues to be allowed only one official hop. There are configurations that appear to create multihops, but they are certified by IBM to be "hops of no concern":
- In-line Inter-Chassis Links between two or three Gen4 or Gen5 Directors.
- ISL links from the Brocade Gen4 or Gen5 Director to the Brocade 7500/7800, as long as the 7500/7800 is only providing extension services and not doing any source-target FCP or FICON switching.

FRAME DATA ENCODING, TRANSMITTER TRAINING AND RETIMERS
In order to transfer data over a high-speed serial interface (e.g. ESCON, FICON and Ethernet), data is encoded prior to transmission and decoded upon reception. The encoding process ensures that sufficient clock information is present in the serial data stream to allow the receiver to synchronize to the embedded clock information and successfully recover the data at the required error rate. In addition, this encoding improves the line characteristics, enabling long transmission distances and more effective error detection. Offering considerable improvements over the previous Fibre Channel speeds, 16 Gbps FC uses 64b/66b encoding, retimers in modules, and transmitter training. Doubling the throughput of 8 Gbps (800 MBps) to 16 Gbps (1,600 MBps), 16 Gbps links use 64b/66b encoding to increase the efficiency of their links far beyond what 8b/10b was capable of providing.
- 1, 2, 4 and 8 Gbps links use 8b/10b data encoding within a frame. For every 8 bits of data, 2 check bits are added for error checking purposes. Adding 2 additional bits to the original 8 bits adds 20% overhead to the data stream. These links provide a maximum of 80% efficiency.
- 10 Gbps and 16 Gbps links use 64b/66b data encoding within a frame. For every 8 bytes of data, 2 check bits are added for error checking purposes. Adding 2 additional bits to the original 8 bytes adds about 3% overhead to the data stream. These links provide a maximum of 97% efficiency.
Of the two check bits generated while encoding the data, the ASIC captures the high-order check bit to use for Forward Error Correction (see that section). If 8b/10b encoding had been used for 16 Gbps FC, the line rate would have been 17 Gbps, and the quality of links would be a significant challenge because of higher distortion and attenuation at higher speeds. By using 64b/66b encoding, almost 3 Gbps of bandwidth was dropped from the line rate so that the links could run over 100 meters of distance on Optical Multimode 3 (OM3) fiber. While 16 Gbps doubles the throughput of 8 Gbps FC to 1,600 MBps, the line rate of the signals only increases to 14.025 Gbps because of the more efficient encoding scheme. 16 Gbps FC links also use retimers in the optical modules to improve link performance characteristics. Retimers are Clock and Data Recovery (CDR) circuitry in the SFP+ modules. The most significant challenge of standardizing a high-speed serial link is developing a link budget that manages the jitter of a link.
Jitter is the variation in the bit width of a signal due to various factors, and retimers eliminate most of the jitter in a link.

50 By placing a retimer in the optical modules, link characteristics are improved so that the links can be extended for optical fiber distances of 100 meters on OM3 fiber. To remain backward compatible with previous Fibre Channel speeds, the Fibre Channel Application-Specific Integrated Circuit (ASIC) must support both 8b/10b encoders and 64b/66b encoders. A Fibre Channel ASIC that is connected to an SFP+ module has a coupler that connects to each encoder. The speed-dependent switch directs the data stream toward the appropriate encoder depending on the selected speed. During speed negotiation, the two ends of the link determine the highest supported speed that both ports support. Electronic Dispersion Compensation (EDC) and transmitter training are used to improve backplane links and provide backwards compatibility to the older, Gen4 8 Gbps technology. Transmitter training is an interactive process between the electrical transmitter and receiver that tunes lanes for optimal performance. 16 Gbps FC references the IEEE standards for 10GBASE-KR, which is known as Backplane Ethernet for the fundamental technology to increase lane performance. The main difference between the two standards is that 16 Gbps FC backplanes run 40% faster than 10GBASE-KR backplanes for increased performance. The combination of these technologies enables 16 Gbps FC to provide some of the highest throughput density available anywhere. FORWARD ERROR CORRECTION This is a capability that became available as part of the 16 Gbps FC standard. The Brocade Condor3 ASIC includes integrated Forward Error Correction, (FEC), technology, which can be enabled only on E_Ports connecting ISLs between switches. FEC is a system of error control for data transmissions, whereby the sender adds systematically generated errorcorrecting code (ECC) to its transmission. This allows the receiver to detect and correct errors without the need to ask the sender for additional data. FEC only corrects bits in the payload portion of a frame. FEC does not provide any reporting about the corrections that it might be making within frames, or how much total correcting it is doing. ISL links using FEC must be directly connected together between Gen5 switching devices. If a DWDM or a 7800 or FX8-24 blade provides intermediate transport for an ISL flow then FEC is not providing any bit correction capability. Though FEC capability is generally supported on Condor3 (16G capable FC) ports when operating at either 10G or 16G speed, it is not supported when using active DWDM links. Hence FEC must be disabled on Condor3 ports when using active DWDM links by using the portcfgfec CLI command. Failure to disable FEC on active DWDM links may result in link failure during port bring up. The Brocade Condor3 implementation of FEC enables the ASIC to recover bit errors in both 16 Gbps and 10 Gbps data streams. The Condor3 FEC implementation can enable corrections of up to 11 error bits in every 2,112-bit transmission. This effectively enhances the reliability of data transmissions and is enabled by default on Condor3 E_ports. Saying this in another way, for approximately every 264 bytes of data in the payload of a frame, up to 11 bit errors can be corrected. Based on the ratio between enc in and crc err - which basically shows how many bit errors there are in a frame on the average this has the potential to solve over 90% of the physical problems users have when connecting FC fabrics together today. Among them: Less time consuming end-device-driven error recovery. 
Fewer aborts.
Fewer time-outs.
Fewer slow-draining devices caused by physical problems.
64b/66b data encoding plays a big part in enabling Forward Error Correction to function. For every 8 BYTES of data, 2 check bits are added for error-checking purposes. Of the two check bits generated, the high-order check bit is captured and saved. At the end of every 32 eight-byte groups, 32 high-order check bits have been captured. These go into a 32-bit checksum that enables Forward Error Correction to clean up dirty links.
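A minimal sketch of the encoding and FEC arithmetic described in this section. The figures are the ones quoted above (80% versus roughly 97% efficiency, and up to 11 correctable bits per 2,112-bit block); the frame size used at the end is just an example.

# Minimal sketch of the encoding-overhead and FEC arithmetic quoted above.
def encoding_efficiency(data_bits, total_bits):
    """Fraction of the line rate that carries user data."""
    return data_bits / total_bits

print(round(encoding_efficiency(8, 10), 2))    # 8b/10b (1/2/4/8 Gbps FC): 0.80 -> ~80%
print(round(encoding_efficiency(64, 66), 3))   # 64b/66b (10/16 Gbps FC): ~0.97 -> ~97%

# FEC on Condor3: up to 11 correctable bit errors per 2,112-bit block
# (roughly every 264 bytes of payload).
FEC_BLOCK_BITS = 2112
CORRECTABLE_PER_BLOCK = 11

def correctable_bits(payload_bytes):
    """Rough upper bound on correctable bit errors across one frame payload."""
    return (payload_bytes * 8 / FEC_BLOCK_BITS) * CORRECTABLE_PER_BLOCK

# A full 2,048-byte FICON payload spans about 7.8 blocks, so on the order of
# 85 bit errors per frame could be corrected.
print(round(correctable_bits(2048)))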

51 Enabling FEC does increase the latency of FC frame transmission by approximately 400 nanoseconds, which means that the time it takes for a frame to move from a source port to a destination port on a single Condor3 ASIC with FEC enabled is approximately.7 to 1.2 microseconds. Fabric administrators also have the option of disabling FEC on E-Ports. Forward Error Correct Improvements at FOS 7.1: Enable or Disable FEC for Long Distance Port. FOS v7.1 allows users to enable or disable the FEC feature while configuring a long distance link. This allows end users to turn off FEC where it is not recommended, such as when configuring active DWDM links. BUFFER CREDITS Buffer Credits (BCs) define the maximum amount of data (frames in flight) that can be sent down a link prior to any acknowledgment being received of frame delivery. On M-Series, the default of 16 BCs per port is more than adequate for local FICON I/O. On M-Series M6140, buffer credits could be set on a port by port basis. No sharing of buffer credits was done. Up to 60 BCs on M6140 with UPM cards and up to 125 BCs on M6140 with QPM cards. On M-Series Mi10K, buffer credits were pooled and could be assigned out to ports as needed. On B-Series, the default of 8 BCs per port is more than adequate for local FICON I/O. On Brocade s Gen3, Gen4 and Gen5 switching devices, buffer credits are pooled by ASIC, and are not isolated on a port by port basis. Users can take BCs from the ASIC pool and deploy them to F_Ports and E_Ports as required. Gen5, Condor3 based switches will be able to utilize a buffer credit pool of 8,192 buffers, which quadruples the Brocade Gen4, 8 Gbps Condor2 buffer credit pool of 2,048 buffers. The Condor3 ASIC architecture has the ability to link to the buffer pools of other Condor3 ASICs. Brocade is contemplating making use of this feature in a future Brocade FOS update. Users must use the CLI command portcfglongdistance in order to set long-distance buffer credits. The portcfglongdistance CLI command requires that the Extended Fabrics license be enabled on the chassis. It was mentioned in a previous version of this document that at one time IBM TS7700 had a requirement that 16 buffer credits be allocated for these tape devices. This is no longer the case as IBM has made a modification to the TS7700 firmware and now Brocade s standard 8 buffer credits per port is acceptable to TS7700. Users can use the RMF FICON Director Activity Report to see if there is any indication of buffer credit problems. A non-zero figure in the Average Frame Pacing Delay column means that a frame was ready to be sent, but there were no buffer credits available for 2.5 microseconds (μsec) or longer. Users want to see this value at zero most of the time. The question arises, Why not just use the maximum number of BCs on a long-distance port? If the switching device gets too busy internally handling small frames, such that it is not servicing ports in a timely manner, and the short fused mainframe I/O timer pops, then it becomes a huge burden on the switching device to discard all of those frame buffers on each port and then re-drive the I/O for those ports. Since BCs also serve a flow control purpose, adding more credits than needed for distance causes the port to artificially withhold backpressure, which can affect error recovery. It is a best practice to allocate just enough buffer credits on a long-distance link to optimize its link utilization without wasting BC resources. 
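As a rough feel for why "just enough" credits depend on distance, link speed, and frame size, here is a minimal first-principles sketch. It is a common rule-of-thumb estimate, not the exact calculation that FOS performs, and the 5 microseconds per kilometer of fiber latency is an assumption.

# Rough rule-of-thumb estimate (not the exact FOS calculation) of the buffer
# credits needed to keep a long-distance ISL streaming. Assumes roughly 5 us
# of one-way fiber latency per km.
FIBER_US_PER_KM = 5.0

def credits_needed(distance_km, link_gbps, avg_frame_bytes):
    """Credits ~= round-trip time divided by the time to transmit one frame."""
    round_trip_us = 2 * distance_km * FIBER_US_PER_KM
    frame_tx_us = (avg_frame_bytes * 8) / (link_gbps * 1000.0)  # Gbps -> bits per microsecond
    return int(round_trip_us / frame_tx_us) + 1

print(credits_needed(100, 8, 2048))   # full frames, 100 km at 8 Gbps
print(credits_needed(100, 8, 1024))   # half-size FICON frames need roughly twice as many
print(credits_needed(100, 16, 1024))  # doubling the speed roughly doubles the need again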
If a port is running close to or at zero buffer credits, that by itself is NOT an indication of a problem in the fabric. If a user's fabric environment has been optimized, then time at zero buffer credits can simply mean that the link is being fully utilized. If the user begins experiencing performance problems, and believes that they might be buffer credit related, then the user needs to correlate time at zero buffer credits with C3 discards to determine if there is a problem. C3 discards mean that a port could not deliver a frame within 500 ms, and those frames are then dropped. There are several ways to discover information about a port running out of BCs: At FOS 6.4.2a and below, use the portstatsshow CLI command: admin> portstatsshow 2/8 Optimized fabric with no apparent slow draining devices: (edited output from portstatsshow shows an optimized system functioning well) tim_txcrd_z Time BB credit zero (2.5us ticks) er_rx_c3_timeout 0 Class 3 receive frames discarded due to timeout
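As a rough illustration of that correlation (the non-optimized counterpart of this output follows below), here is a minimal sketch. It is not Brocade tooling; the counter values are placeholders, and the 2.5-microsecond tick size comes from the counter description above.

# Minimal, illustrative sketch of the correlation rule described above: time
# at zero buffer credits only points to a problem when C3 timeout discards
# are also present. Counter values would be read from portstatsshow output.
TICK_SECONDS = 2.5e-6   # each tim_txcrd_z tick is 2.5 us

def assess_port(tim_txcrd_z_ticks, er_rx_c3_timeout, interval_seconds):
    zero_credit_seconds = tim_txcrd_z_ticks * TICK_SECONDS
    pct = 100.0 * zero_credit_seconds / interval_seconds
    if er_rx_c3_timeout == 0:
        return "%.1f%% of the interval at zero credits, no C3 discards: link busy, not necessarily a problem" % pct
    return "%.1f%% of the interval at zero credits WITH %d C3 timeout discards: investigate latency" % (
        pct, er_rx_c3_timeout)

print(assess_port(tim_txcrd_z_ticks=4_000_000, er_rx_c3_timeout=0, interval_seconds=3600))
print(assess_port(tim_txcrd_z_ticks=40_000_000, er_rx_c3_timeout=152, interval_seconds=3600))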

Non-optimized fabric with apparent slow draining devices: (edited output from portstatsshow showing a high-latency system with slow drain devices) tim_txcrd_z Time TX Credit Zero (2.5us ticks) er_rx_c3_timeout Class 3 receive frames discarded due to timeout At FOS 7 and higher, the user can also execute the excellent framelog CLI command: If a frame sticks in a port of the ASIC (the "brain" behind the port) for half a second (500 ms), the switch has to assume that something is going wrong and the frame can no longer be delivered in time. That is when the ASIC drops the frame from the port. Until FOS v7, the ASIC just increased a counter by one. Beginning at FOS v6.2, the drop was logged against the TX port (the direction towards the reason for the drop); in earlier FOS versions the counter increased only for the origin port, which made no sense at all. But at FOS v7 and beyond there is now a log for this which will store all the frames the switch had to discard. This Frame Log becomes very useful for troubleshooting as it contains the exact time, plus the TX and the RX port (keep in mind the TX is the important one) and even information from the frame itself. In the summary view users see the Fibre Channel addresses of the source device (SID) and of the destination device (DID). The Frame Viewer feature (framelog command) is intended to provide more visibility to systems programmers and SAN administrators regarding discarded frames due to timeout reasons. In FOS v7.1 the Frame Viewer was enhanced to display class 3 discard log messages associated with the backend ports (internal ports) of a chassis. Users can exploit the Frame Viewer feature to determine which flows contained the dropped frames, and this can help the user determine which applications might be impacted. Using Frame Viewer, a user can see exactly what time the frames were dropped. (Timestamps are accurate to within one second.) Users can view and filter up to 20 discarded frames per ASIC per second for 1,200 seconds (20 minutes) by configuring a number of fields within the framelog CLI command. The frame log can also show a user exactly which frame was dropped. If the user needs to find out if a particular I/O timeout in their host was caused by a timeout discard in the fabric, this is how they can do it. If the user sees their storage array complaining about aborts for certain sequences, just look them up in the Frame Log. In the "dump mode" of the command users even see the first 64 bytes of each discarded frame. The output of the framelog CLI command below is just a summary view. Figure 18 At FOS 7 and higher, the user can execute the porterrshow CLI command: As mentioned previously, if a frame is kept in a port's buffer for 500 ms because it cannot be delivered in time, it will be dropped. So these drops are a good indicator of a performance problem. Brocade's porterrshow CLI command is a popular way to discover how often the ASIC connected to a specific port had to drop a frame that was intended to be sent to this port. This single command displays an error summary for all ports on a switching device. Counts are reported on frames transmitted by the port (Tx) or on frames received by the port (Rx). The display contains one output line per port.

Numeric values exceeding 999 are displayed in units of thousands (k), millions (m) or billions (g) if indicated. This command is very helpful to mainframers because it provides a single table for all FC ports showing the most important error counters. Unfortunately it has only one cumulative counter for all reasons that can cause frame discards - and there are a lot more of those besides out-of-buffer-credit time-outs. But starting at FOS 7 there are two additional counters in this table: c3-timeout tx and c3-timeout rx. Of the two, the tx counter is the important one, as described above. The rx counter just gives users an idea where the dropped frames might have come from: Figure 19 If the port is not discarding any frames, then there should be no concern that buffer credits are causing a problem. If a user never sees C3 discards, and only sees buffer credit time at zero, it is not a slow draining device problem. The only concern may be that since the user is fully utilizing the link, additional loads in the future could push them over the edge. C3 discards and time at zero buffer credits together begin to point to a high latency fabric problem: It might indicate that enough buffer credits were not allocated to that port. Use the portcfglongdistance CLI command in LS mode and indicate the real distance and the average frame size to optimize buffer credit allocation to a port. Keep in mind, this will be disruptive to the traffic on that port. It might indicate a high latency (aka slow drain) device, and steps should be taken to isolate that device or path to prevent back pressure throughout the rest of the fabric. That backpressure could cause performance impacts to other seemingly unrelated devices that are sharing those ISLs. In these cases, the initial response is typically to add additional ISLs for more bandwidth, but this will not really resolve the issue. Isolating slow draining devices to their own ISL links and/or replacing these slower devices with newer, more modern and faster interfaces will improve this situation. Using the CLI command bottleneckmon would be the first recommendation for monitoring these congestion and latency conditions. See the section on Channel Path Performance for more information about bottleneckmon. A word of caution: In recent FOS releases Brocade has implemented a check for "stuck VCs" and this check might find one or more in the shop's switching devices during a firmware upgrade. A stuck VC is a virtual channel that has run out of buffer credits and has not had buffer credits for a long period of time. The stuck VC was actually there before, but now, after the firmware upgrade, Fabric OS is able to point it out and generates a warning message about it. Use the CLI command bottleneckmon to detect and repair stuck VCs. Once enabled, the bottleneckmon agent will monitor the internal links and, if there is a 2-second window without any traffic on a backlink with a stuck VC, it will reset it to solve the stuck VC. Buffer credit recovery works on the Condor ASIC (48000, 4100, 4900), Condor2 ASIC (DCX family, 5100, 5300) and Condor3 ASIC (8510 family, 6510) devices. This approach minimizes the impact of the link reset. Remember, however, that bottleneckmon is not enabled and running by default.
To enable it to run: bottleneckmon --cfgcredittools -intport -recover onlronly
Setting Buffer Credits on Gen4 and Gen5 switching devices at FOS 7.0.0d and earlier:
Prior to FOS 7.1, the CLI command portcfglongdistance always assumed that full 2K frames were used with each BC. For FICON this simply is not the case: Users can create the RMF FICON Director Activity Report to determine their FICON average frame size. Use the WRITE average frame size from the RMF report.

Another technique is to use either a conservative 512-byte frame Rule of Thumb (ROT) or the commonly used ROT of a 1,024-byte frame. A better way would be to use the CLI command portstatsshow: If CUP is not being utilized on the Brocade switching devices, a user can calculate the average frame size from the results of executing the portstatsshow CLI command. The results will resemble this: stat_wtx 4-byte words transmitted stat_wrx 4-byte words received stat_ftx Frames transmitted stat_frx Frames received Portstatsshow does not provide users with the number of bytes but rather with the number of 4-byte words. Fill words do not count into this number, so it really is a valid number for our average frame size calculation. There are big differences between what RMF reports and what portstatsshow provides: RMF provides statistics based on a standard interval of time. Each RMF report shows only what has occurred from the end of the last interval to the end of this interval. It is a delta number. Portstatsshow is a continuous counter. It is reset on a POR event or when a user manually resets the counters. These counters may contain too much information (weekends and holidays) or too little information (a reset was done just an hour ago). Users must know what data they are using. A user might want to consider running portstatsshow after they have reset the values and they have allowed the link to run quite a while during a peak period of time. In this way the user will help ensure that they are working with valid data when calculating average frame size. Users will just multiply the number of 4-byte words, listed out by portstatsshow, by 4 and then divide that result by the total number of frames in order to calculate that link's average frame size: (words transmitted * 4) bytes / frames transmitted = 128 bytes per frame transmitted average frame size. (words received * 4) bytes / frames received = 238 bytes per frame received average frame size. When using an average frame size to calculate buffer credits required on a link, the user must use the number of bytes per frame transmitted (written). It is the transmitter that utilizes buffer credits, while it is the receiver that acknowledges frames received and sends R_RDY or VC_RDY back to the transmitter to return a buffer credit to the transmitter's buffer credit pool. If the user is deploying cascaded links for the first time then they will probably have to tweak the buffer credit setting over time. It would be reasonable to assume that the actual frames that will traverse a new FICON fabric could be 10-15% smaller than the overall average frame size of the system. And if compression/encryption is enabled on the ISL links, that will further reduce average frame size by about one-half. The LD (dynamic) option of the CLI command provides a dynamic setting of the BCs based on the distance. However, it has limitations and users should avoid the use of LD mode, and use LS (static) mode instead, if the cable distance is known. Use the LS option, since the systems programmer usually needs to double or triple the real cable distance (depending upon the average frame size that is typically created in their environment). Also, potentially users should add in a little extra distance in order for the CLI command to provide the link with adequate buffer credits to keep it fully utilized.
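Here is a minimal sketch of the average-frame-size arithmetic just described. The counter values are placeholders chosen to reproduce the 128-byte and 238-byte results in the text; read the real counters from the switch.

# Minimal sketch of the average-frame-size arithmetic described above,
# using counter names from portstatsshow. The counter values are placeholders.
def avg_frame_size(word_count, frame_count):
    """portstatsshow reports 4-byte words, so multiply by 4 to get bytes."""
    return (word_count * 4) / frame_count

stat_wtx, stat_ftx = 1_600_000_000, 50_000_000   # words / frames transmitted (example values)
stat_wrx, stat_frx = 2_975_000_000, 50_000_000   # words / frames received (example values)

print(avg_frame_size(stat_wtx, stat_ftx))  # average bytes per frame transmitted (128.0 here)
print(avg_frame_size(stat_wrx, stat_frx))  # average bytes per frame received (238.0 here)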
To compensate for the CLI command's inability to compute with anything other than full frame sizes, we must manipulate the actual link distance in order to make this CLI command produce enough buffer credits to keep long distance ISL links fully utilized: Example: Assume that the FICON average frame size is 862 bytes (2148 / 862 = 2.5). If the distance between a user's two sites is 80 km, then specify a distance of 220 km (80 km * 2.5 for the frame-size ratio, plus a little extra). The CLI command would look similar to:
portcfglongdistance 1/0 ls for 80 km with VC_Link_Init enabled for IDLE Fillword (0).
portcfglongdistance 1/0 ls for 80 km with VC_Link_Init enabled for ARB Fillword (0).
The configuration must be the same for the remote ISL port that is connected to this port.
Setting Buffer Credits on Gen4 and Gen5 switching devices at FOS 7.1 and later:
Use portcfglongdistance to set a port's buffer credits to handle the distance, and use the LS mode. LD mode always assumes full frames are used for I/O, but FICON NEVER has full frames. It is more likely, even with zHPF, that FICON I/O frames are only 1/3 to 3/4 of a full frame. But starting with FOS 7.1, there are new options that can be used on the portcfglongdistance CLI command.
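Before moving on to the FOS 7.1 options, here is a minimal sketch of the pre-7.1 distance-inflation workaround just described. The function and the 10 percent padding factor are illustrative assumptions, not a Brocade formula.

# Minimal sketch of the pre-FOS 7.1 workaround described above: inflate the
# distance given to portcfglongdistance so that enough buffer credits are
# reserved for FICON's smaller-than-full frames.
FULL_FRAME_BYTES = 2148   # full Fibre Channel frame, as used in the example above

def distance_to_configure(real_km, avg_frame_bytes, padding=1.10):
    """Scale the real distance by full-frame/average-frame, plus ~10% extra."""
    return real_km * (FULL_FRAME_BYTES / avg_frame_bytes) * padding

# The guide's example: 862-byte average frames over an 80 km link works out
# to roughly 219 km, in line with the 220 km figure used above.
print(round(distance_to_configure(80, 862)))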

Use the -distance and -framesize options, which became available for LS mode, so that enough BCs are allocated to the ISL port. Framesize will be hard to predict when compression/encryption is utilized, but start by assuming that there will be a need for twice as many buffer credits as might have been needed if compression/encryption were not enabled. At 16 Gbps as many as 5,188 (32-port blade) or 4,484 (48-port blade) buffer credits can be reserved for data traffic across an ISL, depending on the specified -distance value. Here is an example of going 25 km on an 8 Gbps ISL link and getting enough buffer credits allocated to the port for a FICON cascaded link whose typical frame size is 1,024 bytes:
switch:admin> portcfglongdistance 2/35 LS 1 -distance 25 -framesize 1024
Here is an example of going 100 km on a 16 Gbps ISL link and getting enough buffer credits allocated to the port for a FICON cascaded link whose typical frame size is 870 bytes and where compression and encryption have been enabled:
switch:admin> portcfglongdistance 2/35 LS 1 -distance 100 -framesize 435
The example assumes 2:1 compression has occurred on each frame.
FOS 7.2 buffer credits improvements:
FOS v7.2 allows buffer credit assignment even for normal distance (regular) E_Ports. The portcfgeportcredits CLI command allows users to perform fine-grained performance tuning on L0 (local distance) E_Ports by allowing users to specify buffer credits. Prior to FOS 7.2, in L0 mode with QoS enabled, the four data VCs for FICON only had 2 buffer credits each. Now that can be modified to as many as 40 BCs per VC. For customers using Core-Edge cascading configurations, mainframe environments almost always have QoS enabled but do not use QoS zoning statements. This significantly limited each FICON data VC to a maximum of 2 or 3 BCs. That is now fixed with this new CLI command.
INTEGRATED ISL COMPRESSION AND ENCRYPTION
The Brocade Gen 5 Directors and 6510 switch enable high-speed replication and backup solutions over metro or WAN links with native Fibre Channel (10/16 Gbps) and optional FCIP (1/10 GbE) extension support. The integrated metro connectivity includes in-flight compression and encryption to optimize bandwidth and minimize the risk of unauthorized access. It is only switch-to-switch compression, not device or data-at-rest compression. In-flight data compression optimizes network performance within the data center and over long-distance links. Data is compressed at the source and uncompressed at the destination. Performance varies by data type, but Brocade uses an efficient algorithm to generally achieve 2:1 compression with minimal impact on performance. Compression can be used in conjunction with in-flight encryption. In-flight compression is only available on 16 Gbps port blades. In-flight data encryption minimizes the risk of unauthorized access for traffic within the data center and over long-distance links. It is only switch-to-switch encryption, not device or data-at-rest encryption. Data is encrypted at the source and decrypted at the destination. Encryption and decryption are performed in hardware using the AES-GCM-256 algorithm, minimizing any impact on performance. Encryption can be used in conjunction with in-flight compression. In-flight encryption is only available on 16 Gbps port blades.
Compression/Encryption Improvements at FOS 7.1:

56 Increase the number of Encryption/Compression ports based on port speed For any supported 16G blade or 16G switch, the number of ports supported for Encryption/Compression at 8G speed is twice the number of ports supported at 16G speed. Starting with FOS v7.1 users can enable port decommissioning on a port that also has in-flight Encryption/Compression enabled. In-flight Encryption/Compression on EX_Ports requires 16G capable ports at both ends of the link. The portstatsshow CLI command is enhanced to display compression ratio on a compression enabled port. The portenccompshow CLI command is enhanced to display the port speed of Encryption/Compression enabled ports ISL Compression Implementing the Condor3 ISL compression capability requires no additional hardware and no additional licensing. Brocade Condor3 based switches provide the capability to compress all data in flight, over an ISL. This requires a Brocade Condor3 based switch on both sides of the ISL. A maximum of 4 ports per Brocade DCX 8510 blade (2 per ASIC), or 2 ports per Brocade 6510 switch can be utilized for this data compression. At FOS 7.0.0d and below IBM did not qualify the compression/encryption features for FICON. At FOS 7.1 and higher IBM has qualified compression/encryption features for FICON fabrics. Each Condor3 ASIC can provide up to 32 Gbps of compression, via a maximum of two (2) 16 Gbps FC ports, which can be combined and load-balanced, utilizing Brocade ISL Trunking. Because 32-port and 48-port 16 Gbps port blades are equipped with two Condor3 ASICs, a single port blade in the Brocade DCX 8510 can provide up to 64 Gbps of ISL data compression, utilizing four ports. The maximum DCX configuration supported provides 512 Gbps of compression across all 8 port blades in the Brocade DCX , or 256 Gbps of compression across all 4 port blades in the Brocade DCX The Brocade 6510 switch is limited to providing up to 32 Gbps of compression, on up to two 16 Gbps FC ports. Future enhancements will include support for compression over 10 Gbps FC links. This compression technology is described as in-flight because this ASIC feature is enabled only between E_Ports, allowing ISL links to have the data compressed as it is sent from the Condor3 based switch on one side of an ISL and then decompressed as it is received by the Condor3 based switch that is connected to the other side of the ISL. As mentioned earlier, in-flight ISL data compression is supported across trunked ISLs, as well as multiple ISLs and long distance ISLs. Users might want to deploy many, smaller trunks to make better use of the compression/encryption capability. Brocade Fabric QoS parameters are also honored across these ISL configurations. FICON, at the time of publication of this document, does not support QoS but FCP traffic does. Quality of Service (QoS) is enabled by default and should be left enabled. As long as the user does not set any of the QoS parameters (QoS zones) they are not actually using it so they do not create an unsupported configuration by leaving it on. The FICON qualification testing completed between IBM and Brocade was accomplished with QoS enabled. It is believed that most, if not all, mainframe customers have left QoS enabled but unused. The compression technology utilized is a Brocade developed implementation that utilizes a Lempel-Ziv- Oberhumer (LZO) lossless data compression algorithm. 
The compression algorithm provides an average compression ratio of 2:1, and all Fibre Channel Protocol (FCP), as well as FICON, frames that transit the ISL are compressed. The exceptions are Basic Link Services (BLS) frames, as defined in the ANSI T11.3 FC-FS standard, and Extended Link Services (ELS) frames, as defined in the ANSI T11.3 FC-LS standard. Enabling in-flight ISL data compression increases the time it takes for the Condor3 ASIC to move the frame. This is described as latency and should be understood by FC architects. Normally the transit time for a 2 KB frame to move from one port to another port on a single Condor3 ASIC is approximately 700 nanoseconds, a nanosecond representing one-billionth (10⁻⁹) of a second. Adding in-flight data compression increases the overall latency by approximately 5.5 microseconds, a microsecond representing one-millionth (10⁻⁶) of a second. This means there is an approximate latency time of 6.2 microseconds for a 2 KB frame to move from a source port, be compressed, and then move to the destination port on a single Condor3 ASIC. Of course, calculating the total latency across an ISL link means including the latency calculations for both ends.
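A minimal sketch of the latency arithmetic quoted above. The figures are the approximate values given in the text for a 2 KB frame on a Condor3 ASIC; the function itself is only illustrative.

# Minimal sketch of the in-flight compression/encryption latency arithmetic.
BASE_ASIC_LATENCY_US   = 0.7   # ~700 ns port-to-port with nothing enabled
COMPRESS_OR_ENCRYPT_US = 5.5   # added per ASIC when compression and/or encryption is on

def isl_latency_us(inflight_features_enabled=True, ends=2):
    """Approximate ASIC latency across an ISL, excluding the link transit time."""
    per_end = BASE_ASIC_LATENCY_US
    if inflight_features_enabled:
        per_end += COMPRESS_OR_ENCRYPT_US
    return per_end * ends

print(isl_latency_us(False))  # ~1.4 us for the two ASICs with nothing enabled
print(isl_latency_us(True))   # ~12.4 us with compression and/or encryption at both ends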

57 For example, compressing a 2kb frame and sending it from one Condor3 switch to another would result in a total latency of 12.4 microseconds, (6.2 * 2), not counting the link transit time. One of the use cases for utilizing Condor3 integrated ISL data compression is when a metro-area SAN infrastructure includes an ISL for which there are either bandwidth caps or bandwidth usage charges. Compression/encryption of frames on ISL links will cause the average frame size of both FCP and FICON frames to become smaller and therefore will very probably require additional buffer credits be allocated for servicing ISL links. The Port Decommissioning/Port Commissioning feature is not supported on links configured for encryption or compression. ISL Encryption Implementing the Condor3 ISL encryption capability requires no additional hardware and no additional licensing. Brocade Condor3 based switches provide the capability to encrypt all data in flight, over an ISL. This requires a Brocade Condor3 based switch on both sides of the ISL. A maximum of 4 ports per Brocade DCX 8510 blade (2 per ASIC), or 2 ports per Brocade 6510 switch can be utilized for this data encryption. Both encryption and compression can be enabled on the ISL link simultaneously. As a note of information, when these two features are active, data is compressed before it is encrypted. At FOS 7.0.0d and below IBM did not qualify the compression/encryption features for FICON. At FOS 7.1 and higher IBM has qualified compression/encryption features for FICON fabrics. Each Condor3 ASIC can provide up to 32 Gbps of encryption, via a maximum of two (2) - 16 Gbps FC ports, which can, again, be combined and load-balanced, utilizing Brocade ISL Trunking. The two Condor3 ASICs on both the 32-port and 48-port 16 Gbps port blades enable a single port blade in the Brocade DCX 8510 to provide up to 64 Gbps of ISL data encryption, utilizing four ports. The maximum DCX configuration supported will provide 512 Gbps of encryption across all 8 port blades in the Brocade DCX , or 256 Gbps of encryption across all 4 port blades in the Brocade DCX The Brocade 6510 switch is limited to providing up to 32 Gbps of encryption, on up to two 16 Gbps FC ports. As with Condor3 integrated compression, the integrated encryption is supported in-flight, exclusively for ISLs, linking Condor3 based switches. Enabling ISL encryption results in all data being encrypted as it is sent from the Condor3 based switch on one side of an ISL and then decrypted as it is received by the Condor3 based switch connected to the other side of the ISL. As with integrated ISL compression, this integrated ISL encryption capability is supported across trunked ISLs, as well as multiple ISLs and long-distance ISLs. Users might want to deploy many, smaller trunks to make better use of the encryption/compression capability. Brocade Fabric QoS parameters are also honored across these ISL configurations. FICON, at the time of publication of this document, does not support QoS but FCP traffic does. Quality of Service (QoS) is enabled by default and should be left enabled. As long as the user does not set any of the QoS parameters they are not actually using it so they do not create an unsupported configuration by leaving it on. The FICON qualification testing completed between IBM and Brocade was accomplished with QoS enabled. It is believed that most, if not all, mainframe customers have left QoS enabled but unused. 
It is important to note that, when implementing ISL encryption, using multiple ISLs between the same switch pair requires that all ISLs be configured for encryption or none at all. While the Condor3 based switches support a Federal Information Processing Standard (FIPS) mode for providing FIPS 140 Level 2 compliance, upon release the integrated ISL encryption will only work with the FIPS mode disabled. A future Brocade FOS update will allow enabling integrated ISL encryption with FIPS mode enabled. Additionally, in order to implement ISL encryption, some room is necessary within the FC frame payload area. For FCP I/O traffic, the maximum payload size of an FC frame is 2,112 bytes, which is typically the maximum size used by most drivers. FICON, however, places a maximum of 2,048 bytes of data into a frame payload. In order to support this requirement for integrated ISL encryption, Brocade will change the default frame payload size setting for the Brocade HBA driver (for open systems servers) to 2,048 bytes. In all other cases, in order to enable ISL data encryption, all other drivers will need to be manually configured to utilize a maximum payload size of 2,048 bytes. Both compression and encryption can be enabled, utilizing the integrated features of the Brocade Condor3 based switches. FICON Advice and Best Practices 57 of 77

58 As is the case with integrated data compression, enabling integrated data encryption adds approximately 5.5 microseconds to the overall latency. This means an approximate latency time of 6.2 microseconds for a 2kb frame to move from a source port, be encrypted, and then move to the destination port on a single Condor3 ASIC. Also, calculating the total latency across an ISL link means including the ASIC latency calculations for both ends. Encrypting a 2kb frame and sending it from one Condor3 switch to another would result in a total latency of 12.4 microseconds (6.2 * 2), not counting the link transit time. If both encryption and compression are enabled, those latency times are not cumulative. For example, compressing and then encrypting a 2kb frame incurs approximately 6.2 microseconds of latency on the sending Condor3 based switch and incurs approximately 6.2 microseconds of latency at the receiving Condor3 based switch in order to decrypt and uncompress the frame. This would result in a total latency time of 12.4 microseconds, again, not counting the link transit time. The encryption method utilized for the Condor3 integrated ISL encryption is the Advanced Encryption Standard (AES) AES-256 algorithm using 256 bit keys, and uses the Galois Counter Mode (GCM) of operation. AES-GCM was developed to support high-throughput message authentication codes (MAC) for high data rate applications such as high-speed networking. In AES-GCM, the MACs are produced using special structures called Galois field multipliers, which are multipliers that use Galois field operations to produce their results. The key is that they are scalable and can be selected to match the throughput requirement of the data. As with integrated ISL data compression, when enabling integrated ISL encryption, all FCP and non-fcp (FICON) frames that transit the ISL are encrypted, with the exception of BLS and ELS frames. In order to enable integrated Condor3 ISL encryption, port-level authentication is required and Diffie-Hellman Challenge Handshake Authentication Protocol (DH-CHAP) must be enabled. The Internet Key Exchange (IKE) protocol is used for key generation and exchange. The key size is 256 bits, the Initialization Vector (IV) size is 64 bits, and the Salt size is 32 bits. Unlike traditional encryption systems that require a key management system for creating and managing the encryption keys, the integrated Condor3 ISL encryption capability is implemented with a simpler design, utilizing a non-expiring set of keys that are reused. While this represents a security concern because the keys are non-expiring and reused, it also allows this integrated ISL encryption to be implemented with very little management impact. One use case for utilizing Condor3 integrated ISL encryption is to enable a further layer of security for a metroarea FC fabric infrastructure. The Port Decommissioning/Port Commissioning feature is not supported on links configured for encryption or compression. BUFFER CREDIT RECOVERY Buffer Credit Recovery allows the switches in a fabric to exchange information about the used buffer-to-buffer credits and offers the possibility to react if any credit loss has occurred. It should be evident by now that the management of buffer credits in metro-area and wide-area storage network fabrics is critically important. Furthermore, many issues can arise in the storage network fabric whenever there are instances of either buffer credit starvation or buffer credit loss. 
Conditions where a particular link may be starved of buffer credits could include: Incorrect long-distance buffer credit allocations Links where buffer credits are being lost. Lost buffer credits can be attributed to error conditions such as a faulty physical layer component or misbehaving end node devices. If this condition persists untreated, it can result in a stuck link condition whereby the link is left without buffer credits for an extended time period, (e.g. 600 milliseconds), stopping all communications across the link. These problem conditions are only exacerbated when they exist in wide-area storage networking architectures. The Brocade Fibre Channel network implements a multiplexed ISL architecture called Virtual Channels (VCs), which enables efficient utilization of E_Port to E_Port ISL links and avoids head-of-line blocking. So in terms of being able to diagnose and troubleshoot buffer credit issues, being able to do so at the VC granularity is very important. FICON Advice and Best Practices 58 of 77

59 While the Brocade Gen4 Condor2 ASIC and FOS provide the ability to detect buffer credit loss and recover buffer credits at the port level, the Brocade Gen5 Condor3 ASIC diagnostic and error recovery feature set includes not only the port level BC recovery capability, it also includes the following features: The ability to detect and recover from buffer credit loss at the VC level The ability to detect and recover stuck links at the VC level Brocade Gen5 Condor3 based switches can actually detect buffer credit loss at the VC level of granularity: If the ASICs detect only a single buffer credit lost they can restore the buffer credit without interrupting the ISL data flow. If the ASICs detect more than one buffer credit lost or if they detect a stuck VC, they can recover from the condition by resetting the link, which would require retransmission of frames that were in transit across the link at the time of the link reset. Virtual Channel (VC) level BC recovery is a feature that is implemented on 16 Gbps Gen5 Condor3 ASICs. Both sides of a link must contain Condor3 ASICs. It is supported on local and long-distance ISL links. Loss of a single buffer credit on a VC is recovered automatically by the Condor3 ASICs through the use of a VC reset. Detection of a stuck VC occurs by the Condor3 ASIC after a zero BCs condition is timed for 600 milliseconds more than half a second. Loss of multiple BCs on a VC is recovered automatically, and often non-disruptively, by the Condor3 ASICs through the use of Link Reset commands (similar to Condor2 BC recovery). Buffer credit recovery (CR) allows links to recover after buffer credits are lost when the buffer credit recovery logic is enabled. The buffer credit recovery feature also maintains performance. If a credit is lost, a recover attempt is initiated. During link reset, the frame and credit loss counters are reset without performance degradation. On Condor2 Linkreset is disruptive E_D_TOV time (~2 seconds), without I/O traffic, is required to reset the ISL link On Condor3 It is a Hardware Reset and it is non-disruptive This feature is supported on E_Ports and F_Ports. VE_Ports do not support the portcfgfportbuffers or portcfglongdistance commands. Buffer credit recovery is enabled automatically across any long-distance connection for which the E_Port or F_Port buffer credit recovery mechanism is supported. For Gen5, 16-Gbps FC devices and blades (Brocade 6510, CR16-4, CR16-8, FC16-32, FC16-48), you can use the portcfgcreditrecovery CLI command to disable or enable buffer credit recovery on a port. Buffer credit recovery over an E_Port To support buffer credit recovery on FICON switch devices, E_Ports must be connected between the following switch or blade models: Brocade 5100, 5300, 6510 FC8-16, FC8-32, FC8-48, FC16-32, FC16-48 If a long-distance E_Port from one of these supported switches or blades is connected to any other switch or blade type, the buffer credit recovery feature is disabled. The buffer credit recovery feature for E_Ports is enabled for the following flow-control modes: Normal (R_RDY) Virtual Channel (VC_RDY) Extended VC (EXT_VC_RDY) Buffer credit recovery over an F_Port Buffer credit recovery for F_Ports is supported for F_Port-to-N_Port links between a Brocade switch and an open system HBA adapter (obviously, this is for FCP and not FICON). F_Port buffer credit recovery is not supported between a Brocade switch and a CHPID. 
For an F_Port on a Brocade switch connected to an open systems adapter, the following conditions must be met: The Brocade switch must run Fabric OS v7.1 or later. Fabric OS must support buffer credit recovery at both ends of the link. If a Brocade open systems adapter is utilized in the fabric it must be running HBA v3.2 firmware or later. Those adapters must operate at maximum speed. The flow-control mode must be R_RDY. See the section on ISLs and FCIP Links for how to turn off VC_Rdy flow control mode. FICON Advice and Best Practices 59 of 77

60 The feature is enabled automatically during a link reset if the conditions are met. If the conditions for buffer credit recovery are not met, the link will come up, but buffer credit recovery will not be enabled. Enabling and disabling buffer credit recovery To disable buffer credit recovery on a port, perform the following steps. Connect to the switch and log in using an account assigned to the administrator role. Enter the portcfgcreditrecovery CLI command and include the -disable option. The following example disables buffer credit recovery on port 1/20. switch:admin> portcfgcreditrecovery 1/20 -disable To enable buffer credit recovery on a port for which it has been disabled, perform the following steps. Connect to the switch and log in using an account assigned to the admin role. Enter the portcfgcreditrecovery CLI command and include the -enable option. The following example enables buffer credit recovery on port 1/20. switch:admin> portcfgcreditrecovery 1/20 enable General Front-end and Back-end Port buffer credit recovery The port blades in Brocade Directors communicate with each other thru the Core blades. So, in an 8-slot Director port blades 1-4 and 9-12 are connected to the core blades 6-7 via the backplane. When I/O travels from one port blade to another port blade, it will traverse a core blade (backplane). Brocade Directors use buffer credits internally to communicate from port blades to core blades, just like buffer credits are used to communicate from the Director to servers/storage and Directors to other switches. When users have a buffer credit issue on the front facing ports, they can disable/enable ports or disconnect and reconnect devices. But on the back end ports, in the past, a user had to pull out the blade and put it back in. Beginning at FOS 6, there is a buffer credit recovery command to help users avoid having to reseat blades in order to fix buffer credit problems. Here's an example of how a buffer credit problem might occur: A user has two switches connected to each other and each ISL has 50 buffer credits to use when transmitting data. Over time there could be bit errors or some other problem that causes a frame acknowledgement to become corrupted and that buffer credit does not get refreshed. So, now one of the switches only has 49 buffer credits to use. Not really a big deal, but if this continues, eventually the link can run low or run out of buffer credits. Brocade has improved its bottle neck detection capability and it now detects these dropped buffer credits and can recover them. This buffer credit recovery mechanism must also take place on the back end ports as well. Brocade s Bottleneck detection capability helps eliminate the need to reseat blades when buffer credits get lost. The bottleneckmon CLI command tool will continuously check a user s storage network for performance problems. Configured correctly it will pinpoint the cause of performance problems - at least the bigger ones. The bottleneckmon capability was introduced with FOS v6.3x and from v6.4x and higher it became a must-have tool by offering two useful features. Congestion bottleneck detection This just measures the link utilization. Fabric watch (a licensed product which is pre-loaded on many of the OEM and Partner sold switches and Directors) can do that already and has for a long time. But the bottleneckmon offers a bit more convenience and displays it in the proper context. 
Latency bottleneck detection This feature provides the user with important information about most of the situations influenced by buffer credit starvation. If a port runs out of buffer credits, it is not allowed to send frames over the fiber link. If a customer discovers a latency bottleneck reported against an F_Port they most probably have found a slow drain device in their fabric. If it's reported against an ISL, there are two possible reasons: 1. There could be a slow drain device "down the road" - the slow drain device could be connected to the adjacent switch or to another one connected to it. Credit starvation typically pressures back to affect wide areas of the fabric. FICON Advice and Best Practices 60 of 77

2. The ISL could have too few buffers. Maybe the link is just too long. Or the average framesize is much smaller than expected. Or QoS is configured on the link but the user does not have QoS zones created for prioritizing the I/O. This could have a huge negative impact! Another reason could be a misconfigured long distance ISL. Whatever it is, it is either the reason for the user's performance problem, or at least contributing to it, and should definitely be solved. With FOS v7 and higher, bottleneckmon was improved again. While the core policy which detects credit starvation situations was pretty much pre-defined before v7.0, a user is now able to configure it in great detail. Regardless, a best practice is to use bottleneckmon with the defaults. Experiment from there.
Buffer credit recovery on back-end ports: Use the --cfgcredittools option of the bottleneckmon CLI command to enable or disable buffer credit recovery of external back-end ports and also to display the configuration. When this feature is enabled, buffer credits are recovered on external back-end ports (ports connected to the core blade or core blade back-end ports) when credit loss has been detected. When used with the command's -recover onlronly option, the recovery mechanism takes the following escalating actions: When it detects credit loss, it performs a link reset and logs a RASlog message (RAS Cx-1014). If the link reset fails to recover the port, the port reinitializes. A RASlog message is generated (RAS Cx-1015). Note that this port re-initialization does not fault the blade. If the port fails to reinitialize, the port is faulted. A RASlog message (RAS Cx-1016) is generated. If a port is faulted and there are no more online back-end ports in the trunk, the core blade is faulted. Note that the port blade will always be faulted and a RASlog message is generated (RAS Cx-1017). When used with the command's -recover onlrthresh option, recovery is attempted through repeated link resets and a count of the link resets is kept. If the threshold of more than two link resets per hour is reached, the blade is faulted. Regardless of whether the link reset occurs on the port blade or on the core blade, the port blade is always faulted. Here are the commands: Enable back-end port credit recovery with the link reset only option and also display the configuration:
Switch:admin> bottleneckmon --cfgcredittools -intport -recover onlronly
Switch:admin> bottleneckmon --showcredittools
IBM TS7700 VIRTUAL TAPE SOLUTION
For a while, during 2012, there was a recommendation that when using the IBM TS7700 a customer should increase the connected switching device ports to 16 buffer credits. This recommendation stemmed from a channel interface problem on the TS7700 that has since been fixed. The recommended best practice is to leave the switching device F_Ports at their default BB credit setting of 8 BCs per port. If a customer is having time-out problems with the IBM TS7700, they should contact IBM to get the latest TS7700 interface firmware.
CHANNEL PATH PERFORMANCE
When executing I/O, there is an I/O source and an I/O target. For example: For a DASD read operation, the DASD array is the I/O source, and the CHPID (and application) is the target. For a tape write operation, the CHPID (application) is the I/O source, and the tape drive is the target. A user should always try to ensure that the target of an I/O has an equal or greater data rate than the source of the I/O.
An example of doing Good I/O would be when deploying 8 Gbps CHPIDs which are ultimately connected to 4 Gbps DASD arrays the DASD I/O source (4 Gbps) is slower than the CHPID I/O target (8 Gbps). FICON Advice and Best Practices 61 of 77

An example of doing Bad I/O would be when deploying 4 Gbps CHPIDs which are ultimately connected to 8 Gbps DASD arrays; the DASD I/O source (8 Gbps) is faster than the I/O target (4 Gbps). Performance can be negatively impacted since the source can deliver I/O faster than the target receiver can accept that data. Backpressure (using the buffer credits on the switching ports and the target device) can build to the point that all BCs are consumed, and all I/Os from that CHPID (potentially servicing many LPARs and applications) must wait on R_RDYs from the slow 4 Gbps CHPID. With the FICON channel subsystem, there are situations where a switching port can bounce for some reason. If the port does bounce, there is an elaborate recovery process that starts. Once the recovery process starts, if the bounced port comes back up, this often causes additional problems as the recovery process with the host was already underway. So for FICON it is a best practice to leave a bounced port disabled and let the host handle recovery. Then customers can resolve/re-enable the port at a later time. The Port AutoDisable feature minimizes traffic disruption that is introduced in some instances when automatic port recovery is performed. An automatically disabled port may be brought back into service using the portenable CLI command. FICON, at the time of publication of this document, does not support QoS but FCP traffic does. Quality of Service (QoS) is enabled by default and should be left enabled. As long as the user does not set any of the QoS parameters they are not actually using it, so this does not create an unsupported configuration by leaving it on. The FICON qualification testing completed between IBM and Brocade was accomplished with QoS enabled. It is believed that most, if not all, mainframe customers have left QoS enabled but unused. There is a Brocade FOS 6.4.x or higher command to help users identify any high latency and/or congestion situations in the fabric. It is called bottleneckmon. To enable bottleneckmon on all ports with default thresholds and send alerts to the RASlog, do the following:
admin> bottleneckmon --enable -alert *
Although not required, and probably not of much use in a single switch fabric, the recommended best practice is to enable bottleneckmon alerting now so that the user will not forget about it later when fabrics become merged together.
Bottleneck Detection improvements at FOS 7.1: Usability enhancements have been made to the bottleneckmon command in FOS v7.1 such that, when changing bottleneck detection configuration, unspecified parameters do not revert back to their default values if they currently have non-default values. Please see the bottleneckmon command description for additional details.
Edge Hold Time (EHT) Related Enhancements: FOS v7.1 adds support for the user-defined EHT configuration only in the default switch in a Virtual Fabrics environment. In addition, pre-defined EHT values can be configured for individual Logical Switches.
Frame Viewer (Class 3 Discard Log) Enhancements: In FOS v7.1 Frame Viewer has been enhanced to display class 3 discard log messages associated with the backend ports (internal ports) of a chassis.
NODE PORT ID VIRTUALIZATION (NPIV)
Node Port (N_Port) ID Virtualization (NPIV) is a method for assigning multiple Fibre Channel addresses to a single N_Port. This feature is mainly used for systems that support multiple images behind a single N_Port.
NPIV must be set up on the FICON entry switching device before it is set up on the z9, z10, z196, z114 or zEC12 mainframe. Use the element manager to go to Configure > Operating Parameters, and then choose the Domain tab. Under the Domain tab the user will be able to enable Node Port Virtualization. Then, the NPIV login limit of each port must be determined. Each port on an NPIV-enabled switch has an NPIV Login Limit. This is the maximum number of WWNs that can log into a single port. When NPIV is enabled, all ports default to a login limit of one WWN. For NPIV to be useful, this number must be changed. Users can use either CLI commands or Brocade DCFM/Brocade Network Advisor to configure each individual port for its World Wide Port Name (WWPN) login limit. The portcfgnpivport CLI command can be used to modify the WWPN login limit by port. The portcfgshow CLI command displays the NPIV capability of the switch ports. Allow no more than 32 virtual node logins per CHPID-switch port pair link.
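A minimal sketch of the NPIV capacity arithmetic discussed here and expanded just below (the per-chassis node maxima for M-Series and Gen4/Gen5). The limits are the figures quoted in the guide; the function is illustrative only.

# Minimal sketch: per-port login limit versus the chassis-wide NPIV node maximum.
LOGINS_PER_CHPID_PORT = 32     # best-practice cap on virtual node logins per CHPID-port link

def max_npiv_ports(chassis_npiv_max, logins_per_port=LOGINS_PER_CHPID_PORT):
    """How many ports can run at the per-port login cap before the chassis maximum is hit."""
    return chassis_npiv_max // logins_per_port

print(max_npiv_ports(1024))   # M-Series limit: 32 NPIV-capable ports per chassis
print(max_npiv_ports(2048))   # Brocade Gen4/Gen5 limit: 64 NPIV-capable ports per chassis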

63 This recommendation is due to potential timeout issues that IBM experienced at 2 Gbps link speeds. Enabling NPIV on the z9, z10, z114, z196 or zec12 can be accomplished from the CHPID operations menu in the Systems Element (SE):See the IBM Red Paper, Introducing N_Port Identifier Virtualization for IBM System z9 REDP Also do a browser search on SHARE in Anaheim: Planning and Implementing NPIV for System z by Dr. Stephen Guendert. There are only 1,024 NPIV nodes maximum on M-Series limiting NPIV-capable ports per chassis to <=32 (32 virtual node logins x 32 ports = 1024). There are only 2,048 NPIV nodes maximum on the Brocade Gen4 and Gen5 switching devices limiting NPIVcapable ports per chassis to <=64 (32 virtual node logins x 64 ports = 2048).These limitations are a function of both memory resources and processing power on today s switches. A vital role of NPIV is to allow users to consolidate their I/O from many Linux guests onto just a few CHPIDs that are running in FCP mode. Users must be careful that this consolidation does not create a bottleneck. Too many Linux Guests might push the utilization of a CHPID link beyond what can be handled without congestion. After deploying NPIV be sure to check the channel performance statistics to be sure that congestion is not occurring. Deployment of the FCP CHPIDs must be careful to utilize as many Virtual Channel paths as possible to avoid congestion at the VC level of any of the ISL links that might be utilized by the NPIV links. See the section on Virtual Channels for more information. VIRTUAL FABRICS It is a best practice to always enable Virtual Fabrics (VF) even if users do not deploy them (Brocade FOS 6.0 and higher) since it is disruptive to change this mode at a later time. Once users have enabled VF, a Default Switch is automatically created. But this should not be used for FICON connectivity purposes. When the Virtual Fabrics feature is enabled on a switch or Director chassis, all physical ports are automatically assigned to the Default Switch (a logical switch that is automatically created when VF is enabled). This is a system-created Logical Switch that exists as long as the Virtual Fabrics feature is enabled. Any physical port supported by the chassis is initially allocated to the Default Switch, and is managed just like a physical switch. The Default Switch should NOT be utilized to provide host/storage connectivity: Although it might work for a time, it is a bad practice to place any FICON or FCP connectivity ports in the Default Switch. Move connectivity ports that are in the Default Switch to some other logical switch that has been created. FICON addressing assumes the ALPA byte of the FCID for the control units will be the same as the ALPA byte for the port where the channel logged in. Address mode 1 can be a problem if there are more than 256 ports which the default switch must be able to accept since it is always available when VF is enabled. This is why the recommended best practice is to create a logical switch for FICON, use address mode 1, and never use the default switch for FICON connectivity This will ensure users never run into circumstances where FICON is not supported. After enabling Virtual Fabrics on a chassis, users should then create their own Logical Switches (LS). A Logical Switch is an implementation of a Fibre Channel switch in which physical ports can be dynamically added or removed. After creation, it is managed just like a physical switch. 
Each Logical Switch is associated with a Logical Fabric (Virtual Fabric). If users create only one LS, then move all of the FICON connectivity ports into that LS. If users create several LSs, then move FICON ports to one or more of them and move any FCP ports to other, unique LSs. Any port on a Logical Switch containing FCP connectivity ports will support N_Port ID Virtualization (NPIV).

Control Unit Port (CUP) should never be allocated on the Default Switch. CUP should only be run on logical switches that users have created. In the future users might find that FOS firmware will change such that FMS cannot be enabled if it is in the Default Switch.

A Base Switch is an optional user-defined Logical Switch. If Integrated Routing will be used on the switch (FCP traffic only), a Base Logical Switch is created to act as the Backbone Fabric. Base Switches provide common connectivity (at Layer 2 or Layer 3) shared by other Logical Switches.
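Where a Base Switch is needed for XISL connectivity, it is created in much the same way. The sketch below is a hedged illustration only: the FID value (1) is an assumption, and the wording of the "Allow XISL Use" prompt inside the configure dialog can differ by FOS release, so confirm the options in the Fabric OS Command Reference:

lscfg --create 1 -base           Create the Base Switch with FID 1
setcontext 10                    Move to the Logical Switch that will share the Base Fabric
configure                        Under Fabric Parameters, answer yes to Allow XISL Use so this Logical Switch can ride the Base Fabric's XISLs

Remember that FICON traffic can share an XISL only at FOS 7.1 and higher, as noted below.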

One Base Switch can be defined per chassis. A Base Fabric can be used by multiple Logical Fabrics, and it can transport frames from one logical switch to another. There is some unique functionality associated with a Base Fabric:

XISL: a shared, Extended ISL (or trunk) connecting Base Switches. More than one Logical Fabric (LF) may use the same physical XISL from the Base Fabric, and ICLs may be used as XISLs.
LISL: a Logical ISL connecting two logical switches over an XISL. A LISL is a logical portion of an XISL, the physical connection joining Base Switches.
ISL: a dedicated ISL physically connecting two logical switches. An ISL carries traffic for a single LF.

Only the Base Switch supports XISL and LISL connections. If both physical ISLs and an XISL exist on the same Director, which will be chosen for use? By default, the physical ISL path is favored over the logical path (over the XISL) because the physical path has a lower cost. This behavior can be changed by configuring the cost of the dedicated physical ISL to match the cost of the logical ISL. At FOS 7.1 and higher, FICON can utilize the Base Switch's XISL and share that connection with FCP traffic.

Brocade Virtual Fabrics create new domain IDs within a chassis and all of the FC protocol services per domain ID. VFs create a more complex environment and additional management duties, so it is a best practice to use VFs but not to use more VFs on a chassis than necessary. In many FICON environments a single, user-created Logical Switch, and its Virtual Fabric, will be all that is required. In particular, consider using VF under the following circumstances:

When 48-port blades are being deployed in a Brocade DCX chassis (does not apply to the Brocade DCX-4S)
When users need absolute isolation between FICON and FCP workloads running across the same chassis
When users need absolute isolation between LPARs and customers running across the same chassis
When users are trying to isolate some of their I/O traffic across specific links of their long-distance ISLs
To eliminate the possibility that the fabric takes multiple hops to re-drive FICON I/O traffic in the event of a fabric failure in a three-site configuration

If an 8-slot Gen4 or Gen5 Director uses 48-port blades with FICON, the user is required to enable Virtual Fabrics on the chassis and then create at least two logical switches that are configured with zero-based Area Assignment (address mode 1). Eight slots of 48-port blades would create a Director chassis containing 384 ports, and z/OS allows a maximum of 256 addresses within a switching domain ID. For support and qualification, IBM decided that if any 48-port blades are used in an 8-slot chassis, then that chassis must have VF enabled and the FICON connectivity ports allocated to specifically created logical switches.

At FOS 6.2 and above, with VF enabled, there will always be one default logical switch created. The system will then allow the user to create additional logical switches, which should contain all of the FICON connectivity ports. The user will move ports out of the default switch and into the logical switches they created in order to make best use of this virtual environment. Often zoning, along with HCD, provides adequate port isolation within FICON fabrics and requires less management than multiple logical switches.

Brocade Gen4 and Gen5 Directors can have up to 8 Logical Switches (VFs). The Brocade 5100 can have a maximum of 3 VFs and the Brocade 5300 can have a maximum of 4 VFs.
The Brocade 7800, at FOS 7.1 and higher, can have a maximum of 4 VFs. The Brocade 6510, at FOS 7.0 and higher, can have a maximum of 4 VFs (3 if using a Base Switch).

Some vendors ship Brocade switching products with Virtual Fabrics enabled; other vendors ship Brocade switching products with Virtual Fabrics disabled. Sometimes a vendor will claim that Virtual Fabrics will come either enabled or disabled and actually ship them the other way. What this means to the user is that they will need to check each and every switching product after it arrives from their vendor of choice to see whether Virtual Fabrics are enabled or disabled! As of January 2013 this is what some of our vendors claim:

IBM ships VF enabled
HDS ships VF disabled
EMC ships VF disabled
Dell ships VF disabled
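A quick way to perform that check from the CLI is sketched below. This is a hedged illustration; the exact output wording varies by platform and FOS release:

fosconfig --show                 Lists the state of switch features, including whether Virtual Fabrics is enabled or disabled on the chassis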

Virtual Fabrics will isolate the kind of I/O traffic that crosses specific ISL links (not referring to XISL links here) in a cascaded environment. A FICON virtual fabric will have only FICON traffic on its ISLs, and an FCP virtual fabric will have only FCP traffic on its ISLs. If further subdivision of I/O traffic across ISLs is required (keeping DASD and tape I/O separate, for example), then use Traffic Isolation Zones along with the Virtual Fabrics.

If CUP is to be used on a FICON Director that contains 256 or more ports (addresses 0-255), the physical ports x'FE' and x'FF' (ports 254 and 255) cannot be used for normal FICON connectivity. If all ports on the chassis must be usable by the customer, then Virtual Fabrics can allow all ports to become usable: Virtual Fabrics allow a customer to create a Logical Switch that contains only addresses x'00' through x'FD' (254 addresses) and another Logical Switch that contains the remainder of the ports. Since the domain does not have a physical port FE or FF defined, all ports can now be used.

Some practical considerations about deploying Virtual Fabrics on a chassis: If a user constrains mainframe connectivity to a unique set of blades and FCP to a unique set of blades, then if a blade fails on any of the VFs, the error and repair activities will only affect that specific VF and no other. If a user spreads FICON and FCP ports across every blade, such that every blade is a member of every Virtual Fabric (which would seem to make sense to mitigate risk), then a blade error and repair activity will always affect multiple Virtual Fabrics and both FICON and FCP traffic on the chassis. The user must decide which method is best. In the first case a user is isolating the risk of a rare blade failure to only one of possibly several VFs on a chassis; the other VFs will continue to run unimpeded. In the second case two or more VFs will be affected by a blade failure. In a mainframe environment the users will want to quickly remove a failing blade to maintain a five-9s environment, but this will impact the other users of the chassis. However, this will also, almost certainly, force all of the business lines to work together cooperatively whenever a failure occurs, as the next failure might be to their VF.

A note of consideration about Virtual Fabrics and upgrading from FOS 6.4 to FOS 7.x: At FOS 6.4.2a it is possible for a FICON user to have Virtual Fabrics enabled, not be running CUP, but have the FOS FMS-control enabled in order to utilize Prohibit Dynamic Connectivity Mask (PDCM) functionality. When the user upgrades that environment to FOS 7.x, they will not be able to add a new logical switch with the FMS-control enabled until the FMS license (CUP) is added to the chassis. CUP and Prohibit Dynamic Connectivity Mask (PDCM) can only be utilized on FOS 7.x systems when a FICON Management Server (CUP) license has been purchased.

VIRTUAL CHANNELS

Virtual Channels (VCs) are a unique feature of Brocade switching devices that first became available when 2 Gbps was introduced on our switching devices over a decade ago. To ensure reliable ISL communications, VC technology logically partitions bandwidth within an ISL into many different virtual channels and then prioritizes I/O traffic to optimize performance and prevent head-of-line blocking. Of course an ISL is still just one fibre link, so only one signal is passing across it, in each direction, physically at a time: a virtual channel is really just a segment of buffer credits that are dedicated to a specific VC number.
On switches that use the Condor (4 Gbps) ASIC there are 16 VCs, numbered 0 through 15. On switches that use the Condor2 (8 Gbps) ASIC there are 16 VCs, numbered 0 through 15. On switches that use the Condor3 (16 Gbps) ASIC there are 40 VCs, numbered 0 through 39. Even though there are now many more VCs on Condor3 ASICs than before, Brocade is not currently taking advantage of these for QoS or any other purpose.

Each VC is assigned a specific role in handling I/O traffic across an ISL link:

VC0 is for all Class F traffic; priority level is 0 (highest). These frames are sent only between switching devices. All Class F traffic for the entire fabric automatically receives its own queue and the highest priority; this ensures that the important control frames (Name Server updates, zoning distribution, RSCNs, etc.) are never waiting behind normal payload traffic (also referred to as Class 2 or 3 traffic).
VC1 is for F_BSY, F_RJT, and Class 2 link control traffic; priority level is 0.
VC2 through VC5 are for all Class 2 and Class 3 traffic; priority level is 2 or 3.
VC6 is for multicast traffic; priority level is 2 or 3.
VC7 is for multicast and broadcast traffic; priority level is 2 or 3.

Since Brocade supports IP over FC (IP-FC), multicast and broadcast traffic are assigned to VCs 6 and 7 to avoid any congestion on the data VCs from broadcast storms or other unwanted IP behavior. FICON users can have Quality of Service (QoS) enabled but are not yet allowed to deploy any QoS zones. Even though QoS is enabled, when there are no QoS zones the ISLs are in a non-QoS format. Non-QoS ISLs will handle all I/O traffic across the medium-priority VCs, which are VC2, VC3, VC4, and VC5. The decision about which VC (2-5), out of the group of VCs on an ISL link, a frame takes (and therefore which segment of the buffer credits it uses for that frame) is made by looking into the destination Fibre Channel address (D_ID). For Class 2/3 traffic (that is, host and storage devices), individual SID/DID pairs are automatically assigned in a round-robin fashion based on D_ID (Destination ID) across the four data lanes. This prevents head-of-line blocking throughout the fabric, and since each VC has its own credit mechanism and flow control, slow-draining devices will have a more difficult time starving the entire ISL link. FOS firmware automatically manages the VC configuration of each ISL link, eliminating the need to manually fine-tune Fibre Channel fabrics for maximum performance. Virtual Channels also work in conjunction with Brocade ISL Trunking to improve the efficiency of switch-to-switch communications, simplify fabric design, and further reduce the total cost of SAN ownership.

The Inner Workings of Virtual Channels: At 8 and 16 Gbps, this is how FOS allocates buffer credits to Virtual Channels. When QoS is enabled (and it is enabled by default), 34 buffers are reserved for non-long-distance links (L0) as follows: 4 for Class F, 2 for multicast, and 22 for data VCs (2 each for these 11 VCs). The remaining 6 buffers are then placed in a shareable pool for use by the data VCs. When the link is set for long distance (LE, LD, LS), the same BC reservations are made as above, but the shared pool will grow based on the credit calculation and the speed and distance selected:

Credits = VC0 (4) + VC6 (1) + VC7 (1) + QoS VCs (14) + ((speed * 5 BC per 10 km per 1 Gbps * distance) / 10)

First example: Speed = 8 Gbps, Distance = 50 km
Credits = 20 + (8 * 5 * 50)/10 = 20 + 200 = 220

Second example: Speed = 16 Gbps, Distance = 50 km
Credits = 20 + (16 * 5 * 50)/10 = 20 + 400 = 420

Only VCs 2-5 are used for data traffic. That provides for 8 reserved buffer credits, plus the shared pool, which typically should be enough for a normal connection between two switches in the same room within multimode cable distances. Since every data VC has 2 reserved credits, there typically should not be any BC starvation conditions.

Within the switch there is a virtual channel configuration register. This register provides a mapping function that assigns the source port on a switching device to a specific virtual channel (VC) within an ISL. The last nibble of the 2nd byte of the FCID is used to determine which of the data VCs will be used to cross the ISL link. In the FCID address 610A00, the A is the portion that is used as the destination ID (D_ID) to determine which VC will be assigned across an ISL link; see Figure 20 below. A PID of 0A will choose VC4 every time. This always corresponds to the source port (Port ID or PID) of a blade: each source port will be assigned to a data VC, and if an I/O must traverse an ISL link then that is the VC (buffer credit segment) that it will use.
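The long-distance credit pool described above comes into play only when an ISL port has been explicitly configured for an extended-distance mode. The lines below are a hedged sketch of how that might be done for a roughly 50 km, 16 Gbps ISL; the slot/port value (2/15) is illustrative, and the exact arguments of these commands vary by FOS release, so verify them in the Fabric OS Command Reference:

portcfglongdistance 2/15 LD 1 -distance 50    Configure port 15 on slot 2 for dynamic long-distance (LD) mode over an estimated 50 km link
portbuffershow 2                              Display the resulting buffer credit allocation for the ports on slot 2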

Figure 20 illustrates this FCID-to-VC mapping. Figure 21 shows what we will find if we look at a 32-port blade, and a 48-port blade (Figure 22) would be similar but, of course, with different addressing. Figure 23 shows what this might look like in a cascaded FICON fabric.

This all appears to be good and it does work well. But it also means that a user should be careful about how they deploy CHPIDs and storage if the I/O is going to be utilizing cascaded links. Many users will adopt a simple approach to deploying CHPID and storage ports across the blades of a Director. For example, some customers will use the bottom portion of the blade for CHPIDs and the upper portion of the blade for storage, or vice versa.

Figure 24 illustrates such a deployment. There are several things wrong with this simple deployment:

It cannot make use of local switching to lower the latency of the I/O time between the CHPIDs and storage.
It might clog one or more virtual channels while leaving other VCs with little I/O traffic. In this admittedly simple example, CHPIDs and storage are only using VC2 and VC3 for all I/O data traffic; VC4 and VC5 would have no traffic at all.

As a better deployment, if there are four CHPIDs that are going to be accessing four ports on the same storage device, then you might want to configure the CHPIDs and storage ports to make better use of the VC paths across an ISL link (Figure 25). With that arrangement, both the CHPIDs and the storage ports are making use of all of the virtual channels, which provides a potentially more even flow of I/O traffic across all of the buffer credits (VCs) available to an ISL link.

But if absolute performance is a requirement, then this configuration is still not taking advantage of Local Switching; Figure 26 shows a further refinement. By pairing up the CHPIDs and the storage that they access on the same ASIC of a blade, the user will be deploying Local Switching with a per-frame latency time of about 700 nanoseconds. In the Figure 26 configuration, not only is the user making very good use of all of the virtual channels, they have deployed four CHPIDs that could access 12 ports of storage and do all of it through Local Switching. The result for that user would be good performance and good throughput. Also, since DLS tries to fairly and evenly distribute ingress ports across the available ISL links, using different patterns of CHPID and storage port placement on a blade might result in just a little bit better distribution of ports across the ISL links. Keep in mind that these diagrams are for 32-port blades; if 48-port blades are being utilized, then there is a different port range, per ASIC, for Local Switching.

If a user is currently running a cascaded environment, it is probably not worth the effort and time to change the IOCP and the cabling to make best use of VCs and/or Local Switching. However, if a user is going to deploy new fabrics, or new connections onto old fabrics, then the guidelines above will help them establish the best configurations possible when taking into consideration supreme performance and non-congested throughput.

ESCON

ESCON was introduced for mainframe I/O in 1990; ESCON Directors came along a few years later. ESCON Directors went to end of sales in December 2004, and ESCON is quickly being pushed aside in favor of FICON in most enterprise data centers. The System zEC12 no longer supports any ESCON CHPIDs. It is a best practice to migrate away from both ESCON CHPIDs, in older mainframes, and ESCON Directors. In order to continue using ESCON storage devices, a customer can deploy switched-FICON and utilize the Optica Technologies Prizm multichannel FICON-to-ESCON protocol converter. At FOS 7.0 or higher, FICON Advanced Accelerator, which provides performance-oriented long-distance connectivity, can be utilized on Brocade FCIP connections for connection to ESCON storage devices:

Support for Optica's Prizm FICON-to-ESCON converter appliance when connected to 3480, 3490 and 3590 ESCON tape control units.

Support for Optica's Prizm FICON-to-ESCON converter appliance and ESBT connected to 3480 Bus and Tag tape control units.

The Optica Technologies Prizm FICON-to-ESCON converter appliance is fully qualified and supported by IBM. More information about Optica Technologies and Prizm can be found on the Optica Technologies website.

TERADATA AND FCIP EXTENSION

Teradata products, from Teradata Corporation, are meant to consolidate data from different sources and make that data available for analysis. Brocade allows Teradata devices to be attached to our products. Teradata device addresses require that MIH be disabled in SYSx.PARMLIB(IECIOSnn); the devices should have an MIH TIME=00:00 in order to disable it (see the sketch at the end of this section). Teradata does not utilize PLOGI to discover the host channel link addresses, as most FICON devices do: the Channel Link address has to be manually specified in the Teradata configuration.

Assume that a user is going to configure a cascaded FICON connection, over distance, using a Brocade 7800 switch. On the local side of the FICON-cascaded connection, the customer has CHPID 12 cabled to SWITCH 0x23 on port 06. On the remote side of the FICON-cascaded connection, the Teradata's Control Unit Address (CUA) 0xF0 is connected to SWITCH 0x24 on port 06. The IOCP example looks like this:

CHPID PATH=(CSS(0),12),SHARED,PARTITION=((PART001,PART007),(=)),SWITCH=23,PCHID=352,TYPE=FC
CNTLUNIT CUNUMBR=1AF0,PATH=((CSS(0),12,18)),UNITADD=((F0,008)),LINK=((CSS(0),2406,**)),UNIT=NOCHECK
IODEVICE ADDRESS=(1AF0,008),CUNUMBR=(1AF0),STADET=Y,UNIT=DUMMY

Teradata: CUA F0 is device range 1AF0-1AF7 on CHPID 12. Host Entry: 2306 with Device Dest: 2406.

Each line in the Teradata configuration represents one physical Teradata FICON interface. Each interface has a CUA and a range of device addresses, as defined in IOCP. To make the corresponding change in the Teradata configuration to match the IOCP entries above, change the Channel Link address to 0x2306 for CUA F0 in the CHANNEL ADDRESS column:

Original Teradata configuration statements:
CHANNEL #NODE_ID BUS SLOT PORT KIND CUA SPEED HGID LCU CHANNEL ADDRESS
FICON 0xF2 FIBER 3 0x00 0x07 0x
FICON 0xF4 FIBER 4 0x00 0x07 0x
FICON 0xF0 FIBER 2 0x00 0x07 0x
FICON 0xFA FIBER 6 0x00 0x07 0x
FICON 0xFC FIBER 7 0x00 0x07 0x
FICON 0xF8 FIBER 5 0x00 0x07 0x0001

Modified Teradata configuration statements (Channel Link address changed to 0x2306 for CUA F0):
CHANNEL #NODE_ID BUS SLOT PORT KIND CUA SPEED HGID LCU CHANNEL ADDRESS
FICON 0xF2 FIBER 3 0x00 0x07 0x
FICON 0xF4 FIBER 4 0x00 0x07 0x
FICON 0xF0 FIBER 2 0x00 0x07 0x2306
FICON 0xFA FIBER 6 0x00 0x07 0x
FICON 0xFC FIBER 7 0x00 0x07 0x
FICON 0xF8 FIBER 5 0x00 0x07 0x0001

The above change is disruptive: the customer needs to recycle the Teradata nodes that pertain to the CUA and ADDRESS. Users need to work closely with Teradata in order to go through the steps to successfully configure their devices for long-distance FCIP extension using the Brocade FX8-24 or Brocade 7800.
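As an illustration of the MIH requirement mentioned at the start of this section, the statements below are a hedged sketch of how the device range used in this example (1AF0-1AF7) might have MIH disabled, either in an IECIOSxx parmlib member or dynamically with an operator command; verify the exact syntax against the IBM z/OS MVS Initialization and Tuning Reference before using it:

In the IECIOSxx parmlib member:
MIH TIME=00:00,DEV=(1AF0-1AF7)

Or dynamically from the console:
SETIOS MIH,DEV=(1AF0-1AF7),TIME=00:00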

NOTES ABOUT TRANSACTION PROCESSING FACILITY (TPF)

The mainframe-centric z/Transaction Processing Facility (z/TPF) operating system is a special-purpose system that is used by companies with high transaction volume, such as credit card companies and airline reservation systems. z/TPF was once known as Airline Control Program (ACP). It is still used by airlines and has been extended for other large systems with high-speed, high-volume transaction processing requirements.

The z/TPF system was originally designed on the assumption that programs execute on a single central processing unit (CPU), commonly called a uniprocessor. A synonym for CPU, used in the z/TPF system information, is instruction-stream engine or simply I-stream engine. An I-stream engine is just a CPU within a z/Architecture configuration. Several instruction-stream engines can be combined into a single z/Architecture configuration and can work either together or independently.

There are two senses of parallel processing enabled by this architecture. One sense considers two or more z/Architecture configurations. In the z/TPF system, there is a facility for several z/Architecture configurations to operate as a single complex, called loosely coupled. Loosely coupled multiprocessing involves two or more z/Architecture configurations sharing a set of module (sometimes referred to as DASD) control units (CUs) along with an external lock facility (XLF) for synchronizing accesses to the module records by multiple z/Architecture configurations. XLF is logic in the module CU or a coupling facility (CF) called by any z/Architecture configuration attached to the module. This implies that all the participating z/Architecture configurations are channel attached to the same module CU or CF.

The other sense considers a single z/Architecture configuration where multiple I-stream engines execute concurrently; this is called tightly coupled. Tightly coupled multiprocessing refers to the synchronization of accesses to shared main storage in a z/Architecture configuration of multiple I-stream engines. A z/Architecture configuration with only one I-stream engine is called a uniprocessor, and one with multiple I-stream engines a multiprocessor. Uniprocessor and multiprocessor are terms within the z/TPF system that are associated with tightly coupled multiprocessing. Combining these two senses of parallel processing means that a z/Architecture configuration running the z/TPF system in 1, 2, or as many as 99 I-streams, tightly coupled, can be tied together with other tightly coupled z/Architecture configurations in a loosely coupled complex of up to 32 z/Architecture configurations to yield the processing power of all these combined I-stream engines.

Within the channel subsystem, the input/output processor (IOP) supervises the flow of data from shared main storage to I/O engines that, in essence, represent devices to TPF. The IOP does dynamic pathing (routing) to connect a device to an I-stream engine. The term subchannel is synonymous with device in z/TPF terminology; that is, there is a unique subchannel address for each device. The z/TPF common I/O handler (CIO) manages I/O operations through a clearly defined macro interface that permits the set of CIO macros supporting each I/O function to make use of a centralized service structure. The I-stream engine processing related to I/O instructions, channel programs, and related I/O addressing schemes of the past is essentially untouched.
CIO, however, takes advantage of the benefits of the z/Architecture channel subsystem, such as dynamic pathing. A channel subsystem manages the flow of data and I/O commands to an appropriate control unit which, in turn, controls I/O devices. z/Architecture support distinguishes between commands and instructions: command refers to an I/O operation performed by a channel subsystem, and instruction implies a non-I/O operation performed by an I-stream engine, with the exception of those instructions used to communicate with the channel subsystem itself. For example, a start subchannel (SSCH) instruction is used by an I-stream engine to pass a channel program, which is a sequence of channel command words, to the channel subsystem.

For z/TPF, I/O gaps represent most of the delay while a message is in the CPU (in TPF, the number of messages processed over a given interval of time is called system throughput), so the number of channels to secondary storage, the queuing disciplines, and the organization of data are very important in order to maintain fast response times at peak periods. z/TPF utilizes both DASD and tape for I/O processes. Magnetic tapes are used for online and offline processing and can be used for both input and output in the z/TPF system. In the z/TPF system, there are no access methods such as those found in z/OS: rather, channel programming is integrated into the system support of communication facilities, direct access storage devices (DASD), magnetic tape devices, and unit record devices.

TPF v4.1 and below:
Does not allow FICON cascading.
Does not utilize High Performance FICON (zHPF).
Cannot use STP for sysplex timing, so the old Sysplex Timer is still required.

TPF at z/Transaction Processing Facility (z/TPF) Enterprise Edition V1.1:
Does not allow FICON cascading.
Supports FICON Express2 through FICON Express8S.
Supports High Performance FICON with the proper microcode releases on DASD; z/TPF employs a translator program to convert CCW-format I/O operations into TCW format to support zHPF.
Can use either the Sysplex Timer (a dual-port Sysplex Timer attachment card is required) or Server Time Protocol (STP) for external synchronization.
In a FICON-only environment, ESA/390 TPF mode (channel redrive) is not required.

TPF does not have an RMF-like function: CUP can be placed onto switching devices used by TPF, but native TPF cannot make use of the performance information from CUP. Users can make a z/OS CHPID connection from a z/OS LPAR into the TPF-utilized switching devices, and then use RMF to pull all of the information up into the z/OS RMF FICON Director Activity Reports. Such reporting then provides both z/OS and TPF port statistics.

MISCELLANEOUS CHANGES, IMPROVEMENTS AND CONSIDERATIONS AT FOS V7.1

Since FOS 7.0.0c, CUP and Prohibit Dynamic Connectivity Mask (PDCM) can only be utilized on FOS 7.x systems when a FICON Management Server (CUP) license has been purchased. If a user upgrades from FOS 6.4.2a to FOS 7.0.0c, and if FMS is running in the default switch, it will be disabled automatically; there is no warning in the software and it is not in the original release notes. FOS 7.0.0c1, an RPQable release, and FOS 7.0.0d, a normal release, allow FMS to continue to run in the default switch but warn against it.

Since FOS 7.0 there is no capability to attach McDATA (M/EOS-based) switching devices into the same fabric with Brocade (FOS-based) switching devices. Users can deploy M/EOS-based McDATA fabrics alongside other, independent FOS-based Brocade fabrics that attach to the same storage control units, but this is not a good practice as it is much more prone to errors and management issues. Since FOS 7.0 the only supported interop mode is IM=0. IM=2 is completely unsupported and cannot be utilized. IM=3 is only supported for FCP SAN fabrics to which old McDATA switching device(s) are attached. It is not possible to migrate from a FOS release using IM=2 to FOS 7 or higher non-disruptively, since changing the interop mode is an offline change.

A scalability enhancement is support for up to a 2 MB zone database in a fabric containing only DCX/DCX-4S/DCX 8510 systems (the presence of any other platform in the fabric will limit the maximum zone database to 1 MB for the entire fabric).

Zoning enhancements in FOS v7.1 are as follows:
Ability to replace a zone member (WWN or D,I) with another member via the zoneobjectreplace CLI command.
Enhancement to the existing commands (zonecreate, zoneadd, zoneremove) to take a collection of zone aliases as input instead of a single alias member. The group of aliases is selected by matching a pattern specified by the user on the command line.
More options for the existing commands zoneshow and cfgshow to list details/differences between the transaction buffer and the committed/saved zone database.
Warn users at the time of cfgsave if the zone database edits in the open zone transaction will make the Defined and Effective zone configurations inconsistent.
This warning reduces the risk of having mismatched effective zone configurations in the same fabric when merging a switch into the fabric.
Warn or notify users when multiple users attempt to simultaneously configure or reconfigure zone sets in more than one switch of a fabric.

Name Server command enhancements in FOS v7.1:
Enhancement to the nszonemember command to take domain and port index as optional parameters to display zoned device data, including the device PID and zone alias. The domain and index can be either of the local switch or of a remote switch.

Add a new option, domain, to the nsaliasshow command to display a remote device's details for that particular domain in the fabric.
Add a new command that takes WWNs/PIDs as input parameters and displays the zones they belong to. This will display both the regular and the special zones.
The details of device login activities are made available to the user via CLIs: a new command, nsdevlog, is implemented to display the device login details.

Open Systems Fibre Channel Routing (FCR) Enhancements (FICON cannot utilize routing):
PathInfo with TIZ (Traffic Isolation Zone) configured over FCR: The pathinfo command has been enhanced to display accurate path information in TIZ-over-FCR configurations.
Credit Recovery Support on EX_Ports: FOS v7.1 implements buffer credit recovery mechanisms on EX_Ports. Credit recovery is enabled by default on EX_Ports of 16G platforms. Credit recovery on 8G platform EX_Ports is supported only if long distance mode is enabled.
Fabric Name Support on FCR: In FOS v7.1 the fcrfabricshow command has been enhanced to display the names of the edge fabrics attached to an FCR.
iflshow CLI command: A new command, iflshow, is implemented to display the connection details between an edge switch and an FCR. This command is intended to be executed on an edge switch and also provides details of VE-VEX connections.
Note: Platforms running FOS v7.1 do not support EX_Port configuration in Interopmode 2 or Interopmode 3.

TI (Traffic Isolation) Zone Violation Handling for Trunk Ports enhancements: If a failover-disabled TI zone has trunk members, but not all members of that trunk group are in the same TI zone, then there is a possibility of routing issues if the members in the TI zone fail. FOS v7.1 tries to detect such misconfigurations up front and warn users about this condition via the ZONE_1061 RASlog message.

RAS enhancements:
SFP Monitoring Enhancements: Update sfpshow data when a new SFP is plugged in. FOS v7.1 has been enhanced to update the SFP serial data information when a new SFP is inserted, even in a disabled port, and to show the valid data (temperature, current, voltage, Tx/Rx power, etc.) once the port is enabled and SFP data has been polled for that port.
Time stamp for SFP polling: In FOS v7.1 a new field has been added to the sfpshow port# [-f] command output to display the last poll time stamp for a port.
Hexadecimal port number input: The commands portdisable, portenable, portshow, portperfshow, portcfgspeed, portstatsclear, and portstatsshow can accept port numbers in hexadecimal format. The -x option indicates that a port, or a range of ports identified by port index numbers, is in hex format.

OPTIONALLY LICENSED SOFTWARE (FROM BROCADE FOS V7.1 RELEASE NOTES)

Brocade FOS v7.1 includes all basic switch and fabric support software, as well as optionally licensed software that is enabled via license keys. To obtain license keys, visit Login/Register at MyBrocade (upper left of the screen), then go to the bottom of the page and use the Quick Navigation Links to find Software License Keys. Optionally licensed features supported in Brocade FOS v7.1 include:

Brocade Network Advisor: Software management platform that unifies network management for storage area networks (FICON and SAN) and converged networks.

Provides a consistent user interface across Fibre Channel and Fibre Channel over Ethernet (FCoE) over data center bridging (DCB), along with custom views and controls based on the users' areas of specialization.

Brocade Ports on Demand: Currently applies only to 4 Gbps, 8 Gbps, and 16 Gbps modular switches for FICON. Allows customers to instantly scale the fabric by provisioning additional ports via license key upgrade. (Applies to select models of switches.)

Brocade Extended Fabrics: Provides greater than 10 km of switched fabric connectivity at full bandwidth over long distances.

Brocade ISL Trunking: Provides the ability to aggregate multiple physical links into one logical link for enhanced network performance and fault tolerance.

Brocade Advanced Performance Monitoring: Enables performance monitoring of networked storage resources. This license includes the Top Talkers feature.

Brocade Fabric Watch: Monitors mission-critical switch operations and provides notification if established limits or thresholds are exceeded. Fabric Watch includes Port Fencing capabilities.

Brocade Accelerator for FICON: This license enables unique FICON emulation support for IBM's Global Mirror (formerly XRC) application (including Hitachi Data Systems HXRC and EMC's XRC) as well as Tape Pipelining for all FICON tape and virtual tape systems, to significantly improve XRC and tape backup/recovery performance over virtually unlimited distance for the FR4-18i.

FICON Management Server: Also known as CUP (Control Unit Port); enables host control of switches in mainframe environments.

Enhanced Group Management: This license enables full management of devices in a data center fabric with deeper element management functionality and greater management task aggregation throughout the environment. This license is used in conjunction with Brocade Network Advisor application software and is applicable to all FC platforms supported by FOS v7.0 or later.

Adaptive Networking with QoS: Adaptive Networking provides a rich framework of capability allowing a user to ensure that high-priority connections obtain the bandwidth necessary for optimum performance, even in congested environments. The QoS SID/DID Prioritization and Ingress Rate Limiting features are included in this license, and are fully available on all 8 and 16 Gbps platforms.

Server Application Optimization: When deployed with FCP Brocade Server Adapters, this license optimizes overall application performance for physical servers and virtual machines by extending virtual channels to the server infrastructure. Application-specific traffic flows can be configured, prioritized, and optimized throughout the entire data center infrastructure.

Integrated Routing: Not for FICON environments. This license allows any port in a Brocade DCX 8510-8, DCX 8510-4, 6510, DCX-4S, DCX, 5300, 5100, 7800, or Brocade Encryption Switch to be configured as an EX_Port or VEX_Port (on some platforms) supporting Fibre Channel Routing. This eliminates the need to add a dedicated router to a fabric for FCR purposes.

Encryption Performance Upgrade: Encryption blades cannot be provisioned in chassis that carry FICON traffic.

Advanced Extension: This license enables two advanced extension features: FCIP Trunking and Adaptive Rate Limiting. The FCIP Trunking feature allows multiple IP source and destination address pairs (defined as FCIP circuits) via multiple 1 or 10 GbE interfaces to provide a high-bandwidth FCIP tunnel and failover resiliency.
In addition, each FCIP circuit supports four QoS classes (Class F, High, Medium and Low Priority), each as a TCP connection. The Adaptive Rate Limiting feature provides a minimum bandwidth guarantee for each tunnel, with full utilization of the available network bandwidth and without impacting throughput performance under high traffic load. This license is available on the Brocade 7800 and, for the FX8-24 blade in the Brocade DCX/DCX-4S/DCX 8510-8/DCX 8510-4, on an individual slot basis.

10 GbE FCIP/10 Gbps Fibre Channel: This license enables the two 10 GbE ports on the Brocade FX8-24 or the 10 Gbps FC capability on FC16-xx blade ports. On the Brocade 6510 switch, this license enables 10 Gbps FC ports. This license is available on the Brocade DCX/DCX-4S/DCX 8510-8/DCX 8510-4 on an individual slot basis.
Brocade FX8-24 blade: With this license assigned to a slot with a Brocade FX8-24 blade, two additional operating modes (in addition to the 10x1 GbE ports mode) can be selected: 10x1 GbE ports plus 1x10 GbE port, or 2x10 GbE ports.
Brocade FC16-xx blades: Enables 10 Gbps FC capability on an FC16-xx blade in a slot that has this license.
Brocade 6510 switch: Enables 10 Gbps FC capability on the switch.

Advanced FICON Acceleration: This license enables unique FICON emulation support for IBM's Global Mirror (formerly XRC) application (including Hitachi Data Systems HXRC and EMC's XRC) as well as Tape Pipelining for all FICON tape and virtual tape systems, to significantly improve XRC and tape backup/recovery performance over virtually unlimited distance. This licensed feature uses specialized data management techniques and automated intelligence to accelerate FICON tape read and write and IBM Global Mirror data replication operations over distance, while maintaining the integrity of command and acknowledgement sequences. This license is available on the 7800 and, for the FX8-24 in the DCX/DCX-4S/DCX 8510-8/DCX 8510-4, on an individual slot basis.

Port Upgrade: This license allows a Brocade 7800 to enable 16 FC ports (instead of the base four ports) and six GbE ports (instead of the base two ports). This license is also required to enable additional FCIP tunnels and for advanced capabilities like tape read/write pipelining.

ICL 16-link, or Inter-Chassis Links: This license provides dedicated high-bandwidth links between two Brocade DCX chassis without consuming valuable front-end 8 Gbps ports. Each chassis must have the 16-link ICL license installed in order to enable the full 16-link ICL connections. Available on the Brocade DCX only.

ICL 8-Link: This license activates all eight links on ICL ports on a Brocade DCX-4S chassis, or half of the ICL bandwidth for each ICL port on the Brocade DCX platform by enabling only eight links out of the sixteen links available. This allows users to purchase half the bandwidth of Brocade DCX ICL ports initially and upgrade with an additional 8-link license to utilize the full ICL bandwidth at a later time. This license is also useful for environments that wish to create ICL connections between a Brocade DCX and a DCX-4S, the latter of which cannot support more than 8 links on an ICL port. Available on the Brocade DCX-4S and DCX platforms only.

ICL POD License: This license activates ICL ports on core blades of Brocade DCX 8510 platforms. An ICL 1st POD license enables only half of the ICL ports on CR16-8 core blades of a Brocade DCX 8510-8, or all of the ICL ports on CR16-4 core blades of a Brocade DCX 8510-4. An ICL 2nd POD license enables all ICL ports on CR16-8 core blades on a Brocade DCX 8510-8 platform. (The ICL 2nd POD license does not apply to the Brocade DCX 8510-4.)

Enterprise ICL (EICL) License: For open systems environments, not for FICON. The EICL license is required on a Brocade DCX 8510 chassis, used in open systems fabrics, when that chassis is participating in a group of five or more Brocade DCX 8510 chassis connected via ICLs. Note that this license requirement does not depend upon the total number of DCX 8510 chassis that exist in a fabric, but only on how many chassis are interconnected via ICLs.
This license is only recognized/displayed when operating with FOS v7.0.1, but it is enforced with FOS v7.1.0 or later. Note also that the EICL license supports a maximum of nine DCX 8510 chassis connected in a full-mesh topology, or up to ten DCX 8510 chassis connected in a core-edge topology.

Always check with the infrastructure vendor to see which of the above capabilities they support.

Always check and carefully read the FOS release notes for much more information about each release.

VENDOR SWITCHING DEVICE CROSS-REFERENCE LIST

Figure 27 provides the vendor switching device cross-reference list.



More information

SAN Design and Best Practices

SAN Design and Best Practices SAN Design and Best Practices Version 2.1 A high-level guide focusing on Fibre Channel Storage Area Network (SAN) design and best practices, covering planning, topologies, device sharing in routed topologies,

More information

IBM System Storage SAN80B-4

IBM System Storage SAN80B-4 High-performance, scalable and ease-of-use for medium-size and enterprise SAN environments IBM System Storage SAN80B-4 High port density with 80 ports in 2U height helps save rack space Highlights High

More information

Brocade FC32-64 Port Blade

Brocade FC32-64 Port Blade Highlights Scales the Brocade X6 Director to 512 ports while maximizing space utilization with 33% more device connectivity in a high-density blade Increases agility by enabling flexible architectures

More information

Global Crossing Optical. Lambdasphere Lambdaline Managed Fibre

Global Crossing Optical. Lambdasphere Lambdaline Managed Fibre Global Crossing Optical Lambdasphere Lambdaline Managed Fibre Global Crossing Optical What are Global Crossing Optical Services? The optical network within and between data centres is the critical fabric

More information

BT Connect Networks that think Optical Connect UK

BT Connect Networks that think Optical Connect UK BT Connect Networks that think Optical Connect UK Fast, reliable, secure and affordable Connecting the heart of your organisation Your organisation and its people rely on its applications email, databases,

More information

Brocade G610, G620, and G630 Switches Frequently Asked Questions

Brocade G610, G620, and G630 Switches Frequently Asked Questions FAQ Brocade G610, G620, and G630 Switches Frequently Asked Questions Introduction Brocade, A Broadcom Inc. Company, provides the industry s leading Gen 6 Fibre Channel family of Storage Area Network (SAN)

More information

IBM System Storage SAN768B and SAN384B

IBM System Storage SAN768B and SAN384B IBM SAN768B and SAN384B Designed for highest performance and scalability for the most demanding enterprise SAN environments Highlights Drive new levels of performance with 8 Gbps Fibre Channel (FC) and

More information

SAN Distance Extension Solutions

SAN Distance Extension Solutions SN Distance Extension Solutions Company Introduction SmartOptics designs and markets all types of fibre optical Product portfolio: transmission products. Headquarted in Oslo, Norway, we serve Storage,

More information

QuickSpecs. StorageWorks SAN Switch 2/8-EL by Compaq. Overview

QuickSpecs. StorageWorks SAN Switch 2/8-EL by Compaq. Overview Overview The StorageWorks San Switch 2/8-EL is the next generation entry level 8 port fibre channel SAN fabric switch featuring 2Gb transfer speed and the optional ability to trunk or aggregate the throughput

More information

16GFC Sets The Pace For Storage Networks

16GFC Sets The Pace For Storage Networks 16GFC Sets The Pace For Storage Networks Scott Kipp Brocade Mark Jones Emulex August 30 th, 2011 To be presented to the Block Storage Track at 1:30 on Monday September 19th 1 Overview What is 16GFC? What

More information

Flex System FC5024D 4-port 16Gb FC Adapter Lenovo Press Product Guide

Flex System FC5024D 4-port 16Gb FC Adapter Lenovo Press Product Guide Flex System FC5024D 4-port 16Gb FC Adapter Lenovo Press Product Guide The network architecture on the Flex System platform is designed to address network challenges, giving you a scalable way to integrate,

More information

Obtaining and Installing Licenses

Obtaining and Installing Licenses CHAPTER 10 Licenses are available in all switches in the Cisco MDS 9000 Family. Licensing allows you to access specified premium features on the switch after you install the appropriate license for that

More information

Overview. Cisco UCS Manager User Documentation

Overview. Cisco UCS Manager User Documentation Cisco UCS Manager User Documentation, page 1 Infrastructure Management Guide, page 2 Cisco Unified Computing System, page 3 Cisco UCS Building Blocks and Connectivity, page 5 Cisco UCS Manager User Documentation

More information

Cisco MDS 9000 Series Switches

Cisco MDS 9000 Series Switches Cisco MDS 9000 Series Switches Overview of Cisco Storage Networking Solutions Cisco MDS 9000 Series 32-Gbps Directors Cisco MDS 9718 Cisco MDS 9710 Cisco MDS 9706 Configuration Chassis, dual Supervisor-1E

More information

Native Fabric Connectivity for Today s SANs Center Fabric Technology

Native Fabric Connectivity for Today s SANs Center Fabric Technology DATA CENTER ABRIC Native abric Connectivity for Today s SANs Center abric Technology Brocade offers a wide range of SAN fabric interconnect solutions to provide flexible deployment options that help maximize

More information

An Oracle White Paper April Metro Cloud Connectivity: Integrated Metro SAN Connectivity in 16 Gb/sec Switches

An Oracle White Paper April Metro Cloud Connectivity: Integrated Metro SAN Connectivity in 16 Gb/sec Switches An Oracle White Paper April 2012 Metro Cloud Connectivity: Integrated Metro SAN Connectivity in 16 Gb/sec Switches Introduction... 1! Overview... 2! Brocade Seventh-Generation SAN Metro Connectivity Features...

More information

SAN Configuration Guide

SAN Configuration Guide ONTAP 9 SAN Configuration Guide November 2017 215-11168_G0 doccomments@netapp.com Updated for ONTAP 9.3 Table of Contents 3 Contents Considerations for iscsi configurations... 5 Ways to configure iscsi

More information

HP StorageWorks Fabric OS 6.1.2_cee1 release notes

HP StorageWorks Fabric OS 6.1.2_cee1 release notes HP StorageWorks Fabric OS 6.1.2_cee1 release notes Part number: 5697-0045 First edition: June 2009 Legal and notice information Copyright 2009 Hewlett-Packard Development Company, L.P. Copyright 2009 Brocade

More information

EMC Support Matrix Interoperability Results. September 7, Copyright 2016 EMC Corporation. All Rights Reserved.

EMC Support Matrix Interoperability Results. September 7, Copyright 2016 EMC Corporation. All Rights Reserved. EMC Support Matrix Interoperability Results September 7, 2016 Copyright 2016 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date.

More information

IBM System Storage SAN 384B and IBM System Storage SAN 768B add features

IBM System Storage SAN 384B and IBM System Storage SAN 768B add features , dated November 10, 2009 IBM System Storage SAN 384B and IBM System Storage SAN 768B add features Table of contents 1 Overview 7 Publications 2 Key prerequisites 8 Technical information 2 Planned availability

More information

IBM TotalStorage Enterprise Storage Server Enhances Performance 15,000 rpm Disk Drives and Delivers New Solutions for Long Distance Copy

IBM TotalStorage Enterprise Storage Server Enhances Performance 15,000 rpm Disk Drives and Delivers New Solutions for Long Distance Copy Hardware Announcement April 23, 2002 IBM TotalStorage Enterprise Storage Server Enhances Performance 15,000 rpm Disk and Delivers New Solutions for Long Distance Copy Overview IBM continues to demonstrate

More information

Interoperability Matrix

Interoperability Matrix Cisco MDS 9506, 9509, 9513, 9216A, 9216i, 9222i, and 9134 for IBM System Storage Directors and Switches Interoperability Matrix Last update: July 21, 2008 Copyright International Business Machines Corporation

More information

Architecture. SAN architecture is presented in these chapters: SAN design overview on page 16. SAN fabric topologies on page 24

Architecture. SAN architecture is presented in these chapters: SAN design overview on page 16. SAN fabric topologies on page 24 Architecture SAN architecture is presented in these chapters: SAN design overview on page 16 SAN fabric topologies on page 24 Fibre Channel routing on page 46 Fibre Channel over Ethernet on page 65 Architecture

More information

Cisco MDS 9250i Multiservice Fabric Switch Overview. Introduction CHAPTER

Cisco MDS 9250i Multiservice Fabric Switch Overview. Introduction CHAPTER CHAPTER 1 Cisco MDS 9250i Multiservice Fabric Switch Overview This chapter describes the Cisco MDS 9250i Multiservice Fabric Switch and includes these topics: Introduction, page 1-1 Chassis Description,

More information

The IBM Systems Storage SAN768B announces native Fibre Channel routing

The IBM Systems Storage SAN768B announces native Fibre Channel routing IBM United States Announcement 108-325, dated May 13, 2008 The IBM Systems Storage SAN768B announces native Fibre Channel routing Description...2 Publications... 3 Services...3 Technical information...3

More information

IBM TotalStorage Enterprise Storage Server Delivers Bluefin Support (SNIA SMIS) with the ESS API, and Enhances Linux Support and Interoperability

IBM TotalStorage Enterprise Storage Server Delivers Bluefin Support (SNIA SMIS) with the ESS API, and Enhances Linux Support and Interoperability Hardware Announcement February 17, 2003 IBM TotalStorage Enterprise Storage Server Delivers Bluefin Support (SNIA SMIS) with the ESS API, and Enhances Linux Support and Interoperability Overview The IBM

More information

CWDM CASE STUDY DESIGN GUIDE. Line Systems, Inc. uses iconverter CWDM Multiplexers to overlay Ethernet onto SONET rings

CWDM CASE STUDY DESIGN GUIDE. Line Systems, Inc. uses iconverter CWDM Multiplexers to overlay Ethernet onto SONET rings DESIGN GUIDE CWDM CASE STUDY Line Systems, Inc. uses iconverter CWDM Multiplexers to overlay Ethernet onto SONET rings 140 Technology Drive, Irvine, CA 92618 USA 800-675-8410 +1 949-250-6510 www.omnitron-systems.com

More information

IBM System Storage SAN768B-2 and SAN384B-2

IBM System Storage SAN768B-2 and SAN384B-2 SAN768B-2 and SAN384B-2 Designed to become the foundation for private or hybrid cloud storage area networks Highlights Unleash the full potential of private or hybrid cloud storage with outstanding scalability,

More information

BROCADE PRODUCT PLAN AND PORTFOLIO: JANUARY 29, An overview of the complete Brocade product family following the McDATA acquisition

BROCADE PRODUCT PLAN AND PORTFOLIO: JANUARY 29, An overview of the complete Brocade product family following the McDATA acquisition BROCADE PRODUCT PLAN AND PORTFOLIO: JANUARY 29, 2007 An overview of the complete Brocade product family following the McDATA acquisition 2007 BROCADE PRODUCT PLAN AND PORTFOLIO One of the most critical

More information

Configuring Fibre Channel Interfaces

Configuring Fibre Channel Interfaces This chapter contains the following sections:, page 1 Information About Fibre Channel Interfaces Licensing Requirements for Fibre Channel On Cisco Nexus 3000 Series switches, Fibre Channel capability is

More information

Gen 6 Fibre Channel Evaluation of Products from Emulex and Brocade

Gen 6 Fibre Channel Evaluation of Products from Emulex and Brocade Gen 6 Fibre Channel Evaluation of Products from Emulex and Brocade Gen 6 Fibre Channel provides new speeds and features for enterprise datacenters. Executive Summary Large enterprises choose Fibre Channel

More information

Vendor: IBM. Exam Code: Exam Name: IBM Midrange Storage Technical Support V3. Version: Demo

Vendor: IBM. Exam Code: Exam Name: IBM Midrange Storage Technical Support V3. Version: Demo Vendor: IBM Exam Code: 000-451 Exam Name: IBM Midrange Storage Technical Support V3 Version: Demo QUESTION NO: 1 On the Storwize V7000, which IBM utility analyzes the expected compression savings for an

More information

Datasheet Brocade DCX 8510 backbone family

Datasheet Brocade DCX 8510 backbone family Datasheet Brocade DCX 8510 backbone family The Brocade DCX 8510 Backbone is designed to unleash the full potential of private cloud storage. With unmatched scalability, 16Gbps performance, and reliability,

More information

Copyright International Business Machines Corporation 2008, 2009, 2010, 2011, 2012, 2013 All rights reserved.

Copyright International Business Machines Corporation 2008, 2009, 2010, 2011, 2012, 2013 All rights reserved. IBM SystemStorage SAN24B-4 Express IBM SystemStorage SAN40B-4 IBM SystemStorage SAN80B-4 IBM SystemStorage SAN48B-5 IBM SystemNetworking SAN24B-5 IMPORTANT SAN b type interop documents will be transitioning

More information

Core Switch PID Format Update Best Practices

Core Switch PID Format Update Best Practices Core Switch PID Format Update Best Practices For Immediate Release Updated 7/1/2002 Executive Summary There are certain parameters which must be set identically on all switches in a given fabric. In the

More information

Brocade Fabric OS DATA CENTER. Target Path Selection Guide October 17, 2017

Brocade Fabric OS DATA CENTER. Target Path Selection Guide October 17, 2017 October 17, 2017 DATA CENTER Brocade Fabric OS Target Path Selection Guide Brocade Fabric OS (Brocade FOS) Target Path releases are recommended code levels for Brocade Fibre Channel switch platforms. Use

More information

Unified Storage Networking

Unified Storage Networking Unified Storage Networking Dennis Martin President Demartek Demartek Company Overview Industry analysis with on-site test lab Lab includes servers, networking and storage infrastructure Fibre Channel:

More information

IBM expands multiprotocol storage offerings with new products from Cisco Systems

IBM expands multiprotocol storage offerings with new products from Cisco Systems Hardware Announcement July 15, 2003 IBM expands multiprotocol storage offerings with new products from Cisco Systems Overview The Cisco MDS 9000 family is designed for investment protection, flexibility,

More information

The Impact of Emerging Data Rates on Layer One Fiber Cabling Infrastructures. Rick Dallmann Senior Data Center Infrastructure Architect CABLExpress

The Impact of Emerging Data Rates on Layer One Fiber Cabling Infrastructures. Rick Dallmann Senior Data Center Infrastructure Architect CABLExpress The Impact of Emerging Data Rates on Layer One Fiber Cabling Infrastructures Rick Dallmann Senior Data Center Infrastructure Architect CABLExpress 36 Years of Experience CABLExpress is a manufacturer of

More information

Five Reasons Why You Should Choose Cisco MDS 9000 Family Directors Cisco and/or its affiliates. All rights reserved.

Five Reasons Why You Should Choose Cisco MDS 9000 Family Directors Cisco and/or its affiliates. All rights reserved. Five Reasons Why You Should Choose Cisco MDS 9000 Family Directors 2017 Cisco and/or its affiliates. All rights reserved. Contents Overview... 2 1. Integrated Analytics for Deep Visibility...3 2. Performance

More information

Cloud Optimized Performance: I/O-Intensive Workloads Using Flash-Based Storage

Cloud Optimized Performance: I/O-Intensive Workloads Using Flash-Based Storage Cloud Optimized Performance: I/O-Intensive Workloads Using Flash-Based Storage Version 1.0 Brocade continues to innovate by delivering the industry s first 16 Gbps switches for low latency and high transaction

More information

Storage Access Network Design Using the Cisco MDS 9124 Multilayer Fabric Switch

Storage Access Network Design Using the Cisco MDS 9124 Multilayer Fabric Switch Storage Access Network Design Using the Cisco MDS 9124 Multilayer Fabric Switch Executive Summary Commercial customers are experiencing rapid storage growth which is primarily being fuelled by E- Mail,

More information

FlexArray Virtualization Implementation Guide for Third- Party Storage

FlexArray Virtualization Implementation Guide for Third- Party Storage ONTAP 9 FlexArray Virtualization Implementation Guide for Third- Party Storage June 2018 215-11150_F0 doccomments@netapp.com Table of Contents 3 Contents Where to find information for configurations with

More information