System z FICON and FCP Fabrics Intermixing Best Practices


System z FICON and FCP Fabrics: Intermixing Best Practices. Mike Blair (mblair@cisco.com) and Howard Johnson (hjohnson@brocade.com). 10 August 2011 (9:30am - 10:30am), Session 9864, Room Europe 7.

Abstract: In this jointly presented session, the major players in storage networking discuss: 1. Decision criteria for deciding whether to merge Open Systems and FICON fabrics. 2. DOs and DON'Ts for FICON/Open merged fabrics - best practices. 3. How NPIV plays into the fabric definition, and how best to zone for a zSeries Linux environment. 4. Management options and best practices for merged fabrics. At the end, there will be time for Q&A.

Agenda:
- Merging Open System and FICON Fabrics: Intermix Mode; Converged / Merged Fabrics; Consolidation / Virtualization
- Considerations when consolidating fabrics: Asset Utilization; Management; Human Factors; Application Goals
- Managing Merged Fabrics: Virtualization (NPIV); Fabric Virtualization (VSAN / Virtual Fabrics); Isolation of Resources (Zoning); CUP

MERGING OPEN SYSTEM AND FICON FABRICS: Intermix Mode; Converged / Merged Fabrics; Consolidation / Virtualization

Intermix and Replication. (Diagram: production operations access ECKD data over a FICON channel, while array-to-array replication runs over an FCP interface on the same fabric.)

Intermix and FCP Channel. (Diagram: z/OS accesses ECKD storage over FICON; Linux for System z accesses Open Systems storage over FCP, sharing the fabric.)

Intermix for Tape Archive. (Diagram: tape is a limited resource shared across lines of business; the library has a dual personality, attached via both FICON and FCP.)

Fabric Migration Stages - Independent processing, storage, and fabrics. (Diagram: Site A and Site B each attach OS servers, OS storage, DASD, System z, and tape to separate fabrics: Fabric #1 (switches 11/21), Fabric #2 (switches 12/22), and a backup fabric (switches 13/23).)

Fabric Migration Stages - Processor optimization for application consolidation. (Diagram: mainframe and OS applications at Site A and Site B are consolidated onto System z; Fabric #1, Fabric #2, and the backup fabric still connect OS storage, DASD, and tape.)

Fabric Migration Stages - Network optimization using Virtual Fabrics. (Diagram: the separate physical fabrics become Virtual Fabric #1, Virtual Fabric #2, and a virtual backup fabric shared by the mainframe and OS applications at both sites.)

Fabric Migration Stages - I/O optimization. (Diagram: Virtual Fabric #1, Virtual Fabric #2, and the virtual backup fabric at Site A (switches 11-13) and Site B (switches 21-23) share ISLs using virtual fabric tagging logic (XISL).)

Integrated or Isolated? ISLs per VSAN / VF - Isolated. Production, Dev/Test, and Linux are all running on a z9; isolate the ISL usage between these systems, dedicating 4 of the 12 ISLs to each of the three environments. Solution: define 4 ISLs used only by a single VSAN, in a port channel for high availability. This gives total isolation: flows never interfere with one another.
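A minimal NX-OS-style sketch of the isolated approach; VSAN 10 (Production) and ports fc1/13 through fc1/16 are illustrative assumptions, not values from the deck, and the same pattern would be repeated per environment:

  ! A 4-link port channel restricted to the Production VSAN only
  interface port-channel 10
    switchport mode E
    switchport trunk allowed vsan 10
  ! Join each of the four physical ISLs (repeat for fc1/14 through fc1/16)
  interface fc1/13
    switchport mode E
    channel-group 10 force
    no shutdown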

Integrated or Isolated? ISLs per VSAN / VF - Integrated. Production, Dev/Test, and Linux are all running on a z9; integrate the ISLs for the highest availability, so all 12 ISLs' bandwidth is available for peak usage. Solution: define 12 ISLs carrying all 3 VSANs, in a port channel for high availability, potentially using QoS to prioritize. Great if peak usage occurs at different times.
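The integrated counterpart, again a hedged sketch with assumed VSAN numbers 10, 20, and 30 for Production, Dev/Test, and Linux: one trunking port channel carries all three VSANs over the shared ISLs.

  ! One port channel trunking all three VSANs across all 12 ISLs
  interface port-channel 20
    switchport mode E
    switchport trunk allowed vsan 10
    switchport trunk allowed vsan add 20
    switchport trunk allowed vsan add 30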

Virtual Fabric Tagging. (Diagram: virtual switches 11, 12, and 13 at Site A and 21, 22, and 23 at Site B feed tagging logic and physical ports; long-distance ISLs carry the traffic with virtual fabric tags.)

CONSIDERATIONS WHEN CONSOLIDATING FABRICS: Asset Utilization; Management; Human Factors; Application Goals

Considering Intermix - fabrics overview, enablers, and challenges. Isolation of heterogeneous FC fabrics has been a standard practice in the data center. Why (or why not) deploy intermix in your data center now?

Customers Today: Separate Mainframe and Distributed Environments. Distributed processing is different ;-) - it supports a broad mix of availability levels, while the mainframe supports only the gold standard. Availability requirements differ: risk-averse environments call home if errors happen; risk-tolerant environments retry or reboot. Both run mission-critical applications. In merged environments, the strictest requirements are deployed for all; application goals and expectations must be understood.

Mainframe Environments: PREDICTABILITY. Avoid risk and design for redundancy: directors instead of switches; avoid unscheduled outages; minimize or eliminate scheduled outages. Workload predictability and stability: work is moved from one set of resources to another; measure what's currently going on. I/O behavior is influenced or directed by the operating system / hypervisor: predictability in path selection (RMF); orchestrate network connectivity to optimize performance; conservative deployment of new features.

Distributed Environments: EASE of DEPLOYMENT and MANAGEMENT. Accept risk and design for flexibility: switches instead of directors; accommodate unplanned outages; cost sensitive, so redundancy is deployed only when mission critical; regularly scheduled outages. Workload flexibility: performance can vary and is not typically a concern; movement is allowed to be disruptive. I/O behavior: layer-2 and layer-3 routing are exploited for connectivity; path selection or influence is a low priority ("it's the SAN's job").

Consolidation Drivers - enablers that make it worth your consideration: reduce operational cost (footprint, energy, UPS demand); increase efficiency; optimize asset cost and utilization; virtualization; consolidate applications onto fewer platforms (Linux on System z); long-term IT strategy (cloud computing).

Protocol Intermix Considerations - a different way to think about provisioning I/O. Common architecture: accommodate the most highly available applications; avoid single points of failure (five-9s); avoid over-provisioning (optimal performance). Component utilization: port density; feature options (wide-area capabilities). Change management: consumption of resources; isolation of change management; authorization of changes; "set and don't touch" versus plug-and-play environments.

MANAGING MERGED FABRICS: Virtualization (NPIV); Fabric Virtualization (VSAN / Virtual Fabrics); Isolation of Resources (Zoning); CUP

What is NPIV? Without NPIV, each Linux guest/machine has its own CHPID: six guests means 6 WWNs, 6 FCIDs, and 6 CHPIDs. Each has its own WWN / FCID and can be zoned independently to protect data isolation, but this is wasteful of channel resources (i.e., does any single guest really push 8G?), it limits the amount of consolidation possible on the System z, and it raises the cost per image when considering consolidation.

What is NPIV? With NPIV, all six Linux guests/machines share the same CHPID: 6 WWNs, 6 FCIDs, 1 CHPID. Each still has its own WWN / FCID and can be zoned independently to protect data isolation. Channel resources are well utilized, the number of channels is no longer the limiting factor for consolidation, and the cost per image is lower when considering consolidation.

How does NPIV work? The channel sends FLOGI and receives ACCEPT (FCID); each additional guest sends FDISC (second virtual WWN, third virtual WWN, ...) and receives ACCEPT (virtual FCID2, virtual FCID3, ...). Both the System z and the adjacent FC director must be NPIV enabled. The System z has a pool of virtual WWNs for each NPIV-defined CHPID, and the switch creates a unique FCID per FDISC. The base FCID will be 0xDDPP00 (DD = domain, PP = port, 00 constant); NPIV FCIDs will be 0xDDPPxx (xx = 01, 02, 03, ...). The number of NPIV virtual connections per real channel is variable.
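On the switch side, the only strictly required step on a Cisco MDS is enabling the NPIV feature; a minimal sketch:

  ! Permit multiple FCIDs (one per FDISC login) on a single F_Port
  feature npiv

After the guests log in, show flogi database lists one entry per virtual N_Port, so each guest's virtual WWN and assigned FCID can be verified against the 0xDDPPxx pattern above.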

System z N_Port ID Virtualization. FC-FS defines 24-bit fabric addressing for the Destination ID (D_ID): Domain (1 byte) identifies the switch number, up to 239 switch numbers; Area (1 byte) identifies the switch port, up to 240 ports per domain; AL_PA (1 byte) is assigned during LIP (low AL_PA = high priority). FICON Express2, Express4, and Express8 adapters now support NPIV: the address becomes Domain (1 byte), Port @ / switch CU link address (1 byte), and virtual address 00 - FF (1 byte).

Virtual Fabric (VSAN): a way to partition a switch or SAN into virtual/logical environments. Virtual SANs are created from a larger, cost-effective, redundant physical fabric, reducing the wasted ports of the older island approach. Isolation is hardware based; statistics can be gathered per VF; management is per VF; and each FICON VF gets a unique serial number / CUP.
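A minimal sketch of carving a FICON VSAN and an open-systems VSAN out of one physical MDS chassis; the VSAN numbers, names, and interfaces are illustrative assumptions:

  ! Create the two VSANs and assign ports to them
  vsan database
    vsan 20 name FICON-PROD
    vsan 30 name OPEN-PROD
    vsan 20 interface fc1/1
    vsan 30 interface fc1/5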

Mixing FICON and FCP: Zoning. Zoning is a method used with FICON/FCP switching devices to enable or disable communication between different attached devices. Zoning can be done by WWN, by domain/index (sometimes called port zoning), or by a combination of both. FCP typically uses WWN zoning; FICON typically uses D/I zoning. A best-practice recommendation is to continue to segregate FICON devices in one zone and FCP devices in one or more other zones, and to continue to use D/I zoning for FICON while using WWN zoning for FCP traffic, even on the same switching device and fabric.
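A hedged sketch of that best practice on an MDS, reusing the assumed VSANs (20 = FICON, 30 = FCP) with made-up WWNs; interface-based members approximate D/I (port) zoning:

  ! FICON zone with port-based (D/I-style) membership
  zone name FICON_CHAN_CU vsan 20
    member interface fc1/1
    member interface fc1/2
  ! FCP zone with pWWN membership for one Linux guest and its disk
  zone name LINUX1_DISK vsan 30
    member pwwn c0:50:76:00:00:00:00:01
    member pwwn 50:05:07:68:01:40:aa:bb
  ! Activate the open-systems zone set
  zoneset name OPEN_PROD vsan 30
    member LINUX1_DISK
  zoneset activate name OPEN_PROD vsan 30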

Zoning: a logical grouping of fabric-connected devices within a SAN (or virtual fabric). Zoning establishes access control: devices within a zone can access each other, and limiting access prevents unauthorized access, which increases security. Zone membership might be configured by: port World Wide Name (pWWN) - device; fabric World Wide Name (fWWN) - fabric; Fibre Channel Identifier (FCID); Fibre Channel alias (FC_Alias); IP address; domain ID/port number; or interface. (Diagram: a physical topology of hosts Host1-Host4 and disks Disk1-Disk6 mapped to logical zones ZoneA-ZoneD across VF 2 and VF 3.)

Mixing FICON and FCP: Zoning. (Diagram: on the mainframe side, z/OS and Linux connect point-to-point and through FICON directors to switched shared storage and a tape library; on the open side, servers connect through SAN directors and switches to shared storage and shared backup.)

Fabric Binding for Enhanced Cascading Security - two switches / one hop between a zSeries FICON channel and a control unit. Fabric binding is based on switch WWNs: only authorized switches can connect to a secure fabric, and an unauthorized switch results in its attachment port being placed in the Invalid Attachment state. Query Security Attributes and Exchange Security Attributes ensure compliance and predictable error recovery. Fabric binding requires insistent (static) domain IDs. (Diagram: switch membership lists - Switch X authorizes WWN Y, Switch Y authorizes WWN X, and a switch with WWN Z is rejected; Open Systems hosts and a SCSI array share the fabric.)
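A minimal fabric-binding sketch on MDS NX-OS for the FICON VSAN assumed earlier; the peer sWWN and domain ID are placeholders:

  feature fabric-binding
  ! Authorize only the peer switch's WWN with its insistent domain ID
  fabric-binding database vsan 20
    swwn 20:00:00:0d:ec:aa:bb:cc domain 33
  fabric-binding activate vsan 20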

FICON Cascade Topologies: only one hop is allowed for FICON. Multi-hop is not supported; it does work, but the testing and support burden is why it is not supported. FCIP links are supported, and port channels are supported for FICON.

FCP Topologies. (Diagram: multi-hop FCP fabric topologies; unlike FICON, FCP is not limited to one hop.)

VSAN / Virtual Fabric Based Roles. The system administrator configures and manages all platform-specific capabilities; VSAN administrators configure and manage only their own VSANs (e.g., VSAN-FICON, VSAN-FCP). This enables deployment of VSANs that fit existing operational models: the system admin configures all platform-specific capabilities, VSAN admin(s) configure and manage their own VSANs, and the existing role definition is enhanced to include VSAN(s).
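A sketch of how such a role might be expressed with MDS NX-OS RBAC; the role name and VSAN number are assumptions:

  ! A role confined to the (assumed) FICON VSAN 20
  role name ficon-vsan-admin
    rule 1 permit config
    rule 2 permit show
    vsan policy deny
      permit vsan 20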

CUP in a Consolidated Fabric. System z (z/OS, TPF, VSE, VM, Linux) and distributed UNIX hosts share FICON/FCP fabrics to local DASD/disk, replicated remote DASD/disk, and tape, tape libraries, and virtual tape. Chassis virtualization can provide improved isolation if desired. CUP could provide I/O statistics for both System z and distributed platforms. (Diagram: FICON and FCP traffic through shared switches, with local-to-remote DASD replication.)
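Per-VSAN CUP comes with FICON enablement on the VSAN. A minimal sketch, again assuming VSAN 20; FICON requires a static domain ID, which is set first:

  feature ficon
  ! FICON requires an insistent (static) domain ID
  fcdomain domain 33 static vsan 20
  ! Enabling FICON on the VSAN gives it its own CUP instance
  ficon vsan 20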

SHARE, Orlando, August 2011 System z FICON and FCP Fabrics Intermixing Best Practices Session 9864 THANK YOU!

Standards and NPIV. FC-FS describes FDISC use to allocate additional N_Port_IDs (section 12.3.2.41); NV_Ports are treated like any other port, except that they use FDISC instead of FLOGI. FC-GS-4 describes the Permanent Port Name and the Get Permanent Port Name command, based on the N_Port ID (G_PPN_ID); the PPN may be the F_Port name. FC-LS documents the responses to NV_Port-related ELSs: FDISC, FLOGI, and FLOGO (reference 03-338v1).

More Standards on NPIV. FC-DA profiles the process of acquiring additional N_Port_IDs (clause 4.9). FC-MI-2 profiles how the fabric handles NPIV requests; new service parameters are defined in 03-323v1, and name server objects in 7.3.2.2 and 7.3.2.3.

Virtualizing the Fabric - the full solution. To build a cost-saving fabric virtualization solution, seven key services are required: Virtual Fabric Attachment - the ability to assign virtual fabric membership at the port level; Multiprotocol Extensions - the ability to extend virtual fabric service to iSCSI, FCIP, FICON, etc.; Virtual Fabric Services - the ability to create fabric services per virtual fabric (login, name, RSCNs, QoS, etc.); Virtual Fabric Diagnostics - the ability to troubleshoot problems per virtual fabric; Virtual Fabric Security - the ability to define separate security policies per virtual fabric; Virtual Fabric Management - the ability to map and manage virtual fabrics independently; Inter-Fabric Routing - the ability to provide connectivity across virtual fabrics without merging the fabrics. (Diagram: the virtual fabric service model stacked from Virtualized Fabric Attachment up through Inter-Virtual Fabric Routing, implemented end to end across ISLs between MDS 9000 family switches.)
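Inter-Fabric Routing is the one service that crosses VSAN boundaries; on the MDS it is delivered by Inter-VSAN Routing (IVR). A hedged sketch sharing a tape device between two open-systems VSANs (VSAN numbers and WWNs are placeholders):

  feature ivr
  ivr nat
  ivr vsan-topology auto
  ! One IVR zone whose members live in different VSANs
  ivr zone name SHARED_TAPE
    member pwwn 50:05:07:63:00:c0:11:22 vsan 30
    member pwwn 10:00:00:00:c9:aa:bb:01 vsan 40
  ivr zoneset name IVR_ZS
    member SHARED_TAPE
  ivr zoneset activate name IVR_ZS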
