zSeries FICON and FCP Fabrics - Intermixing Best Practices

zSeries FICON and FCP Fabrics - Intermixing Best Practices. Mike Blair, mblair@cisco.com; Howard Johnson, hjohnson@brocade.com. 8 August 2012 (3:00pm - 4:00pm), Session 12075, Elite 2 (Anaheim Marriott Hotel)

Abstract: In this jointly presented session, the major players in storage networking will discuss:
- Decision criteria for deciding whether to merge Open and FICON fabrics
- DOs and DON'Ts for FICON/Open merged fabrics - best practices
- How NPIV plays into the fabric definition and how best to zone for a zSeries Linux environment
- Management options and best practices for merged fabrics
At the end, there will be time for Q&A.

Agenda
Merging Open System and FICON Fabrics: Intermix Mode; Converged / Merged Fabrics; Consolidation / Virtualization
Considerations when Consolidating Fabrics: Asset Utilization; Management; Human Factors; Application Goals
Managing Merged Fabrics: Virtualization (NPIV); Fabric Virtualization (VSAN / Virtual Fabrics); Isolation of Resources (Zoning); CUP

MERGING OPEN SYSTEM AND FICON FABRICS
Intermix Mode; Converged / Merged Fabrics; Consolidation / Virtualization

Intermix and Replication (diagram): production operations access ECKD over the FICON channel; array-to-array replication runs over the FCP interface.

Intermix and FCP Channel (diagram): z/OS accesses ECKD storage over FICON, while Linux for System z accesses open systems storage over FCP.

Intermix for Tape Archive (diagram): a limited tape resource is shared across LOBs; a dual-personality device is accessed over both FICON and FCP.

Full Intermix of Open and Mainframe: full integration. Management: full dual management with a consolidated team. Topology: FICON as the core, FCP as core-edge.

Fabric Migration Stages (diagram): Site A and Site B each host OS servers, OS storage, DASD, System z, and tape, connected by Fabric #1, Fabric #2, and a backup fabric - independent processing, storage, and fabrics.

Fabric Migration Stages (diagram): mainframe and OS applications are consolidated onto System z at each site, still served by Fabric #1, Fabric #2, and the backup fabric - processor optimization for application consolidation.

Fabric Migration Stages (diagram): the separate physical fabrics become Virtual Fabric #1, Virtual Fabric #2, and a backup virtual fabric at each site - network optimization using virtual fabrics.

Fabric Migration Stages (diagram): I/O optimization - tagging logic (XISL) at Site A and Site B lets Virtual Fabric #1, Virtual Fabric #2, and the backup virtual fabric share tagged ISLs between sites.

Integrated or Isolated ISLs per VSAN / VF - Isolated. Production, Dev/Test, and Linux are all running on a z9. Isolate the ISL usage between these systems: 4 ISLs for each of the three environments. Solution: define 4 ISLs used only by a single VSAN, with a port channel for high availability. Gives total isolation - no interference. (Diagram: Prod'n, Test, and Linux flows isolated onto dedicated ISLs.)

Integrated or Isolated ISLs per VSAN / VF - Integrated. Production, Dev/Test, and Linux are all running on a z9. Integrate ISLs for highest availability: the bandwidth of all 12 ISLs is available for peak usage. Solution: define 12 ISLs carrying all 3 VSANs, with a port channel for high availability; potentially use QoS to prioritize. Great if peak usage occurs at different times. (Diagram: Prod'n, Test, and Linux flows share the integrated ISLs.)
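To make the bandwidth trade-off concrete, here is a toy comparison of the two ISL strategies, assuming twelve 8 Gbps ISLs and the three VSANs named above; the numbers and names are illustrative, not a sizing recommendation.

```python
# Toy comparison of the two ISL strategies described above, assuming
# twelve 8 Gbps ISLs and three VSANs (Prod'n, Test, Linux). Illustrative only.

ISL_COUNT = 12
ISL_GBPS = 8
VSANS = ["Prodn", "Test", "Linux"]

# Isolated: 4 dedicated ISLs per VSAN -> total isolation, but a fixed ceiling.
isolated_peak_per_vsan = (ISL_COUNT // len(VSANS)) * ISL_GBPS     # 32 Gbps

# Integrated: all 12 ISLs carry all 3 VSANs -> a single VSAN can burst to the
# full trunk when the others are quiet (QoS can arbitrate contention).
integrated_peak_per_vsan = ISL_COUNT * ISL_GBPS                    # 96 Gbps

for vsan in VSANS:
    print(f"{vsan}: isolated peak {isolated_peak_per_vsan} Gbps, "
          f"integrated peak {integrated_peak_per_vsan} Gbps")
```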

Virtual Fabric Tagging (diagram): virtual switches at Site A and Site B feed tagging logic in front of the physical ports - long-distance ISLs carry the traffic with virtual fabric tagging.

CONSIDERATIONS WHEN CONSOLIDATING FABRICS
Asset Utilization; Management; Human Factors; Application Goals

Considering Intermix: isolation of heterogeneous FC fabrics has been a standard practice in the data center. Why (or why not) deploy intermix in your data center now? Fabrics overview, enablers, and challenges.

Customers Today: Separate Mainframe and Distributed Environments. Distributed processing - it's different ;-) - supports a broad mix of availability levels, while the mainframe supports only the gold standard. Availability requirements are different: risk-averse environments call home if errors happen; risk-tolerant environments retry/reboot. Both run mission-critical applications. In merged environments the strictest requirements are deployed for all; application goals and expectations must be understood.

Mainframe Environments: PREDICTABILITY. Avoid risk and design for redundancy: directors instead of switches; avoid unscheduled outages; minimize or eliminate scheduled outages. Workload predictability and stability: workloads are moved from one set of resources to another; measure what's currently going on. I/O behavior is influenced or directed by the operating system / hypervisor, with predictability in path selection (RMF). Orchestrate network connectivity to optimize performance; deploy new features conservatively.

Distributed Environments: EASE of DEPLOYMENT and MANAGEMENT. Accept risk and design for flexibility: switches instead of directors; accommodate unplanned outages; cost-sensitive, so redundancy is deployed only when mission critical; regularly scheduled outages. Workload flexibility: performance can vary and is not typically a concern; movement is allowed to be disruptive. I/O behavior: layer-2 and layer-3 routing are exploited for connectivity; path selection or influence is a low priority - it's the SAN's job.

Consolidation Drivers - enablers that make it worth your consideration: reduce operational cost (footprint, energy, UPS demand); increase efficiency; optimize asset cost and utilization; virtualization - consolidate applications onto fewer platforms (Linux on System z); long-term IT strategy (cloud computing).

Protocol Intermix Considerations - a different way to think about provisioning I/O. Common architecture: accommodate the most highly available applications; avoid single points of failure (five-9s); avoid over-provisioning (optimal performance). Component utilization: port density; feature options (wide-area capabilities). Change management: consumption of resources; isolation of change management; authorization of changes; "set and don't touch" versus a plug-and-play environment.

MANAGING MERGED FABRICS
Virtualization (NPIV); Fabric Virtualization (VSAN / Virtual Fabrics); Isolation of Resources (Zoning); CUP

What is NPIV? Without NPIV (diagram: Linux 1 through Linux 6, 6 WWNs, 6 FCIDs, 6 CHPIDs, connected across an FCIP tunnel): each Linux guest/machine has its own CHPID; each has its own WWN/FCID; each can be zoned independently for data isolation. This is wasteful of channel resources (i.e., does this guest really push 8G?), limits the amount of consolidation onto the System z, and raises the cost per image when considering consolidation.

What is NPIV? With NPIV (diagram: Linux 1 through Linux 6, 6 WWNs, 6 FCIDs, 1 CHPID, connected across an FCIP tunnel): all 6 Linux guests/machines share the same CHPID; each still has its own WWN/FCID; each can be zoned independently for data isolation. This gives good utilization of channel resources, the number of channels is no longer the limiting factor for consolidation, and the cost per image is lower when considering consolidation.

How does NPIV work? Login sequence: FLOGI -> ACCEPT (FCID); FDISC (second virtual WWN) -> ACCEPT (virtual FCID2); FDISC (third virtual WWN) -> ACCEPT (virtual FCID3); and so on (the diagram shows the exchange crossing an FCIP tunnel). Both the System z and the adjacent FC director must be NPIV enabled. The System z has a pool of virtual WWNs for each NPIV-defined CHPID. The switch creates a unique FCID per FDISC: the base FCID is 0xDDPP00 (DD = domain, PP = port, 00 is constant), and the NPIV FCIDs are 0xDDPPxx (xx is 01, 02, 03, ...). The number of NPIV virtual connections per real channel is variable.
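As a rough illustration of the FCID rule above, the following minimal Python sketch models how a switch-side allocator might answer a FLOGI and subsequent FDISCs; the class and method names are hypothetical and this is not any vendor's implementation.

```python
# Minimal sketch of the FCID assignment rule described above:
# base FCID = 0xDDPP00 for the FLOGI, then 0xDDPPxx for each FDISC.
# Hypothetical helper, for illustration only.

class SwitchPort:
    def __init__(self, domain, port):
        self.domain = domain          # DD: switch domain ID
        self.port = port              # PP: port number on that domain
        self.next_index = 0           # xx: 00 for FLOGI, 01, 02, ... for FDISC
        self.logins = {}              # WWN -> assigned FCID

    def _assign(self, wwn):
        fcid = (self.domain << 16) | (self.port << 8) | self.next_index
        self.logins[wwn] = fcid
        self.next_index += 1
        return fcid

    def flogi(self, physical_wwn):
        """Base login from the physical N_Port (the CHPID)."""
        return self._assign(physical_wwn)

    def fdisc(self, virtual_wwn):
        """Additional login for an NPIV virtual WWN (e.g. a Linux guest)."""
        return self._assign(virtual_wwn)


port = SwitchPort(domain=0x65, port=0x04)
print(hex(port.flogi("c0:50:76:ff:fb:00:00:01")))   # 0x650400 (base FCID)
print(hex(port.fdisc("c0:50:76:ff:fb:00:00:02")))   # 0x650401
print(hex(port.fdisc("c0:50:76:ff:fb:00:00:03")))   # 0x650402
```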

System z N_Port ID Virtualization. FC-FS 24-bit fabric addressing: the Destination ID (D_ID) is Domain (1 byte) | Area/Port (1 byte) | AL_PA (1 byte). The Domain identifies the switch number (up to 239 switch numbers); the Area identifies the switch port (up to 240 ports per domain); the AL_PA is assigned during LIP (low AL_PA, high priority). FICON Express2, Express4, and Express8 adapters now support NPIV. In FICON terms the three bytes map to the switch address (domain), the CU link address (port @), and the virtual address (00 - FF).
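A small sketch of the 24-bit D_ID layout described above, packing and unpacking the Domain / Area / AL_PA bytes; the field and function names are ours for illustration.

```python
# Pack/unpack the FC-FS 24-bit address described above:
# Domain (1 byte) | Area/Port (1 byte) | AL_PA / virtual address (1 byte).

def pack_did(domain, area, al_pa):
    """Build a 24-bit Destination ID from its three one-byte fields."""
    for name, value in (("domain", domain), ("area", area), ("al_pa", al_pa)):
        if not 0 <= value <= 0xFF:
            raise ValueError(f"{name} must fit in one byte, got {value:#x}")
    return (domain << 16) | (area << 8) | al_pa

def unpack_did(did):
    """Split a 24-bit Destination ID into (domain, area, al_pa)."""
    return (did >> 16) & 0xFF, (did >> 8) & 0xFF, did & 0xFF

did = pack_did(domain=0x65, area=0x04, al_pa=0x01)
print(f"{did:#08x}")          # 0x650401
print(unpack_did(0x650401))   # (101, 4, 1)
```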

Virtual Fabric (VSAN): a way to partition a switch or SAN into a virtual/logical environment. Virtual SANs are created from a larger, cost-effective, redundant physical fabric, reducing the wasted ports of the older island approach. Hardware-based isolation; statistics can be gathered per VF; management is per VF; a unique serial number / CUP per FICON VF.
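The isolation property can be pictured with a toy model: every F_Port carries a VSAN assignment, and a login can only reach logins in the same VSAN (crossing VSANs would require IVR). The structures and names below are illustrative only.

```python
# Toy model of VSAN partitioning: each port is assigned a VSAN, and a device
# can only reach devices whose ports belong to the same VSAN.

from collections import defaultdict

class PhysicalSwitch:
    def __init__(self):
        self.port_vsan = {}                      # port number -> VSAN ID
        self.members = defaultdict(set)          # VSAN ID -> set of WWNs
        self.port_of = {}                        # WWN -> port number

    def assign_port(self, port, vsan):
        self.port_vsan[port] = vsan

    def login(self, wwn, port):
        vsan = self.port_vsan[port]
        self.members[vsan].add(wwn)
        self.port_of[wwn] = port

    def can_reach(self, src_wwn, dst_wwn):
        """Traffic never crosses VSAN boundaries without inter-VSAN routing."""
        src_vsan = self.port_vsan[self.port_of[src_wwn]]
        dst_vsan = self.port_vsan[self.port_of[dst_wwn]]
        return src_vsan == dst_vsan


sw = PhysicalSwitch()
sw.assign_port(1, vsan=10)   # FICON VSAN
sw.assign_port(2, vsan=10)
sw.assign_port(3, vsan=20)   # FCP VSAN
sw.login("ficon-chpid", 1)
sw.login("ficon-dasd", 2)
sw.login("fcp-host", 3)
print(sw.can_reach("ficon-chpid", "ficon-dasd"))  # True  - same VSAN
print(sw.can_reach("fcp-host", "ficon-dasd"))     # False - different VSANs
```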

VSAN / Virtual Fabric Based Roles: enables deployment of VSANs that fit existing operational models. The system administrator configures and manages all platform-specific capabilities; VSAN administrator(s) configure and manage only their own VSANs (for example, a FICON VSAN and an FCP VSAN). The existing role definition is enhanced to include VSAN(s).
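A minimal sketch of VSAN-scoped roles, assuming a system-admin role with platform-wide scope and VSAN-admin roles limited to their own VSANs; the role names and the check are illustrative, not a vendor RBAC API.

```python
# Sketch of VSAN-scoped roles: a system admin may change anything, while a
# VSAN admin may only change objects inside the VSANs granted to that role.

class Role:
    def __init__(self, name, vsans=None):
        self.name = name
        self.vsans = vsans            # None means "all VSANs" (system admin)

    def may_configure(self, vsan):
        return self.vsans is None or vsan in self.vsans


system_admin = Role("system-admin")                   # platform-wide scope
ficon_admin = Role("vsan-admin-ficon", vsans={10})    # FICON VSAN only
fcp_admin = Role("vsan-admin-fcp", vsans={20})        # FCP VSAN only

print(system_admin.may_configure(10))  # True
print(ficon_admin.may_configure(10))   # True
print(ficon_admin.may_configure(20))   # False - outside this admin's VSANs
```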

Mixing FICON and FCP: Zoning. Zoning is a method used with FICON/FCP switching devices to enable or disable communication between different attached devices. Zoning can be done by WWN, by domain/index (sometimes called port zoning), or by a combination of both. FCP typically uses WWN zoning; FICON typically uses D/I zoning. A best-practice recommendation is to continue to segregate FICON devices in one zone and FCP devices in one or more other zones. You would normally continue to use D/I zoning for FICON while using WWN zoning for FCP traffic, even on the same switching device and fabric.
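A minimal sketch of zone membership with the two member types just named: WWN members (typical for FCP) and domain/index members (typical for FICON). The names, WWNs, and structures are illustrative only.

```python
# Sketch of zone membership evaluation supporting both WWN and
# domain/index ("port") members.

class Zone:
    def __init__(self, name):
        self.name = name
        self.wwn_members = set()      # pWWN strings
        self.di_members = set()       # (domain, index) tuples

    def add_wwn(self, pwwn):
        self.wwn_members.add(pwwn.lower())

    def add_domain_index(self, domain, index):
        self.di_members.add((domain, index))

    def contains(self, pwwn, domain, index):
        """A device matches if its WWN or its attachment port is a member."""
        return pwwn.lower() in self.wwn_members or (domain, index) in self.di_members

    def permits(self, dev_a, dev_b):
        """Two devices may talk if both are members of this zone."""
        return self.contains(*dev_a) and self.contains(*dev_b)


ficon_zone = Zone("FICON_PROD")           # D/I zoning for FICON
ficon_zone.add_domain_index(0x65, 0x04)   # CHPID attachment port
ficon_zone.add_domain_index(0x65, 0x10)   # DASD attachment port

fcp_zone = Zone("FCP_LINUX")              # WWN zoning for FCP
fcp_zone.add_wwn("c0:50:76:ff:fb:00:00:02")
fcp_zone.add_wwn("50:05:07:63:00:c0:12:34")

chpid = ("10:00:00:05:1e:00:00:01", 0x65, 0x04)
dasd  = ("50:05:07:63:00:ff:00:01", 0x65, 0x10)
print(ficon_zone.permits(chpid, dasd))    # True - both ports are D/I members
```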

Zoning: a logical grouping of fabric-connected devices within a SAN (or virtual fabric). Zoning establishes access control: devices within a zone can access each other. Zoning increases security: limiting access prevents unauthorized access. Zone membership might be configured by: Port World Wide Name (pWWN) - device; Fabric World Wide Name (fWWN) - fabric; Fibre Channel Identifier (FCID); Fibre Channel Alias (FC_Alias); IP address; Domain ID / port number; Interface. (Diagram: a physical topology of hosts and disks grouped into ZoneA through ZoneD across VF 2 and VF 3.)

Mixing FICON and FCP: Zoning (diagram): z/OS and Linux attached point-to-point and switched through FICON directors and SAN directors to shared storage, a tape library, and shared backup, alongside open systems servers and their switches.

Fabric Binding for Enhanced Cascading Security: two switches / one hop, based on switch WWNs. Only authorized switches can connect to a secure fabric; an unauthorized switch results in the attachment port being placed in the Invalid Attachment state. Query Security Attributes and Exchange Security Attributes ensure compliance, with predictable error recovery. Requires insistent (static) Domain IDs. (Diagram: a zSeries on Switch X (WWN X) and a FICON control unit plus an open systems SCSI array on Switch Y (WWN Y); each switch's membership list authorizes the other's WWN, while a switch with unauthorized WWN Z is rejected.)
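A sketch of the fabric-binding check described above: when an ISL comes up, the local switch consults its switch membership list and either admits the neighbor or places the port in the Invalid Attachment state. Names and WWNs are illustrative, not actual switch behavior codes.

```python
# Sketch of fabric binding: admit an ISL neighbor only if its switch WWN is
# in the local switch membership list; otherwise mark the attachment invalid.

class FabricBinding:
    def __init__(self, authorized_switch_wwns):
        self.authorized = {wwn.lower() for wwn in authorized_switch_wwns}

    def attach_isl(self, port, neighbor_switch_wwn):
        if neighbor_switch_wwn.lower() in self.authorized:
            return f"port {port}: E_Port up, neighbor {neighbor_switch_wwn} admitted"
        return f"port {port}: Invalid Attachment (unauthorized switch {neighbor_switch_wwn})"


# Switch X authorizes only Switch Y (and vice versa on the other side).
binding_x = FabricBinding(["10:00:00:05:1e:aa:bb:01"])      # WWN Y
print(binding_x.attach_isl(12, "10:00:00:05:1e:aa:bb:01"))  # admitted
print(binding_x.attach_isl(13, "10:00:00:05:1e:cc:dd:02"))  # Invalid Attachment
```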

FICON Cascade Topologies: only one hop is allowed for FICON. Multi-hop is not supported - it does work, but testing and support effort are the reasons it is not supported. FCIP links are supported. Port channels are supported for FICON.

FCP Topologies (diagram).

CUP in a Consolidated Fabric (diagram): System z (z/OS, TPF, VSE, VM, Linux) and UNIX/distributed servers share FICON/FCP fabrics connecting local DASD/disk, remote DASD/disk with replication, distributed switches, and tape (tape libraries and virtual tape), carrying both FICON and FCP traffic. Chassis virtualization can provide improved isolation if desired. FICON CUP could provide I/O statistics for both System z and distributed platforms.

SHARE, Anaheim, August 2012. zSeries FICON and FCP Fabrics - Intermixing Best Practices, Session 12075. THANK YOU!

Standards and NPIV. FC-FS: describes FDISC use to allocate additional N_Port_IDs (Section 12.3.2.41); NV_Ports are treated like any other port, the exception being that they use FDISC instead of FLOGI. FC-GS-4: describes the Permanent Port Name and the Get Permanent Port Name command, based on the N_Port ID (G_PPN_ID); the PPN may be the F_Port Name. FC-LS: documents the responses to NV_Port-related ELSs (FDISC, FLOGI, and FLOGO); reference 03-338v1.

More Standards on NPIV. FC-DA: profiles the process of acquiring additional N_Port_IDs (Clause 4.9). FC-MI-2: profiles how the fabric handles NPIV requests; new Service Parameters are defined in 03-323v1; Name Server objects in 7.3.2.2 and 7.3.2.3.

Virtualizing the Fabric - The Full Solution. To build a cost-saving fabric virtualization solution, 7 key services are required:
- Virtual Fabric Attachment: the ability to assign virtual fabric membership at the port level
- Multiprotocol Extensions: the ability to extend virtual fabric service to iSCSI, FCIP, FICON, etc.
- Virtual Fabric Services: the ability to create fabric services per virtual fabric (Login, Name, RSCNs, QoS, etc.)
- Virtual Fabric Diagnostics: the ability to troubleshoot problems per virtual fabric
- Virtual Fabric Security: the ability to define separate security policies per virtual fabric
- Virtual Fabric Management: the ability to map and manage virtual fabrics independently
- Inter-Fabric Routing: the ability to provide connectivity across virtual fabrics without merging the fabrics
(Diagram: the virtual fabric service model layered from virtualized fabric attachment up through inter-virtual fabric routing, across an ISL between MDS 9000 Family switches - a full-service, end-to-end virtual fabric implementation.)

THIS SLIDE INTENTIONALLY LEFT BLANK