EMC Disk Library for mainframe DLm960


EMC Disk Library for mainframe DLm960 Version 3.3 User Guide
P/N REV A01

Copyright 2012 EMC Corporation. All rights reserved. Published July 2012.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on the EMC support website. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

CONTENTS

Preface

Chapter 1  Overview of EMC Disk Library for Mainframe
    Introduction to Disk Library for mainframe
    DLm960 architecture
        VTEC
        Backend storage
        Mainframe channel interfaces
    DLm devices and capacity
    Tape emulation
        Virtual tape drive states
        Data formats
    Support for physical tape drives
    High availability features of DLm960
        VTEC
        Celerra server
        Data Domain
        Expanded capacity on Data Domain DD880
    Features and benefits

Chapter 2  DLm Operations
    Management access to DLm
        Gather connection data
        Access the DLm Console
        Access the ACP
        Set date and time
        User administration
    Access a VTE
        VT console
        VTE reboot
    Power up DLm
        Celerra server power-up
        DD880 power-up
        ACP power-up
        VTE power-up
    Power down DLm
        VTE powerdown
        DD880 powerdown
        ACP powerdown
        Halt the Celerra server
        Verify Control Station powerdown
        Power down the CLARiiON storage array
        ACP power down
    Start and stop tape devices
    Support access to DLm
        ESRS
        Modem support

Chapter 3  DLm Administration
    Tape libraries
        DLm 2.x filesystem (Legacy)
        DLm 3.x enhanced filesystem (EFS)
        Backward compatibility
        Initialize scratch volumes
    Configure virtual devices
        Planning considerations
        DLm configuration files
        Configure global parameters
        Add devices
        Scratch synonyms
        Save configuration
        Delete a device range
    Manage configuration files
        Activate or install a configuration
        Create a new configuration
        Copy a configuration
        Modify or delete a configuration
    Tape erase
        Space erase policy
        Time-to-Live erase policy
        Both
    Manage VTE and ACP logs
        VTE logs
        Support data
    Back-end tape support
        Direct Tape
        Export to and import from tapes
    DLm diagnostic reporting
        VTEC ConnectEMC
        Data Domain DD880 alert notifications
    AWSPRINT library utility

Chapter 4  DLm Replication
    Overview
        Replication terminology
    Celerra replication
        Supported configurations
        Celerra replication procedure
        Celerra RepOutOfSyncHours feature
        DLm Celerra replication and disaster recovery
        Tape catalog considerations
    Deduplication storage replication
        Supported configurations
        Replication session setup
        Throttling
        Recovery point
        Recovery time
        Disaster recovery in Data Domain systems
        Directory replication flow
    Replication between DLm 3.x and DLm 2.x systems
        Prerequisites
        Replication considerations

Chapter 5  Guaranteed Replication
    Overview of GR
    Tape requirements for GR
    GR configuration
    Manage GR
    Verify GR configuration
    MIH considerations for Guaranteed Replication

Chapter 6  DLm WORM Tapes
    Overview
    WORM control file
    File locking for WORM
    Retention period
    Configure WORM
        FLR
        FLRRET
        FLRMOD
        FLREXTENTS
    Determine if WORM is enabled
    Extend or modify a WORM tape
    Backward compatibility of modified (segmented) WORM tapes
    Scratch WORM tapes

Chapter 7  Mainframe Tasks
    Configure devices
        Real 3480, 3490, or 3590
        Manual tape library
        MTL considerations for VTE drive selection
        MTL-related IBM maintenance
        EMC Unit Information Module
        Missing Interrupt Handler
        Mainframe configuration for GR
        Mainframe configuration for deduplicated virtual tapes
        Dynamic device reconfiguration considerations
        DFSMShsm considerations
        Specify tape compaction
    Locate and upload the DLm utilities and JCL for z/OS
        Downloading and using the DLm utilities and JCL for z/OS
        GENSTATS utility
        DLm scratch utility program
        DLMCMD utility program
        DLMVER utility program
    Initial program load from a DLm virtual tape
        Create a stand-alone IPL tape on DLm
        IPL from the stand-alone IPL tape
        IPL considerations for DLm

Chapter 8  Using DLm with Unisys
    Unique DLm operations for Unisys mainframes
        Autodetection
        Load displays
        Mount "Ready" interrupt
        Query Config command
        Ring-Out Mount request
        Scratch request
    Configuring for Unisys
        Device type
        Labels
        Scratch tapes
        Initializing tapes for Unisys
        Configuring the mainframe for DLm

Chapter 9  z/OS Console Support
    z/OS Console operation
    DLMHOST
        Installing DLMHOST
        Running DLMHOST
        DLMHOST configuration file
    Using z/OS Console support
        DLMHOST commands
        WTOR command examples

Chapter 10  Data Encryption
    Overview
        Mixed volumes in the same VTE
    How RSA Key Manager works with DLm
    Configure encryption on DLm
        Configure encryption when adding devices
        Configure encryption on existing devices

Appendix A  Virtual Tape Operator Command Reference
    Virtual Tape Operator command reference
        Syntax
        CLOSE, VSTATS, PATH, EXPORT, FIND, HELP, IMPORT, INITIALIZE, LOAD, QUERY, QUIESCE, READY, REWIND, SAVE, TRACE, SET, SHOW, SNMP, STARTVT, STOPVT, UNLOAD, UNQUIESCE, UNREADY

Appendix B  AWSTAPE Information
    AWSTAPE format

Appendix C  Load Display Command
    CCW Opcode x'9F'
    Load display messages
    Format Control Byte
    Messages 0 and 1
    Load display data
    Format Control Byte

Appendix D  Extract DLm statistics
    DLm statistics files
    Extraction utility
    Hourly statistics
    Volume statistics
    Mount statistics
    Unmount statistics
    Examples

Appendix E  System Messages
    Message format
    DLm system messages
        Call home messages
        EMCvts messages
    z/OS system messages
        DLMCMD messages
        DLMHOST messages
        DLMLIB message
        DLMSCR messages
        DLMVER messages
        Healthcheck messages
    VTEC errors that generate ConnectEMC events

Index

FIGURES

    Front view of the VTE
    Rear view of a VTE
    Front view of the ACP
    Rear view of the ACP
    24-port AT-9924TL switch
    Fujitsu XG2000R switch
    DD880 controller, rear view
    ESCON converter cable
    VTE to storage controllers: network topology
    VTE to DD storage controllers: network topology
    Rear panel of a Gen 2 Control Station and an ACP
    PuTTY session
    ACP desktop with web browser
    DLm Console login page
    DLm Console
    DLm date and time
    User ID creation
    LDAP user authentication
    VT console
    SPS power switches on DLm
    DLm960 bay master power switches
    ES20 expansion shelf
    Controller power buttons
    Front panel of the DLm960 ACP
    Front view of the VTE
    EMC Secure Remote Support
    Global options
    Control units
    Add devices section
    Scratch Synonyms
    Save configuration
    System status
    VTE logs
    Gathering ACP and VTE support data
    SNMP configuration
    Alert messages
    DLm replication
    Unisys Device Panel
    Configuring for encryption
    AWSTAPE single disk file

TABLES

    ACP rear panel connectors
    DD880 stream count limits
    FICON adapter LED indicators
    Differences between the DLm models
    Details of ESCON and FICON connections
    DLm system access details
    Control Station usernames and passwords
    Behavior of GR and non-GR devices
    Example of LIBRARY-ID and LIBPORT-ID
    Parameters in DLMSCR
    DLMSCR report output messages
    Error code from DLMCMD
    Load display data
    Format Control Byte

PREFACE

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, please contact your EMC representative.

Note: This document was accurate as of the time of publication. However, as information is added, new versions of this document may be released to the EMC Online Support website. Check the EMC Online Support website to ensure that you are using the latest version of this document.

Purpose

EMC Disk Library for mainframe (DLm) provides IBM tape drive emulation to the z/OS mainframe using disk storage systems in place of physical tapes. This guide provides information about the features, performance, and capacities of DLm 3.0 and later. It also includes installation and configuration information that is required for ongoing operation.

Audience

This guide is part of the EMC DLm documentation set and is intended for use by system operators to assist in day-to-day operation. Installation, configuration, and maintenance tasks must be performed by qualified EMC service personnel only. Readers of this document are expected to be familiar with tape library operations and the associated tasks in the mainframe environment.

Related documentation

The following EMC publications provide additional information:

- EMC Disk Library for mainframe Physical Planning Guide
- EMC Disk Library for mainframe DLm960 Version 3.3 Release Notes
- EMC Disk Library for mainframe Command Processors User Guide
- Using Celerra Replicator (V2)
- Using SnapSure on Celerra

- 40U-C Cabinet Unpacking and Setup Guide

The EMC documents listed here and additional Celerra information are available on the EMC Online Support website. The Data Domain documents listed here and additional Data Domain information are available in the Data Domain portal. Data Domain documentation is also available on the Data Domain documentation CD that is delivered with the DD880.

Conventions used in this document

EMC uses the following conventions for special notices:

DANGER indicates a hazardous situation which, if not avoided, will result in death or serious injury.

WARNING indicates a hazardous situation which, if not avoided, could result in death or serious injury.

CAUTION, used with the safety alert symbol, indicates a hazardous situation which, if not avoided, could result in minor or moderate injury.

Note: A note presents information that is important, but not hazard-related.

IMPORTANT: An important notice contains information essential to software or hardware operation.

Typographical conventions

EMC uses the following type style conventions in this document.

Normal. Used in running (nonprocedural) text for:
- Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
- Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, functions, and utilities
- URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, filesystems, and notifications

Bold. Used in running (nonprocedural) text for:
- Names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, and man pages
Used in procedures for:
- Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
- What the user specifically selects, clicks, presses, or types

Italic. Used in all text (including procedures) for:
- Full titles of publications referenced in text
- Emphasis (for example, a new term)
- Variables

Courier. Used for:
- System output, such as an error message or script
- URLs, complete paths, filenames, prompts, and syntax when shown outside of running text

Courier bold. Used for:
- Specific user input (such as commands)

Courier italic. Used in procedures for:
- Variables on the command line
- User input variables

< >  Angle brackets enclose parameter or variable values supplied by the user
[ ]  Square brackets enclose optional values

|    Vertical bar indicates alternate selections; the bar means "or"
{ }  Braces indicate content that you must specify (that is, x or y or z)
...  Ellipses indicate nonessential information omitted from the example

Where to get help

EMC support, product, and licensing information can be obtained as follows.

Product information: For documentation, release notes, software updates, or information about EMC products, licensing, and service, go to the EMC Online Support website.

Technical support: For technical support, go to the EMC Online Support website and choose Support. The Support page offers several options, including one for opening a service request. Note that to open a service request, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or with questions about your account.

Your comments: Your suggestions help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send your opinions of this document to: techpubcomments@emc.com


CHAPTER 1
Overview of EMC Disk Library for Mainframe

This chapter provides an overview of EMC Disk Library for mainframe. Topics include:

- Introduction to Disk Library for mainframe
- DLm960 architecture
- Tape emulation
- Support for physical tape drives
- High availability features of DLm960
- Features and benefits

Introduction to Disk Library for mainframe

The EMC Disk Library for mainframe (DLm) family of products provides IBM System z mainframe customers with the ability to replace their physical tape libraries, including traditional virtual tape libraries such as the IBM VTS and Sun/STK VSM, with dynamic tapeless virtual tape solutions, eliminating the challenges tied to traditional tape-based processing.

Some customers have already implemented mainframe host-based tape-emulation solutions such as IBM VTFM (formerly known as CopyCross) and CA Vtape. However, these solutions use expensive host CPU cycles to perform the tape operations, and use expensive direct access storage device (DASD) space to keep the tape volumes. DLm gives these customers the option to offload the tape emulation processes from the mainframe host and free up DASD space.

DLm works seamlessly with the mainframe environment, including the major tape management systems, DFSMS, DFHSM, and backup applications such as DFDSS, FDR, and others, without the need to change any of the customer's JCL statements. There is no need to start a task or define a specific subsystem to operate DLm, since the mainframe host sees DLm simply as tape devices. DLm tape drives can be shared across LPARs without additional tape-sharing software, either through local device varying or through the implementation of MTL definitions.

DLm provides disaster recovery protection using bidirectional replication between two DLm systems in the same or different sites. It also supports unidirectional replication from one DLm system to up to four DLm systems that can be in different sites. Since the tape information is kept on disk, DLm enables you to perform disaster recovery tests without compromising business continuance by having to stop replication during testing.

The DLm960 model with the Data Domain DD880 offers deduplication features that deliver the aggregate throughput performance needed for enterprise data centers. This model's inline deduplication provides an efficient solution for storing virtual tapes for backup and archive applications. The result is lower storage costs and efficient use of replication links, as only the unique data is transported between the sites.

In summary, DLm offers many benefits over traditional tape libraries and virtual tape libraries, including high performance, higher reliability, advanced information protection, and overall lower total cost of ownership (TCO).

DLm960 architecture

The major components of a DLm960 (with an optional DD880) system are the virtual tape emulation controller (VTEC) and the backend storage system. The backend systems supported by DLm960 are:

- EMC Celerra server with integrated disk storage arrays
- Data Domain storage system (DD880)

DLm960 comes with one Celerra NS960 (base Celerra) by default. In addition, you can order an optional DD880 and an additional Celerra NS960 (expansion Celerra). DLm960 provides deduplicated storage using Data Domain systems and traditional disk storage using EMC Celerra systems.

Note: All new DD880s integrated into DLm support only 2 TB drives. However, DD880 systems with 1 TB drives that are already part of DLm 2.3.x systems will continue to be supported after the DLm has been upgraded. The 1 TB and 2 TB drives should not be mixed in an ES20 shelf.

Note: Total supported capacity and number of cabinets depend on the drive capacity.

VTEC

The VTEC is the subsystem that connects to an IBM or IBM-compatible mainframe and provides the emulation of IBM 3480/3490/3590 tape drives. A VTEC contains the following components:

- Virtual tape engines (VTEs)
- Two access control points (ACPs)
- Two 1 Gb Ethernet switches for the management network
- Two 10 Gb Ethernet switches for data transfer

VTE

Each DLm configuration can have one to six VTEs. The mainframe virtual tape emulation software, Virtuent, executes on the VTEs. Each VTE is connected to the mainframe through two FICON or three ESCON channels and emulates up to 256 tape drives. VTEs interface to the mainframe and direct tape data to and from the backend storage arrays. This data is written to the storage arrays and stored in NFS filesystems over a redundant 10 Gb data network.

Figure 1 Front view of the VTE (callouts A through L, Disk 0 and Disk 1)

The VTE controls and indicators are as follows:

A and B  LAN 2 (Eth 0) and LAN 1 (Eth 1) activity LEDs: A blinking green light indicates network activity. A continuous green light indicates a link between the system and the network to which it is connected.

C  Power button: Turns the system power on or off. Do not press the Power button while the VTE is online to the host. Follow the shutdown procedure in "Power down DLm" on page 79 before pressing the Power button.

D  Power/Sleep LED: Continuous green indicates that the system is powered on. Blinking green indicates that the system is sleeping.

No light indicates that the system does not have power applied to it.

E  Disk activity LED.

F  System status LED: Continuous green indicates that the system is operating normally. Blinking green indicates that the system is operating in a degraded condition. Continuous amber indicates that the system is in a critical or nonrecoverable condition. No light indicates that POST is running, or the system is off.

G  System identification LED: A blue light glows when the ID button has been pressed. A second blue ID LED on the rear of the unit also glows when the ID button has been pressed. The ID LED allows you to identify the system you are working on in a bay with multiple systems.

H  System identification button: Toggles the front panel ID LED and the server board ID LED on and off. The server board LED is visible from the rear of the chassis and allows you to locate the server from the rear of the bay.

I  Reset button: Reboots and initializes the system. Do not press the Reset button while the VTE is online to the host. Follow the shutdown procedure in "Power down DLm" on page 79 before pressing the Reset button.

J  USB 2.0 port: Allows you to attach a Universal Serial Bus (USB) component to the front of the VTE.

K  NMI button: Pressing this recessed button with a paper clip or pin issues a non-maskable interrupt and puts the system into a halt state for diagnostic purposes.

L  Video port: Allows you to attach a video monitor to the front of the VTE.

Figure 2 Rear view of a VTE (10G copper ports for ESCON, Channel 0 and Channel 1 for FICON, FC port to tape)

ACP

The DLm960 comes with two ACPs in a primary-secondary configuration. This highly available configuration requires a highly available IP address that is always associated with the primary ACP. This ensures management access even when one of the ACPs fails. If the primary ACP fails, the secondary ACP becomes the primary.

The ACPs provide a user-friendly console (DLm Console) to execute various setup and configuration tasks. The ACPs connect to the DLm management LAN.

Note: You must connect both ACPs to your network.

Figure 3 Front view of the ACP

A. USB port
B. Power button
C. System status LED
D. System power LED
E. Hard drive activity LED
F. NIC 1 LED
G. NIC 2 LED

Figure 4 Rear view of the ACP (AC power, Mouse, Com 1, Modem, Eth 1, Eth 2, Eth 3, Com 2, Keyboard, Video, Eth 0, and USB ports for a USB disk, mouse, or keyboard)

Table 1 ACP rear panel connectors

Connector   Description
AC Power    AC power. Connect an AC power cord between the AC power plug and a power distribution unit.
Com 1       Not used in DLm960.
Com 2       Serial connection from service laptop.
Eth 0       Connects to management switch port.
Eth 1       Not used in DLm960.
Eth 2       Connects to management switch port.
Eth 3       Not used in DLm960.
USB         USB connection. Plug a USB drive in here to upload or download files. You can also use the USB connectors for a USB mouse and keyboard.
Mouse       Use with KVM mouse. Alternatively, you can use a USB mouse connected to a USB port.
Keyboard    Use with KVM keyboard. Alternatively, you can use a USB keyboard connected to a USB port.
Video       Use with KVM monitor.

DLm management network

The DLm has an internal Gigabit Ethernet network for management purposes. In a DLm960, the management ports of the ACPs, VTEs, Celerra server, and Data Domain systems are connected to a pair of 1 Gb Ethernet switches to protect against a single switch failure.

Figure 5 24-port AT-9924TL switch

10 Gb data network

In a DLm960, the data from the mainframe is transferred to the DLm960 storage systems over 10 Gigabit Ethernet connections. The 10 Gb Ethernet network has a pair of 10 Gb Ethernet switches to protect against a single switch failure.

Figure 6 Fujitsu XG2000R switch

Backend storage

Deduplicating storage

DLm960 supports the Celerra server and an optional Data Domain DD880 for storing the data written to the virtual tapes. In DLm960, the DD880 should be used for data that deduplicates well, and the Celerra server should be used for all other data. Both systems export NFS filesystems, and the VTEs use these NFS filesystems to store the data.

The Data Domain system provides DLm's deduplication feature. DLm uses a highly optimized inline data deduplication technology that reduces the footprint by storing only the unique data. This also reduces power consumption and provides significant total cost savings. The data is streamed from the mainframe through the VTEs to the backend Data Domain storage system. Due to the inline implementation, only the deduplicated, unique data gets stored on the drives.

Each Data Domain system contains:

- A storage controller that executes the Data Domain operating system and supports redundant 12 Gb/s SAS connectivity to the backend drive enclosures.
- Up to 12 ES20 storage shelves (each containing fifteen 2 TiB SATA drives).

The DD880 comes with a RAID 6 configuration (12+2) and one hot spare in each ES20 drive enclosure. Choose deduplication backend storage for data that is stored long term and is highly redundant.

Figure 7 DD880 controller, rear view

See Figure 22 on page 75 for the front view of a DD880 controller.

Although the DD880 supports up to six VTEs, it has a total stream count limit of 180 streams. This can include backup write streams, backup read streams, directory replication at the source, and directory replication at the destination. Each of these stream types has a different limit, as listed in Table 2 on page 30.

Table 2 DD880 stream count limits

Stream type                             Stream count limit
Backup write streams                    180
Backup read streams                     50
Directory replication at source         90
Directory replication at destination    180

Irrespective of the type of streams, the total number of streams cannot exceed 180. The DD880 can have more streams open above these numbers, but this can impact performance. Performance returns when the stream counts drop below those limits. A small sketch of this limit check follows.
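The interaction between the per-type limits and the 180-stream aggregate cap can be checked in a few lines. The following Python sketch is illustrative only and is not part of DLm or DD OS; the limit values come from Table 2, while the function and key names are hypothetical.

    # Hypothetical helper: check a proposed workload against the DD880
    # stream count limits from Table 2. Not an EMC or Data Domain tool.

    DD880_LIMITS = {
        "backup_write": 180,
        "backup_read": 50,
        "repl_source": 90,
        "repl_destination": 180,
    }
    DD880_TOTAL_LIMIT = 180  # aggregate cap across all stream types

    def within_stream_limits(streams: dict) -> bool:
        """Return True if each stream type and the aggregate stay within limits."""
        for kind, count in streams.items():
            if count > DD880_LIMITS[kind]:
                return False
        return sum(streams.values()) <= DD880_TOTAL_LIMIT

    # Example: 100 writes + 40 reads + 50 replication-source streams is 190
    # total, which breaks the 180 aggregate cap even though each individual
    # type is within its own limit.
    print(within_stream_limits(
        {"backup_write": 100, "backup_read": 40, "repl_source": 50}))  # False

Note that, as the text above explains, exceeding these numbers degrades performance rather than failing outright, so a check like this is a planning aid, not an enforcement mechanism.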

Celerra storage

DLm960 can be configured with a maximum of two Celerra network file servers. Each Celerra file server can have two to eight storage controllers, called Data Movers. All DLm960 systems are configured with a standby Data Mover. Choose Celerra file storage for a large volume of data that does not need to be stored for long periods of time and is not redundant enough to warrant deduplication.

The Celerra network-attached file server comes with:

- Disks in groups of fifteen 1 TB or 2 TB SATA drives (each group is one disk array enclosure [DAE])
- 9.5 TB (1 TB drives) or 19.3 TB (2 TB drives) of usable capacity per DAE
- A RAID 6 configuration (12+2) and one hot spare in each DAE

Mainframe channel interfaces

A VTE contains mainframe channel interfaces. These channel interfaces can be either two Fibre Connectivity (FICON) interfaces or three Enterprise Systems Connection (ESCON) interfaces per VTE. The FICON interfaces can be either single-mode or multimode. Do not mix ESCON and FICON within a single DLm system, or single-mode and multimode FICON within a single DLm system. A DLm system configured with six VTEs provides either 12 FICON interface connections or 18 ESCON interface connections.

You must attach at least one mainframe channel to each VTE you intend to configure and use. Any VTE not attached to a mainframe channel will not be operational. Figure 2 on page 26 shows the rear view of the VTE with the channel interfaces to the right of center of the unit. DLm960 supports both ESCON and FICON.

FICON channel

Each DLm VTE FICON interface has a single LC-type fiber-optic connector. The type of cable you must use depends on the following:

- The type of connector on the mainframe (either LC or SC)
- The type of fiber-optic cable (single-mode or multimode) supported by the mainframe channel

The DLm FICON interfaces are available with either single-mode or multimode fiber-optic cable support. The micron rating for:

- Single-mode fiber-optic cable is 9/125
- Multimode fiber-optic cable is either 50/125 or 62.5/125

Status indicators

Each FICON interface has a four-character display visible at the back of the system adjacent to the interface connector. The display scrolls the status of the interface. Under normal operating conditions, the version of the interface's emulation firmware is displayed.

In DLm960, the FICON adapter has three light-emitting diode (LED) indicators, listed in Table 3 on page 32, visible from its rear. These LEDs indicate the speed of the link: 1 Gbps, 2 Gbps, or 4 Gbps. When the link is up, the LED glows steadily; it blinks if there is traffic. The numbers stamped on the faceplate correspond to the speed.

Table 3 FICON adapter LED indicators

Yellow LED (4 Gbps)    Green LED (2 Gbps)     Amber LED (1 Gbps)     Activity
Off                    Off                    Off                    Power off
On                     On                     On                     Power on
Flashing               Flashing               Flashing               Loss of synchronization
Flashing in sequence   Flashing in sequence   Flashing in sequence   Firmware error
On/blinking            Off                    Off                    4 Gbps link up/activity
Off                    On/blinking            Off                    2 Gbps link up/activity
Off                    Off                    On/blinking            1 Gbps link up/activity

Connect DLm to a FICON channel

DLm can be connected directly to the mainframe FICON channel, or it can be connected through a FICON switch. I/O generation examples for both kinds of connections are shown next. In either case, to properly define a DLm V348x, 3480, 3490, or 3590 device on a z/OS system, the following parameters are required:

- TYPE must be FC.
- UNIT can be defined as one of the following:
  - One of the virtual device types: V3480, V3481, V3482, or V3483
  - A real 3590
  - A real 3490
  - A real 3480
- The CHPID can be defined as any one of the following:
  - SHARED
  - DED
  - REC

Note: When configuring DLm devices as device type 3490, the maximum number of devices per control unit is 16.

Configuration for a direct FICON connection

Basic, dedicated non-shared (non-EMIF) mode:

CHPID PATH=(0A),DED,TYPE=FC
CNTLUNIT CUNUMBR=EA80,PATH=(0A),             +
      UNITADD=((00,32)),UNIT=3590
IODEVICE ADDRESS=(EA80,32),CUNUMBR=(EA80),   +
      STADET=Y,UNIT=3590

Note: With dedicated non-EMIF (non-shared) mode, specify LPAR=0 in the DLm virtual device configuration program regardless of the LPAR to which it will be connected. The EMC Disk Library for mainframe User Guide provides more information.

Reconfigurable non-shared (non-EMIF) mode:

CHPID PATH=(0A),REC,TYPE=FC
CNTLUNIT CUNUMBR=EA80,PATH=(0A),             +
      UNITADD=((00,32)),UNIT=3590
IODEVICE ADDRESS=(EA80,32),CUNUMBR=(EA80),   +
      STADET=Y,UNIT=3590

Note: With reconfigurable non-EMIF (non-shared) mode, specify LPAR=0 in the DLm virtual device configuration program regardless of the LPAR to which it will be connected. The EMC Disk Library for mainframe User Guide provides more information.

Shared (EMIF) mode:

CHPID PATH=(0A),SHARED,TYPE=FC
CNTLUNIT CUNUMBR=EA80,PATH=(0A),             +
      UNITADD=((00,32)),UNIT=3590
IODEVICE ADDRESS=(EA80,32),CUNUMBR=(EA80),   +
      STADET=Y,UNIT=3590

Note: With EMIF (shared) mode, specify LPAR=n in the DLm virtual device configuration program, where n is the LPAR to which the DLm device is connected. The EMC Disk Library for mainframe User Guide provides more information.

Alternate paths in shared (EMIF) mode:

CHPID PATH=(0A),SHARED,TYPE=FC
CHPID PATH=(0B),SHARED,TYPE=FC
CNTLUNIT CUNUMBR=EA80,PATH=(0A,0B),          +
      UNITADD=((00,32)),UNIT=3590
IODEVICE ADDRESS=(EA80,32),CUNUMBR=(EA80),   +
      STADET=Y,UNIT=3590

Note: With EMIF (shared) mode, specify LPAR=n in the DLm virtual device configuration program, where n is the LPAR to which the DLm device is connected. The EMC Disk Library for mainframe User Guide provides more information.

Configuration for a FICON switch in basic mode:

CHPID PATH=((22)),TYPE=FC,SWITCH=02

CNTLUNIT CUNUMBR=300,PATH=(22),LINK=(C2),    +
      UNITADD=((00,32)),UNIT=3590
IODEVICE ADDRESS=(300,32),CUNUMBR=(300),UNIT=3590

ESCON channel

Each DLm ESCON interface has a single ESCON MT-RJ connector. If the mainframe uses the older ESCON duplex connectors, the available ESCON converter cable provides a male MT-RJ connector on one end and a female ESCON duplex connector on the other end, as shown in Figure 8 on page 35. The male MT-RJ connector fits into the socket on the ESCON controller in the VTE.

Status indicators

Each ESCON interface has a four-character display visible on the back edge of the system adjacent to the interface connector. The display scrolls the status of the interface. Under normal operating conditions, the version of the interface's emulation firmware is displayed.

Figure 8 ESCON converter cable (ESCON duplex to ESCON MT-RJ)

Connect DLm to an ESCON channel

DLm can be connected directly to the mainframe ESCON channel or through an ESCON director. In either case, the following parameters are required to properly define a DLm V348x, 3480, 3490, or 3590 device on a z/OS system:

- TYPE must be CNC.
- UNIT can be one of the following:
  - A virtual control unit: V3480, V3481, V3482, or V3483
  - A real 3590
  - A real 3490
  - A real 3480
- CHPID can be:
  - SHARED
  - DED
  - REC

Note: When configuring DLm devices as device type 3490, the maximum number of devices per control unit is 16.

Configuration for a direct ESCON connection

The following output examples show I/O generation.

Basic, dedicated non-shared (non-EMIF) mode:

CHPID PATH=(0A),DED,TYPE=CNC
CNTLUNIT CUNUMBR=EA80,PATH=(0A),             +
      UNITADD=((00,32)),UNIT=3590
IODEVICE ADDRESS=(EA80,32),CUNUMBR=(EA80),   +
      STADET=Y,UNIT=3590

Note: With dedicated non-EMIF (non-shared) mode, specify LPAR=0 in the DLm virtual device configuration program regardless of the LPAR to which it will be connected. The EMC Disk Library for mainframe User Guide provides more information.

Reconfigurable non-shared (non-EMIF) mode:

CHPID PATH=(0A),REC,TYPE=CNC
CNTLUNIT CUNUMBR=EA80,PATH=(0A),             +
      UNITADD=((00,32)),UNIT=3590

IODEVICE ADDRESS=(EA80,32),CUNUMBR=(EA80),   +
      STADET=Y,UNIT=3590

Note: With reconfigurable non-EMIF (non-shared) mode, specify LPAR=0 in the DLm virtual device configuration program regardless of the LPAR to which it will be connected. The EMC Disk Library for mainframe User Guide provides more information.

Shared (EMIF) mode:

CHPID PATH=(0A),SHARED,TYPE=CNC
CNTLUNIT CUNUMBR=EA80,PATH=(0A),             +
      UNITADD=((00,32)),UNIT=3590
IODEVICE ADDRESS=(EA80,32),CUNUMBR=(EA80),   +
      STADET=Y,UNIT=3590

Note: With EMIF (shared) mode, specify LPAR=n in the DLm virtual device configuration program, where n is the LPAR to which the DLm device is connected. The EMC Disk Library for mainframe User Guide provides more information.

Alternate paths in shared (EMIF) mode:

CHPID PATH=(0A),SHARED,TYPE=CNC
CHPID PATH=(0B),SHARED,TYPE=CNC
CNTLUNIT CUNUMBR=EA80,PATH=(0A,0B),          +
      UNITADD=((00,32)),UNIT=3590
IODEVICE ADDRESS=(EA80,32),CUNUMBR=(EA80),   +
      STADET=Y,UNIT=3590

Note: With EMIF (shared) mode, specify LPAR=n in the DLm virtual device configuration program, where n is the LPAR to which the DLm device is connected. The EMC Disk Library for mainframe User Guide provides more information.

Configuration for an ESCON director in basic mode:

CHPID PATH=((22)),TYPE=CNC,SWITCH=02
CNTLUNIT CUNUMBR=300,PATH=(22),LINK=(C2),    +
      UNITADD=((00,32)),UNIT=3590
IODEVICE ADDRESS=(300,32),CUNUMBR=(300),UNIT=3590

DLm devices and capacity

Table 4 on page 38 provides details of the devices supported on DLm and the minimum and maximum supported capacity.

Table 4 Differences between the DLm models

                                             DLm960                                          DLm960 with optional DD880
Number of cabinets per system                2 to 13 (1 TB drives), 2 to 10 (2 TB drives)    2 to 14
Number of virtual tape engines (VTEs)        1 to 6                                          1 to 6
Number of access control points (ACPs)       2                                               2
Front-end 4G FICON channels (to the host) (1)  2, 4, 6, 8, 10, or 12                         2, 4, 6, 8, 10, or 12
Front-end ESCON channels (to the host) (2)   3, 6, 12, 15, or 18                             3, 6, 12, 15, or 18
Maximum active tape devices (3)              Up to 1,536                                     Up to 1,536
Model and quantity of EMC Network Servers    1 or 2 NS960                                    1 or 2 NS960 and the optional DD880 NFS server
Number of Data Domain DD880 racks            -                                               1
Data disk drive type                         1 TB or 2 TB SATA                               1 TB or 2 TB SATA
RAID protection                              RAID 6 (12+2)                                   RAID 6 (12+2)
Number of storage controllers: Celerra       Based on capacity. Minimum of two, configured as one active and one hot standby; maximum of 12, configured as 10 active and two hot standby    Based on capacity. Minimum of two, configured as one active and one hot standby; maximum of 12, configured as 10 active and two hot standby
Number of storage controllers: Data Domain DD880   NA                                        One active
Replication                                  Supported                                       Supported

Capacity                                                           DLm960                                   DLm960 with optional DD880
User tape non-deduplicated storage (internal disk array)
  Minimum-maximum DAEs                                             ... for 1 TB drives, 1-60 for 2 TB drives    NA
  Minimum-maximum terabytes (TB)                                   ... for 1 TB drives, ... for 2 TB drives     NA
User tape deduplicated storage
  Minimum-maximum DD880 ES20 enclosures (2 TB drives only)         NA                                       ...
  Minimum-maximum raw storage (TB) in DD880 ES20 enclosures        NA                                       ...
  Minimum-maximum usable storage (TB) in DD880 ES20 enclosures     NA                                       ...
  Logical storage at 10:1 total compression (TB)                   NA                                       ...

1. The number of channels depends on the number of VTEs: two FICON channels per VTE.
2. The number of channels depends on the number of VTEs: three ESCON channels per VTE.
3. The maximum number of tape devices depends on the number of VTEs. There can be up to 256 devices per VTE.
4. For DD880 systems that are part of DLm 2.3.x and are upgraded to 2.4, the maximum ES20 shelf count is 12 if the ES20 shelves contain 1 TB drives only.

Table 5 on page 39 provides details of the front-end ESCON and 4G FICON connections.

Table 5 Details of ESCON and FICON connections

Adapter type   Number of ports   Number of unique LPARs   Number of control units   Number of links   Maximum number of paths supported per VTE
FICON          2                 ...                      ...                       NA                4096
ESCON          3                 ...                      ...                       ...               ...

Tape emulation

DLm VTEs emulate IBM tape drives to the mainframe and direct the tape data to and from the backend storage arrays. Each VTE, once configured, operates independently of the other VTEs in the VTEC and can be configured with up to 256 tape drives. A DLm configured with two VTEs can emulate up to 512 virtual tape devices, while one with six VTEs can emulate up to 1,536 virtual tape devices at one time.

The virtual tape emulation software:

- Receives and interprets channel command words (CCWs) from the host
- Sends and receives the tape data records and reads and writes corresponding disk data in response to the CCWs
- Presents initial, intermediate, and final status to the host commands, and asynchronous status as needed
- Sends and receives control information (such as sense and load display data) to and from the host in response to the CCWs

Virtual tape drive states

A virtual tape drive is in one of two basic states at any given time: Not Ready or Ready.

In the Not Ready state, the virtual tape drive appears to the host to be online but in an unmounted state. As on a real tape drive, most channel commands are not accepted in this state and receive a Unit Check status with an Intervention Required sense. While in the Not Ready state, no disk file is open on the disk subsystem. The Not Ready state is the initial state of all virtual tape drives, and is entered whenever an Unload command is received from the mainframe.

In the Ready state, the virtual tape drive accepts all data movement, read, and write commands from the host exactly like the emulated tape drive. As the host reads, writes, and otherwise positions the virtual tape, the application maintains synchronization of the associated disk file to exactly match the content and positioning of the virtual tape volume. A virtual tape drive enters the Ready state when it receives a load request from the host. When the Mount message is received, the disk file associated with the volume specified in the Mount message is opened, and the virtual tape drive comes Ready to the host. The virtual tape drive remains in the Ready state, with

the associated disk file open, until an Unload command is received from the host. On receiving an Unload command, the disk file is closed and the virtual tape drive enters the Not Ready state.

Data formats

The default file format for tape data written to the DLm disks is a modified AWSTAPE format. This format keeps track of record lengths as the file is being written so that the variable-length records can be read exactly as they were originally written; a sketch of the standard AWSTAPE block header follows. Optionally, data can also be written as a plain, sequential (flat) file. In this format, the original data record lengths, labels, and tapemarks are lost, but any open-system application can read the data as a sequential dataset.
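For readers who want a concrete picture of how record lengths are tracked, the following Python sketch reads standard AWSTAPE block headers, assuming the widely documented 6-byte layout (a 2-byte current block length in little-endian order, a 2-byte previous block length, and two flag bytes, with bit 0x40 of the first flag byte marking a tape mark). DLm's modified AWSTAPE format adds bookkeeping beyond this, so treat this as a conceptual illustration rather than a DLm tool.

    # Minimal reader for standard AWSTAPE block headers. Assumes the common
    # 6-byte header layout; DLm's modified format is not reproduced here.

    import struct

    FLAG1_TAPEMARK = 0x40  # tape mark indicator in the first flag byte

    def read_aws_blocks(path):
        """Yield ('tapemark', b'') or ('data', bytes) for each block.

        Note that a single logical tape record may span several blocks;
        reassembling records from blocks is not shown here.
        """
        with open(path, "rb") as f:
            while True:
                header = f.read(6)
                if len(header) < 6:
                    return  # end of file
                curlen, prevlen, flags1, flags2 = struct.unpack("<HHBB", header)
                if flags1 & FLAG1_TAPEMARK:
                    yield ("tapemark", b"")
                else:
                    yield ("data", f.read(curlen))

Because every block carries its own length, variable-length records can be returned to the host exactly as written, which is the property the paragraph above describes.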

Support for physical tape drives

DLm also supports low-volume access to enable the mainframe to read from and write to physical tapes. Each VTE supports one physical IBM 3592 or TS1120 tape drive attached using a point-to-point connection. The Fibre Channel port provided for this connection uses a standard multimode fiber-optic cable with LC-type connectors. DLm960 supports one to six Fibre Channel attached tape drives. "Backend storage" on page 29 provides related information.

High availability features of DLm960

DLm includes failure recovery mechanisms in various parts of its architecture to ensure optimum availability.

VTEC

A VTEC with more than one VTE delivers enterprise-class availability and scalability through a modular design based on high-performance, highly available VTEs:

- VTEs have redundant power supplies, fans, and RAID-protected internal disks.
- Emulated tape drives on each VTE can mount any cartridge, and any logical partition (LPAR) can access any cartridge, delivering enterprise-class availability.
- DLm960 has two ACPs with a shared IP address to ensure high availability. If the primary ACP fails, the secondary ACP takes over as the primary and the shared IP address moves over to the secondary ACP.
- The configuration files are saved on the ACP to allow quick and easy restoration of a VTE configuration if a VTE is replaced. The files are also copied over to the secondary ACP. The redundant copies of the configuration files protect against the single-point failure of an ACP.
- VTEs provide redundant data and control paths. In DLm960, two 10 GbE switches provide a redundant data path, and two 1 GbE switches provide a redundant control path. The redundant data path provides failover to protect against link failures, network card failures, and switch failures. The 10 GbE ports on the Celerra server and Data Domain storage controllers of DLm960 are also bonded together in failover mode.

Celerra server

Storage controller failover

The Celerra server protects against hardware or software failure by providing at least one standby storage controller. A standby storage controller ensures that the VTEs have continuous access to filesystems. When a primary storage controller fails, the standby storage controller assumes the identity and functionality of the failed storage controller.

Each Celerra within the DLm960 can have up to six storage controllers, where five are active and one is a standby. The number of active storage controllers is based on the storage capacity of the system. Each storage controller is a completely autonomous file server with its own operating system image. During normal operations, the VTEs interact directly with the storage controllers. Storage controller failover protects the Celerra server against hardware or software failure.

Fail-Safe Network (FSN)

FSN is a high-availability networking feature supported by the Celerra Data Movers. An FSN appears as a single link with a single Media Access Control (MAC) address and potentially multiple IP addresses. An FSN connection may consist of a single link or multiple links. Celerra defines each set of links to be a single FSN connection. Only one link in an FSN is active at a time, although all connections making up the FSN share a single hardware (MAC) address. If the Celerra storage controller detects that the active connection has failed, the storage controller automatically switches to the standby connection in the FSN, and that connection assumes the network identity of the failed connection. The individual links in the FSN connect to different switches so that, if the switch for the active connection fails, the FSN fails over to a connection using a different switch. To use this feature, each storage controller in the Celerra server and the DD880 must have:

- Two optical 10 GbE ports to connect the storage controllers or DD880 to the switches in the VTEC (one for each 10 GbE switch)
- Two ports configured together as an FSN for failure recovery

Control Station failover

The Celerra server provides a primary and secondary Control Station that ensures uninterrupted file access to users when the primary Control Station is rebooted, upgraded, or unavailable. The Control Station software, which is used to configure and manage the Celerra server, operates independently of the file-access operations and services provided by storage controllers. The Celerra network server uses the ConnectEMC or Home utility to notify EMC Customer Support (or your service provider) of the failure. After the primary Control Station is repaired or replaced and the Control Stations are rebooted, either directly or as a result of a powerdown and restart cycle, the first Control Station to start is restored as the primary. The Celerra network server comes with RAID 6 protection to ensure high availability.

Figure 9 VTE to storage controllers: network topology

Figure 9 on page 45 shows multiple VTEs connected to the storage controllers through two Ethernet switches. This illustrates the high availability of the network storage and FSN.

Data Domain

Because the Data Domain operating system (DD OS) is designed for data protection, the goal of its architecture is data invulnerability. Its design includes:

- End-to-end verification
- Fault avoidance and containment
- Continuous fault detection and healing
- Filesystem recovery

Figure 10 VTE to DD storage controllers: network topology

End-to-end verification

When the DD OS receives a write request from the backup software, it computes a strong checksum over the data. After analyzing the data for redundancy, it stores only the new data segments and all of the checksums. After the backup is complete and all the data has been synchronized to disk, the DD OS verifies that it can read the entire file from the disk platter through the Data Domain filesystem, and that the checksums of the data read back match the checksums written. This ensures that the data on the disks is readable and correct, can be recovered from every level of the system, and that the filesystem metadata structures used to find the data are also readable and correct. A conceptual sketch of this write-then-verify flow follows.
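The write-then-verify flow described above can be summarized in a short conceptual sketch. This is not Data Domain code: the in-memory dictionary stands in for the filesystem, and SHA-256 stands in for whatever checksum DD OS actually uses.

    # Conceptual illustration of end-to-end verification: checksum at ingest,
    # store alongside the data, re-verify on every read back.

    import hashlib

    def store_with_checksum(store: dict, key: str, data: bytes) -> None:
        """Write data together with a checksum computed at ingest time."""
        store[key] = (data, hashlib.sha256(data).hexdigest())

    def read_verified(store: dict, key: str) -> bytes:
        """Read data back and fail loudly if it no longer matches its checksum."""
        data, checksum = store[key]
        if hashlib.sha256(data).hexdigest() != checksum:
            raise IOError(f"checksum mismatch reading back {key}")
        return data

    backups = {}
    store_with_checksum(backups, "VOL001", b"tape volume contents")
    assert read_verified(backups, "VOL001") == b"tape volume contents"

The key design point is that verification happens after the data has actually been synchronized to disk and read back through the full filesystem path, not just against an in-flight copy.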

Fault avoidance and containment

The biggest risk to filesystem integrity is filesystem software errors that occur when writing new data. New data can accidentally overwrite existing data, and new updates to filesystem metadata can mangle existing structures. Data Domain systems are equipped with a specialized log-structured filesystem that has four important benefits:

New data never overwrites good data. Unlike a traditional filesystem, which often overwrites blocks when data changes, Data Domain systems write only to new blocks. This isolates any incorrect overwrite (for example, a software defect issue) to only the newest backup data. Older versions remain safe.

Fewer complex data structures. The Data Domain filesystem was built to protect data in backup applications, where the workload is primarily sequential writes of new data. Because the application is simpler, fewer data structures are required to support it. As long as the system can keep track of the head of the log, new writes will not touch old data. This design simplicity greatly reduces the chances of software errors that could lead to data corruption.

NVRAM for fast, safe restart. The system includes a non-volatile RAM write buffer into which it puts all data not yet safely on disk. The filesystem leverages the security of this write buffer to implement fast and safe restart capability. The filesystem includes many internal logic and data structure integrity checks. If any problem is found by one of these checks, the filesystem restarts. The checks and restarts provide early detection and recovery from the kinds of bugs that can corrupt data. As it restarts, the Data Domain filesystem verifies the integrity of the data in the NVRAM buffer before applying it to the filesystem, and so ensures that no data is lost due to the restart.

Continuous fault detection and healing

Data Domain systems never update just one block in a stripe. Following the no-overwrite policy, all new writes go to new RAID stripes, and those new RAID stripes are written in their entirety. The verification after write ensures that the new stripe is consistent. New writes do not put existing backups at risk. As a basis of continuous fault detection and healing, the Data Domain system uses RAID 6 protection to protect against double disk faults.

On-the-fly error detection and correction. To ensure that all data returned during a restore is correct, the Data Domain filesystem stores its on-disk data structures in formatted data blocks that are self-identifying and verified by a strong checksum. On every read from disk, the system first verifies that the block read from the disk is the block expected. It then uses the checksum to verify the integrity of the data. If any issue is found, the system uses RAID 6 and its extra level of redundancy to correct the data error. Because the RAID stripes are never partially updated, their consistency is ensured, and thus the ability to heal an error when it is discovered.

Scrub to ensure data does not go bad. Data Domain systems verify the integrity of all data weekly in an ongoing background process. This scrub process finds and repairs grown defects on the disk before they can become a problem.

Filesystem recovery

The Data Domain storage array includes various recovery mechanisms to ensure optimal availability at the storage controller, network, and DD880 levels. The Data Domain DD OS Administration Guide contains more information about the various DD880 recovery features. Data Domain systems include features for reconstructing lost or corrupted filesystem metadata, as well as filesystem check tools that can quickly bring an ailing system safely back online.

Self-describing data format to ensure metadata recovery. Metadata structures, such as indexes that accelerate access, are rebuildable from the data on disk. All data is stored along with metadata that describes it. If a metadata structure is somehow corrupted, there are two levels of recovery. First, a snapshot of the filesystem metadata is taken every several hours, creating a point-in-time copy for the recovery process to use. Second, the data can be scanned on disk and the metadata structure can be rebuilt. These features enable recovery even with a worst-case corruption of the filesystem or its metadata.

Redundant 10 Gb Ethernet data path

The Data Domain DD880 communicates with the VTE over DLm's internal 10 Gb Ethernet (10 GbE) network. The 10 Gb card on the DD880 is configured in failover mode to protect against single link and switch failures.

Redundant 1 Gb Ethernet connectivity for management

The Data Domain DD880 in the DLm uses two GbE ports, Eth0 and Eth2, to connect to the management network in the DLm. These ports are configured as a failover pair to protect against single link, switch, and NIC failures.

Redundant 1 GbE ports for replication

The Data Domain DD880 includes two GbE ports that support replication. These ports can be configured as a failover pair or in aggregate mode (LACP) to protect against single link or switch failures.

Redundant backend/drive connectivity

Each Data Domain DD880 in the DLm comes with two quad-ported SAS cards. Each ES20 drive enclosure also has two dual-ported SAS cards that connect to the controller or the adjacent ES20 enclosure in the chain. The eight SAS connections from the controller to the ES20 enclosures are configured as two failover pairs, distributed across the two cards to protect against card failures. The failover pair is active-passive.

Expanded capacity on Data Domain DD880

DLm 2.4.x supports the Expanded Capacity feature provided by Data Domain. This licensable feature allows:

- A doubling of usable capacity from 71 TB to 142 TB
- A doubling of raw capacity from 96 TB to 192 TB

For new DLm 2.4.0 systems, this is a non-disruptive activity. For DLm 2.4.x systems that are upgraded from an earlier DLm 2.3.x system, this is a disruptive activity. In this case, additional hardware needs to be added to the DD880 system. These upgraded systems require the following hardware changes:

- Addition of system memory on the DD880, from 48 GB to 64 GB
- Addition of a third dual-ported SAS card

DLm 2.4.x systems upgraded from an earlier DLm 2.3.x system support both 1 TB and 2 TB increments. Contact EMC Sales for more details if you want to use this feature.

Features and benefits

DLm offers many benefits over traditional tape, including:

- Faster processing of tape mount requests (translating into shorter overall job-step processing)
- No requirement for physical tapes (reducing the cost, storage, and potential for loss of tapes and data)
- Support for data sharing across multiple VTEs (creating a level of data availability not found in previous mainframe virtual tape systems)
- Support for low-volume access to external physical tapes, allowing the mainframe to write to and read physical tapes
- Data integrity maintained by storing the tape data on internal storage arrays and using RAID 6 technology to protect the data from physical disk drive failures
- Built-in monitoring and reporting technologies, such as Simple Network Management Protocol (SNMP) and ConnectEMC, that raise alerts when attention is needed within the DLm environment
- Support for replication of tape data between DLm systems and up to two local or remote DLm systems
- No single point of failure of mainframe tape data if the DLm system has more than one VTE
- An enhancement to DLm's replication capabilities called Guaranteed Replication (GR), in DLm release 2.2 and later. This feature forces EMC Celerra Replicator to completely replicate a tape volume (VOLSER) to the remote site every time the mainframe issues two consecutive write tape marks on a tape and follows them with a tape close.
- Support for two erase policies for space reclamation (in DLm release 2.3 and later), illustrated in the sketch after this list:
  - Space: This is the default policy. When a filesystem reaches a specified percentage of space usage (the Recovery Percent general parameter), DLm begins erasing the oldest scratch tapes in that filesystem until the amount specified in the Recovery Amount parameter has been recovered.

Time-to-live: This policy specifies a period of time that scratched tapes are retained after being scratched, before being automatically erased. Once the period expires, the tapes are automatically erased regardless of current space utilization. The time-to-live erase options are Days and Hours.
Note: If the VTE has tape libraries that reside on the Data Domain DD880, the erase policy must be configured to one of the Time-to-live options.
Support for data deduplication in the DLm960 model integrated with DD880:
Inline data deduplication that reduces the storage footprint and, with it, power consumption
Significant cost savings for replication deployments, as only the unique data remaining after deduplication is replicated
Support for EMC Secure Remote Support (ESRS), which provides secure, fast, and proactive remote support for maximum information availability. Contact EMC Customer Support to configure ESRS.


CHAPTER 2
DLm Operations
This chapter explains the routine DLm operations:
Management access to DLm
Access a VTE
Power up DLm
Power down DLm
Start and stop tape devices
Support access to DLm

Management access to DLm

Gather connection data
The ACP provides the only user access to the VTEC for management, support, and diagnostics purposes. The ACP resides on an internal network and does not connect directly to the customer's LAN. You can access the ACP only by using a secure connection through the DLm Celerra Control Station. However, during initial setup, you can define an IP address that you can use to access the DLm Console directly from the customer network.
To connect to the DLm system, you need some IP addresses and passwords. You need three IP addresses for the ACPs: one for ACP1, one for ACP2, and a third highly available IP address which is assigned to the primary ACP. Use the highly available IP address to access the DLm Console. Table 6 on page 54 lists the details that you will need before you access the DLm system.

Table 6 DLm system access details (Item / Default / Actual)
DLm Console:
HA IP address
Username: dlmadmin
Password: password (first login)
Note: The system prompts you to change the password at the initial login.
ACP1:
IP address
Username: dlmadmin
Password: password

ACP2:
IP address
Username: dlmadmin
Password: password

Access the DLm Console
The DLm Console is a web-based console that is used to configure and monitor the DLm system. It is the management interface to the DLm system. The DLm Console can be accessed through the web browser on the ACP desktop. During initial setup, if you provided the EMC service personnel with an additional IP address to directly access the DLm Console, you can use that IP address to access the DLm Console from outside the DLm (using a web browser).

Access the Control Station
The Control Station is Celerra's dedicated management system. You can connect the Control Stations to a public or private network using their Eth 3 ports for remote administration. During the initial setup and configuration of DLm, IP addresses are assigned to the Control Stations. After configuration, you can access the Control Station by:
Using an SSH client (for example, PuTTY) over TCP/IP
Directly connecting a laptop to the serial port. Figure 11 on page 56 explains more about the connectivity.

Remote access to the Control Station
You can remotely access the Control Station by using any SSH client, such as PuTTY. Table 7 on page 55 provides the usernames and the default passwords required to access the DLm Celerra Control Station.

Table 7 Control Station usernames and passwords
Username: nasadmin, Password: nasadmin
Username: root, Password: nasadmin

The Control Station has a serial port (COM 1 port) that connects to an external serial modem. Through the modem connection, the Control Station calls home to EMC for remote support and diagnostics. Figure 11 on page 56 displays the rear panel of the Control Station provided with DLm version 2.0 and later.

Figure 11 Rear panel of a Gen 2 Control Station and an ACP
A. AC power receptacle
B. Mouse
C. COM 1 modem
D. Eth 1
E. Eth 2
F. Eth 3
G. COM 2 port
H. USB 0-1
I. Eth 0
J. Video
K. Keyboard

To connect remotely to the Control Station:
1. Start an SSH client terminal session like PuTTY.
2. Select the Connection type as SSH.
3. Type the IP address of the base Celerra's primary Control Station in the Hostname (or IP address) field.
4. [Optional] You can save the session by typing a name for the session in the Saved Sessions field and then clicking Save.
5. Click Open.

Direct access to the Control Station
You can directly access the Control Station by connecting a laptop to the COM 2 port using a serial connection. Once it is connected, you can use PuTTY to log in to the Control Station.
1. Using a USB-to-Serial converter and/or serial cable (DB-9 NULL modem cable), connect the rightmost serial port on the back of the primary Control Station (CS0) and the COM 1 port on your service laptop.
2. Open a serial PuTTY session.
3. Use the following values:
Serial Line: COM1
Speed: (the serial line speed configured for the Control Station)
Data bits: 8
Stop bits: 1
Parity: none
Flow Control: none
4. Press Enter and you will be prompted to log in.
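If you are connecting from a Linux or UNIX workstation rather than PuTTY, any standard OpenSSH client provides the same remote access; the address below is a placeholder for your site's primary Control Station IP:
ssh nasadmin@<control-station-ip>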

Access the ACP
The ACP provides the only user access to the VTEC for management, support, and diagnostics purposes. EMC highly recommends that the ACP remain accessible at all times. DLm version 2.0 and later offers a second ACP to ensure high availability; the DLm960 ships with two ACPs. Figure 11 on page 56 shows the rear panel of an ACP.
The ACP resides on an internal network and does not connect directly to the customer's LAN. You can access the ACP by using a secure connection through the DLm Celerra Control Station.
Do not modify any network interface on the ACP. Doing so might cause the ACP to lose its connection to the VTEs and Control Stations.

Secure access to the ACP
DLm provides two ACPs, ACP1 and ACP2, to ensure high availability. You must configure two separate PuTTY sessions to access ACP1 and ACP2. With PuTTY connected to the Control Station, use TightVNC to initiate contact with the ACP.
Note: EMC recommends that you have only one active VNC session to either ACP1 or ACP2.
To connect to the ACP through a secure connection to the DLm Control Station:
1. Start an SSH client terminal session like PuTTY.
2. Connect to the Control Station:
a. Select the Connection type as SSH.
b. Type the IP address of the base Celerra's primary Control Station in the Hostname (or IP address) field.
c. [Optional] You can save the session by typing a name for the session in the Saved Sessions field and then clicking Save.

Figure 12 PuTTY session

3. Create a tunnel to the ACP:
a. Select Connection > SSH > Tunnels in the Category pane of the PuTTY window.
b. Type 5801, the default browser port, in the Source port field.
c. Type the IP address and port number of the desired ACP in the Destination field:
For ACP1: <ACP1 IP address>:5801
For ACP2: <ACP2 IP address>:5801
d. Click Add.
e. Type 5901, the direct VNC port, in the Source port field.

f. Type the IP address and port number of the desired ACP in the Destination field:
For ACP1: <ACP1 IP address>:5901
For ACP2: <ACP2 IP address>:5901
g. Click Add.
h. Click Session in the Category (left side) of the PuTTY window.
i. Save the session by typing a name for the DLm session in the Saved Sessions field and click Save.
The ACP desktop opens.

Connect to the DLm Console

Figure 13 ACP desktop with web browser

To connect to the DLm Console through the ACP:
1. Connect to the ACP as described in Access the ACP on page 58.

2. On the ACP desktop, double-click the Web Browser icon.
3. On the browser, click the DLm Console link. The DLm Console login screen opens as shown in Figure 14 on page 62.
4. Type the username and password. For a first-time login, enter the following user and password:
User: dlmadmin
Password: password
The DLm Console opens as shown in Figure 15 on page 63.
Note: At first login, DLm prompts you to change the DLm Console password.

To connect to the DLm Console using the direct access IP address
During initial setup, you can provide an IP address to directly access the DLm Console.
Note: This procedure assumes that, during initial setup, you provided the EMC service personnel with an IP address for direct access to the DLm Console. It also assumes that you have access to and are connected to the Data Center LAN to which the ACP is connected.
1. Open a web browser.
2. Type the additional IP address you provided during setup for DLm Console access in the form https://ip_address, where ip_address is the address of the ACP on the customer LAN.
Your web browser may announce that you're attempting an untrusted connection. Select Continue to website (not recommended). Depending on the browser, you may get a different security warning from the one described. Regardless of this warning, it is safe to connect to the ACP.
The login screen opens as shown in Figure 14 on page 62.

Figure 14 DLm Console login page

3. Type the username and password. For a first-time login, enter the following user and password:
User: dlmadmin
Password: password
The DLm Console opens as shown in Figure 15 on page 63.
Note: At first login, DLm prompts you to change the DLm Console password.
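As an alternative to configuring the tunnels in the PuTTY dialogs, an OpenSSH client can create the same two port forwards in a single command; the addresses below are placeholders for the internal ACP address and the Control Station address configured at your site:
ssh -L 5801:<ACP1-IP>:5801 -L 5901:<ACP1-IP>:5901 nasadmin@<control-station-ip>
With the tunnel established, point a VNC viewer at localhost:5901 (or a web browser at localhost:5801, the browser-based VNC port) to reach the ACP desktop.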

Figure 15 DLm Console

Exit DLm Console
To exit the DLm Console, click Logout on the DLm Console menu bar.

Set date and time
The DLm system time is displayed in the status line at the bottom of the VT console. If you need to adjust the system date or time, you may do so from the Time tab on the DLm Console:
1. Access the DLm Console as described in Connect to the DLm Console on page 60.

2. Click External.
3. Click the Time tab if it is not already displayed.

Figure 16 DLm date and time

4. Use one of these two methods to set the date and time on a VTEC:
Configure the system to use a Network Time Protocol (NTP) server.
Note: EMC strongly recommends that you use an NTP server.
If the ACP is connected to the corporate network and one or more NTP servers are accessible, configure the controller to get date and time from an NTP server. Enter either the network name or IP address of up to four NTP servers. When you make this configuration active by installing it, the ACPs in the configuration attempt to query the NTP servers from 1 to 4 until they successfully get the date and time.
Note: If you use a network name to identify an NTP server, you will need to configure a Domain Name Server (DNS) as part of the network configuration.
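As a quick reachability check before installing the configuration, you can query an NTP server from any workstation on the same network that has the standard ntpdate utility available (the server name below is a placeholder):
ntpdate -q ntp.example.com
If the server responds, the command prints its offset and stratum without changing the local clock.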

Manually set a specific date and time.
To manually set the date and time, adjust the date and time in the Current date and time fields and click Set. The date and time is set on all the VTEs in the system.

User administration
DLm ships with two default user IDs:
dlmadmin
dlmuser
The default password for these two usernames is password.
The dlmadmin user has full administrator rights and can create new configurations or modify the existing configurations. This user can monitor and control the operation of the VTE. The dlmadmin user can create new users with the same rights as dlmuser; dlmadmin cannot create another user with administrative rights.
The dlmuser user can view the configuration and check the status of the VTEs but does not have the authority to modify configurations or operate the VTEs.
From the Authentication tab of the DLm Console, the dlmadmin user can add, delete, or modify usernames recognized by the system:
1. Access the DLm Console as described in Connect to the DLm Console on page 60.
2. Click External.
3. Click the Authentication tab.

Figure 17 User ID creation

4. Select the authentication type:
Native: Native on page 66 provides instructions to add, modify, or delete users of Native authentication type.
LDAP (including Active Directory): LDAP on page 67 provides instructions to add, modify, or delete users of LDAP authentication type.
5. Under Automatic logout, in Logout period (minutes), enter the number of minutes after which the user will automatically be logged out if the session is inactive. Leave this field blank to disable automatic logout.
6. Click the Apply authentication changes link to apply the changes.

Native
Native user administration stores the usernames and passwords on the VTE and is the default type.

To modify a user, modify the content of the Name, Password, or Readonly? fields.
To add a new user:
a. Click Add Next.
b. Enter the username under Name.
c. Enter the password for that user under Password.
d. Select the Readonly? option if the user should not make changes to the configuration.
To delete a user ID, click the X button corresponding to that user.
Be careful not to delete all usernames with full administrator privileges. If there are no administrator users, you will not be able to modify or operate the system.

LDAP
When you configure DLm user authentication to use an external Lightweight Directory Access Protocol (LDAP) server, the usernames and passwords are no longer maintained on the ACP. When a user attempts to log in to the DLm Console, DLm sends a message to the LDAP server. The LDAP server searches for the username and password that have been entered and informs DLm if the user is found and the password is correct. DLm then grants access to the user.
Select the LDAP authentication type when the DLm system is attached to your corporate network and you already have the appropriate directory server installed and running on the network. If you select LDAP without the required connectivity, your login fails and you must try again using the Native authentication type.

Figure 18 LDAP user authentication

For administrative access, enter details under LDAP parameters for administrative access:
LDAP server: Enter the hostname or IP address of the LDAP server.
Base DN: Enter the Distinguished Name (DN) of the entry at which the server must start the search for authentication credentials.
Filter: Criteria to use when selecting elements for authentication credentials. The format is (cn=%s), where the login name is substituted for %s.
LDAP server bind credentials (optional):
Bind DN: The DN to bind to the server with
Bind password: The password for the Bind DN
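To sanity-check these values before entering them, you can approximate the lookup DLm performs by using the standard OpenLDAP ldapsearch utility from any workstation with network access to the directory server; every value below is a site-specific placeholder:
ldapsearch -H ldap://<ldap-server> -D "<bind-DN>" -w <bind-password> -b "dc=emc,dc=com" "(cn=jsmith)"
A successful search returns the user's directory entry; an empty result or a bind error points to an incorrect Base DN, filter, or bind credential.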

For readonly access, enter details under LDAP parameters for readonly access:
LDAP server: Enter the hostname or IP address of the LDAP server.
Base DN: Enter the Distinguished Name (DN) of the entry at which the server must start the search for authentication credentials. For example, dc=emc or dc=com.
Filter: The criteria used to select the entries to be matched for authentication. For example, (cn=%s) indicates that the Common Name (cn) field should be compared against the name entered in the User field of the login screen. (The login name is substituted for %s.)
LDAP server bind credentials (optional):
Bind DN: The DN to bind to the server with
Bind password: The password for the Bind DN

Access a VTE
You can access the VTE through the DLm Console.
1. Access the DLm Console as described in Connect to the DLm Console on page 60. The System status tab of the Status menu opens by default (Figure 15 on page 63). The console displays icons for each configured VTE. From the bottom up, the VTEs in your DLm cabinet are named VTE1, VTE2, VTE3, and so on. Only the icons matching the VTEs installed in the DLm are displayed on the console.
2. In the Console column, click the icon corresponding to the VTE you want to access.

Figure 19 VT console

The VT console for that VTE opens. The title bar displays the name of the VTE. The blue bar at the bottom of the screen displays the status of the virtual tape application. The informational, warning, and error messages from the VT application scroll on the console window.

VT console
A VT console does not need to be open for the VTE to be working. You can open a specific VT console when you configure that VTE or when you want to monitor the status of tape operations on that VTE. You can have all VT consoles open simultaneously. All VTEs continue to operate normally regardless of which console is open.
The VT console is divided into three sections:
The larger, top section displays log messages as they are issued from the VT application. On startup, the VT console displays the messages in the log (up to the last 100,000 bytes).

The following navigation keys can be used to scroll through the messages:
Home: Move to the top
End: Move to the bottom
PgUp: Move up one screen
PgDn: Move down one screen
The smaller, lower section of the VT console is blue and always shows the current status of the VT application on this VTE. When the VT application is not active, the VT status is Not running. When the VT application is active, the VT status on the VT console is Running. Use the STARTVT and STOPVT commands to start and stop the VT application, respectively. Start and stop tape devices on page 84 provides information. The DLm system time is displayed in the status line at the bottom of the VT console.
Below the VT status line is a command line where you may enter and edit VT commands. The following navigation keys can be used on the command line:
Up Arrow or Ctrl+P: Previous command in history
Down Arrow or Ctrl+N: Next command in history
Left Arrow: Move 1 character to the left in the command line
Right Arrow or Ctrl+F: Move 1 character to the right in the command line
Ctrl+A: Move to the beginning of the command line
Del or Ctrl+D: Delete one character
Ctrl+E: Move to the end of the line
Backspace or Ctrl+H: Backward delete character
Ctrl+K: Erase to the end of the line
Ctrl+T: Transpose characters
Ctrl+U: Discard the line
Ctrl+W: Word rubout

To close the VT console window, click the close window button in the top right corner of the screen. Closing the console does not affect the operation of the virtual tape application in any way.

VTE reboot
To reboot a VTE:
Note: Vary all the devices on the VTE offline to the mainframe before you reboot the VTE.
1. Access the DLm Console as described in Connect to the DLm Console on page 60. The System status tab of the Status menu opens by default.
2. In the Reboot machine column, click Reboot corresponding to the VTE you want to reboot.

Power up DLm
Note: You must coordinate planned powerdown and powerup events with EMC Customer Support.
Powering up a DLm system is a multistep process. Power up the following in this order:
1. Each Celerra Network Server (including the EMC CLARiiON storage array)
2. Each DD880
3. Each ACP
4. Each VTE
Note: The Ethernet switches are powered up as soon as the cabinets are powered up.
Wait at least 10 minutes for the storage to power up before powering up the ACPs and the VTEs.

Celerra server power-up
The following steps explain the procedure for powering up the Celerra Network Server and integrated storage arrays:
1. Make sure the power switches for the standby power supplies (SPS), A and B, shown in Figure 20 on page 73, are off (0 position). The SPS power switches are at the rear of the DLm system and expansion bays.

Figure 20 SPS power switches on DLm960 (SPS power switch A and SPS power switch B)

2. Turn on (Position 1) the left and right cabinet circuit-breaker master switches at the rear of the DLm cabinets. Figure 21 on page 74 illustrates the DLm960 bay.
The DLm bays and power systems are designed to support DLm equipment only. EMC does not support any other components in these bays, and recommends that you do not install any additional equipment in the DLm bays.

Figure 21 DLm960 bay master power switches (master switches on PDPs A through D in the NAS, NAS extension, VTEC, and storage bays; PDPs C and D are optional for storage expansion over 9 DAEs; power sources A through D, some with 15 ft extension cables)

Note: If the entire VTEC bay is powered down, some of the LED panel indicators may light when power is applied. This is only an indication that the units have power available; it is not an indication that the ACPs or VTEs are started. You must press the Power button on each ACP and each VTE to actually start them when appropriate.
3. Turn on (Position 1) the switches for SPS A and SPS B as shown in Figure 20 on page 73, and wait for the storage array to power up. The storage array can take about 8 minutes to power up.

DD880 power-up
1. Make sure the power switches on the ES20 shelves are in the off position.
2. Power up the DD880 bay.
3. Power on all the ES20s before the DD880 controller.
4. ES20 power: Turn the power switch to on for each of the two power supplies for each ES20 expansion shelf.

Figure 22 ES20 expansion shelf

5. Wait approximately 3 minutes after all expansion shelves are turned on.
6. On the controller, push in the power button.

Figure 23 Controller power buttons

ACP power-up
To power up the ACP, press the Power button located in the front of the ACP. Figure 24 on page 76 shows the front view of the ACP. You may hear the fans start and then slow down to adjust the system temperature. Shortly thereafter, the system begins to boot and the hard drive activity LED blinks.

Figure 24 Front panel of the DLm960 ACP

A. USB port
B. Power button
C. System status LED
D. System power LED
E. Hard drive activity LED
F. NIC 1 LED
G. NIC 2 LED

VTE power-up
To power up the VTE, press the Power button located in the front of the VTE, as shown in Figure 25 on page 77. You may hear the fans start and then slow down to adjust the system temperature. The VTE disk LED starts blinking to indicate the VTE startup. Normal startup of a VTE takes 5 to 10 minutes. After the VTE starts its network services, you can access the VTE operational desktop from the ACP. As each VTE operates independently, you can power up the DLm VTEs one at a time or all at once.

Figure 25 Front view of the VTE

The VTE controls and indicators are as follows:
A and B. LAN 2 (Eth 0) and LAN 1 (Eth 1) activity LEDs: A blinking green light indicates network activity. A continuous green light indicates a link between the system and the network to which it is connected.
C. Power button: Turns the system power on or off.
Do not press the Power button while the VTE is online to the host. Follow the shutdown procedure in Power down DLm on page 79 before pressing the Power button.
D. Power/Sleep LED: Continuous green indicates that the system is powered on. Blinking green indicates that the system is sleeping. No light indicates that the system does not have power applied to it.
E. Disk activity LED.
F. System status LED: Continuous green indicates that the system is operating normally. Blinking green indicates that the system is operating in a degraded condition. Continuous amber indicates that the system is in a critical or nonrecoverable condition. No light indicates that POST is running, or the system is off.
G. System identification LED: A blue light glows when the ID button has been pressed. A second blue ID LED on the rear of the unit also glows when the ID button has been pressed. The ID LED allows you to identify the system you are working on in a bay with multiple systems.
H. System identification button: Toggles the front panel ID LED and the server board ID LED on and off. The server board LED is visible from the rear of the chassis and allows you to locate the server from the rear of the bay.
I. Reset button: Reboots and initializes the system.

Do not press the Reset button while the VTE is online to the host. Follow the shutdown procedure in Power down DLm on page 79 before pressing the Reset button.
J. USB 2.0 port: Allows you to attach a Universal Serial Bus (USB) component to the front of the VTE.
K. NMI button: Pressing this recessed button with a paper clip or pin issues a non-maskable interrupt and puts the system into a halt state for diagnostic purposes.
L. Video port: Allows you to attach a video monitor to the front of the VTE.

Power down DLm
Note: You must coordinate planned powerdown and powerup events with EMC Customer Support.
Powering down DLm, like powering it up, is a multi-step process. This process includes procedures to suspend processing and then remove power to accomplish a full system shutdown. Power down the following in this order:
1. Each VTE
Note: Vary off the tape devices from the host before you power down a VTE.
2. DD880 controller
3. ES20 enclosures
4. Each ACP
5. Each Celerra server
6. The CLARiiON storage array
Note: The powerdown process takes up to 30 minutes after the tape drives are varied offline.

VTE powerdown
When you use the poweroff or reboot command to shut down or reboot an ACP, only that ACP is affected. All VTEs continue to operate uninterrupted. Likewise, when you use the poweroff or reboot command to shut down or reboot any DLm VTE, only that VTE is affected. All other VTEs continue to operate uninterrupted.
Always use the poweroff command to shut down a VTE in an orderly manner. If you simply power off the VTE by pressing the Power or Reset buttons, unpredictable errors occur on the host for any active connections, possibly resulting in data loss. Before using this command, you must stop all host programs using the VTE, and vary off the tape devices from the host.
To power down a VTE:
1. Vary off all the tape drives from the mainframe.
Note: Vary the tape drives offline from every LPAR and wait for them to go offline. If a job is accessing a drive at that point, the drive does not go offline until the job releases the drive.
2. Access the VTE desktop as described in Access a VTE on page 69.
3. Right-click on the VTE desktop and select Administrator Shell.
4. Type the root password; the shell window appears.
5. To power off the VTE, type poweroff.
6. Press Enter. The system automatically enters the shutdown state.
Note: The virtual tape application automatically restarts the next time you start the system.
After executing the poweroff command, the VTE powers down. Pressing the Power button after a poweroff command will turn the VTE on again.

DD880 powerdown
You can power down the DD880 system from the ACP by executing a CLI command. To shut down the power to the Data Domain system:

1. Log in as an administrative user and type this command, where <DD880 address> is the internal address of the DD880 controller:
# ssh -l sysadmin <DD880 address> system poweroff
2. Type yes and press Enter. The command automatically performs an orderly shutdown of DD OS processes.
Do not use the chassis power switch to power off the system. Use the system poweroff command instead.
The DD880 controller will power down. This may take a few minutes. After powering down the controller, you can power down the ES20 shelves using the power switches at the back of each ES20 shelf. After powering down all the ES20 shelves in the DD880 bay, you can power off the rack if needed.

ACP powerdown
You can power down and reboot the ACP without affecting the operation of the VTE. To power down:
1. Right-click on the ACP desktop and select Administrator Shell.
2. Type the root password; the shell window appears.
3. To power off the ACP, type poweroff.
4. Press Enter. The system automatically enters the shutdown state.

Halt the Celerra server
A planned powerdown of the Celerra server and integrated storage array requires access to the Celerra Control Station. Call EMC Customer Support for assistance. Before you power down the Celerra server:
1. Vary off all tape drives and power down all VTEs to stop all I/O activity.
2. Log in to the Control Station as root, using a Telnet or SSH session. To perform a planned powerdown, you must be within close proximity to the Celerra server.
3. If you wish to verify the system's health, type:

/nas/bin/nas_checkup
4. To halt the Celerra server, type:
/nasmcd/sbin/nas_halt now
A prompt similar to this one appears:
[root@celerra156-cs0 root]# nas_halt now
******************** WARNING! ********************
You are about to HALT this Celerra including all of its Control Stations and storage controllers. DATA will be UNAVAILABLE when the system is halted. Note that this command does *not* halt the storage array.
ARE YOU SURE YOU WANT TO CONTINUE? [ yes or no ] : yes
5. Type yes and press Enter.
6. It can take as long as 20 minutes to halt the server, depending on the number of storage controllers and the amount of storage managed by the Celerra server. Your connection ends before you get a command complete message.

Verify Control Station powerdown
To verify that the Data Movers have been shut down:
1. Reboot the Control Station by pressing the power button in the front of the Control Station. To reach the power button on the Control Station, you have to remove the front bezel.
2. Wait for 5 minutes, and then log in as root at the login prompt.
3. Verify that the Data Movers are shut down using this command:
# /nasmcd/sbin/getreason
This is a sample output for a 6-Data Mover configuration:
 6 - slot_0 primary control station
   - slot_1 secondary control station powered off
   - slot_2 powered off
   - slot_3 powered off
   - slot_4 powered off
   - slot_5 powered off
   - slot_6 powered off
   - slot_7 powered off
After ensuring that the Data Movers are shut down, you can power down the CLARiiON storage array.

Power down the CLARiiON storage array
To shut down a CLARiiON storage array (SPE and boot DAE), use only the power switches on the SPS. Failure to follow this procedure can result in the loss of data and extended periods of data unavailability while the array is returned to normal functionality.
SPE chassis and OS-boot chassis DAEs are plugged into the SPS units. From the rear of the cabinet, the left power supply of each chassis (SPE and OS-boot) is plugged into the left SPS and the right power supplies are plugged into the right SPS.
To power down the storage array:
1. Vary off all tape drives, power down all VTEs, and halt the Celerra server to stop all I/O activity.
2. Wait approximately five minutes to allow the write cache to finish writing to the storage system.
3. Use the SPS power switches to power off the storage array. Turn off (0 position) the power switch on the standby power supplies (SPSs). Wait 2 minutes to allow the storage system to write its cache to disk. Ensure that the SPS power indicators are off before continuing.
Never turn off the power directly to the SPE chassis or the OS-boot chassis by using any switches on the power supplies. Never unplug any of the AC cables going to the SPE or OS-boot chassis to disconnect power.

4. Power off Control Station 0:
# /sbin/halt
Sample output:
# /sbin/halt
Broadcast message from root (ttys1) (Fri Feb 13 17:53):
The system is going down for system halt NOW!
INIT: Stopping HAL daemon: [OK]
Stopping system message bus: [OK]
Halting system...
md: stopping all md devices.
md: md0 switched to read-only mode.
Shutdown: hda
System halted.
5. Ensure that the LEDs on all blade management switches are off. When they are off, the Celerra and CLARiiON are completely powered down.
After the CLARiiON storage array has completely powered down, you can power down the cabinets.

ACP Power down
You can power down and reboot the ACP without affecting the operation of the VTE. To power down:
1. Right-click on the ACP desktop and select Administrator Shell.
2. Type the root password; the shell window appears.
3. To power off the ACP, type poweroff.
4. Press Enter. The system automatically enters the shutdown state.

Start and stop tape devices
To start or stop the virtual tape devices you must start or stop the VT application. Control of the VT application is through the VT console. The commands for starting and stopping tape emulation on a controller (node) are:
STARTVT to start the VT application and activate devices in the installed configuration file.

STOPVT to stop the VT application. Once the application stops, the channel links are disabled and all virtual drives cease to respond to the host until the application restarts. Any I/O from the host while the application is terminated will receive an I/O error (device not operational). For this reason, you should wait for all host applications using the devices to finish, and the virtual tape drives should be varied offline from the host operating system before stopping the VT application. STOPVT will not terminate the application if any virtual drives currently have volumes loaded.
STOPVT! to terminate the application while volumes are loaded. Any virtual tapes currently loaded will be immediately unloaded without any further processing.
Note: This may result in an incomplete output tape volume if the host has not yet completed writing and properly closing the tape. For this reason, the STOPVT! command should only be used in an emergency situation where the VTE must be brought down immediately. Any virtual tape volumes currently being written should be considered invalid.
When the VT application is active, the VT console shows the VT status as "Running" and informational, warning, and error messages from the VT application scroll on the console.
To start or stop the virtual tape devices:
1. Access the VT console as described in Access a VTE on page 69.
2. In the VT console, type the appropriate command. For example, to start the VT application, type:
STARTVT
The blue bar at the bottom of the VT console displays the changed status of the VT application.
3. Type exit and press Enter to close the console window.

Support access to DLm
DLm allows remote access to the ACPs for support and diagnostic purposes. DLm supports EMC Secure Remote Support (ESRS), which monitors DLm operation. ACPs are provided with modem support to communicate issues to EMC.
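For example, after all virtual drives on a VTE have been varied offline from the host, the following console commands stop tape emulation and later restart it:
STOPVT
STARTVT
If STOPVT reports that volumes are still loaded, resolve the mounts from the host side rather than resorting to STOPVT!, which is reserved for emergencies.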

ESRS
ESRS for Celerra file storage monitors the operation of DLm for error events and automatically notifies your service provider of error events. It also provides a path for your service provider to use to securely connect to your monitored DLm systems.

Figure 26 EMC Secure Remote Support

Modem support
DLm provides an external modem to allow remote access to the ACPs for support and diagnostic purposes. The supplied modem is normally attached to ACP1, the bottom ACP in a DLm960. A telephone line should be connected to the ACP modem (which in turn should be cabled to the COM1 port of the ACP). Figure 4 on page 27 shows the rear panel of the ACP.

The ACP can be configured to send messages to EMC using the ConnectEMC function when problems are detected within the Celerra Server or the VTEC. The ConnectEMC options include sending the messages via a modem through a customer-supplied analog telephone line.


CHAPTER 3
DLm Administration
This chapter explains some of the DLm administrative tasks:
Tape libraries
Configure virtual devices
Manage configuration files
Tape Erase
Manage VTE and ACP logs
Back-end tape support
DLm diagnostic reporting
AWSPRINT library utility

Tape libraries

DLm 2.x filesystem (Legacy)
The filesystems offered by DLm 2.x software and DLm 3.x software are different in many ways. New systems running 3.x software can be made backward compatible with the legacy type of filesystem offered by DLm 2.x. These systems can later be upgraded to support the DLm 3.x enhanced filesystem (EFS). It is very difficult to revert to the 2.x filesystem after you have data in your libraries.
VTEs normally share the virtual tape volumes within a tape library. Each filesystem that provides storage for the tape library filing structure within the DLm system is mounted using a subdirectory named with a two-character VOLSER prefix. For example, if you define and mount a subdirectory BA, that filesystem houses VOLSERs in the range BA0000-BA9999.
When planning for VOLSER ranges, be aware that DLm 2.x supports individual VOLSER ranges only up to 10,000 tape volumes in each filesystem. While DLm holds many times this number, each individual VOLSER range must be restricted to 10,000 tapes, even though it is technically possible to create larger ranges.
EMC service personnel define tape libraries during initial setup.
Note: DLm does not support a configuration where some VTEs use the enhanced filesystem and the other VTEs in the configuration use a DLm 2.x (legacy) style filesystem.

DLm 3.x enhanced filesystem (EFS)
In an EFS-enabled DLm system, the tape library is made up of one or more filesystems and may be sub-divided into storage classes. A virtual tape library is controlled by a top-level directory stored on the VTE's system disks. Each filesystem to be used as part of the tape library must be mounted as a subdirectory within that top-level directory. The VTE automatically uses all filesystems mounted under the top-level directory to store tape volumes. For example, in /tapelib/CEL1_P1_FS1, /tapelib is the top-level directory and CEL1_P1_FS1 is the subdirectory.
A DLm system that has EFS enabled stores any number of VOLSERs in the filesystems within the library until space within the filesystems is depleted. Additional filesystems can be added to the library at any time without disrupting the operation of

the library. When a new filesystem is available, DLm automatically begins using it when creating new tape volumes or writing to scratch volumes.
Each tape volume (VOLSER) is stored as a single file on one filesystem. Like real tape volumes, virtual volumes are written, read, and scratched. Once a VOLSER has been scratched within the library, it can be re-used during a future tape allocation process.
IMPORTANT: Too many VOLSERs in a filesystem lead to performance issues. EMC strongly recommends that you limit the number of VOLSERs to 30,000 per filesystem.
Tape libraries allow for multiple storage classes to be defined. Each filesystem defined to a virtual library is assigned to only one storage class. The storage classes are identified by numbers; for example: 0, 1, 2, etc. If you do not define a class, the filesystem you define is assigned to the default storage class 0.
At least one filesystem must be defined for each virtual tape library you intend to define. It is also mandatory to define one small (10 MB) filesystem to use as a lock directory.
Note: To provide the best overall performance in FICON environments, multiple filesystems in each library are desirable. While there is no strict limitation, a minimum of four filesystems is recommended to enable the VTE to balance output across all filesystems in the library.
EMC service personnel define tape libraries during initial setup. The steps to successfully define a tape library:
1. Create the filesystems on backend storage subsystems such as the Celerra Server and/or Data Domain using DLm tools.
2. Define the lock filesystem and tape library filesystems in the VTE configuration.
3. Define the libraries to be used by each VTE and configure devices.
4. Install the configuration on all VTEs.
5. Initialize scratch tapes (VOLSERs) into the library.
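As an illustration, on a system whose top-level directory is /tapelib, listing the directory from a VTE shell shows each mounted library filesystem as a subdirectory; the names below follow the example naming used above, and actual names vary by site:
ls /tapelib
CEL1_P1_FS1  CEL1_P1_FS2  CEL1_P2_FS1  CEL1_P2_FS2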

Note: DLm does not support a configuration where some VTEs use the enhanced filesystem and the other VTEs in the configuration use a DLm 2.x (legacy) style filesystem.

Lock filesystem for EFS
In addition to defining filesystems to the virtual tape libraries, DLm also requires a small filesystem to use as a lock directory. A lock filesystem is an NFS filesystem that is required during the allocation of scratch volumes to keep temporary lock files. A 10 MB filesystem is generally sufficient. EMC service personnel create the lock filesystem during initial system configuration and setup.
Some important considerations:
The lock filesystem must be separate from the filesystems making up your virtual tape library (libraries).
When multiple VTEs share a virtual library, the lock filesystem must be resident on the shared (NFS) storage that all VTEs can access. It must be mounted on all the VTEs.
Only one lock filesystem is required regardless of how many virtual tape libraries you may be defining to the VTEs.
Only one lock filesystem is required even if you have multiple storage subsystems, such as Celerra Server and Data Domain.
The same lock directory MUST be defined to each VTE accessing a virtual tape library.
The same lock directory can be used for more than one virtual tape library.
The lock filesystem is only used during the process of allocating a new scratch volume for output. This filesystem is not used to store tape volumes. (Therefore, the size of the lock filesystem can be as small as 10 MB.)
The lock directory is identified with a global parameter called VOLSERLOCKDIR. This parameter is defined as an additional parameter under the Global options on the Devices panel.
Note: If you do not define a lock directory filesystem, DLm assumes that you want to operate in compatibility mode using an existing virtual library that was created with an earlier version of VTE software.
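A quick way to confirm that the lock filesystem is mounted on a VTE is to check the mount table from a VTE shell; the mount point /lockfs/lock below is the example used later in this guide:
mount | grep lockfs
If the command returns no output, the lock filesystem is not mounted on that VTE.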

Backward compatibility
If you install a DLm 3.x-based VTE into an existing multiple-VTE environment with an earlier version of software, you can operate the new VTE in compatibility mode. To operate in compatibility mode using an existing virtual library, you simply do not define a lock directory filesystem in the configuration. When the VOLSERLOCKDIR parameter has not been defined on a VTE, the VTE assumes that the virtual tape library is an existing library created with DLm software older than release 3.1.
Keep in mind that if the VTE is running in backward compatibility mode, the restrictions of the previous library architecture are all in force. Specifically, each filesystem must be defined (mounted) in the library using the first 2 characters of the VOLSERs that will be stored in that filesystem. Filesystems are generally restricted to 10,000 VOLSERs per filesystem, and new filesystems added to the library must have VOLSERs initialized into them before they can be used.
If you are defining a new DLm virtual tape library, EMC strongly recommends that you define a lock directory filesystem to take full advantage of the DLm 3.x enhanced filesystem architecture.

Initialize scratch volumes
Before any of the VTEs can mount a virtual tape volume and present it to the mainframe host, you must initialize the tape volumes that you use. Execute at least one INITIALIZE command in a VT console window when you start any tape drives on DLm. Otherwise, no scratch tapes will be available for use within the DLm system.

Initialize scratch volumes on 2.x filesystems
Since VTEs normally share the virtual tape volumes within a tape library, you need to initialize volumes on only one of the VTEs to make them available to all VTEs sharing the library.
You do not need to initialize all VOLSERs associated with a subdirectory at once. You may initialize the number of VOLSERs you expect to write. For example, assume that you have mounted a 100 GB filesystem using the mount point name BA and you are aware that the average tape volume size is 300 MB. This implies that your disk is capable of holding approximately 335 tape volumes. You may choose to initialize only 500 tapes (for example, BA0000-BA0499, or even BA9000-BA9499). You do not need to initialize all 10,000 VOLSERs (BA0000-BA9999). You can define additional tape volumes later.

The command to initialize tapes is:
INITIALIZE VOL=volser DEV=devname COUNT=count [CLASS=n] [DIR=dirname]
where:
volser is the starting serial number to initialize.
devname is the device name (address) of any tape drive pointing to the tape library.
count is the number of serial numbers to initialize.
n is an optional class to which these volumes are to be added. CLASS= is a required parameter when using the Enhanced File System (EFS), and is not valid when EFS is not being used. Unless DIR= is also specified, the new tapes will be spread across all subdirectories of the specified CLASS.
dirname optionally specifies the subdirectory to create the volumes in. Specify only the subdirectory, not the full path; the base tape library directory is derived from the PATH of the DEV= parameter. For example, if the tape library is /tapelib, specifying DIR=L2 would initialize the tapes in /tapelib/L2. DIR is an optional parameter when using the Enhanced File System (EFS), and is not valid when EFS is not being used.
Assuming device E980 is a configured device pointing to your tape library, the command to initialize 500 serial numbers beginning with VOLSER BA0000 (continuing the example above) would be:
INITIALIZE VOL=BA0000 DEV=E980 COUNT=500
This results in volumes ranging from BA0000 to BA0499 being created in the BA filesystem.
To initialize 10,000 VOLSERs in each of two filesystems, B0 and B1, making both ready for use:
INITIALIZE VOL=B00000 DEVICE=E980 COUNT=10000
INITIALIZE VOL=B10000 DEVICE=E980 COUNT=10000
Note: If your tape devices are defined in a Manual Tape Library (MTL), you must also define them in the mainframe's tape configuration database (TCDB). You must run the DLMLIB utility to do this. Instructions for running DLMLIB are provided in Locate and upload the DLm utilities and JCL for z/OS.

Initialize scratch volumes in EFS
In an EFS-enabled DLm system, the tape library is made up of one or more filesystems and may be sub-divided into storage classes. Since VTEs normally share tape volumes within a tape library, you only need to initialize tape volumes into each storage class to make them available to all VTEs sharing the library. If there are no scratch volumes in a storage class, DLm will not be able to satisfy a mount request for a scratch within that storage class and the mount will remain pending.
If you have not defined storage classes (other than the default class 0), you will only need to initialize a single range of tape volumes to the library. But if you have defined multiple storage classes, then you must initialize a range of VOLSERs for each class you have defined.
The command to initialize tapes is:
INITIALIZE VOL=volser DEV=devname COUNT=count [CLASS=n] [DIR=dirname]
where:
volser is the starting serial number to initialize.
devname is the device name of any tape drive pointing to the tape library.
count is the number of serial numbers to initialize.
n is an optional class to which these volumes are to be added. CLASS= is a required parameter when using the Enhanced File System (EFS), and is not valid when EFS is not being used. Unless DIR= is also specified, the new tapes will be spread across all subdirectories of the specified CLASS.
dirname optionally specifies the subdirectory to create the volumes in. Specify only the subdirectory, not the full path; the base tape library directory is derived from the PATH of the DEV= parameter. For example, if the tape library is /tapelib, specifying DIR=L2 would initialize the tapes in /tapelib/L2.
Note: The DIR parameter is only allowed when the Enhanced Filesystem Architecture option is enabled. Otherwise, the target directory is derived from the first two characters of the VOLSER.

DIR is an optional parameter; it is not a requirement. If DIR is not specified, INITIALIZE places the volumes into the first filesystem it finds within the storage class. During processing, scratch tapes will be automatically moved as needed from one directory (filesystem) to another in the same storage class. However, if you wish to spread scratch volumes across multiple filesystems within a class, you may use the DIR parameter to direct a range of tapes to a specific filesystem.
CLASS is a required parameter. Assuming device E980 is a configured device pointing to your tape library, the command to initialize 500 serial numbers to storage class 0 beginning with VOLSER 000000 would be:
INITIALIZE VOL=000000 DEV=E980 COUNT=500 CLASS=0
This results in volumes ranging from 000000 to 000499 being created in the first filesystem in class 0.
If your library has two storage classes defined, class 1 and class 2, the following commands would initialize 1000 VOLSERs per class in the library, making both classes ready for use:
INITIALIZE VOL=000000 DEV=E980 COUNT=1000 CLASS=1
INITIALIZE VOL=001000 DEV=E980 COUNT=1000 CLASS=2
Note: Since the INITIALIZE program automatically generates VOLSERs starting with the VOLSER specified with VOL=, make sure you do not overlap VOLSER ranges when entering these commands. In the example above, VOL=000000 COUNT=1000 results in 1,000 tape volumes being created in the library with serial numbers ranging from 000000 to 000999, and VOL=001000 COUNT=1000 results in volumes ranging from 001000 to 001999. The result of these two commands is a virtual library with 2,000 volumes whose serial numbers range from 000000 to 001999.
If you are initializing tapes on a Unisys mainframe, include the LABEL parameter telling DLm the tape volume labels will be in ANSI format. For example:
INITIALIZE VOL=000000 DEV=E980 COUNT=500 LABEL=A CLASS=0
Note: If your tape devices are defined in a Manual Tape Library (MTL), you must also define them in the mainframe's tape configuration database (TCDB). You must run the DLMLIB utility to do this. Instructions for running DLMLIB are provided in Locate and upload the DLm utilities and JCL for z/OS.

Configure virtual devices

Planning considerations
You can define up to 256 virtual 3480, 3490, or 3590 tape drives on each DLm VTE. For z/OS systems, plan for one virtual device that will always be offline and can be used by DLm utilities to communicate with the VTE. Additionally, if you plan to run the DLm z/OS started task (DLMHOST), plan for one virtual device per VTE (two virtual devices if DLMHOST logging is requested) to remain offline and be used by DLMHOST to communicate with the VTE.

DLm configuration files
The DLm Console allows you to configure the VTE and save your configuration as a configuration file. The default configuration file is config. If you simply begin modifying the configuration, you will be working with this default configuration file. Optionally, you can create and use your own configuration files. DLm allows you to store as many configuration files as you want. However, only one configuration file will be the active configuration at any point in time. The Configuration page shown in Figure 31 on page 110 allows you to select the configuration file for a VTE. Manage configuration files on page 111 provides more information.
You must save your configuration to a configuration file and install the configuration for it to take effect on the VTE. The current active configuration file is displayed in the Last installation field under the Description field.

Configure global parameters
Each DLm includes a configuration utility, which is a browser-based graphical interface, to configure the virtual tape drives on that VTE.
1. Access the DLm Console using the web browser. Access the DLm Console on page 55 provides instructions.
2. Once connected, click Devices to display the Tape device configuration panel. This panel contains a tab for each available VTE.
3. Click the tab pertaining to the VTE you want to configure.

Figure 27 Global options

4. Enter values in the fields under Global options at the top of the Devices panel:
Warn at: Sets the percentage of disk space usage at which DLm will begin to warn about usage. Each time the contents of a filesystem changes, the VTE checks the space used against this value. If the used space in the filesystem is above this value, a warning will be issued. The valid range is 0 to 100. The default is 88%.
Erase policy: Sets the erase policy you want the VTEs to use when recovering space on scratched tapes: Space, Time-to-Live (TTL) in days or hours, or Both. Erase policies cannot be changed by a SET command. This is a global parameter that applies to all tape library directories of a VTE.

Note: If the VTE has tape libraries with VOLSERs that reside on DD880, the erase policy must be configured to TTL.
Tape Erase on page 113 provides more information about DLm's erase policy.
Start space recovery at: Sets the percentage of disk space usage at which DLm starts to recover disk space by deleting the data from scratch volumes. Valid values are 0 to 100. The default is 85%. If the recovery percentage is set to 100, DLm will never automatically delete scratch volume data to recover disk space.
Note: This field is visible only if the Erase policy option, Space or Both, is selected.
Recover amount (1-100): When DLm starts to recover disk space, it continues erasing data from scratch volumes until this amount of free space has been recovered or until there are no more scratch volumes that can be erased. Valid values are 1 to 100. The default is 5%. Setting the recovery amount to 100% causes DLm to erase the data from all scratch volumes on this filesystem once the Start space recovery at value has been reached. For example, with Start space recovery at 85% and Recover amount at 5%, DLm begins erasing the oldest scratch tapes when a filesystem passes 85% full and stops once 5% of the space has been freed.
Note: This field is visible only if the Erase policy option Space or Both is selected.
Erase scratched tapes after: Indicates the duration after which the data of a scratched tape will be automatically erased. You can specify this time period in days or hours. Enter a value and select hours or days.
Note: This field is visible only if the Erase policy option TTL or Both is selected.
IMPORTANT: Stagger the Time-to-Live values across VTEs to ensure that multiple VTEs do not start TTL cleanup at the same time. Time-to-Live erase policy on page 114 provides more information.

Tape import/export enabled: Indicates whether or not this VTE must provide export/import utilities. DLm allows the physical attachment of a real IBM 3592 or TS1120 tape drive. The VTE contains export/import utilities that copy (export) a tape volume (VOLSER) from the library to a physical 3592/TS1120 cartridge or copy (import) a physical 3592/TS1120 cartridge to a tape volume (VOLSER) in the tape library. These utilities are executed on the VTE and are independent of any mainframe security programs (such as RACF and ACF/2). By default, these utilities are disabled. Selecting the Tape import/export enabled option enables the VTE's export/import utilities.
Write compatibility: Indicates whether or not the VTE needs backward compatibility with previous generation VTEs. By default, DLm is configured so that it will be backward compatible with the previous version of DLm. This default ensures that a new VTE can be installed into an existing system and share tape volumes with older VTEs. Similarly, volumes written by this VTE can be read by other, older VTEs. For new installations, where there are no existing VTEs, this option can be set to Allow new features but lose backward compatibility. This allows the VTE to take full advantage of all the features of the current generation VTE.
Guaranteed replication enabled: Select this checkbox to enable Guaranteed Replication.
GR timeout (seconds): This is the number of seconds that the VTE should wait for data to be copied to the replica before assuming a replication failure. If the replication does not complete in nnnn seconds, a unit check with equipment check sense is returned to the mainframe's WTM CCW. The default value is 2700 seconds (5 minutes less than the default missing interrupt handler (MIH) value of 50 minutes).
Note: The GR Timeout value must be less than the MIH value.
Additional parameters: In addition to the pre-defined global configuration parameters described above, there are global free-form configuration parameters that can be manually entered into the configuration. To add a free-form parameter, click the click to add free-form parameters link. The currently available free-form parameter is VOLSERLOCKDIR, seen as a link on the Tape device configuration panel.

Additional parameters: In addition to the pre-defined global configuration parameters described above, there are global free-form configuration parameters that can be manually entered into the configuration. To add a free-form parameter, click the click to add free-form parameters link. The currently available free-form parameter is VOLSERLOCKDIR, seen as a link on the Tape device configuration panel.

VOLSERLOCKDIR defines the location of the lock filesystem to be used by the VTE. The mount point must have been previously defined on the Available tab of the Storage panel as well as on the VTE storage panel. Enter values in this format:

VOLSERLOCKDIR </mountpoint>

The addition of a VOLSERLOCKDIR parameter enables Enhanced File System (EFS) support. For example, if the lock directory has been defined as the filesystem located at mount point /lockfs/lock, enter:

VOLSERLOCKDIR /lockfs/lock

Add devices

Define the virtual tape devices (drives) to be emulated to the mainframe by this VTE in the Control units section.

Note: Filesystems must be created before you try to add devices. Tape libraries on page 90 provides more information about tape libraries and filesystems.

Figure 28 Control units

1. Add one or more controllers to the configuration by entering a valid control unit number and selecting a device type for the devices to be defined on the VTE:

Control unit

In the text box, type the hexadecimal control unit number that you are configuring. For FICON, valid values are 00-FF. For ESCON, valid values are 00-0F.

Device Type

Select the device type to be emulated: 3480, 3490, or 3590.

Note: All devices on the same control unit must be the same type.

2. Click the + button to complete the addition. The control unit is added to the list and an Add devices configuration section appears below the Global options section.

Figure 29 Add devices section

3. Enter values in the fields of the Add devices section to configure the corresponding parameters for each device:

Control unit

The hexadecimal control unit number that you are configuring (from the list in the Control units section under Global options).

Add address range

The starting and ending hexadecimal device unit addresses you wish to add to the VTE. You can define sets of 16 or multiples of 16 (n0-nF).

Initial device name

Each device in a DLm system must have a unique device name. EMC recommends using the same device name that is defined in the UCB name in the mainframe operating system. The name you type must end in hexadecimal digits, and the configuration program increments the name for the number of devices you are defining. For example, if you are defining 16 devices with an address range of 00-0F and you type E900 in the Device Name field, the configurator names the 16 devices E900, E901, E902, ... E90F. The name you type may range from 1 to 8 characters in length.

Tape Library

The library to which this device is connected. To appear in the list of available libraries, the storage must be defined on the Available tab of the Storage panel and be connected to the VTE on the VTE tab of the Storage panel.

Note: The /lockfs entry should never be selected as a tape library.

IDRC

This parameter turns write compression of the data that the VTE writes to the library on or off. The available values are Yes, No, and Force. The default value is Yes.

When IDRC is set to Yes, the VTE compresses the data it writes to a virtual tape disk file, but only if the mainframe instructs it to do so. Compression is controlled differently by various host operating systems, but is generally configurable in the JCL.

When IDRC is set to No, the VTE does not compress the data it writes to a virtual tape disk file, regardless of instructions from the mainframe. When IDRC is set to No, the VTE still reports to the host that it supports compression, but it does not perform any compression on the data it writes to the disk. This is because some host operating systems or tape managers do not use drives that do not support compression.

Note: When writing to VOLSERs stored on Data Domain deduplicated storage, an IDRC setting of Yes is ignored. The VTEs do not compress the data before it is written to the deduplicated storage. The deduplication storage server deduplicates and compresses the data before writing to its drives.

IDRC No affects only the writing of data. When IDRC is set to No, the VTE can still read (decompress) virtual tape volumes that it previously wrote with compression on.

IDRC Force configures the DLm virtual tape device to compress the data it writes to a virtual tape disk file regardless of the mainframe's instructions to the VTE regarding the tape file.

Note: Using Force with a deduplicating filesystem can severely limit the ability of the storage system to deduplicate and will, therefore, use more real disk storage.

Encryption key class

Enter a valid RSA key class to enable the drives to perform encryption. When this field is configured, the tape drive makes a call to the RSA Key Manager using this key class each time the drive opens a tape volume for output.

FLR active

Select this option to enable the FLR feature. It is unchecked by default. FLR on page 158 provides more information.

FLR retention

The FLR retention option defines a default retention period to be assigned to tape volumes when the mainframe has not indicated an expiration date in the HDR1 record. FLRRET on page 159 provides more information.

FLR mod

Select this option if you want to allow the tape drive to modify (extend) a tape volume that is in the WORM state. It is unchecked by default. FLRMOD on page 159 provides more information.

FLR extents

FLR extents controls how many times a tape volume in the WORM state can be extended, assuming the FLR mod option is selected. Valid values are numeric, beginning at 0. If the FLR extents parameter is omitted, the default is 100. FLREXTENTS on page 159 provides more information.

Additional parameters

The Additional parameters field allows you to code a number of optional keyword parameters which will be assigned to the devices being created:

GROUP=nn

nn is any decimal number. GROUP should be coded whenever DLm is to be used with a VSE system. All virtual tape drives attached to the same VSE system or guest should have a unique GROUP number. When DLMMOUNT or a tape manager requests a mount, only virtual drives in the same GROUP are considered for the mount. Each VSE requires a unique GROUP number. When not coded, all drives default to GROUP=0.

LABELS=S/N/A

Most operating system mount requests specify a label type, but for those that do not specify a label type, the LABELS parameter sets the default label type for the drive. The default label type is S for IBM standard (EBCDIC) labels. Optional values are N for unlabeled and A for ANSI (ASCII) labels. The label type affects only how new tapes are initialized by DLm and what type of scratch tape to select when the host does not specify a label in its mount request. The label type setting has no effect on existing tape volumes. It has no effect when the host requests a specific label type in its mount request.

SIZE=maxvolumesize

This parameter limits the maximum size of an individual tape volume. The maximum volume size can be specified in any of the following:

bytes (SIZE=nnnnnn)
kilobytes (SIZE=nnnK)

megabytes (SIZE=nnnM)
gigabytes (SIZE=nnnG)
terabytes (SIZE=nT)

When specifying kilobytes, megabytes, gigabytes, or terabytes, the value can contain a decimal point (that is, SIZE=n.nT). Size can range from 2 M to 32 T. If omitted, the maximum volume size defaults to 2 G (two gigabytes) for 3480 or 3490 tape devices and 40 G (40 gigabytes) for 3590 tape drives. The maximum allowable tape size for all device types is 32 T but is limited to the amount of available storage in the filesystem.

TRACE=n

This parameter allows you to set the trace option for this specific device:

0 No tracing
1 Trace errors only (default)
2 Trace errors and status
3 Trace errors, status, and headers
4 Trace errors, status, headers, and data
5 Perform a full packet trace (for customer support only)

VOL=(xx,yy,...)

VOL allows scratch volume allocations to be restricted to a specific range of tape volumes beginning with the prefixes defined in VOL. xx can be from 1 to 6 characters in length. For example, 00, 001, 0011, 00111, and 001111 are all valid examples of a VOLSER prefix.

VOLSER prefixes set with VOL are honored during scratch mounts ONLY. The VOL prefixes filter is applied after all other class, space, age, label-type, penalty, and synonym filters have been applied. VOL prefixes do not affect the determination of which directories are picked or the sequence in which directories are picked. VOL prefixes do not affect the sequence in which VOLSERs are evaluated. These prefixes are simply a filter that is applied to the VOLSER candidates being considered. The sequence of the prefixes does not change the evaluation process in any way. If any one prefix matches a candidate VOLSER, the VOLSER passes the test and is selected for the scratch allocation. For example, if VOL=(01,02) is specified for a range of devices, those devices allocate scratch volumes only to VOLSERs beginning with '01' or '02'. If no scratch volumes beginning with '01' or '02' are available in the storage class being allocated to them, the allocation is ignored and the device remains in a Not Ready state.
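For illustration, a hypothetical Additional parameters entry that combines several of these keywords (the values are examples only, not recommendations) might look like this:

GROUP=1 LABELS=S SIZE=2.5G TRACE=1 VOL=(00,01)

These settings would place the devices in VSE group 1, default new tapes to IBM standard labels, cap each tape volume at 2.5 gigabytes, trace errors only, and restrict scratch allocations to VOLSERs beginning with 00 or 01.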

4. When the parameters are set to your satisfaction, click Add range to create the new devices. A Current devices section appears at the bottom of your screen showing the devices that have been created.

5. You can change the configuration of individual devices in the Current devices section.

Scratch synonyms

When the mainframe wants a tape volume (VOLSER) mounted on a tape device, it sends a load display command (CCW) over the channel to the device identifying the VOLSER to be mounted. For example, in z/OS, if a user codes JCL for a tape volume that reads "VOL=SER=000001", z/OS sends DLm a load display CCW indicating that the tape volume with VOLSER '000001' needs to be mounted on the drive. After sending the load display CCW, z/OS waits for the device to become ready and then reads the VOL1 label to verify the serial number.

z/OS uses the default character strings SCRTCH and PRIVAT to indicate a request for a scratch tape to be mounted for output. By default, DLm recognizes these two strings as a request for a scratch tape and mounts an available scratch tape on the requested device to be used for output.

Most commercial tape management systems (TMS) support the concept of tape pools, allowing you to define your own scratch pools for use when mounting a new scratch tape. In support of TMS tape pools, DLm allows you to define unique scratch synonyms to the VTEs. During installation, you can configure your own sub-pools of scratch tapes to request tape mounts using meaningful names. DLm accepts up to 64 scratch synonyms.

The fields in the Scratch Synonyms section under Global options let you include whatever names your installation uses to request scratch tape mounts. DLm recognizes these synonyms, along with SCRTCH and PRIVAT, as a request for a new scratch volume when they appear in a load display CCW.

Figure 30 Scratch Synonyms

To add scratch synonyms (tape pool names):

1. Define a scratch synonym in the following format in the Scratch Synonyms section under Global options:

synonym=(prefix1,prefix2,...,CLASS=(CLASSn,CLASSn,...))

where:

synonym is the character string to be used as the synonym. Synonyms may be 1-8 characters in length and must contain only letters A-Z and numbers 0-9.

Note: Synonyms are not case sensitive and may be entered in either uppercase or lowercase.

prefixn is an optional parameter to associate a synonym with a specific set of VOLSERs. Each prefix can be from 1 to 6 characters in length. prefixn defines the prefix characters of the VOLSERs that can be assigned in response to a scratch request made with this synonym. For example, SCRTCH=(00,01) specifies that any load request received for SCRTCH must be satisfied with a VOLSER that starts with either "00" or "01". Valid VOLSERs that could be mounted by DLm would include any VOLSER in the range 000000-019999, assuming only numeric VOLSERs are in use. If there are no scratch tapes with VOLSERs beginning with "00" or "01", DLm does not mount a tape and the mount remains pending.

If a VOLSER prefix is not defined for a specific scratch synonym, any available scratch tape is used.

CLASSn defines the storage class or classes associated with this scratch synonym. For example, PRIVAT=CLASS=CLASS1 indicates that any load request received for PRIVAT must be satisfied by allocating a scratch VOLSER in storage class 1. When the enhanced file system (EFS) is in use, DLm first identifies all filesystems assigned to the storage class (or classes) for this scratch synonym and then selects a filesystem from those filesystems based on free space and frequency of use. If a class is not specified, the scratch synonym by default applies only to the default storage class of 0.

2. Click the + button to complete the addition.

Example

Consider the following definitions of scratch synonyms:

WORK
SCRTCH=(00,01)
PRIVAT=CLASS=CLASS1

In this example, any mount requested with the synonym WORK is assigned any available scratch tape in the default storage class 0. A request for SCRTCH also goes to default storage (class 0), but is only assigned a volume with a serial number beginning with 00 or 01. If no scratch tapes with these prefixes are available, the mount is not satisfied and remains pending. PRIVAT tapes go to storage assigned to storage class 1. Any available scratch tape within that class is used. If there are no available scratch tapes in CLASS 1, the mount remains pending.

The syntax is very important when coding scratch synonyms. For example, defining:

DRTAPE=(00,01),CLASS=(CLASS1,CLASS2)

defines two synonyms, DRTAPE and CLASS. The synonym DRTAPE uses volume serial numbers beginning with 00 or 01 in class 0 storage. The synonym CLASS uses only the specific VOLSERs CLASS1 and CLASS2, in class 0 storage. In contrast:

DRTAPE=((00,01),CLASS=(CLASS1,CLASS2))

establishes the scratch synonym DRTAPE using VOLSERs beginning with 00 or 01 located in either storage class 1 or storage class 2.

Note: It is not necessary to define any scratch synonyms. By default, DLm allocates any request for SCRTCH or PRIVAT to any scratch tape available on the default (class 0) storage class.

Save configuration

1. Select the Configurations menu at the top of the screen.

2. On the Configurations panel, click Save Changes to save your configuration to disk.

Figure 31 Save configuration

3. To activate the configuration file, select the VTE on which it must be installed at the bottom of the page and click Install on nodes. Activate or install a configuration on page 111 provides more information.

Delete a device range

1. Select the Devices menu at the top of the page.

2. Scroll down to the Current devices section.

3. Scroll to the device range you want to delete and click the X button next to it.

4. Select the Configurations menu at the top of the screen.

5. On the Configurations panel, click Save Changes to save your configuration to disk.

Manage configuration files on page 111 describes the procedure to install the updated configurations.

Manage configuration files

Activate or install a configuration

You must install a configuration for it to be used by a VTE. If you modify the currently installed configuration, the changes do not become active until you reinstall the configuration. To install (and activate) your configuration:

1. Select the Configuration menu at the top of the DLm Console screen.

2. Select the VTE on which it must be installed at the bottom of the page and click Install on nodes.

3. Click Install on the Configuration operations panel.

Note: In multiple-VTE configurations, all VTEs must be powered on and running when you click Install.

When you click Install, the virtual tape application (VT) restarts. If your VTE is currently online with the mainframe, EMC strongly recommends that you idle all tape drives and vary them offline before installing a new configuration. If your DLm system has multiple VTEs, the VT on every VTE detecting a change to its current configuration automatically restarts. However, if you are adding a new VTE to an existing system, you can install the configuration while the existing VTEs are active as long as you take care not to modify any of the existing VTEs' configurations.

Create a new configuration

1. Select the Configuration menu at the top of the DLm Console.

2. Enter a configuration name in the text box adjacent to the Create configuration named: button.

3. Click the Create configuration named: button.

4. Select the Devices menu at the top of the DLm Console and enter the configuration values described in:

Configure global parameters on page 97
Add devices on page 101
Scratch synonyms on page 107

5. Save the configuration as described in Save configuration on page 110.

Copy a configuration

1. Select the Configuration menu at the top of the DLm Console.

2. At the top right corner of the page, select the configuration file you wish to copy.

3. From the list box near the Copy to field, select the config file to which the configuration must be copied.

4. Click Copy to.

5. At the top right corner of the page, select the configuration file to which you just copied.

6. Click Save changes.

Modify or delete a configuration

1. Select the Configuration menu at the top of the DLm Console.

2. Select the configuration file you wish to modify or delete.

3. Do one of the following:

To modify the configuration file:
a. Select the Devices menu at the top of the DLm Console and make the required changes.
b. Return to the Configuration menu and click Save changes.

To delete the configuration file: Click Delete.

Tape Erase

DLm supports a space recovery feature that automatically erases data from scratch tapes on the filesystem based on an erase policy. The available erase policies are:

Space
Time-to-Live (TTL)
Both (default)

Note: If the VTE has tape libraries with VOLSERs that reside on the Data Domain DD880, the erase policy must be configured for the Time-to-Live option.

The erase policy is a VTE-wide setting, and it can be different on different VTEs. These erase policies affect only AWS-format scratch tapes residing on NFS filesystems, and they affect only automatic space-recovery erasing. Erase policies have no effect on erase actions performed by mainframe programs such as DLMSCR. You can configure the erase policy using the fields described in Configure global parameters on page 97.

Space erase policy

When a filesystem reaches a specified percentage of space usage, DLm begins erasing data in that filesystem until the amount of space specified in the recovery amount parameter has been recovered. The threshold value that triggers DLm to erase data from scratch tapes is specified in the Start space recovery at field. Automatic space recovery erases the oldest scratch tapes first (based on the time each tape was scratched). This method is used so that the most recently scratched tapes remain available for some time before being erased.
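As a worked example (the filesystem size here is hypothetical): on a 10 TB filesystem with Start space recovery at set to the default 85% and Recover amount set to the default 5%, DLm begins erasing scratch-tape data once usage exceeds 8.5 TB, and continues, oldest scratched tapes first, until about 0.5 TB (5% of 10 TB) has been freed or no more erasable scratch volumes remain.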

Time-to-Live erase policy

The TTL erase policy gives you better control over the length of time that the data on a scratch tape is retained while the tape is in the scratch pool. The data on a particular tape is erased when the amount of time since the tape was moved from the active pool to the scratch pool exceeds the duration specified for TTL in the Erase scratched tapes after option. Once the period expires, the tapes are automatically erased regardless of current space utilization. The default TTL value is 365 days. You can specify the time in:

Days
Hours

Note: If the VTE has tape libraries with VOLSERs that reside on the Data Domain DD880, the erase policy must be configured to one of the Time-to-Live options. Data Domain storage does not immediately return the erased storage to the free-space pool, so the Space erase policy would result in all scratch VOLSERs being erased once the space threshold is reached.

IMPORTANT
Stagger the Time-to-Live values across VTEs to ensure that multiple VTEs do not start TTL cleanup at the same time.

Staggering the Time-to-Live values across VTEs ensures that only the required number of VTEs are engaged in TTL cleanup. The VTE with the minimum Time-to-Live value starts recovering space. If that VTE cannot complete erasing the scratched tapes before the next higher Time-to-Live value is reached, the next VTE joins in and helps to complete the space recovery. For example, in a four-VTE system, you might set the Time-to-Live value of VTE4 to 48 hours, that of VTE3 to 36 hours, that of VTE2 to 24 hours, and that of VTE1 to 12 hours. In this example, VTE1 starts erasing tapes that were scratched 12 hours ago. If it cannot complete the recovery, VTE2 starts at the end of the twenty-fourth hour. Both VTEs recover space until all the tapes are cleaned up. If VTE1 and VTE2 cannot complete the space recovery by the end of the thirty-sixth hour, VTE3 joins VTE1 and VTE2 in recovering space.

Both

Under the Both erase policy, DLm starts erasing space when either of the two conditions, Space or TTL, is satisfied.

Manage VTE and ACP logs

The DLm Console allows you to view the most recent VTE logs and gather ACP and VTE logs for diagnostic purposes.

VTE logs

VTEs maintain a log of all messages issued by the virtual tape application. Log files are automatically rotated each day at midnight. Old log files are compressed to minimize the space they take and are then kept for a period of time.

To view the latest VTE logs:

1. Access the DLm Console using the web browser. Connect to the DLm Console on page 60 provides instructions. The System status tab of the Status menu opens by default.

Figure 32 System status

2. Click the icon in the Logs column corresponding to the VTE for which you need the logs.

Figure 33 VTE logs

The logs appear in a new window or a new tab. Use Previous and Next to navigate through the logs.

Support data

To gather ACP and VTE details for diagnostic purposes:

1. On the Status menu, click the Gather logs menu. The ACPs and VTEs are listed in the Machine name column.

Figure 34 Gathering ACP and VTE support data

2. Under Support data, click Gather in the row corresponding to the system for which you want to gather support data. The Last gathered column displays a link with the time stamp of the last gathered data. A pop-up window confirms the request, followed later by another pop-up indicating that the gather is complete.

3. Click the link in the Last gathered column to download the support data.

The downloaded file is a zip file named <machine-date-time-logs.zip>; for example, acp1_<date>_<time>_logs.zip. The zip file contains the following directory structure when extracted:

logdata-<date collected>
- sh.log
+ etc
  - fstab
  - hosts
  - mtab
  - system_params.json
+ app
  + snmp
    - snmptrapd.conf
+ linuxsnap
  - linuxsnap.txt
+ opt
  + webconsole
    + backup_config

      - last_good_config
      - last_install
      - lastinput.json
      - lastinput.msg
    + configs
      - <configuration file name>.json
      - <configuration file name>.msg
      - rsainit.cfg
      - rsainitclient.cfg
      - rsainitsvc.cfg
    + logs
      - apply.log
      - logall.txt
      - status.txt
+ proc
  - mdstat
  - mounts
+ var
  + log
    + apache2
      - access_log
      - error_log

Back-end tape support

DLm allows the Fibre Channel attachment of IBM 3592 or IBM-compatible tape drives. Each VTE supports one physical IBM 3592 or TS1120 tape drive attached using a point-to-point connection. A Fibre Channel port is provided at the rear of each VTE for physical tape functions. You must provide the IBM 3592 or TS1120 drive and a Fibre Channel cable to connect the drive to a port on the VTE.

Note: DLm supports only point-to-point attachment of a single 3592 or TS1120 tape drive to the VTE. Connection through a Fibre Channel switch is not supported.

After the drive is physically attached to a VTE, you have two choices:

Map a single mainframe tape drive (device address) to the physical tape drive for writing real tape cartridges from the mainframe. This capability is referred to as Direct Tape.

Use the DLm VTE-based Export and Import utilities to copy individual volumes (VOLSERs) from or to the tape.

Direct Tape

DLm is primarily a tape-on-disk controller, which emulates tape drives to the mainframe and stores tape volumes on a back-end disk subsystem. However, it also allows a tape-drive-to-tape-drive mapping of an emulated 3590 tape drive to a physical IBM tape drive attached to a DLm VTE.

Device mapping

To map a single mainframe device address through to a Fibre Channel attached IBM 3592 or TS1120 tape drive, modify the virtual device definition to point the device to the physical drive instead of a virtual tape library on disk. For the device being mapped to the physical drive, you must replace the Tape Library parameter with the following parameter:

DRIVE-nnnnnnnnnnnn

where nnnnnnnnnnnn is the 12-digit serial number of the tape drive. (Figure 29 on page 102 shows the Tape Library field in the Add devices section of the Tape device configuration page.) If your drive serial number is less than 12 characters in length, pad the number on the left with zeros; for example, a hypothetical 9-digit serial number 123456789 would be entered as DRIVE-000123456789 in the Tape Library field for the mapped drive.

The emulated tape drive must be configured to match the characteristics of the physical tape drive. The device being configured must be defined as Device Type 3590. (See Figure 28 on page 101.)

On the required VTEn tab under the Devices menu of the DLm Console, make these changes:

1. In the Control units section, specify the device type as 3590. Add devices on page 101 provides more information.

2. In the Add devices section, enter DRIVE-<12-digit drive serial number> in the Tape Library field.

a. Access the VT console as described in Access a VTE on page 69.

b. Obtain the drive serial number by typing the following on the VT console:

show drive list

If the tape drive is not listed, follow the next steps:

a. Vary the drives defined on this VTE offline to the mainframe.

b. Verify that the external tape drive is powered on.

c. Verify that the external tape drive is connected to the Fibre Channel adapter of the VTE.

d. Verify that the VTE's operating system driver can see the external tape drive. Open the VT console as described in Access a VTE on page 69 and enter the following commands:

scsiadd

This rebuilds the SCSI device table.

lsscsi

Ensure that you see the external tape device in the output.

e. Stop and start the VTD to pick up the new tape drive information. Type:

STOPVT
STARTVT

f. Obtain the drive serial number by typing the following on the VT console:

show drive list

If the tape drive is still not listed, reboot the VTE from the webconsole interface as described in VTE reboot.

3. Update the appropriate tape drive configuration. At this point, the tape application should start and verify the external tape drive. If you receive an error and the tape daemon stops, verify that the tape drive displays "online" and try again.

4. Vary the drives defined on this VTE online to the mainframe.

Segregate the devices

After mapping a device as described in Device mapping on page 119, isolate the mainframe device from other virtual devices in the mainframe configuration in order to control when a real tape is written versus a virtual tape written to disk. Specifically, if you are using MTLs, you must assign a unique library ID (MTL) to this device address. A physical cartridge is written to only when the system ACS routine determines that a real cartridge is to be written and assigns the appropriate library ID. Otherwise, when the mainframe allocates to the library IDs (MTL) representing the other drives, a virtual volume is written.

When a mainframe device is mapped to a physical tape drive in this manner, mount requests work just as they would if the drive were directly attached to a mainframe channel. Allocation of the drive results in a mount request being posted to the mainframe operator console and the tape drive's display screen. The request remains outstanding until the physical drive becomes ready. This requires an operator to mount a tape and ready the drive.

The tape cartridge written is compatible with 3592 cartridges written from any mainframe-attached 3592 tape drive unless the volume has been encrypted by DLm. DLm-created cartridges can be sent to mainframe locations that do not have DLm installed as long as those locations have real or compatible tape drives capable of reading the 3592 cartridge.

Compression

DLm supports IDRC data compression. If a mainframe tape device mapped to a physical Fibre Channel attached drive requests compression, the VTE instructs the drive to compress the data before writing it to tape. The tape drive, rather than DLm, performs the data compression in order to ensure compatibility with other IBM drives that may later attempt to read the data.

Export to and import from tapes

As an alternative to Direct Tape, where a mainframe tape drive is mapped directly to a physical IBM drive, DLm includes two utilities for exporting and importing tape volumes between the DLm disk library and a tape drive attached to a DLm VTE. These commands are executed within the tape-on-disk application running on the VTE where the drive is attached. You can have either pass-through or import/export functionality, not both.

The EXPORT and IMPORT utilities are disabled in the default DLm VTE configuration because:

These commands copy tape volumes based only on the VOLSER, irrespective of the data actually contained on the volume.

A DLm VTE does not usually have a tape drive physically attached to it.

To enable the EXPORT / IMPORT utilities:

1. Access the DLm Console using the web browser. Access the DLm Console on page 55 provides instructions.

2. Click Devices to display the Tape device configuration panel. This panel contains a tab for each configured VTE.

3. Click the tab pertaining to the VTE you want to configure. (The screen shown in Figure 27 on page 98 opens.)

4. Select the Tape import/export enabled check box. Configure global parameters on page 97 provides more information about this field.

5. Save the configuration as described in Save configuration on page 110 and install it on the VTE as described in Activate or install a configuration on page 111. Once the VT application restarts, the EXPORT and IMPORT utilities are available.

Note: DLm does not support import and export of scratch tapes.

To run these utilities:

1. Open the VT console of the VTE where the tape drive is attached. Access a VTE on page 69 provides instructions.

2. After connecting to the individual VTE, type the EXPORT and IMPORT commands in the VT console.

Note: EXPORT and IMPORT commands have no user interaction. If a command is typed incorrectly, an error message is displayed. Retype the command.

EXPORT on page 212 provides details about how to use the EXPORT command. IMPORT on page 215 provides details about how to use the IMPORT command.

DLm diagnostic reporting

The different subsystems of the DLm system generate messages as they operate. The major sources of messages in DLm are:

VTEC
ConnectEMC (reports VTEC and Celerra server issues)
Data Domain

VTEC

The VTEs continually generate informational, warning, and error messages as they operate. These messages are written to the internal system disk so that they can be retrieved as necessary during problem determination. Messages are also automatically displayed on the VT console. Additionally, DLm is capable of sending informational, warning, and error messages to any of the following:

An SNMP management console
The z/OS master console via a z/OS started task

You can configure which messages get sent to each destination using the Messages panel of the DLm Console. For sending messages to SNMP:

1. Configure the message destinations.

2. Configure which messages should be sent.

Configure messages and recipients on page 124 provides more information. For z/OS messages, you must install the z/OS started task and then configure which messages you want sent. DLMHOST on page 198 provides more information.

SNMP

The VTEC contains SNMP MIBs that monitor the system and report events. Once configured, the VTEC can send SNMP alerts to a designated SNMP manager. SNMP alerts are sent as SNMPv2c traps on port 162 using the community name 'public'.

To configure the VTEC to send SNMP alerts:

1. Access the DLm Console as described in Connect to the DLm Console on page 60.

2. Click External.

3. Select the Notify tab.

Figure 35 SNMP configuration

4. Under SNMP notifications, type the host name or IP address of one or two systems where you want SNMP management messages to be sent. If either of the SNMP manager host fields contains a valid host name or IP address, the VTE forwards messages to that host. If both fields are blank, SNMP messaging is inactive.
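On the receiving side, a minimal sketch for a net-snmp based manager host (an assumption about your environment; DLm itself only requires that the manager accept SNMPv2c traps on port 162 with the community name 'public') is to authorize that community in snmptrapd.conf:

# /etc/snmp/snmptrapd.conf on the SNMP manager host (path may vary by distribution)
# Accept and log SNMPv2c traps received with community name 'public'
authCommunity log public

Any SNMP management product can be used instead; this directive simply lets net-snmp's snmptrapd accept and log the traps that DLm sends.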

Configure messages and recipients

You can configure which messages get sent to an SNMP management console or the z/OS master console.

1. Access the DLm Console as described in Connect to the DLm Console on page 60.

2. Click Messages. Three tabs appear, representing informational, warning, and error messages:

Error message routing
Warning message routing
Informational message routing

Each tab shows a complete list of all DLm messages in that particular category.

3. Select the tab corresponding to the message type you want to configure. All messages in the Error message routing tab are preselected and cannot be deselected.

Figure 36 Alert messages

4. Select the check boxes in the following columns to send alerts to the corresponding recipient:

SNMP
Mainframe

5. Click toggle all check boxes to reverse the selection.

ConnectEMC

The Celerra ConnectEMC function can automatically notify the EMC service center or other service providers if the VTEC or Celerra system detects a serious problem. ConnectEMC sends messages using e-mail, FTP, or a Celerra modem (through a customer-supplied analog telephone line). You can configure the VTEC to generate ConnectEMC events for error-level SNMP traps. VTEC errors that generate ConnectEMC events on page 445 provides a list of traps that generate ConnectEMC events.

Data Domain DD880 alert notifications

The DD880 generates alerts when it identifies a problem with either a software component or a hardware component. Not all events generate an immediate notification.

Alert (generated immediately): All events of CRITICAL or WARNING severity result in immediate notification to the EMC Data Domain support group. For events with a CRITICAL severity level, the Data Domain DD880 can also be configured to forward the notification to the e-mail address of the system administrator.

Autosupport e-mails (generated once a day): The Data Domain DD880 generates daily e-mails to the EMC Data Domain support group. These e-mails contain information about all outstanding alerts and a summary of the general health of the DD880. You can also configure Autosupport e-mails to be sent to the e-mail address of the system administrator.

AWSPRINT library utility

The awsprint utility allows you to produce lists of the tapes in the DLm virtual tape library. You must use the command processor CP503 to obtain the awsprint output. The EMC Disk Library for mainframe Command Processors User Guide provides information about CP503.

The FIND VOLUME command performs a related function: it finds a specific volume (VOLSER) in the DLm tape library and reports the current status of that volume. FIND on page 214 provides the details of the command.

CHAPTER 4
DLm Replication

This chapter explains the replication concepts and features of a DLm system:

Overview
Replication terminology
Celerra replication
DLm Celerra replication and disaster recovery
Deduplication storage replication
Replication between DLm 3.x and DLm 2.x systems

Overview

DLm offers IP-based remote replication, which leverages your IP network infrastructure, eliminating the need for channel extension hardware. The replication is storage-based and therefore has no impact on mainframe host operations or performance. DLm replication is asynchronous, and only the changes are replicated between the remote sites. DLm supports unidirectional and bidirectional replication, which means that the source system can also be a target system and vice versa. Celerra replication supports up to four target sites per source system, which means you can replicate data to four different sites. The source and destination DLm systems do not have to be configured with the same capacity.

DLm replication is a separately licensed feature. In the DLm960, Celerra replication and deduplication storage replication are licensed separately. There are some key differences in the way Celerra replication and Data Domain replication work.

Note: A separate license is required for each active storage controller.

Celerra replication in DLm occurs at the filesystem level. A filesystem is a virtual shelf of tapes in which a continuous range of VOLSERs is defined. Celerra replication on DLm lets you maintain a remote copy of a collection of virtual tape volumes. Figure 37 on page 129 depicts DLm replication. All the filesystems under a tape library should have the same replication state. Celerra replication is based on EMC Celerra Replicator V2. Using Celerra Replicator (V2) provides more information on Celerra V2 replication. This document and the latest documentation for your specific version of the Celerra operating environment (OE) for file are available at the EMC Online Support website.

Data Domain replication occurs on directories corresponding to a tape library and not at the filesystem level. A tape library is a collection of virtual shelves of tapes. In each virtual shelf, a continuous range of VOLSERs is defined. Data Domain replication is based on the EMC Data Domain Replicator. Only data that is unique with respect to the destination is replicated from the source Data Domain system, resulting in large savings in replication bandwidth utilization.

[Figure: DLm replication topology showing a primary site and remote sites, each with a mainframe, VTEC, Celerra, and Data Domain storage, connected by IP replication links.]

Figure 37 DLm replication

Replication terminology

The following terminology is used when describing DLm replication:

Replication: The process of sharing information to ensure consistency between redundant resources.

Source object (SO): Also known as the production object (PO), the production filesystem (PFS), or the source filesystem (SFS). This is the original source collection of data to be replicated.

Destination object (DO): Also known as the destination filesystem (DFS), the target filesystem (TFS), or the secondary filesystem (SDS). This is the replicated copy of the original data.

Replication session: The relationship that enables replication between the SO and the DO, including two internal checkpoints or snapshots for both SO and DO.

Time-out-of-sync: Defines how closely you want to keep the destination object synchronized with the source object. This is specified in minutes.

Full copy: The complete copy of the source object that is sent to the destination when a replication session is first started, or when a common base is not found.

Differential copy: The changes made to the source object (since the previous replication) that are sent to the destination during replication.

Snapshot or checkpoint: A point-in-time copy of data. This view of the data takes very little space; snapshots are just pointers to where the actual files are stored. Snapshots/checkpoints are used when transporting the full copy after the first synchronization. Using SnapSure on Celerra provides detailed information on snapshots. This document is available at the EMC Online Support website.

Disaster recovery (DR): The process, policies, and procedures for restoring operations critical to the resumption of business, including regaining access to data, communications, and other business processes, after a natural or human-induced disaster.

Recovery point objective (RPO): A description of the amount of data lost, measured in time. For example, if the last available good copy of data was made 18 hours before an outage, then the RPO is 18 hours. You can define different RPO values for different VOLSER ranges or tape libraries based on information criticality.

Recovery time objective (RTO): A specified amount of time within which a business process must be restored after a disaster to avoid unacceptable consequences associated with a break in continuity. RPO and RTO form the basis on which a disaster recovery strategy is developed.

Storage controller interconnect: Also known as the Data Mover interconnect (DMIC) or the DART interconnect (DIC). The storage controller interconnect is a communication path between two Celerra storage controllers (Data Movers) that is used for all replication sessions between those two storage controllers. This connection defines all interfaces that can be used on each storage controller, as well as the bandwidth throttle schedule. This interconnection must be created in both directions.

Celerra replication

Celerra replication is based on EMC Celerra Replicator V2. Using Celerra Replicator (V2) provides more information on Celerra V2 replication. This document and the latest documentation for your specific version of Celerra operating environment (OE) code are available on the EMC Online Support website.

Prerequisites for Celerra replication are:

The required replication licenses are installed in the source and destination DLm systems.

The source and destination Celerra systems run a supported version of the Celerra OE.

You have the IP addresses that are assigned to the source and destination storage controllers (Data Movers).

The HTTPS connections between the source and destination storage controllers (port 5085) and between the source and destination Control Stations (port 443) are secure.

Sufficient storage space is available for the source and destination filesystems.

Supported configurations

DLm supports the following configurations for Celerra replication:

Local replication: Between two separate storage controllers located within the same DLm.

Remote replication: Between two separate DLm systems, typically (but not necessarily) in different geographic locations. This includes replicating from a single source to up to four separate destinations.

Bidirectional replication: DLm A replicates to DLm B, while DLm B replicates a different filesystem to DLm A.

Currently, these configurations are not supported:

Replication to more than four separate destinations

Cascading (for example, DLm A replicates to DLm B, which in turn replicates to DLm C)

Celerra replication procedure

DLm uses Celerra Replicator V2 to replicate VOLSERs stored in NFS filesystems on the source Celerra. A filesystem on the Celerra corresponds to a VOLSER range in DLm. The replication environment is initially set up for you at installation by EMC service personnel. They use the DLm tools to create the target filesystems and then connect the source to the target filesystems. The target filesystem must have the same name and size as the source filesystem. To make changes or additions to your replication environment, contact EMC Customer Support.

Celerra replication sessions

DLm allows many Celerra replication sessions to be active simultaneously. Creating a replication session involves these tasks:

1. Ensure that the SO already exists.

2. Create and mount (read-only) the DO with the same size and properties as the SO (if it does not already exist).

3. Create internal snapshots at both the source and destination ends.

Note: Using SnapSure on Celerra provides detailed information on snapshots. This document is available at the EMC Online Support website.

4. Configure and start the replication scheduler that drives the time-out-of-sync policy between the two ends.

5. Establish replication between the source and destination ends.

Data replication

The replication of source data occurs in the following way:

1. An application running under z/OS writes data to one or more virtual tapes (VOLSERs) within a filesystem (VOLSER range) set up for replication on DLm.

2. Replication creates a checkpoint, a point-in-time, block-level copy of the underlying filesystem.

3. Using intelligent scheduling algorithms, checkpoints are transferred to the remote destination asynchronously.

4. Only changed blocks are copied.

Celerra RepOutOfSyncHours feature

DLm version 2.1 and later provides a replication monitoring feature called the RepOutOfSyncHours feature. The EMC Celerra Replicator monitors the synchronization status of each active replication session. Every time a session goes out of synchronization, a timer starts tracking the duration of the out-of-sync state. If that session does not get synchronized within a specified time period, DLm generates a ConnectEMC alert for an out-of-sync callhome condition. If the session returns to the synchronized state before the specified time period expires, the timer is reset. The default time period before the generation of an out-of-sync callhome alert is eight hours. If you want to change the default time period, contact EMC Customer Service. Using Celerra Replicator (V2) provides more information about the RepOutOfSyncHours feature. This document and the latest documentation for your specific level of Celerra OE code are available at the EMC Online Support website.

DLm Celerra replication and disaster recovery

This section explains the role of DLm replication in a disaster recovery (DR) strategy. Replication terminology on page 130 explains terminology relevant to DLm replication in a disaster recovery strategy. Replication is not a complete disaster recovery strategy, although it provides an essential enabling technology for accomplishing DR. A DR workflow must take into consideration your environment, potential scenarios, and the desired recovery objectives.

The disaster recovery procedure in DLm involves the following steps:

1. Mount the read-only copies of all the filesystems at the target site on the VTEs.

2. Identify the tapes that have been lost due to the disaster event.

3. Perform a failover of the filesystems in the Celerra.

4. Unmount and remount the filesystems as read/write.

5. When the source system becomes available, copy the changes made at the target back to the source system.

6. After all the changes have been copied to the source, change the configuration back to the original configuration.

Replication reduces both RPO and RTO. Each filesystem (VOLSER range) maintains a unique and independent value for:

Time-out-of-sync: This controls how often the destination site is refreshed. Depending upon your load and bandwidth, this can be nearly synchronous. This value is equivalent to the RPO described in Replication terminology on page 130.

Quality of service (QoS): This controls bandwidth throttling by specifying limits on specific days and hours.

Time-out-of-sync

DLm replication uses an adaptive scheduling algorithm to determine when to refresh replicated storage. RPO is typically set to less than 10 minutes. The replication scheduler uses best effort to maintain the specified RPO for each range of VOLSERs, and automatically works to catch up after any RPO violation. Advanced capacity planning is required to make sure that RPO violations do not occur. However, events (SNMP traps or e-mail) can be configured in case RPO violations do occur.

Quality of service

Interconnect QoS defines up to six bandwidth schedules. These are defined in terms of days, hours, and bandwidth.

Identifying lost tapes

To identify tapes that have been lost due to the disaster event:

1. Use the awsprint utility to identify the list of scratch tapes in the filesystems that have been disrupted. Compare the output of the utility with the list of scratch tapes for this VOLSER range according to the Tape Management Catalog. Some tapes will appear in the awsprint output but not in the Tape Management Catalog because they were no longer in scratch state when the disaster event occurred. These tapes might not have completed replicating to the target Celerra. AWSPRINT library utility on page 126 provides information about the utility.

2. Identify the last snapshot that was transferred successfully to the target using the command processor CP504. The output contains the last successful sync time for a particular filesystem.

3. Execute GENSTATS with the following options:

a. STILLINUSE

b. PATHNAME=<name of tape library>

The GENSTATS report provides a list of VOLSERs that were being transferred to the destination at the time of the disaster event.

Note: The DATESTART parameter may be used to indicate the start of the search.

An example of the parameter usage in the JCL to generate such a report:

STILLINUSE
PATHNAME=tapelib/BB

The sample output:

STILLINUSE PATHNAME=tapelib/BB
VOLSERS STILL MOUNTED:
NODENAME DEVICE  VOLSER LAST MOUNTED        PATH
VTE1     VTE1-01 BB...  2010/04/29 23:35:14 tapelib/BB...
VTE1     VTE1-00 BB...  2010/04/29 23:35:14 tapelib/BB...
VTE1     VTE1-02 BB...  2010/04/29 23:35:14 tapelib/BB...

This list indicates the VOLSERs that have been lost due to the disaster event; the corresponding jobs will need to be rerun. The EMC Disk Library for mainframe Command Processors User Guide contains more information about GENSTATS and command processor CP504.

DR testing from a copy of production data

DR testing is performed without interrupting data replication between the DR and production sites by using a copy of the production data. Disk arrays allow the creation of both read-write snaps and instant read-only copies:

Read-write snaps:
Confirm operation at the DR site
Require twice the storage capacity of the SO

Read-only copies:
Confirm that the tapes can be mounted and all required data can be accessed
Require minimal incremental storage capacity

Tape catalog considerations

Tape catalog management is no different for DLm than it is for offsite storage; that is, catalogs can be written to an emulated tape and replicated to allow data to be recovered. However, in environments that replicate the catalogs synchronously with a DASD replication solution, tape catalog management includes some special considerations.

Deduplication storage replication

Replication on deduplication storage is executed by the Data Domain Replicator software available with the DD880. The replication environment is initially set up for you at installation by EMC service personnel. To make changes or additions to your replication environment, contact EMC Customer Support.

Note: Deduplication storage replication applies only to DLm960 systems equipped with the DD880 storage system.

The Data Domain Replicator software includes different replication policies that use different logical levels of the system for different effects. In a DLm environment, the DD880 is configured to use only directory replication, which offers maximum flexibility in replication implementation. With directory replication, a directory (sub-directory, and all files and directories below it) on a source system is replicated to a destination directory on a different system. Directory replication transfers deduplicated changes of any file or subdirectory within a Data Domain filesystem directory that has been configured as a replication source to a directory configured as a replication target on a different system. In DLm, the directory replication context is established at the directory that corresponds to a virtual tape library. Hence, replication cannot be enabled or disabled for individual VOLSER ranges.

Data Domain replication uses a proprietary protocol to transfer only the data that is unique at the destination. Replication transfer for a file is triggered by the file closing. In cases where closes are infrequent, DD Replicator forces data transfer periodically. Once the complete file has been established on the replica, it is made immediately visible to the replica namespace and may be restored or copied at once. The replica at the destination is set to read-only. All transfers between the source and the destination use the Diffie-Hellman key exchange. Data Domain Replicator uses its own large checksum to verify the accuracy of all sent data, in addition to the verification that TCP provides.

Note: The two replication ports on the DD880 are configured in Failover mode to protect against link failures. Failover is the only configuration that DLm supports for the DD880 replication ports. No other configuration is supported for these ports.

Prerequisites for DD replication are:

Data Domain Replicator licenses are installed in the source and destination DLm systems.

The software version on the destination VTE must be the same as or higher than the software version on the source VTE.

You have the IP addresses that are assigned to the source and destination DD880 systems.

Cat5 Ethernet cables are available for each DD880 system, and all required WAN switches/ports are configured end-to-end.

Sufficient storage space is available in the source and destination filesystems. At initial replication setup, EMC recommends that you plan disk capacity based on a deduplication ratio of zero.

Supported configurations

The following configurations are supported:

Unidirectional replication from a single source to a single destination

Bidirectional replication between a single source and destination pair

Note: Data Domain replication is supported only when both the source and target systems are DLm DD systems. Replication from a DD880 to a Celerra is not supported.

Replication session setup

The requirements for the successful setup of a Data Domain directory replication are:

The destination system must be large enough to store all the data replicated from the source.

The network link bandwidth must be large enough to replicate data to the destination.

The fully qualified domain names (FQDNs) for the source and the destination DD880 systems must be registered in the DNS servers. If the hostname of the DD880 is DD-1, the FQDN, for example, may be "DD-1.customer.com".

The replication context directory is defined after the directories are created at both the source and the destination.

EMC recommends that you set up replication before the system restores backups to the source directory.

Erase all files from the destination directory if it is not empty before the initialization of a directory context.

Replication initialization must be executed from the source.

Throttling

As a basic form of quality of service (QoS), throttling defines times of day during which data may or may not be sent, along with limits to the amount of bandwidth that can be used. By default, no throttling is set.

Note: Contact EMC Service if throttling needs to be configured.

Recovery point

In a Data Domain system, deduplication is fast and inline, and replication can be simultaneous with backup, so it can finish shortly after backup. The restore image is available immediately from the replica. The recovery point is from the current snapshot before the delay represented by the backup window.

Recovery time

The replica contains only deduplicated data. The recovery time is the same as the restore rate from the deduplication pool in the replica. This should be measured carefully with a large dataset to ensure sustained performance characteristics. The Data Domain Replicator uses the directory replication feature to support replication at the tape library level.

Disaster recovery in Data Domain systems

Disaster recovery for data stored on a Data Domain system is performed on the entire tape library. The DD880 system reports a parameter called the "Sync'd as of time" for each tape library being replicated. This Sync'd as of time indicates the timestamp of the most recently replicated data for a replication-enabled tape library. All data that was written to VOLSERs in the source tape library before the Sync'd as of time has been replicated; data received after the Sync'd as of time is in the process of being replicated.

For example, if the Sync'd as of time for the replication context /backup/tapelibZZZ is reported as 23:35:00 on 04/29/2010, it indicates that all the data written to the tape library tapelibZZZ at the source as of 23:35:00 on 04/29/2010 has been replicated. Data written after this time, for example at 23:36:00 on 04/29/2010, is in the process of being replicated.

In the case of a disaster, the VOLSERs in the tape library that were accessed after the Sync'd as of time reported for that tape library are lost and cannot be recovered. You can use the GENSTATS utility with the SYNCTIME, DATESTART, DATEEND, and PATHNAME parameters to identify the data that has not been replicated. The EMC Disk Library for mainframe Command Processors User Guide contains more information about GENSTATS.

To identify the unreplicated data stored on a Data Domain system:

1. Execute the command processor CP603 with the status option for each replication-enabled tape library that stores its data on a DD880.
2. Note the Sync'd as of time for each replication-enabled tape library on the DD880 system.
3. Execute the command processor 998 to gather statistics.
4. Execute GENSTATS with the following options:
   a. STILLINUSE
   b. SYNCTIME=hr:mm:sec (the Sync'd as of time)
   c. DATESTART=yr:mm:dd (the date to start the search)
   d. DATEEND=yr:mm:dd (the date of the Sync'd as of time for this context)
   e. PATHNAME="name of tape library" (for example, tapelibZZZ)

Note: If you run GENSTATS with the PATHNAME option, the report lists the VOLSERs in the tape library corresponding to the specified pathname whose updates have not been replicated.

DATESTART and DATEEND define the DLm production time period to report on in the GENSTATS reports. If you do not specify a time period, you may see extraneous or irrelevant tape mounts in the STILLINUSE report. The EMC Disk Library for mainframe Command Processors User Guide contains more information about GENSTATS, command processor 998, and CP603.

This GENSTATS report provides a list of VOLSERs that were accessed after the Sync'd as of time and might not have completed replicating their data to the target. This is an example of how the parameters are used in the JCL:

STILLINUSE DATEEND=10/04/29 SYNCTIME=23:36:00 PATHNAME=tapelibZZZ/

This is the report generated:

VOLSERS MOUNTED AFTER SYNCTIME (10/04/29 23:36:00)
2010/04/29 23:46:36 S
2010/04/29 23:46:36 S
2010/04/29 23:46:36 S
2010/04/29 23:57:59 S
2010/04/29 23:57:59 S
2010/04/29 23:58:00 S
2010/04/30 00:09:25 S
2010/04/30 00:09:25 S
2010/04/30 00:09:25 S
2010/04/30 00:20:49 S
2010/04/30 00:20:49 S
2010/04/30 00:20:50 S

VOLSERS STILL MOUNTED:
NODENAME  DEVICE   VOLSER  LAST MOUNTED         PATH
VTE1      VTE1-01  S       2010/04/29 23:35:14  tapelibZZZ/S1
VTE1      VTE1-00  S       2010/04/29 23:35:14  tapelibZZZ/S2
VTE1      VTE1-02  S       2010/04/29 23:35:14  tapelibZZZ/S3

The report provides two lists of VOLSERs:

- VOLSERs that were mounted after the Sync'd as of time (23:36:00 on 04/29/10 in this example)
- VOLSERs that were still mounted at the Sync'd as of time
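As a rough sketch, the GENSTATS step of such a job might look like the following. The GENIN DD name is documented later in this guide, but the statistics-tape VOLSER and dataset names, the VTAPE esoteric, and the use of SYSIN for the control statement are illustrative assumptions; the sample members described in "Locate and upload the DLm utilities and JCL for z/OS" and the EMC Disk Library for mainframe Command Processors User Guide contain the authoritative jobs.

//* Format the STILLINUSE report from the non-labeled statistics tape
//* previously written by command processor 998 (VOLSER CP998A is an
//* assumed value).
//REPORT   EXEC PGM=GENSTATS
//STEPLIB  DD DSN=DLMZOS.PGMS,DISP=SHR
//GENIN    DD DSN=GENSTATS.INPUT,DISP=OLD,
//            UNIT=VTAPE,VOL=SER=CP998A,LABEL=(1,NL)
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
STILLINUSE DATEEND=10/04/29 SYNCTIME=23:36:00 PATHNAME=tapelibZZZ/
/*

Adjust the dates, SYNCTIME, and PATHNAME to the values you recorded from CP603.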

Directory replication flow

This is how DD880 directory replication works:

- The source Data Domain system continuously sends segment references (metadata) to the destination Data Domain system.
- The destination Data Domain replica filters them by looking up its index to check which segments it does not already have. This can impact replication performance when the restore/backup load is high.
- The source periodically asks the replica which segments need to be sent.
- The destination responds with a list of segment references that it does not have.
- The source reads the requested segments from its filesystem and sends them.

Replication code picks up the logged close records from a queue and begins replication. The maximum amount of time between a write and the start of replication is one hour. Replication logs the close of a modified file based on the following considerations:

- NFS closes the file 10 minutes after the last access.
- Every hour (by default), all files are closed regardless of how recently they were written.
- If many files are being accessed or written, files may be closed sooner.

Replication between DLm 3.x and DLm 2.x systems

DLm 3.1 and later support replication between DLm 3.x and DLm 2.x systems. This section explains the prerequisites and considerations for enabling such replication.

Prerequisites

- The Gen2 DLm should be a DLm 2.5 system.
- The Celerra in a DLm 2.x system should support Replicator V2 for replication.
- The target filesystem on the backend Celerra must be of the same size as or greater than the source filesystem. The DLm 3.x system creates filesystems in TiB increments; you must consider the size of the filesystem created on the DLm 2.x system if it is to be used for replication with 3.x systems.

- The DLm 3.x system cannot use the Enhanced File System when involved in replication with DLm 2.x systems; therefore, Storage Classes cannot be defined, and all filesystems are part of the default Storage Class '0'.
- DLm 2.x systems support only static key encryption. Use of RSA Key Manager for key generation is not supported in a DLm 2.x environment. Contact your EMC representative for details about support for replication of encrypted VOLSERs between a DLm 2.x system and a 3.x system.

Replication considerations

IMPORTANT: If you want a DLm 3.x system to replicate to a 2.x system, you must decide this while initially configuring the DLm 3.x system. Once configured, you cannot change the settings to enable or disable replication between 2.x and 3.x systems. These configuration settings must be made by EMC service personnel only.

To enable replication between a DLm 3.x system and DLm 2.x systems, the lock filesystem must be unmounted from all the VTEs. A standard procedure is normally followed to mount the lock filesystem on all DLm systems; this procedure must be skipped, and if it has already been executed, it must be rolled back.

Note: If the DLm system has been put into use with the lock filesystem, it cannot be rolled back.

1. In the Storage > Available tab in the DLm Console, the entry for /lockfs/lock must be removed from the list of mount points defined. This list defines the filesystems that are mounted on each VTE.
2. The DLm install procedure requires the Additional parameters dialog to be populated with VOLSERLOCKDIR /lockfs/lock. This setting for VOLSERLOCKDIR must be removed.
3. Both these changes must be saved.
4. The Data Domain DD880 in a DLm 2.3.x system is shipped with DD OS version 4.7.x. These DLm systems need to be upgraded to DLm 2.5, which supports DD OS version 4.9. The DD880 system in a DLm960 supports DD OS version 5.1.x. Replication between DD OS 4.7.x and DD OS version 5.1.x is not supported.


CHAPTER 5
Guaranteed Replication

This chapter provides information about the DLm Guaranteed Replication (GR) feature, an enhancement to DLm's replication capabilities. The major topics include:

- Overview of GR
- GR configuration
- Manage GR

Overview of GR

The disaster recovery capabilities of DLm include data replication using the Celerra Replicator V2. Chapter 4, "DLm Replication," provides more information on the regular DLm replication feature. Celerra Replicator V2 replicates data periodically and asynchronously. It is configured to periodically create a snapshot of the local DLm storage/filesystems and then asynchronously transfer the data captured by the snapshot to the remote DLm. Data stored after the last snapshot is not replicated until the replicator takes the next snapshot. Therefore, this data may be lost if the connection to the local DLm is suddenly lost.

For most situations and applications, this type of periodic, asynchronous replication is adequate. But for some critical applications, where the data is expected on the remote DLm as soon as it is written to the local DLm, asynchronous replication is not adequate.

DLm version 2.2 and later offer an enhancement to DLm's replication capabilities called Guaranteed Replication (GR). This feature forces the Celerra Replicator to completely replicate a tape volume (VOLSER) to the remote site every time the mainframe issues two consecutive write tape marks on a tape (that is, performs a tape close). GR causes the VTE to withhold acknowledgement of the tape close from the mainframe until the VTE confirms that the local (or source) Storage Controller within the DLm has completed replicating the tape volume to the remote Celerra.

The GR feature is essential for customers and situations where periodic, asynchronous replication is not adequate. For example, if the DLm at the primary site fails and processing needs to relocate to the remote site, the data that was being replicated at the time of failure is either completely or partially lost. GR helps to avoid this potential loss of data by ensuring that the tape volume is replicated before it closes.

Note: Since devices enabled for data deduplication do not support GR, select No for GR during device configuration for such devices.

When a tape volume on a replication-enabled filesystem is written to a DLm tape device configured for GR, it is assumed that when the VOLSER is successfully closed, it has been fully replicated to the DR site. Assuming the system log indicates a VOLSER was successfully closed, the VOLSER at the DR site is a completely replicated copy.

Table 8 Behavior of GR and non-GR devices

Non-GR filesystem:
- Non-GR device: Read (from a named-VOLSER mount): yes. Write (to a named-VOLSER mount): yes. Mount scratch: yes.
- GR device: Read (from a named-VOLSER mount): yes. Write (to a named-VOLSER mount): yes, with a DLm545W warning message. Mount scratch: no; only GR filesystems are searched.

GR filesystem:
- Non-GR device: Read (from a named-VOLSER mount): yes. Write (to a named-VOLSER mount): yes. Mount scratch: yes.
- GR device: Read (from a named-VOLSER mount): yes. Write (to a named-VOLSER mount): yes. Mount scratch: yes; only GR filesystems are searched.

Tape requirements for GR

Before configuring GR, note that not all tapes support GR. Only tapes that use one of the following tape labeling standards support GR:

- IBM Standard Labeled tapes (SL)
- IBM Non-Labeled tapes (NL)
- ANSI Labeled tapes (AL)

These three standards are implemented in most IBM operating systems as part of the OS data management components (for example, BSAM and QSAM in z/OS). These standards require that tapes be closed with two consecutive tape marks. These tape marks trigger GR to start a replication cycle.

GR configuration

You can implement the GR feature on a single tape drive or on multiple tape drives. For example, a VTE within a DLm that emulates 32 tape drives might have only 8 or 16 drives configured for GR, while the remaining tape drives are configured without GR.

There are multiple steps in the GR configuration:

1. Install the replication and GR licenses. GR is a licensed feature. Contact EMC Professional Services to install the GR license.
2. Configure replication. GR works only for filesystems that have replication set up. Contact EMC Professional Services to configure replication for the filesystems on which you want GR.

Note: EMC does not recommend enabling replication on only a subset of the filesystems in a tape library intended for GR. Replication should be enabled on all the filesystems in the tape library, even if only one (or a few) of them is intended to be used for GR.

3. Configure the GR timeout value. "Configuring GR Timeout value" on page 149 describes this task.
4. Configure the devices for GR. "Configuring devices for GR" on page 150 describes this task.

5. Save and install the configuration changes. This restarts the VTE application.

Note: Every time you make a change to the device configuration or configure replication on any tape library filesystem, you must save and install the configuration for the change to take effect. This restarts the VTE application. Restarting the VTE application results in a temporary outage of any device emulated by the application, so you must coordinate the restart with mainframe operations and vary all devices being emulated by the VTE to the OFFLINE state before you restart the VTE application.

Configuring GR Timeout value

The GR Timeout parameter must be configured to enable the GR feature. GR Timeout specifies the duration (in seconds) for which the VTE waits for acknowledgement that the replication refresh has completed before assuming a failure has occurred. This value prevents the tape drive from remaining in a wait state should replication to the remote DLm not complete.

The default value for GR Timeout is 2700 seconds (45 minutes), which is 5 minutes less than the standard mainframe Missing Interrupt Handler time of 50 minutes.

Note: The GR Timeout value must be less than the mainframe MIH value. If the MIH value is less than the GR Timeout, the mainframe abends the job when it encounters the MIH.

You can configure the GR Timeout value in the General Parameters section of the DLm configuration file. "Configure global parameters" on page 97 contains instructions to configure this value.

When a tape drive is configured for GR, the VTE performs two functions after it receives two consecutive write tape marks (WTM CCWs) from the mainframe:

- It flushes its cache for the filesystem where the VOLSER has been written.
- It issues a request asking the replicator to refresh the replication of the filesystem to the remote DLm.

Configuring devices for GR

The GR feature is implemented on a device-by-device basis. It is not necessary to configure all the devices in your DLm, or all the devices within a single VTE, for the GR feature; you can have a subset of your drives configured for GR. Contact EMC Professional Services for assistance in configuring devices for GR.

You can configure and view the device configuration for each VTE by clicking the Devices tab in the DLm Console. Figure 29 on page 102 shows the Add devices and Current devices sections in the Devices tab of the DLm Console, where GR must be configured.

Note: GR works only if a device is configured for GR and the tape library filesystem has been enabled for replication.

"Mainframe configuration for GR" on page 171 provides the instructions for mainframe configuration for GR.

Manage GR

Every time a DLm VTE receives two consecutive tape marks from the mainframe on one of its tape volumes, the VTE initiates GR. The VTE requests the Celerra Replicator to refresh the filesystem where the VOLSER is stored and waits for confirmation that the replication refresh is complete. The VTE acknowledges the tape close only after it receives the replication confirmation.

All output tapes written to a device configured with GR trigger replication if the filesystem to which the tape is written is configured for replication.

During a Control Station failover event, the current GR operation and new host requests for GR are likely to fail. However, as soon as the failover is complete and the secondary Control Station assumes the primary role, subsequent requests for GR work as expected.

Note: In the event of a Control Station failover, several minutes may be needed to reach the stage where GR functions are operational again.

Verify GR configuration

In addition to verifying the Celerra configuration, verify the GR configuration by using the query GR command.

The output of the query GR command displays:

- The GR Timeout value
- The tape devices configured with GR set to YES
- The filesystems on the Storage Controllers that have been configured for replication and are eligible for the GR function

MIH considerations for Guaranteed Replication

As a best practice, the value chosen for MIH should be several minutes more than the value chosen for GR Timeout. In this way, the DLm's actions in case of replication issues occur before the actions associated with MIH, giving the user closer control over managing the consequences of replication issues. Compare these situations:

What happens if MIH times out before GR completes or times out:
- When GR finishes (either completes or GR Timeout is reached), the VTE sees many attempts from the host to clear the channel, and the job abends.

What happens when MIH forces a cancel:
- The job eventually clears (abends) on the host. The host may have sent a FORCE after the initial CANCEL accomplished nothing. This is not recommended, for the same reasons that a manual FORCE is not recommended (see the MVS Commands User's Guide from IBM).
- The virtual tape drive remains unresponsive until GR finishes (either completes or GR Timeout is reached). Attempts to use the drive during this unresponsive time are likely to result in the drive becoming boxed.

What results from manually canceling the job:
- When GR finishes (either completes or GR Timeout is reached), the VTE sees many attempts from the host to clear the channel, and the job abends.
- The CANCEL of the mainframe job does not complete until GR releases the virtual tape drive (either GR completes or GR Timeout is reached).

What results from a manual CANCEL and FORCE:
- The job clears (abends) on the host. Use of FORCE is NOT recommended (see the MVS Commands User's Guide from IBM).
- The virtual tape drive remains unresponsive until GR finishes (either GR completes or GR Timeout is reached). Attempts to use the drive during this unresponsive time are likely to result in the drive becoming boxed.


CHAPTER 6
DLm WORM Tapes

This chapter provides information about defining Write Once Read Many (WORM) filesystems in DLm using the DLm file lock retention (FLR) capability. Topics include:

- Overview
- Configure WORM
- Determine if WORM is enabled
- Extend or modify a WORM tape
- Scratch WORM tapes

Overview

DLm WORM tape is an optional feature that emulates "write-once-read-many" physical tape cartridges. DLm WORM tape allows secure storage of data that is critical for business processes, and for regulatory and legal compliance, by protecting against unintentional or disallowed modification of a virtual tape. You can control the protection status and retention periods at the individual tape level.

Note: The WORM tape feature is only available for tapes that reside in filesystems created on Celerra file storage.

Once you set a tape to the WORM state, the tape's mode changes to read-only, and the tape cannot be modified, deleted, or renamed until the WORM retention period has passed. After the WORM retention period has passed, the file is considered expired. Expired WORM files are not automatically deleted, but once a WORM file has expired, it can be deleted. An expired WORM file cannot be changed back to writable mode, nor can it be modified or renamed. However, an expired WORM file can be reset to the protected WORM state by changing the FLR retention details.

Two different types of file-level retention are available: enterprise (FLR-E) and compliance (FLR-C). FLR-E protects data content from changes made by users but allows changes to be made by administrators. FLR-C protects data content from changes made by users and by administrators, and also meets the requirements of SEC rule 17a-4(f). While these two types of the underlying Celerra FLR feature differ in important ways, management of DLm WORM tapes from the mainframe host is the same in either case.

Note: FLR-enabled filesystems are referred to as WORM filesystems or FLR filesystems in the following sections.

WORM control file

The VTE uses a hidden WORM control file named ".FLR" to control whether tape volumes written to an FLR-enabled filesystem are put into the WORM state upon close or whether files are left writable. Files are set to the WORM state only if this file is present. If the ".FLR" file is not present, the VTE does not set files to the WORM state. This feature facilitates development and testing of the preparation of DLm WORM tapes without actually committing those tapes to the WORM state. Once testing is complete, WORM capabilities can be enabled by adding the control file to the filesystem. Whenever the control file is in place, all future volumes written to the filesystem from a drive configured for WORM are automatically locked during tape unload processing by the host.
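For example, if a WORM filesystem is mounted on the VTEs at /tapelib/FE (the path is illustrative; use your own tape library filesystem), the control file could be created from a VTE shell session along these lines:

# Enable WORM locking for all volumes subsequently unloaded to this filesystem
touch /tapelib/FE/.FLR

Removing the same file suspends locking for volumes unloaded afterward, as the notes below describe.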

Note: If the .FLR file is deleted, tape volumes that the host unloads after that time are not put in the WORM state, but tapes already in the WORM state are not affected.

Note: The DLm594W system message is generated when the host unloads a tape volume written to an FLR-enabled filesystem but the WORM control file does not exist at that time in that filesystem.

File locking for WORM

The VTE automatically sets a virtual tape file to the WORM state and locks it at tape unload time if:

- The DLm_FLR license is installed on the VTE.
- The VTE virtual tape drive is configured with FLR=YES.
- The filesystem on which the virtual tape file resides:
  - Is on an FLR-configured Celerra filesystem
  - Is an FLR filesystem
  - Contains a ".FLR" control file
- The tape's HDR1 label specifies an expiration date, or the virtual drive is configured with a default FLR retention period.

Retention period

Labeled tapes

Considerations for determining the retention period for labeled tapes:

- For labeled tapes, if the host has specified an expiration date in the HDR1, the retention period is set to 00:00:01 hours on the specified date.
- For multi-file tapes, only the first HDR1, on the first dataset, is used to determine the entire tape's retention period.

Note: HDR1 dates are specified in the Julian date format YY/DDD or YYYY/DDD.

If the HDR1 does not contain an expiration date, the device's default WORM retention period, FLRRET, determines the action taken:

- If FLRRET is 0, the tape is not placed in the WORM state (that is, "no retention").
- If FLRRET is positive, its value is added to the current date and time, and the result is used to set the retention period.
- If FLRRET is negative, permanent retention is set.

Several HDR1 expiration dates have special meanings rather than denoting a specific date:

- 99/365, 1999/365, 99/366, and 1999/366 all mean permanent retention. The VTE sets the file's FLR retention period to 0, which is automatically converted to "infinite retention" by the FLR filesystem when it locks the file.
- 00/000, 0000/000, 98/000, 1998/000, 97/000, and 1997/000 all mean no retention. The VTE does not set the WORM state for this tape.
- 99/000 and 1999/000 mean today plus 14 days.

If the expiration date is in the past, other than one of the special dates listed above, the file is automatically set to "infinite retention" when the VTE locks the file. If the HDR1 expiration date is greater than 2038/018, the retention period is set to the maximum value, 2038/018.

Unlabeled tapes

Considerations for determining the retention period for unlabeled tapes:

- Unlabeled tapes are always treated as if there were a HDR1 containing no expiration date. Therefore, the device's default WORM retention period, if any, is used. ("FLRRET" on page 159 provides information about the default WORM retention period.)
- If the default WORM retention period is a negative number, the WORM "infinite retention" period is set.
- If no default WORM retention period has been configured for the device, the file is not set to the WORM state.
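To illustrate how these retention rules look from the host side, the following DD statement sketch writes a standard-labeled tape whose HDR1 carries one of the special expiration dates listed above. EXPDT is the standard JCL LABEL subparameter; the dataset name and the VTAPE esoteric are placeholders for your own values:

//* 1999/365 in the HDR1 means permanent (infinite) retention on the
//* WORM filesystem; a real future date would set retention to that date.
//WORMOUT  DD DSN=PROD.COMPLIANCE.ARCHIVE,DISP=(NEW,CATLG),
//            UNIT=VTAPE,LABEL=(1,SL,EXPDT=1999/365)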

Configure WORM

Note: EMC strongly recommends that you use a storage class other than 0 for WORM-enabled filesystems. Class 0 is the default storage class; therefore, avoid using it for special types of filesystems such as WORM and GR. If you must use class 0 for a WORM-enabled filesystem, explicitly enter 0 in the Storage Class field in the Storage tab of the DLm Console. If you leave the Storage Class field empty, DLm will not activate WORM on these filesystems.

1. Double-click the Configure Devices icon on the DLm VTE desktop.
2. Click the Configure Devices link.
3. Click the View or Modify link next to the active configuration file.
4. Scroll down the configuration screen and click the required device index number in the Device Information panel.
5. Enter appropriate values in the following fields:
   - FLR: Select this option to enable the WORM feature. It is unchecked by default. "FLR" on page 158 provides more information.
   - FLRRET: This option defines a default retention period to be assigned to tape volumes when the mainframe has not indicated an expiration date in the HDR1 record. "FLRRET" on page 159 provides more information.
   - FLRMOD: Select this option if you want to allow the tape drive to modify (extend) a tape volume that is in the WORM state. It is unchecked by default. "FLRMOD" on page 159 provides more information.
   - FLREXTENTS: This option controls how many times a tape volume in the WORM state can be extended, assuming the FLR mod option is selected. Valid values start at 0; if the FLR extents parameter is omitted, the default is 100. "FLREXTENTS" on page 159 provides more information.

6. To configure WORM for an individual device, scroll down to the Current devices section. In the FLR column, click the link corresponding to the device you want to configure. A dialog box opens, displaying the FLR fields.
7. Click one of the Submit buttons.
8. Save the changes as described in "Modify or delete a configuration".
9. To enable changes to the currently running configuration of the VTE, restart the virtual tape application as described in "Start and stop tape devices" on page 84.

Note: Be sure to vary the devices offline to the mainframe before you restart the VT application.

FLR

Select Yes to enable WORM or No to disable the feature. The default value is No.

If you select No for a tape drive, the VTE does not attempt to set the WORM state for any volume written on this drive, even when the drive is writing to an FLR-enabled filesystem. Any new tape volume written by this drive can be deleted, modified, extended, or renamed just as it could be in any non-FLR-enabled filesystem.

If you select this option for a tape drive, tape volumes written to the drive may be set to the WORM state when written, depending on the following conditions:

- The file is written to a WORM filesystem.
- The expiration date sent by the mainframe is a date in the future, or, if the host does not specify an expiration date, a default retention period is configured for the drive. (See "FLRRET" on page 159.)
- A ".FLR" control file is present in the WORM filesystem.

Note: WORM files can be read by a VTE device even if it does not have FLR configured on it.

FLRRET

FLRRET defines a default retention period to be assigned to tape volumes when the mainframe has not indicated an expiration date in the HDR1 record. The FLRRET parameter has no effect on a tape volume unless the FLR active option is selected for the tape drive.

You can set this period in days, months, or years. Enter a numeric value and then select Days, Months, or Years. The default is 0, which indicates there is no default retention period. Specifying a negative retention number indicates that the WORM "infinite retention" period should be set if the host does not set an expiration date.

When the mainframe writes a tape volume to an FLR drive with no expiration date in the HDR1 label, the VTE adds the default retention period set by FLRRET to the current date to determine the WORM retention period for the tape volume. If the mainframe does not include an expiration date in the HDR1 and there is no default retention date set, the VTE leaves the volume in a non-WORM state.

FLRMOD

FLRMOD defines whether a tape drive is allowed to modify (extend) a tape volume that is in the WORM state. The default is No; tape volumes in the WORM state cannot be modified.

By default, WORM tape volumes cannot be extended, because doing so would require a modification of the file. However, setting the FLRMOD parameter to Yes for a tape drive causes the VTE to allow WORM tape volumes to be extended, by using multiple files in the FLR-enabled filesystem to hold the modified image of the tape volume.

When you set the FLRMOD parameter to Yes, tape volumes in WORM mode are mounted in read-write ("ring-in") mode, so that the host knows it can write to the volume. The QUERY command displays the device state as "mod". When you set the FLRMOD parameter to No, tape volumes in WORM mode are always mounted in read-only ("ring-out") mode, and writes are not allowed.

FLREXTENTS

FLREXTENTS controls how many times a tape volume in the WORM state can be extended, assuming the FLRMOD parameter is set to Yes. Valid values start at 0; if the FLREXTENTS parameter is omitted, the default is 100.

The number of extents that make up an extended virtual tape volume is transparent to the mainframe. However, having a large number of extensions can seriously impact the amount of time it takes to open and process all the files involved. FLREXTENTS can be used to limit the quantity of files to a reasonable number.

After the FLREXTENTS limit is reached, the VTE still makes a new extension file and accepts writes, but it responds to every write and write-tapemark command with a Logical End of Volume indication (Unit Exception status). The mainframe would be expected to close the tape volume and start a new tape soon after receiving a Logical End of Volume, but it is free to continue writing as much data as it wants past the Logical End of Volume indications (up to the normal size limitation).

Determine if WORM is enabled

To determine if WORM is enabled, enter one of these commands:

QUERY CONFIG: If FLR is enabled, the output displays FLR details, as follows:

Index Devicename Type CU UA Options
A 940A A PATH=/tapelib/ SIZE=40G FLR=YES FLRRET=1D FLRMOD=YES FLREXTENTS=5

QUERY SPACE: If WORM is enabled, (FLR) is displayed next to the filesystem name:

Tape library space for drives: FF
Path Size Active Scratch/Qty Free Filesystem
/tapelib 17G M 5% 184 0% 1 5.3G 31% /dev/sda4
/tapelib/AA 492.4G 1.6G 0% 12.2K 0% G 99% :/tapelib/AA
/tapelib/BB 492.4G 4.4G 0% 14.6K 0% G 99% :/tapelib/BB
/tapelib/CC 492.4G 896.2M 0% 14.2K 0% G 99% :/tapelib/CC
/tapelib/DD 492.4G 3G 0% 15.1K 0% G 99% :/tapelib/DD
/tapelib/FE 98.5G 1.1G 1% 1.3K 0% G 98% :/tapelib/FE (FLR)

QUERY: The output displays an ro (read-only) status for a WORM file (unexpired or expired), unless the FLR mod option is selected on the device. If the FLR mod option is selected, it displays the mod status.

Devicename VOLSER/L
AA2222 S R-A2 aws/rw LP
9210 FE0023 S R-A2 aws/ro LP
9211 FE0026 S R-A2 aws/mod LP

The four columns under VOLSER/L are:

- Volume currently mounted on the drive
- Type of label on the volume
- Drive status
- Volume status

"QUERY" on page 218 provides more information.

Extend or modify a WORM tape

Normally, a tape in the WORM state is mounted in write-protect ("ring-out") mode. However, a VTE device can be configured to allow appending of data to the last dataset on a WORM-protected tape, or the addition of datasets to the tape, by setting the FLRMOD option. ("FLRMOD" on page 159 provides information about setting the FLR mod option.)

Since a WORM file is in read-only mode and cannot be modified, the appended data is maintained in auxiliary "segment" files, leaving the original files unchanged. Each time a WORM file is modified, a file named VOLSER_nnnnn is created to hold the modifications, where VOLSER is the original filename and nnnnn is a number that is incremented sequentially each time the file is modified. For example, if the original volume is VTED00, the first modification creates an additional file named VTED00_00001 to hold the modifications. The next modification, if any, creates an additional file named VTED00_00002, and so on.

When a modified WORM tape is unloaded, the new extension file is set to the WORM state with the same retention period as the original VOLSER file. Whenever an extended file is subsequently mounted, all of the segments are opened, concatenated together, and presented to the host as a single tape volume reflecting all the modifications since file creation.
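For example, appending to the example volume VTED00 from the host might look like the following DD statement sketch. The dataset name and the VTAPE esoteric are placeholders, and the drive the statement allocates must have FLRMOD set to Yes:

//* DISP=MOD appends to the last dataset on WORM volume VTED00; the VTE
//* stores the appended data in a new VTED00_nnnnn segment file at unload.
//APPEND   DD DSN=PROD.WORM.DAILY,DISP=(MOD,KEEP),
//            UNIT=VTAPE,VOL=SER=VTED00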

The host can only write to a modifiable WORM tape at the end of the last dataset (between the last data block and the tapemark preceding the trailer labels) or between two tapemarks at the end of the existing volume. This corresponds to the host appending to a dataset (writing with DISP=MOD) or adding a new file to the tape. Attempts to write at any other location result in a unit check with sense of command reject, ERPA code x'30' (write protect).

Any VTE can read segmented FLR tapes whether or not the FLRMOD option is selected and whether or not the DLm_FLR license is installed.

Backward compatibility of modified (segmented) WORM tapes

By default, a WORM tape can only be modified 100 times. This restriction limits the number of files that make up a single tape volume, because a large number of segments would have a performance impact on mounting and reading the tape. The default is 100, but it can be changed using the FLREXTENTS option. ("FLREXTENTS" on page 159 provides information about setting the FLR extents option.) Once the FLREXTENTS number of modifications exists, additional modifications are still accepted and new segments created as needed, but every write to the tape results in a unit exception (Logical End of Tape, "LEOT"), signaling to the host that it is approaching the physical end of tape. The host would be expected to close the tape volume and start a new tape volume soon after receiving LEOT, but it is free to continue writing as much data as it wants past the LEOT indications.

Scratch WORM tapes

An unexpired tape in the WORM state cannot be scratched. Attempts to scratch an unexpired tape in the WORM state result in a "file not writable" error.

An expired tape in the WORM state can be scratched. However, since the only operation that can be performed on an expired WORM tape is to delete it (it cannot be renamed from VOLSER to ~VOLSER), scratching of an expired WORM tape is implemented as a combined scratch-and-erase operation. The existing header labels from the tape are copied to a new ~VOLSER file, and then the expired VOLSER file is completely deleted. Scratching an expired WORM tape always erases the data.

CHAPTER 7
Mainframe Tasks

This chapter discusses using DLm with z/OS:

- Configure devices
- Real 3480, 3490, or 3590
- Manual tape library
- MTL considerations for VTE drive selection
- MTL-related IBM maintenance
- EMC Unit Information Module
- Missing Interrupt Handler
- Mainframe configuration for GR
- Mainframe configuration for deduplicated virtual tapes
- Dynamic device reconfiguration considerations
- DFSMShsm considerations
- Specify tape compaction
- Locate and upload the DLm utilities and JCL for z/OS
- Initial program load from a DLm virtual tape

Configure devices

z/OS uses the Hardware Configuration Definition (HCD) utility to define devices on the system. HCD provides an interactive interface that allows you to define the system's hardware configuration to both the channel subsystem and the operating system.

The three alternatives for configuring DLm devices on the mainframe are:

- Configure the devices as real 3480, 3490, or 3590 tape drives.
- Configure the devices as MTL devices.
- Configure the devices with a unique device type using the EMC UIM.

These alternatives are discussed in the following sections. The preferred approach is to configure the devices as MTL devices. If you are planning to use DLm with IBM's Object Access Method (OAM), you must configure the devices as MTL devices. OAM needs tape drives to be SMS-managed and treats them on the host as a single tape library. The IBM document DFSMS Object Access Method Planning, Installation, and Storage Administration Guide for Tape Libraries provides more information on using a library for OAM objects.

Real 3480, 3490, or 3590

DLm can emulate 3480, 3490, or 3590 tape drives. If your mainframe installation does not have one of these device types installed, you can select that particular device type to be installed. The advantage of using the 3480, 3490, or 3590 device types is that some applications or access methods examine device types to make sure that they are writing to or reading from a known tape device. These applications typically do not work with the EMC UIM.

However, if you have real 3480, 3490, and 3590 tape drives configured in your system, do not attempt to define the DLm devices in this manner. Configuring the devices as a device type that is already present results in misallocation errors, because z/OS might request a real 3480, 3490, or 3590 cartridge on a DLm device or request a tape-on-disk volume on a real 3480, 3490, or 3590.

If you need to use one of these device types to define the DLm devices, make sure that the tape units configured in your installation do not include this device type. For example, if your JCL uses TAPE (UNIT=TAPE), make sure that TAPE does not include the device type (3480, 3490, or 3590) that you are using to define the DLm devices.

Manual tape library

If you have installed 3480, 3490, and 3590 tape drives, you cannot define the DLm devices as real tape drives; doing so results in misallocation errors as described previously. If you plan to use the DLm devices with OAM or any application that verifies device type, you cannot use the EMC UIM. In this case, you must define your DLm devices as real 3490 or 3590 tape drives and include them in an MTL so that they are not misallocated.

IBM introduced the concept of an MTL with an APAR that allows stand-alone tape drives and their associated volumes to be SMS-managed by treating a group of such drives as a logical tape library. SMS manages allocations to such a logical library just as it would any automated tape library dataserver (ATLDS), with the exception that mount messages are routed to a tape operator console rather than to the ATLDS robotics. The IBM document DFSMS Object Access Method Planning, Installation, and Storage Administration Guide for Tape Libraries provides information about MTL support.

To define DLm devices with HCD:

1. Configure the DLm devices as either 3490 or 3590 tape devices using HCD.

Note: This does not require the use of the EMC UIM; use the standard HCD 3490 or 3590 definitions.

2. On the Device/Parameter Feature definition screen for each drive, choose YES for MTL and supply an artificial LIBRARY-ID and LIBPORT-ID.
3. Define the control unit as a 3490 or 3590 with 16 tape drives available.
4. Be sure that all the devices in the same logical library have the same LIBRARY-ID, with each group of 16 devices having a unique LIBPORT-ID. IBM requires that there be only 16 tape drives to a LIBPORT-ID. As a result, you must configure multiple control units on the same channel, using different logical control unit addresses, when you want to configure more than 16 drives.
5. Make sure that each control unit's devices have the same LIBRARY-ID, but a unique LIBPORT-ID per control unit.
6. The maximum number of tape drives defined in an MTL is 512. If more tape drives are needed, a second MTL must be defined.

Table 9 on page 166 contains an example of devices sharing the same LIBRARY-ID, with a unique LIBPORT-ID per logical control unit.

Table 9 Example of LIBRARY-ID and LIBPORT-ID

Dev Add  CU    Log CU  LIBRARY-ID      LIBPORT-ID
E800     CU 0  00      (same for all)  01
E801     CU 0  00      (same for all)  01
E80F     CU 0  00      (same for all)  01
E810     CU 1  01      (same for all)  02
E811     CU 1  01      (same for all)  02
E81F     CU 1  01      (same for all)  02

After defining DLm using HCD, the library must be defined to SMS using the library management function. Then your ACS routines must be updated to allow jobs to select the new library with appropriate user-defined ACS management, data, and storage classes and groups. For example, if you define a new esoteric called VTAPE, your ACS routines could allocate the dataset to the SMS storage group using the DLm MTL whenever UNIT=VTAPE is specified in the JCL.

The characteristics of DLm virtual tape cartridges match the SMS Media Type "MEDIA2" for 3490 or "MEDIA4" for 3590. Make sure that you specify the appropriate media type (MEDIA2 or MEDIA4) on the Library Definition screen. In addition, since SMS requests scratch tapes using the media type, you must add MEDIA2 or MEDIA4 to the list of DLm scratch name synonyms, as explained in "Scratch synonyms" on page 107.

z/OS might request mounts by media type based upon the DATACLAS definition. The customer's ACS routines or tape display exits may also change the mount request to use storage group names, LPAR names, pool names, and so on. All such names must be entered into the synonym list.

Note: After you configure the MTL, it is treated as a real library; that is, you must enter cartridges into the library before DLm can use them. Use the DLMLIB utility to enter cartridges into the MTL. Before using the DLMLIB utility, contact your tape management system vendor for any customizations that interface with IBM's MTL.

You must execute DLMLIB out of an authorized library. EMC provides an example of the JCL required for linking DLMLIB; the sample JCL is found in the LNKLIB member of DLMZOS.JCL.CNTL. Step 4 on page 175 provides download instructions. EMC also provides an example of the JCL required to run DLMLIB; the sample JCL is found in the RUNLIB member of DLMZOS.JCL.CNTL.

The log file lists the result of each cartridge entry request, including any error codes. The utility invokes IBM's LCS External Services (CBRXLCS) macro. Return codes and reason codes can be found in the chapter "OAM Diagnostic Aids" of DFSMSdfp Diagnosis.

MTL considerations for VTE drive selection

When a request is made for a tape drive defined in an MTL, the ACS routines select the appropriate tape storage group for the library. Allocation subsequently chooses any available drive in that library. This is not a problem if only one VTE is defined as part of the library. However, an MTL can span multiple VTEs for performance and failover considerations. In this case, targeting a specific VTE for batch utilities is required.

Note: MTL devices do not support the demand allocation (UNIT=xxxx) method, which selects a specific drive on a particular VTE, thereby enabling a batch utility to communicate with that VTE.

Use one of these methods to enable a batch utility to communicate with a specific VTE in an MTL defined with multiple VTEs:

- Omit a single drive from the MTL in each VTE's device group.

For example, consider an MTL defined with two VTEs, each configured with 64 devices:

a. In each VTE, define 63 devices as MTL=YES in the HCD. One device would be MTL=NO in the HCD.
b. Subsequently, use demand allocation in JCL to select the specific drive address that is outside the MTL. EMC recommends that you leave this drive offline to prevent inadvertent allocation by other jobs. One way to accomplish this is to bookend your jobs with steps that vary the device online and offline with an operator command utility program.

The DLMCMD, DLMSCR, and GENSTATS batch utility programs now support the use of the EXEC statement parameter DEV=xxxx, which allows access to an offline tape device. Code it as follows (a fuller sketch appears after this list):

EXEC PGM=DLMCMD,PARM='DEV=xxxx'

where xxxx is the offline virtual tape device on the VTE you wish to access.

IMPORTANT: Ensure the tape device is offline before you run any utility with the DEV= parameter. The device specified in the DEV= parameter must be offline. If the DLMCMD, DLMSCR, or GENSTATS utility is used with the DEV= parameter while the specified device is online, DLm displays the corresponding 182I message and terminates the operation.

For DLMCMD and DLMSCR steps, this parameter eliminates the need to code a DLMCTRL DD statement. For GENSTATS, this parameter eliminates the need to code a GENIN DD statement.

- Define a separate MTL for each VTE to enable VTE selection:
  a. Similar to the previous method, define only 63 devices on each VTE as part of the same MTL.
  b. For each VTE, define a separate MTL (different LIB-ID) for the remaining device, as well as a new esoteric.
  c. Use ACS routines to select the appropriate library, which limits the available drive selection to that one drive.
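Returning to the DEV= parameter, a complete DLMCMD step using it might look like the following rough sketch. The STEPLIB dataset follows this guide's naming, but the SYSPRINT/SYSIN DD names and the command shown as input are illustrative assumptions; check the sample members in DLMZOS.JCL.CNTL for the authoritative JCL.

//* Send a DLm command through offline device E81F (an assumed address)
//* without a DLMCTRL DD statement and without varying the device online.
//DLMCMD   EXEC PGM=DLMCMD,PARM='DEV=E81F'
//STEPLIB  DD DSN=DLMZOS.PGMS,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
QUERY SPACE
/*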

MTL-related IBM maintenance

The program temporary fix (PTF) for each of the following APARs must be applied when using DLm in an MTL environment:

- APAR OA03749: More than one device fails to vary online.
- APAR OA06698: Replacement tape drives get MSG IEA437I in an MTL environment.
- APAR OA07945: Mount hangs or times out using MTL with an OEM automated library.
- APAR OA08963: Tape volume capacity is incorrect for OAM object support users.
- APAR OA10482: MTL scratch volume mount error occurs.

EMC Unit Information Module

As an alternative to defining real 3480s, 3490s, or 3590s, or using an MTL, EMC provides a user UIM that allows DLm tape devices to be configured in HCD with a unique device type. Using the EMC UIM prevents the operating system from allocating the DLm virtual tape drives to jobs requesting a mount of a real tape cartridge. If you are not using OAM or an application that checks device types, the EMC UIM is the easiest way to configure the DLm devices so that no misallocation errors occur with real tape drives. Information regarding user UIMs can be found in IBM's document z/OS MVS Device Validation Support.

You must install the EMC UIM and associated Unit Data Table (UDT) into SYS1.NUCLEUS before you configure the DLm devices in HCD. Before you install the UIM, it is important to back up the SYSRES volume containing the SYS1.NUCLEUS dataset to provide a recovery mechanism if anything fails to operate properly.

Use ISPF function 3.3 (Utilities: Move or Copy) to copy CBDEC255 and CBDUC255 from DLMZOS.PGMS into SYS1.NUCLEUS, as explained in "Locate and upload the DLm utilities and JCL for z/OS" on page 174. If CBDEC255 or CBDUC255 already exists in SYS1.NUCLEUS, then another vendor has already supplied a user UIM using the same user device number of 255. Contact EMC Customer Support for a different module name to use.
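If you prefer a batch job to the interactive ISPF copy, an equivalent step with the standard IBM IEBCOPY utility might look like this sketch. Only the input dataset name follows this guide's conventions, and the job must run under a user authorized to update SYS1.NUCLEUS:

//* Copy the EMC UIM and UDT members into SYS1.NUCLEUS.
//COPYUIM  EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//IN       DD DSN=DLMZOS.PGMS,DISP=SHR
//OUT      DD DSN=SYS1.NUCLEUS,DISP=SHR
//SYSIN    DD *
  COPY OUTDD=OUT,INDD=IN
  SELECT MEMBER=(CBDEC255,CBDUC255)
/*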

After installing the UIM, you can configure the DLm devices in HCD. The UIM provides the following:

- Four control unit types: V3480, V3481, V3482, and V3483
- Four supporting device types: V3480, V3481, V3482, and V3483

The generic names for these devices are also V3480, V3481, V3482, and V3483. If you have already defined a generic name of V348x, contact EMC for support.

You must define multiple virtual device types to support multiple DLm systems, or a single DLm with multiple virtual tape libraries configured. You must define a V348x tape drive for each virtual tape device that you have configured in DLm. All virtual tape drives assigned to the default virtual tape library in the DLm filing structure (/tapelib) are normally defined with the same generic name (for example, V3480). If you plan to have a drive assigned to a different tape library path in the DLm filing structure, you should define that drive with a separate generic name (for example, V3481).

Once the DLm device definitions are active, you must either specify UNIT=V348x or hard-code the unit address allocated to a device. In this way, regular jobs that call for real tape drives, or use tapes previously cataloged on real 3480s, are not allocated to the DLm devices. After a tape is cataloged as created on a V348x device, it is allocated to that same device type when called again. Conversely, a tape cataloged as created on a real tape drive is not allocated to a V348x device.

Missing Interrupt Handler

The MVS, OS/390, or z/OS Missing Interrupt Handler (MIH) timer value is often set too low for the lengthy operations that can occur on a large tape cartridge. If an operation takes longer than the MIH value, the operating system reports I/O errors and often boxes the device, taking it out of service. For this reason, IBM recommends a minimum MIH timer value of 20 minutes for tape drives, including virtual tape drives such as those on DLm. DLm reports a preferred MIH timer value of 3000 seconds (50 minutes) to the host when it is varied online, and the host should take this value as the DLm devices' MIH time.

To determine the current MIH timer value, you can use the following z/OS operator command:

D IOS,MIH,DEV=xxxx

where xxxx is any DLm virtual tape drive address.

You can temporarily change the MIH value for DLm devices by typing the following z/OS operator command:

SETIOS MIH,DEV=(xxxx-xxxx),TIME=mm:ss

where xxxx-xxxx is the range of DLm virtual tape drive addresses. The IBM manual 3490 Magnetic Tape Subsystem Introduction and Planning Guide provides more information about the MIH timer and tape drives.
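For example, to display the current value and then align a 32-drive DLm range with the preferred 50-minute value (the E800-E81F addresses are illustrative):

D IOS,MIH,DEV=E800
SETIOS MIH,DEV=(E800-E81F),TIME=50:00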

Mainframe configuration for GR

If you have not configured all the devices on the DLm to support GR, you must perform additional configuration tasks at the mainframe end: you must isolate GR devices from non-GR devices on the mainframe.

MTL

If you configured your DLm devices using a Manual Tape Library (MTL) and not all devices are configured for GR, perform the following steps:

- Define two separate MTLs: one for GR-enabled devices and one for non-GR drives.
- Update your ACS routines so that you can direct specific VOLSERs to the MTL supporting GR.

Esoterics

You can also define different esoterics for devices that use GR and those that do not. For example, you can define two esoterics as follows:

- TAPE, pointing to devices not enabled for GR
- RTAPE, pointing to devices enabled for GR

Then, to direct a new VOLSER to GR, you must code the JCL to include:

UNIT=RTAPE

Make sure to include UNIT=RTAPE on any DD statement that will write to the VOLSER, even if the VOLSER already exists in the library. Specifically, make sure to include UNIT=RTAPE on any tape DD statement that uses DISP=MOD on a volume (VOLSER) that was originally created with UNIT=RTAPE. Coding DISP=MOD without UNIT=RTAPE might cause the VOLSER to be mounted on a device that does not support GR; in that case, the update does not get replicated to the DR site at the time the volume is closed.
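For instance, DD statements directing output to GR-enabled drives might look like the following sketch. RTAPE is the example esoteric defined above; the dataset name and VOLSER are placeholders:

//* New volume written through a GR-enabled drive:
//GROUT    DD DSN=PROD.CRITICAL.BACKUP,DISP=(NEW,CATLG),UNIT=RTAPE
//* Extending an existing GR volume: keep UNIT=RTAPE with DISP=MOD so the
//* mount lands on a GR device and the update replicates at close.
//GRMOD    DD DSN=PROD.CRITICAL.BACKUP,DISP=(MOD,KEEP),
//            UNIT=RTAPE,VOL=SER=GR0001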

Missing Interrupt Handler

GR causes a significant delay between the time the mainframe issues a close to a tape volume and the time the DLm VTE responds to that close. By default, the DLm VTE reports a preferred Missing Interrupt Handler (MIH) value of 3000 seconds (50 minutes) to the mainframe for all its devices when the devices are varied online. If a GR device has not received confirmation from the Celerra Replicator that replication has completed within 50 minutes, the mainframe interrupts the device, indicating that the 50 minutes expired without a response to the previously sent CCW (the close). "MIH considerations for Guaranteed Replication" on page 151 provides more information.

Mainframe configuration for deduplicated virtual tapes

Deduplicated virtual tapes/VOLSERs need to be isolated from non-deduplicated virtual tapes. All VOLSERs that reside on the Data Domain DD880 are deduplicated, and all VOLSERs that reside on the Celerra are non-deduplicated. You can isolate them by defining an MTL that contains only deduplicated virtual tapes/VOLSERs. Update your ACS routines so that you can direct specific VOLSERs to the MTL supporting deduplication.

Dynamic device reconfiguration considerations

Since DLm is a virtual tape control unit, it cannot benefit from an operator-initiated or system-initiated 'swap' function. Accordingly, following any message 'IGF500I SWAP xxxx TO xxxx - I/O ERROR' for any device, you must reply NO to the subsequent "## IGF500D REPLY 'YES', DEVICE, OR 'NO'." If you configured the devices as V348x devices using the UIM, Dynamic Device Reconfiguration (DDR) swap is automatically disabled for those devices, and a swap cannot occur.

DFSMShsm considerations

If you plan to use DLm with HSM, the various SETSYS tape parameters do not accept V348x generic names as valid. In that case, it is necessary to define esoteric names that are unique to the various V348x devices. To identify esoteric tape unit names to DFSMShsm, you must first define these esoteric tape unit names to z/OS during system I/O generation (HCD). Then, you must include the esoteric tape unit names in a DFSMShsm SETSYS USERUNITTABLE command. Only after they have been successfully specified with the SETSYS USERUNITTABLE command are they recognized and used as valid unit names with subsequent DFSMShsm commands.

Specify tape compaction

Compaction of the virtual tape data under z/OS is initiated just as it is for a real compression-capable (IDRC) 3480/3490/3590E. The default is NOCOMP for 3480, and COMP for 3490 and 3590E.

You can specify the use of compaction in the JCL by using the DCB=TRTCH=COMP or DCB=TRTCH=NOCOMP parameter on the appropriate DD cards for output tapes. No JCL parameter is required for input tapes; the system automatically decompresses the tape data on read requests.

Alternatively, the system programmer can specify the COMPACT=YES parameter in the DEVSUPxx PARMLIB member. This makes compaction the default option for all of the virtual drives. The COMPACTION=Y/N option on the SMS DATACLAS definition provides another method for activating and disabling compaction.

Note that while the compaction option significantly reduces the amount of storage required on the DLm library, there is some impact on data transfer performance compared to uncompressed data. The efficiency of the compaction, as well as the performance impact, varies depending upon the data. The file-size values reported by the QUERY command and the awsprint utility (using CP503) reflect the compressed data size, not the original uncompressed size.

Note: All data written to the deduplicating storage on the DD880 should be written without IDRC.
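As an illustration, the following DD statement sketches show both cases. The dataset names and the VTAPE/DTAPE esoterics are placeholders, with DTAPE standing in for a hypothetical esoteric that points at DD880-backed drives:

//* Celerra-backed drive: request IDRC compaction explicitly.
//COMPOUT  DD DSN=PROD.BACKUP.WEEKLY,DISP=(NEW,CATLG),
//            UNIT=VTAPE,DCB=TRTCH=COMP
//* DD880-backed (deduplicating) drive: suppress IDRC so the Data Domain
//* deduplication works against uncompressed data.
//DEDUPOUT DD DSN=PROD.BACKUP.DEDUP,DISP=(NEW,CATLG),
//            UNIT=DTAPE,DCB=TRTCH=NOCOMP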

Locate and upload the DLm utilities and JCL for z/OS

EMC provides a set of utilities and a UIM for z/OS environments. The utilities are:

- GENSTATS: A utility that generates reports from VTE and VOLSER range statistics
- DLMSCR: A scratch utility that sends VOLSER scratch requests to DLm
- DLMCMD: A utility that allows the mainframe to send DLm commands
- DLMLIB: A utility that is required to define scratch volumes on an MTL
- DLMVER: A utility that reports the versions of all the DLm mainframe utilities on the mainframe and the z/OS release
- DLMHOST: A host utility that provides z/OS console operation support. Chapter 9, "z/OS Console Support," provides details about this utility.

Downloading and using the DLm utilities and JCL for z/OS

To use any of these utilities, or the UIM:

1. Download the file DLMZOS-<version number>.xmi from the EMC support website. Select Navigator > Disk Library Tools and transfer the file to the mainframe as follows:

ftp target_system_name
(satisfy the login requirements of the mainframe)
quote site recfm=fb lrecl=80
bin
put DLMZOS-<version number>.xmi
quit

The file is placed on the host as 'uid.DLMZOS-<version number>.xmi', where uid is the login user ID used for the FTP. Alternatively, you may use put DLMZOS-<version number>.xmi 'filename' to force a specific filename of your choice.

2. After transferring the file, use ISPF function 6 (Command Shell) and type:

receive indataset('uid.dlmzos.xmi')

3. At the prompt, "Enter restore parameters or delete or end", type:

da('dlmzos.pgms')

   da('dlmzos.pgms')

   DLMZOS.PGMS is created with the following members:

   CBDEC255 - The unit data table for the UIM
   CBDUC255 - The UIM for the EMC DLm devices
   DLMLIB - The utility required to add volumes to a DLm MTL
   DLMSCR - The DLm scratch utility
   DLMCMD - The DLm command utility
   DLMVER - The DLm utility version reporting utility
   GENSTATS - The report formatting utility
   DLMHOST - The DLm utility that provides a command interface to VTEs and a mechanism to list selected VTE log messages

4. Transfer the DLMZOS.JCL file to the host. You must first unzip this file from the DLMZOS.JCL <version>.zip file. The DLMZOS.JCL file contains sample JCL to link and execute these batch utilities. The file is available on the EMC support website: select Navigator > Disk Library Tools. To transfer the file, type:

   ftp target_system_name
   (satisfy login requirements of the mainframe)
   quote site recfm=fb lrecl=80
   bin
   put DLMZOS.jcl
   quit

   The file is placed on the host as 'uid.DLMZOS.jcl', where uid is the login user ID used for the FTP. Alternatively, you may use put DLMZOS.jcl 'filename' to force a specific filename of your choice.

5. After transferring the file, use ISPF option 6 (Command Shell) and type:

   receive indataset('uid.dlmzos.jcl')

6. At the prompt, "Enter restore parameters or delete or end", type:

   da('dlmzos.jcl.cntl')

   DLMZOS.JCL.CNTL is then populated with the sample JCL. See member $INDEX for a list of its contents.

7. If you plan to use the DLMCMDPR or GENSTATP procedures for Command Processor jobs (see the EMC Disk Library for mainframe Command Processors User Guide), perform the following steps:

   a. Copy the DLMCMDPR and GENSTATP procedures to a common PROCLIB.

   b. Create a PDS dataset for the DLMCMD1, DLMCMD2, and DLMCMD3 REXX programs contained in the DLMZOS.JCL dataset. Specify this REXX dataset in the REXXLIB parameter in the DLMCMDPR and GENSTATP procedures. You can optionally keep the REXX programs in the DLMZOS.JCL dataset and instead just specify the DLMZOS.JCL dataset name in the REXXLIB parameter.

   c. Specify the dataset name of the DLMZOS.PGMS dataset created in step 3 above in the BTILIB parameter in the DLMCMDPR and GENSTATP procedures.

GENSTATS utility

The GENSTATS utility generates reports on the tape mount and unmount statistics logged at the VTE level and at the VOLSER range level. It can selectively present:

- Daily and hourly throughput numbers
- Mount rates
- Concurrent tape drive usage details
- Compression ratio
- Average and slow mount response information

GENSTATS uses command processors, such as CP998 and CP999, to summarize virtual tape activity. A GENSTATS job consists of two steps:

1. Execute a command processor, which accesses the appropriate statistics file and writes the data to a non-labeled tape file.
2. Run GENSTATS to generate a report from the non-labeled tape file data.

The EMC Disk Library for mainframe Command Processors User Guide contains more information about GENSTATS, including sample GENSTATS JCL that shows how GENSTATS uses CP998 and CP999 to generate reports. These sample jobs access VTE and VOLSER range statistics and make the data available on the mainframe. A shape-only sketch of such a job follows.
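The skeleton below shows only the two-step structure described above; it is not the shipped sample. It assumes the command-processor output is read as a non-labeled tape whose VOLSER names the processor (CP998 here), and the load library, dataset names, DD names, and unit are hypothetical placeholders. Use the jobs supplied with the Command Processors User Guide for the real stream.

//* STEP 1 (hypothetical pattern): capture the CP998 statistics "tape"
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD UNIT=V3480,VOL=SER=CP998,LABEL=(,NL),DISP=OLD,
//            DCB=(RECFM=U,BLKSIZE=32760)
//SYSUT2   DD DSN=UID.CP998.STATS,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(5,5)),
//            DCB=(RECFM=U,BLKSIZE=32760)
//* STEP 2: format the report (the input DD name is hypothetical)
//STEP2    EXEC PGM=GENSTATS
//STEPLIB  DD DSN=USER.LOADLIB,DISP=SHR
//SYSUT1   DD DSN=UID.CP998.STATS,DISP=SHR
//SYSPRINT DD SYSOUT=*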

DLm scratch utility program

DLm provides the DLMSCR utility, which you can use with any of the major tape management systems to keep your TMS scratch status synchronized with the DLm scratch status. You must link the DLMSCR utility as an authorized program into an authorized library under the name DLMSCR. EMC recommends that you use security software, such as Resource Access Control Facility (RACF), to restrict the use of DLMSCR to authorized users only. EMC provides an example of the JCL required to link DLMSCR; the sample JCL is found in the LNKSCR member of DLMZOS.JCL.CNTL. Step 4 on page 175 provides download instructions.

DLMSCR runs on the mainframe and sends volume scratch requests to DLm. Because the TMS may dynamically release tapes back to scratch status, you must run DLMSCR regularly to maintain synchronization between the TMS catalog and DLm. To use DLMSCR, you run a TMS scratch report and input that scratch report to DLMSCR. DLMSCR scans the scratch report for the DLm-owned volumes and sends the appropriate scratch requests to DLm.

EMC provides examples of the JCL required to run DLMSCR. The sample JCL is found in the RUNSCRA and RUNSCRB members of DLMZOS.JCL.CNTL; RUNSCRB illustrates the use of the DEV= parameter. Step 4 on page 175 provides download instructions. A hedged sketch of such a job follows.
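The following is a minimal sketch of such a job, not the shipped RUNSCRA member. The DD names DLMSCR, DLMLOG, and DLMCTRL are the ones described in this section; the PARM values, load library, and dataset names are hypothetical placeholders, and the DLMCTRL pattern follows the DLMCMD example later in this chapter:

//SCRATCH  EXEC PGM=DLMSCR,PARM='TYPE=RMM,PREFIX=BT'
//STEPLIB  DD DSN=USER.LOADLIB,DISP=SHR
//* TMS scratch report created earlier in the same job (LRECL 133)
//DLMSCR   DD DSN=UID.SCRATCH.REPORT,DISP=SHR
//* results log (LRECL 133)
//DLMLOG   DD DSN=DLM.LOGFILE,DISP=OLD
//* control-path tape device in the library that holds the volumes
//DLMCTRL  DD DSN=DLM.CTRL,UNIT=3590,VOL=SER=BT9999,DISP=(,KEEP)

Alternatively, the DEV=xxxx parameter can replace the DLMCTRL DD statement, as the RUNSCRB member illustrates.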

Table 10 lists the DLMSCR parameters that may need to be specified.

Table 10  Parameters in DLMSCR (part 1 of 2)

TYPE=x
Where x selects the tape management system. Valid types include RMM, TLMS, TMS, TSM, ZARA, CTLM, AFM, or CTLT. This is the only required parameter.

PREFIX=y
Where y is a string of prefix characters that limits processing to volumes whose VOLSER begins with the characters specified. Unless otherwise specified by the PREFIXLN parameter, the default prefix length is 2. PREFIX=AAABAC causes DLMSCR to process only volumes whose serial numbers begin with AA, AB, or AC. Coding this parameter prevents DLMSCR from trying to unnecessarily scratch volumes that are not stored on DLm. If no PREFIX is specified, DLMSCR processes the entire scratch list.

PREFIXLN=n
Where n is a single digit between 1 and 5. This value replaces the default prefix length of 2 for the PREFIX= parameter. PARM='PREFIX=ABCD,PREFIXLN=1' causes DLMSCR to process only volumes whose serial numbers begin with A, B, C, or D.

NODSNCHK
DLm normally validates dataset names (dsnames) found in the scratch report as part of the scratch process. A scratch is not successfully completed if the dsname in the scratch report does not match the dsname in the HDR1 label on the volume being scratched. NODSNCHK prevents the dataset name check from being performed and is not recommended for normal use.

FREESPACE
By default, DLMSCR reclassifies volumes being scratched as eligible for scratch allocation requests without freeing the space occupied by those volumes. The FREESPACE parameter may be used to request that the space be freed.
Note: FREESPACE requires the volumes to already be in scratch status. Therefore, to immediately free the space, DLMSCR must be run twice: the first execution without the FREESPACE parameter to scratch the volumes, and the second execution with the FREESPACE parameter to release the space. Keep in mind that DLm automatically frees the space of scratched volumes when it needs space, so it is generally not necessary to run DLMSCR with the FREESPACE parameter.

FREEAFTERSCR
While the FREESPACE parameter requires that a volume already be in a scratched state, FREEAFTERSCR frees space from a volume immediately after DLMSCR has scratched it.
Note: Once FREEAFTERSCR frees the space associated with the execution of DLMSCR, the volume cannot be recovered if it was scratched by mistake.

Table 10  Parameters in DLMSCR (part 2 of 2)

NODATECHK
DLm normally checks the creation date of a tape volume and does not allow any volume to be created and scratched in the same 24-hour period. Setting this parameter allows volumes to be created and scratched on the same day; it ignores the default date check in DLMSCR.

IGNLCSERR
This parameter ignores any errors reported by the Library Call Subsystem (LCS) used by OAM with MTL volumes. Normally, DLMSCR logs any error returned by LCS and stops processing scratch tapes when these errors occur. If this parameter is set, DLMSCR scratch processing continues even when LCS errors are encountered.

ALLVOLS
This parameter allows scratch of volumes with dsnames of all zeros.

IGNLCSRC4
This allows DLMSCR processing to continue after receiving a return code of 4 from LCS processing, but terminates if the return code from LCS processing is greater than 4.

NOTCDB
This prevents DLMSCR from attempting any TCDB updates. This should be used only if the TMS already performs this function.

NOTIFSCR
This prevents DLMSCR from attempting to change the TCDB use attribute to scratch if DLm reports that the VOLSER was already scratched.

TEST
This parameter allows for testing; no actual changes are performed.

DEV=xxxx
This allows the specification of an offline virtual tape device and the elimination of the DLMCTRL DD statement, as shown on page 167.

USETMC
[CA-1 TMS environments only] This parameter enables DLMSCR to directly read the CA-1 Tape Management Catalog (TMC) to find DLm-resident VOLSERs that have been scratched and send the appropriate scratch requests to DLm for processing. Use of the USETMC option requires that the DLMSCR DD JCL statement point directly at the TMC (or a copy of the TMC).

SYNC
[CA-1 TMS environments only] This parameter is valid only if specified along with the USETMC parameter. It enables DLMSCR to synchronize the status of the VOLSERs in the Tape Control Data Base (TCDB) and the DLm library with those in the CA-1 TMC.

Scratch utility output files

The two scratch utility output files are:

- The DLMLOG file maintains a history of the results of each delete request. The file should have a logical record length (LRECL) of 133.

  If an error occurs during a scratch request (such as scratching a non-existent volume), the failure is recorded in the log file. The program continues with the next scratch request, and the program execution ends with a non-zero return code.

- The DLMCTRL file allocates a DLm tape device for use as a control path to pass the scratch requests. If multiple tape libraries in the DLm filing structure are used to contain the DLm virtual volumes, you must select a tape device address associated with the library in the DLm filing structure that contains the volumes to be scratched. The DEV=xxxx parameter allows an offline tape device to be used instead of coding the DLMCTRL DD statement; for example, see RUNSCRB in the sample JCL library.

DLMSCR report output messages

Note: All messages are preceded by: mm/dd/yyyy hh:mm:ss VOLUME xxxxxx.

Table 11  DLMSCR report output messages (part 1 of 2)

0x01  REQUEST REJECT - INVALID LOAD/DSPLY
      Invalid data length (must be 17, 23, or 40 bytes).

0x02  REQUEST REJECT - ALREADY A SCRATCH
      Volume already scratched.

0x05  REQUEST REJECT - INVALID VOLSER
      Invalid VOLSER specified - the input volume serial number does not conform to the standard volume naming convention. Check the input TMS report.

0x06  REQUEST REJECT - VOLUME IN USE
      Volume in use on the same or a different VTE, or there is a possibility of a stale lock.

0x07  REQUEST REJECT - VOLUME NOT FOUND
      Volume not found in the filesystem; make sure the input tape device number points to the correct tape library (that is, /tapelibxxx).

0x08  REQUEST REJECT - I/O ERROR
      An I/O error occurred during the scratch process; refer to the btilog for additional information.

0x09  REQUEST REJECT - VOLSER LOCKED
      File is locked - the volume might be in use by another tape drive on this VTE or another VTE, or there is a possibility of a stale lock.

0x0A  REQUEST REJECT - DIRECTORY PROBLEM A
      Tape library directory is not accessible - verify that the tape unit selected for the utility points to the correct tape library.

Table 11  DLMSCR report output messages (part 2 of 2)

0x0B  REQUEST REJECT - DIRECTORY PROBLEM B
      Tape library directory is not writeable - verify that the tape library is marked for read/write.

0x0C  REQUEST REJECT - INVAL/MISSING VOL1
      Invalid or missing VOL1 label in the volume - using AWSPRINT, verify that the volume's VOL1 record has not been overwritten.

0x0D  REQUEST REJECT - VOLSER MISMATCH
      The volume serial number from the TMS report and the VOL1 header do not match; the VOL1 header might have been overwritten.

0x0E  REQUEST REJECT - INVAL/MISSING HDR1
      Invalid or missing HDR1 label in the volume - the HDR1 record is not in the correct format (overlaid because of an error); use the AWSPRINT utility to determine the error.

0x0F  REQUEST REJECT - MISMATCHING DSNAME
      Dataset name mismatch - the last 17 characters of the dsname from the input report do not match the HDR1 name on the volume. This can be overridden using NODSNCHK.

0x10  REQUEST REJECT - INVALID DATE PASSED
      The TMS input report date does not match the execution date of DLMSCR.

0x11  REQUEST REJECT - CREATE DATE=TODAY
      Date mismatch - unless overridden using NODATECHK, DLMSCR defaults to not scratching a volume on the day it was created.

0x12  REQUEST REJECT - FILE NOT WRITEABLE
      File not writable - the filesystem directory is probably marked read-only. This might be the target site.

Note: The hex codes listed are the error codes that the VTE returns to DLMSCR when DLMSCR requests an action on a volume.

Working with the DLm scratch utility

Note these considerations when working with the DLm scratch utility:

- The DLMSCR file must point to the scratch report that you have created using the appropriate TMS utility. Generate the scratch report with a logical record length (LRECL) of 133.
- To avoid any confusion, use a single job to generate a current scratch list file and run the DLMSCR utility against that file. This eliminates the possibility of accidentally running the DLMSCR program against an old scratch report and causing the TMS and DLm to be out of sync.
- DLm does not scratch a volume created on the current day unless NODATECHK is specified.

  Also, the utility does not run against a scratch report that was not created the same day.
- The scratch utility uses the dsname information from the scratch report to verify the volumes being scratched. If the dsname written in the volume header does not match the dsname on the scratch report for that volume, the scratch request is rejected. This check can be overridden with NODSNCHK, although that is not recommended.
- After the DLMSCR utility completes, you can use or reuse the tapes that the utility successfully scratched.

RMM considerations

Observe the following rules when using DLm with RMM:

- Predefine the DLm scratch volumes to RMM. If you have not predefined DLm VOLSERs as scratch in RMM, RMM rejects the new volumes, which results in an unsatisfied mount request on the mainframe. To resolve the unsatisfied mount, define the DLm scratches in RMM, and execute a LOAD command at the appropriate VT console to satisfy the stalled request.
- When defining a new DLm scratch tape to RMM, set the initialize option to no. If you select yes and RMM detects that the volume must be initialized (or EDGINERS is run), RMM sends a request to mount a 'blank' VOLSER on a DLm device. DLm cannot automatically ready the volume because it cannot recognize which volume to mount. Consequently, you must use the LOAD command at the VT console to manually mount each volume being initialized.
- DLMSCR processes two types of RMM scratch reports: the scratch report that EDGRPTD creates, and the scratch report that EDGJRPT creates using the EDGRRPTE exec (EDGRPT01). Use the DATEFORM(I) parameter when running EDGRPTD to create scratch reports, to ensure the expected date format is used. When the REXX exec form is used, DLMSCR may not accept a user-tailored version of EDGRRPTE.

TMS considerations

DLMSCR expects Report-05, Report-06, or Report-87 to be used.

TLMS considerations

DLMSCR expects either the TLMS003 or the TLMS043 report as input.

TSM considerations

DLMSCR expects a Tivoli Storage Manager (TSM) Volume History Report to be used as input to the DLMSCR DD.

ZARA considerations

DLMSCR expects the LIST SCRATCH type of scratch report to be used as input from ZARA.

CA-1 considerations

Although there are various reports supported by TMS (CA-1), DLMSCR expects Report-05, Report-06, or Report-87 to be used. The report generation parameters should request the field DSN17 instead of the default DSN (see PRIMUS EMC). Otherwise, the report for multi-volume, multi-file tapes will have the incorrect DSN for all but the first VOLSER. Volumes with an incorrect DSN will fail the DSN validity check performed by DLMSCR before scratching a tape.

Unique to CA-1 TMS environments, DLMSCR supports the following two additional run-time parameters:

USETMC - When this parameter is specified, DLMSCR directly reads the CA-1 Tape Management Catalog (TMC) to find DLm-resident VOLSERs that have been scratched. A separate execution of the CA-1 scratch report utility (EARL) is not required. Use of the USETMC option requires that the DLMSCR DD JCL statement point directly at the TMC (or a copy of the TMC). DLMSCR scans the TMC and sends the appropriate scratch requests to DLm for processing.

Note: When using the USETMC option, DLMSCR sends a scratch request for any scratch volume (those that pass prefix filtering) it finds in the TMC. This might result in a large number of DLm500I messages followed by DLm524W messages being issued to the DLm VTE btilog whenever DLMSCR is run. This is normal. The DLm500I message indicates that the VTE application has received a request to scratch a VOLSER. The DLm524W message indicates that the VOLSER was already scratched.

SYNC - This parameter is valid only if specified along with USETMC. The SYNC option causes DLMSCR to synchronize the Tape Control Data Base (TCDB) and the DLm library with the CA-1 TMC. The status of the VOLSERs in the TCDB and in the DLm library is changed from active to scratch, or from scratch to active, as required to match the status of the CA-1 Tape Management Catalog (TMC).

Note: When using the SYNC option, DLMSCR sends an unscratch request for any active volume it finds in the TMC. This may result in a large number of DLm500I messages followed by DLm524W messages being issued to the VTE btilog whenever DLMSCR is run. This is normal. The DLm500I message indicates that the VTE application has received a request to unscratch a VOLSER. The DLm524W message indicates that the VOLSER was already unscratched.

TMS users who use Scratch Pool Management and need to limit the eligible scratch volumes to a limited range of VOLSERs must install the TMS usermod CL05219 (CTSMSGEX exit). When this exit is linked into IGX00030, an IPL with CLPA is required to activate it. The exit causes the first 8 characters of the scratch pool name to be placed into the Load_Display mount message that is sent to the tape drive. This pool name can be defined as a scratch synonym so that the VTE application software can restrict the eligible scratch volumes to a specific prefix group.

DLMCMD utility program

The DLMCMD utility allows you to execute DLm commands from the mainframe. You must link this utility as an authorized program into an authorized library under the name DLMCMD. EMC highly recommends that you use security software, such as RACF, to restrict the use of DLMCMD to authorized users only. EMC provides an example of the JCL required to run DLMCMD. The sample JCL is found in the RUNCMDA and RUNCMDB members of DLMZOS.JCL.CNTL; RUNCMDB illustrates the use of the DEV= parameter. Step 4 on page 175 provides download instructions.

How the DLm command utility works: the DLMCMD utility reads one or more DLm commands from the DLMCMD input file and sends each command to DLm for execution.

Note: The DLMCMD utility accepts input cards up to 256 characters in length. Continuation lines are not allowed.

Indication of success or failure is logged to the file that the DLMLOG DD statement points to.

Note: Any messages and other textual results of the command that display on the DLm Console are not returned to the host. DLMCMD does not respond to a mainframe command on the communication tape device until the VTE processing for that command is complete.

Use the DLMCTRL file to allocate a DLm device for use as a control path for passing the DLm commands. You can use any available DLm virtual tape device as the DLMCTRL device. "MTL considerations for VTE drive selection" on page 167 provides information about working with a Manual Tape Library. The DEV=xxxx parameter allows an offline tape device to be used instead of coding the DLMCTRL DD statement; see RUNCMDB in the sample JCL library for an example.

The DLMCMD DD statement should point to a list of DLm commands to be sent. The LRECL of DLMCMD cannot exceed 256. If possible, create it using the NONUM ISPF edit option to avoid sequence numbers at the end of the command lines. This can optionally be an in-stream input file (DLMCMD DD *) of commands. An illustrative job follows.

The DLMLOG DD statement points to a sequential file for logging the DLMCMD results. This file should have a logical record length (LRECL) of 133. If an error occurs during command processing, the failure is recorded in the log file, and DLMCMD ends with a non-zero return code.
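For example, a minimal command job using an in-stream DLMCMD input file might look like the following sketch; the load library and dataset names are hypothetical placeholders, and the DLMCTRL pattern matches the PARM='WTOR' example below:

//CMDS     EXEC PGM=DLMCMD
//STEPLIB  DD DSN=USER.LOADLIB,DISP=SHR
//DLMLOG   DD DSN=DLM.LOGFILE,DISP=OLD
//DLMCTRL  DD DSN=DLM.CTRL,UNIT=3590,VOL=SER=BT9999,DISP=(,KEEP)
//DLMCMD   DD *
Q SPACE
QUERY CONFIG
/*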

Table 12 lists the possible error codes from DLMCMD.

Table 12  Error codes from DLMCMD

0x01
Invalid data length (must be between 1 and 256 bytes).

0x02
DLm does not accept host-initiated console commands.
Note: This error code is generated when the HOSTCOMMAND option is set to NO in the xmap file. To enable host commands, you must manually modify the xmap file.

0xFF (-1)
A general syntax error occurred. (The DLm console error message "DLM891E: Invalid command syntax" was displayed.)

0xFC (-4)
An "E" level error other than a general syntax error occurred. (A console error message other than DLM891E was displayed.)

This is a sample DLMLOG output:

DLMCMD VER 1.0 DLMCTRL = EA80
2004/09/10 12:47:49 CMD ERR=FF: this is an invalid command
2004/09/10 12:47:49 CMD ISSUED: q all
2004/09/10 12:47:49 CMD ERR=FC: q xxxxxxxx
2004/09/10 12:47:49 CMD ISSUED: set size=2g dev=ea80

The two optional methods to pass commands to DLMCMD are:

1. Use of PARM='WTOR' - Sends the message DLC070I, ENTER COMMAND, to the operator, who can reply with the command. The message is reissued after each command is accepted, until END is entered as the reply. This method does not use the DLMCMD input file. For example:

//LOG      EXEC PGM=DLMCMD,PARM='WTOR'
//DLMLOG   DD DSN=DLM.LOGFILE,DISP=OLD
//DLMCTRL  DD DSN=DLM.CTRL,UNIT=3590,VOL=SER=BT9999,
//            DISP=(,KEEP)

2. Use of PARM='CMD=' - Allows you to pass a single command on the EXEC card instead of using the DLMCMD input file. This method also allows you to call DLMCMD from another program and pass the command as an entry parameter. For example:

//LOG      EXEC PGM=DLMCMD,PARM='CMD=Q SPACE'

//DLMLOG   DD DSN=DLM.LOGFILE,DISP=OLD
//DLMCTRL  DD DSN=DLM.CTRL,UNIT=3590,VOL=SER=BT9999,
//            DISP=(,KEEP)

Note: If you experience issues with DLMCMD, check the /var/log/messages file for error messages.

DLMVER utility program

DLMVER is a DLm utility that produces a report of the DLm load modules that run on the z/OS platform. The DLMVER utility reports the versions of the DLm mainframe modules on the mainframe (DLMCMD, DLMLIB, DLMSCR, DLMVER, and GENSTATS) and the z/OS release.

Sample JCL

The following are a sample JCL stream for the DLMVER utility, sample output, and notes on its processing.

DLMVER sample JCL:

< USER JOBCARD >
//*
//* SAMPLE DLMVER JCL: PRINT DLM MODULE VERSIONS
//*
//* REPLACE:
//*   User.Loadlib
//*   - WITH THE NAME OF THE LOADLIB CREATED
//*     DURING THE INSTALLATION OF THE DLM UTILITIES
//*
//* REPORT FORMATTING STMTS
//*   - SPECIFY PARM='WTO' to instruct DLMVER to issue WTOs instead
//*     of writing to the DLMLOG file.
//*

//* NOTE: If no STEPLIB but JOBLIB is present,
//*       DLMVER will print DLm module versions from JOBLIB.
//*       If no STEPLIB or JOBLIB is present,
//*       DLMVER will print DLm module versions from the library
//*       in the linklist that DLMVER was loaded from.
//*
//**********************************************************************
//* DLM V2.0 - January, 2012
//**********************************************************************
//*
//* EXECUTE THE DLMVER PROGRAM TO PRINT DLM MODULE VERSIONS
//*
//DLMVER   EXEC PGM=DLMVER
//STEPLIB  DD DSN=User.Loadlib,DISP=SHR
//DLMLOG   DD SYSOUT=*

This JCL example invokes the DLMVER utility to report the versions of the DLm load modules stored in User.Loadlib. You can modify the STEPLIB to point to the installed DLm load library.

DLMVER sample JCL 2:

//S1 EXEC PGM=DLMVER,PARM='WTO'

Note: DLMVER reports on the DLm load modules present in any active LNKLST regardless of whether a STEPLIB or JOBLIB is present in the JCL.

Sample output from DLMVER:

DLMVER VER 1.00 PARM
DLV010I UTILITY VERSIONS (Z/OS R12):
DLMCMD   V 4.06
DLMLIB   V 4.03
DLMSCR   V 4.19
DLMVER   V 1.00
GENSTATS V 1.15

DLMVER messages

The messages related to DLMVER are:

DLV010I UTILITY VERSIONS ( ):
DLV050I LOG FILE FAILED TO OPEN

"DLMVER messages" on page 440 provides the details.

Initial program load from a DLm virtual tape

Since DLm virtual tape drives appear to the host as real tape drives, it is possible to perform an initial program load (IPL) on a mainframe host from a virtual tape volume on DLm.

Create a stand-alone IPL tape on DLm

To create a stand-alone IPL tape:

1. On the DLm Console, initialize a non-labeled tape on DLm. For example:

   init vol=saipl label=nl dev=e980 scratch=no

   This example creates a non-labeled tape called SAIPL in the tape library assigned to the virtual tape drive named E980. You may use any VOLSER of your choice. Replace E980 with the name of a virtual tape drive configured on your DLm. Specify the scratch=no parameter so that no scratch tape mount request can acquire the volume before you are ready to use it.

2. On the DLm Console, manually mount this tape on any virtual tape drive assigned to the tape library where you initialized your stand-alone IPL tape volume:

   load SAIPL E980

   This command mounts the virtual tape volume SAIPL on the DLm virtual tape drive E980. In your scenario, replace E980 with the name of a virtual tape drive configured on your DLm. It can be any DLm virtual tape drive that is assigned to the tape library where the stand-alone IPL tape volume resides.

3. From the mainframe, write the stand-alone IPL tape to the virtual tape drive where the target tape is mounted. Explicitly specify the VOLSER you mounted in the previous step.

Once the stand-alone IPL tape has been created, it is ready to use.

IPL from the stand-alone IPL tape

On the DLm Console, manually mount the stand-alone IPL tape on any virtual tape drive assigned to the tape library where the tape resides:

load SAIPL E980

This command mounts the virtual tape volume SAIPL on the DLm virtual tape drive E980. In your scenario, replace E980 with the name of a virtual tape drive configured on your DLm. It can be any DLm virtual tape drive that is assigned to the tape library where the stand-alone IPL tape volume resides.

On the mainframe console, select as the IPL device the DLm virtual tape drive where the stand-alone IPL tape is mounted, and perform the IPL. The mainframe performs the IPL from the stand-alone IPL tape mounted on DLm.

IPL considerations for DLm

The considerations for performing an IPL on a mainframe host from DLm are:

- Stand-alone restore programs might not send a Load Display mount message, which would cause DLm to automatically mount the desired volume. If you use a stand-alone program to restore volumes that reside on the DLm system, you might have to perform a manual LOAD command on DLm for each of the volumes requested.
- If you need to IPL a second time from the stand-alone IPL tape, first make sure that the tape is rewound to loadpoint. To do this, enter the Unready and Rewind commands at the VT console.
- Tapes containing stand-alone programs typically are not automatically unloaded. You may need to manually execute the Unready and Unload commands at the DLm console to unload the stand-alone IPL tape when you are done.

CHAPTER 8
Using DLm with Unisys

DLm systems with FICON interfaces installed can connect to Unisys 2200 mainframes running OS 2200. This chapter discusses issues unique to DLm support for Unisys mainframes:

- Unique DLm operations for Unisys mainframes
- Configuring for Unisys
- Initializing tapes for Unisys
- Configuring the mainframe for DLm

Unique DLm operations for Unisys mainframes

This section describes the unique DLm operations required for Unisys mainframe systems.

Autodetection

DLm automatically detects that it is attached to a Unisys host when it receives a Load Display command containing data that is unique to a Unisys mainframe. When this occurs, a message is displayed on the DLm console ("DLm080I: Device devicename UNISYS detected"). You can confirm that DLm has recognized that a drive is attached to a Unisys mainframe by reviewing the messages displayed on the DLm console or by running a QUERY CONFIG command.

Load displays

Unisys does not send the 'M' mount message sent by z/OS mainframe systems. DLm determines a Unisys mount request by the FCB byte containing x'48', and then moves the VOLSER from the first position into the second position of the mount message and inserts an 'M' into the first position to form a standard mount message.

Mount "Ready" interrupt

The Unisys mainframe does not expect a Not-Ready-to-Ready interrupt when the DLm device comes ready. After sending the Load Display, the Unisys mainframe performs repetitive senses to detect when the device is ready. To accommodate this behavior, DLm does not send an interrupt when a mount is initiated by a Load Display, as it normally would. However, it does send an interrupt when a manual mount is performed at the DLm console, and when a manual Not-Ready-to-Ready transition is performed.

Query Config command

The DLm QUERY CONFIG command displays an additional parameter, HOST=UNISYS, for a device that has been determined to be attached to a Unisys mainframe.

Ring-Out mount request

The Unisys Load Display mount request uses the 8th position of the mount message as a file-protect indicator. If that position contains the character 'F', the Unisys mainframe expects to have the tape mounted "ring-out" (read-only). DLm honors the 'F' indicator and mounts the requested volume in read-only mode.

Scratch request

When a Unisys host asks for a scratch tape, DLm ignores the label type (either explicitly requested in the mount request or implied by the LABEL=x configuration parameter) and picks any available scratch tape. This behavior applies only to Unisys-attached devices. All non-Unisys devices continue to honor label type for scratch mount requests.

Configuring for Unisys

Device type

When configuring devices for use by a Unisys mainframe, the device type should be set to '3490'.

Labels

When the Unisys operating system sends a Load Display mount message, it does not specify a label type. Unisys always expects an ANSI label by default. To accommodate this, you must configure each Unisys-attached device with the LABEL=A parameter. This changes the DLm default for the device to ANSI labels instead of IBM standard labels. Figure 38 shows a sample device definition screen where sixteen tape drives are being defined, including the LABEL=A parameter.

Scratch tapes

The Unisys operating system does not send the "MSCRTCH" message to request a scratch tape as an IBM mainframe would. Instead, it sends an L-BLNK command. To accommodate the L-BLNK command, you must specify a scratch synonym equal to L-BLNK. Figure 38 shows a properly configured scratch synonym for Unisys mainframes.

Figure 38  Unisys Device Panel

Initializing tapes for Unisys

When initializing tape volumes to be used with Unisys, you must include the LABEL=A option on the INITIALIZE command to tell the system that the tape labels follow the ANSI standard. For example, to initialize 100 tapes beginning with VOLSER B00000 using tape drive E980, enter the following command:

INITIALIZE VOL=B00000 DEV=E980 COUNT=100 LABEL=A

Configuring the mainframe for DLm

DLm devices are configured in OS2200 using SCMS / SCMS-II as one or more CTS5136-VSM (non-library) subsystems of 1 to 16 units. The resulting ODB or .ptn file must be installed and the OS rebooted with the proper definitions. The Unisys equipment code for DLm devices is U47M.


CHAPTER 9
z/OS Console Support

This chapter discusses DLm support for the z/OS console:

- z/OS Console operation
- DLMHOST
- Using z/OS Console support

z/OS Console operation

DLm provides an optional z/OS utility that can be used to communicate between a single logical partition's (LPAR) operator console and the DLm. To make use of DLm z/OS Console operation, you must install the z/OS DLMHOST utility and then configure the individual VTEs to communicate with it. Using the DLm Configuration program, you can configure which types of messages (informational, warning, or error) and/or which specific DLm messages are sent over the channel to the mainframe.

DLMHOST

DLMHOST is a host utility that provides z/OS Console Operation support. The DLMHOST utility runs as a started task and accepts commands from the operator. By default, DLMHOST uses Write-to-Operator-with-Reply (WTOR) capabilities for sending DLm commands. Optionally, you may configure DLMHOST to use the z/OS MODIFY function in place of WTOR.

At startup, DLMHOST reads a configuration file that defines the VTEs to be supported, as well as the device addresses, per VTE, to be used for communication and logging. Each DLm VTE is identified with a unique name so that commands can be targeted to specific VTEs. A tape drive device address must be selected from each VTE's range of addresses to be used as the command/communication path. A second device address is required on each VTE if you want DLm to send log messages to the z/OS console. These devices are not eligible for allocation once DLMHOST has been started.

Only log messages that have passed message filtering are received by the host. Note that, depending upon the filtering options set on the VTEs, there may be many log messages sent to the consoles. Optionally, DLMHOST supports a configuration option to send the messages to a host file instead of the operator's console.

DLMHOST is supported in only a single logical partition (LPAR). You cannot connect multiple DLMHOST tasks running in multiple LPARs to the same DLm VTE.

Installing DLMHOST

DLMHOST is distributed in the 3.0 DLMZOS.XMI package, which is available on the EMC support website. "Downloading and using the DLm utilities and JCL for z/OS" on page 174 provides more details.

The DLMHOST utility must be linked as an authorized program into an authorized library under the name DLMHOST. It is highly recommended that RACF be used to restrict the use of DLMHOST to authorized users only. An example of the JCL required to link DLMHOST follows:

//L        EXEC PGM=HEWL,PARM='MAP,LET,LIST,NCAL,AC=1'
//SYSLMOD  DD DSN=USER.LIBRARY,DISP=SHR
//SYSUT1   DD DSN=&&SYSUT1,SPACE=(1024,(120,120),,,ROUND),
//            UNIT=SYSALLDA,DCB=BUFNO=1
//SYSPRINT DD SYSOUT=*
//DDIN     DD DSN=DLM.MODULE,DISP=SHR
//SYSLIN   DD *
  INCLUDE DDIN(DLMHOST)
  NAME DLMHOST(R)
/*

Running DLMHOST

The following JCL is used to execute DLMHOST:

//DLMSTEP  EXEC PGM=DLMHOST,PARM='parameters'
//DLMCFG   DD DSN=PARMLIB(nodecfg),DISP=SHR
//DLMLOG   DD DSN=logfilename,DISP=SHR
//* THE FORMAT OF THE CONFIG FILE IS AS FOLLOWS:
//*   Col 1-10   Nodename
//*   Col 12-15  Command path device address
//*   Col 17-20  Log path device address
//*   Col 22-29  Console name

The parameters that can be specified are:

DOCMDS - Requires the use of a DLMCMD DD card pointing to a file of commands that are to be processed during DLMHOST startup. The commands should be in the same format as used in MODIFY or WTOR processing.

Note: EOJ can be specified as the last command to terminate DLMHOST after a series of commands.

NOLOG - Prevents DLMHOST from receiving continuous log data from any VTE. Set this parameter if you plan to use DLMHOST only to send commands from the z/OS console to the DLm. Command responses are returned even when NOLOG is specified.

NOWTOR - Prevents DLMHOST from issuing the normally outstanding WTOR. When this parameter is specified, DLm commands can be issued using the z/OS MODIFY command as the method of communication in place of WTOR.

LOGFILE - Causes any received log data from the DLm system to be recorded in the file pointed to by the DLMLOG DD card. When LOGFILE is specified, the log messages are not sent to any console via WTO. If LOGFILE is not specified, the DLMLOG DD card is not required in the JCL. The LOGFILE dataset should be an FB LRECL 133 file, and it is opened for extend each time the task is started.

EZSM - Causes z/OS console messages that are replicated from the DLm to be prefixed with 'DLM' followed by a severity character ('E' for error, 'W' for warning, and 'I' for informational messages). This parameter allows an installation to use EzSM DLm Message Alerts that have wildcards for DLME, DLMW, and DLMI for handling these types of messages.

DLMHOST configuration file

EMC provides sample JCL to run DLMHOST. Step 4 of "Downloading and using the DLm utilities and JCL for z/OS" on page 174 provides instructions to download the sample JCL. The sample JCL xmit file includes a sample PROC member to run the DLMHOST utility. This proc must be customized to point to the APF-authorized load library in which DLMHOST has been installed. Also, one or more configuration statements must be completed for the DLMCFG DD. The customized proc should be placed into a PROCLIB that is searched when the START DLMHOST command is issued from the z/OS console.

The configuration file pointed to by the DLMCFG DD card should be an FB LRECL 80 file that has a single record for each VTE to be supported. You can define up to 64 records. Comment cards can be included in the input configuration file by placing an asterisk in column 1. The layout of the configuration file records is as follows:

Col 1-10: NODENAME - The name used by the mainframe operator to identify which VTE to communicate with.

Col 12-15: CMDDEV - The 4-digit device address of the tape drive to be used for operator commands and responses. If this field is left blank, no operator commands can be sent to this node name.

Col 17-20: LOGDEV - The 4-digit device address of the tape drive to be used for logging activity whenever logging is active for this VTE. If left blank, no host logging can occur from the VTE.

Col 22-29: CONSNAME - The console that log messages should be directed to if logging is active for this VTE. If this field is left blank, the log messages go to all ROUTCDE=5 (tape library) consoles.

The following is a sample configuration file for a 3-VTE configuration supporting both messaging and commands:

VTLNODE1   038E 038F TAPECON1
VTLNODE2   039E 039F TAPECON1
VTLNODE3   03AE 03AF TAPECON1

Using z/OS Console support

If DLMHOST is active and configured to receive DLm messages, it automatically forwards any message received to the appropriate console or log file. When DLMHOST is executed without the NOWTOR parameter, the following message is displayed on the z/OS console:

DLM001I ENTER COMMAND, EOJ, OR ? FOR HELP

An outstanding Write-to-Operator-with-Reply (WTOR) message remains pending. To send a command to DLMHOST, you need to know the message reply number from the WTOR. To determine the WTOR message number, enter the following z/OS command on the operator's console:

d r,l   (or /d r,l from SDSF)

This command returns the reply message number for any outstanding WTORs on the system.

To issue a command to DLMHOST, enter the command using the WTOR message number in the following format:

msg#,command

where:
msg# is the reply message number returned from the d r,l command.
command is the DLMHOST command to be executed.

When DLMHOST is executed with the NOWTOR parameter, the following message is returned:

DLM002I jobname USE MODIFY TO ENTER COMMAND, EOJ, OR ? FOR HELP

Subsequently, the z/OS MODIFY command can be used to issue commands to DLMHOST using the jobname indicated in the DLM002I message. The format of the z/OS MODIFY command is:

F jobname,command

where:
jobname is the job name of DLMHOST reported in the DLM002I message.
command is the DLMHOST command to be executed.

DLMHOST commands

The following commands are recognized by DLMHOST:

CMD - Sends a DLm command to a specific VTE. This command requires that a node name also be specified by using the NODE= parameter (or N=). A node name of ALL can be specified to send the command to every VTE. All DLm operator commands can be entered as parameters to this command. The following are examples of valid use of this command:

CMD=Q SPACE,NODE=NODE1
CMD=FIND VOLUME ,N=N1

STOPLOG - Requests that DLMHOST stop logging VTE log messages for a specific VTE. This command requires that a node name be specified by using the NODE= parameter (or N=). A node name of ALL can be specified to stop host logging for all defined VTEs. For example:

STOPLOG,N=ALL
STOPLOG,N=VTLNODE1

STARTLOG - Requests that DLMHOST start host logging of VTE log data for a specific VTE. This command requires that a node name be specified by using the NODE= parameter (or N=). A node name of ALL can be specified to start logging for all defined VTEs. For example:

STARTLOG,N=NODE2
STARTLOG,NODE=ALL

STATUS - Requests that DLMHOST display the current configuration and status of the command and logging functions. DLMHOST issues a header line followed by the status of each configured node. A Y or N next to a device address indicates whether the command or logging function is currently active or inactive for that node name. For example:

DLM2401 NODENAME CMDDEV LOGDEV CONSNAME
        NODE1    038E Y 038F Y CON1
        NODE2    048E N 048F Y

EOJ - Terminates the DLMHOST task.

HELP or ? - Returns the DLM000I message with a list of the valid DLMHOST commands.

WTOR command examples

The commands that DLM000I lists are:

STARTLOG,N=nodename/ALL
STOPLOG,N=nodename/ALL
C=command,N=nodename/ALL
STATUS

When DLMHOST has been executed without the NOWTOR parameter, an outstanding WTOR message reply is used to send commands to DLMHOST. The following are valid examples of DLMHOST commands:

msg#,STATUS
msg#,C=Q SPACE,N=N1
msg#,STOPLOG,N=ALL

where msg# is the message number returned from d r,l (/d r,l from SDSF).

The following are valid examples of the same DLMHOST commands when DLMHOST has been executed with the NOWTOR parameter, using the job name DLMHOST:

F DLMHOST,STATUS
F DLMHOST,C=Q SPACE,N=N1
F DLMHOST,STOPLOG,N=ALL
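Putting these pieces together, a complete WTOR interaction at the console might look like the following sketch. The reply number 12 and node name N1 are hypothetical, and the display output is abbreviated:

d r,l
 ...
 12 DLM001I ENTER COMMAND, EOJ, OR ? FOR HELP
r 12,C=Q SPACE,N=N1
 (the command response is returned, and DLM001I is reissued)
r 12,EOJ
 (DLMHOST terminates)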

CHAPTER 10
Data Encryption

This chapter discusses using data encryption:

- Overview
- How RSA Key Manager works with DLm
- Configure encryption on DLm

Overview

EMC Disk Library for mainframe (DLm) includes a data encryption module that allows you to encrypt data using AES-256 encryption as the data is being written. Emulated tape drives in DLm can be configured to perform data encryption by assigning an encryption key class to the drive. When an encryption key class is assigned, the drive automatically encrypts any tape volume the mainframe writes to the drive. Both the tape header labels and the data blocks are encrypted.

Mixed volumes in the same VTE

Data encryption is configured on a drive-by-drive basis. You can configure one or more drives to perform data encryption. Each time a tape volume is written to one of those drives, the data is automatically encrypted before it is written to the backend storage. If you want some tape volumes in the DLm to be encrypted and other tape volumes to be written unencrypted, configure your DLm with some devices supporting encryption and others without encryption.

Figure 39  Configuring for encryption (one VTE with drives 00-0F on V3480 configured Encryption=No and drives 10-1F on V3481 configured Encryption=Yes, writing a mix of encrypted and unencrypted volumes, such as B00000 through B00003, to the same backend)

The Virtual Tape Emulator (VTE) shown in Figure 39 is configured with a single library as a backend.


DELL EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS Data Domain Systems Table 1. DELL EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS Dell EMC Data Domain deduplication storage systems continue to revolutionize disk backup, archiving, and disaster recovery

More information

IBM Spectrum Protect Version Introduction to Data Protection Solutions IBM

IBM Spectrum Protect Version Introduction to Data Protection Solutions IBM IBM Spectrum Protect Version 8.1.2 Introduction to Data Protection Solutions IBM IBM Spectrum Protect Version 8.1.2 Introduction to Data Protection Solutions IBM Note: Before you use this information

More information

BACKUP AND RECOVERY FOR ORACLE DATABASE 11g WITH EMC DEDUPLICATION A Detailed Review

BACKUP AND RECOVERY FOR ORACLE DATABASE 11g WITH EMC DEDUPLICATION A Detailed Review White Paper BACKUP AND RECOVERY FOR ORACLE DATABASE 11g WITH EMC DEDUPLICATION EMC GLOBAL SOLUTIONS Abstract This white paper provides guidelines for the use of EMC Data Domain deduplication for Oracle

More information

Dell EMC Avamar Virtual Edition for VMware

Dell EMC Avamar Virtual Edition for VMware Dell EMC Avamar Virtual Edition for VMware Version 7.5.1 Installation and Upgrade Guide 302-004-301 REV 01 Copyright 2001-2018 Dell Inc. or its subsidiaries. All rights reserved. Published February 2018

More information

EMC VMAX Best Practices Guide for AC Power Connections

EMC VMAX Best Practices Guide for AC Power Connections EMC VMAX Best Practices Guide for AC Power Connections For: VMAX3 Family and VMAX All Flash REVISI 06 Copyright 2014-2017 Dell Inc. or its subsidiaries. All rights reserved. Published May 2017 Dell believes

More information

DELL EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS

DELL EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS DELL EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS Dell EMC Data Domain deduplication storage systems continue to revolutionize disk backup, archiving, and disaster recovery with high-speed, inline deduplication.

More information

EMC Avamar Data Store Gen4S

EMC Avamar Data Store Gen4S EMC Avamar Data Store Gen4S Customer Service Guide 300-999-650 REV 04 Copyright 2012-2016 EMC Corporation. All rights reserved. Published in the USA. Published May, 2016 EMC believes the information in

More information

EMC VSI for VMware vsphere : Path Management Version 5.3

EMC VSI for VMware vsphere : Path Management Version 5.3 EMC VSI for VMware vsphere : Path Management Version 5.3 Product Guide P/N 300-013-068 REV 03 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2012

More information

EMC Celerra NS20. EMC Solutions for Microsoft Exchange Reference Architecture

EMC Celerra NS20. EMC Solutions for Microsoft Exchange Reference Architecture EMC Solutions for Microsoft Exchange 2007 EMC Celerra NS20 EMC NAS Product Validation Corporate Headquarters Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2008 EMC Corporation. All rights

More information

EMC DiskXtender Release 6.5 SP8

EMC DiskXtender Release 6.5 SP8 EMC DiskXtender Release 6.5 SP8 Microsoft Windows Version Administration Guide P/N 302-002-314 REV 01 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright

More information

EMC CLARiiON CX3 Series FCP

EMC CLARiiON CX3 Series FCP EMC Solutions for Microsoft SQL Server 2005 on Windows 2008 EMC CLARiiON CX3 Series FCP EMC Global Solutions 42 South Street Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com www.emc.com Copyright 2008

More information

IBM Tivoli Storage Manager Version Introduction to Data Protection Solutions IBM

IBM Tivoli Storage Manager Version Introduction to Data Protection Solutions IBM IBM Tivoli Storage Manager Version 7.1.6 Introduction to Data Protection Solutions IBM IBM Tivoli Storage Manager Version 7.1.6 Introduction to Data Protection Solutions IBM Note: Before you use this

More information

Hardware Installation Guide Installation (x3350)

Hardware Installation Guide Installation (x3350) Title page Nortel Application Gateway 2000 Nortel Application Gateway Release 6.3 Hardware Installation Guide Installation (x3350) Document Number: NN42400-300 Document Release: Standard 04.03 Date: January

More information

EMC DATA DOMAIN PRODUCT OvERvIEW

EMC DATA DOMAIN PRODUCT OvERvIEW EMC DATA DOMAIN PRODUCT OvERvIEW Deduplication storage for next-generation backup and archive Essentials Scalable Deduplication Fast, inline deduplication Provides up to 65 PBs of logical storage for long-term

More information

EMC SAN Copy Command Line Interfaces

EMC SAN Copy Command Line Interfaces EMC SAN Copy Command Line Interfaces REFERENCE P/N 069001189 REV A13 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2006-2008 EMC Corporation. All

More information

Drive and FLARE OE Matrix

Drive and FLARE OE Matrix EMC CX4 Series Storage Systems and FLARE OE Matrix P/ N 300-007-437 To function properly, drives in an EMC CLARiiON system require that each storage processor run minimum s of the FLARE Operating Environment

More information

EMC DiskXtender for NAS Release 3.1

EMC DiskXtender for NAS Release 3.1 EMC DiskXtender for NAS Release 3.1 Theory of Operations P/N 300-005-730 REV A01 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2006-2007 EMC Corporation.

More information

Lot # 10 - Servers. 1. Rack Server. Rack Server Server

Lot # 10 - Servers. 1. Rack Server. Rack Server Server 1. Rack Server Rack Server Server Processor: 1 x Intel Xeon E5 2620v3 (2.4GHz/6 core/15mb/85w) Processor Kit. Upgradable to 2 CPU Chipset: Intel C610 Series Chipset. Intel E5 2600v3 Processor Family. Memory:

More information

Copyright 2010 EMC Corporation. Do not Copy - All Rights Reserved.

Copyright 2010 EMC Corporation. Do not Copy - All Rights Reserved. 1 Using patented high-speed inline deduplication technology, Data Domain systems identify redundant data as they are being stored, creating a storage foot print that is 10X 30X smaller on average than

More information

Dell EMC Avamar for SharePoint VSS

Dell EMC Avamar for SharePoint VSS Dell EMC Avamar for SharePoint VSS Version 18.1 User Guide 302-004-683 REV 01 Copyright 2001-2018 Dell Inc. or its subsidiaries. All rights reserved. Published July 2018 Dell believes the information in

More information

HP Supporting the HP ProLiant Storage Server Product Family.

HP Supporting the HP ProLiant Storage Server Product Family. HP HP0-698 Supporting the HP ProLiant Storage Server Product Family https://killexams.com/pass4sure/exam-detail/hp0-698 QUESTION: 1 What does Volume Shadow Copy provide?. A. backup to disks B. LUN duplication

More information

Configuring NDMP Backups to Disk on VNX

Configuring NDMP Backups to Disk on VNX EMC VNX Series Release 7.0 Configuring NDMP Backups to Disk on VNX P/N 300-011-829 REV A02 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 1998-2011

More information

EMC NetWorker and EMCData Domain Boost Deduplication Devices

EMC NetWorker and EMCData Domain Boost Deduplication Devices EMC NetWorker and EMCData Domain Boost Deduplication Devices Release 8.1 Integration Guide P/N 302-000-553 REV 02 Copyright 2010-2013 EMC Corporation. All rights reserved. Published in the USA. Published

More information

Universal Storage Consistency of DASD and Virtual Tape

Universal Storage Consistency of DASD and Virtual Tape Universal Storage Consistency of DASD and Virtual Tape Jim Erdahl U.S.Bank August, 14, 2013 Session Number 13848 AGENDA Context mainframe tape and DLm Motivation for DLm8000 DLm8000 implementation GDDR

More information

EMC Integrated Infrastructure for VMware. Business Continuity

EMC Integrated Infrastructure for VMware. Business Continuity EMC Integrated Infrastructure for VMware Business Continuity Enabled by EMC Celerra and VMware vcenter Site Recovery Manager Reference Architecture Copyright 2009 EMC Corporation. All rights reserved.

More information

EMC NetWorker Module for Microsoft for Hyper-V VSS

EMC NetWorker Module for Microsoft for Hyper-V VSS EMC NetWorker Module for Microsoft for Hyper-V VSS Release 8.2 User Guide P/N 302-000-653 REV 02 Copyright 2007-2014 EMC Corporation. All rights reserved. Published in the USA. Published September 2014

More information

High Availability and MetroCluster Configuration Guide For 7-Mode

High Availability and MetroCluster Configuration Guide For 7-Mode Updated for 8.2.1 Data ONTAP 8.2 High Availability and MetroCluster Configuration Guide For 7-Mode NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501

More information

IBM System Storage DS5020 Express

IBM System Storage DS5020 Express IBM DS5020 Express Manage growth, complexity, and risk with scalable, high-performance storage Highlights Mixed host interfaces support (FC/iSCSI) enables SAN tiering Balanced performance well-suited for

More information

EMC CLARiiON Server Support Products for Windows INSTALLATION GUIDE P/N REV A05

EMC CLARiiON Server Support Products for Windows INSTALLATION GUIDE P/N REV A05 EMC CLARiiON Server Support Products for Windows INSTALLATION GUIDE P/N 300-002-038 REV A05 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2004-2006

More information

DELL EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS

DELL EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS SPEC SHEET DELL EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS Data Domain Systems Dell EMC Data Domain deduplication storage systems continue to revolutionize disk backup, archiving, and disaster recovery

More information

Reference Architecture

Reference Architecture EMC Solutions for Microsoft SQL Server 2005 on Windows 2008 in VMware ESX Server EMC CLARiiON CX3 Series FCP EMC Global Solutions 42 South Street Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com www.emc.com

More information

DATA PROTECTION IN A ROBO ENVIRONMENT

DATA PROTECTION IN A ROBO ENVIRONMENT Reference Architecture DATA PROTECTION IN A ROBO ENVIRONMENT EMC VNX Series EMC VNXe Series EMC Solutions Group April 2012 Copyright 2012 EMC Corporation. All Rights Reserved. EMC believes the information

More information

Mainframe Backup Modernization Disk Library for mainframe

Mainframe Backup Modernization Disk Library for mainframe Mainframe Backup Modernization Disk Library for mainframe Mainframe is more important than ever itunes Downloads Instagram Photos Twitter Tweets Facebook Likes YouTube Views Google Searches CICS Transactions

More information

IBM TotalStorage Enterprise Tape Library 3494

IBM TotalStorage Enterprise Tape Library 3494 Modular tape automation for multiple computing environments IBM TotalStorage Enterprise Tape Library 3494 A 16-frame IBM TotalStorage Enterprise Tape Library 3494 high availability configuration with two

More information

Dell EMC Unity Family

Dell EMC Unity Family Dell EMC Unity Family Version 4.3 Configuring High Availability H16708 02 Copyright 2017-2018 Dell Inc. or its subsidiaries. All rights reserved. Published January 2018 Dell believes the information in

More information

EMC ViewPoint for SAP (4.6, 4.7) Special Ledger Module ADMINISTRATION MANUAL. Version 2.0 P/N REV A01

EMC ViewPoint for SAP (4.6, 4.7) Special Ledger Module ADMINISTRATION MANUAL. Version 2.0 P/N REV A01 EMC iewpoint for SAP (4.6, 4.7) Special Ledger Module ersion 2.0 ADMINISTRATION MANUAL P/N 300-003-495 RE A01 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com

More information

Restoring an SP Boot Image

Restoring an SP Boot Image AX100-Series Restoring an SP Boot Image Revision A01 June 9, 2004 This document explains how to restore an SP s boot image. Read it when an SP does not start properly and its fault light blinks four times

More information

EMC Ionix Network Configuration Manager Integration Adapter for IT Ops Manager Version 2.1.2

EMC Ionix Network Configuration Manager Integration Adapter for IT Ops Manager Version 2.1.2 EMC Ionix Network Configuration Manager Integration Adapter for IT Ops Manager Version 2.1.2 Installation and Configuration Guide 300-014-093 REV A02 EMC Corporation Corporate Headquarters: Hopkinton,

More information

Manager Appliance Quick Start Guide

Manager Appliance Quick Start Guide Revision D Manager Appliance Quick Start Guide The Manager Appliance runs on a pre-installed, hardened McAfee Linux Operating System (MLOS) and comes pre-loaded with the Network Security Manager software.

More information

DELL EMC UNITY: HIGH AVAILABILITY

DELL EMC UNITY: HIGH AVAILABILITY DELL EMC UNITY: HIGH AVAILABILITY A Detailed Review ABSTRACT This white paper discusses the high availability features on Dell EMC Unity purposebuilt solution. October, 2017 1 WHITE PAPER The information

More information

The World s Fastest Backup Systems

The World s Fastest Backup Systems 3 The World s Fastest Backup Systems Erwin Freisleben BRS Presales Austria 4 EMC Data Domain: Leadership and Innovation A history of industry firsts 2003 2004 2005 2006 2007 2008 2009 2010 2011 First deduplication

More information

EMC Business Continuity for Microsoft Applications

EMC Business Continuity for Microsoft Applications EMC Business Continuity for Microsoft Applications Enabled by EMC Celerra, EMC MirrorView/A, EMC Celerra Replicator, VMware Site Recovery Manager, and VMware vsphere 4 Copyright 2009 EMC Corporation. All

More information

DELL EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS

DELL EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS SPEC SHEET DELL EMC DATA DOMAIN DEDUPLICATION STORAGE SYSTEMS Data Domain Systems Dell EMC Data Domain deduplication storage systems continue to revolutionize disk backup, archiving, and disaster recovery

More information

EMC DiskXtender Release 6.4 Microsoft Windows Version

EMC DiskXtender Release 6.4 Microsoft Windows Version EMC DiskXtender Release 6.4 Microsoft Windows Version Administration Guide P/N 300-007-798 REV A01 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright

More information

DISK LIBRARY FOR MAINFRAME

DISK LIBRARY FOR MAINFRAME DISK LIBRARY FOR MAINFRAME Mainframe Tape Replacement with cloud connectivity ESSENTIALS A Global Virtual Library for all mainframe tape use cases Supports private and public cloud providers. GDDR Technology

More information

Brochure. ION Multi-Service Integration Platform. Integrate. Optimize. Navigate. Transition Networks Brochure.

Brochure. ION Multi-Service Integration Platform. Integrate. Optimize. Navigate. Transition Networks Brochure. Brochure ION Multi-Service Integration Platform Integrate. Optimize. Navigate. Transition Networks Brochure The ION Platform Overview The ION Multi-Service Integration Platform offers first-rate solutions

More information

EMC Data Domain Boost for Oracle Recovery Manager 1.1 Administration Guide

EMC Data Domain Boost for Oracle Recovery Manager 1.1 Administration Guide EMC Data Domain Boost for Oracle Recovery Manager 1.1 Administration Guide Backup Recovery Systems Division Data Domain LLC 2421 Mission College Boulevard, Santa Clara, CA 95054 866-WE-DDUPE; 408-980-4800

More information

Drobo 5N2 User Guide

Drobo 5N2 User Guide Drobo 5N2 User Guide Contents 1 Drobo 5N2 User Guide... 6 1.1 Before You Begin... 7 1.1.1 Product Features at a Glance... 8 1.1.2 Checking Box Contents...10 1.1.3 Checking System Requirements...11 1.1.3.1

More information

Oracle RAC 10g Celerra NS Series NFS

Oracle RAC 10g Celerra NS Series NFS Oracle RAC 10g Celerra NS Series NFS Reference Architecture Guide Revision 1.0 EMC Solutions Practice/EMC NAS Solutions Engineering. EMC Corporation RTP Headquarters RTP, NC 27709 www.emc.com Oracle RAC

More information

EMC Celerra CNS with CLARiiON Storage

EMC Celerra CNS with CLARiiON Storage DATA SHEET EMC Celerra CNS with CLARiiON Storage Reach new heights of availability and scalability with EMC Celerra Clustered Network Server (CNS) and CLARiiON storage Consolidating and sharing information

More information

Power Vault in EMC Symmetrix DMX-3 and Symmetrix DMX-4 Systems

Power Vault in EMC Symmetrix DMX-3 and Symmetrix DMX-4 Systems Power Vault in EMC Symmetrix DMX-3 and Symmetrix DMX-4 Systems Applied Technology Abstract This white paper is an overview of Power Vault operation in EMC Symmetrix DMX-3 and Symmetrix DMX-4 environments.

More information

VTRAK E-Class/J-Class Quick Start Guide

VTRAK E-Class/J-Class Quick Start Guide VTRAK E-Class/J-Class Quick Start Guide Version.0 Firmware 3.9 008 Promise Technology, Inc. All Rights Reserved. VTrak Quick Start Guide About This Guide This Quick Start Guide shows you how to install

More information

Securing VSPEX VMware View 5.1 End- User Computing Solutions with RSA

Securing VSPEX VMware View 5.1 End- User Computing Solutions with RSA Design Guide Securing VSPEX VMware View 5.1 End- User Computing Solutions with RSA VMware vsphere 5.1 for up to 2000 Virtual Desktops EMC VSPEX Abstract This guide describes required components and a configuration

More information

ReadyNAS OS 6 Desktop Storage Systems Hardware Manual

ReadyNAS OS 6 Desktop Storage Systems Hardware Manual ReadyNAS OS 6 Desktop Storage Systems Hardware Manual Model ReadyNAS 102, 104 ReadyNAS 202, 204, 212, 214 ReadyNAS 312, 314, 316 ReadyNAS 422, 424, 426, 428 ReadyNAS 516, 524X, 526X, 528X ReadyNAS 626X,

More information

Dell DL4300 Appliance Release Notes

Dell DL4300 Appliance Release Notes Dell DL4300 Appliance Release Notes Notes, cautions, and warnings NOTE: A NOTE indicates important information that helps you make better use of your product. CAUTION: A CAUTION indicates either potential

More information