FlexFrame for SAP. Version 5.0A. Administration and Operation. Edition December 2011 Document Version 1.7


Fujitsu Limited
Copyright Fujitsu Technology Solutions 2011

FlexFrame and PRIMERGY are trademarks of Fujitsu.
SAP and NetWeaver are trademarks or registered trademarks of SAP AG in Germany and in several other countries.
Linux is a registered trademark of Linus Torvalds.
SUSE Linux is a registered trademark of Novell, Inc., in the United States and other countries.
Java is a trademark of Sun Microsystems, Inc. in the United States and other countries.
Intel and PXE are registered trademarks of Intel Corporation in the United States and other countries.
MaxDB is a registered trademark of MySQL AB, Sweden.
MySQL is a registered trademark of MySQL AB, Sweden.
NetApp, Network Appliance, Open Network Technology for Appliance Products, Write Anywhere File Layout and WAFL are trademarks or registered trademarks of Network Appliance, Inc. in the United States and other countries.
Oracle is a registered trademark of ORACLE Corporation.
EMC, CLARiiON, Symmetrix, PowerPath, Celerra and SnapSure are trademarks or registered trademarks of EMC Corporation in the United States and other countries.
VMware, the VMware boxes logo and design, Virtual SMP and VMotion are registered trademarks or trademarks (the "Marks") of VMware, Inc. in the United States and/or other jurisdictions.
Ethernet is a registered trademark of XEROX, Inc., Digital Equipment Corporation and Intel Corporation.
Windows and Word are registered trademarks of Microsoft Corporation.
All other hardware and software names used are trademarks of their respective companies.

All rights, including rights of translation, reproduction by printing, copying or similar methods, in part or in whole, are reserved. Offenders will be liable for damages. All rights, including rights created by patent grant or registration of a utility model or design, are reserved. Delivery subject to availability. Right of technical modification reserved.

Contents

Introduction
   Requirements
   Notational Conventions
   Document History
   Related Documents
   Special Hints for FlexFrame 5.0A
      Incompatibilities (command scripts)
FlexFrame Architecture
   General Notes on FlexFrame
   Hardware and Software
   Shared Operating System
      Shared OS Boot Concept
      Control Nodes
      Application Nodes
   FlexFrame Structure in LDAP
      Working with LDAP
   Linux-HA Cluster on Control Center
      Terminology
      Simple Resources
         Constraints
         Score
         Stickiness
      Resource Groups
      FlexFrame Specific Configuration
         LDAP Configuration
         Resource ff_manage
         Resource Netboot
         STONITH
      Configuring the Cluster
      Starting Behavior of the Cluster
      Status Information of the Clusters
      Linux-HA CLI Commands
      Linux-HA Logfile
      Linux-HA GUI
   Network
      LAN Failover
      Segments
      Network Switches
      Network Speed
      Network Switch Groups
      Network Switch Ports
      Automounter Concept
   Storage Systems
      NAS Support
         Architectural Overview
         Network Appliance Filer
         EMC NAS (Celerra)
      SAN Support
         Architectural Overview
         SAN Basic Layers
         Scope of the FlexFrame SAN Integration
         Rules and Restrictions
FlexFrame Basic Administration
   Accessing a FlexFrame Landscape (Remote Administration)
   Powering up the FlexFrame Landscape
   Powering off the FlexFrame Landscape
   Reactivating ANs after Power Shutdown by FA Agents
   Displaying the Current FlexFrame Configuration State
   FlexFrame Web Portal
      FA Autonomous Agents
      State of Pools
      State of Application Nodes
      State of SAP Systems
      State of SID Instances
      Networks
      ServerView Operations Manager
      Cluster Status
   FlexFrame Backup with Tape Library
      NetWorker
      ARCserve
Pools and Groups
   Adding a Pool
   Removing a Pool
   Listing Pool Details
   Listing All Pools
   Changing Pool DNS Domain
   Changing Pool Default Router
   Adding a Group to a Pool
   Removing Pool Group
   Changing Group Assignment of Application Nodes
   Changing Group and Pool Assignment of Application Nodes
   Hosts Database
      Script: ff_hosts.sh
User and Groups Administration
   Create, Modify, Delete, or List User(s) for Application Nodes
   Creating, Modifying, Deleting or Listing Group(s) for Application Nodes
   Creating, Modifying, Deleting or Listing Service(s) for Application Nodes
   Pool-independent Spare Nodes
      Creation of Spare Nodes in the ADMINPOOL
      Moving of a Spare Node
      Listfunction for Spare Nodes
      Handling Pool-independent Spare Nodes with FA Agents
Application Nodes Administration
   Listing Application Nodes
      Displaying Information on a Specific Application Node
      Displaying Information on all Application Nodes
   Adding Application Nodes
   Removing Application Nodes
   Renaming Application Nodes
   Moving Application Nodes Between Pools
   Application Nodes and SAN
   Administrating Blade Server Cabinets
      Listing Blade Server Cabinets
      Displaying Information on a Specific Blade Server Cabinet
      Displaying Information on all Configured Blade Server Cabinets
      Adding Blade Server Cabinets
      Removing Blade Server Cabinets
      Changing Switch Blade Type
      Changing Switch Blade Name
      Changing Switch Blade Password
      Getting Switch Blade Initial Configuration
      Change Switch Blade Uplink
      Move a Blade Cabinet to Another Switch Group
   Administrating ESX Servers and Virtual Machines
      Getting started with ESX Servers and VMs
      ESX related global FlexFrame parameters
      System Code for ESX Servers and VMs
      vCenter Server
      Adding ESX Servers
      Completing ESX Server Configuration
      Removing ESX Servers
      Displaying Information about ESX Servers and VMs
      ESX Servers and Pools
      Special Functions for Virtual Machines
      Virtual Machine Properties and ESXi Resources
      Using vSphere Functions for FlexFrame Objects
   Script for Power on/off/reboot of a Computing Node in FF4S
      Synopsis
Storage Systems Administration
   NAS Systems Configuration (EMC and NetApp)
      Adding a New NAS
      Removing a NAS
      Configuring SNMP Traps for NetApp Filers
      Displaying All Configured NAS
      Displaying NAS Configuration
      Adding a Pool to a NAS
      Removing a Pool from a NAS
      Adding a Blade (Data Mover) to an EMC Celerra NAS
      Removing a Blade (Data Mover) from an EMC Celerra NAS
      Create NAS Cluster Partnership
      Move a NAS to another Switch Group
      Switching a NetApp Filer between 1Gbit and 10Gbit
      Changing NAS Command Shell
      Changing NAS LinkAggregate Ports
      NAS Disk Free
   Celerra SRDF-NAS High Availability
      Syntax
      Background Processes
      Diagnosis
      Return Codes
      Used File Resources
      Used Perl Modules
   SAN Configuration in a FlexFrame Environment
      Setting Up the SAN Configuration
      Configuring Storage
         General Remark for the Use of Navisphere
         Storage System Access
         LUN Creation
         Recording LUN Information
      Configuring Application Nodes
         Connecting the Storage to the Application Nodes
         Creating Zones on the Fibre Channel Switches
         Checking Visibility of the Storage System on the Application Node
         Registering Host Initiators with a CLARiiON/FibreCAT CX
         Mapping LUNs to the Application Nodes
         Checking Visibility of the LUNs on the Application Node
      Creating Volumes and File Systems for a SAP System
      Creating a Linux LVM2 Volume Group for FlexFrame Usage
      Completing the Configuration and Testing Usability of SAN for an SID
   Dynamic LUN Masking Using StorMan to Reconfigure SAN
      Installation of SMI-S Provider
      Installation of StorMan
   SRDF Support in FlexFrame
      Storage System Configuration
      Configuring Application Nodes for SAN SRDF Usage
      FlexFrame SAN Configuration for SRDF
      SAN SRDF Usage in FlexFrame
   FlexFrame SAN Configuration
      Script: ff_san_ldap_conf.pl
      FlexFrame SAN Configuration File
   SAN Support Scripts
      Script: ff_san_mount.sh
      Script: ff_san_info.sh
      Script: ff_qlascan.sh
      Script: ff_san_srdf.pl
      Script: ff_san_luns.pl
Switch Administration
   Adding a Switch to a Switch Group
   Removing a Switch from a Switch Group
   Listing a Switch Group Configuration
   Changing the Password of a Switch Group
   Changing the Host Name of a Switch Group
   Displaying/Changing Common Network Configuration Parameters
   Adding a Switch Group
   Adding an Expansion Module
   Removing an Expansion Module
   Removing a Switch Group
   Adding an Uplink to Switch Group
   Extend an Uplink of Switch Group
   Delete an Uplink of Switch Group
   Migrating a Switch of a Switch Group
   Adding a Switch Port Configuration
   Removing a Switch Port Configuration
   Displaying a Switch Port Configuration
   Displaying the Complete Switch Port Configuration
   Moving Device Connection to Core Switch
      Move Control Center to Core Switch
      Move Client LAN to Core Switch
      Move NAS System to Core Switch
      Move Application Node to Core Switch
      Move ESX Server to Core Switch
      Move BX Chassis to Core Switch
SAP System Handling
   Listing SAP SIDs and Instances
   Updating System Configuration Files
   Adding/Removing/Modifying SAP SIDs and Instances (Classic SAP Services)
   Removing SAP SIDs and Instances
   Adding/Removing SAP SIDs (addon services)
      BOBJ Business Objects
      Content Server (CMS)
      MDM Master Data Management
      SMD Solution Manager Diagnostics
      TREX (Search and Classification Service)
   Cloning a SAP SID into a Different Pool
      Script: ff_clone_sid.pl
      Changing User and Group IDs after Cloning
      Multiple NAS Systems and Multiple Volumes
         NetApp Filer
         EMC Celerra
   Upgrading a SAP System
      Service Port
      FA Agents Support SAP Upgrade
   SAP Kernel Updates and Patches
   Unloading volFF
      Status Quo/Solution
      ff_relocate_sid_data.pl
      LDAP
      Move Data from volFF (how to)
      Specify Volume Before Installing SAP
      Moving an Existing /usr/sap/<SID>
      Delete Entries from LDAP
Administrating SAP Services
   Displaying Status of SAP Services
      myAMC.FA WebGUI
      List SAP Services
   Starting and Stopping Application Services
   Starting and Stopping SAP Services Without Root Privileges
   SAP Service Scripts
      SAP Service Script Actions
      SAP Service Script Logging
      SAP Service Script User Exits
      Return Code of the SAP Service Script
   Starting and Stopping Multiple SAP Services
   Removing an Application from Monitoring by FA Agents
   Stopping and Starting an Application for Upgrades Using r3up
   Service Switchover
   Use ServicePings from FA Agents
Software and Hardware Update and Maintenance
   Upgrading the Entire FlexFrame Landscape
   Software Upgrade on the Control Node
      ServerView Update via RPM
         ServerView Agents
         ServerView Operations Manager
         ServerView RAID Manager
         ServerView Storage Solutions - StorMan
      Updating/Installing a New Linux Kernel
         Software Stage
         Install the New Kernel
         Reboot the Control Node
   Backup/Restore of FlexFrame Control Nodes
      Backup of a Control Node
      Restore of a Control Node
   Maintenance of the Control Node - Hardware
      Exchanging a Control Node
         Hardware Failed, Hard Disk and Installed OS are not Affected
         One Hard Disk is Defect, the Other One is Undamaged
         The Control Node's OS is Damaged
      Replacing a Network Card - Control Node
   Maintenance of Application Nodes - Software
      Introduction
      Schematic Overview
      Installing Application Node Images from Installation Media
      Installing the Application Node Image
      Understanding the Application Node Image
      Step #1: Creating a Maintenance Base Image
      Step #2: Assigning the Maintenance Image, Booting and Maintaining
         Choosing a Node
         Assigning
         Booting
         Maintaining
      Step #3: Reverting the Maintenance Image
      Migrating Remaining Application Nodes
      Re-Using the Maintenance Image
      Maintaining Use Cases
         Service Packs
         Updating/Installing a New Linux Kernel
         ServerView Update
         Upgrading the Application Software
         Updating RPM Packages on an Application Node
         Updating vmware-open-vm-tools
   Maintenance of the Application Nodes - Hardware
      Changing BIOS Settings for Netboot
      Replacing a Network Card - Application Node
      Replacing a Switch Blade
      Replacing Power Control Hardware
   Maintenance of ESXi Servers
      BIOS Updates
      Replacing a Network Card - ESXi Server
      ESXi Updates and Patches
   Maintenance of Other FlexFrame Components
      NetApp Storage
      EMC Storage
      Cisco Switches and Switch Blades
         Firmware Update
         Backup of Switch Configurations
         Restore of Switch Configurations
      DNS Servers
      Third Party Products
      MyAMC Agents
Troubleshooting
   Locking of FlexFrame Administration Commands
   Script Logging in FlexFrame
   Log Files
   Network Errors
   NFS Mount Messages
   LDAP Error Codes and Messages
   LDAP and Cache Coherence
   Linux Start/Stop Script Errors
      Severity INFO
      Severity WARN
      Severity ERROR
      Severity DEBUG
   Script Debugging
      Shell Scripts
      Perl Scripts
   Debugging the Linux Kernel
      Netconsole
   Capturing Crash Dumps
      Common Restrictions for Taking Crash Dumps
      "Kdump" Kernel Crash Dump Capturing
      Forcing a Crash Dump
   Activate Core Dumps on CN or AN
Abbreviations
Glossary
Index


1 Introduction

This document provides instructions on administrating and operating an installed FlexFrame 5.0A environment. It focuses on general aspects of the architecture as well as on software updates, hardware extensions and FlexFrame-specific configuration. It does not cover the installation of an entire FlexFrame environment. Please refer to the "Installation of a FlexFrame Environment" manual for information on initial installation.

1.1 Requirements

This document addresses administrators of FlexFrame environments. We assume that the reader has technical background knowledge in the areas of operating systems (Linux), IP networking and SAP Basis.

1.2 Notational Conventions

The following conventions are used in this manual:

Additional information that should be observed.
Warning that must be observed.
fixed font: Names of paths, files, commands, and system output.
<fixed font>: Names of variables.
fixed font (bold): User input in command examples (if applicable using <> with variables).
Command prompt: #

The notation

control1:/<somewhere> # <command>

indicates that the command <command> is issued on the first Control Node in the directory /<somewhere>. The reader may need to change into the directory first, e.g.:

control1:~ # cd /<somewhere>
control1:/<somewhere> # <command>

1.3 Document History

Document Version   Changes                                                Date
1.0                First Edition
                   Appendixes for Nexus Switches
                   Update in ff_pool_adm.pl
                   Updates
                   New chapter DNS Servers
                   Hint to user exits (adding new services)
1.6                Update of the chapter Restore of a Control Node
                   Synopsis of ff_sid_adm.pl

1.4 Related Documents

FlexFrame for SAP - Administration and Operation
FlexFrame for SAP - HW Characteristics Quickguides
FlexFrame for SAP - Installation ACC 7.2
FlexFrame for SAP - Installation Guide for SAP Solutions
FlexFrame for SAP - Installation of a FlexFrame Environment
FlexFrame for SAP - Management Tool
FlexFrame for SAP - myAMC.FA_Agents Installation and Administration
FlexFrame for SAP - myAMC.FA_Messenger Installation and Administration
FlexFrame for SAP - myAMC.FA_LogAgent Installation and Administration
FlexFrame for SAP - Network Design and Configuration Guide
FlexFrame for SAP - Security Guide
FlexFrame for SAP - Technical White Paper
FlexFrame for SAP - Upgrading FlexFrame 4.1A, 4.2A or 4.2B to 5.0A
ServerView Documentation
SUSE Linux Enterprise Server Documentation

1.5 Special Hints for FlexFrame 5.0A

In this document you will often find console output, configuration data and installation examples which are based on earlier FlexFrame versions.

Please keep in mind that these are examples and may look slightly different on the new operating systems introduced in FlexFrame 5.0A.

The two Control Nodes (CN) of FlexFrame for SAP are also referred to as the FlexFrame Control Center (CC). In this documentation the notation Control Node (CN) is used as a synonym for Control Center (CC) and vice versa.

Incompatibilities (command scripts)

There are some changes with the supplied scripts:

The script ff_change_id.pl is no longer available. Use the scripts ff_user_adm.pl and ff_group_adm.pl to change UIDs or GIDs of a specific user or group.

The command line syntax of ff_setup_sid_folder.sh changed to:

ff_setup_sid_folder.sh -s <sid> -p <pool>


2 FlexFrame Architecture

The FlexFrame solution is a revolutionary approach to running complex SAP solutions with higher efficiency. At the same time, some major changes to the configuration paradigms for infrastructures have been implemented. These changes are:

A shared operating system booted via IP networks for the SAP Application Nodes.
Decoupling of application software and operating system, called virtualization of SAP software or Adaptive Computing.
Shared Network Attached Storage from Network Appliance providing Write Anywhere File Layout (WAFL) and sophisticated snap functionality.
Shared Network Attached Storage by EMC Celerra.
FlexFrame Autonomous Agents (FA Agents) providing revolutionary mechanisms to implement high-availability functions without cluster software.
SAN storage.

The concept of FlexFrame for SAP consists of several components which implement state-of-the-art functionality. Together with new components, such as the FlexFrame Autonomous Agents, the whole solution is far more than just the sum of its components. A major part of the benefits consists in a dramatic reduction in day-to-day operating costs for SAP environments. It is, of course, possible to use parts of the FlexFrame solution in project-based implementations. However, they cannot be called FlexFrame.

2.1 General Notes on FlexFrame

FlexFrame was designed and developed as a platform for SAP applications. Its major purpose is to simplify and abstract the basic components to enable the administrator of a SAP system landscape to focus on SAP and not worry about servers, networking and storage.

The FlexFrame Control Nodes are seen as an appliance. Like a toaster or a microwave oven, they have a well-defined purpose, and the built-in components must work together to achieve that purpose. Fujitsu happened to pick SUSE Linux Enterprise Server (SLES) as the operating system for the Control Nodes; however, it is not intended that the customer use it as a regular server, meaning that installing additional software on it and applying patches to it is not wanted unless the Fujitsu support line instructs so. Upcoming versions of the Control Node's operating system may be totally different and may not allow modifying anything at all.

The installation and backup/restore functionality of a Control Node are based on fixed images which are delivered on DVD. Modifications of installed images will not be taken care of if a new version is installed.

Modifications may even lead to errors which may be hard to find. Therefore we strongly recommend that customers not install any software or patches on the Control Nodes without confirmation from Fujitsu support.

The Application Nodes are treated similarly to the Control Nodes: fixed images are shipped for the installation of Linux Application Nodes in the FlexFrame landscape.

Another aspect of FlexFrame is the reduction of the TCO (Total Cost of Ownership). A static approach (once started, never touched) will not be very efficient. To achieve the best savings, it is recommended to actively manage where a certain SAP application instance or database instance is running. If, as an example, a SAP instance requires the power of two CPUs of an Application Node during most days of a month and eight CPUs during month-end calculations, it is best to move it back and forth to the appropriate Application Node with the right size. During the time the application instance is running on the two-CPU Application Node, another SAP instance can use the bigger eight-CPU Application Node, thereby saving the need for more eight-CPU Application Nodes than in a static approach.

2.2 Hardware and Software

The FlexFrame solution consists of both hardware and software. To ensure proper function of the whole landscape, the entire software set is strictly defined. Anything other than the software components listed below is not part of FlexFrame. This applies unchanged if software from the list below is missing, is installed in other versions than below, or if software other than the actual SAP components is added.

For detailed information about the hardware supported in a FlexFrame environment, see the FlexFrame 5.0A Configuration Guide.

Any other functions, such as backup, can be implemented separately as an add-on to FlexFrame and need dedicated hardware, operating system, high availability, professional service and support etc.

No. 1 - Control Nodes
   Hardware: 2 x PRIMERGY RX300 S6, 2 x PRIMERGY RX300 S5, 2 x PRIMERGY RX300 S4 or 2 x PRIMERGY RX300 S3
   OS: SLES 10 SP3 (x86_64)
   Software: FA Agents (Control Agents) V9.0, FlexFrame 5.0A File System Image CN, ServerView etc.
   Services: TFTP, DHCP, LDAP, (SAPROUTER), etc.

No. 2 - Network switches
   Hardware: n*m Cisco Catalyst 3750x (1), n >= 1, m >= 2; or n*2 Cisco Nexus 50xx (1), n >= 0
   OS: IOS (proprietary) / NXOS (proprietary), as delivered

No. 3 - Network Attached Storage
   Hardware: one or more NetApp Filer heads (FASxxxx), disk shelves as required (2), hosting shared OS file systems and application data; or one or more EMC Celerra NSxxx, disk shelves as required (2), hosting shared OS file systems and application data
   OS: ONTAP (3) / DART (3)
   Software: NetApp Tools / EMC Tools
   Services: NFS, SnapRestore, optional: cluster components, FlexClone, SnapVault, SnapMirror (NetApp); NFS (EMC)

(1) Allowed types according to FlexFrame Support Matrix
(2) The amount of disks required for customer-specific FlexFrame configurations can be determined together with Fujitsu's Customer Support Filer Sizing Team
(3) Supported versions according to FlexFrame Support Matrix

No. 4 - SAN Storage
   Software: Multipath SW; SLES 11 / SLES 10: DM-MPIO (integrated)
   Services: HA Services

No. 5 - SAN Storage
   Software: Volume Manager; SLES 11 / SLES 10: Linux Volume Manager LVM2
   Services: Volume Management Services

No. 6 - Intel- or AMD-based PRIMERGY Servers (standard rack or blade server)
   OS: SLES 11 SP1 (x86_64) and/or SLES 10 SP3 (x86_64)
   Software: FlexFrame 5.0A File System Image, FA Agents (Application Agents), SAP Applications, Database
   Services: SAP & DB Services

2.3 Shared Operating System

One major aspect of FlexFrame is its shared operating system. Sharing in this case means that the very same files of essential parts of the underlying operating system are used to run multiple Application Nodes. This part of the file system is mounted read-only, so none of the Application Nodes that run the actual applications can modify it. Server-specific information is linked to a file system area that is server-specific and mounted read-write. The shared operating system is kept on a NAS system (Network Attached Storage system from Network Appliance or EMC Celerra).

Shared OS Boot Concept

The chart below shows the boot process of a FlexFrame Application Node (PRIMERGY/Linux):
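In outline, the boot of an Application Node proceeds roughly as follows. This is a sketch based on the PXE, DHCP, TFTP and NFS roles described in this chapter; the exact staging may differ in detail from the chart:

1. The Application Node's BIOS initiates a PXE boot on its boot interface.
2. The PXE firmware broadcasts a DHCP request; a Control Node answers based on the node's MAC address and assigns its IP address and the name of the boot loader file.
3. The boot loader, kernel and initial RAM disk are loaded from the Control Node via TFTP.
4. The kernel mounts the shared root file system read-only via NFS from the NAS system and the node-specific /var area read-write.
5. SAP-, database- and agent-specific file systems are mounted on demand by the automounter (see section "Automounter Concept").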

Control Nodes

A productive FlexFrame landscape always includes two Control Nodes. Their purpose is to be a single point of control for the Application Nodes, as well as to check and manage the proper function of the Application Nodes. Control Nodes do not run SAP software (with the exception of saprouter, as an option). They exclusively run SUSE Linux Enterprise Server Version 10 (SLES 10 SP3), installed on local disks.

Control Nodes provide and run services such as:

Linux-HA high availability cluster framework
Timeserver for the complete FlexFrame landscape
Control Agents
Web server to provide the Control Agents user interface
DHCP for assignment of IP addresses and TFTP for the boot process of the Application Nodes
saprouter (optional)

Control Nodes have to be of the type PRIMERGY RX300 S6, RX300 S5, RX300 S4 or RX300 S3.

Application Nodes

Application Nodes run database and SAP services on the SUSE Linux Enterprise Server shared operating system. Application Nodes can be physical servers that offer CPU and memory, or virtual servers built on top of physical servers, using the physical server's CPU and memory through a virtualization layer. For FlexFrame Version 5.0A, the principal types of Application Nodes are PRIMERGY servers running Linux directly and virtual servers on top of PRIMERGY servers using VMware ESXi as a virtualization layer. Admissible PRIMERGY servers have to be approved for SAP on Linux by Fujitsu.

During the boot process using Intel's PXE technology, each Application Node is identified by the hardware address of its boot interface (MAC address). The Control Node will assign an IP address to it and supply the operating system via the network. File systems, especially the root file system (/), are mounted via the network in read-only mode. If, for any reason, an Application Node needs to be replaced or added, only a handful of settings need to be adjusted to integrate it into the FlexFrame landscape.

Intel's PXE technology is implemented in Fujitsu PRIMERGY servers and allows booting via the network. DHCP is used with a static MAC address relationship for all the Application Nodes; a definition of this kind is sketched below.
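The following snippet illustrates what such a static MAC-to-IP assignment looks like in ISC dhcpd syntax. It is only a sketch: host name, addresses and boot file name are invented for illustration, and the actual dhcpd configuration in a FlexFrame landscape is generated by the FlexFrame tools, not edited by hand.

   host an-blade01 {
       # the Application Node is identified by the MAC address of its boot interface
       hardware ethernet 00:19:99:aa:bb:cc;
       # fixed IP address assigned to this node
       fixed-address 192.168.11.54;
       # PXE loads this boot file from the TFTP server (a Control Node)
       next-server 192.168.11.1;
       filename "pxelinux.0";
   }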

2.4 FlexFrame Structure in LDAP

LDAP is used as the central information service for all shared OS nodes within a FlexFrame environment. The Control Nodes are used as LDAP servers. The LDAP database is located on shared file systems mounted from the NAS storage. The Application Nodes are configured as LDAP clients. LDAP requests from Application Nodes are restricted to the data of their own pool.

LDAP provides host-related network information such as:

net boot
automount
user authentication
groups
host names and IP addresses
shared services
networks and netmasks
LDAP client profiles

The FlexFrame LDAP tree roughly looks as illustrated here:

Additional information about configuration data is only applicable to Control Nodes. It is used FlexFrame-internally to add, remove or modify the configuration of Application Nodes or SAP services.

FlexFrame utilizes LDAP for two different purposes: (1) for operating naming services (such as host name resolution, user/password retrieval, tcp/udp service lookup, etc.) and (2) for storing FlexFrame-specific data on the structure of the installed environment.

Application Nodes are only able to search in area (1). It is separated into pool-specific sections in order to protect pools from accessing other pools' data. Each of them contains pool-specific network information service (NIS) like data. The LDAP servers have access lists to prevent searches outside of the own pool. A lookup of this kind is sketched below.

The other main DIT part contains FlexFrame configuration data (2). It should only be accessed through maintenance tools from one of the Control Nodes. This part of the DIT contains a lot of cross references, which need to be kept in sync. Do not try to change this data, unless you are explicitly instructed by Fujitsu support to do so.
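As an illustration of area (1), the following query resolves a host entry from a pool-specific naming-service subtree. This is only a sketch: the pool and host names are invented, and the base DN merely follows the pattern visible in the automounter examples later in this chapter.

   control1:~ # ldapsearch -x -LLL \
       -b 'ou=pool1,ou=pools,ou=flexframe,dc=flexframe,dc=wdf,dc=fujitsu,dc=com' \
       '(&(objectClass=ipHost)(cn=an-blade01))'

On an Application Node the same data is retrieved transparently through the configured name service switch, e.g. with getent hosts an-blade01.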

Working with LDAP

The usage of LDAP-specific commands like ldapadd or ldapmodify is limited to very few actions. One is to create or remove a PUT service for a SAP system copy. This action is described in the "Installation Guide for SAP Solutions" manual. Other direct interaction through LDAP commands is limited to service issues. No other interventions have to and should be done. The FlexFrame maintenance tools provide the necessary functionality.

2.5 Linux-HA Cluster on Control Center

Terminology

Some of the Linux-HA terms used below are explained here in order to promote a broader understanding:

Node
Every computer that is part of a cluster is a node.

Resource
Everything that can be administered by heartbeat is referred to as a resource. For example, an IP address that is administered by the cluster is a resource.

Resource Agent (RA)
The RA is the connection between heartbeat and the programs that are started when the RA is called. RAs are shell scripts which have to provide a standardized interface to heartbeat, so that they can be started, monitored and stopped by heartbeat (a minimal sketch of this interface follows after this terminology list). Supported standards:
Linux Standard Base (LSB): all scripts under /etc/init.d correspond to this standard.
Open Cluster Framework (OCF): for examples see /usr/lib/ocf/resource.d.

Designated Coordinator (DC)
Every cluster has precisely one DC as the central instance in the cluster. It is elected from all the nodes. It alone is responsible for all the actions in the cluster and has the only valid cluster information base. All the other nodes only have a duplicate.

STONITH
STONITH is an abbreviation for "Shoot The Other Node In The Head". This is the name of a method that stops a node that is no longer accessible from "causing damage" in the cluster by switching it off or alternatively causing a reboot.

heartbeat
The cluster can be started and stopped with the /etc/init.d/heartbeat script.

Cluster Resource Manager (CRM)
The CRM manages the resources, decides which resource is to run where and ensures that the required status of the cluster is achieved based on the current status. For this purpose the CRM distributes the work to the LRM and receives feedback from the latter.

Local Resource Manager (LRM)
The LRM manages the resources on "its" local node. The CRM tells the LRM which resources they are. The LRM communicates with the CRM.

Cluster Information Base (CIB)
The CIB is the central information file for all the resources and nodes of the cluster. It not only contains information about the static configuration but also a dynamic part with the current status of all resources. The data is stored in XML format.

ha.cf
This configuration file controls the behavior of the cluster software. It is needed to start heartbeat. For example, it defines the interface via which communication is to take place in the cluster and the nodes which belong to the cluster. It must be available on every node under /etc/ha.d.

authkeys
The file authkeys is used to authenticate the nodes of a cluster to one another. This file must also be available on every node under /etc/ha.d. Its access mode should be set to 600.
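To illustrate the LSB interface mentioned under "Resource Agent" above, here is a minimal sketch of the actions heartbeat invokes on an LSB script. The daemon and script are invented for illustration; the FlexFrame RA scripts (e.g. ff_ha_dhcpd) follow this pattern but contain considerably more logic. startproc, killproc and checkproc are the SUSE helper utilities available on the Control Nodes' SLES.

   #!/bin/sh
   # Minimal LSB-style skeleton: heartbeat calls the script with
   # start, stop and status to control and monitor the resource.
   DAEMON=/usr/sbin/mydaemon    # invented example daemon

   case "$1" in
       start)  startproc $DAEMON ;;
       stop)   killproc $DAEMON ;;
       status)
           # LSB convention: exit code 0 = running, 3 = not running
           checkproc $DAEMON || exit 3 ;;
       *)  echo "Usage: $0 {start|stop|status}"; exit 1 ;;
   esac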

Simple Resources

Constraints

Simple resources can be linked with constraints, which are evaluated when the resource is started. These are:

Ordering: specifies the relationship of two resources to each other. This means that you can only start resource B after resource A.
Colocation: ensures that two resources are started on the same node.
Location: defines the node where a resource is to preferably run.

Score

Constraints are defined by rules. A system of points is used to decide on which node a resource is to run. Each applicable rule is linked to a score. In addition to normal integers, it can also accept the special values INFINITY and -INFINITY.

Example: Resource A is to preferably run on CN1.

The rule: NODE eq CN1, Score: 10

If A ran on CN1, the rule would return true and CN1 would get the score 10. If A ran on CN2, the rule would return false and CN2 would not get any score, thus 0. After all possibilities have been evaluated by the cluster, CN1 has a greater score than CN2, thus CN1 would be chosen to run A. Expressed in CIB terms, such a rule looks roughly like the sketch below.
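In the CIB's XML notation (see "Cluster Information Base" above), a location rule of this kind looks roughly as follows. This is a hand-written sketch, not taken from a FlexFrame installation: the IDs are invented, and the real constraints are generated by the FlexFrame configuration scripts.

   <rsc_location id="loc_A_on_CN1" rsc="A">
     <!-- score 10 is granted to a node whose name equals cn1 -->
     <rule id="rule_A_on_CN1" score="10">
       <expression id="expr_A_on_CN1" attribute="#uname"
                   operation="eq" value="cn1"/>
     </rule>
   </rsc_location>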

The extremes:

INFINITY: The resource is started in any case on the node selected by an applicable rule. If it cannot be started on the specified node, an attempt is made to start it on another node.

-INFINITY: The resource is not started on the node selected by an applicable rule. If it cannot be started on any node other than the selected one, it remains stopped.

In other words: INFINITY means "run here if at all possible", -INFINITY means "never run here".

Stickiness

Another value that plays an important role in the decision about the node on which a resource is to run is stickiness. You can regard this value as the level of adhesion with which a resource aims to remain on the node on which it is currently running. This value is defined via the global settings of the CIB by the variable default-resource-stickiness. The following values are possible:

0: Linux-HA attempts to optimally place the resource in the cluster. In other words, the resource is redistributed in the cluster if Linux-HA finds a more "suitable" node.
> 0: The resource tends to remain on the current node. Higher values reinforce this tendency.
< 0: The resource tends to leave the current node. Lower values reinforce this tendency.
INFINITY: The resources remain on the current node until they are compelled to leave it, e.g. when their node is shut down.
-INFINITY: The resources do not want to remain on the current node.

The two values, score and stickiness, determine the node on which a resource is then actually started. For this purpose, they are determined and added for each node of the cluster. The resource is then started on the node that has the highest score.

Example: default-resource-stickiness: 100. Resource A is to preferably run on CN1. Rule: NODE eq CN1, Score: 10.

Case 1: A already runs on CN1

Points for CN1: location score = 10, stickiness score = 100, resulting score CN1 = 110
Points for CN2: location score = 0 (rule not met), stickiness score = 0 (because A is currently not running on CN2), resulting score CN2 = 0
Result: A remains on CN1, score CN1 > score CN2

Case 2: A already runs on CN2

Points for CN1: location score = 10, stickiness score = 0 (as A is not running on CN1), resulting score CN1 = 10
Points for CN2: location score = 0, stickiness score = 100, resulting score CN2 = 100
Result: As score CN2 > score CN1, resource A remains on CN2, although its location score expresses the wish to move to CN1.

Case 3: A does not run on any node (e.g. during cluster start)

Points for CN1: location score = 10, stickiness score = 0, resulting score CN1 = 10
Points for CN2: location score = 0 (rule not met), stickiness score = 0, resulting score CN2 = 0
Result: A starts on CN1, score CN1 > score CN2

In FlexFrame almost all the resources are configured in such a way that they remain on their current node after a move, even when the other Control Node is available again after a downtime. This avoids unnecessary switching processes.

Resource Groups

Simple resources can be put together to form groups. It is also possible to create so-called clones, in which simple resources may run repeatedly in the cluster. Constraints, such as ordering or colocation, which are defined for the group apply to all the resources belonging to the group.

FlexFrame Specific Configuration

The basic configuration for heartbeat is created during the initial configuration of FlexFrame, i.e. the file ha.cf is set up, a key to authenticate the nodes CN1 and CN2 is created for the file authkeys, and the CIB is created.

It contains the following resources (type and name of the RA in brackets):

Resource Group: network_cn1_server
   ip_cn1_server_<poolname-m> (ocf::heartbeat:IPaddr2)
Resource Group: network_cn2_server
   ip_cn2_server_<poolname-m> (ocf::heartbeat:IPaddr2)
ldap_master (ocf::fsc:ff_ha_ldap)
ldap_replica (ocf::fsc:ff_ha_ldap)
slurpd (lsb:ff_ha_slurpd)
Resource Group: netboot
   dhcpd (lsb:ff_ha_dhcpd)
   tftpd (lsb:ff_ha_tftpd)
Resource Group: ff_manage
   mysql (lsb:ff_ha_mysql)
   myamc.fa_messenger (lsb:ff_ha_myamc.messengersrv)
   tomcat (lsb:ff_ha_tomcat)
   myamc.fa_ctrlagent (lsb:ff_ha_myamc.ctrlagent)
Clone Set: clone_clustermon
   ClusterMon:0 (ocf::heartbeat:ClusterMon)
   ClusterMon:1 (ocf::heartbeat:ClusterMon)
stonith_ipmi_cn1 (stonith:external/ipmi)
stonith_ipmi_cn2 (stonith:external/ipmi)
Clone Set: clone_stonith_meatware
   stonith_meatware:0 (stonith:meatware)
   stonith_meatware:1 (stonith:meatware)

LDAP Configuration

The FlexFrame LDAP concept is implemented by the resource groups network_cn1_server and network_cn2_server as well as the simple resources ldap_master, ldap_replica and slurpd. Since this concept has dramatically changed with the introduction of Linux-HA, it is explained in more detail here.

Each network_cn<n>_server (n=1..2) group consists of the simple resources network_cn<n>_server_<poolname-m> (n=1..2) 4). Each of these simple resources is exactly one server LAN IP of a pool, i.e. the appropriate resource agent (IPaddr2) accurately monitors a server LAN IP and, in the event of a fault, moves it to the surviving node of the cluster. Accordingly, there are as many resources in the cluster as there are server LAN IPs. This ensures that all the server LAN IPs of all the pools can always be accessed in the network.

The resource ldap_master is initially started on the first Control Node. It starts the LDAP server process slapd on the same Control Node with the server LAN IPs of all the pools of the first Control Node as defined by the Management Tool. Analogous to this, the resource ldap_replica on the second Control Node is started with the appropriate server LAN IPs of all the pools of the second Control Node. By applying the constraint "colocation" (see above), the resource slurpd is forced to always start on the node on which the resource ldap_master could also be started.

This concept ensures that:

1. ldap_master and ldap_replica can always access precisely defined IP addresses, i.e. the addresses of the server LAN
2. these IP addresses are managed by the cluster and are thus always available on a cluster-wide basis
3. ldap_master and ldap_replica can also run in parallel on one node. In other words, if a node fails, the cluster ensures that its IP addresses and the appropriate LDAP resource are switched over to the other node and are then available again on a system-wide basis

4) "n" is an enumeration consisting of 2 elements: element 1 for CN1 and element 2 for CN2.

Example: 2 pools, pool1 and pool2, with the server LAN addresses cn1-pool1-se, cn1-pool2-se, cn2-pool1-se and cn2-pool2-se.

Resource group network_cn1_server contains the two simple resources:
network_cn1_server_pool1 - manages the server LAN IP of cn1-pool1-se
network_cn1_server_pool2 - manages the server LAN IP of cn1-pool2-se

Resource group network_cn2_server contains:
network_cn2_server_pool1 - manages the server LAN IP of cn2-pool1-se
network_cn2_server_pool2 - manages the server LAN IP of cn2-pool2-se

The resource ldap_master starts on CN1 with the server LAN IPs of cn1-pool1-se and cn1-pool2-se. The resource ldap_replica starts on CN2 with the server LAN IPs of cn2-pool1-se and cn2-pool2-se. Since ldap_master could be started on CN1, the resource slurpd is, on account of the "colocation" constraint, also started on the node CN1.

In comparison with other resources and resource groups, the rule for the "location" constraint of the resource groups network_cn<n>_server (n=1..2) is given a score (100000) higher than the default-resource-stickiness value (100). After a move to the other node (e.g. after a Control Node failure or reboot), this causes the server LAN and LDAP resources to return to their original nodes as soon as possible.

Resource ff_manage

The resource ff_manage is a group of simple resources that is initially started on the first Control Node. These are:

mysql
myamc.fa_messenger
tomcat
myamc.fa_ctrlagent

A prerequisite for the start of myamc.fa_messenger is the successful start of the resource mysql. This is enforced by an "ordering" constraint. Since the start of ff_manage can take longer, an "ordering" constraint also ensures that an attempt is made to start the resources ldap_master and ldap_replica before the start of the group ff_manage.

Resource Netboot

The resource group netboot contains the simple resources:

dhcpd
tftpd

As for the group ff_manage, an attempt is made to start this group only if the resources ldap_master and ldap_replica are running successfully. The "ordering" constraint is also used for this.

STONITH

The stonith agents' purpose is to prevent the cluster from entering a non-defined status in the event of a fault in communications. The corresponding stonith resources are:

stonith_ipmi_cn1
stonith_ipmi_cn2

as well as the clones:

stonith_meatware:0
stonith_meatware:1

Due to the "location" constraint, stonith_ipmi_cn2 may under no circumstances be started on CN2, and stonith_ipmi_cn1 may not be started on CN1. These resources each monitor the other node. If, after a time interval set during configuration, a node finds that communication with the other node is no longer possible, a hard "reboot" or a "reset" is triggered on the inaccessible node via the IPMI interface.

The IP addresses of the IPMI interface as well as user and password were transferred to the resource during configuration in order to enable this step. For the reboot to take place via the IPMI interface, the latter must be accessible. Therefore stonith_ipmi_cn<n> (n=1..2) monitors the respective interface every 30 seconds. If it is determined that the interface does not answer (e.g. in the event of a power failure), stonith_ipmi_cn<n> is stopped and stonith_meatware:<n> is started. This resource communicates with the operator: it creates a message as to which node is affected and requests the operator to restart the node. When this is done, feedback is expected from the operator. The message about the affected node and the command to provide feedback after the start are written to /var/log/ha-log.

Example after a power failure of CN2:

Using the command crm_mon -1 -r -f -n provides an overview of the error counters and inactive resources:

control1:~ # crm_mon -1 -r -f -n
Inactive resources:
stonith_ipmi_cn2 (stonith:external/ipmi): Stopped
Clone Set: clone_stonith_manual
    stonith_meatware:0 (stonith:meatware): Started cn1
    stonith_meatware:1 (stonith:meatware): Stopped
Failcount summary:
* Node cn2:
* Node cn1: stonith_ipmi_cn2: fail-count=
Failed actions:
    stonith_ipmi_cn2_start_0 (node=cn1, call=699, rc=1): complete

The following entry is written to the log file /var/log/ha-log:

stonithd[6462]: 2009/05/28_10:47:46 CRIT: OPERATOR INTERVENTION REQUIRED to reset cn2.
stonithd[6462]: 2009/05/28_10:47:46 CRIT: Run "meatclient -c cn2" AFTER powercycling the machine.

The operator is thus requested to execute the command meatclient -c cn2 on CN1 after ensuring that CN2 has been completely switched off or that CN2 has been manually "reset" or restarted. Otherwise, no further actions are performed on the resources by the cluster to avoid data loss; the cluster does not know its status due to the lack of a connection to the partner.

control1:~ # meatclient -c cn2
WARNING!

If node "cn2" has not been manually power-cycled or disconnected from all shared resources and networks, data on shared disks may become corrupted and migrated services might not work as expected.

Please verify that the name or address above corresponds to the node you just rebooted.

PROCEED? [yn] y
Meatware_client: reset confirmed

The process which asks the operator to run the meatclient tool is respawned every 30 seconds. If this respawn occurs after the operator started the meatclient tool and before he entered "y", the request will be ignored. It may therefore be necessary to call it several times and confirm it with a reasonably small delay. A manual equivalent of the IPMI reset, using ipmitool, is sketched below.
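The reset performed by the external/ipmi agent corresponds to what can be done manually with ipmitool. The following commands are only a sketch for checking and power-cycling a node over its IPMI interface; address and credentials are placeholders, and such manual resets should normally be left to the cluster or be performed in agreement with Fujitsu support.

   # query the power state of CN2 via its IPMI LAN interface
   control1:~ # ipmitool -I lan -H <ipmi-address-cn2> -U <user> -P <password> power status
   # force a hard reset, comparable to what the stonith agent triggers
   control1:~ # ipmitool -I lan -H <ipmi-address-cn2> -U <user> -P <password> power reset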

Configuring the Cluster

The cluster is automatically configured by the script ff_post_conf.sh during the installation of FlexFrame. If, in certain situations, it is necessary to subsequently configure the cluster, the command ff_ha_tool.sh [-f] -i must be used. During the setup, the IPMI user and IPMI password are queried if they are not yet known or if access is not possible with the known user and password.

Syntax: ff_ha_tool.sh [-f] -i
   -i: initial configuration of Linux-HA for FlexFrame
   -f: force execution, purge old configuration

Starting Behavior of the Cluster

The cluster is started:

automatically when FlexFrame is installed via the installation scripts
with each reboot
with the command /etc/init.d/heartbeat start

Two cases should be observed, both when rebooting and during a manual start:

1. One node is affected: either it has failed, or the service heartbeat was stopped manually. The second node subsequently took over the resources and has the role of the DC.
2. Both nodes are affected.

In case 1 there is nothing special to be considered. After the node is online again, or the cluster could be successfully started on this node, the resources are redistributed according to their score values and started.

If the cluster has to be restarted on both nodes (case 2), either through rebooting or manually, the following must be taken into consideration: during the start of a node, the service heartbeat attempts to communicate with the other node. If there is no answer after a configurable time interval, the stonith agent is activated, which via the IPMI interface then causes the second node to reboot. The time interval is configured in the file ha.cf by the parameter initdead and is currently 60 seconds.

Particularly when switching on an entire FlexFrame environment, it is essential to ensure that both Control Nodes are switched on very quickly one after the other. If the delay is too great, the Control Node switched on last may be reset hard by the stonith agent, but this usually has no negative effects. If in this situation the IPMI interface of the other node is not accessible, the stonith IPMI agent would, after a vain attempt to trigger a reboot on the node, activate the stonith meatware agent. Then the operator must proceed as described in the section "STONITH" so that the resources monitored by the cluster can be started.

Status Information of the Clusters

Various options can be used to find out the current status of the cluster. One of them is implemented by the resource agent ClusterMon. It is the task of this RA to regularly transfer the current status of the cluster to an HTML file. For this purpose, the clones ClusterMon:0 and ClusterMon:1 are configured, which are each started on one Control Node. They write the status to /srv/www/htdocs/clustermon.html, which can be accessed via the local web server and is also linked from the default homepage of the web server.

Linux-HA CLI Commands

Various options are available to achieve a required action. For this purpose, FlexFrame provides a simple CLI interface to Linux-HA. The command ff_ha_cmd.sh, which is internally based on the Linux-HA CLI commands, is provided for the most important actions, such as status display and starting and stopping a resource:

Display the status of all resources: ff_ha_cmd.sh status
Display the status of one resource: ff_ha_cmd.sh status <resource>
Start a resource: ff_ha_cmd.sh start <resource>
Stop a resource: ff_ha_cmd.sh stop <resource>
Migrate (move) a resource to another node (this creates a special location constraint): ff_ha_cmd.sh migrate <resource> <node>
Undo the migration (the special location constraint is deleted): ff_ha_cmd.sh unmigrate <resource>

Cleanup a resource, delete all status flags (this will restart the resource if possible): ff_ha_cmd.sh cleanup <resource>
Remove a resource from the administration by heartbeat (the resource will not be automatically started, stopped or restarted): ff_ha_cmd.sh unmanage <resource>
Reintegrate a resource into the administration by heartbeat: ff_ha_cmd.sh manage <resource>
Output the CIB (XML format): cibadmin -Q
List all the resources of the cluster: crm_resource -L or crm_resource -l
List a resource (XML format): crm_resource -x -r <resource>
Output the node on which a resource runs: crm_resource -W -r <resource> or ff_ha_cmd.sh status <resource>
Regular status display (interval: 15 sec): crm_mon
Regular status display, interval <n> sec: crm_mon -i <n>
Once-only status display: crm_mon -1
[Once-only] status display with output of the error counters of all resources as well as all offline resources: crm_mon [-1] -f -r
As above, but grouped by nodes: crm_mon [-1] -f -r -n

If a resource is to be migrated to another node, it is important to know that this action creates a "location" constraint with the score INFINITY. However, this means that this resource also remains permanently on this node, even after a complete restart of the cluster, and is therefore not moved unless the node has a problem to the effect that no resources can run on it any more. This constraint can be removed with the command ff_ha_cmd.sh unmigrate <resource>, which nevertheless does not cause the resource to automatically move back; it merely removes the constraint of the preferred node. A typical migrate/unmigrate sequence is sketched below.
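As an illustration, the following sequence moves the group ff_manage to the second Control Node and later removes the resulting location constraint again. The commands are taken from the table above; the resource and node names match the defaults described in this chapter, but should be verified with ff_ha_cmd.sh status on the installation at hand.

   control1:~ # ff_ha_cmd.sh status ff_manage
   control1:~ # ff_ha_cmd.sh migrate ff_manage cn2
   # ... maintenance work on cn1 ...
   # remove the INFINITY location constraint; ff_manage stays on cn2
   # until it has a reason to move:
   control1:~ # ff_ha_cmd.sh unmigrate ff_manage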

Linux-HA Logfile

All messages that concern Linux-HA are written to the log file /var/log/ha-log.

Linux-HA GUI

Linux-HA has its own GUI, which can be launched using the "hb_gui" command. However, it is strongly recommended not to use this interface to change configuration parameters or control resources, because each change carries the risk of negatively influencing or even severely damaging the FlexFrame cluster configuration. There may also be situations where it is not possible to use the command line interface after having used the "hb_gui" interface. Please use the command line interface ff_ha_cmd.sh exclusively to control resources.

2.6 Network

The network is the backbone of the FlexFrame solution. Communication between the various nodes and storage devices is done exclusively via the IP network infrastructure. It serves both communication between servers and clients and the delivery of IO data blocks between the NAS (Network Attached Storage) and the servers. The IP network infrastructure is essential for every FlexFrame configuration. FlexFrame is designed with a dedicated network for connections between servers and storage that is reserved for FlexFrame traffic only. One network segment, the Client LAN (see below), can be routed outside the FlexFrame network to connect to the existing network.

LAN Failover

The term LAN failover describes the ability of a FlexFrame environment to use a logical network interface that consists of several physical network interface cards (NICs), which in turn are using redundant network paths (cables and switches). When a network component (NIC, cable, switch, etc.) fails, the network management logic will switch over to another network interface card and path.

Segments

FlexFrame uses a network concept providing high availability as well as increased flexibility in virtualizing the whole FlexFrame landscape. The FlexFrame network concept relies on VLAN technology that allows running multiple virtual networks across a single physical network. Additionally, in order to ensure high network availability, LAN bonding is used on every node. This includes a double switch and wiring infrastructure, to keep the whole environment working even when a network

switch or cable fails. Within FlexFrame there are four virtual networks. These networks run through one logical redundant NIC, using bonding on Linux; a sketch of such a bonding setup follows after the segment descriptions.

Client LAN
The purpose of the Client LAN segment is to have dedicated user connectivity to the SAP instances. This segment also allows administrators to access the Control Nodes.

Control LAN
The Control LAN segment carries all administrative communication via RSB, IPMI, e0 and similar interfaces.

Server LAN
The Server LAN segment is used for the communication of SAP instances among each other and with the databases.

Storage LAN
The Storage LAN segment is dedicated to NFS communication for accessing the Application Nodes' shared operating systems, the executables of SAP and the RDBMS, as well as the IO of the database content and SAP instances.

The following figure outlines the basic network segments of a typical FlexFrame landscape with Application Nodes.
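The following sketch shows what a bonded interface with a VLAN on top looks like in SLES sysconfig syntax. It is purely illustrative: slave interfaces, VLAN ID, addresses and options are invented, and on FlexFrame nodes these settings are generated as part of the installation, not edited manually.

   # /etc/sysconfig/network/ifcfg-bond0 (illustrative values)
   BOOTPROTO='static'
   STARTMODE='auto'
   BONDING_MASTER='yes'
   # active-backup: one NIC carries the traffic, the other takes over on failure
   BONDING_MODULE_OPTS='mode=active-backup miimon=100'
   BONDING_SLAVE0='eth0'
   BONDING_SLAVE1='eth1'

   # /etc/sysconfig/network/ifcfg-vlan100 (one of the virtual networks on bond0)
   ETHERDEVICE='bond0'
   STARTMODE='auto'
   IPADDR='192.168.100.54/24'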

Network Switches

Network switching components play a very important role within FlexFrame. Therefore, only the following switch types are tested and supported. All supported switch models support VLAN technology for a flexible configuration of the various network segments.

Switch Model - Description - FlexFrame switch type:

Cisco Catalyst WS-C3750g-24t: 24 Ethernet 10/100/1000 ports - cat3750g-24t
Cisco Catalyst WS-C3750g-24ts: 24 Ethernet 10/100/1000 ports and 4 1GbE SFP ports - cat3750g-24ts
Cisco Catalyst WS-C3750g-48ts: 48 Ethernet 10/100/1000 ports and 4 1GbE SFP ports - cat3750g-48ts
Cisco Catalyst WS-C3750e-24td: 24 Ethernet 10/100/1000 ports and 2 10GbE ports - cat3750e-24td
Cisco Catalyst WS-C3750e-48td: 48 Ethernet 10/100/1000 ports and 2 10GbE ports - cat3750e-48td
Cisco Nexus 5010: 20 fixed 10 Gigabit Ethernet/FCoE SFP+ ports, first 8 dual speed 1/10 GE/FCoE, and 1 expansion module slot - nexus5010
Cisco Nexus 5020: 40 fixed 10 Gigabit Ethernet/FCoE SFP+ ports, first 16 dual speed 1/10 GE/FCoE, and 2 expansion module slots - nexus5020

Nexus Expansion Module - FlexFrame module type:

6-Port 10 Gigabit Ethernet and FCoE Module - 6x10GbE

If you want to use a Cisco Catalyst WS-C3750e-24td (or WS-C3750e-48td) with a TwinGig SFP Converter Module, you have to file a request for special release. Those switches may then be configured as a cat3750g-24ts (or cat3750g-48ts).

Ports of Cisco Nexus switches are only allowed to be connected to 10GbE ports of other devices, with the exception of 1GbE SFP ports of 3750g models for uplink purposes, which are allowed to be connected to dual speed ports.

Network Speed

In FlexFrame Version 5.0A, two different network speeds are supported:

1 Gbit/sec (1GbE)
10 Gbit/sec (10GbE)

Under normal circumstances a network speed of 1 Gbit/sec is sufficient. But if you have a higher network load, a network connection with 10 Gbit/sec is recommended. In this case you have to use the 10 Gbit/sec ports of the 3750e switches and an endsystem with 10 Gbit/sec ports. The endsystems with 10 Gbit/sec ports supported by FlexFrame are listed in the FlexFrame Support Matrix.

Network Switch Groups

In FlexFrame, network switches are grouped. Ports of an endsystem building a redundant connection are connected to ports of different members of the same switch group. A switch group consists of at least two switches of the Cisco Catalyst 3750 series building a switch stack, or of exactly two Cisco Nexus 5000 series switches building a vPC domain. The switch groups within the entire FlexFrame environment are numbered starting with 1. The members within a switch group are also numbered starting with 1 in each case.

Network Switch Ports

In FlexFrame, network switch ports are uniquely identified within the entire FlexFrame environment by:

the number of the switch group
the number of the member within the switch group
the port ID within the switch

The port ID in the case of a Cisco Catalyst 3750 switch is a single number. In the case of a Cisco Nexus switch, the port ID is <slot number>/<port number>. In the case of a Cisco Catalyst 3750e, the numbers 1 and 2 are used both for GigabitEthernet ports and TenGigabitEthernet ports. To distinguish the two cases, a --10gbit option is used on input, and on output "(10G)" is appended. An example of this addressing scheme follows below.
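As an illustration of this addressing scheme (the values are invented): port 23 on the second member of switch group 1 is addressed as switch group 1, member 2, port 23; port 3 in slot 2 of a Nexus member is addressed with the port ID 2/3; and on a 3750e, port 1 given with the --10gbit option refers to TenGigabitEthernet port 1 and is shown as 1 (10G) in output.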

Switch ports belonging to the unspecific switch group 0 (zero) are meant to be somewhere outside the FlexFrame environment.

Automounter Concept

The automounter concept is based on the ability of Linux to mount file systems automatically when their mount points are accessed.

During the boot process of an Application Node some file systems are mounted. For Linux these are the root file system (read-only) as well as the /var mount point (read-write). These are the basic file systems which must be accessible for the Application Node to function properly. There is no data in the two file systems that is specific to a SAP service. Data which is specific to a SAP service, a database or a FlexFrame Autonomous Agent is found in directories which are mounted on first access. Some of the mounts will stay as long as the Application Node is operational. Others will be unmounted again if directories or files below that mount point have not been accessed for a certain period of time.

Within the LDAP database there are two types of data which relate to the automounter configuration: automountmap and automount. An automountmap is a base for automount objects. Here is how to list the automountmaps:

control1:~ # ldapsearch -x -LLL '(objectclass=automountmap)'
dn: ou=auto.flexframe,ou=automount,ou=pool2,ou=pools,ou=flexframe,dc=flexframe,dc=wdf,dc=fujitsu,dc=com
objectclass: top
objectclass: automountmap
ou: auto.flexframe
...

The base directory looks like this:

dn: cn=/flexframe,ou=auto_master,ou=automount,ou=pool1,ou=pools,ou=flexframe,dc=flexframe,dc=wdf,dc=fujitsu,dc=com
objectclass: top
objectclass: automount
cn: /FlexFrame
automountinformation: auto_flexframe

Further on there are entries like:

dn: cn=myamc,ou=auto_flexframe,ou=automount,ou=pool1,ou=pools,
 ou=flexframe,dc=flexframe,dc=wdf,dc=fujitsu,dc=com
objectClass: top
objectClass: automount
cn: myamc
automountinformation: -rw,nointr,hard,rsize=32768,wsize=32768,proto=tcp,nolock,
 vers=3 filpool1-st:/vol/volff/pool-pool1/pooldata/&

Two things have to be pointed out here. First, the ou=auto_flexframe refers to the base directory shown before. The second notable aspect of this entry is the use of the wildcard &. If the folder /FlexFrame/myAMC is accessed, the autofs process tries to mount it from the path filpool1-st:/vol/volff/pool-pool1/pooldata/myamc. If the folder myamc is found and the permissions allow the client to access it, it is mounted to /FlexFrame/myAMC. If myamc is not found or the client does not have the permissions, the folder will not be mounted. In such a case, try to mount the folder manually on a different mount point, e.g.:

an_linux:~ # mount filpool1-st:/vol/volff/pool-pool1/pooldata/myamc /mnt

If you get an error message like "Permission denied", check the exports on the NAS system and the existence of the directory myamc/ itself.

Other entries in LDAP make use of platform specifics. With Linux you can find a number of variables like ${OSNAME}/${OSDIST}/${ARCH} to make a distinction between different platforms.

dn: cn=/,ou=auto.oracle,ou=automount,ou=pool2,ou=pools,
 ou=flexframe,dc=flexframe,dc=wdf,dc=fujitsu,dc=com
objectClass: top
objectClass: automount
cn: /
description: catch-all for Linux automount
automountinformation: -rw,nointr,hard,rsize=32768,wsize=32768,proto=tcp,nolock,
 vers=3 filpool2-st:/vol/volff/pool-pool2/oracle/${OSNAME}/${OSDIST}/${ARCH}/&

On Linux, the automount mount points can be read using the following command:

an_linux:~ # mount
rootfs on / type rootfs (rw)
/dev/root on / type nfs (ro,v3,rsize=32768,wsize=32768,reserved,hard,intr,tcp,nolock,addr= )

:/vol/volFF/os/Linux/FSC_4.2A SLES-9.X86_64/var_img/varc0a80b36 on /var type nfs (rw,v3,rsize=32768,wsize=32768,reserved,hard,intr,tcp,nolock,addr= )
:/vol/volFF/os/Linux/FSC_4.2A SLES-9.X86_64/var_img/varc0a80b36/dev on /dev type nfs (rw,v3,rsize=32768,wsize=32768,reserved,hard,intr,tcp,nolock,addr= )
:/vol/volFF/os/Linux/pool_img/pool-c0a80bff on /pool_img type nfs (rw,v3,rsize=32768,wsize=32768,reserved,hard,intr,tcp,nolock,addr= )
proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw)
shmfs on /dev/shm type shm (rw)
/dev/ram on /var/agentx type ext2 (rw)
automount(pid1750) on /FlexFrame type autofs (rw)
automount(pid1772) on /saplog/mirrloga type autofs (rw)
automount(pid1752) on /home_sap type autofs (rw)
automount(pid1788) on /saplog/saplog1 type autofs (rw)
automount(pid1766) on /sapdata/sapdata5 type autofs (rw)
automount(pid1778) on /saplog/origloga type autofs (rw)
automount(pid1762) on /sapdata/sapdata3 type autofs (rw)
automount(pid1758) on /sapdata/sapdata1 type autofs (rw)
automount(pid1784) on /saplog/saparch type autofs (rw)
automount(pid1764) on /sapdata/sapdata4 type autofs (rw)
automount(pid1786) on /saplog/sapbackup type autofs (rw)
automount(pid1760) on /sapdata/sapdata2 type autofs (rw)
automount(pid1754) on /myamc type autofs (rw)
automount(pid1796) on /usr/sap type autofs (rw)
automount(pid1780) on /saplog/origlogb type autofs (rw)
automount(pid1768) on /sapdata/sapdata6 type autofs (rw)
automount(pid1792) on /saplog/sapreorg type autofs (rw)
automount(pid1776) on /saplog/oraarch type autofs (rw)
automount(pid1774) on /saplog/mirrlogb type autofs (rw)
automount(pid1770) on /sapdb type autofs (rw)
automount(pid1756) on /oracle type autofs (rw)
automount(pid1790) on /saplog/saplog2 type autofs (rw)
automount(pid1794) on /sapmnt type autofs (rw)
filpool2-st:/vol/volff/pool-pool2/pooldata on /FlexFrame/pooldata type nfs (rw,v3,rsize=32768,wsize=32768,reserved,hard,tcp,nolock,addr=filpool2-st)

The cn: parts show the mount points.
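The on-demand behavior can be observed directly: accessing a path below an autofs mount point triggers the mount. A small sketch; the myAMC directory is used here only because it appeared in the LDAP example above:

an_linux:~ # ls /FlexFrame/myAMC        # first access triggers the automount
an_linux:~ # mount | grep /FlexFrame    # the NFS mount now shows up in the list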

2.7 Storage Systems

Concerning the storage aspect of a FlexFrame landscape, there is the basic and mandatory NAS (Network Attached Storage) part and, as of version 4.0, the optional SAN (Storage Area Network) part. The central NAS storage can be a FAS system from Network Appliance (NetApp) or a Celerra Network Server from EMC. SAP database data may reside on either NAS or SAN attached storage from either Network Appliance or EMC.

Both the Network Appliance and EMC implementations of NFS (Network File System) allow the same data files to be shared by multiple hosts and thus provide a built-in cluster file system. Fujitsu is working jointly with the two partners Network Appliance and EMC in the development of the FlexFrame concept. The Network Appliance product class Filer or FAS system is an essential part of the FlexFrame infrastructure solution, and as of FlexFrame 4.0 the same is true for the EMC product class Celerra. Both product classes support the so-called nested export function, which is important for the FlexFrame implementation.

While data areas like operating systems, application software and commonly used data of the FlexFrame infrastructure have a comparatively low need for data throughput and therefore remain on NAS, where they benefit from the flexibility of the present solution, the central data areas for database files and logs (sapdata and saplog), which need high data throughput, may be shifted to SAN storage.

NAS Support

Architectural Overview

Network Appliance Filer

In FlexFrame the storage for all Application Nodes can be one or more NAS systems (Filers) from Network Appliance (or EMC, see below). The Filer can be connected to the FlexFrame system with 10Gbit/sec or 1Gbit/sec NICs. The parallel use of 1Gbit/sec and 10Gbit/sec NICs on one Filer is not possible, but you can use 1Gbit/sec NICs on one Filer and 10Gbit/sec NICs on another.

Fujitsu is working jointly with Network Appliance in the development of the FlexFrame concept. The Network Appliance product class Filer or FAS system is an essential part of the FlexFrame infrastructure solution.

The operating system of the Filer is called ONTAP. The disks are grouped into RAID groups; a combination of RAID groups makes up a volume. Starting with ONTAP 7, aggregates can also be created, and FlexVolumes can be created on top of aggregates. A Filer volume contains a file system (WAFL - Write Anywhere File Layout) and provides volumes or mount points for NFS (for UNIX systems) or CIFS (for Windows systems). The Filer has NVRAM (Non Volatile RAM) that buffers committed IO blocks. The contents

of the NVRAM remain intact if the power of the Filer should fail. The data is flushed to the disks once power is back online.

The minimal FlexFrame landscape has at least the following volumes:

- vol0 (ONTAP, configuration of the Filer)
- sapdata (database files)
- saplog (database log files)
- volff (OS images of Application Nodes, SAP and database software, pool related files)

In FlexFrame, the volume volff separates FlexFrame data (file systems of Application Nodes and other software) from the Filer's configuration and ONTAP. In larger installations, multiple sapdata and saplog volumes can be created (e.g. to separate production and QA).

Built-in Cluster File System

The Network Appliance implementation of NFS allows sharing of the same data files between multiple hosts. No additional product (e.g. a cluster file system) is required.

Volume Layout

The FlexFrame concept reduces the amount of "wasted" disk space since multiple SAP systems can optionally share the same volume of disks. As the data grows, it is easy to add additional disks and enlarge the volumes without downtime.

Snapshots

When a snapshot is taken, no data blocks are copied; only the information about where the data blocks are located is saved. If a data block is modified, it is written to a new location, while the content of the original data block is preserved (also known as "copy on write"). Therefore the creation of a snapshot is done very quickly, since only little data has to be copied. Besides that, the snapshot functionality provided by NetApp is unique in that the usage of snapshots does not decrease the throughput and performance of the storage system.

The snapshot functionality allows the administrator to create up to 250 backup views of a volume. The SnapRestore functionality provided by NetApp significantly reduces the time to restore any of these copies if required. Snapshots are named and can be renamed and deleted. Nested snapshots can be used to create e.g. hourly and daily backups of all databases.

In a FlexFrame landscape, a single backup server is sufficient to create tape backups of all volumes. Even a server-less backup can be implemented. Offline backups require only a minimal downtime of the database, because the backup to tape can be done reading from a quickly taken snapshot.
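For illustration, snapshots can be listed and created from a Control Node using the Filer's remote shell, in the same way the halt command is used elsewhere in this manual. A minimal sketch, assuming a Data ONTAP 7-mode Filer named filer1 with rsh access configured and a volume named sapdata; the snapshot name pre_maintenance is only an example:

control1:~ # rsh filer1 snap list sapdata              # show existing snapshots of volume sapdata
control1:~ # rsh filer1 snap create sapdata pre_maintenance   # create a named snapshot

Whether snapshots are taken manually or by a schedule (snap sched) should follow your backup concept.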

Filer Cluster

A Filer can be clustered to protect data against the failure of a single Filer. Switching from one Filer to its cluster counterpart is transparent to the Application Nodes. Filer clustering is a functionality of Network Appliance Filers.

EMC NAS (Celerra)

As of FlexFrame for SAP V4.0A you can use an EMC NAS (Network Attached Storage from EMC = Celerra) instead of or in addition to a NetApp Filer. In the following we use "NAS system" or "NAS" to indicate that we mean a NetApp Filer and/or an EMC NAS; in several places we use "Celerra" for EMC NAS. In FlexFrame for SAP, EMC NAS is always connected with 1Gbit/sec cabling technology.

An EMC Celerra consists of control station(s), data mover(s) and a storage system, which is a CLARiiON or a Symmetrix, depending on whether it is an integrated or a gateway variant. The EMC NAS family comprises two product lines relevant for FlexFrame:

- The NS line, available in both integrated and gateway models, offering a choice of 1, 2 or 4 data movers that attach to CLARiiON or, in the case of a gateway, to CLARiiON/FibreCAT or Symmetrix storage subsystems.
- The high-end CNS (Celerra Clustered Network Server) with a high performance, fault tolerant cluster of data movers attached to enterprise class Symmetrix storage subsystems and/or modular CLARiiON subsystems.

The minimal FlexFrame landscape has at least the following volumes:

- sapdata (database files)
- saplog (database log files)
- volff (OS images of Application Nodes, SAP and database software, pool related files)

Control Station

The control station is a management computer which controls components of the Celerra such as the data movers (see below). The control station runs RedHat Linux as its operating system, customized to its needs. The control station is able to connect to each data mover in the Celerra and send commands to them. After the data movers are booted, they do not depend on the control station for normal operation. In the unlikely event that the control station fails, the data movers continue to serve files to clients. Depending on the model, your Celerra Network Server may include an optional standby control station that takes over if the primary control station fails.

Data Mover

The data movers are the Celerra components that transfer data between the storage system and the network clients. The data mover operating system is DART (Data Access in Real Time). You do not manage a data mover directly; you use the control station to send commands to it.

A Celerra can have from 1 to 14 data movers. Normally, the data movers are named server_n, from server_2 to server_15. A data mover can be active (or primary) or, in the case of a Celerra Network Server model that supports multiple data movers, can be a standby for other data movers. You must configure one or more data movers as standby to support high availability.

A data mover communicates with the data storage systems using Fibre Channel connections. In the Celerra, this is done by a Fibre Channel HBA card that connects to the host through a Fibre Channel switch. In the Celerra NS series, storage processing is provided by two storage processors.

Cluster Architecture

The EMC Celerra product family has a distributed clustered architecture that gives these products a competitive level of scalability. The EMC Celerra NS family supports one to four data movers in a single system, and the EMC Celerra CNS can support up to 14 data movers. This scalability allows customers to grow their EMC NS and CNS environments by adding data movers to enhance bandwidth, processing power and cache memory.

The EMC Celerra implementation of NFS allows sharing of the same data files between multiple hosts. No additional product (e.g. a cluster file system) is required.

The Celerra Network Server protects against failures (service interruption or data loss) with an additional control station and one or more data movers that can take over the operations of a failed component. Each data mover is a completely autonomous file server with its own operating system. During normal operations, the clients interact directly with the data mover, not only for NFS access, but also for control operations such as mounting and unmounting file systems.

Data mover failover protects the Celerra Network Server against total or partial hardware failure of an individual data mover. You can define one data mover as standby for the others (see the Installation Guide, section "Configuration of Standby Data Mover"). This standby data mover has no MAC address, IP address or hostname of its own. In the case of a failover it takes these over from the failed data mover, so the client cannot tell whether it is connected to the original or the standby data mover. At the time of the failover, the control station also alerts the EMC service center via modem, so that an EMC technician can diagnose the problem and take corrective action.

Snapshots

Snapshot images of a file system are used to minimize backup windows and to enable quick restores of accidentally deleted or corrupted files. The EMC product for this purpose is SnapSure; it refers to these snapshot images as checkpoints. The amount of storage consumed by each checkpoint is determined only by the amount of data that was written since the last checkpoint was taken. Checkpoint images can be created on demand or scheduled according to a time or change based policy.

As an example, consider a SAP database. A checkpoint at midnight is used to quickly create a consistent point-in-time image of the file system for backup to tape. Checkpoints during business hours are used to enable quick restores of accidentally deleted or corrupted files and to minimize the window of possible data loss compared to a recovery from last night's tape backup.

Two Side Mirror Architecture

In FlexFrame environments, currently only SRDF active/passive configurations with one control station for the R1 side and the R2 side are supported. The R1 side is the active Celerra where the NAS service is running during normal operation. The R2 side is the passive Celerra to which operation is switched in case of errors in order to continue the NAS service. The configuration of both Celerras and both storage systems is done by EMC. In general, a switchback to the R1 side after a switchover to the R2 side must also be done by EMC, because after NAS errors a check of the Celerra environment and a subsequent repair of the error are required for a successful switchback to the R1 side.

Besides SRDF mirrored active data movers, an SRDF active/passive configuration can also have non-mirrored local active data movers. These are stopped in the case of a switchover, so that no further data access is possible for volumes which were mounted on these data movers. There can also be non-mirrored active local data movers on the R2 side; these are not stopped in case of a switchover and can continue to serve their actively mounted volumes.

R1 side SRDF mirrored active data movers should have a local standby data mover to protect against a local failure. For each remote mirrored data mover on the R1 side, a dedicated linked RDF standby data mover must exist on the R2 side. A minimal remote high availability SRDF active/passive configuration therefore has four data movers. For the supported and released Celerra DART software versions please consult the FlexFrame Support Matrix.

The FlexFrame Celerra NAS high availability functionality uses an ssh connection. The configuration of these ssh connections is described in the FlexFrame installation manual.

Only volumes which are mirrored through R1 devices to R2 devices can be switched over at all. Data volumes which are not protected through an SRDF R1-R2 mirroring cannot be switched over. At the moment, FlexFrame NAS high availability protects only the central volff volume, and only for the volff volume can a manually initiated or an automatic switchover be done. All other volumes which are located on that

SRDF active/passive configuration are switched implicitly together with the volff volume. There is no way to switch other data volumes in that SRDF active/passive configuration separately.

For an active data mover with an assigned local standby data mover, local standby activation is controlled by a policy parameter. Three different policies exist: manual, retry and auto. With manual, a standby activation is not done automatically; it must be done manually with an administrative command. With retry, a reboot of the failed data mover is tried first, and only afterwards is the standby data mover activation started. With auto, the standby data mover activation is done immediately. In FlexFrame, auto must be set.

FlexFrame Celerra NAS high availability also checks data access with Celerra internal commands. For that reason, no switchover is initiated for healthy data movers with disturbed data access. Such events can result from an error in the IP data mover access or from wrongly defined exports. For automatic switching, access to both control stations must be possible. The R1 control station is needed to stop a controlled R1 side data mover to avoid data inconsistencies in a split brain scenario. In some special configuration situations no switchover is done, e.g. if the standby data mover policy is manual or the SRDF disk group is not set to synchronous mode.

FlexFrame Celerra SRDF-NAS high availability is completely implemented in the FlexFrame command ff_nas_ha.pl. This command has the following operation modes: init, list, check and switchover. Each operation mode must be specified with the option --op, e.g. ff_nas_ha.pl --op list. This FlexFrame command is only installed on the Control Nodes, and only users with the user ID 0 are privileged to execute it.

After the Celerra SRDF-NAS base configuration mentioned above, as a first step a FlexFrame Celerra SRDF-NAS initialization with the operation mode init must be done. This initialization mainly creates, on each Control Node, a local parameter file that is independent of volff.

The state of a FlexFrame Celerra SRDF-NAS configuration can be determined interactively with the operation mode list. For automatic monitoring, the operation mode check can be used. This operation mode can also be executed manually; however, its main user is the myAMC FrameAgent. The FrameAgent checks the volff volume periodically. In the case of a volff error, an SNMP trap is generated. The FrameAgent is active on both Control Nodes, so both FrameAgents perform the check concurrently; this can lead to multiple traps for a single error event. If an automatic switchover is wanted, it must be configured in the FrameAgent, in contrast to the default configuration of manual intervention after a volff error. An automatic switchover is initiated if a previously executed check delivers an appropriate numeric return code.
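The documented operation modes can be combined into a simple manual check sequence. A minimal sketch; the exact output and the meaning of the individual return codes are not reproduced here:

control1:~ # ff_nas_ha.pl --op list    # show the state of the SRDF-NAS configuration
control1:~ # ff_nas_ha.pl --op check   # check data access; the numeric return code
control1:~ # echo $?                   # is what the FrameAgent evaluates

Remember that ff_nas_ha.pl must be run with user ID 0 on a Control Node.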

An SRDF-NAS switchover from the R1 side to the R2 side is done through the operation mode switchover. This operation mode is executed automatically by the FrameAgent after an appropriate check return code, provided automatic switchover is configured in the FrameAgent. The operation mode switchover can also be executed manually, whether prompted by an error or as an administrative task, e.g. before maintenance work on the R1 side Celerra. A manual switchover is carried out interactively by the caller of the switchover.

Before performing a manual switchover or switchback, the FrameAgents on both Control Nodes must be stopped; otherwise the FrameAgents will be active in parallel and the result may not be predictable!

For the configuration of and in-depth information on the FrameAgent, please refer to the corresponding FlexFrame manual (FlexFrame for SAP - myAMC.FA_Agents Installation and Administration).

SAN Support

Architectural Overview

This section provides an architectural overview of the integration of SAN-based storage in the FlexFrame environment. It describes the level of integration provided with the current release of FlexFrame as well as the components which are considered supported and those which are not explicitly supported but can be used due to the approach described here. Reading this section requires a basic technical understanding of the different SAN components (e.g. fibre channel adapters, fibre channel switches and storage components).

SAN Basic Layers

Configuring a SAN environment for one or multiple SAP databases can be a very complex task. As with many complex tasks, it becomes easier if you break it down into small pieces. To clarify the various layers of a SAN environment, we divide it along the path as seen from the disk to the database:

DISK            The real disk(s) contained in the storage subsystem (aka storage system).
RAID-GROUP      One or multiple disks grouped together based on RAID mechanisms provided by the storage subsystem.
ARRAY-LUN       LUNs as seen by the storage subsystem.
SAN-FABRIC      A SAN fabric to connect one or multiple storage subsystems with one or multiple hosts. It consists of fibre channel cabling and switches.
HBA             The HBA (host bus adapter) which connects the host to the SAN fabric.
OS-DRIVER       Basic OS drivers to make the HBA(s) work.
HOST-LUN        LUNs as seen by the host (Application Node).
MULTIPATHING    OS specific multipathing software to make multiple HBAs work as a single virtual interface.
VOLUME MANAGER  An OS specific volume manager to group multiple LUNs from the same or different storage subsystems into virtual volume groups.
FILE SYSTEM     OS specific file systems created on top of the virtual volumes provided by the volume manager.
DATABASE        The database files contained on the file system(s).

Each of the layers above can be configured in many different ways using lots of variations. Therefore not all of these layers are controlled by FlexFrame.
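As an illustration of the HOST-LUN and MULTIPATHING layers, the LUNs visible to a Linux Application Node and their path states can be inspected with standard tools. A minimal sketch, assuming the multipath-tools package is in use; this is not a statement about which multipathing software a given FlexFrame release supports:

an_linux:~ # lsscsi              # list SCSI devices, i.e. the LUNs as seen by the host
an_linux:~ # multipath -ll       # show multipath maps and the state of each path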

Scope of the FlexFrame SAN Integration

This section describes the scope of the FlexFrame SAN integration. The decisions were made based on a couple of customer projects. In order to broaden the set of "usable" SAN components in the FlexFrame environment, the scope of the project was not to integrate the full stack of hardware and software layers described above, but rather to take care of the configuration as of a certain level. Thus, installation and configuration of storage systems, SAN switches/fabrics, volume managers and even driver software is by no means automated, but left to the responsibility of the user. Still, some predefined sets of the software stack components have been tested and have explicitly gone through a quality assurance process.

As a baseline, the SAN fabric is out of scope for all administrative tools within the FlexFrame environment. Configuration of the storage system, such as configuration of disks into RAID groups, provisioning of LUNs, and zoning on FC switches, is completely left to the responsibility of the customer. However, some rules apply to the outcome of a setup, such as "all servers in one FlexFrame group (NOT pool) have to be able to access the LUNs which are relevant for the SAP databases in question at the same time". This rule and further ones are described in detail later in this document. Since FlexFrame 4.2A, this special rule is obsolete if dynamic LUN masking is used.

On the Application Nodes, a set of adapters with predefined drivers is supported, in the sense that those setups are fully described in the documents and went through a quality assurance cycle. Specific software components mentioned later are required in order to support certain features (such as multipathing). Some software packages (esp. for Linux), which are generally available, may explicitly be excluded from usage in a FlexFrame environment; this is usually due to SAP's strict rules concerning the use of non-GPL drivers in Linux systems running their software on top.

FlexFrame 5.0A comes with the utilities to optionally use a set of volume managers; however, customer specific software can easily be hooked into the environment if necessary.

The SAN integration is limited to the use for database files and logs only, meaning everything else will still be placed on NAS (NFS) storage. This explicitly includes the database software, SAP applications and the OS. Even the database files do not necessarily have to be placed on SAN based storage. One might choose to place large, productive databases with high IO requirements on SAN based storage while smaller databases remain on NAS storage. The choice is made on SID level: one SID might be on SAN, another SID on NAS. The following figures illustrate the components which can be placed on NAS and SAN storage.

(Figure: FlexFrame Components on NAS)
(Figure: FlexFrame Components on SAN)

Rules and Restrictions

In order to make things work correctly with FlexFrame, some rules for each layer must be observed. This chapter lists these rules and restrictions.

1. All Application Nodes within one pool group must be configured equally. This includes the number and type of HBAs. They must also use the same OS image.

2. All Application Nodes within one pool group must have access to the same set of LUNs of each involved storage subsystem. Each Application Node of a pool group must be able to access the same LUNs without reconfiguring the access settings of the storage subsystem. Since FlexFrame 4.2A, this rule is obsolete if dynamic LUN masking is used. With dynamic LUN masking, the access settings on the storage subsystems are dynamically modified using StorMan software, so that each Application Node gets access to the LUNs needed by the database component of a SAP system when this database instance is started on the Application Node.

3. Host-based mirroring can be used on the volume manager layer to mirror one or multiple LUNs from one storage subsystem to another (e.g. for a two-site concept). Copy mechanisms of the storage subsystems can also be used. If the failover to a different storage subsystem requires a change in the WWN and host LUN addressing of the failover LUN, this change must also be done in the FlexFrame LDAP database using the appropriate FlexFrame script.

4. A volume group has an n:1 relationship to a SID. There may be multiple volume groups for one SID, but a volume group must not contain volumes for more than one SID. The appropriate volume groups are switched from one Application Node to the next if a database is moved.

5. Access to all file systems of a SID must be tested on each Application Node of its pool group before it can be monitored and used ("watch"); see the sketch after this list.

6. A database which was installed using SAN based database files in one pool group cannot be started in a different pool group. If this is required, the database must be installed on NAS.
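One way to carry out the access test of rule 5 on an Application Node is to activate the SID's volume group and mount its file systems manually. A minimal sketch, assuming the Linux LVM volume manager is used and a hypothetical volume group vg_c11 belonging to SID C11; FlexFrame's own tooling may wrap these steps differently:

an_linux:~ # vgscan                              # discover volume groups on the visible LUNs
an_linux:~ # vgchange -ay vg_c11                 # activate the volume group of the SID
an_linux:~ # mount /dev/vg_c11/sapdata1 /mnt     # test-mount one logical volume
an_linux:~ # umount /mnt && vgchange -an vg_c11  # clean up afterwards

Deactivating the volume group again is important, since the same volume group must be switchable to the next Application Node when the database is moved.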

3 FlexFrame Basic Administration

3.1 Accessing a FlexFrame Landscape (Remote Administration)

A FlexFrame landscape can be accessed through Secure Shell (ssh, scp) connections to the Control Node. All other remote administration tools like rsh or telnet have been disabled for reasons of security.

3.2 Powering up the FlexFrame Landscape

If the complete FlexFrame landscape was powered off, the following power-on sequence is recommended:

Before an Application Node can be booted, the NTP server must be set up correctly. This can be verified by running the following command:

control1:~ # ntpq -p
     remote           refid      st t  when poll reach  delay  offset  jitter
==============================================================================
*LOCAL(0)        LOCAL(0)         5 l
 control2-se     control1-se      7 u

This command needs to be repeated until an asterisk (*) is displayed at the beginning of one of the data lines. This character indicates that the NTP server is ready. If you do not wait and continue with booting, the Application Nodes may work with a different time than the Control Nodes and may (among other possible side effects) create files with wrong time stamp information.

If this sequence is not followed and all servers are powered on at the same time, the Application Nodes will try to boot while the Control Nodes are not yet ready to receive the Application Nodes' boot requests. If this is the case, manual intervention is required to re-initiate the boot process of the Application Nodes.

3.3 Powering off the FlexFrame Landscape

If you need to power off the complete FlexFrame landscape (e.g. to move it to a different location), we recommend following the steps outlined below:

Before shutting down all SAP and DB services, check whether users, batch jobs, print jobs and RFC connections to other SAP systems have finished working. To shut down the SAP and DB services, use the following command for each pool:

control1:~ # stop_all_sapservices <pool_name>

Before you can stop the NAS system, you need to stop all processes on the Control Nodes. Since those processes are under control of the Linux-HA cluster, you have to switch all resources from Control Node 2 to Control Node 1 and then stop all resources:

control1:~ # ssh control2 /sbin/rcheartbeat stop
control1:~ # /sbin/rcheartbeat stop

Keep in mind that stopping all resources will take some time.

To stop the NAS system, use the following command. For a NetApp Filer:

control1:~ # rsh <filer_name> halt

If you do not send an explicit halt command to the Filer, the backup battery of the Filer may be drained, since the Filer assumes a power loss and tries to preserve the contents of its NVRAM. If the Filer is powered off for too long, the result can be a loss of NVRAM data and a long waiting period during the next startup of the Filer.

For an EMC Celerra (where <cel-co> is the Control LAN address of the control station of the Celerra):

control1:~ # ssh <cel-co> -l root
cel-co: # init 0

There is no explicit power-off sequence for the switches. Assuming there are no other devices connected to the switches, they may simply be unplugged after all components have been powered down.
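To verify that the Linux-HA resources are really offline before halting the NAS system, the cluster status can be queried. A minimal sketch using the standard Heartbeat status tool; the exact resource names depend on your configuration:

control1:~ # /usr/sbin/crm_mon -1    # one-shot cluster status; all resources should be stopped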

3.4 Reactivating ANs after Power Shutdown by FA Agents

FlexFrame Autonomy places an Application Node out of service if it is confronted with a problem it cannot solve. The reactions, messages and alarms which take place in this case are described in the FA Agents - Installation and Administration manual in the context of the switchover scenarios.

It is the responsibility of the administrator to analyze why the node could no longer be used. We recommend analyzing the FA log and work files, as these may supply valuable information. A node which is to start operating as a Spare Node after a switchover must be validated using suitable test scenarios.

In some cases the "placing out of service" of an Application Node may fail (e.g. when the IPMI interface has broken down), so that it cannot be decided whether the Application Node is down. In these cases - by default - the services that were running on the server are not switched over to another node. Therefore there is an additional high availability FA Agents parameter "IgnoreShutdownFailure" with impact on the failover reaction for services. Setting this parameter to the value "true" directs the FA Agents to:

- deactivate the network interfaces of the server
- if successful, automatically switch over the services that were running on the server to another node

The failed server can then only be reactivated by manually reactivating its network interfaces via the command ff_sw_ports.pl (see the chapters "Switchover Control Parameters" and "Shutdown Configuration" in the FA Agents Installation and Administration manual).

The default value "false" means that an unsuccessful server shutdown has no effect on the server's network interfaces, and the services that were running on the server are not switched over to another node.

Setting this parameter strengthens the service high availability, but the FlexFrame administrator has to be aware of its impact in certain failure scenarios. The recommendation is to set the parameter "IgnoreShutdownFailure" to "true" after installation of the FlexFrame system if the FlexFrame administrator wants to expand the high availability of services and is willing to manually reactivate the network interfaces of a server in such error cases.

3.5 Displaying the Current FlexFrame Configuration State

To obtain a general overview of an active FlexFrame system, use the FA WebGUI. In principle, the FA WebGUI can be used with every browser with SUN JAVA Plugin V1.4.1 or higher which has access to the corresponding page on the Control Node. The WebGUI can always be accessed when the Apache Tomcat service is running. This service is normally started by the Linux-HA cluster. The WebGUI is described in detail below.

The login mask expects a user name and password for authentication purposes. You can only use the WebGUI with a valid combination of user name and password. For details on the configuration of the users, see the myAMC documentation for the WebGUI.

The FA WebGUI provides a presentation of all elements of a FlexFrame system. On the left-hand side the pools, groups, nodes and the active SAP services on the individual nodes are shown in a TreeView.

The TreeView of the FA WebGUI can show either the physical view or the application-related view of the active SAP systems and their instances, or a mixed view. The panels derived from the TreeView (Application Server Panel, Application System Panel, ServiceView and MessageView) always show the objects in relation to the hierarchical level selected in the TreeView.

The FA WebGUI is thus the central cockpit for displaying the static configuration of a FlexFrame infrastructure, but it is also the display for the active SAP systems and their instances. Here, all user interactions such as startup and shutdown are shown directly. All reactions initiated by the FA Agents are displayed as well. If the FA messenger component has been configured and activated, all traps are stored in a support database. This permits the temporal sequence of the traps to be displayed very simply at pool, group or node level.

FlexFrame Web Portal

The FlexFrame Control Nodes provide a web portal with links to the Web interfaces of several FlexFrame components, i.e.:

- FA Autonomous Agents
- ServerView Operations Manager
- Cluster status

To access this portal, start firefox and enter the Control Node's IP address in the location bar. If you are directly on the Control Node you want to configure, just call firefox from the shell:

control1:~ # firefox localhost

You will see an overview of all Web interfaces installed on the Control Nodes.
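When working from a remote administration workstation, the portal can also be reached by tunneling the browser through the Secure Shell access described in section 3.1. A minimal sketch, assuming X11 forwarding is permitted on the Control Node; control1 and admin_ws stand for your Control Node and workstation:

admin_ws:~ $ ssh -X root@control1 firefox localhost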

FA Autonomous Agents

Use this tool to manage the virtualized services in your FlexFrame environment. For information on usage, please refer to the FA Autonomous Agents manuals. You can access the FlexFrame Autonomous Agents WebGUI directly from the active Control Node by entering the corresponding URL. This only works if the Jakarta Tomcat web server is running. If it is not running, check the Linux-HA cluster configuration.

State of Pools

The active configured pools are displayed in the FA WebGUI. The nodes or systems belonging to a pool are displayed in the FA WebGUI TreeView. Each node element in the tree which represents a pool is identified by the prefixed keyword Pool.

State of Application Nodes

Each Application Node with a running FA Application Agent is shown in the FA WebGUI. It is shown in the Nodes TreeView with its host name (Linux), and also in the Application Server Panel. In the Application Server Panel, the data displayed depends on the hierarchical level selected in the TreeView.

State of SAP Systems

The active SAP system IDs can be displayed very easily in the Systems TreeView of the FA WebGUI and also in the SAP System Panel. The SAP System Panel is shown in parallel to the Application Server Panel. In the SAP System Panel, the data displayed depends on the hierarchical level selected in the TreeView.

State of SID Instances

The active instances of a SID can be displayed very simply in the InstancesView of the FA WebGUI by clicking on a SID in the SAP System Panel. For each view, information is provided in tabular form specifying the service's current pool, group, node, priority and status.

Networks

The network of FlexFrame is its backbone. Here are some tips to get an overview of the current situation on the various networks. To double-check the network addresses, their names and pool assignment, you can use the getent command:

control1:~ # getent networks
loopback
control
storage_pool1
server_pool1
client_pool1
storage_pool2
server_pool2
client_pool2

The loopback network is local for each host and always has the IP address 127.0.0.0. The control network is the Control LAN network segment for the complete FlexFrame landscape. In the example we have configured two pools called pool1 and pool2. For each pool there are three dedicated and distinct segments: storage, server and client. The building rule for the network name is <segment>_<pool_name>.
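The same name service mechanism resolves host names, so pool specific interface names can be checked in the same way. A small sketch; blade01-st is an example Storage LAN host name taken from the naming scheme used in this manual:

control1:~ # getent hosts blade01-st    # prints the IP address and name if resolvable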

On the Control Nodes you can see the relation of each pool specific segment to its interface using the netstat -r command:

control1:~ # netstat -r
Kernel IP routing table
Destination     Gateway   Genmask   Flags  MSS Window  irtt Iface
server_pool2    *                   U                       vlan42
control         *                   U                       bond0
server_pool1    *                   U                       vlan32
storage_pool2   *                   U                       vlan41
client_pool1    *                   U                       vlan30
storage_pool1   *                   U                       vlan31
client_pool2    *                   U                       vlan40
default         gw216p              UG                      vlan30

Here you can quickly see that the Server LAN segment of pool2 (server_pool2) is using the VLAN ID 42 on interface vlan42. Note that the VLAN ID of the Control LAN segment cannot be seen, because that VLAN is native on bond0.

ServerView Operations Manager

ServerView Operations Manager allows you to query information on PRIMERGY servers in the FlexFrame environment. This tool is not needed for the administration of FlexFrame but is required by Fujitsu support. If you would like to monitor the hardware status of PRIMERGY servers in your FlexFrame environment, click on ServerView and on the next page, click on the button labeled start:

Please note that SSL connections are not supported.

Beginning with FlexFrame 4.0, the ServerView database for configuration, traps and server lists is no longer shared across both Control Nodes. Each Control Node runs its own instance with its own configuration.

An overview of all configured servers (initially only the local host) will be shown on the main screen.

To add more servers, open the Server Browser by either clicking on Administration / Server Browser in the top navigation bar or right-clicking anywhere in the list and selecting New Server from the context menu.

The Server Browser window will be opened. The easiest way to add servers is to scan a subnet for each pool, for example the Server subnet for Application Nodes and the Control subnet for management blades.

To scan a subnet, enter the first three bytes of the network address in the Subnet input field on the bottom left of the screen and click on Start Browsing. The browsing process is finished when the caption of the Stop Browsing button changes back to Start Browsing.

To add all manageable servers, right-click anywhere on the list and click on Select Manageables.

After clicking on Select Manageables, all servers with a known type will be selected. To finally add the selected servers, click on Apply in the upper right corner of the Server Browser window. After closing the Server Browser, the main window containing the server list will show the added servers. Further servers can be added by repeating the previous steps with a different subnet address.

To view the state of a single server, click on the blue server name. To view the state of a Server Blade, click on the Management Blade / Blade Center, then select the desired blade and click on ServerView in the bottom navigation bar.

Events such as SNMP traps can be viewed by navigating to Event Management in the top navigation bar. This opens the ServerView Alarm Service, which is not described in detail here. To monitor SNMP traps, we recommend using the FA WebGUI / myAMC Messenger.


3.6 Cluster Status

A short overview of the cluster status is available from the FlexFrame Web Portal.

3.7 FlexFrame Backup with Tape Library

NetWorker

A dedicated backup server is used for running NetWorker. NetWorker is used as the backup tool, including the database module NetWorker Module PLUS for Oracle. In the case of a NetApp Filer as NAS device, this concept is based on snapshots and uses the NDMP protocol for transferring data from the NetApp Filer directly to the tape library, as shown in the example below.

Configuration Example for Oracle on NetApp Filer:

In the graphic, the * means: NetWorker Module PLUS for Oracle for NDMP NetApp.

Detailed information on FlexFrame for SAP backup with NetWorker, such as

- implementation and configuration of the NetWorker backup solution for Oracle (at great length)
- slide sets for different target groups like marketing and sales
- white papers

is provided by Fujitsu Storage Consulting (see Best Practices / Description Papers).

ARCserve

Detailed information on a backup solution with CA ARCserve is available at arc_tech_slides.

4 Pools and Groups

4.1 Adding a Pool

A new pool can be added using ff_pool_adm.pl. Some parameters have to be defined on the command line. They are used to configure switch VLANs and ports, to create the NAS system volume folder structures, to create the LDAP pool subtree and to configure the Control Nodes.

An ADMINPOOL (see chapter 6) is created if pool_name is adminpool or force_admin is given. Only one ADMINPOOL is allowed in a FlexFrame system.

Adding a pool changes the exports file on all given NAS systems. Temporary exports on these NAS systems will be gone after running ff_pool_adm.pl. Be sure not to have temporary exports.

If the pool is to be equipped with Application Nodes of type BX600 (blade servers in a blade server cabinet with switch blades of type 1GbE-10/6-Q (SB9a)), then the following restriction concerning the VLAN ID has to be considered: switch blades of type 1GbE-10/6-Q only allow the VLAN ID range from 1 to 3965!

Creating a new pool establishes a "Client LAN to Corporate LAN" (CLAN) switch configuration. Two global parameters are used to determine the type of this CLAN. The settings are originally given by the Management Tool (MT, see the manual "Management Tool", chapter "Network Object") and may be changed by ff_swgroup_adm.pl --op parameters (SWGADM, see chapter 9 of this manual). The parameters are:

One ClientLAN per SWP (true/false) at MT, or clanportpervlan (yes/no) at SWGADM
  true/yes: Use an independent switch port pair for each pool Client LAN. These ports are configured as native access ports.
  false/no: All pool Client LANs use the same switch port pair. Both ports are configured as Cisco trunk ports with tagged VLANs.

Use 1Gbit SWPs for ClientLANs at MT, or usetxtoclan at SWGADM
  true/yes: Use copper twisted pair ports as CLAN ports.
  false/no: Use fiber optic (SFP) ports as CLAN ports if available, else use copper ports.

So if the original global setting does not match your current wish for wiring the new pool, you can change it temporarily with the two corresponding ff_swgroup_adm.pl commands.

Synopsis

ff_pool_adm.pl --op add --name <pool_name>
  --storage <vlan_id>,<network_ip>,<netmask>
  --server <vlan_id>,<network_ip>,<netmask>
  --client <vlan_id>,<network_ip>,<netmask>
  --dns <domain_name>[,<dns_server_ip>[,<dns_server_ip>]
    --defrouter <default_router_ip>]
  [--dns_search_list <list_of_domains>]
  [--sapdata <nas_name>[,<volume_path>]]
  [--saplog <nas_name>[,<volume_path>]]
  [--volff <nas_name>[,<volume_path>]]
  [--volff_common <nas_name>[,<volume_path>]]

  [--defrouter <default_router_ip>]
  [--switchgrp <id>[,<id>]]
  [--force_admin]

Options

--op add
  Adds a pool and displays information about processing steps, errors and warnings.

--name <pool_name>
  Name of the new pool (has to be unique within the entire FlexFrame; the maximum length is 13 characters). The pool name adminpool always creates an ADMINPOOL.

--storage <vlan_id>,<network_ip>,<netmask>
  Pool specific Storage LAN network segment. The option is followed by a comma separated list of the VLAN ID, the network IP and the netmask of the network IP address.

--server <vlan_id>,<network_ip>,<netmask>
  Pool specific Server LAN network segment. The option is followed by a comma separated list of the VLAN ID, the network IP and the netmask of the network IP address.

--client <vlan_id>,<network_ip>,<netmask>
  Pool specific Client LAN network segment. The option is followed by a comma separated list of the VLAN ID, the network IP and the netmask of the network IP address.

--dns <domain_name>[,<dns_server_ip>]
  DNS domain name and servers to be used for this pool. More than one DNS server IP address may be given. Keep in mind to use the default router option if server IP addresses are given. A DNS option may look like this: my.domain.com,<dns_server_ip>,<dns_server_ip>

--dns_search_list <list_of_domains>
  List of DNS domain names to be used for this pool. At most six domain names are allowed. For technical reasons the first DNS domain name of this list must always be the DNS domain name given in option --dns.

--sapdata <nas_name>[,<volume_path>]
  Optional NAS name and volume path the pool should use for sapdata. A missing volume path is auto-filled with the default (/vol/sapdata/<pool_name>), e.g. filer1,/vol/sapdata/pool1. The entire option defaults to the common NAS name with the default path. <nas_name> is the NAS system's node name for this pool (without -st suffix).

--saplog <nas_name>[,<volume_path>]
  Optional NAS name and volume path the pool should use for saplog. A missing volume path is auto-filled with the default (/vol/saplog/<pool_name>), e.g. filer1,/vol/saplog/pool1. The entire option defaults to the common NAS name with the default path. <nas_name> is the NAS system's node name for this pool (without -st suffix).

--volff <nas_name>[,<volume_path>]
  Optional NAS name and volume path the pool should use for volff. A missing volume path is auto-filled with the default (/vol/volff/pool-<pool_name>), e.g. filer1,/vol/volff/pool-pool1. The entire option defaults to the common NAS name with the default path. <nas_name> is the NAS system's node name for this pool (without -st suffix).

--volff_common <nas_name>[,<volume_path>]
  Optional NAS name and volume path the pool should use for common volff data. A missing volume path is auto-filled with the default (/vol/volff), e.g. filer1,/vol/volff. The entire option defaults to the common NAS name with the default path. <nas_name> is the NAS system's node name for this pool (without -st suffix). It has to be the first NAS system of the FlexFrame landscape.

--defrouter <default_router_ip>
  The default router is a gateway to route IP data to other, non-pool-local networks. All IP data that cannot be addressed to a local network is sent to the default router to be forwarded to the destination network. The option parameter is the IP address of this default router. Use a default router IP address matching one of the local pool networks, because otherwise it will not be accessible by Application Nodes.

--switchgrp <id>[,<id>]
  The switch group ID(s) on which the Client LAN to Corporate LAN ports should be configured. If not given, the client VLAN is assigned to the existing trunk ports or to a new port pair on the first two switch groups. No more than two switch group IDs are accepted.

--force_admin
  This pool (with any pool_name) should be the ADMINPOOL.

Example

cn1:~ # ff_pool_adm.pl --op add --name pool4 \
  --storage 30,<network_ip>,<netmask> --server 31,<network_ip>,<netmask> \
  --client 32,<network_ip>,<netmask> --sapdata filer --saplog filer \
  --volff filer --volff_common filer --dns my.domain.com \
  --defrouter <default_router_ip>
update LDAP...
update switch 1/1 configuration
Notice: Update will take about 1 minute.
vlan: storage-30 has been created
restart cluster service ldap_srv1
Notice: restart will take up to 1 minute.
stop and wait until service is offline
start and wait until service is online
restart cluster service ldap_srv2
Notice: restart will take up to 1 minute.
stop and wait until service is offline
start and wait until service is online
restart cluster service netboot_srv
Notice: restart will take up to 2 minutes.
stop and wait until service is offline
start and wait until service is online

If no warnings or errors are reported, all precautions are done and the pool was successfully created. Use ff_poolgroup_adm.pl to define the host groups of this pool to be able to add Application Nodes.

See /tmp/pool-pool4/ff_pool_adm.errlog for the complete error and warning log.

4.2 Removing a Pool

A pool can be removed using ff_pool_adm.pl. Some parameters have to be defined on the command line. Switch VLANs will be removed and the affected ports reconfigured. The LDAP pool subtree will be removed and the Control Node configurations rewritten.

The pool created with the attribute "CN Pool = true" may not be removed by an administration command, since this pool is used to mount file systems from the NAS device.

A pool may not be removed if any Application Node or SID is defined. The first pool may not be removed due to system requirements.

Removing a pool changes the exports on all NAS systems used by this pool. Use the list or list-all operation to get the storage configuration of the pool to be removed. Temporary exports (not written to the exports file /vol0/etc/exports) on these NAS systems will be gone after running ff_pool_adm.pl. Be sure not to have temporary exports.

Synopsis

ff_pool_adm.pl --op rem --name <pool_name>

Options

--op rem
  Removes a pool and displays only errors and warnings.

--name <pool_name>
  Name of the pool to be removed. Use ff_pool_adm.pl --op list-all to get a list of currently configured pools (see 4.4).

Example

cn1:~ # ff_pool_adm.pl --op rem --name pool4
update LDAP...
update switch 1/1 configuration
Notice: Update will take about 1 minute.
restart cluster service ldap_srv1
Notice: restart will take up to 1 minute.
stop and wait until service is offline
start and wait until service is online
restart cluster service ldap_srv2
Notice: restart will take up to 1 minute.

stop and wait until service is offline
start and wait until service is online
restart cluster service netboot_srv
Notice: restart will take up to 2 minutes.
stop and wait until service is offline
start and wait until service is online

If no warnings or errors are reported, the pool was successfully removed. Keep in mind that the volumes and their data are not harmed; it is up to you to remove them.

See /tmp/pool-pool4/ff_pool_adm.errlog for the complete error and warning log.

4.3 Listing Pool Details

To list the configuration details of a pool, such as used networks, pool groups, SIDs and Application Nodes, the maintenance tool ff_pool_adm.pl can be used. The pool name has to be given on the command line.

Synopsis

ff_pool_adm.pl --op list --name <pool_name> [--list <part>[,<part>]]

Options

--op list
  Displays pool configuration details.

--name <pool_name>
  Name of the pool to be listed. Use ff_pool_adm.pl --op list-all to get a list of currently configured pools (see 4.4).

--list <part>[,<part>]
  To reduce the output to the interesting parts, use this option. The parameters to this option are the display sections, given as a comma separated list. The default sections are: network, storage, dns, group, sid, an, cn, nas. You may also use a two character abbreviation instead of the full section name, like ne for network.

Examples

cn1:/opt/flexframe/bin # ff_pool_adm.pl --op list --name p1
Pool configuration details of pool p1
 Networks
  Client-LAN   Network:           Netmask:           VLAN ID: 100
  Server-LAN   Network:           Netmask:           VLAN ID: 110
  Storage-LAN  Network:           Netmask:           VLAN ID: 120
  Def.Router:
 Storage Volumes
  sapdata       fas01-p1-st:/vol/sapdata/p1
  saplog        fas01-p1-st:/vol/saplog/p1
  volff         fas01-p1-st:/vol/volff/pool-p1
  volff shared  fas01-p1-st:/vol/volff
 DNS data
  Domain Name: my.domain.com
 Pool Groups
  Linux  OS: SUSE Linux SLES-9.X86_64
 SIDs and their instances
  D01  SAP Version: SAP-6.20  DB Version: Oracle-9
   Instances
    Type db
     ID 0   Server-LAN: dbd01-se
    Type ci
     ID 9   Client-LAN: cid      Server-LAN: cid01-se
    Type app
     ID 10  Client-LAN: app10d   Server-LAN: app10d01-se
     ID 11  Client-LAN: app11d   Server-LAN: app11d01-se
  P01  SAP Version: SAP-6.20  DB Version: Oracle-9
   Instances
    Type db
     ID 0   Server-LAN: dbp01-se
    Type ci

     ID 0   Client-LAN: cip      Server-LAN: cip01-se
    Type app
     ID 1   Client-LAN: app01p   Server-LAN: app01p01-se
     ID 3   Client-LAN: app03p   Server-LAN: app03p01-se
     ID 4   Client-LAN: app04p   Server-LAN: app04p01-se
     ID 5   Client-LAN: app05p   Server-LAN: app05p01-se
  Q01  SAP Version: SAP-6.20  DB Version: Oracle-9
   Instances
    Type db
     ID 0   Server-LAN: dbq01-se
    Type ci
     ID 6   Client-LAN: ciq      Server-LAN: ciq01-se
    Type app
     ID 7   Client-LAN: app07q   Server-LAN: app07q01-se
     ID 8   Client-LAN: app08q   Server-LAN: app08q01-se
 Application Nodes
  blade01  Type: BX600  Cabinet ID: 1  Slot/Partition ID: 1
   OS: SUSE Linux SLES-9.X86_64  Group: Linux
   Client-LAN   blade
   Server-LAN   blade01-se
   Storage-LAN  blade01-st
  blade02  Type: BX600  Cabinet ID: 1  Slot/Partition ID: 2
   OS: SUSE Linux SLES-9.X86_64  Group: Linux
   Client-LAN   blade
   Server-LAN   blade02-se
   Storage-LAN  blade02-st
  blade03  Type: BX600  Cabinet ID: 1  Slot/Partition ID: 3
   OS: SUSE Linux SLES-9.X86_64  Group: Linux
   Client-LAN   blade

Server-LAN blade03-se
Storage-LAN blade03-st
rx801 Type: RX800
OS: SUSE Linux SLES-9.X86_64 Group: Linux
Client-LAN rx
Server-LAN rx801-se
Storage-LAN rx801-st
Control Nodes
cn1
Client-LAN cn1-p
Server-LAN cn1-p1-se
Storage-LAN cn1-p1-st
cn2
Client-LAN cn2-p
Server-LAN cn2-p1-se
Storage-LAN cn2-p1-st
NAS Nodes
fas01-p1
Storage-LAN fas01-p1-st

A sample with a reduced output:

cn1:/opt/flexframe/bin # ff_pool_adm.pl --op list --name p1 --list ne,gr
Pool configuration details of pool p1
Networks
Client-LAN Network: Netmask: VLAN ID: 100
Server-LAN Network: Netmask: VLAN ID: 110
Storage-LAN Network: Netmask: VLAN ID: 120
Def.Router:
Pool Groups
Linux OS: SUSE Linux SLES-9.X86_64

4.4 Listing All Pools

To display an overview of all pools with their used networks, pool groups, SIDs and Control Node and NAS system interfaces, the maintenance tool ff_pool_adm.pl can

be used. No arguments except the operation mode have to be defined on the command line.

Synopsis

ff_pool_adm.pl --op list-all [--list <part>[,<part>]]

Options

--op list-all
Displays the configuration details of all configured pools.

--list <part>[,<part>]
Use this option to reduce the output to the parts of interest. The parameters to this option are the display sections. Add them as a comma-separated list. The default sections are: network,storage,group,sid,cn,nas system. You may also use a two-character abbreviation instead of the full section name, like ne for network.

Examples

cn1:/opt/flexframe/bin # ff_pool_adm.pl --op list-all
Pool configurations
p1
Pool Networks
Client-LAN Network: Netmask: VLAN ID: 100
Server-LAN Network: Netmask: VLAN ID: 110
Storage-LAN Network: Netmask: VLAN ID: 120
Pool Storage Volumes
sapdata fas01-p1-st:/vol/sapdata/p1
saplog fas01-p1-st:/vol/saplog/p1
volff fas01-p1-st:/vol/volff/pool-p1
volff shared fas01-p1-st:/vol/volff
Pool Groups
Linux OS: SUSE Linux SLES-9.X86_64
Pool SIDs
D01 SAP Version: SAP-6.20 DB Version: Oracle-9
P01 SAP Version: SAP-6.20 DB Version: Oracle-9

88 Pools and Groups Listing All Pools Q01 SAP Version: SAP-6.20 DB Version: Oracle-9 Pool Control Node Interfaces cn1 Client-LAN cn1-p Server-LAN cn1-p1-se Storage-LAN cn1-p1-st cn2 Client-LAN cn2-p Server-LAN cn2-p1-se Storage-LAN cn2-p1-st Pool NAS Node Interfaces fas01-p1 Storage-LAN fas01-p1-st A sample with a reduced output on a single pool configuration: cn1:/opt/flexframe/bin # ff_pool_adm.pl --op list-all --list sid,group Pool configurations p1 Pool Groups Linux OS: SUSE Linux SLES-9.X86_64 Pool SIDs D01 SAP Version: SAP-6.20 P01 SAP Version: SAP-6.20 Q01 DB Version: Oracle-9 DB Version: Oracle-9 76 Administration and Operation

4.5 Changing Pool DNS Domain

A pool's DNS configuration can be changed using the maintenance tool ff_pool_adm.pl. The DNS configuration is written to /FlexFrame/volFF/pool-<pool_name>/pooldata/config/etc/resolv.conf.

The DNS configuration is written from scratch; therefore all domain names and servers to be used for this pool have to be given in the options. Any modification in resolv.conf done by the customer is lost.

Synopsis

ff_pool_adm.pl --op dns --name <pool_name>
  --dns <dns_domain>[,<dns_server_ip>[,<dns_server_ip>] --defrouter <default_router_ip>]
  [--dns_search_list <list_of_domains>]

Options

--op dns
Adds or changes the DNS configuration of a pool.

--name <pool_name>
Name of the pool whose DNS configuration has to be changed. Use ff_pool_adm.pl --op list-all to get a list of currently configured pools (see 4.4).

--dns <dns_domain>[,<dns_server_ip>]
DNS domain name and servers to be used for this pool. If one or more dns_server_ip(s) are given, a default router has to be given too.

--dns_search_list <list_of_domains>
List of DNS domain names to be used for this pool. At most six domain names are allowed. For technical reasons the first DNS domain name of this list always must be the DNS domain name given in option --dns.
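The example below shows only the simplest case. As an additional, purely illustrative sketch (pool name, domain names and IP addresses are hypothetical), external DNS servers can be combined with a search list; note that giving server IPs makes --defrouter mandatory, and that the first search-list entry must repeat the --dns domain:

cn1:~ # ff_pool_adm.pl --op dns --name pool3 \
         --dns pool3.example.com,192.0.2.53,192.0.2.54 \
         --defrouter 192.0.2.1 \
         --dns_search_list pool3.example.com,example.com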

Example

cn1:~ # ff_pool_adm.pl --op dns --name pool3 --dns pool3.wdf.fsc.net
DNS domain successfully changed at LDAP and AN images.

4.6 Changing Pool Default Router

Add or change a pool's default router with opcode defrouter; remove a pool's default router with opcode rem-defrouter. The default router may be necessary to reach external DNS servers. In all cases the default router has to have an IP address matching one of the pool's IP networks (in detail, the server or client LAN). The default router is used to transfer network traffic between the local IP networks and external IP networks.

Synopsis

ff_pool_adm.pl --op defrouter --name <pool_name> --defrouter <default_router_ip>
ff_pool_adm.pl --op rem-defrouter --name <pool_name>

Options

--op defrouter
Adds or changes the default router configuration of a pool.

--op rem-defrouter
Removes the default router configuration of a pool.

--name <pool_name>
Name of the pool whose default router configuration has to be changed or removed, respectively. Use ff_pool_adm.pl --op list-all to get a list of currently configured pools (see 4.4).

--defrouter <default_router_ip>
The default router to be used for this pool to communicate with other, non-local networks. The IP address has to match the client or server pool network.

Example

cn1:~ # ff_pool_adm.pl --op defrouter --name pool3 --defrouter
Defaultrouter successfully changed at LDAP and AN images.
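Removing the default router again is symmetrical; a hypothetical invocation (the pool name is an example) would be:

cn1:~ # ff_pool_adm.pl --op rem-defrouter --name pool3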

4.7 Adding a Group to a Pool

A pool may have more than one group. To add a group to a pool, the maintenance tool /opt/flexframe/bin/ff_poolgroup_adm.pl can be used. To associate a group with a pool, some parameters have to be defined on the command line. Groups are used with the FlexFrame Autonomous Agents.

In the ADMINPOOL only a group named SPARE is allowed. This group acts exclusively as a container for all types of spare nodes. When adding the group SPARE, the options --ostype, --osversion and --osvendor are not necessary.

The group is added to a pool in the LDAP database.

Synopsis

ff_poolgroup_adm.pl --op add --pool <pool_name> --group <group_name>
  [--ostype {Linux} --osversion <version_string> [--osvendor {SUSE}]]

Options

--op add
Adds a group to a pool.

--pool <pool_name>
Name of the pool the group should be added to. We recommend using short lowercase names for <pool_name>.

--group <group_name>
The name of the group to be added.

--ostype {Linux}
Type of operating system (OS) the systems of this group work with. Currently only Linux is a valid choice.

--osversion <version_string>
The version of the OS.

--osvendor {SUSE}
The vendor of the OS. Currently only SUSE (Linux) is supported.
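The manual gives no example for this command; as a hypothetical sketch (pool, group and version strings are examples only), a production group and an ADMINPOOL spare group might be added like this:

cn1:~ # ff_poolgroup_adm.pl --op add --pool pool1 --group web \
         --ostype Linux --osversion SLES-10.X86_64 --osvendor SUSE
cn1:~ # ff_poolgroup_adm.pl --op add --pool adminpool --group SPARE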

4.8 Removing Pool Group

To remove a group from a pool, use the maintenance tool /opt/flexframe/bin/ff_poolgroup_adm.pl. You have to define the pool and the group name on the command line. The group is removed from the pool in the LDAP database.

Synopsis

ff_poolgroup_adm.pl --op rem --pool <pool_name> --group <group_name>

Options

--op rem
Removes a group from a pool.

--pool <pool_name>
Name of the pool the group should be removed from.

--group <group_name>
The name of the group to be removed.

4.9 Changing Group Assignment of Application Nodes

Change the assigned pool group of an Application Node with /opt/flexframe/bin/ff_an_adm.pl. Command line arguments are the name of the Application Node whose pool group should be changed and the name of the new pool group. The command changes the pool group of the Application Node in the LDAP database. The configuration of the FA Agents currently has to be changed manually.

Synopsis

ff_an_adm.pl --op group --name <node_name> --group <group_name>

Options

--op group
Changes the pool group of an Application Node.

--name <node_name>
Name of the Application Node whose pool group has to be changed.

--group <group_name>
The name of the pool group the Application Node should be assigned to. Use ff_pool_adm.pl --op list-all to get the available pool groups (see 4.4).

4.10 Changing Group and Pool Assignment of Application Nodes

There is currently no maintenance tool to do this. The recommended way is to remove an Application Node and add it with the new pool and group name.

4.11 Hosts Database

It may become necessary to have additional entries in the hosts database. Those entries may be required by 3rd-party products installed on customized Application Node images. The hosts database is stored in LDAP. To maintain the additional host entries, use the command ff_hosts.sh. You cannot remove names or addresses which are essential to the FlexFrame landscape.

Each pool has its own hosts database. Therefore you have to maintain each pool individually.

Script: ff_hosts.sh

This tool allows the administrator to list, add and delete host names or aliases in the LDAP database.

Note: When adding host names with an IP address that matches one of the pool's network segments or the Control LAN segment, the list of IP addresses for that segment gets extended by the IP address of this host name to prevent automatic allocation of the same IP address by other FlexFrame tools.

Only one of the options -l, -a or -r can be used at one time.

Synopsis

ff_hosts.sh [-d] -p <pool_name> [{-l | -a <ip> -n <name> | -r <name>}]

Options

-d
This option will log debug information.

-l
List all the host entries for the pool as provided by option -p.

-p <pool_name>
Pool the host name should be added to.

-a <ip>
Add the IP address <ip>. Has to be used together with option -n. If an entry with the IP address <ip> already exists, the name provided will be added as an alias.

-n <name>
Host name <name> will be added to the list.

-r <name>
Deletes the host name or alias <name>. The host name cannot be deleted if it has additional aliases. Remove the aliases first.

Examples

The following example will list all additional hosts entries for pool poolname created using this tool:

cn1:~ # ff_hosts.sh -l -p poolname

The following example will add a host newhost with the IP address <ip> to the pool poolname:

cn1:~ # ff_hosts.sh -p poolname -a <ip> -n newhost

The following example will remove the hosts entry for newhost:

cn1:~ # ff_hosts.sh -p poolname -r newhost

5 User and Groups Administration

5.1 Create, Modify, Delete, or List User(s) for Application Nodes

Synopsis

/opt/flexframe/bin/ff_user_adm.pl --op add --user <user_name>
  --pool <pool_name> [--group {<group_name>|<group_id>},...]
  [--uid <uid_number>] [--home <home_directory>] [--pass <passwd>]
  [--shell <login_shell>] [--gecos <text>]
  [--shadowmin <number_of_days>] [--shadowmax <number_of_days>]
  [--shadowwarn <number_of_days>]

/opt/flexframe/bin/ff_user_adm.pl --op mod --user <user_name>
  --pool <pool_name> [--pass <passwd>] [--shell <login_shell>]
  [--gecos <text>] [--shadowmin <number_of_days>]
  [--shadowmax <number_of_days>] [--shadowwarn <number_of_days>]
  [--home <home_directory>]
  [--group {<primary_group_name>|<primary_group_id>}]
  [--uid <uid_number>]

/opt/flexframe/bin/ff_user_adm.pl --op rem --user <user_name>
  --pool <pool_name> [--shell <login_shell>]

/opt/flexframe/bin/ff_user_adm.pl --op list --user <user_name>
  --pool <pool_name> [--shell <login_shell>]

/opt/flexframe/bin/ff_user_adm.pl --op list-all --pool <pool_name>

Options

--op add --user <user_name> --pool <pool_name>
Creates a new user in a given pool.

--op mod --user <user_name> --pool <pool_name>
Modifies a user in a given pool.

--op rem --user <user_name> --pool <pool_name>
Deletes a user in a given pool.

--op list --user <user_name> --pool <pool_name>
Displays information of a user in a given pool.

--op list-all --pool <pool_name>
Displays information of all users in a given pool.

--group {<group_name>|<group_id>},...
A comma-separated list of group names/IDs the user belongs to. The first group is taken as the primary group. Default: GID number 1 (= other).

--group {<primary_group_name>|<primary_group_id>}
The affiliation to the primary group can be changed with this option.

--uid <uid_number>
A certain UID number, if desired. Default: the first free number >= . In connection with the mod operation the existence of the UID is not checked. For specific SAP user IDs we recommend using a number >= 3600, following SAP conventions.

--home <home_directory>
Home directory of the user. If this option is not used, the home directory is set to /FlexFrame/pooldata/home/<user> and created if it does not exist. The following rules are valid if the option is used:

<home_directory> ::= <name-1>/<name-2> | <name-3>
Using <name-1>/<name-2> or <name-3> creates a directory in /FlexFrame/pooldata/<appropriate_dir> if the appropriate directory does not exist and sets the home directory to /FlexFrame/pooldata/<appropriate_dir>.

<home_directory> ::= /home_sap/<name-1> | /<name-2>/<name-3>
Using /home_sap/<name-1> sets the home directory to /home_sap/<name-1> and creates the directory if it does not exist. Using /<name-2>/<name-3> sets the home directory to /<name-2>/<name-3> but does not create the appropriate directory.

--pass <passwd>

Login password for the user. Default: password.

--shell <login_shell>
Login shell of the user. Default: /bin/csh. In connection with the mod operation the existence of the login shell is not checked.

--gecos <text>
Some comment used for the user's gecos attribute. Default: Normal user. This replaces the previously used option --comment.

--shadowmin <number_of_days>
Minimum number of days until the password can be changed.

--shadowmax <number_of_days>
Maximum number of days until the password has to be changed.

--shadowwarn <number_of_days>
Number of days you get a warning message before the password expires.

5.2 Creating, Modifying, Deleting or Listing Group(s) for Application Nodes

This command enables you to extend the operating system group entries stored in the LDAP database. The modification or deletion of group entries is restricted to those entries which were created with ff_group_adm.pl.

Synopsis

ff_group_adm.pl --op add --name <osgroup_name> --pool <pool_name>
  [--gid <group_id>] [--member <member,...>] [--gmember <member,...>]
  [--text <description>] [--force] [--fname <filename>]

ff_group_adm.pl --op mod --name <osgroup_name> --pool <pool_name>
  [--gid <group_id>] [--member <member,...>] [--gmember <member,...>]
  [--force] [--fname <filename>]

ff_group_adm.pl --op rem --name <osgroup_name> --pool <pool_name>
  [--member <member,...>] [--gmember <member,...>] [--fname <filename>]

ff_group_adm.pl --op list --name <osgroup_name> --pool <pool_name>

ff_group_adm.pl --op list-all [--name <osgroup_name>] [--pool <pool_name>]

ff_group_adm.pl --help

Options

--op add
Creates a new group in a given pool.

--op mod
Modifies a group in a given pool.

--op rem
Deletes a group in a given pool.

--op list
Displays information of group entries in a given pool.

--op list-all
Displays information of all group entries in a given pool or in all pools.

--name <osgroup_name,...>
A comma-separated list of group names.

--pool <pool_name>
Name of the pool the group should belong to.

--gid <gid_number>
A certain GID number, if desired. If a list of groups is specified, it is the start GID number for the first group. For the next group names the value is always incremented by one. If the increment causes a conflict with an already existing GID, the script looks for the next free GID. You can modify the GID with operation mod, but you are responsible for the effects. It is also not possible to use a list of groups if you want to change a certain GID.

--member <member,...>
A comma-separated list of user names which should belong to this group. No check is done whether the user really exists.

--gmember <member,...>
A comma-separated list of user names which is inserted into the group members list of a group. This option requires a modified flexframe.schema file. Group members are usually used with a DB2 installation.

--text <description>
A user-specific description of the group entry.

--force
The add operation continues with the next group if any add operation before terminates unexpectedly.

--fname <file_name>
Name of the file to store the used LDIF statements for this request.

--help
Shows the usage of the command.

5.3 Creating, Modifying, Deleting or Listing Service(s) for Application Nodes

This command enables you to extend the service entries stored in the LDAP database. The modification or deletion of service entries is restricted to those entries which were created with ff_services_adm.pl.

Synopsis

ff_services_adm.pl --op add --name <service_name,...> --pool <pool_name>
  [--port <port_number>] [--prot {udp|tcp}] [--text <description>]
  [--force] [--fname <filename>]

ff_services_adm.pl --op mod --name <service_name> --pool <pool_name>
  [--port <port_number>] [--prot {udp|tcp}] [--force] [--fname <filename>]

ff_services_adm.pl --op rem --name <service_name,...> --pool <pool_name>

ff_services_adm.pl --op list --name <service_name,...> --pool <pool_name>

ff_services_adm.pl --op list-all [--name <service_name>] [--pool <pool_name>]

ff_services_adm.pl --help

Options

--op add
Creates new services in a given pool.

--op mod
Modifies a service in a given pool.

--op rem
Deletes services in a given pool.

--op list
Displays information of service entries in a given pool.

--op list-all
Displays information of all service entries in a given pool or in all pools.

--name <service_name>,...
A comma-separated list of service names. In case of operation mod you cannot use a list of services.

--pool <pool_name>
Name of the pool the services should belong to.

--port <port_number>
A certain port number, if desired. If a list of services is specified, it is the start port number for the first service. For the next service names the value is always incremented by one.

--prot {udp|tcp}
Specifies the used protocol (default tcp).

--text <description>
A user-specific description of the service entry.

--force
The add operation continues with the next service if any add operation before terminates unexpectedly.

--fname <file_name>
Name of the file to store the used LDIF statements for this request.

--help
Shows the usage of the command.
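No usage examples are given for ff_user_adm.pl, ff_group_adm.pl or ff_services_adm.pl; the following hypothetical calls (service names, pool name and port numbers are invented for illustration) sketch typical usage of the services command:

cn1:~ # ff_services_adm.pl --op add --name agentsrv1,agentsrv2 \
         --pool pool1 --port 33001 --prot tcp --text "3rd-party agent"
cn1:~ # ff_services_adm.pl --op list-all --pool pool1

Per the --port description above, agentsrv2 would automatically be assigned port 33002.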

6 Pool-independent Spare Nodes

Before FlexFrame version 4.2 a group-specific failover concept for Application Nodes was used, i.e. if a node in a certain pool group failed, the FA Agents searched for another Application Node in this group with enough free capacity to take over the service of the failed node.

As of FlexFrame version 4.2 the new concept of pool-independent spare nodes was introduced: If no adequate spare node is found in the group of the failed node, the FA Agents can search for a spare node in the special pool ADMINPOOL, which mostly is called adminpool (except in cases where this name has already been reserved). The ADMINPOOL is a reservoir of spare nodes for all pools of a FlexFrame landscape, and it must not serve as a normal production pool. A spare node in the ADMINPOOL is ready for takeover by the FA Agents only if it has been booted beforehand!

To move a spare node from the ADMINPOOL to its target pool and target group, the option --op move of ff_an_adm.pl has been implemented. Furthermore, StorMan has been integrated into the Control Node image to support the switch of SAN configurations for spare nodes. The following is an overview of the details of this concept.

6.1 Creation of Spare Nodes in the ADMINPOOL

To create spare nodes, you have to perform the following steps using the Management Tool or the corresponding administration scripts (see parentheses):

1. Create the ADMINPOOL (ff_pool_adm.pl --op add; see section 4.1)
2. Create the pool group SPARE within this pool (ff_poolgroup_adm.pl --op add; see section 4.7)
3. Add the necessary spare nodes to this group (ff_an_adm.pl --op add; see section 7.2)
4. Create the OS images with ff_new_an.sh and boot them.

6.2 Moving of a Spare Node

To move a spare node from the ADMINPOOL to the desired target pool and target group, use ff_an_adm.pl --op move (see section 7.5).
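As a hedged illustration (node, pool and group names are hypothetical), such a move could look like this:

cn1:~ # ff_an_adm.pl --op move --name spare1 --to-pool pool1 \
         --to-group group1 --new-image

Section 7.5 describes all options of the move operation in detail.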

6.3 List Function for Spare Nodes

With this function you can select spare nodes from group SPARE in the ADMINPOOL which satisfy several conditions, such as OS type, HBA type, or 10 Gbit capability. A hypothetical invocation is sketched at the end of this chapter.

Synopsis

ff_an_adm.pl --op list-spare [--os-type {LINUX}] [--hba-type <HBA_type>] [--10GB]

Options

--op list-spare
Lists the spare nodes.

--os-type {LINUX}
OS type the selected node must have.

--hba-type <HBA_type>
HBA type the selected node must have.

--10GB
Specifies that the node must support the 10 Gbit facility.

6.4 Handling Pool-independent Spare Nodes with FA Agents

If no adequate spare node is found in the group of the failed node, the FA Agents can search for a spare node in the ADMINPOOL. For more information see the FA Agents - Installation and Administration manual.
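As announced in section 6.3, a hypothetical selection of booted Linux spare nodes with a particular HBA type (the HBA type string is an example of a customer-defined value) might be:

cn1:~ # ff_an_adm.pl --op list-spare --os-type LINUX --hba-type FC-8G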

7 Application Nodes Administration

Manual Application Node administration would be very complex and error-prone. The script /opt/flexframe/bin/ff_an_adm.pl does the major changes and supports adding, changing, removing and listing of Application Nodes. Below, each action is described in detail.

In this document you will often find console output, configuration data and installation examples which are based on earlier FlexFrame versions. Please keep in mind that these are examples and may look slightly different on the new operating systems introduced in FlexFrame 5.0A.

7.1 Listing Application Nodes

Displaying Information on a Specific Application Node

Synopsis

ff_an_adm.pl --op list --name <node_name>

Options

--op list
Lists the configuration details of an Application Node.

--name <node_name>
The name of the Application Node to be listed.

The output is structured in sections: hardware, software, network, assigned pool and group, switch ports.

Hardware
This section contains information about system type, rack ID, device rack name, shutdown facility with IP address and host name, MAC addresses and, on blade servers, the chassis and slot/partition ID.

Software
This section contains information on OS type, vendor and version and the root image path.

Network
This section lists the VLAN ID, IP address and host name of all configured networks, sorted by LAN segments.

Pool and Group
This section lists the names of the assigned pool and group.

Switch ports
In case link aggregates are configured, this section identifies the aggregate and its ports. Each used switch port is shown with the switch group ID, switch ID and port ID (cabinet ID, switch blade ID and port on blade servers) for the common LANs (Storage, Server and Client LAN) and the Control LAN (if used).

Definition of Switch Group:
A number of Cisco Catalyst 3750g/3750e switches within one system cabinet. The switches are connected as a loop with Cisco StackWise cables at the rear of each switch. With connected cables, the switches form a stack that behaves like a virtual switch including all ports of connected switches. To identify a switch port within the entire FlexFrame environment, three items are used:

- the number of the switch group (it is like a numbering of the virtual switches), starting with 1
- the number of the switch within the switch group, starting with 1
- the number of the switch port, starting with 1

Definition of Switch Port:
A switch has a number of ports where other network devices and the host network interfaces are connected. The port is identified by a number starting at 1. Within a switch group, the port number is prefixed by the switch number (the identification number of the switch within the switch group).

105 Listing Application Nodes Application Nodes Administration Examples The command displays detailed information on the selected Application Node. The output differs between blade servers and all others. Example of a PRIMERGY blade server output: cn1:~ # ff_an_adm.pl --op list --name bx91-1 Configuration details of node bx91-1 Hardware System: BX920S1 RackID: 2 AN Name: AN1 10GBit: No Shut.Facil.: Mgmt Blade bx91-co ( ) irmc Facil.: irmc bx91-1-co ( ) MAC Addr.: 00:23:8b:97:ca:02 00:23:8b:97:ca:03 IDs: 3 / 1 (System Cabinet / Slot Partition) Software OS: SuSE / Linux / SLES-10.x86_64 (Vendor / Type / Version) OS Path: fil1na-pool2-st:/vol/volff/os/linux/fsc_5.0a sles-10.x86_64 Network VlanID Host IP Hostname Storage LAN: bx91-1-st Server LAN: bx91-1-se Client LAN: bx91-1 Pool and Group Pool: Group: pool2 group1 Switch ports Cabinet SwBlade Port Common LANs: Common LANs: Administration and Operation 93

106 Application Nodes Administration Listing Application Nodes Example of a PRIMERGY rack server output: cn1:~ # ff_an_adm.pl --op list --name rx35-1 Configuration details of node rx35-1 Hardware System: RX300S5 RackID: 1 AN Name: AN5 10GBit: No Shut.Facil.: IPMI rx35-1-co ( ) MAC Addr.: 00:c0:9f:dc:3a:da 00:c0:9f:dc:3a:db Software OS: SuSE / Linux / SLES-10.x86_64 (Vendor / Type / Version) OS Path: fil1na-st:/vol/volff/os/linux/fsc_5.0a sles-10.x86_64 Network VlanID Host IP Hostname Storage LAN: rx35-1-st Server LAN: rx35-1-se Client LAN: rx35-1 Pool and Group Pool: Group: pool1 group1 Switch ports SW Grp Switch Port Common LANs: Common LANs: Control LAN: Administration and Operation

107 Listing Application Nodes Application Nodes Administration Displaying Information on all Application Nodes Synopsis ff_an_adm.pl --op list-all [--pool <pool_name>] Options --op list-all Lists all configured Application Nodes. --pool <pool_name> Example The name of the pool of which the Application Nodes have to be listed. cn1:/opt/flexframe/bin # ff_an_adm.pl --op list-all Nodes sorted by pool, group and name Pool pool1 Pool Group bx600_a bx1-1 Node Type: BX620S4 Rack/Cabinet/Slot Partition ID: 1/1/1 OS: SUSE / Linux / SLES-10.X86_64 (Vendor / Type / Version) OS Path: f1-pool1-st:/vol/volff/os/linux/fsc_5.0a sles- 10.X86_64/root_img Host IP Hostname Storage LAN: bx1-1-st Server LAN: bx1-1-se Client LAN: bx1-1 MAC Addr.: 00:c0:9f:95:5f:ac 00:c0:9f:95:5f:ad bx1-2 Node Type: BX620S4 Rack/Cabinet/Slot Partition ID: 1/1/2 OS: SUSE / Linux / SLES-10.X86_64 (Vendor / Type / Version) OS Path: f1-pool1-st:/vol/volff/os/linux/fsc_5.0a sles- 10.X86_64/root_img Host IP Hostname Storage LAN: bx1-2-st Server LAN: bx1-2-se Client LAN: bx1.2 MAC Addr.: 00:c0:9f:95:5f:8a 00:c0:9f:95:5f:8b bx2-1 Node Type: BX620S4 Rack/Cabinet/Slot Partition ID: 2/2/1 Administration and Operation 95

108 Application Nodes Administration Listing Application Nodes OS: SUSE / Linux / SLES-10.X86_64 (Vendor / Type / Version) OS Path: f1-pool1-st:/vol/volff/os/linux/fsc_5.0a sles- 10.X86_64/root_img Host IP Hostname Storage LAN: bx2-1-st Server LAN: bx2-1-se Client LAN: bx2-1 MAC Addr.: 00:c0:9f:95:60:60 00:c0:9f:95:60:61 bx2-2 Node Type: BX620S4 Rack/Cabinet/Slot Partition ID: 2/2/2 OS: SUSE / Linux / SLES-10.X86_64 (Vendor / Type / Version) OS Path: f1-pool1-st:/vol/volff/os/linux/fsc_5.0a sles- 10.X86_64/root_img Host IP Hostname Storage LAN: bx2-2-st Server LAN: bx2-2-se Client LAN: bx2-2 MAC Addr.: 00:c0:9f:93:7f:cc 00:c0:9f:93:7f:cd Pool pool2 Pool Group bx600_o bx1-6 Node Type: BX620S3 Rack/Cabinet/Slot Partition ID: 1/1/6 OS: SUSE / Linux / SLES-10.X86_64 (Vendor / Type / Version) OS Path: f1-pool2-st:/vol/volff/os/linux/fsc_5.0a sles- 10.X86_64/root_img Host IP Hostname Storage LAN: bx1-6-st Server LAN: bx1-6-se Client LAN: bx1-6 MAC Addr.: 00:C0:9F:99:E6:CC 00:C0:9F:99:E6:CD bx2-6 Node Type: BX620S3 Rack/Cabinet/Slot Partition ID: 2/2/6 OS: SUSE / Linux / SLES-10.X86_64 (Vendor / Type / Version) OS Path: f1-pool2-st:/vol/volff/os/linux/fsc_5.0a sles- 10.X86_64/root_img Host IP Hostname Storage LAN: bx2-6-st Server LAN: bx2-6-se Client LAN: bx2-6 MAC Addr.: 00:C0:9F:99:E9:F4 00:C0:9F:99:E9:F5 96 Administration and Operation

The output of list-all is less detailed than the list output. It is used to get an overview. It shows the Application Nodes sorted by pool and group in alphabetical order. For each node the system type, the cabinet and slot ID (if the node is a blade server), the OS type, vendor and version, the root image path, the main IP addresses, the host names and the MAC addresses are listed.

7.2 Adding Application Nodes

This section describes how to provide the required information for adding a new AN to an existing FlexFrame environment. See also section . You have to define some parameters at the command line. They are used to configure switch ports and to create the boot information and the OS image.

Adding an Application Node changes the exports file on the common volff NAS system. Temporary exports on this NAS system will be gone after running ff_new_an.sh or ff_an_adm.pl with option --new-image. Be sure not to have temporary exports.

Synopsis

ff_an_adm.pl --op add --type <system_type> --name <node_name>
  --pool <pool_name> --group <group_name>
  --swgroup <switch_group_id> --mac <mac_addresses>
  --ospath <path_to_os_image>
  [--host <ip_host_number>[,<cntl_lan_ip_host_number>,<2nd_cntl_lan_ip_host_number>]]
  [--slot <BXxxx_cabinet/slot>]
  [--hba <list_of_hba_names>] [--hba-type <HBA_type>]
  [--sw <multipath_software_name>] [--10gbit]
  [--mgmtswgroup <switch_group_id>] [--new-image]
  [--port switch:port,switch:port,switch:port[,switch:port]]
  [--esx <esxi_node_name>] [--vcpus <number_of_virtual_cpus>]
  [--vmem <virtual_machine_memory_size>] [--force]

Options

--op add
Adds an Application Node and displays some information about processing steps.

--type <system_type>
Specifies the product name and type. Call ff_an_adm.pl without any parameter to get a list of supported system types. A system type of ESXVM denotes a virtual machine on an ESXi host; a virtual machine with the same name is implicitly created on the denoted ESXi host.

--name <node_name>
The name of the Application Node. This name has to be unique for the entire FlexFrame system. All interface names are based on this node name. We recommend using lower case names if possible.

--pool <pool_name>
The name of the pool this node should belong to. See usage (call ff_an_adm.pl without any parameter) to get a list of currently configured pools.

--group <group_name>
The name of the pool group this node is a member of. A group must consist of Application Nodes of the same OS image version and should be of the same capacity (CPU, memory etc.). There should be at least one spare node in a group. Otherwise, take-over of failing services will not be possible. Use command ff_pool_adm.pl with operation mode list or list-all to get the pool groups.

--swgroup <switch_group_id>
Defines the switch group the Application Node's data NICs 1 and 2 are connected to. This information is necessary to assign and configure switch ports. Call ff_an_adm.pl without any parameter to get a list of currently configured switch group IDs.

--mac <mac_addresses>
Add here both MAC addresses of the data NICs used for booting. Use the colon-separated hex notation for each MAC address. Concatenate them with a comma. The MAC address syntax is a colon-separated hex value of six bytes, e.g. 00:e0:00:c5:19:41. For an Application Node of type ESXVM, an appropriate MAC address is generated by the script.

--ospath <path_to_os_image>
Defines the OS image to be used. Add the relative path to /FlexFrame/volFF/os/ as seen from the Control Node. See usage (call ff_an_adm.pl without any parameter) to get a list of available OS paths.

--host <ip_host_number>[,<cntl_lan_ip_host_number>,<2nd_cntl_lan_ip_host_number>]
Host part to be used to build the IP addresses for the three networks. If necessary, host part(s) to be used to build IP addresses for the management network can be added, separated by commas. If this option is omitted, the script uses free host numbers to calculate the IP addresses.

--slot <BXxxx_cabinet/slot>
With PRIMERGY server blades use this option to define the cabinet and slot ID of the server blade. New cabinets have to be defined with the /opt/flexframe/bin/ff_bx_cabinet_adm.pl command. For models that occupy more than one slot (e.g. BX630 S2 quad, BX630 quad, or BX630 octo) the part of the server blade that occupies the slot with the highest slot number is called the master. The master is usually the rightmost slot and its slot ID has to be chosen as the master slot ID.

--hba <list_of_hba_names>
Specifies a list of symbolic SAN HBA names. An HBA name may consist of alphanumerical characters and dashes. Separate HBA names with a comma.

--hba-type <HBA_type>
Specifies the type of the HBAs on this node. This option can be important for searching an adequate spare node from the ADMINPOOL (see chapter 6). Note its meaning: the value must encode all relevant properties of the HBAs which have influence on the OS image, for example the speed of an HBA. The administrator must know which information is important for the spare node selection. The value of this option can be chosen freely, but it must follow a convention which is unique for the whole FlexFrame.

--sw <multipath_software_name>
Specifies the name of the SAN multipath software to be used. See usage for a list of known and accepted software names.

--10gbit
Specifies that the node is used with 10 Gigabit data NICs. The specification can be omitted if the node's system type only allows usage with 10 Gigabit data NICs.

--mgmtswgroup <switch_group_id>
Defines the switch group the Application Node's management interface (IPMI) is connected to. If omitted, the effective switch group is computed as follows: if the

switch group given with --swgroup is not a NEXUS switch group, that switch group is used; otherwise the switch group the NEXUS switch management interfaces are connected to is used. Call ff_an_adm.pl --help to get a list of currently configured switch group IDs.

--new-image
Creates a new OS image (no need to call ff_new_an.sh!).

--port switch:port,switch:port,switch:port[,switch:port]
Defines the switch ports to be used. The first two tuples are for data NICs 1 and 2 and are allocated in the switch group defined with --swgroup. The following tuples are for management NICs and are allocated in the effective switch group for management interfaces. If --10gbit is specified, the first two ports must be 10 Gigabit capable ports.

--esx <esxi_node_name>
When adding a node with type ESXVM, this parameter is mandatory and specifies the name of the ESXi host where the virtual machine must be created. An ESXi host with this name must already exist in FlexFrame and must be configured for FlexFrame usage. For details refer to chapter 7.8 "Administrating ESX Servers and Virtual Machines".

--vcpus <number_of_virtual_cpus>
Specifies the number of virtual CPUs when creating a virtual machine. The default is 2 CPUs.

--vmem <virtual_machine_memory_size_in_MB>
Defines the memory size in MB when creating a virtual machine. The default is 8 GB.

--force
Specifies that the memory usage of the ESXi host may be overcommitted when creating a virtual machine. Default is to deny creation of a new virtual machine on a host if the total vmem of the virtual machines on this host, including the new one, exceeds the memory size of the host.

Examples

Output for a blade server:

cn1:/opt/flexframe/bin # ff_an_adm.pl --op add --type BX620S5 --name bx1-6 --pool pool1 --group bx600_o --ospath Linux/FSC_5.0A SLES-10.X86_64 --host 1 --slot 1/6 --mac 00:C0:9F:99:E6:CC,00:C0:9F:99:E6:CD
update swblade 1/1 configuration
Notice: Update will take about 1 minute.
update swblade 1/2 configuration
Notice: Update will take about 1 minute.

If not reported any error all precautions are done to create application nodes os image. To do this call:
ff_new_an.sh -n bx1-6
Creating and customizing an image may take some minutes. Don't get anxious.

Output for a non-blade server:

cn1:~ # ff_an_adm.pl --op add --name rx300-1 --type RX300S6 --pool pool2 --group group1 --swgroup 1 --ospath Linux/FSC_5.0A SLES-10.x86_64 --mac 00:15:17:2d:ab:a8,00:15:17:2d:ac:02
update switch 1/1 configuration
Notice: Update will take about 1 minute.
Connect your systems LAN interfaces to named switch ports:
SwitchGroup / Switch / Port LAN Interface
1 / 1 / 6 data NIC-1
1 / 2 / 6 data NIC-2
1 / 1 / 24 IPMI NIC-1
If not reported any error all precautions are done to create application nodes os image. To do this call:
ff_new_an.sh -n rx300-1
Creating and customizing an image may take some minutes. Don't get anxious.

The script first checks all arguments and aborts with error messages in case of errors. Then it fetches free IP addresses and switch ports. The switch ports are reconfigured to match the requirements, the LDAP data is created and a netboot file is written. The netboot file is used by ff_new_an.sh to create the Application Node image and extend the NAS system's exports list. At the end you get cabling advice and instructions on how to call the ff_new_an.sh script to finish the Application Node creation.

7.3 Removing Application Nodes

You only have to give the node name to be removed at the command line. All switch ports will be unconfigured and the boot information and OS image are deleted. For an Application Node of type ESXVM, the virtual machine is also destroyed.

Synopsis

ff_an_adm.pl --op rem --name <node_name>

Removing an Application Node results in direct deletion of its image, removal of its LDAP entries as well as disabling the respective switch ports. Please make sure you really want to remove the Application Node (AN) when calling the script; the script does not ask for further confirmation.

Removing an Application Node changes the exports file on the common volff NAS system. Temporary exports on this NAS system will be gone after running ff_an_adm.pl. Please make sure not to have temporary exports.

Options

--op rem
Removes an Application Node and displays only errors and warnings.

--name <node_name>
The name of the Application Node to be removed. Use operation mode list-all to get all configured Application Nodes and their names (see 7.1.2).

Example

cn1:/opt/flexframe/bin # ff_an_adm.pl --op rem --name rx300-1

7.4 Renaming Application Nodes

Changing of node names may be necessary for various reasons. Removing and adding the node may result in changes of network cabling, while renaming does not. When renaming a node, an alternate OS image path may be given. Renaming is not supported for virtual machine Application Nodes.

Synopsis

ff_an_adm.pl --op rename --name <node_name> --newname <new_node_name>
  [--ospath <path_to_os_image>] [--remove_image]

Options

--op rename
Changes the node name of the given Application Node.

--name <node_name>

Current name of the node to be changed.

--newname <new_node_name>
New name of the node.

--ospath <path_to_os_image>
OS image path the renamed node should use.

--remove_image
Removes the old, unused image of the node.

Example

cn1:~ # ff_an_adm.pl --op rename --name node4 --newname node5
update LDAP...
If not reported any error all precautions are done to create a new application node os image. To do this call:
ff_new_an.sh -n node5
Creating and customizing an image may take some minutes. Don't get anxious.

7.5 Moving Application Nodes Between Pools

Moving Application Nodes from one pool to another may often be necessary, especially for pool-independent spare nodes (see chapter 6). The opcode move serves to satisfy this requirement. While performing the move action the node gets new VLANs in its new target pool, but changes of network cabling are not necessary. This operation is not supported for virtual machine Application Nodes.

Synopsis

ff_an_adm.pl --op move --name <node_name>
  [--to-pool <pool_name>] [--to-group <group_name>]
  [--newname <new_node_name>]
  [--to-host <ip_host_number>[,<cntl_lan_ip_host_number>,<2nd_cntl_lan_ip_host_number>]]
  [--to-ospath <path_to_os_image>] [--failed-host <node_name>]
  [--new-image] [--clean-swap]

Options

--op move
Moves the Application Node given by --name.

--name <node_name>
Current name of the node to be moved.

--to-pool <pool_name>
Name of the target pool.

--to-group <group_name>
Name of the target group.

--newname <new_node_name>
New name of the node in its new group and pool.

--to-host <ip_host_number>[,<cntl_lan_ip_host_number>,<2nd_cntl_lan_ip_host_number>]
Host part of the IP address in the target pool.

--to-ospath <path_to_os_image>
OS image path the moved node should use.

--new-image
Creates a new OS image (no need to call ff_new_an.sh!).

--clean-swap
Cleans the local disks of the node on first boot of the moved node. This parameter is only meaningful together with --new-image.

If you specify --failed-host, all other options except --name, --new-image and --clean-swap are ignored. In this case all necessary information for the moved node in its target pool is derived from the failed host.

7.6 Application Nodes and SAN

For adding or changing the entire list of SAN HBA (Host Bus Adapter) names, use operation mode hba-change. The names are symbolic names and may consist of alphanumerical characters, dashes and underscores. The names have to be separated by commas. For redundancy, at least two names for two HBAs are needed.

To specify the HBA type, use the option --hba-type. This option can be important for searching an adequate spare node from the ADMINPOOL (see chapter 6). Note its meaning: the value must encode all relevant properties of the HBAs which have influence on the OS image, for example the speed of an HBA. The administrator must know which information is important for the spare node selection. The value of this option can be chosen freely, but it must follow a convention which is unique for the whole FlexFrame.

ff_an_adm.pl --op hba-change --name <node_name> --hba <hba_list> --hba-type <HBA_type>

To remove the entire list of HBA names use operation mode hba-rem. It removes the list from the node's LDAP data. To remove only a single HBA, use operation mode hba-change.

ff_an_adm.pl --op hba-rem --name <node_name>

To define the name of the available SAN multipath software, use operation mode sansw-change. See usage for a list of known software names.

ff_an_adm.pl --op sansw-change --name <node_name> --sw <multipath_software>

To remove the SAN multipath software name use operation mode sansw-rem. It removes the name from the node's LDAP data.

ff_an_adm.pl --op sansw-rem --name <node_name>

7.7 Administrating Blade Server Cabinets

Some network settings have to be changed to add or remove a blade server cabinet. The script /opt/flexframe/bin/ff_bx_cabinet_adm.pl simplifies the administration by doing LDAP changes automatically and preparing the configurations to be done manually.

The script supports adding, removing and listing of blade server cabinets. Each action is described in detail below.

Listing Blade Server Cabinets

Displaying Information on a Specific Blade Server Cabinet

Synopsis

ff_bx_cabinet_adm.pl --op list --name <cabinet_name>

Options

--op list
Lists the configuration details of a blade server cabinet.

--name <cabinet_name>
The name of the blade server cabinet to be listed.

The output is structured in sections: hardware, software, network, assigned pool and group, switch ports.

Output example

Primergy Cabinet 1 (cab1)
System Type: BX600S3
Management Blade
Hostname / IP Address: cab1-co
Integrated LAN Switch Ports: SwitchGroup SwitchID PortID
Switch Blade
SwitchID Type        Switch name   Hostname
1        1GbE-10/6-Q bx600-2-swb1  bx600-2-swb
2        1GbE-10/6-Q bx600-2-swb2  bx600-2-swb
Switch Blade Port           Integrated LAN Switch Port
Switch Blade ID PortID <--> SwitchGroup SwitchID PortID

As seen from the sample above, the cabinet ID and name, the cabinet system type, the management blade and the switch blades are listed.

For the management blade the host name, the IP address and both FlexFrame integrated LAN switch ports are displayed. The switch blade information shows the switch and host name, the IP address and the switch blade port to FlexFrame integrated LAN switch port connections, structured by switch blade ID.

Displaying Information on all Configured Blade Server Cabinets

Synopsis

ff_bx_cabinet_adm.pl --op list-all

Option

--op list-all
Lists all configured blade server cabinets.

Output example

Primergy Cabinets
1 (cab1) BX600S3
Management Blade: cab1-co /
Switch Group ID: 1
Server Blades (by slot id)
1 (blade1) BX630S2 Pool / Group: pool1 / PROD
2 (blade2) BX630S2 Pool / Group: pool1 / PROD
3 (blade3) BX630S2 Pool / Group: pool1 / PROD
4 (blade4) BX603S2 Pool / Group: pool2 / DEV
5 (blade5) BX630S2 Pool / Group: pool2 / DEV

For each cabinet the ID, the cabinet name, the management host name and IP address and the server blades are displayed. Each server blade is shown with its slot ID and name, the system type and the pool and group it belongs to.
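The output examples above omit the command lines that produce them; assuming a cabinet named cab1, the corresponding hypothetical invocations would be:

cn1:~ # ff_bx_cabinet_adm.pl --op list --name cab1
cn1:~ # ff_bx_cabinet_adm.pl --op list-all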

Adding Blade Server Cabinets

This section describes how to provide the required information for adding a new blade server cabinet to an existing FlexFrame environment. You have to define some parameters at the command line. They are used to configure switch ports and to create the switch blade configurations.

Synopsis

ff_bx_cabinet_adm.pl --op add --type <system_type> --name <cabinet_name>
  --swgroup <switch_group_id> [--mgmtswgroup <switch_group_id>]
  [--swblades <type_of_switch_blades>]
  [--swblogin <switch_blade_login_name>]
  [--swbpwd <switch_blade_password>]
  [--host <ip_host_parts>] [--10gbit]
  [--uplinkportcnt <nic_count>]

Options

--op add
Adds a blade server cabinet.

--type <system_type>
PRIMERGY blade system type, e.g. BX600S3, BX900S1. Call ff_bx_cabinet_adm.pl without any parameter to get a list of supported system types.

--name <cabinet_name>
Name of the subsystem (cabinet). It is used to generate a new name for the management blade (has to be unique within the entire FlexFrame).

--swgroup <switch_group_id>
Switch group number (starts with 1) the cabinet has to be connected to (physically). Call ff_bx_cabinet_adm.pl without any parameter to get a list of currently configured switch group IDs.

--mgmtswgroup <switch_group_id>
Defines the switch group the cabinet's management blade interfaces are connected to. If omitted, the effective switch group is computed as follows: if the switch group given with --swgroup is not a NEXUS switch group, that switch group is used; otherwise the

switch group the NEXUS switch management interfaces are connected to is used. Call ff_bx_cabinet_adm.pl --help to get a list of currently configured switch group IDs.

--swblades <type_of_switch_blades>
The type of switch blades. For valid types see the usage. For the default switch blades this option may be omitted.

--swblogin <switch_blade_login_name>
Name used to log in to the switch blades. If this option is omitted, the login name of the switch group is used.

--swbpwd <switch_blade_password>
Password used to log in to the switch blades. If this option is omitted, the login password of the switch group is used.

--host <ip_host_parts>
Host parts to be used to build IP addresses for the Control LAN networks. If this option is omitted, the script uses free host numbers to calculate the IP addresses. Order of the comma-separated host numbers: first for both management blades and then one for each switch blade.

--10gbit
Use 10 Gigabit switch blade ports as uplink. The specification can be omitted if only 10 Gigabit switch blade ports are available.

--uplinkportcnt <nic_count>
Set the NIC count of the switch blades' uplink channel to the given value. Defaults to 2.

Output example

At the end of the output, the command displays further instructions.

Configure the ManagementBlade with the control lan settings:
control lan IP address:
control lan name: cab1-co
to interoperate correctly with the FA Agents.
Interconnect the ManagementBlades and SwitchBlades with the switches of SwitchGroup 1 as noted below:
SwitchID/Port Mgmt/SwitchBlade
1 / 8 slave ManagementBlade
1 / 11 SwitchBlade 1 Port 12
1 / 12 SwitchBlade 2 Port 12
2 / 8 master ManagementBlade
2 / 11 SwitchBlade 1 Port 11
2 / 12 SwitchBlade 2 Port 11

Uploads of initial SwitchBlade configurations have to be done manually. See the document(s) 'Quick Start Hardware FlexFrame for SAP PRIMERGY SWB 1GbE-10/6-Q' in the doc/hwinfo section of the Service CD for details. The files to be uploaded are named:
SwitchBlade Blade Type File Path
1 1GbE-10/6-Q /tftpboot/swblade-2-1.config
2 1GbE-10/6-Q /tftpboot/swblade-2-2.config
Look at "/opt/flexframe/network/wiring-bx600-cab1.txt" to get a copy of this message.

Set up the management blade initially with the name and IP address listed by the output as seen above. Use the console redirection of the management blade to connect to the console of the switch blades, and upload the configuration as described in the FlexFrame Installation Guide. Finally, plug in the network cables according to the wiring plan given by the command output.

Removing Blade Server Cabinets

You only have to give the ID of the cabinet that is to be removed at the command line. All FlexFrame integrated LAN switch ports will be unconfigured. Removing a blade server cabinet requires removing all of its server blades first.

Synopsis

ff_bx_cabinet_adm.pl --op rem --id <cabinet_id>

Options

--op rem
Removes a blade server cabinet.

--id <cabinet_id>
Specifies the subsystem (cabinet) ID of the cabinet to be removed. Use the list-all option to get the ID (see section 7.7.3).

Output examples

If there are any server blades configured for this cabinet, an error message is displayed.

ERROR: there are server blades configured for this cabinet.
To remove the cabinet, remove application nodes (server blades) first.

Use command ff_an_adm.pl to do this.

Use the list operation mode to list the configured server blades. You have to remove them before you can remove the cabinet they are in.

If no server blades are configured for this cabinet, the command displays a summary at the end.

If not reported any warnings or errors the cabinet was removed from LDAP and integrated LAN switches.

The cabinet has been removed successfully from LDAP, and the FlexFrame integrated LAN switch ports used by the cabinet have been reconfigured to their defaults.

Changing Switch Blade Type

In service cases it may be necessary to change the type of a switch blade due to a defective part replacement. Only switching blades can be used for a type change. To change the switch blade type, the cabinet, the switch blade ID and the new switch blade type have to be specified.

Synopsis

ff_bx_cabinet_adm.pl --op swb-change --id <cabinet_id>
  --swbid <switch_blade_id> --swbtype <switch_blade_type>

Options

--op swb-change
Selects the operation mode. Changes the type of a switch blade.

--id <cabinet_id>
Specifies the subsystem (cabinet) ID of the cabinet. Use the list-all option to get the ID.

--swbid <switch_blade_id>
Specifies the switch blade ID. The ID is the slot number of the selected switch blade.

--swbtype <switch_blade_type>
Defines the new type of the switch blade. See usage for the currently supported types.

Output example

Switch type of switch blade 2 was successfully changed from "1GbE-10/6-Q" to "1GbE-10/2+4-C" at LDAP database.

The switch blade type was changed in the LDAP database. To get the initial configuration, use operation mode swb-config of this program. It will display instructions on how to upload the configurations, too.
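The output example above corresponds to an invocation like the following (cabinet ID, switch blade ID and type are illustrative):

cn1:~ # ff_bx_cabinet_adm.pl --op swb-change --id 1 --swbid 2 --swbtype 1GbE-10/2+4-C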

125 Administrating Blade Server Cabinets Application Nodes Administration Changing Switch Blade Name On adding a new cabinet, the name of the switch blade is like the cabinet name with a slot extension. In some cases the names of the switch blades have to be changed to match naming conventions. Synopsis ff_bx_cabinet_adm.pl --op swb-name --id <cabinet_id> --swbid <switch_blade_id> --swbname <switch_blade_name> Options --op swb-name Selects the operation mode. Change the name of a switch blade. --id <cabinet_id> Specifies the subsystem (cabinet) ID of the cabinet. Use the list-all option to get the ID. --swbid <switch_blade_id> Specifies the ID of the switch blade. The ID is the slot number of the selected switch blade. --swbname <switch_blade_name> Defines the new name of the switch blade. Output example If not reported any warnings or errors the hostname was successfully changed at switch blade, hosts files and LDAP. As noted by the program the name of the switch will be changed at /etc/hosts of both control nodes, the LDAP database and at least the hostname and, if possible, the SNMP sysname at the selected switch blade itself. Administration and Operation 113
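A hypothetical renaming call (cabinet ID and switch blade name are examples chosen to match a site naming convention) might be:

cn1:~ # ff_bx_cabinet_adm.pl --op swb-name --id 1 --swbid 1 --swbname cab1-swb1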

Changing Switch Blade Password

If, on adding a cabinet, the switch blade password could not be derived from the switch group and was not specified in the command, a default password (password) is assigned. This is not secure.

Synopsis

ff_bx_cabinet_adm.pl --op swb-passwd --id <cabinet_id>
  --swbid <switch_blade_id> --swbpwd <password>

Options

--op swb-passwd
Selects the operation mode. Changes the login password of a switch blade.

--id <cabinet_id>
Specifies the subsystem (cabinet) ID of the cabinet. Use the list-all option to get the ID.

--swbid <switch_blade_id>
Specifies the switch blade ID. The ID is the slot number of the selected switch blade.

--swbpwd <password>
Defines the new login and enable password of the switch blade.

Output example

If not reported any warnings or errors the password was successfully changed at switch blade and LDAP.

As noted by the program, the password will be changed in the LDAP database and at the selected switch blade. At the switch blade the login password and the enable password are changed and have to be the same.
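A hypothetical invocation (the IDs are examples; choose your own password) could be:

cn1:~ # ff_bx_cabinet_adm.pl --op swb-passwd --id 1 --swbid 1 --swbpwd <new_password>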

Getting Switch Blade Initial Configuration

In case of a service issue it may be necessary to get an initial switch blade configuration, which has to be uploaded manually.

Synopsis

ff_bx_cabinet_adm.pl --op swb-config --id <cabinet_id> --swbid <switch_blade_id>

Options

--op swb-config
Selects the operation mode. Creates the initial switch blade configuration.

--id <cabinet_id>
Specifies the subsystem (cabinet) ID of the cabinet. Use the list-all option to get the ID.

--swbid <switch_blade_id>
Specifies the ID of the switch blade. The ID is the slot number of the selected switch blade.

Output example

If not reported any warnings or errors the switch configuration was successfully created and stored into /tftpboot/swblade2.config.
To upload the initial switch blade configuration see detailed description at "FlexFrame(TM) Installation Guide" chapter "SwitchBlade Configuration".
This hint is additionally stored at: /tmp/swb-config_bx_cabinet/todo.txt

The configuration file is stored directly to /tftpboot. The upload of the configuration is described using TFTP, which uses /tftpboot as its top-level directory. A detailed instruction can be found in the Installation Guide.

Change Switch Blade Uplink

More than the two default ports can be used for the uplink to the switch group. To change the count of ports used by the uplink link aggregates, including the switch port configuration, use the operation mode swb-uplink of ff_bx_cabinet_adm.pl. On the switch group and the switch blades the appropriate link aggregates will be expanded by new ports until the given port count is reached.

Change Switch Blade Uplink

More than the two default ports can be used for the uplink to the switch group. To change the number of ports used by the uplink link aggregates, including the switch port configuration, use the operation mode swb-uplink of ff_bx_cabinet_adm.pl. On the switch group and the switch blades, the appropriate link aggregates are expanded with new ports until the given port count is reached.

Synopsis

ff_bx_cabinet_adm.pl --op swb-uplink --id <cabinet id> --uplinkportcnt <nic count>

Options

--op swb-uplink
Change the number of NICs of the uplink link aggregate for each switch blade of the cabinet.

--id <cabinet id>
Defines the cabinet to change.

--uplinkportcnt <nic count>
Set the NIC count of the switch blades' uplink channel to the given value. Defaults to 2.

Example

cn1:~ # ff_bx_cabinet_adm.pl --op swb-uplink --id 1 --uplinkportcnt 6
update LDAP...
update swblade 1/1 configuration
Notice: Update will take about 1 minute.
...
update swblade 1/2 configuration
Notice: Update will take about 1 minute.
...
Interconnect additional NICs between SwitchBlades and the switches of SwitchGroup 1 as noted below:
SwitchBladeID/Port    SwitchID/Port
1 / 13 <==> 1 / 9
1 / 14 <==> 2 / 9
1 / 15 <==> 1 / 11
1 / 16 <==> 2 / 11
2 / 13 <==> 1 / 14
2 / 14 <==> 2 / 14
2 / 15 <==> 1 / 16
2 / 16 <==> 2 / 15

Unless any errors are reported, follow the instructions above to extend the switch blade channels to the switch group. Look at "/opt/flexframe/network/wiring-channel-extend-bx600-1.txt" to get a copy of this message.


Move a Blade Cabinet to Another Switch Group

On growing installations, the load of a switch group is sometimes split onto two groups. In the course of this, entire blade cabinets may have to be moved to another switch group. Previously, the entire cabinet had to be deleted and added again at the new switch group - a lot of work for a fully used cabinet. To simplify this action, use the operation mode move of ff_bx_cabinet_adm.pl.

Synopsis

ff_bx_cabinet_adm.pl --op move --id <cabinet_id> --swgroup <new_switch_group_id> [--mgmtswgroup <switch_group_id>]

Options

--op move
Change the switch group of the cabinet, including the switch blade but not the server blade connections.

--id <cabinet_id>
Defines the cabinet to change.

--swgroup <new_switch_group_id>
The switch group ID the cabinet should be moved to.

--mgmtswgroup <switch_group_id>
Defines the switch group the cabinet's management blade interfaces should be moved to. If omitted, the effective switch group is computed as follows: if the switch group given with --swgroup is not a NEXUS switch group, that switch group is used; otherwise the switch group to which the NEXUS switch management interfaces are connected is used.

Example

cn1:~ # ff_bx_cabinet_adm.pl --op move --id 1 --swgroup 2
update LDAP...
update swblade 1/1 configuration
Notice: Update will take about 1 minute.
...
update swblade 1/2 configuration

Notice: Update will take about 1 minute.
...
Interconnect NICs between MgmtBlades, SwitchBlades and the switches of the new SwitchGroup 2 as noted below:
SwGrpID/SwitchID/Port    Mgmt/SwitchBlade
2 / 1 / 1 <==> master ManagementBlade
2 / 1 / 2 <==> SwitchBlade 1 Port 11
2 / 1 / 3 <==> SwitchBlade 2 Port 11
2 / 2 / 1 <==> slave ManagementBlade
2 / 2 / 2 <==> SwitchBlade 1 Port 12
2 / 2 / 3 <==> SwitchBlade 2 Port 12

Unless any errors are reported, follow the instructions above to move the cabinet to the new switch group. Look at "/opt/flexframe/network/wiring-cabinet-move-bx600-1.txt" to get a copy of this message.

7.8 Administrating ESX Servers and Virtual Machines

Instead of being used directly as Application Nodes, PRIMERGY servers may also be used in FlexFrame as ESXi servers. An ESXi server can host a number of virtual machines that are used as Application Nodes.

VMware ESXi and VMware ESX are "bare-metal" hypervisors that form the foundation of VMware vSphere. The VMware vSphere 4 product line supports three different types of hypervisors: ESX classic, ESXi installable and ESXi embedded. In FlexFrame 5.0, only ESXi installable and ESXi embedded are supported. However, the terms "ESX" and "ESXi" are both used in the FlexFrame documentation and code to denote the VMware hypervisor and always mean "ESXi" unless explicitly stated otherwise.

ESXi servers and virtual machines used as Application Nodes can be predefined with the FlexFrame Management Tool and put into operation during the FlexFrame installation process, or added to a FlexFrame configuration later with the FlexFrame administrative tools as described in the sections below. The FlexFrame administrative tools also offer functions to display or adjust the configuration, as shown in detail in the following sections.

Besides the FlexFrame tools, which focus mainly on the FlexFrame specific aspects of ESXi servers and virtual machines, there are several ways to access vSphere components, such as the vSphere Client or the vSphere Command-Line Interface (vCLI). These tools can be used in addition to the FlexFrame tools, but certain actions must be avoided to ensure proper operation of the FlexFrame system. A short overview of this topic is given in section "Using vSphere Functions for FlexFrame Objects" below.

Getting started with ESX Servers and VMs

This section gives an overview of the steps needed to add an ESX server, and virtual machines used as Application Nodes on that ESX server, to a FlexFrame system when using the FlexFrame administrative tools.

1. Add the ESX server to the FlexFrame configuration.
cn1:~ # ff_esx_adm.pl --op add --name <esxi node name> --type <system_type> ...
Depending on the system type, more options are needed when entering this command. See section "Adding ESX Servers" for details.

2. Install/boot the ESXi server. The necessary actions depend on the hypervisor type used. For embedded ESXi, the server is already installed on an internal USB device and the only action is to boot it from this device. For ESXi installable, install the ESXi server from its CD or DVD (an ISO image of this software is also contained on the FlexFrame Service DVD) and then boot it from the local hard disk. Refer to the "Install and boot ESXi" section of the manual "Installation of a FlexFrame Environment" for details, and to the section "Setup the <server type> for an ESXi Server installation" in the appropriate FlexFrame for SAP - HW Characteristics Quickguide.

3. Do a minimal configuration of the ESXi server using its Direct Console User Interface. This includes setting a password and selecting the network adapters for the ESXi server's network connection. Refer to the "ESXi Preconfiguration on the Server Console" section of the manual "Installation of a FlexFrame Environment" for details.

4. Complete the ESX server configuration.
cn1:~ # ff_esx_adm.pl --op complete-config --name <esxi node name> --user <user_name> --pass <password>
See section "Completing ESX Server Configuration" for details.

5. No explicit creation of virtual machines is needed. Instead, simply use the well-known ff_an_adm.pl --op add command with a system type of ESXVM to specify that the "hardware" of the new Application Node is a virtual machine on an ESXi host. In addition to its usual functions such as reserving IP addresses or creating LDAP data, the script then creates a virtual machine with the same name as the Application Node on the ESXi host specified on the command line. The memory size and number of CPUs for the virtual machine are set to default values, but can also be specified on the command line (options --vmem and --vcpus). To select appropriate values for these parameters, the available resources of the ESXi host must be taken into account, as well as the intended use of the Application Node. See section "Virtual Machine Properties and ESXi Resources" for details on this topic.
cn1:~ # ff_an_adm.pl --op add --name <vm-an node name> --type ESXVM --pool <pool_name> --group <group_name> --ospath <path_to_os_image> --esx <esxi node name> ...

Proceed as you would for any other Application Node. Create a new personalized boot image for the Application Node (if not implicitly done using the --new-image operand of ff_an_adm.pl):
cn1:~ # ff_new_an.sh -n <vm-an node name>

Then boot the node by calling:
cn1:~ # ff_ponpoff.pl --op power-on --name <vm-an node name>

ESX related global FlexFrame parameters

Besides the ESX servers and virtual machines, some FlexFrame global parameters must be taken into account when planning a FlexFrame system using VMware functionality. They are usually set with the FlexFrame Management Tool, but can also be set with the administrative tools in some special cases.

System Code for ESX Servers and VMs

The FlexFrame system code for ESX servers and virtual machines is a numeric value between 0 and 63 that is used to generate MAC addresses for FlexFrame virtual machines and to build the names of some ESX related resources such as Port Groups and Datastores. Its purpose is to differentiate these objects when they belong to different FlexFrame systems used in a common environment, for example when using a common vCenter Server or an Ethernet network that is not completely separated.

The FlexFrame system code is usually set with the FlexFrame Management Tool and must not be changed for a system with already configured ESX servers and virtual machines. The ability to set the system code using the tool ff_esx_adm.pl is mainly provided for cases when the FlexFrame system has been upgraded from an older version without VMware support or if the initial FlexFrame setup has been done without VMware usage.

Use a different system code for each FlexFrame system in your environment. Do not change the system code for an existing FlexFrame system with configured ESX servers and virtual machines.

Synopsis

ff_esx_adm.pl --op set-syscode --syscode <system_code>

Options

--op set-syscode
Sets the FlexFrame system code for ESX servers and virtual machines.

--syscode <system_code>
Numeric value between 0 and 63 to be used as FlexFrame system code.
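As an illustration, setting the system code on a system that was upgraded without VMware support might look as follows; the value 7 is an arbitrary example within the allowed range:

cn1:~ # ff_esx_adm.pl --op set-syscode --syscode 7   # example value, 0..63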

vCenter Server

More complex administrative actions and functionalities, as well as the centralized view and monitoring of the VMware infrastructure, are outside the scope of FlexFrame. For these functions, a vCenter Server can be used.

If FlexFrame ESX servers are administrated by a vCenter Server, the name and IP address of the vCenter Server have to be made known to FlexFrame. This is usually done with the FlexFrame Management Tool, but can also be done using the tool ff_esx_adm.pl, for example in cases when the FlexFrame system has been upgraded from an older version without VMware support or if the initial FlexFrame setup has been done without VMware usage.

You can also remove previously set vCenter Server information, e.g. if the information set with the FlexFrame Management Tool is not correct, and you can set the authentication data for the vCenter Server in FlexFrame, so that the FlexFrame tools can access this server.

Synopsis

ff_esx_adm.pl --op vcenter-add --vc-name <vcenter_name> [--vc-ip <vcenter_ip>] [--vc-in-cntllan]
ff_esx_adm.pl --op vcenter-rem
ff_esx_adm.pl --op vcenter-auth --user <user_name> --pass <password>

Options

--op vcenter-add
Add the vCenter Server hostname and IP address to FlexFrame.

--op vcenter-rem
Remove the vCenter Server information from FlexFrame.

--op vcenter-auth
Set the authentication data for the vCenter Server in FlexFrame.

--vc-name <vcenter_name>
Defines the name of the vCenter Server with --op vcenter-add.

--vc-ip <vcenter_ip>
Defines the IP address of the vCenter Server with --op vcenter-add. It can be an address in the FlexFrame Control LAN or an address outside FlexFrame that is reachable from the Control Nodes. If the --vc-ip option is omitted and --vc-in-cntllan is specified, a free address from the Control LAN is selected.

--vc-in-cntllan
Specifies that the vCenter IP address is in the Control LAN. If no address is given, this advises the script to select a free address from the Control LAN.

--user <user_name>
Specifies the user name for accessing the vCenter Server.

--pass <password>
Specifies the password for accessing the vCenter Server.
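A possible sequence for making a vCenter Server known to FlexFrame and storing its credentials is sketched below; the hostname, the choice to let the script pick a Control LAN address, and the credentials are placeholders, not values from a real installation:

cn1:~ # ff_esx_adm.pl --op vcenter-add --vc-name vcenter-co --vc-in-cntllan   # hostname is a placeholder
cn1:~ # ff_esx_adm.pl --op vcenter-auth --user administrator --pass 'secret'  # placeholder credentials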

If using a vCenter Server in FlexFrame, the names and IP addresses of the ESX servers must be resolvable on a DNS server that is accessible to the vCenter Server. The FlexFrame ESX servers must be added manually to the vCenter Server using their hostnames as they are known in FlexFrame. Add the ESX servers to a Datacenter called FlexFrame or, if more than one FlexFrame system is administrated with the same vCenter Server, use a distinct Datacenter called FlexFrame-<i> for each FlexFrame system, where i is the FlexFrame system code explained in section "System Code for ESX Servers and VMs".

Adding ESX Servers

This section describes how to add an ESXi server to an existing FlexFrame environment.

Synopsis

ff_esx_adm.pl --op add --type <system_type> --name <node_name> --swgroup <switch_group_id> --mac <mac_addresses> [--host <ip_host_number>[,<ipmi_ip_host_number>[,<2nd_IPMI_ip_host_number>]]] [--slot <BXxxx_cabinet/slot>] [--10gbit] [--mgmtswgroup <switch_group_id>] [--port switch:port,switch:port,switch:port[,switch:port]]

Options

--op add
Adds an ESX server node and displays some information about processing steps.

--type <system_type>
Specifies the product name and type. Call ff_esx_adm.pl --help to get a list of supported system types.

--name <node_name>
The name of the ESX server node. This name has to be unique for the entire FlexFrame system. All interface names are based on this node name. The name has to be lowercase and consists of up to 13 characters (letters, digits and dashes).

--swgroup <switch_group_id>
Defines the switch group the ESX server node is connected to. This information is necessary to assign and configure switch ports. Call ff_esx_adm.pl --help to get a list of currently configured switch group IDs. For blade servers, the --swgroup option may be omitted, as the switch group is already defined with the blade cabinet.

--mac <mac_addresses>
Add here the MAC addresses of the server's data NICs. They are used to configure DHCP. Use the colon-separated hex notation for each MAC address. Concatenate the two MAC addresses with a comma. A MAC address syntax example is 00:e0:00:c5:19:41. For blade servers, the --mac option may be omitted. The MAC addresses are then fetched via SNMP from the management blade.

--host <ip_host_number>[,<ipmi_ip_host_number>[,<2nd_IPMI_ip_host_number>]]
Host part to be used to build the IP addresses for ESXi management and, depending on the system type, for IPMI, all of them in the Control LAN. If the --host option is omitted, the script uses free host numbers to calculate the IP addresses.

--slot <BXxxx_cabinet/slot>
With PRIMERGY server blades, use this option to define the cabinet and slot ID of the server blade. New cabinets have to be defined with the ff_bx_cabinet_adm.pl command.

--mgmtswgroup <switch_group_id>
Defines the switch group the ESX server's management interface (IPMI) is connected to. If omitted, the effective switch group is computed as follows: if the switch group given with --swgroup is not a NEXUS switch group, that switch group is used; otherwise the switch group to which the NEXUS switch management interfaces are connected is used. Call ff_esx_adm.pl --help to get a list of currently configured switch group IDs.

--10gbit
Specifies that the node is used with 10 Gigabit data NICs. The specification can be omitted if the node's system type only allows usage with 10 Gigabit data NICs.

--port switch:port,switch:port,switch:port[,switch:port]
Defines the switch ports to be used. The first two tuples are for data NICs 1 and 2 and are allocated in the switch group defined with --swgroup. The following tuples are for the mgmt NICs and are allocated in the effective switch group for management interfaces. If --10gbit is specified, the first two ports must be 10 Gigabit capable ports. If --port is not specified, the script assigns free ports according to an internal algorithm.

Example

cn1:~ # ff_esx_adm.pl --op add --name rx300s5-esx1 --type RX300S5 --swgroup 2 --mac 00:19:99:48:e2:aa,00:19:99:48:e2:ab
update LDAP...
update switch 2/1 configuration
Notice: Update will take about 1 minute.
restart cluster service dhcpd
stopping dhcpd (timeout=20). dhcpd:done. done.
starting dhcpd (timeout=20).. dhcpd:done. done.
Connect your systems LAN interfaces to named switch ports:
LAN Interface    SwGroup / Switch / Port
data NIC-1       2 / 2 / 2
data NIC-2       2 / 1 / 2
IPMI NIC-1       2 / 2 / 24

The script first checks all arguments and aborts with an error message in case of errors. It then fetches free IP addresses and switch ports, or checks whether the given ones can be used for the node. LDAP data are created and the switch ports are reconfigured to match the requirements. The DHCP service on the Control Nodes is reconfigured to assign the ESXi management IP address in the Control LAN based on the given MAC addresses. At the end, you get a cabling advice if the node must be connected to a switch group.

How to continue

The next step is to do the cabling according to the advice, if applicable. Then proceed with installing/booting ESXi and preconfiguring the ESXi server using the Direct Console User Interface (see section "Getting started with ESX Servers and VMs" and the sections "Install and boot ESXi" and "ESXi Preconfiguration on the Server Console" of the manual "Installation of a FlexFrame Environment"). Finally, complete the ESX server configuration as shown in the next section.

Completing ESX Server Configuration

This section describes how to complete the ESXi server configuration.

Synopsis

ff_esx_adm.pl --op complete-config [--name <node_name>] --user <user_name> --pass <password>

Options

--op complete-config
Completes the ESX server configuration and stores the authentication data needed to access the ESX server in FlexFrame for subsequent access of the server.

--name <node_name>
Name of the ESX server node. If the name is omitted, the operation is done for all defined ESX server nodes. This possibility is intended for usage during the installation process.

--user <user_name>
Specifies the user name for accessing the ESX server.

--pass <password>
Specifies the password for accessing the ESX server.

Example

cn1:~ # ff_esx_adm.pl --op complete-config --name rx300s5-esx1 --user root --pass password

The script accesses the ESXi server using the given authentication data and completes the ESXi server configuration to match the FlexFrame needs, e.g. the FlexFrame Control Nodes are set as NTP servers and as trap targets for SNMP. If it is already known for which pools the ESXi server must be prepared, the corresponding Port Groups and Datastores are created on the ESXi server (see section "ESX Servers and Pools" for details). Finally, the script stores the authentication data needed to access the ESXi server, to enable subsequent access to the server by FlexFrame scripts.

Removing ESX Servers

To remove an ESX server node from the FlexFrame configuration, the only parameter needed is the node name. The node is removed from the LDAP database, the hosts file and the DHCP configuration. The switch ports used by this node are also deconfigured.

An ESX server cannot be removed if there are still virtual machine Application Nodes in the FlexFrame configuration that use this ESX server.

Please make sure that you really want to remove the ESX server node, as the script does not ask for confirmation.

Synopsis

ff_esx_adm.pl --op rem --name <node_name>

Options

--op rem
Removes an ESX server node from the FlexFrame configuration.

--name <node_name>
Name of the ESX server node.

Example

cn1:~ # ff_esx_adm.pl --op rem --name rx300s5-esx1

Displaying Information about ESX Servers and VMs

Using the script ff_esx_adm.pl, one can get an overview of all ESX servers, or detailed information about a specific ESX server including its virtual machines. With ff_esx_vm.pl, it is possible to get an overview of all FlexFrame virtual machines, irrespective of the involved ESX server.

Synopsis

ff_esx_adm.pl --op list-all
ff_esx_adm.pl --op list --name <node_name> [--cmdline]

Options

--op list-all
Displays an overview of all ESX server nodes and also shows ESX related global FlexFrame parameters, such as the vCenter Server name and address or the FlexFrame system code for ESXi servers and virtual machines.

--op list
Displays detailed information about a specific ESX server, including its virtual machines.

--name <node_name>
Name of the ESX server node.

--cmdline

Used with --op list, the command line that can be used to recreate the ESX server node is displayed at the end of the node listing.

Synopsis

ff_esx_vm.pl --op list-all

Options

--op list-all
Displays an overview of all FlexFrame virtual machines, irrespective of the ESX host.

Examples

Overview of all ESX servers and global information:

cn1:~ # ff_esx_adm.pl --op list-all
Global information
vcenter Server Hostname: vcenter-co
vcenter Server IP:
FlexFrame Systemcode: 1
ESX Nodes sorted by name
bx31-esx
Node Type: BX620S4
Cabinet/Slot ID: 3/1
ESX mgmt IP/Hostname: / bx31-esx
Mac Addr.: 00:1b:24:2d:ab:03 00:1b:24:2d:ab:04
bx33-esx
Node Type: BX620S4
Cabinet/Slot ID: 3/3
ESX mgmt IP/Hostname: / bx33-esx
Mac Addr.: 00:1b:24:2d:a0:01 00:1b:24:2d:a0:02
rx300s5-esx1
Node Type: RX300S5
ESX mgmt IP/Hostname: / rx300s5-esx1
Mac Addr.: 00:19:99:48:e2:aa 00:19:99:48:e2:ab

Detailed information about a specific ESX server:

cn1:~ # ff_esx_adm.pl --op list --name rx300s5-esx1
Configuration details of ESX node rx300s5-esx1
Hardware
System: RX300S5
10GBit: No
Shut.Facil.: IPMI rx300s5-esx1-co ( )
Mac Addr.: 00:19:99:48:e2:aa 00:19:99:48:e2:ab
ESX management interface (Control LAN)
Host IP:
Hostname: rx300s5-esx1
LAN Interface Connections
LAN Interface    SwGroup / Switch / Port
data NIC-1       2 / 2 / 2
data NIC-2       2 / 1 / 2
IPMI NIC-1       2 / 2 / 24
ESX resources
Product Name: VMware ESXi build
Memory Size: 8179 MB
CPU Cores: 8
List of Virtual Machine - Application Nodes:
Name       Pool    Group    State   CPUs   Memory
lin-vm13   pool1   p1-vms   Off
lin-vm21   pool2   p2-vms   Off

Overview of all FlexFrame virtual machines:

cn1:~ # ff_esx_vm.pl --op list-all
List of Virtual Machine - Application Nodes registered in LDAP
VM-AN Name   Pool    Group    ESX Host
lin-vm11     pool1   p1-vms   bx31-esx
lin-vm13     pool1   p1-vms   rx300s5-esx1
lin-vm2      pool1   p1-vms   bx33-esx
lin-vm21     pool2   p2-vms   rx300s5-esx1
lin-vm23     pool2   p2-vms   bx33-esx
lin-vm39     pool3   p3-vms   bx33-esx
Collecting information from ESX Hosts:
rx300s5-esx1 ... done

bx31-esx ... done
bx33-esx ... done
List of FlexFrame Virtual Machines found on available ESX servers
VM Name    ESX Host       State
lin-vm11   bx31-esx       On
lin-vm13   rx300s5-esx1   Off
lin-vm2    bx33-esx       On
lin-vm21   rx300s5-esx1   Off
lin-vm23   bx33-esx       On
lin-vm39   bx33-esx       Off

ESX Servers and Pools

An ESX server can host virtual machine Application Nodes from different pools. For each pool, the server needs some specific resources, such as a virtual machine Port Group for each of the pool networks and two Datastores (config and software) in the pool specific volff volume of the pool. Usually, these resources are implicitly prepared when the first virtual machine Application Node of a pool is created on an ESX server, so there is no need to do an explicit pool preparation of the ESX server. However, if a FlexFrame virtual machine is moved to another ESX server by mechanisms outside FlexFrame (see also section "Using vSphere Functions for FlexFrame Objects"), these resources must be prepared in advance. This can be done by using the add-pool operation of the ff_esx_adm.pl script.

Synopsis

ff_esx_adm.pl --op add-pool --name <node_name> --pool <pool_name>

Options

--op add-pool
Prepares an ESX server for usage by virtual machine Application Nodes from the given pool.

--name <node_name>
Name of the ESX server node.

--pool <pool_name>
Name of the pool for which the ESX server must be prepared.

Example

cn1:~ # ff_esx_adm.pl --op add-pool --name rx300s5-esx1 --pool pool1

Special Functions for Virtual Machines

A virtual machine designated for use as a FlexFrame Application Node is usually created when the Application Node is added to the FlexFrame configuration with ff_an_adm.pl --op add. Likewise, the virtual machine is destroyed when the associated Application Node is removed using ff_an_adm.pl --op rem.

Under special circumstances, it is necessary to create FlexFrame virtual machines for Application Nodes already defined in LDAP, for example during the FlexFrame installation process. For special purposes, it may also be helpful to destroy a FlexFrame virtual machine while preserving its LDAP entry, or when the LDAP entry is missing, e.g. as an effect of an Application Node removal that failed to destroy the virtual machine. The script ff_esx_vm.pl provides these functionalities, as well as some other special functions acting on virtual machines.

This script is not intended for the usual administrative interactions with virtual machine Application Nodes. Use ff_an_adm.pl to administrate these Application Nodes, in almost the same manner as other Application Nodes.

Synopsis

ff_esx_vm.pl --op create [--name <vm_node_name>] [--esx <esxi_node_name>] [--vcpus <number_of_virtual_cpus>] [--vmem <virtual_machine_memory_size>] [--force]
ff_esx_vm.pl --op destroy --name <vm_node_name> [--force]
ff_esx_vm.pl --op move --name <vm_node_name> --to-esx <esxi_node_name>
ff_esx_vm.pl --op refresh-ldap [--name <vm_node_name>]
ff_esx_vm.pl --op list --name <vm_node_name>
ff_esx_vm.pl --op list-all

Options

--op create
Creates virtual machines that are already defined in the FlexFrame configuration database (LDAP). If called without the option --name, a virtual machine is created for each Application Node with system type ESXVM found in LDAP. Otherwise, only the Application Node with the specified name is considered. The virtual machines get

the same names as the Application Nodes. The target ESXi server, the number of virtual CPUs and the memory size of the virtual machine are taken from LDAP by default, but can be overridden using the options --esx, --vcpus and --vmem.

--op destroy
Destroys a FlexFrame virtual machine while preserving its LDAP entry, or one whose LDAP entry is missing. If the LDAP entry is missing, the operation is started only if the option --force is specified.

--op move
Moves a powered-off FlexFrame virtual machine to another ESXi server.

--op refresh-ldap
Adjusts the LDAP information for a specific or for all virtual machine Application Node(s) after changes done with mechanisms outside FlexFrame, e.g. when a virtual machine has been moved to another ESXi host by VMware HA. For more information about using other mechanisms on FlexFrame virtual machines, see section "Using vSphere Functions for FlexFrame Objects" below.

--op list
Displays information about a specific FlexFrame virtual machine.

--op list-all
Displays information about all FlexFrame virtual machines. See also section "Displaying Information about ESX Servers and VMs".

--name <vm_node_name>
The name is used to identify a specific node. If specified, an Application Node with this name and system type ESXVM must exist in LDAP, except for the case when the --force option is specified with --op destroy.

--esx <esxi_node_name>
Specifies the node name of the ESXi host where the virtual machine must be created. By default, the value from LDAP is used. When no vm_node_name is given, the --esx option is used to select for creation all VMs for which the specified ESXi host is preset in LDAP.

--vcpus <number_of_virtual_cpus>
Defines the number of virtual CPUs when creating a virtual machine. By default, the value from LDAP is used. The argument must be a number between 1 and 8. Moreover, the available resources of the ESXi host must be taken into account. See also section "Virtual Machine Properties and ESXi Resources" below.

--vmem <virtual_machine_memory_size>
Defines the memory size in MB when creating a virtual machine. By default, the value from LDAP is used. The memory size must be a number between 256 and 261120 (= 255 GB). Moreover, the available resources of the ESXi host must be taken into account. See also section "Virtual Machine Properties and ESXi Resources" below.

--force
Specifies that the memory usage of the ESXi host may be overcommitted when creating virtual machines. The default is to deny creation of virtual machines on a host if the total vmem of the virtual machines on this host, including the new ones, exceeds the memory size of the host. With operation mode destroy, --force allows destroying a FlexFrame virtual machine even when no corresponding LDAP entry exists.

--to-esx <esxi_node_name>
This option is used with --op move to specify the name of the target ESXi server.
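As an illustration, recreating a missing virtual machine for an already defined Application Node and then moving the powered-off machine to another ESXi server might be sketched as follows; the node names are placeholders taken from the listing examples above and have to be replaced by names from your configuration:

cn1:~ # ff_esx_vm.pl --op create --name lin-vm13                 # VM-AN must already exist in LDAP
cn1:~ # ff_esx_vm.pl --op move --name lin-vm13 --to-esx bx33-esx # VM must be powered off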

Virtual Machine Properties and ESXi Resources

In a vSphere environment, ESX/ESXi hosts provide resources for virtual machines. Resources include CPU, memory, power, storage and network resources. In this section, we focus on CPU and memory resources only.

As part of the virtual hardware properties, the memory size and the number of virtual CPUs are defined when creating a virtual machine. In a FlexFrame environment, values for these parameters are either predefined with the FlexFrame Management Tool for each virtual machine Application Node and can later be overridden when creating the virtual machine using the ff_esx_vm.pl script, or are defined when creating the Application Node including its virtual machine with ff_an_adm.pl. When choosing values for these parameters, careful planning must be done that takes into account which machines will be powered on on a given ESX server at the same time and for what kind of SAP workload they are planned to be used.

VMware techniques allow configuring more memory for the virtual machines on an ESX server than the available physical memory of the host. This feature is called memory overcommit and basically assumes that a VM does not use all of its assigned memory, so that memory can be shared with other VMs. For SAP systems it is strongly recommended not to overcommit memory usage, because SAP allocates memory permanently and does not release it again. Furthermore, a minimum of 4 GB should be used, and for Unicode systems a minimum of 8 GB.

A virtual machine can be configured with up to eight virtual CPUs, but it cannot be powered on if it has more CPUs than the number of logical processors of the host - that is, the number of physical processor cores if hyperthreading is disabled, or two times the number of physical processor cores if hyperthreading is enabled. Similarly as with memory, VMware techniques allow overcommitting processor usage. When a host runs multiple virtual machines that require more than the available CPU resources, the host time-slices the physical processors across all virtual machines so that each virtual machine runs as if it had its specified number of virtual processors. The CPU virtualization adds varying overhead with performance implications.

SAP has successfully run performance tests in vSphere virtual machines (utilizing all available virtual CPUs to 100%) which overcommitted the host system by up to 200%. The performance degradation inside the virtual machines was linearly reciprocal to the overcommitment. You may exceed the 200% overcommitment, but keep in mind that the performance of virtual machines in such a scenario is not guaranteed. In case of performance problems, SAP can require you to shut down or pause other running virtual machines to check whether the overcommitment caused the problems.

When creating virtual machines using FlexFrame scripts, the parameters given for the memory size and the number of virtual CPUs (--vmem and --vcpus) are roughly checked for plausible values, and it is checked that the memory usage of the designated ESX server is not overcommitted, but no further checks and resource evaluations are carried out. This task is the responsibility of the specialists who do the configuration and sizing planning for the FlexFrame system, and it assumes a good knowledge of the virtualization techniques and the requirements of the SAP systems used.

Using vSphere Functions for FlexFrame Objects

While the FlexFrame tools focus mainly on the FlexFrame specific aspects of ESXi servers and virtual machines, there are other ways to access vSphere components, such as the vSphere Client or the vSphere Command-Line Interface (vCLI). Moreover, for more complex administrative actions and functionalities, as well as the centralized view and monitoring of the VMware infrastructure, a vCenter Server can be used.

These tools and functions are outside the scope of FlexFrame; for information on their usage, please refer to the appropriate VMware documentation. However, when using them for ESX servers and virtual machines known in FlexFrame, care must be taken not to disturb the FlexFrame functionality. Some special requirements are:

1. Virtual machines for FlexFrame usage must be created using FlexFrame tools.
2. Do not rename FlexFrame virtual machines.
3. Do not destroy FlexFrame virtual machines while the associated Application Node still exists. Removing the Application Node usually also destroys the virtual machine.
4. Do not change the virtual machine properties and devices. As an exception, you may change the number of CPUs and the memory size of a powered-off virtual machine, but take into account the resources of the involved ESX server. After such changes, please call ff_esx_vm.pl --op refresh-ldap to adjust the corresponding LDAP settings, or use the script ff_ponpoff.pl to power on the virtual machine, as this does an implicit refresh of the LDAP settings.
5. Do not move a FlexFrame virtual machine to an ESX server that is not part of the same FlexFrame system. To move a powered-off virtual machine to another ESX server of the FlexFrame system, you can use ff_esx_vm.pl --op move.

Otherwise, you must also take care that the target ESX server is prepared for the pool the virtual machine Application Node belongs to. After a move done with non-FlexFrame tools, please call ff_esx_vm.pl --op refresh-ldap as soon as possible to adjust the corresponding LDAP setting.
6. Do not suspend/resume FlexFrame virtual machines. This may result in a service being executed twice, when a service formerly running on the suspended virtual machine has been moved to a spare node and then the other VM is resumed. To enforce this requirement, the FlexFrame scripts create an alarm on the vCenter Server (if known in FlexFrame, including authentication data) that powers off a virtual machine when it is resumed.

7.9 Script for Power on/off/reboot of a Computing Node in FF4S

Synopsis

ff_ponpoff.pl --op power-on|power-off|reboot {--name <node name> | --pool <pool name>}

Options

--op power-on|power-off|reboot
power-on: power on the node(s); power-off: power off the node(s); reboot: reboot the node(s).

--name <node name>
Execute the above action for a single node.

--pool <pool name>
Execute the above action for all nodes of a FlexFrame pool.

The script takes the shutdown configuration files from /opt/myamc/vff/vff_<poolname>/data/fa/shutdown to get the shutdown users and passwords and, in the case of server blades, the address of the management blade. The content of these configuration files is supplied during the installation process of a FlexFrame or, later, it can be updated by the FA Agents (see also "FlexFrame Installation Guide V5.0A", section "Power Shutdown Configuration"). It can happen that a certain node is not in the appropriate configuration file: you can write it into the file manually, or answer the procedure's prompts for user and password.

The following files are used:
/opt/myamc/vff/vff_<poolname>/data/fa/shutdown/fa_ipmi.cfg for rack servers
/opt/myamc/vff/vff_<poolname>/data/fa/shutdown/fa_blade.cfg for blade servers
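For illustration, powering a single node off and on again from a Control Node might look like this; the node name is a placeholder:

cn1:~ # ff_ponpoff.pl --op power-off --name lin-vm13   # placeholder node name
cn1:~ # ff_ponpoff.pl --op power-on --name lin-vm13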

8 Storage Systems Administration

8.1 NAS Systems Configuration (EMC and NetApp)

Adding a New NAS

To add a NAS to the FlexFrame environment, the NAS has to be configured, the network has to be prepared and some data has to be stored in the LDAP database. The NAS configuration has to be done manually, but all necessary data and locations are displayed by this program. The network and LDAP preparation is done directly by ff_nas_adm.pl.

Synopsis

ff_nas_adm.pl --op add --name <node_name> --type <nas_type> --swgroup <switch_group_id> [--host <ip_host_part>] [--ports <port_count>] [--blades <blade_count>] [--partner <node_name>] [--cstations <control_station_count>] [--shcmd <shell_command>] [--blade <blade_id>] [--10gbit]

Options

--op add
Adds a NAS and displays some information about processing steps.

--name <node_name>
Defines the node name of the NAS.

--type <nas_type>
Defines the type of the NAS. See the usage for a list of known NAS types.

--swgroup <switch_group_id>
Defines the switch group the NAS should be added to. See the usage for a list of configured switch group IDs.

--host <ip_host_part>
Defines the host part to be used to build IP addresses for the Control or Storage LAN networks. If this option is omitted, the script uses a free host number to calculate the IP address.

--ports <port_count>
Defines the number of ports to be used with the pool Storage LAN networks. If this option is omitted, the script uses the default of two ports. The maximum number of ports per channel is 8 for 1 Gbit/s ports and 2 for 10 Gbit/s ports.

--blades <blade_count>
Blade/data mover count of an EMC Celerra NAS. If this option is omitted, the script uses the default of one blade.

--partner <node_name>
Defines the cluster partner in a NetApp filer cluster or the remote partner for an EMC Celerra.

--cstations <control_station_count>
Number of control stations of an EMC Celerra NAS. If this option is omitted, the script uses the default of one control station.

--shcmd <shell_command>
Shell command (absolute path) used to configure the NAS. If this option is omitted, the script uses the default of /usr/bin/ssh.

--blade <blade_id>
Blade/data mover ID of an EMC Celerra NAS. The ID is the numerical value of the blade/data mover (e.g. server_2 means ID 2). If this option is omitted, the script uses the default of ID 2.

--10gbit
The NAS is a 10 Gbit system. This option is only usable for NetApp Filers.

Example for a NetApp Filer

cn1:/opt/flexframe/bin # ff_nas_adm.pl --op add --name filer2 --type FAS3000 --swgroup 1
update LDAP...
update switch 1/1 configuration
Notice: Update will take about 1 minute.
...+
Some manual interventions are necessary to integrate the NAS into the FlexFrame environment. The following list of actions has to be performed in order to integrate the NAS into your FlexFrame landscape. Since your exact configuration may vary, these steps have to be performed manually. However, the VIF must be named 'storage'.

The files /etc/rc, /etc/exports and /etc/hosts.equiv have to be edited on volume vol0.

These lines have to be added to or changed in /etc/rc:
hostname filer2
vif create multi storage <-- add your NICs here, e.g. e0a e0b for 1 Gbit/s ports; e1 e2 for 10 Gbit/s ports
vlan create storage 13
ifconfig storage netmask broadcast mtusize wins up
options dns.enable off
options nis.enable off
savecore

These lines have to be added to or changed in /etc/exports:
/vol/vol0 -sec=sys,rw= : ,anon=0

These lines have to be added to /etc/hosts.equiv:
root
root

As the switch ports are already configured, the correct wiring between the NAS and the switch ports has to be done. See below for a list of cable connections.
Connect your NAS LAN interfaces to the named switch ports:
SwitchGroup / Switch / Port    LAN Interface
1 / 2 / 3    filer2 (FAS3000): port "data NIC-1"
1 / 1 / 3    filer2 (FAS3000): port "data NIC-2"
Finally, execute the command "mount /FlexFrame/filer2/vol0" on both Control Nodes to mount the filer's vol0 on the Control Nodes. This is necessary for further automated configuration of the filer.
The complete instruction above is listed in the file /tmp/filer2-add/todo

Example for an EMC Celerra:

cn1:~ # ff_nas_adm.pl --op add --name emc01 --type NS-TYPE --swgroup 1 --blades 2
update LDAP...
update switch 1/1 configuration
Notice: Update will take about 1 minute.
...+
Some manual interventions are necessary to integrate the NAS into the FlexFrame environment. The following list of actions has to be performed in order to integrate the NAS into your FlexFrame landscape. Since your exact configuration may vary, these steps have to be performed manually.
As the switch ports are already configured, the correct wiring between the NAS and the switch ports has to be done. See below for a list of cable connections.
Connect your NAS LAN interfaces to the named switch ports:
SwitchGroup / Switch / Port    LAN Interface
1 / 2 / 8    emc01 (NS-TYPE): server_2: port "data NIC-1"
1 / 1 / 8    emc01 (NS-TYPE): server_2: port "data NIC-2"
1 / 2 / 9    emc01 (NS-TYPE): server_3: port "data NIC-1"
1 / 1 / 9    emc01 (NS-TYPE): server_3: port "data NIC-2"
1 / 2 / 20   emc01 (NS-TYPE): port "mgmt NIC-1"
Mounting of any Celerra file systems on the Control Nodes is not necessary for further configuration.
The complete instruction above is listed in the file /tmp/emc01-add/todo

Removing a NAS

To remove a NAS from the FlexFrame landscape, it must not be used by any pool. ff_nas_adm.pl --op list will display the current configuration of the NAS. The switch port configuration will be removed from the switches and from the LDAP database.

Synopsis

ff_nas_adm.pl --op rem --name <node_name>

Options

--op rem
Removes a NAS from the FlexFrame landscape and displays some information about processing steps.

--name <node_name>
Defines the node name of the NAS.

Example

cn1:/opt/flexframe/bin # ff_nas_adm.pl --op rem --name filer2
update switch 1/1 configuration
Notice: Update will take about 1 minute.
...+
update LDAP...
NAS successfully removed from network and LDAP

Configuring SNMP Traps for NetApp Filers

The NetApp Filers within FlexFrame should send their messages (SNMP traps) to the Control Nodes (resp. the myAMC Messenger). This has to be configured on the Filer side. To do so, log on to the Filer(s) with telnet <filer> and continue with the following steps:

filer> snmp traphost add <ip_of_control_node_1>
filer> snmp traphost add <ip_of_control_node_2>

Use the IP addresses of the Control LAN segment.

filer> snmp community add ro public

public may be replaced by the community you have specified in the Management Tool. For further information on how to configure thresholds on Filer specific traps, please see the NetApp Filer documentation.

Displaying All Configured NAS

To get an overview of all configured NAS systems within the FlexFrame environment, use the operation mode list-all of the program ff_nas_adm.pl. It displays IP addresses and names, type, switch ports and the link aggregation ID, separated by NAS.

Synopsis

ff_nas_adm.pl --op list-all

Option

--op list-all
Displays all configured NAS systems.

Example

cn1:/opt/flexframe/bin # ff_nas_adm.pl --op list-all
NAS configurations
filer
Control Lan filer-co
Type: FAS3000
Shell: /usr/bin/ssh -l flexframe
Switch Link Aggregation
Port Count: 2
Link Aggr.ID: 5
Storage LAN switch ports
1 / 1 / 13 SwGroup / Switch / Port
1 / 2 / 13 SwGroup / Switch / Port
Control LAN switch ports
1 / 2 / 15 SwGroup / Switch / Port
Pools
pool filer-pool1-st master
pool filer-pool2-st master
pool filer-pool5 master
usr filer-usr-st master

Displaying NAS Configuration

To display the detailed configuration of a NAS as known by the LDAP database, use the command ff_nas_adm.pl with operation mode list.

Synopsis

ff_nas_adm.pl --op list --name <node_name>

Options

--op list
Displays the configuration of a NAS.

--name <node_name>
Defines the node name of a NAS.

Example for a NetApp Filer:

cn1:/opt/flexframe/bin # ff_nas_adm.pl --op list --name filer
NAS configurations
filer
Control Lan filer-co
Type: FAS3000
Shell: /usr/bin/ssh -l flexframe
Switch Link Aggregation
Port Count: 2
Link Aggr.ID: 5
Storage LAN switch ports
1 / 1 / 13 SwGroup / Switch / Port
1 / 2 / 13 SwGroup / Switch / Port
Control LAN switch ports
1 / 2 / 15 SwGroup / Switch / Port
Pools
pool filer-pool1-st master
pool filer-pool2-st master
pool filer-pool5 master
usr filer-usr-st master

Example for an EMC Celerra:

cn1:~ # ff_nas_adm.pl --op list --name cel
NAS configurations
cel
Control Lan cel-co
celcs1-co ControlStation1 priv.
celcs2-co ControlStation2 priv.
Type: NS-Type
Shell: /usr/bin/ssh
Switch Link Aggregation
DataMover Port Count Link Aggr.ID
Storage LAN switch ports
DataMover SwGroup / Switch / Port
Control LAN switch ports
1 / 1 / 20 SwGroup / Switch / Port
Pools
DataMover Pool IP Name
2 p cel-p1-st

Adding a Pool to a NAS

To be able to use an existing NAS for a pool, the network connection to the NAS has to be extended. On the NAS, a new virtual interface has to be created, and on the switch ports the new VLAN has to be configured. All these steps are done with ff_nas_adm.pl using the operation mode add-pool, but the program will not change any exports.

For EMC NAS, the switch ports to any data mover of an EMC Celerra are enabled for all VLANs the NAS is using. Thus any data mover is able to take over the role of any other data mover.

Synopsis

ff_nas_adm.pl --op add-pool --name <node_name> --pool <pool_name> [--role {master|slave}] [--host <ip_host_part>] [--blade <blade_id>]

Options

--op add-pool
Adds the given pool to the named NAS.

--name <node_name>
Defines the node name of the NAS.

--pool <pool_name>
Defines the pool name the NAS has to support. See the usage for a list of configured pools.

--role {master|slave}
Defines the role of the NAS within the given pool. This information is used by the ff_setup_sid_folder program. If this option is omitted, the script uses the default role master.

--host <ip_host_part>
Defines the host part to be used to build IP addresses for the Control or Storage LAN networks. If this option is omitted, the script uses a free host number to calculate the IP address.

--blade <blade_id>
Blade/data mover ID of an EMC Celerra NAS. The ID is the numerical value of the blade/data mover (e.g. server_2 means ID 2). If this option is omitted, the script uses the default of ID 2.

Example

cn1:/opt/flexframe/bin # ff_nas_adm.pl --op add-pool --name filer --pool pool4
update LDAP...
update switch 1/1 configuration
Notice: Update will take about 1 minute.
...+
vlan: storage-25 has been created
Pool pool4 successfully added to NAS, LDAP and network.

Removing a Pool from a NAS

The rem-pool mode is the opposite of the add-pool operation mode of ff_nas_adm.pl. It removes the virtual interface to the pool and removes the VLAN access for the given pool on the NAS switch ports. This action will not be permitted if any SID of the pool uses this NAS or if the NAS is the default NAS of the pool. In the latter case, the pool interface of the NAS will be removed when the pool itself is removed.

For EMC NAS, the pool's VLAN will only be removed from the EMC Celerra switch ports if no data mover is configured for this pool any longer.

Synopsis

ff_nas_adm.pl --op rem-pool --name <node_name> --pool <pool_name> --blade <blade_id>

Options

--op rem-pool
Removes the given pool from the named NAS.

--name <node_name>
Defines the node name of the NAS.

--pool <pool_name>
Defines the pool name the NAS has to support. See the usage for a list of configured pools.

--blade <blade_id>
Blade/data mover ID of an EMC Celerra NAS. The ID is the numerical value of the blade/data mover, e.g. server_2 means ID 2. If this option is omitted, the script uses the default of ID 2.

Example

cn1:/opt/flexframe/bin # ff_nas_adm.pl --op rem-pool --name filer --pool pool4
update switch 1/1 configuration
Notice: Update will take about 1 minute.
...+
update LDAP...
Pool pool4 successfully removed from NAS, LDAP and network.

Adding a Blade (Data Mover) to an EMC Celerra NAS

An EMC Celerra may consist of more than one data mover. To add one or more data movers to an existing NAS, use this command. It will add configuration data to LDAP and assign and configure switch ports. It will display instructions on how to interconnect the new data movers with the FlexFrame switch group.

Synopsis

ff_nas_adm.pl --op add-blade --name <node_name> --ports <port_count> --blades <blade_count>

Options

--op add-blade
Adds blades/data movers to a NAS.

--name <node_name>
Defines the node name of the NAS.

--ports <port_count>
Number of ports to be used with the pool Storage LAN networks. If this option is omitted, the program uses the default of 2.

--blades <blade_count>
Number of blades/data movers to be added. If this option is omitted, the program uses the default of 1.

Example

cn1:~ # ff_nas_adm.pl --op add-blade --name cel
update LDAP...
update switch 1/1 configuration
Notice: Update will take about 1 minute.
...+
As the switch ports are already configured, the correct wiring between the DataMover and the switch ports has to be done. See below for a list of cable connections.
Connect your DataMover LAN interfaces to the named switch ports:
SwitchGroup / Switch / Port    LAN Interface
1 / 2 / 11    cel (NS-Type): server_4: port "data NIC-1"
1 / 1 / 11    cel (NS-Type): server_4: port "data NIC-2"

The complete instruction above is listed in the file /tmp/cel-add/todo

Removing a Blade (Data Mover) from an EMC Celerra NAS

To remove one data mover from an existing NAS, use this command. It will remove configuration data from LDAP and deactivate the corresponding switch ports.

Synopsis

ff_nas_adm.pl --op rem-blade --name <node_name> --blade <blade_id>

Options

--op rem-blade
Removes blades/data movers from a NAS.

--name <node_name>
Defines the node name of the NAS.

--blade <blade_id>
Blade/data mover ID to be removed. If this option is omitted, the program uses the default of ID 2.

Create NAS Cluster Partnership

Synopsis

ff_nas_adm.pl --op partner --name <node_name> --partner <node_name>

Options

--op partner
Define a cluster partnership for a NetApp filer cluster or an EMC Celerra.

--name <node_name>
First partner in a NetApp filer cluster or an EMC Celerra cluster.

--partner <node_name>
Second partner in a NetApp filer cluster or an EMC Celerra cluster.
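An invocation might look like the following sketch; filer and filer2 are placeholder node names for the two cluster partners:

cn1:~ # ff_nas_adm.pl --op partner --name filer --partner filer2   # placeholder node names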

Move a NAS to another Switch Group

Synopsis

ff_nas_adm.pl --op move --name <node_name> --swgroup <switch_group_id> {--1gbit | --10gbit}

Options

--op move
Move a NAS to another switch group.

--name <node_name>
Defines the node name of the NAS.

--swgroup <switch_group_id>
Defines the switch group the data ports of the NAS should be moved to.

--10gbit
Move to a 10 Gbit connection.

--1gbit
Move to a 1 Gbit connection.
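For illustration, moving the data ports of a NAS named filer to switch group 2 with 1 Gbit connections might look like this; the node name and switch group ID are placeholders:

cn1:~ # ff_nas_adm.pl --op move --name filer --swgroup 2 --1gbit   # placeholder values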

Switching a NetApp Filer between 1 Gbit and 10 Gbit

To switch a NetApp Filer from 1 Gbit to 10 Gbit (and vice versa), the operation mode conf10gb has been implemented for ff_nas_adm.pl. It consists of the following actions:

1. First do the LDAP update and the switch update for the target configuration.
2. Remove the old configuration from LDAP.
3. Display the wiring, as suggested by the script.
4. Ask the caller whether he wishes to remove the old ports' switch configuration from the switches; if 'yes', it is removed.

For a complete switching you have to do the following:

1. Deactivate the NetApp Filer (shutdown of all Application Nodes which use this server).
2. Call ff_nas_adm.pl --op conf10gb --name ... : Do the suggested wiring before you remove the old ports' switch configuration!
3. Remove the cables for the old ports' switch configuration.
4. Power on the NetApp Filer and the Application Nodes and restart the SAP applications.
5. Reboot the Control Nodes.

Synopsis

ff_nas_adm.pl --op conf10gb --name <node_name> {--to-10gbit | --to-1gbit}

Options

--op conf10gb
Switches a NAS from 1 Gbit to 10 Gbit (and vice versa).

--name <node_name>
Defines the node name of the NAS.

--to-10gbit
Switch from 1 Gbit to 10 Gbit.

--to-1gbit
Switch from 10 Gbit to 1 Gbit.

Example for a NetApp Filer

ff_nas_adm.pl --op conf10gb --name fas3001 --to-1gbit
update LDAP...
update switch 1/1 configuration
Notice: Update will take about 1 minute.
...+
Some manual interventions are necessary to integrate the NAS into the FlexFrame environment. The following list of actions has to be performed in order to integrate the NAS into your FlexFrame landscape. Since your exact configuration may vary, these steps have to be performed manually.

vif create multi storage <-- replace the NICs here by the 10 Gbit NICs or 1 Gbit NICs, as desired
To make this modification persistent, take the vol0/etc/rc file and replace the vif instruction in it.
As the switch ports are already configured, the correct wiring between the NAS and the switch ports has to be done. See below for a list of cable connections.
Connect your NAS LAN interfaces to the named switch ports:
SwitchGroup / Switch / Port    LAN Interface
1 / 2 / 2    fas3001 (FAS3000): port "data NIC-1"
1 / 1 / 2    fas3001 (FAS3000): port "data NIC-2"
Finally, execute the command "mount /FlexFrame/fas3001/vol0" on both Control Nodes to mount the filer's vol0 on the Control Nodes. This is necessary for further automated configuration of the filer.
The complete instruction above is listed in the file /tmp/nas-conf10gb-fas3001/todo
update LDAP...
Old 1GB/10GB ports successfully removed from LDAP
!!!!!!! First do the correct wiring, as suggested in the message before !!!!!!
After that: Do you wish to remove the old ports' switch configuration: [yes/no]. yes
remove configuration on switches

Changing NAS Command Shell

To change the command shell used to configure the NAS, the ff_nas_adm.pl command supports the change-sh operation mode. The given shell command replaces the currently used command or the default for configuring the named NAS. To change the shell for more than one NAS, the procedure has to be repeated for each NAS. Make sure there is no password request when using this shell command to connect to the NAS.

Synopsis

ff_nas_adm.pl --op change-sh --name <node_name> --shcmd <shell_command>

Options

--op change-sh
Changes the shell command for NAS configuration.

--name <node_name>
Defines the node name of the NAS.

--shcmd <shell_command>
Shell command (absolute path) used to configure the NAS. If this option is omitted, the script uses the defaults /usr/bin/rsh for NetApp Filers and /usr/bin/ssh for EMC Celerras.

Example

cn1:~ # ff_nas_adm.pl --op change-sh --name filer --shcmd '/usr/bin/ssh -l flexframe'
update LDAP.
NAS shell command changed at LDAP.
Shell command changed to "/usr/bin/ssh -l flexframe"

Changing NAS LinkAggregate Ports

To change the number of NICs of the vif link aggregate, the ff_nas_adm.pl command supports the change-ports operation mode. The given number is the count of NICs the aggregate should contain. The difference between the actual and the given count of ports will be allocated at the switch group, properly configured, and displayed at the end of the program run. The additional wiring and the configuration of the NAS have to be done manually. On an EMC Celerra, the link aggregates of all data movers/blades have to be expanded.

Synopsis

ff_nas_adm.pl --op change-ports --name <node_name> --ports <port_count>

Options

--op change-ports
    Changes the number of NICs of the link aggregate per filer or data mover.

--name <node_name>
    Defines the node name of the NAS.

--ports <port_count>
    Number of NICs the link aggregate should contain.

Example

cn1:~ # ff_nas_adm.pl --op change-ports --name filer --ports 4
update LDAP.
Expand StorageLAN vif with 2 NICs up to 4 NICs.
Do this before(!) connecting new NICs to switch ports.
The additional switch ports at Switch Group are already configured.
Interconnect additional NICs between NAS and the switches of SwitchGroup 1 as noted below:
SwGrpID/SwitchID/Port       Filer
1 / 1 / 20             <==> Filer data NIC-3
1 / 2 / 20             <==> Filer data NIC-4

If no errors are reported, follow the instructions above to extend the NAS link aggregate(s) to the switch group. See "/opt/flexframe/network/wiring-channel-extend-filer.txt" for a copy of this message.

NAS Disk Free

To get information on the available space on NAS file systems, the program ff_nas_df.pl may be used. It connects to all FlexFrame NAS systems and displays the disk free (df) information. For comparability, all size values are given in kilobytes. The view may be restricted to a single NAS.

Synopsis

ff_nas_df.pl [--name <node_name>] [--mounted]

Options

--name <node_name>
    Name of the NAS to be viewed. Omit this option to view all.

--mounted
    Adds a "Mounted on" column to the view. Typically it is the same as the file system and is therefore omitted.

Example (the size columns were lost in transcription)

cn1:~ # ff_nas_df.pl
cel server_2
Filesystem total KB used KB avail KB capacity
mxi_data %
mxi_logs %
root_fs_ %
root_fs_common %
bar_default_data %
bar_default_logs %
volff_bar %
f940
Filesystem total KB used KB avail KB capacity
/vol/data_o10/ %
/vol/foo_default_data/ %
/vol/foo_default_logs/ %
/vol/linux_luns/ %
/vol/logs_o10/ %
/vol/oxi_data/ %
/vol/oxi_logs/ %
/vol/sapdata/ %
/vol/saplog/ %
/vol/vol0/ %
/vol/volff/ %
/vol/volff_foo/ %
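The view can be restricted to a single NAS and extended by the mount point column. A typical call, using the filer name f940 from the example above, might look like this:

cn1:~ # ff_nas_df.pl --name f940 --mounted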

Celerra SRDF-NAS High Availability

A short description of a Celerra SRDF-NAS cluster can be found in the corresponding chapter of this manual. For more in-depth information please read the EMC Celerra manuals.

Besides the base Celerra SRDF installation and configuration, which is described in the FlexFrame installation manual, specific FlexFrame SRDF-NAS configurations are needed for the functionality of the FlexFrame Celerra SRDF-NAS high availability. The administration of the Celerra SRDF-NAS high availability in FlexFrame is performed with the ff_nas_ha.pl command.

Syntax

ff_nas_ha.pl --op init
ff_nas_ha.pl --op list
ff_nas_ha.pl --op check
ff_nas_ha.pl --op switchover
ff_nas_ha.pl --help
ff_nas_ha.pl --version

The init operation mode is started on one Control Node and initializes a local parameter file on each Control Node. The initialization must be done as the first step after a reconfiguration of the Celerra SRDF-NAS configuration or, in some cases, after a Control Node reconfiguration. Otherwise all other operations will fail. If you are unsure whether the initialization was performed, perform it (again).

The init operation mode creates local files named /opt/flexframe/etc/ff_nas_ha.ppar on each Control Node and, on the Celerra control station of the R2 side, an SRDF-NAS switch lock script. The file creation is protected with a write lock. Additionally, the content of the parameter file is protected with an md5 checksum. Old existing parameter files will be saved with a time stamp suffix. For the copy on both Control Nodes temporary files are used, which are temporarily located in the /opt/flexframe/tmp directory.

Please be aware that the call of ff_nas_ha.pl --op init must be done when both Celerras are in normal operation mode; this means that all data movers have their designated roles (no standby is activated at this time). You can check the state of the data movers by entering the command nas_server -info all at the control station of each Celerra.

With the list operation mode you can interactively get the current SRDF-NAS state of both Celerras. The first part of the displayed information is static and comes from the parameter file. A second part of the information is dynamically collected from both Celerra control stations. The third part of the information is the current SRDF pair state. All information together should give the administrator a picture of the Celerra configuration and serve as a basis for a manual switchover.
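A minimal sketch of the sequence described above (prompts are illustrative): first verify the data mover roles on the control station of each Celerra, then run the initialization on one Control Node and query the state:

cs0:~ $ nas_server -info all        <-- on each Celerra control station
cn1:~ # ff_nas_ha.pl --op init
cn1:~ # ff_nas_ha.pl --op list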

The check operation mode delivers the SRDF-NAS state in the form of a numeric return code. For this SRDF-NAS check several individual inspections are performed. In one inspection a test write into /FlexFrame/volFF is executed. In some rare conditions, files named ff_nas_ha.pl-<cn>-<pid>-testwrite can remain as garbage; they will be erased with the next call of ff_nas_ha.pl. If you find such test write files and are sure that they are garbage, you can also erase them manually.

The check operation mode is explicitly called from the myAMC FrameAgents for automatic state detection. Thus these FrameAgents can initiate an automatic switchover, which is configured in the FrameAgents.

The switchover operation mode initiates an SRDF-NAS switchover of the central NFS based file system /FlexFrame/volFF. An SRDF-NAS switchover switches Celerra R1 volumes to the R2 side. The switchover can be initiated manually by calling the ff_nas_ha.pl command directly or, in the case of a configured automatic switchover, through the FrameAgent. If another switchover is already active, a second parallel switchover will be rejected with the corresponding return value. The lock mechanism for the switchover is realized in the form of a lock daemon, which is started on the control station of the R2 side and stopped at the end of the switchover. A signal SIGUSR1 will stop the lock daemon and release the lock.

The help option delivers the common syntax of the ff_nas_ha.pl command. The version option delivers the version of the ff_nas_ha.pl command.

Background Processes

Monitor alert process: For a timeout check, the communication with the FrameAgent uses so-called monitor alerts. These are started at each start of the ff_nas_ha.pl command for every operation and stay alive in the background for one minute, so they can be detected by the FrameAgents. In special conditions a monitor alert for timeout extension must be created. A monitor alert looks in the process list like:

/opt/flexframe/tmp/10163/monitor_alert SYMSRV:SRV_NAS TIMERANGE:180 PHASE:START-NAS-LIST PID:10163

Switchover lock process: For the switchover, a lock mechanism is started on the control station of the R2 side with /root/.ff_nasrdf_switch.pl. This daemon logs its activity in the control station system logging file /var/log/messages.
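To see whether a switchover lock is currently held, you can look for the lock daemon on the R2 control station and check its entries in the system log; a minimal sketch (command usage is illustrative):

cs0:~ # ps -ef | grep ff_nasrdf_switch
cs0:~ # grep ff_nasrdf_switch /var/log/messages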

Diagnosis

For diagnosis purposes ff_nas_ha.pl writes on each Control Node into the local logging file /var/log/ff_nas_ha.log. When it reaches its maximum size, this logging file is compressed and saved with a timestamp suffix. Besides this, all switchovers are also logged into the Control Node system logging file /var/log/messages.

Return Codes

For operation mode init:

0   Parameter file is created.
10  Parameter file isn't created.

For operation modes list and check:

0   Status of the R1 side is ok and an automatic switchover isn't necessary. A manual switchover can be done.
10  Status of the R2 side is ok and a manual switchback is allowed.
20  Status isn't ok and an automatic or manual switchover is possible.
30  Status isn't ok and neither a switchover nor a manual switchback is allowed.
40  The parameter file is missing.
41  The parameter file is inconsistent.

For operation mode list only:

50  Only static parameter values are printed out.

For operation mode switchover:

0   A switchover was executed successfully.
10  A switchover terminated with an error.
20  A switchover has been started and isn't finished yet.
30  A switchover wasn't started.
40  The parameter file is missing.
50  Another switchover or manual switchback is already active.

For all operation modes:

90  An OS internal error occurred.
91  An unexpected error occurred.
92  Not-implemented error occurred (only for functionality that is not implemented!).
93  Wrong syntax is used.
94  This is a version request.
95  Operation isn't supported.
96  User has no privilege for execution.
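Since the return codes are numeric, they can be evaluated in a small shell wrapper on a Control Node; a minimal sketch based on the list/check codes above:

ff_nas_ha.pl --op check
rc=$?
case $rc in
  0)  echo "R1 ok - no automatic switchover necessary" ;;
  10) echo "R2 ok - manual switchback allowed" ;;
  20) echo "status not ok - automatic or manual switchover possible" ;;
  30) echo "status not ok - no switchover or switchback allowed" ;;
  40|41) echo "parameter file missing or inconsistent" ;;
  *)  echo "unexpected return code $rc" ;;
esac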

Used File Resources

In directory <control_node>:/opt/flexframe/etc
  ff_nas_ha.ppar                  CN local parameter
  ff_nas_ha.ppar.lck              lock file for CN local parameter
  ff_nas_ha.ppar.md5              md5 checksum for local parameter
  ff_nas_ha.ppar.<date_time>      backups of CN local parameters

In directory <control_node>:/opt/flexframe/tmp
  ff_nas_ha.ppar_$$               temporary CN local parameter
  ff_nas_ha.ppar_$$_rem           temporary CN remote parameter
  ff_nas_ha.ppar.md5_$$           temporary local md5 file
  ff_nas_ha.ppar.md5_$$_rem       temporary remote md5 file
  $$/monitor_alert_$$             temporary monitor alert script

In directory <control_node>:/var/log
  ff_nas_ha.log                   CN local logging file
  ff_nas_ha.log.lck               lock file for CN local logging file
  messages                        CN local system logging file

In directory <control_node>:/FlexFrame/volFF
  ff_nas_ha.pl-<cn>-$$-testwrite  temporary test write file

In directory <r2_celerra_cs0>:/root
  .ff_nasrdf_switch.pl            daemon lock script on R2-CS0
  .ff_nasrdf_switch.pl.lck        lock file for switchover lock on R2-CS0

In directory <r2_celerra_cs0>:/var/log
  messages                        R2-CS0 local system logging file

The used directories are located on a Control Node or on the control station of the R2 side. This is expressed by the placeholder in angle brackets. The $$ in the file resources is used as a shortcut for a process ID.

Used Perl Modules

FlexFrame Perl modules:
/opt/flexframe/lib/fsc_ff/utils.pm

Standard Perl modules:
/usr/lib/perl5/5.8.3/Time/Local.pm
/usr/lib/perl5/5.8.3/i586-linux-thread-multi/Digest/MD5.pm
/usr/lib/perl5/5.8.3/i586-linux-thread-multi/Fcntl.pm
/usr/lib/perl5/5.8.3/Getopt/Long.pm
/usr/lib/perl5/5.8.3/i586-linux-thread-multi/IO/Handle.pm
/usr/lib/perl5/5.8.3/i586-linux-thread-multi/IO/Select.pm
/usr/lib/perl5/5.8.3/i586-linux-thread-multi/POSIX.pm
/usr/lib/perl5/5.8.3/i586-linux-thread-multi/Socket.pm
/usr/lib/perl5/5.8.3/i586-linux-thread-multi/Sys/Syslog.pm
/usr/lib/perl5/vendor_perl/5.8.3/i586-linux-thread-multi/XML/Parser.pm

8.2 SAN Configuration in a FlexFrame Environment

Setting Up the SAN Configuration

The SAN configuration for a FlexFrame landscape needs careful planning and design. A team of specialists for storage systems, servers and SAN infrastructure must work out a concept that serves as a guideline for the complete setup. Furthermore, it is required to consult the current recommendations from the manufacturers of the storage and SAN components used and from the involved partners, as well as the MatrixEP from Fujitsu. These documents can contain important information that must be taken into account when planning the configuration and when configuring the individual components.

Configuring Storage

This section does not intend to give a detailed description of how to configure SAN storage for usage in a FlexFrame environment. This task will usually be done by a storage systems expert in cooperation with a member of the FlexFrame implementation team.

Furthermore, the SAN storage system is not necessarily an integral part of the FlexFrame system; you can use LUNs from a storage system which is also used for other purposes in your computing environment. Therefore, LUN creation and administration is a task outside the scope of FlexFrame. Before starting with this task, a detailed storage concept should be worked out.

General Remark on the Use of Navisphere

When a CLARiiON storage subsystem is used within a FlexFrame environment (SAN- or NAS-attached) and the Control Node is used for administrating and/or configuring the CLARiiON via the Navisphere Web GUI, it may happen that, caused by a switchover of an Application Node, the servlet container Tomcat is started and requires port 8080, which is used by the Navisphere client. In that case the java_vm and the web browser, i.e. the Navisphere client, are terminated. To avoid this situation, use a non-FlexFrame server for the Navisphere client.

Storage System Access

The FlexFrame software itself does not need administrative access to the SAN storage system. However, if administrative access to the storage system from inside the FlexFrame environment is desired, you can add it either via the Management Tool (External Connectivity) or later with the ff_swport_adm.pl tool.

If dynamic LUN masking is used to reconfigure the LUN access settings on the SAN storage system, the StorMan software (or, more precisely, the SMI-S provider used by StorMan) needs access to the storage system. For details on the StorMan configuration in FlexFrame see chapter "Dynamic LUN Masking".

A special case is when the SAN storage system is used for NAS access too. If it is a NetApp Filer, the administrative access for SAN administration is the same as the access used for NAS administrative tasks and is usually created as a part of the initial setup via the Management Tool (Controlling, NAS Storage). In the case of a Celerra gateway that uses a CLARiiON/FibreCAT as backend, the Celerra control station needs access to the Storage Processors (SPs) of the CLARiiON/FibreCAT; as the Celerra control station is in the FlexFrame control LAN, the SPs must be connected to the control LAN too. For details see "Configuration of NAS Systems" in the "Installation of a FlexFrame Environment" manual.

Be aware that to be able to use the SAN functionality of the NetApp Filer, you have to license the fcp service (license add <license_code> - may require a reboot, depending on the filer model), start the fcp service (fcp start) and bring up the target HBA(s) (fcp config <adapter> up). If you are using a clustered NetApp Filer, it is recommended to use a cfmode setting of single_image.
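On the filer console, the sequence described in the previous paragraph might look like the following sketch; the license code and the adapter name 0c are placeholders:

f940> license add XXXXXXX
f940> fcp start
f940> fcp config 0c up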

LUN Creation

For usage by the FlexFrame SAP databases, several LUNs must be created on the storage system. The term LUN refers here to a storage device exported from a storage system that can be seen as a disk device by a host. It may be a LUN from a FibreCAT CX or NetApp system or a Symmetrix device. The number and size of the LUNs depend on the number and type of the SAP systems to be installed on SAN storage, as well as on whether a host based volume management software will be used. Based on these LUNs, all file systems needed for the database data and log files will be created.

A host based volume management software might be used for different reasons, such as more administrative flexibility or the ability to create volumes with mirrors on different storage systems (host-mirroring). The supported volume manager is the native volume manager of Linux (LVM2). For FlexFrame specific details on creating volumes and file systems see the section "Creating Volumes and File Systems for a SAP System" below.

It has to be ensured that a LUN is used by only one SAP system. Never use a volume group for more than one SAP system.

When using a host based volume management software, at least two LUNs - preferably more - should be used for each SAP system: one for sapdata volumes and one for saplog volumes. Only in test environments with low requirements concerning database performance can you place saplog and sapdata volumes on the same LUN. A similar recommendation also applies to the usage of partitions/slices of a LUN in cases where no host based volume management software is used. For productive systems, you should also place the LUNs for sapdata and saplog volumes on physically separated disks, e.g. by using different RAID groups of a FibreCAT CX or different aggregates of a NetApp system.

Example

Creation of two LUNs on a NetApp system for usage with Linux, without special performance considerations. Be aware that if performance is an issue, it may be advisable to create the LUNs for sapdata in a separate volume that is also located in a separate aggregate.

> vol create SAN aggr0 120g
Creation of volume 'SAN' with size 120g on containing aggregate 'aggr0' has completed.
> vol options SAN nosnap on
> qtree create /vol/SAN/sol
> lun create -s 50g -t linux /vol/SAN/sol/datasa1
> lun create -s 7g -t linux /vol/SAN/sol/logssa1
> lun show
/vol/SAN/sol/datasa1   50g ( )   (r/w, online)
/vol/SAN/sol/logssa1    7g ( )   (r/w, online)
>
> lun show -v /vol/SAN/sol/datasa1
/vol/SAN/sol/datasa1   50g ( )   (r/w, online)
    Serial#: hpdnsj8y2uj0
    Share: none
    Space Reservation: enabled
    Multiprotocol Type: linux
> lun show -v /vol/SAN/sol/logssa1
/vol/SAN/sol/logssa1    7g ( )   (r/w, online)
    Serial#: hpdnsj8y1eir
    Share: none
    Space Reservation: enabled
    Multiprotocol Type: linux

For more information on how to create RAID groups, aggregates, volumes, qtrees and LUNs on a NetApp storage system, please refer to the NetApp documentation.

To create a LUN on a FibreCAT CX, you may use the Bind LUN command of the Navisphere Manager GUI.

For more information on how to manage storage and create LUNs on EMC storage systems, please refer to the EMC documentation.

Recording LUN Information

For each LUN to be used in a FlexFrame pool for the database of a SAP system, you can create a VLUN section in the SAN configuration file as described in the section "FlexFrame SAN Configuration File". If dynamic LUN masking is used, the creation of a VLUN section is mandatory.

For a NetApp LUN, use the LUN path as arraylun and determine the LUNGUID from the LUN's Serial# by means of the tool ff_san_info.sh on a FlexFrame node. For a FibreCAT CX, use the LUN ID as arraylun and the LUN's Unique ID as LUNGUID. This information can be found in the Navisphere Manager GUI.

The hostlun value will be determined and added later, when mapping the LUNs to hosts (see the section "Mapping LUNs to the Application Nodes" below).
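To illustrate, a VLUN section for the NetApp LUN created in the example above might look like the following sketch. The entry name and the exact layout are hypothetical; the authoritative syntax of the SAN configuration file is described in the section "FlexFrame SAN Configuration File":

VLUN S01_data_1 pool1 {
    arraylun = /vol/SAN/sol/datasa1
    lunguid  = <LUNGUID determined from the LUN's Serial# with ff_san_info.sh>
    hostlun  = <added later, when mapping the LUN to hosts>
}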

Configuring Application Nodes

All Application Nodes that will run database instances of SAP systems which are to be configured for SAN usage must be equipped with Host Bus Adapters (HBAs). To ensure that the correct type of Fibre Channel HBA is used, please consult the FlexFrame Support Matrix. The supported types include QLogic and Emulex HBAs for Linux nodes.

For all types of Application Node images, depending on the used combination of OS/HBA/storage system, specific driver settings may be needed. These must be set using a Maintenance Cycle as described in section 11.4 "Maintenance of Application Nodes - Software" on page 315. Be sure to select one of the Application Nodes that is equipped with Fibre Channel HBAs.

For performance reasons as well as to protect against hardware failure, it is strongly recommended to use more than one HBA, or at least a dual ported HBA, on each Application Node. To manage the multiple paths between host and storage system, a multipath software is needed. The FlexFrame Linux Application Node images already contain a ready-to-use installation of the Linux native multipath software DM-MPIO.

Be aware that the DB instance of a SAP system configured for SAN usage will be able to run only on Application Nodes with the correct image. Therefore special care must be taken not to start this instance on an Application Node outside the pool group (of course it is assumed that all nodes run on the same image, as generally required by a correct FlexFrame installation).

Connecting the Storage to the Application Nodes

Before connecting your Application Nodes to the SAN storage system, please ensure that your configuration, including server, HBA, Fibre Channel switch and storage system, is supported according to the FlexFrame Support Matrix. Also check that all used components are correctly configured according to the current recommendations from the respective manufacturer or partner and from the MatrixEP.

Fibre Channel switches used to connect the Application Nodes with the storage systems are not considered an integral part of the FlexFrame landscape, and no automatic configuration of these switches is done. For more information on the setup and configuration of the Fibre Channel switches, refer to the documentation of your switch. For high availability, a redundant fabric should be set up, so use two separate fabrics if possible.

It is not necessary to connect the LAN ports of the Fibre Channel switches to one of the FlexFrame LANs. However, if you wish to administrate the switches from the FlexFrame

Control Node, you can create connections to the control LAN either via the Management Tool (External Connectivity) or later with the ff_swport_adm.pl command.

It is also assumed that the storage system is already connected to the Fibre Channel switches. If not yet done, follow the guidelines of your storage system's documentation.

To connect an Application Node to the storage system and the LUNs created on the system for usage by the SAP systems that will run on this Application Node, several preparations have to be done. You should go through this procedure for one Application Node first, then create volumes and file systems on this Application Node as described in the section "Creating Volumes and File Systems for a SAP System". Finally repeat it for all other nodes. Optionally, some steps can be done in parallel for all nodes (e.g. create zones for all Application Nodes at the same time).

During the preparation of an Application Node it must be ensured that the FlexFrame Autonomous Agents are deactivated for this node.

1. Connect the Fibre Channel ports of the Application Node to the Fibre Channel switches; for redundancy purposes, connect one port to one switch and the other port to the other switch.
2. Create zones on the Fibre Channel switches, so that the Application Node can see the storage system(s). For some hints see the section "Creating Zones on the Fibre Channel Switches" below.
3. Check the visibility of the storage system on the Application Node. An example for this action is shown in the section "Checking Visibility of the Storage System on the Application Node".
4. Check the visibility of the Application Node's HBAs (host initiators) on the storage system. On a FibreCAT CX, register the connection of the host initiators (see the section "Registering Host Initiators with a CLARiiON/FibreCAT CX").
5. On the storage system, implement the actions required to assign the LUNs that will be accessed by this Application Node to this node (LUN masking/mapping). Some details on how this can be done are shown in the section "Mapping LUNs to the Application Nodes".
6. Check the visibility of the LUNs on the Application Node (see the section "Checking Visibility of the LUNs on the Application Node").

Creating Zones on the Fibre Channel Switches

Zones are created on Fibre Channel switches to protect servers and storage systems. There is no specific requirement how zoning is to be done for a FlexFrame environment; you may use hard or soft zoning, port based or WWN-based. We recommend to use WWN-based zoning and to create zones that contain two WWPNs: one from a host and one from the storage system.
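On a Brocade switch, such a two-WWPN zone could be created as in the following sketch. The zone and configuration names are hypothetical, the WWPNs are taken from the host and filer examples below, and it is assumed that a zone configuration ff_cfg already exists:

sw1:admin> zonecreate "z_an1_filer", "21:00:00:c0:9f:c6:45:22; 50:0a:09:81:86:28:13:0c"
sw1:admin> cfgadd "ff_cfg", "z_an1_filer"
sw1:admin> cfgsave
sw1:admin> cfgenable "ff_cfg"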

If you want to configure Brocade Fibre Channel switches, you can use the tool ff_fc_gen_sw.sh on the FlexFrame Control Node to provide you with some assistance. Consult the man page of this tool for details.

To get the WWPNs of the Application Nodes, you can use the command ff_san_info.sh -i on each Application Node.

Example on a Linux Application Node:

# ff_san_info.sh -i
HBA    WWNN                     WWPN
host1  20:00:00:c0:9f:c6:45:22  21:00:00:c0:9f:c6:45:22
host2  20:00:00:c0:9f:c6:45:23  21:00:00:c0:9f:c6:45:23

For a NetApp system, the WWPNs can be found as shown in the following example (the host address values were lost in transcription):

> fcp config
0c: ONLINE <ADAPTER UP> Loop Fabric
    host address
    portname 50:0a:09:81:86:28:13:0c  nodename 50:0a:09:80:86:28:13:0c
    mediatype ptp  speed 2Gb
0d: ONLINE <ADAPTER UP> PTP Fabric
    host address
    portname 50:0a:09:82:86:28:13:0c  nodename 50:0a:09:80:86:28:13:0c
    mediatype ptp  speed 2Gb

You can find the WWNs of a FibreCAT CX in the Navisphere Manager GUI.


Checking Visibility of the Storage System on the Application Node

If the correct zones have been created on the Fibre Channel switch, the Application Node should be able to see the storage system. As no LUNs have been mapped yet, the storage system will present a pseudo LUN 0 to the host. You can now verify that the correct number of paths between storage system and Application Node is available.

Registering Host Initiators with a CLARiiON/FibreCAT CX

Use Navisphere Manager's Connectivity Status window for the storage system to register each HBA connection with the storage system: In the Connectivity Status window, select the connection for the WWN of the HBA and click Register to open the Register Initiator Record window. Select the following:

- Initiator Type of CLARiiON Open
- ArrayCommPath
- Unit Serial Number of Array
- Failover Mode of 1

In the Host Information box, select New Host and enter the Application Node's host name and IP address for the first HBA connection of the host. For all other connections of the host, select Existing Host and choose the appropriate host name in the drop-down box. Click OK. After a few seconds the registration status of the HBA connection should change from No to Yes.

If you have a host with the Navisphere CLI package installed and LAN access to the SPs, you can alternatively register the HBA connections of an Application Node as shown in the following example, where <sp_address> stands for the address of the storage processor:

# /opt/navisphere/bin/navicli -h <sp_address> storagegroup -setpath -hbauid 20:00:00:00:c9:49:42:d2:10:00:00:00:c9:49:42:d2 -sp a -spport 0 -failovermode 1 -arraycommpath 1 -o -host tompw4
# /opt/navisphere/bin/navicli -h <sp_address> storagegroup -setpath -hbauid 20:00:00:00:c9:49:42:d1:10:00:00:00:c9:49:42:d1 -sp a -spport 1 -failovermode 1 -arraycommpath 1 -o -host tompw4
# /opt/navisphere/bin/navicli -h <sp_address> storagegroup -setpath -hbauid 20:00:00:00:c9:49:42:d2:10:00:00:00:c9:49:42:d2 -sp b -spport 0 -failovermode 1 -arraycommpath 1 -o -host tompw4

# /opt/navisphere/bin/navicli -h <sp_address> storagegroup -setpath -hbauid 20:00:00:00:c9:49:42:d1:10:00:00:00:c9:49:42:d1 -sp b -spport 1 -failovermode 1 -arraycommpath 1 -o -host tompw4

Note that the hbauid above is a concatenation of the WWNN and the WWPN of the Application Node's HBA.

Mapping LUNs to the Application Nodes

If dynamic LUN masking is used, this step must be done using the StorMan component. For details see section "Using StorMan to Reconfigure SAN". The rest of this section refers to a static SAN setup.

For a FibreCAT CX, this means that you have to connect the Application Node to the storage group that contains all LUNs that will be used by the Application Node's pool group. If the storage group does not exist yet, create it and add the LUNs to the storage group. When adding a LUN to a storage group, a Host LUN Id is automatically assigned. For ease of administration, you may decide to assign this number yourself and choose the same number as the LUN ID.

You may also decide to use storage groups that contain only one Application Node, but in this case you have to take special care to add the same LUNs to all storage groups that contain hosts from the FlexFrame pool group, and to ensure that the LUNs also have the same host LUN number in all groups. Therefore we do not recommend this scheme.

Example on a NetApp system:

> igroup add igroupsol c949415c c949415b
> lun map /vol/SAN/sol/datasa1 igroupsol
lun map: auto-assigned igroupsol=0
> lun map /vol/SAN/sol/logssa1 igroupsol
lun map: auto-assigned igroupsol=1

The number reported behind the igroup name is the hostlun to be used in the VLUN section of the SAN configuration file described in section "FlexFrame SAN Configuration File". If you decide to assign this number yourself, you can add a LUN ID parameter to the lun map command.

You can display all existing LUN mappings on a NetApp storage system as shown in the following example:

> lun show -m
LUN path               Mapped to    LUN ID   Protocol
/vol/SAN/sol/datasa1   igroupsol    0        FCP
/vol/SAN/sol/logssa1   igroupsol    1        FCP

However, if you decide to assign the host LUN ID yourself, it is advisable to arrange for a host LUN ID of 0 in each storage group or igroup, to avoid some known problems.

Checking Visibility of the LUNs on the Application Node

Example on a Linux Application Node:

Make the LUNs visible by doing a reboot. Optionally you can use the ff_qlascan.sh script for a host using a QLA 2xxx HBA, followed by the command multipath -v2. The visibility can be checked at various levels, from the driver and the SCSI layer up to the multipath layer.

# cat /proc/scsi/qla2xxx/1
QLogic PCI to Fibre Channel Host Adapter for FCI/O-CARD2Gb/s:
Firmware version IPX, Driver version sles
...
SCSI LUN Information:
(Id:Lun) * - indicates lun is not registered with the OS.
( 0: 0): Total reqs 26, Pending reqs 0, flags 0x0, 0:0:81 00
( 0: 1): Total reqs 27, Pending reqs 0, flags 0x0, 0:0:
...

# multipath -ll
e bab561d38db11
[size=3 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=2][active]
 \_ 1:0:2:9 sdad 65:208 [active][ready]
 \_ 2:0:1:9 sdbn 68:16  [active][ready]

\_ round-robin 0 [enabled]
 \_ 1:0:1:9 sdl  8:176  [active][ready]
 \_ 2:0:0:9 sdav 66:240 [active][ready]
e a9fa75d1d38db11
[size=3 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=2][active]
 \_ 1:0:2:6 sdaa 65:160 [active][ready]
 \_ 2:0:1:6 sdbk 67:224 [active][ready]
\_ round-robin 0 [enabled]
 \_ 1:0:1:6 sdi  8:128  [active][ready]
 \_ 2:0:0:6 sdas 66:192 [active][ready]

Creating Volumes and File Systems for a SAP System

If you have decided to place the sapdata and saplog data of a SAP system on SAN storage, you have to create file systems on the LUNs that have been created and made accessible to an Application Node. Usually, these file systems are not created directly on the LUNs, but on volumes created on top of these LUNs with the help of a host based volume management software (only the native volume manager of Linux is supported), or on partitions/slices of the LUNs.

For more information on how to create volume groups, add physical volumes to a volume group and create logical volumes inside a volume group, refer to the documentation of the used volume management software. The type of the used volume manager, the names of the volume groups as well as the used volumes have to be documented in the SAN configuration file as described in the section "FlexFrame SAN Configuration File". There you can also add options to be used when importing or deporting a volume group (e.g. forceimport; waitforce=5). Note that each volume name is recorded as fileSystemMountpointSource in the MOUNTPATH section for the file system that is created on top of this volume. Keep in mind that a volume group has to be used exclusively for the database of one SAP system only. For some notes that must be taken into account when creating a volume group with the Linux native volume manager (LVM2) for usage by FlexFrame, see the section "Creating a Linux LVM2 Volume Group for FlexFrame Usage" below.

The number and size of the needed file systems depend on the type of the SAP system to be installed and are determined as a part of the detailed storage concept that has been defined in advance. As an example, for an Oracle based installation you may need 6 sapdata file systems - sapdata1 to sapdata6 - and 8 saplog file systems: saparch, oraarch, sapbackup, sapreorg, origloga, origlogb, mirrloga, mirrlogb. Depending on the storage concept, it is also possible to define a lower or higher number of file systems. For more information refer to the appropriate SAP Installation Guide. For each file system that will be used by a SAP system, you will have to add a MOUNTPATH entry in the SAN

configuration file as described in the section "FlexFrame SAN Configuration File". Consult the SAP notes applicable for the database and file system type to get the correct attributes to be used for the creation of the file systems and the recommended mount options.

Once you have entered the description of the volume groups and file systems for a SAP system ID (SID) in your SAN configuration file, it is time to add this information to the FlexFrame database (LDAP) and test its usability. The needed steps are detailed in the section "Completing the Configuration and Testing Usability of SAN for an SID" below.

Creating a Linux LVM2 Volume Group for FlexFrame Usage

The FlexFrame images for Linux Application Nodes contain a configuration file for LVM (/etc/lvm/lvm.conf) that is prepared so that LVM volumes can be set up on DM-MPIO multipath nodes. Furthermore, if you want to use an LVM volume group with FlexFrame, it is necessary to mark this volume group with an FF4S tag as shown in the following example:

bx1:~ # pvcreate /dev/disk/by-name/ e c1bad1d38db11
Physical volume "/dev/disk/by-name/ e c1bad1d38db11" successfully created
bx1:~ # vgcreate sapdata_s12 /dev/disk/by-name/ e c1bad1d38db11
Volume group "sapdata_s12" successfully created
bx1:~ # vgchange --addtag FF4S sapdata_s12
Volume group "sapdata_s12" successfully changed

Completing the Configuration and Testing Usability of SAN for an SID

After you have entered the description of the volume groups and file systems for a SAP system in the SAN configuration file, you should add this information to the FlexFrame database (LDAP) and test its usability. You can do this by performing the following steps:

1. Add the SAP SID to the FlexFrame configuration using the script ff_sid_adm.pl as described in the section "Adding/Removing/Modifying SAP SIDs and Instances" on page 252. Alternatively, you can also use an SID that has been defined with the Management Tool.

2. Tell the LDAP database that this SID is using SAN volumes for its database, as shown in the following example:

control1:~ # ff_sid_mnt_adm.pl --op add --pool pool2 --sid S01 --san /tmp/tftpboot/config/sancfgs10

Alternatively you can do this by using the script ff_san_ldap_conf.pl as shown in the corresponding section.

3. Before you can install SAP with its database on the SAN volumes, some folders for the SID have to be created in advance. This also creates links to the mount points for the SAN volumes. To do so, run the script ff_setup_sid_folder.sh as shown in the following example:

control1:~ # ff_setup_sid_folder.sh -p pool2 -s S01

After running this script, you should also check that in the database directory for this SID (/oracle/S01 according to the example shown above) the required links to /var/flexframe/san/... have been created and there are no links into the sapdata/saplog volumes on the NAS system. If some of these links are still there (this can happen if not all standard areas are needed), delete them, as otherwise a mixed SAN/NAS installation could result from this setup if care is not taken during the SAP installation. This must be avoided, as mixing SAN and NAS for the database of one SID is not supported.

4. Mount the file systems defined for this SID by using the script ff_san_mount.sh on the Application Node where you have prepared the file systems, as shown in the following example:

# ff_san_mount.sh pre sapdb S01 start
/FlexFrame/scripts/ff_san_mount.sh: SAN tool. (SID=S01, PHASE=pre, ACTION=start)
SID S01 of pool pool1 is not configured for SRDF.
storattach of sid S01 is done!
Volume group "vg_data_s01" successfully imported
Volume group "vg_data_s01" successfully changed
Volume group "vg_logs_s01" successfully imported
Volume group "vg_logs_s01" successfully changed
fsck 1.38 (30-Jun-2005)
e2fsck 1.38 (30-Jun-2005)
/dev/vg_data_s01/sapdata2: clean, 17/ files, 84441/ blocks
/dev/vg_data_s01/sapdata2 mounted at /var/flexframe/san/oracle/s01/sapdata2
fsck 1.38 (30-Jun-2005)
e2fsck 1.38 (30-Jun-2005)
/dev/vg_data_s01/sapdata6: clean, 10/51200 files, 10578/ blocks
/dev/vg_data_s01/sapdata6 mounted at /var/flexframe/san/oracle/s01/sapdata6

5. Check that the mounted file systems have the correct ownership required by the database installation and change it as appropriate.

6. Unmount the file systems on this Application Node using the ff_san_mount.sh script as shown in the following example:

# ff_san_mount.sh post sapdb S01 stop

Repeat steps 4 and 6 on the other Application Nodes of the pool group to test that each one can access the file systems if needed. Before you start the SAP installation, mount the file systems again on the Application Node that has been selected for this task. After the SAP installation is finished, unmount the file systems again using the ff_san_mount.sh script.

For more details on using this script refer to its man page. Please note in particular that this script is usually implicitly called during the start and stop of a database instance and should not be used directly, except in situations where you have ensured that no other host is using the volumes and file systems configured for the database instance. Not taking care of this may result in severe data corruption, up to making the respective database unusable.

8.3 Dynamic LUN Masking

Using StorMan to Reconfigure SAN

To do the dynamic LUN masking for a SAN based database instance of a SAP application, you can use StorMan, which has been integrated in FlexFrame 5.0A. StorMan is a virtualization layer for the dynamic management of storage resources and their dynamic assignment to servers. FlexFrame 5.0A is delivered with StorMan V2.0.

You can use the same StorMan commands regardless of whether EMC or NetApp storage systems are used. For more information see the StorMan V2.0 manual.

If you have EMC storage systems, you first of all have to install the SMI-S provider before the initial setup of StorMan on your FlexFrame. In the case of a NetApp Filer the installation of the SMI-S provider is done automatically by the ONTAP software.

SMI-S (Storage Management Initiative Specification) is a standard which forms the base technology of StorMan. Defined by the SNIA (Storage Networking Industry Association), it is a widespread standard in the storage world.

Installation of SMI-S Provider

To install the SMI-S provider you have to consider:

Get the SMI-S provider for CLARiiONs and Symmetrix from EMC Powerlink. An account is needed to access EMC Powerlink!

Provider kit: Home -> Support -> Downloads & Patches -> Downloads S-Z

Release notes: Home -> Support -> Documentation Library -> Software S -> SMI-S Provider

Please see the release notes of the used StorMan version (now 2.0) to find out which versions of the SMI-S provider are approved. Then look into the release notes of the SMI-S provider to see which hardware is supported. For example, you can find out this way that a certain minimum provider version is required to support DMX-4 and CX-4 systems.

For FibreCAT CX systems the SMI-S provider is available (for certified FibreCAT CEs) from Fujitsu.

It is recommended to install the SMI-S provider for a CLARiiON/FibreCAT CX on both (!) Control Nodes of your FlexFrame landscape, provided that LAN connections exist from your Control Nodes to the storage system. Otherwise, the SMI-S provider must be installed on an external node with a LAN connection to the storage system, and the Control Nodes must have LAN connections to this node.

If you have a Symmetrix, a Fibre Channel connection to the storage system is necessary. Therefore an external node with a Fibre Channel connection is needed for the SMI-S provider, and the Control Nodes must have a LAN connection to this node. For high availability reasons, two external nodes should be used, with an SMI-S provider installed on each of them.

For more information about the SMI-S provider look into the StorMan Manual V2.0, chapter Installation -> Software -> 'SMI-S Provider' and 'Installation and start of EMC SMI-S provider'.

The following picture shows the case of SMI-S providers for a CLARiiON on both (!) Control Nodes. The second CLARiiON is optional and serves high availability purposes.

[Figure: CN1 and CN2 each run a StorMan server and an SMI-S provider, connected via LAN to CLARiiON-1 and an optional second CLARiiON. HA: in case of a failure of the first SMI-S provider, the FSC cluster software PCL switches over to the second SMI-S provider.]

The next picture shows SMI-S providers (SMI-S-1/2) installed on external nodes with Fibre Channel connections to the storage system (Symmetrix) and LAN connections to the Control Nodes. For high availability reasons, two external nodes should be used, with the SMI-S provider installed on each of them. The second Symmetrix is optional and serves high availability purposes.

[Figure: CN1 and CN2 run StorMan servers and connect via LAN to two external SMI-S provider servers (SMI-S-1 and SMI-S-2), which are attached via Fibre Channel to Symmetrix-1 and an optional second Symmetrix. The FlexFrame Control Nodes do not allow Fibre Channel attachments. HA: the second SMI-S provider is automatically involved by the StorMan server in case of a failure of the first SMI-S provider.]

Installation of StorMan

You can decide whether you want to start StorMan manually or integrate it into your Linux-HA cluster configuration. However, for high availability reasons it is strongly recommended to do the Linux-HA cluster integration if you want to use dynamic LUN masking!

Starting StorMan Manually

1. Start the StorMan server by using /etc/init.d/storman start.
2. Normally the SMI-S provider is started implicitly by its installation process. If not, start the SMI-S provider by using startproc /opt/emc/ecim/ecom/bin/ecom -d
3. Connect the SMI-S provider to the StorMan server:
   storcfg cimom -add -name <name of host where the SMI-S provider is running> ...

4. In the case of an EMC CLARiiON with no Fibre Channel connection to the host where the SMI-S provider is running: Tell the SMI-S provider where to discover the storage configuration. Example:
   storemc add -name <name of host where the SMI-S provider is running> ... -sp <list of storage processors of the CLARiiON/FibreCAT> -cxuser <administrator user id on the storage processors> -cxpwd <password for the administrator user id>
5. Tell the SMI-S provider to discover the storage configuration:
   storcfg cimom -discover -name <name of host where the SMI-S provider is running>
6. Tell the StorMan server which storage system you want:
   storcfg system -discover -system <systemid> ...
   You can get the system id with storcfg system -show.

Integrating StorMan into the Linux-HA Cluster Configuration

After you have performed the steps 1 - 6 above:

1. Stop the running StorMan server with /etc/init.d/storman stop and the SMI-S provider using killproc /opt/emc/ecim/ecom/bin/ecom (see also the StorMan manual).
2. Test it with the commands storcheck and checkproc /opt/emc/ecim/ecom/bin/ecom.
3. Add StorMan to the cluster configuration:
   - if CIM is not locally installed or in the case of NetApp storage systems:
     ff_ha_tool.sh -a storman
   - if CIM is locally installed (only for EMC):
     ff_ha_tool.sh -a storman
     ff_ha_tool.sh -a cim
   Adjust the path names of the SMI-S provider in /FlexFrame/volFF/FlexFrame/SMAWstor/cim.conf. Don't forget to complete the start command with -d (see also the StorMan manual).

4. If you want to use the StorManGUI, you have to adapt the correct host name in /opt/smaw/smawstor/storman/stormangui/stormanstart.jnlp. This could be necessary if the DNS name for the SMI-S provider, which has been generated in this file, is not known in the DNS server because FlexFrame is working in a private, local network.

Preparatory Work for Dynamic LUN Masking

First of all you should remove all existing LUN mappings on your storage system, because these could cause problems in the case of EMC CLARiiONs and NetApp storage systems and for storage groups in which several hosts share the same devices.

Then you have to store all storage relevant information in LDAP by using ff_san_ldap_conf.pl (see 8.5.1). For all types of volume managers you must add VLUN entries to your SAN configuration files (see 8.5.2). This is very important for the next step: calling the procedure ff_storman_conf.pl to execute the configuration of storage for the desired SAN configuration by using StorMan.

When using ff_san_ldap_conf.pl, you must define the arrayluns as StorMan needs them. For example, on a CLARiiON, you must pad them with leading zeros to a 4- or 5-digit integer (depending on the SMI-S provider version).

It is very important that ff_storman_conf.pl is called after ff_san_ldap_conf.pl when adding VLUNs, and that it is called before ff_san_ldap_conf.pl when deleting VLUNs!

Additionally, on each NetApp storage system used for dynamic LUN masking, you need a dummy LUN of minimal size which is statically attached with host LUN Id 0 to all Application Nodes that will use LUNs from this storage system. If StorMan is used for dynamic LUN masking, this attachment must be done using StorMan commands. You should do this after calling ff_storman_conf.pl, when the Application Nodes are already defined in StorMan. To be able to do the attach, this LUN must also be defined in StorMan.

Example (the system serial numbers and the full host names were lost in transcription):

cn1:~ # storcfg pool -add -poolid ff4s_dummy_ -system ONTAP:
cn1:~ # storcfg volume -add -storid dummy_lun0_ -poolid ff4s_dummy_ -system ONTAP: -deviceid /vol/vol_dummy/dummy:c4lmz4oenkar
cn1:~ # storattach -storid dummy_lun0_ -hostname bx -hostlun 0
cn1:~ # storattach -storid dummy_lun0_ -hostname bx -hostlun 0
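After these definitions you might check that the StorMan server is reachable and that the dummy LUN is known to it; a minimal sketch (assuming storcfg volume supports a -show query analogous to storcfg system -show):

cn1:~ # storcheck
cn1:~ # storcfg volume -show -storid dummy_lun0_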

Script ff_storman_conf.pl

ff_storman_conf.pl is used to configure StorMan for the FlexFrame SAN configuration. The necessary information is taken from LDAP. ff_storman_conf.pl generates or deletes hosts, pools and devices in StorMan. You have the choice to do this for a certain pool or for all defined pools. For each specified pool you can specify an SID, thus generating the StorMan configuration for this SID only. If you don't specify an SID, all SIDs of the specified pool are taken for the StorMan configuration.

For defining hosts in StorMan, all affected nodes must have HBAs and WWPNs defined in their node definition. To define HBAs, ff_an_adm.pl with option add or hba-change can be used (see sections 7.2 and 7.6); WWPNs are supplied during the boot of the node.

Synopsis

ff_storman_conf.pl --op {add|del} --pool {<pool_name> [--sid <SID>] | @ALL}

Options

--op (add|del)
    Adds or deletes a StorMan configuration.

--pool
    Defines from which pools the SIDs for the StorMan configuration should be taken: a certain SID of a specified pool (a pool name in combination with a given SID), all SIDs of a specified pool (a pool name only), or all SIDs of all pools (@ALL).

--sid <SID>
    Defines the SID for which you want to configure StorMan.

On the Control Nodes: Start of the Communication Server

To receive the attach/detach orders from the Application Nodes and transmit them to StorMan, the communication server has to be started. This is normally done automatically when StorMan has been started by Linux-HA. But if StorMan has been started manually, the communication server must be started manually by calling

startproc /opt/flexframe/bin/ff_comm_server.pl

It can be stopped manually by

killproc /opt/flexframe/bin/ff_comm_server.pl
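Whether the communication server is currently running can be verified in the same way as for the other daemons mentioned in this chapter; a minimal sketch:

cn1:~ # checkproc /opt/flexframe/bin/ff_comm_server.pl && echo "running"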

On the Application Node: Attaching or Detaching the LUNs

Attach the LUNs of an SID to the current host

If, in a SAN configuration, an SID is started on a host, all LUNs of this SID have to be visible to the host. You can achieve this visibility by calling ff_lunmasking.pl with opcode attach, but only if the necessary preparation in StorMan is done, i.e. the LUNs are defined as volumes in StorMan (see the section above). When ff_lunmasking.pl is called, either the list of the concerned LUNs is given by the caller (with option --lun_file) or ff_lunmasking.pl collects them from LDAP. Afterwards all concerned LUNs are attached to the host by StorMan. Normally this script is used automatically on Application Nodes during the start of an SID.

Detach the LUNs of an SID from the current host

If, in a SAN configuration, an SID is stopped on a host, the LUNs should no longer be visible on the host. You can end this visibility by calling ff_lunmasking.pl with opcode detach, but only if the necessary preparation in StorMan is done, i.e. the LUNs are defined as volumes in StorMan (see the section above). When ff_lunmasking.pl is called, either the list of the concerned LUNs is given by the caller (with option --lun_file) or ff_lunmasking.pl collects them from LDAP. Afterwards all concerned LUNs are detached from the host by StorMan. Normally this script is used automatically on Application Nodes during the stop of an SID.

Caution must be applied when using this script directly. The configured LUNs must not be in use. Not taking care of this may cause data corruption and make the data on the LUNs unusable.

Synopsis

ff_lunmasking.pl --op {attach|detach} --sid <sid_name> [--lun_file <file_name>]

Options

--op attach
    Attaches the LUNs of the SID to the current host.

--op detach
    Detaches the LUNs of the SID from the current host.

--sid <sid_name>
    Name of the SID whose LUNs are attached or detached.

--lun_file <file_name>
    Name of a file containing the list of all LUNs which should be attached or detached. In lun_file every LUN has to be written on a single line. If lun_file is not given by the caller, the script finds the LUNs by searching in LDAP.

8.4 SRDF Support in FlexFrame

As a prerequisite for disaster recovery procedures for FlexFrame landscapes using EMC Symmetrix storage systems, a basic SRDF support is included in FlexFrame starting with version 5.0A.

Please note that the SRDF support in FlexFrame is only a building block for a disaster recovery solution. It is in no case a replacement for a detailed customer specific planning and operations guide that takes into account the customer specific configuration and requirements. Besides this, one should be aware that a disaster is unpredictable by nature; the IT components of the site hit by the disaster can be partially still functional, or come back to life after a short outage. Therefore, human intelligence will be needed in most cases to identify a disaster and act appropriately.

The SRDF support in FlexFrame includes the support of SRDF protected NAS systems (Celerra), as described in the chapter "Celerra SRDF-NAS High Availability", and the SRDF protection for SAN based SAP database file systems, as described in the next paragraphs.

Symmetrix is EMC's product line of high-end storage solutions targeted to meet the requirements of customers' mission critical databases and applications. EMC Symmetrix Remote Data Facility (SRDF) is a Symmetrix-based family of business continuance and disaster restart solutions.

SRDF is a configuration of Symmetrix units whose purpose is to maintain multiple real-time copies of the same logical volume data in more than one location. The Symmetrix units can be in the same room, in different buildings within the same campus, or up to hundreds of kilometers apart.

The local SRDF device, known as the source (R1) device, is associated with a target LUN (called the R2 device). Data of the R1 device is mirrored to the R2 device. For more information on Symmetrix storage systems and SRDF refer to the appropriate EMC documentation.

Storage System Configuration

As with SAN storage systems in general, this activity is outside the scope of FlexFrame and will usually be done by EMC specialists.

The FlexFrame SAN SRDF support is limited to configurations where each SAN based SAP database (referred to as a SID in the next paragraphs) with SRDF protection has its source LUNs (R1 devices) on exactly one Symmetrix system and its target LUNs (R2 devices) on another single Symmetrix system. Other SIDs may use the same or another pair of Symmetrix systems, or the same pair with opposite roles.

Furthermore, a cross-connected configuration is assumed, where each Application Node on which a SID's database instance may run is connected via cabling and zoning to the Symmetrix system holding the SID's source LUNs and to the Symmetrix system holding the SID's target LUNs.

If possible, dedicated ports on the Symmetrix systems should be assigned for FlexFrame SAN usage. It is not possible to use the same ports that are used for the connection of the Symmetrix based Celerra. It is very important to use the bit settings for the ports used to connect FlexFrame SAN Application Nodes according to the EMC Support Matrix for the exact storage array type, Storage Operating Environment (microcode) version and host operating system. A brief description can also be found in the EMC Host Connectivity Guide for Linux.

As a part of the storage system preparation it is also very important to create a sufficient number of gatekeeper devices on the Symmetrix systems for usage by the Application Nodes on which the SAN SRDF functionality will be used. A number of at least three gatekeeper devices per Application Node is recommended.

Configuring Application Nodes for SAN SRDF Usage

The FlexFrame scripts supporting the SAN SRDF functionality use the EMC Solutions Enabler to issue commands on the Symmetrix storage systems. This software is not included in the FlexFrame Application Node images. Therefore it must be installed using the Maintenance Cycle for the Linux Application Node image as described in section 11.4 "Maintenance of Application Nodes - Software" on page 315. After the preparation of the maintenance image as described in the first part of section 11.4, the installation itself is done on the selected maintenance Application Node. Thereby some FlexFrame specific considerations must be taken into account:

1. /var/emc must be selected as working root directory instead of the proposed default of /usr/emc.
2. After installing the rpm, enter the license keys using the symlmf command for at least the following SYMAPI features: BASE / Symmetrix, SRDF / Symmetrix, DevMasking / Symmetrix. These licenses will be used by all Application Nodes running with this image.
3. Enable the usage of GNS (group name services) by uncommenting the line containing the text #SYMAPI_USE_GNS = ENABLE in the file /var/emc/api/symapi/config/options. This service facilitates the usage of the same device group definitions on several hosts accessing the same Symmetrix storage systems, as the definitions are maintained on the storage systems and not only in the host's symapi database, as they would be without this service.
4. Adjust the settings of the PATH and MANPATH variables for the root user to include the newly installed software (see the example below).

You should also consider using this maintenance cycle to implement driver specific adjustments in the Application Node image needed in conjunction with the Symmetrix storage system that are not covered by the default settings of the driver (as an example, append the line options lpfc lpfc_nodev_tmo=10 to the file /etc/modprobe.conf.local if using Emulex HBAs). Otherwise, these adjustments must be done in a separate maintenance cycle, which is time-consuming.

The following example assumes that the Solutions Enabler software version 6.5 has been downloaded as a compressed tar archive and copied to a location that can be accessed from the Application Node.

Example for installing EMC Solutions Enabler on an Application Node named an1 that has been prepared for maintenance:

an1:/mnt/emc/solution_enabler/ /linux # cp se6500-linux-x86_64-ni.tar.z /tmp
an1:/mnt/emc/solution_enabler/ /linux # cd /tmp
an1:/tmp # tar -xzvf se6500-linux-x86_64-ni.tar.z

199 SRDF Support in FlexFrame Storage Systems Administration symcli-core-v x86_64.rpm symcli-datacore-v x86_64.rpm symcli-datastorbase-v x86_64.rpm symcli-oracle-v x86_64.rpm symcli-srmbase-v x86_64.rpm symcli-star_perl-v x86_64.rpm symcli-storbase-v x86_64.rpm symcli-storfull-v x86_64.rpm symcli-symcli-v x86_64.rpm symcli-symrecover-v x86_64.rpm se6500_install.sh an1:/tmp #./se6500_install.sh -install # # EMC Installation Manager # Copyright 2007, EMC Corporation All rights reserved. The terms of your use of this software are governed by the applicable contract. Solutions Enabler Native Installer[RT] Kit Location : /tmp Install root directory [/opt/emc] : Working root directory [/usr/emc] : /var/emc Entered "/var/emc". To confirm type "Y" : Y Checking for OS version compatibility... Checking for previous installation of Solutions Enabler... [... snip...] Do not forget to run 'symcfg discover' after the installation completes and whenever your configuration changes. You may need to manually rediscover remotely connected arrays. Please see the installation notes for further information. # # The following HAS BEEN INSTALLED in /opt/emc via the rpm utility. # Administration and Operation 187

ITEM  PRODUCT                         VERSION
01    EMC Solutions Enabler V RT KIT
#

an1:/tmp # /usr/symcli/bin/symlmf

E M C   S O L U T I O N S   E N A B L E R

SOLUTIONS ENABLER LICENSE MANAGEMENT FACILITY

Register License Key (y/[n])? y
Enter License Key :

After entering the license keys, you can check their presence:

an1:/tmp # cat /var/symapi/config/symapi_licenses.dat
License Key: XXXX.XXXX-XXXX-XXXX  SYMAPI Feature: BASE / Symmetrix
License Key: XXXX.XXXX-XXXX-XXXX  SYMAPI Feature: SRDF / Symmetrix
License Key: XXXX.XXXX-XXXX-XXXX  SYMAPI Feature: ConfigChange / Symmetrix
License Key: XXXX.XXXX-XXXX-XXXX  SYMAPI Feature: DevMasking / Symmetrix

To enable GNS, open the file /var/emc/api/symapi/config/options with an editor, search for the line starting with #SYMAPI_USE_GNS, remove the starting # and save the modified file. Verify the setting:

an1:/tmp # grep SYMAPI_USE_GNS /var/emc/api/symapi/config/options
# Parameter: SYMAPI_USE_GNS
SYMAPI_USE_GNS = ENABLE

To adjust the PATH and MANPATH setting for the root user, enter:

an1:/tmp # cat >>/root/.bashrc
export PATH=$PATH:/usr/symcli/bin
export MANPATH=$MANPATH:/usr/storapi/man:/usr/storapi/storman

Do not forget to complete the Maintenance Cycle by applying the step described in section "Step #3: Reverting the Maintenance Image" on page 327. At the end of this procedure, all Application Nodes on which a database instance of a SAN based SID with SRDF protection may run should use the newly created image that contains the EMC Solutions Enabler.

Please note that this procedure must be reapplied after upgrading the relevant Application Nodes to a new standard FlexFrame Application Node image.
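If the driver adjustment for Emulex HBAs mentioned above is implemented in the same maintenance cycle, it amounts to appending a single line on the maintenance Application Node. A minimal sketch, taking the option line and file name from the note above (verify the correct option and timeout value for your driver version against the EMC documentation):

an1:~ # echo "options lpfc lpfc_nodev_tmo=10" >> /etc/modprobe.conf.local
an1:~ # grep lpfc /etc/modprobe.conf.local
options lpfc lpfc_nodev_tmo=10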

8.4.3 FlexFrame SAN Configuration for SRDF

When using SRDF protection for SAN based SIDs, some additional data must be recorded in the FlexFrame LDAP database. This is done together with the other SAN configuration data using the script ff_san_ldap_conf.pl.

In the configuration file used with ff_san_ldap_conf.pl when adding SAN data for SIDs, the SID entry of each SID that uses SRDF must contain the parameter sanremotemirrortype with the value SRDF, and further related parameters (primarydevgroup, secondarydevgroup, autofailoverallowed). Furthermore, besides the virtuallun list(s), which must contain the names of all LUNs used by a SID in normal operation mode (the R1 devices), the list of the LUNs used after a failover (the R2 devices) must be given with the parameter secondaryvirtuallun. For details about using the script ff_san_ldap_conf.pl refer to chapter 8.5.

Example for the SRDF relevant part of a SAN configuration file:

# SAN data of SID D01
SID pool1 D01 DB0 {
    sanremotemirrortype = SRDF
    primarydevgroup = ff_pool1_d01_r1
    secondarydevgroup = ff_pool1_d01_r2
    autofailoverallowed = yes
    VOLUMEGROUP D01data {
        ... snip ...
        virtuallun = D01_data_R1
        secondaryvirtuallun = D01_data_R2
        ... snip ...
    }
    VOLUMEGROUP D01logs {
        ... snip ...
        virtuallun = D01_logs_R1
        secondaryvirtuallun = D01_logs_R2
        ... snip ...
    }
}

# SAN data of VLUNs
VLUN D01_data_R1 pool1 {
    hostlun = 1
    arraylun = 00CFB
    arrayid =
    LUNGUID =
}
VLUN D01_data_R2 pool1 {
    hostlun = 1
    arraylun = 0221B
    arrayid =
    LUNGUID =
}
VLUN D01_logs_R1 pool1 {
    hostlun = 5
    arraylun = 00D03
    arrayid =
    LUNGUID =
}
VLUN D01_logs_R2 pool1 {
    hostlun = 5
    arraylun =
    arrayid =
    LUNGUID =
}

# SAN data of storage systems
STORSYS pool1 {
    storagearraytype = Symmetrix
    wwpn = d52cbbd8
    wwpn = d52cbbd7
}
STORSYS pool1 {
    storagearraytype = Symmetrix
    wwpn = a75767
    wwpn = a75768
}
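A configuration file like the one above can then be loaded into the LDAP database with the add operation of ff_san_ldap_conf.pl, described in chapter 8.5. A minimal sketch, assuming the file has been stored under the default name /tftpboot/config/san.cfg and that a dry run is performed first for validation:

cn1:~ # ff_san_ldap_conf.pl --op add --conffile /tftpboot/config/san.cfg --dryrun
cn1:~ # ff_san_ldap_conf.pl --op add --conffile /tftpboot/config/san.cfg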

With operation mode list-all of the script ff_san_srdf.pl, a list of all SAN based SAP systems with SRDF protection as defined in the LDAP database can be displayed, whereas operation mode list-by-symm of the same script shows these SIDs grouped by the Symmetrix system they are using according to LDAP.

Examples

cn1:~ # ff_san_srdf.pl --op list-all
SIDs with SRDF usage sorted by pool and SID name
Pool pool1
  SID D01
    Primary Device Group: ff_pool1_d01_r1
    Secondary Device Group: ff_pool1_d01_r2
    Automatic failover allowed: yes
    Secondary LUNs are in use: no
  SID D02
    Primary Device Group: ff_pool1_d02_r1
    Secondary Device Group: ff_pool1_d02_r2
    Automatic failover allowed: yes
    Secondary LUNs are in use: no

cn1:~ # ff_san_srdf.pl --op list-by-symm
SIDs with SRDF usage grouped by used Symmetrix system
Pool pool1
  SIDs using Symmetrix with Id : D01 D02

The Symmetrix Device Groups specified in LDAP must exist on each Application Node where the respective SID's database instance is started. With the usage of GNS, which should be enabled as described in chapter 8.4.2, it is sufficient to create them on one Application Node. Care must be taken to add the same devices to the device groups as defined in LDAP. When used with operation mode check-conf on an Application Node with SAN connection and operational EMC Solutions Enabler, the script ff_san_srdf.pl checks whether this has been done correctly. Besides this, an additional option of this operation mode offers the possibility to generate a template file with Symmetrix commands that can be used to create the needed device groups.

Examples

cn1:~ # ff_san_srdf.pl --op check-conf --sid D01 --pool pool1 --outfile /tmp/dg-d01
Template for device groups written to /tmp/dg-d01
For a complete check, call this function on an Application Node with SAN connection and operational EMC Solution Enabler.

cn1:~ # cat /tmp/dg-d01
### File created on Thu Sep 4 18:07: by ff_san_srdf.pl
# Primary device group:
symdg create ff_pool1_d01_r1 -type RDF1
symld -g ff_pool1_d01_r1 add dev 0CFB -sid
symld -g ff_pool1_d01_r1 add dev 0D03 -sid
# Secondary device group:
symdg create ff_pool1_d01_r2 -type RDF2
symld -g ff_pool1_d01_r2 add dev 221B -sid
symld -g ff_pool1_d01_r2 add dev -sid

an1:~ # ff_san_srdf.pl --op check-conf --sid D01 --verbose
Checking LDAP settings
  Check storage systems in LDAP... ok
  Check device groups in LDAP... ok
Checking storage system view for LDAP objects
  Check availability of storage systems... ok
  Check availability of device groups... ok
  Check devices in device groups... ok
an1:~ #

Besides this, it is also possible to get more detailed information about the SRDF configuration of a SID or its state with other operation modes of the script ff_san_srdf.pl. For details refer to the description in the section "Script: ff_san_srdf.pl" below.

As the FlexFrame SAN SRDF function uses dynamic LUN masking, it is also required to make all preparations for this functionality as described in the section "Using StorMan to Reconfigure SAN".

8.4.4 SAN SRDF Usage in FlexFrame

With an SRDF configuration it is possible to continue, or, to be more exact, to resume operation if a storage system is no longer available, by switching to the storage system that holds the devices with up-to-date copies of the formerly used ones. This storage system action, also known as an SRDF failover, can be triggered automatically by the FlexFrame software when certain conditions are met and the administrator has requested the use of the automatism by setting the parameter autofailoverallowed to yes for the affected SID in the LDAP database.

Please note that an automatic SRDF failover for the SAN devices of a SID is only invoked if the FlexFrame software can verify that the secondary devices are up-to-date copies of the primary devices. Otherwise, no automatic failover will be done, even if the parameter autofailoverallowed has been set to yes.

During start or restart of the database instance of a SAN based SID with SRDF protection, the concerned FlexFrame scripts check whether the needed Symmetrix storage system is reachable; if it is not reachable, the state of the relevant Symmetrix device groups is checked to determine whether an SRDF failover to the secondary devices is possible without data loss. The reaction to the outcome of this check depends on the setting of the parameter autofailoverallowed. If this parameter has the value yes and the check has a positive outcome, the SRDF failover is invoked, and after its successful completion the start or restart continues with the secondary devices.

If a failover is needed but the LDAP setting does not allow it, this is signalled to the FlexFrame Autonomous Agents by means of the MonitorAlert interface and a message is produced that describes the situation. In this case, the administrator must decide whether a failover should be done. The failover can be invoked manually with the operation mode failover of the script ff_san_srdf.pl. The successful completion of the failover is detected by the waiting start or restart processing, which continues with the secondary devices.

In a situation where a storage system outage has been detected by other means, or when such an outage is planned, the failover operation of the script ff_san_srdf.pl can also be invoked. Before calling this operation manually for a SID, it must be ensured that the database instance of this SID is not running on an Application Node. After completion of the failover operation on the Symmetrix storage system, the script ff_san_srdf.pl also sets an indicator in the LDAP data of the affected SID which shows that the secondary devices are in use for this SID, so that subsequent actions for the same SID can determine the correct devices.

To facilitate the detection of a storage system failure that can be handled by an SRDF failover, the ServicePingDb detector of the FlexFrame Autonomous Agents should be activated in configurations with SAN SRDF protection. For details on how to activate this function, refer to the documentation of the FA Agents. Take care to also set the flag SAN_SRDF_CHECK_ON in the ServicePingDb script. If the ServicePing script or another detection mechanism of the FA Agents detects a failure of the database instance, a restart of the service or a reboot of the node followed by a service start will be triggered as defined by the FlexFrame Autonomous Agents' parameters. During this restart or start processing, a more detailed check of the needed Symmetrix storage system and the requirement for a failover is done, which can lead to the invocation of an SRDF failover in some cases as described above.

Please note that while in failover mode, SRDF protection is no longer available for the affected devices. After repairing the failed storage system, it is possible to switch processing back to the primary devices and to reestablish the SRDF protection by using the operation mode failback of the script ff_san_srdf.pl. Before invoking this operation, the database instance of the affected SID must be stopped.
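As a sketch of the manual procedure described above (assuming a hypothetical SID D01; both operations must be run interactively on an Application Node with SAN connection and operational EMC Solutions Enabler, and the SID's database instance must be stopped beforehand):

an1:~ # ff_san_srdf.pl --op failover --sid D01

After the failed storage system has been repaired and the database instance has been stopped again, the SRDF protection can be reestablished:

an1:~ # ff_san_srdf.pl --op failback --sid D01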

8.5 FlexFrame SAN Configuration

Script: ff_san_ldap_conf.pl

ff_san_ldap_conf.pl is used to administer the FlexFrame SAN configuration of Application Nodes. The arguments given with the command line differ with the operation mode. To add new information to the FlexFrame database, a considerable amount of data must be defined. These data are stored in a configuration file which is specified as a parameter of the add operation.

Synopsis

ff_san_ldap_conf.pl --op add [--conffile <filename>]
                    [--outfile <filename>]
                    [--sid {<sid> | *}] [--pool {<poolname> | *}]
                    [--vlun {<vlunname> | *}]
                    [--storsys {<storsysname> | *}]
                    [--node {<nodename> | *}]
                    [--dryrun] [--verbose]

ff_san_ldap_conf.pl --op del [--outfile <filename>]
                    [--sid {<sid> | *}] [--pool {<poolname> | *}]
                    [--vlun {<vlunname> | *}]
                    [--storsys {<storsysname> | *}]
                    [--node {<nodename> | *}]
                    [--dryrun]

ff_san_ldap_conf.pl --op list
                    [--sid {<sid> | *}] [--pool {<poolname> | *}]
                    [--vlun {<vlunname> | *}]
                    [--storsys {<storsysname> | *}]
                    [--node {<nodename> | *}]

ff_san_ldap_conf.pl --help

Options

--op add
  Adds new SAN information to the FlexFrame database.

--op del
  Deletes SAN information from the FlexFrame database.

--op list
  Displays SAN information from the FlexFrame database.

--conffile <filename>
  Name of the SAN configuration file. The default for conffile is /tftpboot/config/san.cfg.

--outfile <filename>
  Name of the work file for writing data to LDAP. The default for outfile is /tmp/ff_san_ldapdata.

--sid {<sid> | *}
  The data for the given SID or all SIDs (*) of the specified pool(s) are selected.

--pool {<poolname> | *}
  The data for the given pool or all pools (*) are selected.

--vlun {<vlunname> | *}
  The data for the given VLUN or all VLUNs (*) of the specified pool(s) are selected.

--storsys {<storsysname> | *}
  The data for the given storage system or all storage systems (*) of the specified pool(s) are selected.

--node {<nodename> | *}
  The data for the given node or all nodes (*) are selected.

--dryrun
  Performs the actions of the script but does not write the data to LDAP. The result is written to outfile.

--verbose
  All LDAP messages are displayed.

--help
  Displays usage information.
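Usage sketches for the list and del operations (hypothetical pool and SID names; the del example uses --dryrun to preview the change first):

cn1:~ # ff_san_ldap_conf.pl --op list --pool pool1 --sid "*"
cn1:~ # ff_san_ldap_conf.pl --op del --pool pool1 --sid D02 --dryrun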

FlexFrame SAN Configuration File

# definition of a node
NODE <node_name> {
    availablemultipathsw = { DM-MPIO }
    listofhba = <value>
    ...
}
...

# definition of an SID
SID <pool_name> <sid_name> <instance_name> {
    sanremotemirrortype = {SRDF | USER}
    primarydevgroup = <value>
    secondarydevgroup = <value>
    autofailoverallowed = <yes | no>
    VOLUMEGROUP <volume_group_name> {
        usedmultipathsw = { DM-MPIO }
        volumemanagertype = { LVM | USER }
        volumemanageruserscript = <value>
        volumemanageroptions = <value>
        usage = {SAPDATA | SAPLOG}
        virtuallun = <value>
        secondaryvirtuallun = <value>
        MOUNTPATH <file_system_mountpoint_destination> {
            filesystemtype = { ext3 | USER }
            filesystemuserscript = <value>
            filesystemmountoptions = <value>
            filesystemcheckoptions = <value>
            filesystemmountpointsource = <value>
        }
        ...
    }
    ...
}
...

# definition of a VLUN
VLUN <vlun_name> <pool_name> {
    hostlun = <value>
    arraylun = <value>
    arrayid = <value>
    LUNGUID = <value>
}
...

# definition of a SAN storage system
STORSYS <storage_array_id> <pool_name> {
    storagearraytype = {Symmetrix | CLARiiON | NetAppFiler}
    wwpn = <value>
    wwpn = <value>
    ...
}
...

The file /opt/flexframe/etc/ff_san_ldap_conf.template contains a template of the configuration file.

The configuration file consists of several kinds of lines: Comment lines are marked with a # in the first column of the line. Empty lines or lines consisting only of white space are handled like comment lines. All other lines describe the entries of the configuration file.

The configuration file has four entry types: NODE entry, SID entry, VLUN entry and STORSYS entry. Each entry consists of the header line, the opening curly bracket line, the data lines and the closing curly bracket line.

NODE entry

The NODE entry describes the properties of an Application Node and is identified by its host name (as defined in the Management Tool and returned by uname -n). It consists of the following parameters:

availablemultipathsw
  This parameter is required and defines the multipath software which is available on the node. The possible value is:

  Value    Meaning                             Available on
  DM-MPIO  Native multipath software of Linux  Linux

listofhba
  This parameter is required and assigns a logical name to each available HBA. The values of the logical names are not restricted by FlexFrame.

SID entry

The SID entry describes the SAN properties of a SID's database instance and is identified by the name of the FlexFrame pool, the SID and the database instance. It consists of a part describing the SAN remote mirroring method used (currently only SRDF) and one or more VOLUMEGROUP entries.

sanremotemirrortype
  This parameter must be set to the value SRDF if EMC Symmetrix storage with SRDF protection is used for this SID. It is also possible to specify the value USER if another storage system based mirroring method is used. If Host Based Mirroring is used, no entry for sanremotemirrortype must be specified.

The following three parameters are only needed if sanremotemirrortype is SRDF:

primarydevgroup
  The name of the Symmetrix Device Group holding the R1 devices.

secondarydevgroup
  The name of the Symmetrix Device Group holding the R2 devices.

autofailoverallowed
  Specifies whether an automatic failover to the secondary storage system should be done if certain conditions are met. Possible values: yes, no. Default: no.

VOLUMEGROUP entry

A VOLUMEGROUP entry consists of parameters describing the properties of the volume group and one or more MOUNTPATH entries. Each entry is identified by the name of the volume group. The syntax of the volume group name depends on the used volume manager. If no volume manager is used, the value is not restricted by FlexFrame.

usedmultipathsw
  This parameter is required and defines the used multipath software. The possible value is:

  Value    Meaning                             Available on
  DM-MPIO  Native multipath software of Linux  Linux

volumemanagertype
  This parameter is required and defines the used volume manager. The possible values are:

  Value  Meaning                         Available on
  LVM    Native volume manager of Linux  Linux
  USER   User defined volume manager     Linux

  The special value USER means that a non-standard volume manager is used. The functionality of this volume manager is not guaranteed.

volumemanageruserscript
  If the value of the parameter volumemanagertype is USER, this parameter specifies the name of a user script for handling the volume manager's start and stop actions. The file name must be fully qualified. The call by FlexFrame is performed with the following parameters:

  - Action code: volume_on for actions performed during start of a SID, or volume_off for actions performed during stop of a SID
  - SID of the SAP system
  - Service type
  - Service number
  - DN of the LDAP entry which caused the user script to be called

  A minimal skeleton of such a script is sketched below.
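The following sketch shows such a user script; the file name and the placement of the volume manager commands are hypothetical, only the argument order is taken from the list above:

#!/bin/bash
# Hypothetical user script for volumemanagertype = USER.
# Arguments as passed by FlexFrame (see the list above):
ACTION=$1    # volume_on or volume_off
SID=$2       # SID of the SAP system
SRVTYPE=$3   # service type
SRVNUM=$4    # service number
LDAPDN=$5    # DN of the LDAP entry which caused the call

case "$ACTION" in
    volume_on)
        # place the volume manager's import/activate commands here
        ;;
    volume_off)
        # place the volume manager's deactivate/export commands here
        ;;
esac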

volumemanageroptions
  This parameter is optional and contains parameters for the used volume manager. The possible values depend on the used volume manager. For details see the description of the used volume manager. Examples of values are listed in the following table:

  Value        Meaning
  forceimport  Force the import (this value is mandatory)
  waitforce=5  Wait time is five seconds

  The different values are separated by a semicolon.

usage
  This parameter specifies the usage of the volume group. The allowed values are SAPDATA, SAPLOG, or both.

virtuallun
  This parameter specifies the name(s) of the virtual LUN(s) which are contained in the volume group and is only required if dynamic LUN masking is used. Each name specified here refers to an entry in the VLUN section.

secondaryvirtuallun
  This parameter specifies the name(s) of the virtual LUN(s) which are used instead of the ones specified with virtuallun after a switch to the secondary system in conjunction with the remote mirroring method specified with sanremotemirrortype. Each name specified here refers to an entry in the VLUN section.

MOUNTPATH entry

A MOUNTPATH entry consists of parameters describing the properties of the mount path and is identified by the absolute pathname of the destination of the mount path. The syntax of this name must conform to the syntax of the Application Node's operating system. The pathname must start with /oracle/<sid> or /sapdb/<sid>. This pathname will be prefixed with /var/flexframe/san.

filesystemtype
  This parameter is required and defines the used file system. The possible values are:

  Value  Meaning                         Available on
  ext3   Extended file system version 3  Linux
  USER   User defined file system        Linux

  The special value USER means that a non-standard file system is used. The functionality of this file system is not guaranteed.

filesystemuserscript
  If the value of the parameter filesystemtype is USER, this parameter specifies the name of a user script for handling the file system start and stop actions. The file name must be fully qualified. The call by FlexFrame is performed with the following parameters:

  - mount / umount: specifies whether a mount or an umount action is required
  - SID of the SAP system
  - Service type
  - Service number
  - Mount point (fully qualified filename)
  - Volume name
  - Volume group name

filesystemmountoptions
  This parameter is optional and defines options used during the mount of the file system. The possible values depend on the used file system. For details see the description of the file system. Examples of values are listed in the following table:

  Value                     Meaning
  forceumount               Force the umount
  mountopts=largefiles=yes  Options for the mount command

  The different values are separated by a semicolon.

filesystemcheckoptions
  This parameter is optional and defines options used during the check of the file system. The possible values depend on the used file system. For details see the description of the file system. The special value nofsck indicates that no check of the file system is to be performed. Examples of values are listed in the following table:

  Value         Meaning
  nofsck        Skip file system check
  fsckopts=-ya  Options for the fsck command

filesystemmountpointsource
  This parameter is required and specifies the source of the data which is used for this mount path. The source is a relative file name. For example, if a volume manager is used, this is the volume name. The complete file name is constructed by FlexFrame depending on the used software stack.

VLUN entry

The VLUN entry consists of data describing the properties of the virtual LUN and is identified by the name of the virtual LUN. This name is a string starting with an alphabetical character, followed by up to 254 alphabetical characters, numbers, '-', '_' or '.'. It should not start with the prefix SM_ or _SSYS_ (regardless of case). Virtual LUN names must be unique, regardless of case. The specification of VLUN entries is needed if dynamic LUN masking with StorMan is used. For details on StorMan usage refer to the section "Using StorMan to Reconfigure SAN". A VLUN entry consists of the following parameters:

hostlun
  This parameter is required and specifies the ID of the storage LUN as it is visible to the Application Node.

arraylun
  This parameter is required and specifies the ID of the storage LUN as it is visible to the SAN storage subsystem.

arrayid
  This parameter is required and specifies the ID of the SAN storage subsystem. It is the SymmID of a Symmetrix array, the system ID of a NetApp Filer or the serial number of a CLARiiON/FibreCAT CX.

LUNGUID
  This parameter is required and specifies the globally unique ID of the LUN.

STORSYS entry

The STORSYS entry consists of data describing the properties of a SAN storage subsystem and is identified by the ID of the SAN storage subsystem. The specification of a STORSYS entry is needed for each SAN storage subsystem which is referenced by the parameter arrayid of a VLUN entry. Use the same name here as with the arrayid parameter of the corresponding VLUN entries. A STORSYS entry consists of the following parameters:

storagearraytype
  This parameter is required and specifies the type of the storage system. Possible values are Symmetrix, CLARiiON (to be used also for FibreCAT CX) and NetAppFiler.

wwpn
  This parameter is required and specifies a WWPN (World Wide Port Name) of a storage system port that is used for the connection to the FlexFrame Application Nodes. A wwpn parameter line must be specified for each used WWPN.

SAN Support Scripts

Script: ff_san_mount.sh

Mounting of SAN attached file systems

Synopsis

ff_san_mount.sh {pre|post} <service> <SID> {start|stop|prestart|prestop}

Description

This script is used on Application Nodes during start or stop of a database service. During installation of a database this script must be called if the data files are to be placed onto SAN based file systems. It is assumed that the information for all mount points is maintained in the configuration database using ff_san_ldap_conf.pl.

Caution must be applied when using this script directly. The configured file systems and logical volumes must not be used by other hosts. Not taking care of this may cause data corruption and make the data on these file systems unusable. In particular, you must be aware that if the forceimport option has been specified in the configuration database for the affected volume group, the volume group will be made accessible on this host by forcing the import with the mechanisms of the used volume manager software.

Once the database is properly installed, the ff_service.sh script calls ff_san_mount.sh implicitly during the start or stop phase. The prestart and prestop options are used with SAP ACC. However, their functionality is identical to start and stop, respectively.

To mount a specific set of file systems for the database SID use:

ff_san_mount.sh pre sapdb SID start

To unmount the file systems (after all services have stopped) use:

ff_san_mount.sh post sapdb SID stop

This script acts upon information stored in the LDAP database. Refer to ff_san_ldap_conf.pl to configure this information.

Debugging

/tmp/ff_san_mount.sh.debuglog
  This file is located at the Application Node and will hold debugging information. In case of problems provide this file.

Script: ff_san_info.sh

SAN information utility

Synopsis

ff_san_info.sh {-g lunid | -t lunspec | -w guid | -n serno | -e | -i}

Description

This utility can be used to get various SAN information and convert it into other formats.

Options

-g lunid
  Shows the GUID of LUN(s) with LUN-ID.

-t lunspec
  Shows the WWNN of the target by lunspec (format <host_no>:0:<target_no>:<lun_no>).

-w guid
  Shows the WWNN of the target by GUID.

-n serno
  Shows the GUID of a NetApp LUN based on its serial number (see lun show -v <path> on the filer).

-e
  Shows instructions to find the GUID of EMC systems.

-i
  Shows information on FC hosts and targets which are seen from this node.

Usage example on a SLES 9 Application Node:

bx2:~ # ff_san_info.sh -i
HBA    WWNN                     WWPN
1      20:00:00:c0:9f:c6:45:08  21:00:00:c0:9f:c6:45:08
2      20:00:00:c0:9f:c6:45:09  21:00:00:c0:9f:c6:45:09
Target WWNN                     WWPN
1:0:0  50:06:01:60:b0:60:1f:31  50:06:01:60:30:60:1f:31
1:0:1  50:06:01:60:b0:60:1f:31  50:06:01:68:30:60:1f:31
2:0:0  50:06:01:60:b0:60:1f:31  50:06:01:61:30:60:1f:31
2:0:1  50:06:01:60:b0:60:1f:31  50:06:01:69:30:60:1f:31

bx2:~ # ff_san_info.sh -g 1
LUN(s): 1:0:1:1 2:0:1:1 1:0:0:1 2:0:0:1
GUID: e a1841d38db11

bx2:~ # ff_san_info.sh -t 1:0:1:1
LUN 1:0:1:1: Target WWNN: f31
Corresponding line from 'lsscsi':
[1:0:1:1] disk DGC RAID /dev/sdy

bx2:~ # ff_san_info.sh -w e a1841d38db11
GUID e a1841d38db11: LUN 1:0:1:1: Target WWNN: f31
Corresponding line from 'lsscsi':
[1:0:1:1] disk DGC RAID /dev/sdy

Script: ff_qlascan.sh

Scanning for new LUNs on Linux using a QLA 2xxx HBA

Synopsis

ff_qlascan.sh [-d] -c <a-b> -l <x-y>

Description

This script can be used to query for new LUNs on a Linux Application Node if the IO subsystem is connected using QLA host bus adapters (HBA).

Options

-d
  Turns on debugging. Currently not supported; use bash -x ff_qlascan.sh ... instead.

-c <a-b>
  Scan controllers from a to b (e.g. "1-2").

-l <x-y>
  Scan for LUN-IDs from x to y (e.g. "10-20").
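A usage sketch (the controller and LUN-ID ranges are hypothetical; adapt them to your configuration). This scans controllers 1 and 2 for LUN-IDs 0 to 15:

bx2:~ # ff_qlascan.sh -c 1-2 -l 0-15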

Script: ff_san_srdf.pl

FlexFrame(TM) SAN SRDF Support

Synopsis

ff_san_srdf.pl --op list-all [--pool <pool name>]
ff_san_srdf.pl --op list-by-symm [--pool <pool name>]
ff_san_srdf.pl --op list --sid <sid name> [--pool <pool name>]
ff_san_srdf.pl --op check-conf --sid <sid name> [--pool <pool name>] [--outfile <output file>]
ff_san_srdf.pl --op check --sid <sid name>
ff_san_srdf.pl --op check-and-activate --sid <sid name> [--service <service spec>]
ff_san_srdf.pl --op check-failover-done --sid <sid name>
ff_san_srdf.pl --op failover --sid <sid name>
ff_san_srdf.pl --op failback --sid <sid name>

Description

ff_san_srdf.pl is used to support SRDF usage for SAN based SAP systems in FlexFrame. It complements the functionality provided for SAP systems (SIDs) with a database on SAN storage for the case of SRDF protected Symmetrix systems. It makes it possible to get an overview of SRDF protected SIDs or detailed information about a single SID, to check a SID's SRDF-specific FlexFrame configuration, to check its state, and to initiate a failover or a failback.

As a prerequisite, some SRDF specific SAN configuration data must be recorded in the LDAP database. This is done together with the other SAN configuration data using the script ff_san_ldap_conf.pl. To provide full functionality, the script ff_san_srdf.pl must be called on an Application Node with SAN connection and operational EMC Solutions Enabler (SYMCLI) for most operation modes (exceptions are list-all and list-by-symm). The EMC Solutions Enabler must be installed in the Application Node image using the maintenance cycle as described in section 8.4.2 "Configuring Application Nodes for SAN SRDF Usage".

To get an overview of all SAN based SAP systems with SRDF protection, use operation mode list-all. If a pool name is given with operand --pool, the output is restricted to this pool. If called on an Application Node, the pool of this node is always used.

With operation mode list-by-symm, the SAN based SAP systems with SRDF protection are grouped by the needed Symmetrix system. Only the actually needed system is taken into account; this means the primary system in normal mode and the secondary system after a failover. If a pool name is given with operand --pool, the output is restricted to this pool. If called on an Application Node, the pool of this node is always used.

With operation mode list, more detailed information is listed for a single SID specified with operand --sid. If called on a Control Node, a pool must also be specified with operand --pool, and the output will contain only data from the LDAP database. If called on an Application Node with SAN connection and operational EMC Solutions Enabler, the storage system view for the relevant objects is also given.

With operation mode check-conf, the SRDF-specific FlexFrame configuration for a SID given with operand --sid can be checked. For a complete check, it must be called on an Application Node with SAN connection and operational EMC Solutions Enabler. Otherwise, only LDAP settings are checked. As a part of the check on an Application Node it is also verified that the device groups specified in LDAP are available and contain the devices according to the LDAP specification. If operand --outfile is given, a list of commands is written to this file that can be used as a template for the creation of the needed device groups.

With operation mode check, the SRDF-specific state for a SID given with operand --sid can be checked. It must be called on an Application Node with SAN connection and operational EMC Solutions Enabler.

Operation mode check-and-activate is implicitly called by ff_san_mount.sh during start and restart of the database instance of the SAP system with the SID given with operand --sid. It checks whether a failover is needed, initiates the failover if it is needed, the device state allows a secure failover and the LDAP setting autofailoverallowed for this SID is yes, and activates the correct set of LUNs (primary or secondary). If a failover is needed but the LDAP setting does not allow it, this is signalled to the FlexFrame Autonomous Agents by means of the monitor-alert interface, a message that describes the situation is sent, and a specific exit code is produced. Calling this operation directly is not supported.

Operation mode check-failover-done is used by ff_san_mount.sh in conjunction with operation mode check-and-activate if a failover is needed for a SID but the LDAP setting for this SID does not allow it. It is also intended for internal use only.

With operation mode failover, a failover for the Symmetrix devices of the SID given with operand --sid is done. It must be called interactively on an Application Node with SAN connection and operational EMC Solutions Enabler. It is the user's responsibility not to call this operation for a SID while it is active on an Application Node. The user will always be asked for confirmation before RDF actions are invoked on the Symmetrix system.

With operation mode failback, a failback for the Symmetrix devices of the SID given with operand --sid is done. It must be called interactively on an Application Node with SAN connection and operational EMC Solutions Enabler. It is the user's responsibility not to call this operation for a SID while it is active on an Application Node. The user will always be asked for confirmation before RDF actions are invoked on the Symmetrix system.

Debugging

/tmp/ff_san_srdf.pl.debuglog
  This file is located at the Application Node or Control Node where the script has been called and contains debugging information. In case of problems provide this file.
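As a further usage sketch, the list operation shows the SRDF details of a single SID (hypothetical SID and pool names; when called on a Control Node the pool must be given, when called on a suitably equipped Application Node the storage system view is included):

cn1:~ # ff_san_srdf.pl --op list --sid D01 --pool pool1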

Script: ff_san_luns.pl

FlexFrame(TM) SAN LUN helper functions

Synopsis

ff_san_luns.pl --op list-all [--pool <pool name>]
ff_san_luns.pl --op list --sid <sid name> [--pool <pool name>]
ff_san_luns.pl --op list-att --sid <sid name> [--pool <pool name>] [--symcli]
ff_san_luns.pl --op list-att-node --node <node name>
ff_san_luns.pl --op check-conf [--pool <pool name>]
ff_san_luns.pl --op attach --sid <sid name> [--primary | --secondary]
ff_san_luns.pl --op detach --sid <sid name> [--primary | --secondary]
ff_san_luns.pl --op switch-att-to --sid <sid name> {--primary | --secondary}
ff_san_luns.pl --op detach-all --node <node name>
ff_san_luns.pl --op switch-to --sid <sid name> {--primary | --secondary}

Description

ff_san_luns.pl is used for LUN-related actions for SAN based SAP systems in FlexFrame. It is mainly used internally by the scripts ff_san_mount.sh and ff_san_srdf.pl on Application Nodes during start, stop and restart of a database service to attach or detach the correct set of LUNs for a SID (primary or secondary LUNs, depending on the currently used ones in conjunction with the usage of storage system based remote mirroring).

Besides this, the operation modes list-all and list give the possibility to get an overview of all SAN based SAP systems (SIDs) with LUN usage, or to get detailed information about the LUNs defined for a specific SID, respectively.

Example

cn1:~ # ff_san_luns.pl --op list-all
SIDs with SAN usage sorted by pool and SID name
Pool pool1
  SID D01
    LUNs: 2 on 1 storage system(s)
    Secondary LUNs: 2 on 1 storage system(s)
  SID D02
    LUNs: 2 on 1 storage system(s)
    Secondary LUNs: 2 on 1 storage system(s)

cn1:~ # ff_san_luns.pl --op list --sid D01 --pool pool1
SAN LUNs of SID D01 from pool pool1
LDAP view:
Volume Manager Type: LVM  Groups: D01data D01logs
Multipath Software Type: DM-MPIO
File System Type: ext3  Number of FS: 12
Dynamic LUN masking with StorMan: yes
LUNs: 2 on 1 storage system(s)
LUN details - grouped by Storage System
VLUN         hostlun  arraylun  LUNGUID
--- Storage System
D01_data_R1  1        00CFB
D01_logs_R1  5        00D03
Secondary LUNs: 2 on 1 storage system(s)
Secondary LUNs are not in use
LUN details - grouped by Storage System
VLUN         hostlun  arraylun  LUNGUID
--- Storage System
D01_data_R2  1        0221B
D01_logs_R2  5

Furthermore, it is possible to verify that the LUN assignments for SIDs in LDAP are consistent, or to switch the LUN usage setting in the LDAP database. The arguments given with the command line differ with the operation mode. The functionality also depends on whether the call is done on a FlexFrame Control Node or an Application Node.
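A sketch of the consistency check described below, run on a Control Node and restricted to a hypothetical pool pool1:

cn1:~ # ff_san_luns.pl --op check-conf --pool pool1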

To get an overview of all SAN based SAP systems and the number of LUNs assigned to each SID, use operation mode list-all. If a pool name is given with operand --pool, the output is restricted to this pool. If called on an Application Node, the pool of this node is always used.

With operation mode list, more detailed information is listed for a single SID specified with operand --sid. If called on a Control Node, a pool must also be specified with operand --pool, and the output will contain only data from the LDAP database. If called on an Application Node with SAN connection, the view of this Application Node is also given. With the additional option --skip-ldap, the output of the LDAP information can be suppressed.

Operation mode list-att is usually to be called on a Control Node and displays, for each LUN of the SID specified with operand --sid from the pool given with operand --pool, to which node it is attached. On an Application Node it can be called only with option --symcli, and it then expects an operational EMC Solutions Enabler on this node. In this case, only a summary of the LUN connections for this Application Node and the SID given with operand --sid is shown.

Operation mode list-att-node must be called on a Control Node and displays, for an Application Node specified with operand --node, the SIDs with LUNs that are attached to this Application Node.

With operation mode check-conf, the LUN configuration in the LDAP database can be checked. If a pool name is given with operand --pool, the check is restricted to this pool. If called on an Application Node, the pool of this node is always used. If called on a Control Node, it is also checked that all LUNs of SIDs configured for dynamic LUN masking with StorMan are also known by StorMan.

Operation modes attach and detach are usually implicitly called during start and stop of an SAP database instance configured for SAN usage with dynamic LUN masking, on the Application Node where the database instance is started or stopped. Caution must be applied when calling these operations directly. This is only allowed during setup of the SAN configuration for a SID, and the caller is responsible that the LUNs are not in use by other hosts when attaching, or that the usage of the LUNs has been ended on the host where the detach is done.

Operation mode attach must be called on an Application Node and attaches the LUNs of the SID given with operand --sid to the Application Node on which it is called; it also makes sure that the operating system's device structures are updated with the newly attached LUNs. For a SID for which a storage based remote mirroring method is configured, it can be specified which set of LUNs (primary or secondary) must be attached, by using one of the options --primary or --secondary. If not specified, the correct set of LUNs is selected according to the actual LDAP setting.

Operation mode detach must be called on an Application Node and detaches the LUNs of the SID given with operand --sid from the Application Node on which it is called; it also makes sure that the operating system's device structures are updated accordingly. For a SID for which a storage based remote mirroring method is configured, it can be specified which set of LUNs (primary or secondary) must be detached, by using one of the options --primary or --secondary. If not specified, the correct set of LUNs is selected according to the actual LDAP setting.

Operation mode switch-att-to is a combination of the two operations described above and is used internally for SIDs with SRDF usage. Direct usage is not supported.

Operation mode detach-all must be called on a Control Node and makes it possible to detach the LUNs of all SIDs configured for dynamic LUN masking with StorMan from the Application Node specified with operand --node. This can be done only when the concerned Application Node is not operational.

Operation mode switch-to sets the information in LDAP that determines which set of LUNs (primary or secondary) is to be used for the SID given with operand --sid. If called on a Control Node, the pool of this SID must also be specified with operand --pool. To select the LUN set, one of the options --primary or --secondary must be given. This operation is only relevant for a SID for which a storage based remote mirroring method is configured. It must be used only when an action that changed the LUN set to be used has been done directly on the storage system (as an example, if an SRDF failback operation has been done without using the ff_san_srdf.pl script, the LUN set setting must be switched to primary).

Debugging

/tmp/ff_san_luns.pl.debuglog
  This file is located at the Application Node or Control Node where the script has been called and contains debugging information. In case of problems provide this file.
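A sketch for the case mentioned in the description of switch-to above: if an SRDF failback has been performed directly on the storage system without using ff_san_srdf.pl, the LUN set indicator in LDAP must be reset to the primary LUNs (hypothetical SID and pool names, called on a Control Node):

cn1:~ # ff_san_luns.pl --op switch-to --sid D01 --pool pool1 --primary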

9 Switch Administration

A switch group consists of at least two switches of the Cisco Catalyst 3750g or 3750e switch family building a switch stack, or of exactly two Nexus switches building a vPC domain.

The switches building a switch stack are interconnected at the rear with Cisco StackWise technology and work like one switch. For a description of how to interconnect the switches, please refer to the Cisco Catalyst 3750 Installation Manual or the Cisco StackWise Technology White Paper.

Adding or removing a switch to or from a switch group means adding or removing a switch to/from a stack. To ensure safe operation we recommend doing this during a downtime to minimize the influence on running systems. This requires shutting down all systems connected to this switch group. In case a NAS system is connected, all systems that have mounted file systems from the NAS system have to be shut down as well.

Ensure all switches in the stack have a proper IOS version. If necessary, perform an IOS upgrade or downgrade. In a mixed switch group with members of both types 3750g and 3750e an IOS upgrade can only be done step by step, one switch after the other, as both types need different IOS versions. Please refer to the original Cisco documents for the Catalyst 3750 for that task.

9.1 Adding a Switch to a Switch Group

To add a new switch to an existing switch group, the new switch has to be inserted into the switch stack comprising the switch group. See the notes above for recommendations. The following steps have to be performed:

1. Mount the new switch next to the existing switches.

2. Run ff_save_switch_config.pl to save the configurations.

3. Write down the switch ids of each stack member of the switch stack, as they may change when inserting the new switch into the stack.

4. Check the IOS versions of the switches in the existing switch stack and the new switch. Switches of the same model (G or E model) must have the same version. If the versions are different, upgrade or downgrade the IOS of the new switch.

5. Power off all switches of the switch stack, connect the new switch to the stack using the provided stacking cable and the stacking ports at the rear side (see the Cisco installation manual for details), and power on all switches of the stack except the new one.

6. Compare the actual stack member ids with the ids you noted. In case of differences use the following IOS command for renumbering:
   switch <switch_number> renumber <new_switch_number>

7. Power on the new switch, once again compare the stack member ids, and set all interfaces of the new switch to shutdown.

8. Use ff_swgroup_adm.pl --op add-sw to add the switch to the FlexFrame configuration as described below.

The switch group is then ready for further configuration.

Synopsis

ff_swgroup_adm.pl --op add-sw --group <switch_group_id>
                  --type <switch_type> [--dryrun]

Options

--op add-sw
  Adds a new member to the switch group and displays some information about processing steps.

--group <switch_group_id>
  Defines the switch group to be used.

--type <switch_type>
  Defines the type of the new switch to be added to the switch group. Call ff_swgroup_adm.pl without any parameter to get a list of supported switch types. The maximum number of switches per switch group is 9. For more than 4 switches of the 3750e model the StackWise cabling may be a bottleneck.

--dryrun
  For test purposes you can perform the function without changing the LDAP database.

Example

cn1:/opt/flexframe/bin # ff_swgroup_adm.pl --op add-sw --group 1 --type cat3750g-24ts
If program is aborted by Ctrl-C or a failure remove left overs by
calling: ff_swgroup_adm.pl --op rem-sw --group 1 --switch 3
Switch was added to LDAP data.
Keep in mind: INSERTING SWITCH TO STACK NEEDS A DOWN TIME!
To add the switch to switch stack (switch group 1) write down the
current switch ids as they may change inserting the new switch to
stack. To connect the switch to the stack use the provided stacking
cable and stacking ports at rear side. See Cisco installation manual
for details. If switch ids get scrambled use the IOS command
"switch <current_no> renumber <new_no>" to put them in same order
as before.
In short the to do list:
-> write down switch ids of each switch of group
-> power down entire switch group
-> insert switch into stack
-> power on entire switch group
-> look at switch ids and compare with your noticed
-> in case of differences use IOS command to renumber switches
Switch group is ready for use
See file /tmp/swgrp-add-1-3/next_steps for same instructions as above.

9.2 Removing a Switch from a Switch Group

A switch may be removed if it is unused and has the highest ID within the stack. Remove it first from the LDAP database with the command ff_swgroup_adm.pl and then from the stack. See the notes above for recommendations. All ports of the switch must be unused.

Synopsis

ff_swgroup_adm.pl --op rem-sw --group <switch_group_id>
                  --switch <switch_id> [--dryrun]

Options

--op rem-sw
  Removes the last member from a switch group and displays some information about processing steps.

--group <switch_group_id>
  Defines the switch group to be used.

--switch <switch_id>
  Defines the stack ID of the switch to be removed from a switch group.

--dryrun
  For test purposes you can perform the function without changing the LDAP database.

Example

cn1:/opt/flexframe/bin # ff_swgroup_adm.pl --op rem-sw --group 1 --switch 3
Switch was successfully removed from LDAP data.
Keep in mind: REMOVING SWITCH FROM STACK NEEDS A DOWN TIME!
In short the to do list:
-> power down entire switch group
-> remove switch from stack
-> power on entire switch group
Switch group is ready for use
See file /tmp/swgrp-rem-1-3/next_steps for same instructions as above.

9.3 Listing a Switch Group Configuration

Invoking the command ff_swgroup_adm.pl with the list operation mode displays the configuration of a switch group, like used switch types, port channels, port usage statistics and used switch ports.

Synopsis

ff_swgroup_adm.pl --op list --group <switch_group_id>

Options

--op list
  Displays the switch group configuration.

--group <switch_group_id>
  Defines the switch group to be used.

Example

cn1:/opt/flexframe/bin # ff_swgroup_adm.pl --op list --group 1
Switch Group 1
Name/IP: switch-i /
Login: root Password: passwort
SNMP Community: public;ro
Switch Types: (switch id, switch type)
  1 cat3750g-24t
  2 cat3750g-24t
  3 cat3750g-24t
Port Channels: (channel id, switch ports, connected device)
  2 1/5,2/5   swb-1-1/11,swb-1-1/12
  3 1/6,3/1   swb-1-2/11,swb-1-2/12
  4 1/15,3/7  intfiler
  5 1/19,2/11 swb-2-1/39,swb-2-1/40
  6 2/22,3/22 swb-2-2/39,swb-2-2/40
Switch port usage: (switch id, used, free tx, free fx, free 10GB ports)
  1 11 used 13 free tx 0 free fx 0 free 10Gb
  2 10 used 14 free tx 0 free fx 0 free 10Gb
  3  8 used 16 free tx 0 free fx 0 free 10Gb

Switch port list: (switch id, port id, connected device, vlans)
1  1  BX cabinet 2     u
      unused
      rx               t10,t12,u
      rx               t10,t12,u
      swb-1-1/11       t13,t10,t12,t11,t1
1  6  swb-1-2/12       t13,t10,t12,t11,t
      unused
      unused
      unused
      unused
      cn1              u13,t11,t12,t
      cn2              u13,t11,t12,t
      unused
      unused
      intfiler         t11,t
      extern. Connect  t13,t10,t11,t
      unused
      unused
      swb-2-1/39       t13,t10,t12,t11,t
      unused
      unused
      unused
      Corporate LAN    u
      BX cabinet 2     u
      unused
      rx               t10,t12,u
      rx               t10,t12,u
      swb-1-1/12       t13,t10,t12,t11,t
      unused
      unused
      unused
      unused
      unused
      swb-2-1/40       t13,t10,t12,t11,t
      cn2 mgmt         u
      unused
      unused
      cn1 mgmt         u
      unused
      unused
      unused
      unused
      unused
2 21  BX cabinet 1     u
      swb-2-2/39       t13,t10,t12,t11,t
      Corporate LAN    u
      swb-1-2/11       t13,t10,t12,t11,t
      unused
      unused
      unused
      cn1              u13,t11,t12,t
      cn2              u13,t11,t12,t
      intfiler         t11,t
      unused
      unused
      unused
      unused
      unused
      unused
      unused
      unused
      unused
      unused
      unused
      BX cabinet 1     u
      rx mgmt          u
      rx mgmt          u
      swb-2-2/40       t13,t10,t12,t11,t1

9.4 Changing the Password of a Switch Group

To change the access password of a switch group, it has to be changed at the switch group as well as in the LDAP database. The command ff_swgroup_adm.pl changes both.

Synopsis

ff_swgroup_adm.pl --op pass --group <switch_group_id>
                  --passwd <password> [--dryrun]

Options

--op pass
  Changes the switch group access password.

--group <switch_group_id>
  Defines the switch group to be used.

--passwd <password>
  Defines the new password as clear text.

--dryrun
  For test purposes you can perform the function without changing the LDAP database.

Example

cn1:/opt/flexframe/bin # ff_swgroup_adm.pl --op pass --group 1 --passwd berta
update switch 1/1 configuration
Notice: Update will take about 1 minute.
...+
Password changed from "anton" to "berta".
See file /tmp/swgrp-pass-1/info for same information as above.

9.5 Changing the Host Name of a Switch Group

To change the host name of a switch group, it has to be changed at the switch group as well as in the LDAP database and the host files on both Control Nodes. The command ff_swgroup_adm.pl changes all of them.

Synopsis

ff_swgroup_adm.pl --op name --group <switch_group_id>
                  --name <name> [--dryrun]

Options

--op name
  Changes the switch group host name.

--group <switch_group_id>
  Defines the switch group to be used.

--name <name>
  Defines the new host name to be used.

--dryrun
  For test purposes you can perform the function without changing the LDAP database.

Example

cn1:/opt/flexframe/bin # ff_swgroup_adm.pl --op name --group 1 --name swg1
update switch 1/1 configuration
Notice: Update will take about 1 minute.
...+
Switch name changed from "swg-1" to "swg1".
See file /tmp/swgrp-name-1/info for same information as above.

9.6 Displaying/Changing Common Network Configuration Parameters

Some parameters of the network configuration influence the switch group configuration and were defined with the Management Tool at initial installation. To display or change these parameters use the ff_swgroup_adm.pl command.

Synopsis

ff_swgroup_adm.pl --op parameter
                  [--parameter <parameter_name> --value <parameter_value>]
                  [--dryrun]

Options

--op parameter
  Displays (without the following options) or changes (in conjunction with the following options) network configuration parameters.

--parameter <parameter_name>
  Defines the name of the parameter to be changed. Known parameters are:

  clanportpervlan
    The parameter depends on the way the Client LANs are connected to the corporate LAN (with the sapgui users). There are two modes:
    no   Use two dedicated switch ports (for redundancy) and use them for all pool Client LANs (as tagged VLANs on these two ports).
    yes  Use two dedicated switch ports (for redundancy) for each pool Client LAN (as untagged VLAN on both ports).

  usetxtoclan
    If the switch group has fiber optic ports (SFP ports), these may be preferred as ports to the corporate LAN transferring the pool Client LAN data. To be able to customize the port type, the parameter has two values:
    no   The port type will be fiber optic. Be sure to have matching modules for the SFP ports.
    yes  The port type will be Cat5 twisted pair, commonly abbreviated as TX.

  usetxuplink
    If the switch group has fiber optic ports (SFP ports), these may be preferred as ports to connect another switch group directly, or to use them as uplink ports to the FlexFrame integrated LAN switch. The parameter has two values:
    no   The port type for uplink will be fiber optic. Be sure to have matching modules for the SFP ports.
    yes  The port type for uplink will be Cat5 twisted pair, commonly abbreviated as TX.

  uplinkportcnt
    Defines the count of ports aggregated to the uplink. The uplink channel consists of at least two ports and may have a maximum of eight ports. The parameter value is the count of ports.

--value <parameter_value>
  Defines the value of the parameter to be changed. For uplinkportcnt use a range between 2 and 8. For all other parameters use yes or 1 and no or 0 as value.

--dryrun
  For test purposes you can perform the function without changing the LDAP database.

Examples

cn1:~ # ff_swgroup_adm.pl --op parameter
Common Network Configuration Parameters
Parameter Name                              Parameter Value
Client LAN port per VLAN                    yes
spread Client LAN ports over switch groups  no
use TX ports to Client LAN                  yes
use TX ports as uplink                      no
uplink port count                           2
timezone                                    Europe/Berlin
POSIX timezone                              CET

cn1:~ # ff_swgroup_adm.pl --op parameter --parameter usetxtoclan --value yes
Parameter successfully changed at LDAP.

9.7 Adding a Switch Group

To add an entire new switch group, use the ff_swgroup_adm.pl command. The command adds data to the LDAP database and creates an initial configuration file. The configuration file has to be uploaded manually to the switch. The instructions to do this are displayed by the command, with values appropriate to the current configuration.

The uplink channels of a switch group are normally connected to a core switch. On the core switch you must have free ports of the same type as the ports of the switch group. These ports must be configured manually. In the special case that you have only two switch groups, you can connect the uplink ports of each switch group directly without a core switch.

Synopsis

ff_swgroup_adm.pl --op add --group <switch_group_id>
                  --type <list_of_switch_types>
                  --name <name_of_switch_group>
                  --passwd <password>
                  [--login <login_name>]
                  [--host <ip_host_part>[,<ip_host_part>]]
                  [--snmp <community_name>]
                  [--syslog <syslog_server_ips>;<syslog_facility>]
                  [--ntp <server_ips>]
                  [--10gbituplink]
                  [--uplinkchannels <channel_count>]
                  [--uplinkportcnt <port_count_per_channel>]
                  [--uplinkportmedia {tx|fx}]
                  [--mgmtswgroup <switch_group_id>]
                  [--vpcdomain <vpc_domain_id>]
                  [--dryrun]

Options

--op add
  Adds a switch group and displays some information about processing steps.

--group <switch_group_id>
  Defines the switch group to operate on. Within this operation mode it is used to define the id of the new switch group.

--type <list_of_switch_types>
  Defines the switch types of the switches the switch group will consist of. The types are comma separated without any white space. The types must belong to the same switch family as described above. Call ff_swgroup_adm.pl --help to get a list of supported switch types. In case of cat3750 types the first switch is intended to be the first member of a 3750 switch stack, and so on. The maximum number of switches per switch stack is 9. For more than 4 switches of the 3750e model the StackWise cabling may be a bottleneck. In case of nexus types exactly two types have to be specified. It is recommended to use switches with the same number of ports, because ports are always needed in the same quantity from both switches.

--name <name_of_switch_group>
    Name string to be set as the switch group node name.

--passwd <password>
    Clear text password to be set at the switch group.

--login <login_name>
    Login name to be set at the switch group. Defaults to flexframe.

--host <ip_host_part>[,<ip_host_part>]
    Host part to be used to build IP addresses for the Control LAN network. If this option is omitted, the script uses a free host number to calculate the IP address. If this option is used, the second definition is necessary if switches of type NEXUS are involved.

--snmp <community_name>
    Defines the community name. Defaults to public.

--syslog <syslog_server_ips>;<syslog_facility>
    Names the IP addresses of the syslog server(s) and the facility to log messages. Join IP addresses by comma. Defaults to <cn_control_lan_ips>;local0.

--ntp <server_ips>
    Names the IP addresses of the NTP server(s). Join IP addresses by comma. Defaults to <cn_control_lan_ips>.

--10gbituplink
    Use 10 Gbit ports to create an uplink channel (default: 0).

--uplinkchannels <channel_count>
    Count of uplink channels for InterSwitchLinks (default: 1).

--uplinkportcnt <port_count_per_channel>
    Count of ports to be used for an uplink channel (default: 2).

--uplinkportmedia {tx|fx}
    Media type of ports to be used for an uplink channel (default: depends on FlexFrame global network settings).

--mgmtswgroup <switch_group_id>
    Defines the switch group the switch management interface should be connected to. This information is necessary if switches of type NEXUS are involved. Call ff_swgroup_adm.pl --help to get a list of currently configured switch group IDs.

--vpcdomain <vpc_domain_id>
    Defines a unique vPC domain within the network. This information is necessary if switches of type NEXUS are involved.

--dryrun
    For test purposes you can perform the function without changing the LDAP database and updating the switch ports.

Example

cn1:~ # ff_swgroup_adm.pl --op add --group 6
    --type cat3750g-24ts,cat3750e-24td,cat3750e-24td,cat3750e-24td,cat3750e-24td
    --name reinholdswgt1 --passwd passwort --uplinkportcnt 4
If program is aborted by Ctrl-C or a failure remove leftovers by calling:
    ff_swgroup_adm.pl --op rem --group 6
update LDAP ...
New SwitchGroup 6 was added to LDAP.
The switch is configured with 1 channel for uplink to a core switch or
another switch group (in case of only two switch groups). See below the
configured ports for uplink and connect them to the core switch or the
other switch group.
SwitchID / Port
1 / 1   Uplink port of EtherChannel 1 to core switch
2 / 1   Uplink port of EtherChannel 1 to core switch
3 / 1   Uplink port of EtherChannel 1 to core switch
4 / 1   Uplink port of EtherChannel 1 to core switch
Upload of initial switch configuration has to be done manually.
See installation guide for details. For a quick instruction see below.
The file to be uploaded is named: /tftpboot/reinholdswgt1.config
Quick install instruction for switch group 6:
Plug only one port to the core switch to prevent disabling of the
port channel at the core switches.
Connect a RS232 cable between a serial port of Control Node 1 and the
console of the switch. Use the console of the first switch in the stack
(stack master). At Control Node 1 use "minicom" to connect to the serial
port. See below a session snippet as a sample of how to upload the
configuration:

...
--- System Configuration Dialog ---
Would you like to enter the initial configuration dialog? [yes/no]: no
Press RETURN to get started!
Switch> enable
Switch # configure terminal
Switch (config)# vtp mode transparent
Switch (config)# interface vlan 1013
Switch (config-if)# ip address
Switch (config-if)# no shutdown
Switch (config-if)# end
Switch # copy tftp:// /reinholdswgt1.config startup-config
Accessing tftp:// /reinholdswgt1.config...
Loading reinholdswgt1.config from (via Vlan1013): !
[OK bytes]
[OK]
3630 bytes copied in secs (9629 bytes/sec)
Switch # reload

Plug all other ports to the core switches. The switch group should then be fully operational. Unless any errors are reported, follow the instructions above to take all precautions needed to integrate the switch group into FlexFrame. Look at "/opt/flexframe/network/wiring_of_swgroup6 reinholdswgt1.txt" to get a copy of this message.

9.8 Adding an Expansion Module

A switch of type NEXUS may have slots where you can insert expansion modules to get more available ports. Use ff_swgroup_adm.pl --op add-module to add the expansion module's Ethernet ports to the FlexFrame configuration as described below.

Synopsis

ff_swgroup_adm.pl --op add-module --group <switch_group_id>
    --switch <switch_id> --slot <slot_id> --module <module_type>

Options

--op add-module
    Adds an expansion module to a switch.

--group <switch_group_id>
    Defines the switch group to operate on.

--switch <switch_id>
    Defines the switch to operate on.

--slot <slot_id>
    Defines the slot to be used.

--module <module_type>
    Defines the type of the expansion module to be added.
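For illustration only, a call might look like the following sketch; the group, switch and slot IDs are assumptions, and <module_type> is a placeholder for a module type valid for your NEXUS model:

cn1:~ # ff_swgroup_adm.pl --op add-module --group 2 --switch 1 --slot 1 --module <module_type>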

9.9 Removing an Expansion Module

Before you physically remove an expansion module from a switch, use ff_swgroup_adm.pl --op rem-module to remove the expansion module's Ethernet ports from the FlexFrame configuration as described below. All ports of the expansion module must be unused.

Synopsis

ff_swgroup_adm.pl --op rem-module --group <switch_group_id>
    --switch <switch_id> --slot <slot_id>

Options

--op rem-module
    Removes an expansion module from a switch.

--group <switch_group_id>
    Defines the switch group to operate on.

--switch <switch_id>
    Defines the switch to operate on.

--slot <slot_id>
    Defines the slot to operate on.
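Again as an illustrative sketch with assumed IDs (all ports of the module must be unused before the call):

cn1:~ # ff_swgroup_adm.pl --op rem-module --group 2 --switch 1 --slot 1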

9.10 Removing a Switch Group

To remove an entire switch group, use the ff_swgroup_adm.pl command. The command removes the data from the LDAP database. As a precondition, the switch group may no longer be in use by any FlexFrame component. Only the uplink ports may still be configured; all other ports of the switch group must be unused.

Synopsis

ff_swgroup_adm.pl --op rem --group <switch_group_id> [--dryrun]

Options

--op rem
    Removes the named switch group.

--group <switch_group_id>
    Defines the switch group to operate on. Within this operation mode it is used to define the ID of the switch group to be removed.

--dryrun
    For test purposes you can perform the function without changing the LDAP database.

Example

cn1:~ # ff_swgroup_adm.pl --op rem --group 2
update LDAP ...
Unless any errors are reported, disconnect uplink ports 1/49, 2/49 and
the switch group is removed from the FlexFrame environment.

9.11 Adding an Uplink to Switch Group

To create an additional uplink at a switch group, use operation mode add-uplink of ff_swgroup_adm.pl. It will create a link aggregate at the switch group. Use the command line arguments to specify options like port count, 1 or 10 Gbit ports, and port media.

Synopsis

ff_swgroup_adm.pl --op add-uplink --group <switch_group_id>
    [--10gbituplink] [--uplinkportcnt <port_count_per_channel>]
    [--uplinkportmedia {tx|fx}] [--dryrun]

Options

--op add-uplink
    Creates a new uplink link aggregate.

--group <switch_group_id>
    Defines the switch group to change.

--10gbituplink
    Use 10 Gbit ports to create an uplink channel (default: 0).

--uplinkportcnt <port_count_per_channel>
    Count of ports to be used for an uplink channel (default: 2). The maximum number of ports per channel is 8.

--uplinkportmedia {tx|fx}
    Media type of ports to be used for an uplink channel (default: depends on network parameters).

--dryrun
    For test purposes you can perform the function without changing the LDAP database.

Example

cn1:~ # ff_swgroup_adm.pl --op add-uplink --group 2 --uplinkportcnt 4 --uplinkportmedia fx
If program is aborted by Ctrl-C or a failure remove leftovers by calling:
    ff_swgroup_adm.pl --op rem-uplink --group 2 --channel 2
update LDAP ...
update switch 2/1 configuration
Notice: Update will take about 1 minute.
...
New uplink with channel id 2 was created for switch group 2. It was
added to the switch group and LDAP. See below the configured uplink
ports and connect them to the peer switch.
SwitchID / Port

1 / 51   Uplink port of EtherChannel 2 to peer switch
1 / 52   Uplink port of EtherChannel 2 to peer switch
2 / 51   Uplink port of EtherChannel 2 to peer switch
2 / 52   Uplink port of EtherChannel 2 to peer switch
Unless any errors are reported, cable the switch ports to use the uplink
channel. Look at "/opt/flexframe/network/wiring_of_swgroup2 swg2.txt" to
get a copy of this message.

9.12 Extend an Uplink of Switch Group

To extend an existing uplink at a switch group, use operation mode ext-uplink of ff_swgroup_adm.pl. It will add new ports to the given link aggregate at the switch group until the given port count is reached.

Synopsis

ff_swgroup_adm.pl --op ext-uplink --group <switch_group_id>
    --channel <uplink_channel_id>
    [--uplinkportcnt <port_count_per_channel>] [--dryrun]

Options

--op ext-uplink
    Expands an existing uplink link aggregate.

--group <switch_group_id>
    Defines the switch group to change.

--channel <uplink_channel_id>
    Defines the uplink channel of the switch group to be changed.

--uplinkportcnt <port_count_per_channel>
    Count of ports to be used for an uplink channel (default: 2). The maximum number of ports per channel is 8.

--dryrun
    For test purposes you can perform the function without changing the LDAP database and updating the switch ports.

Example

cn1:~ # ff_swgroup_adm.pl --op ext-uplink --group 2 --channel 2 --uplinkportcnt 8
update LDAP ...
update switch 2/1 configuration
Notice: Update will take about 1 minute.
...
Uplink with channel id 2 of switch group 2 was extended up to 8 ports.
Switch group configuration and LDAP are updated. See below the newly
configured uplink ports and connect them to the peer switch.
SwitchID / Port
2 / 51   Uplink port of EtherChannel 2 to peer switch
3 / 50   Uplink port of EtherChannel 2 to peer switch
3 / 51   Uplink port of EtherChannel 2 to peer switch
4 / 51   Uplink port of EtherChannel 2 to peer switch
Unless any errors are reported, cable the switch ports to use the new
uplink channel ports. Look at
"/opt/flexframe/network/wiring_of_swgroup2 swg2.txt" to get a copy of
this message.

9.13 Delete an Uplink of Switch Group

To delete an existing uplink of a switch group, use operation mode rem-uplink of ff_swgroup_adm.pl. It will remove the channel with its link aggregate and all associated ports at the switch group.

Synopsis

ff_swgroup_adm.pl --op rem-uplink --group <switch_group_id>
    --channel <uplink_channel_id> [--dryrun]

Options

--op rem-uplink
    Removes an existing uplink link aggregate.

--group <switch_group_id>
    Defines the switch group to change.

--channel <uplink_channel_id>
    Defines the uplink channel of the switch group to be removed.

--dryrun
    For test purposes you can perform the function without changing the LDAP database and updating the switch ports.

Example

cn1:~ # ff_swgroup_adm.pl --op rem-uplink --group 2 --channel 2
update LDAP ...
update switch 2/1 configuration
Notice: Update will take about 1 minute.
...
Uplink with channel id 2 removed at switch group 2. It was removed from
the switch group and LDAP. See below the freed uplink ports.
SwitchID / Port
1 / 51
2 / 51
3 / 49
3 / 50
3 / 51
4 / 49
4 / 50
4 / 51
Unless any errors are reported, the switch ports of the uplink channel
are now unused. Look at "/opt/flexframe/network/wiring_of_swgroup2 swg2.txt"

to get a copy of this message.

9.14 Migrating a Switch of a Switch Group

Because of the various supported switch types within a switch group, situations may occur where a switch of a specific type should be replaced with a switch of another type. Switch migration supports such situations without losing the switch configuration. The following table shows the supported migrations ("x": migration possible, "-": migration not possible):

from \ to       cat3750g-24t  cat3750g-24ts  cat3750g-48ts  cat3750e-24td  cat3750e-48td
cat3750g-24t         -             x              x              x              x
cat3750g-24ts        -             -              x              x              x
cat3750g-48ts        -             -              -              -              x
cat3750e-24td        -             -              -              -              x
cat3750e-48td        -             -              -              -              -

For the migrations marked "x", use the script ff_swgroup_adm.pl --op migrate-sw.

To migrate a switch of an existing switch group, the new switch has to be inserted into the stack at the position of the old switch. This action requires a downtime of the affected switch group. The following steps have to be performed:

1. Run ff_save_switch_config.pl to save the configurations.
2. Write down the switch IDs of each stack member of the switch stack, as they may change when inserting the new switch into the stack.
3. Check the IOS versions of the switches in the existing switch stack and the new switch. Switches of the same model (G or E model) must have the same version. If the versions differ, upgrade or downgrade the IOS of the new switch accordingly.
4. If the switch you want to migrate uses SFP ports for uplinks and the new switch does not support SFP ports, use ff_swgroup_adm.pl --op rem-uplink to remove the uplink.

5. Use ff_swgroup_adm.pl --op migrate-sw to migrate the type of the switch as described below.
6. Power off all switches of the switch stack.
7. Remove all network and backplane cables from the switch you want to migrate.
8. Replace the switch. See the table above for the possible replacements.
9. Plug all network cables and backplane cables into the new switch.
10. Power on all switches of the switch stack.
11. If the new switch supports more interfaces, set all new interfaces to shutdown.
12. Compare the new switch IDs with the IDs you noted. In case of differences use the following IOS command for renumbering:
    switch <switch_number> renumber <new_switch_number>
13. If necessary, reconfigure the uplink using ff_swgroup_adm.pl --op add-uplink.

If you want to migrate more switches of a switch stack, you have to repeat steps 3 to 5 and 7 to 13 for each switch. The switch group is then ready for use or further configuration.

Synopsis

ff_swgroup_adm.pl --op migrate-sw --group <switch_group_id>
    --switch <switch_id> --type <switch_type> [--dryrun]

Options

--op migrate-sw
    Migrates the type of a switch of the switch group.

--group <switch_group_id>
    Defines the switch group to be used.

--switch <switch_id>
    Defines the stack ID of the switch to be migrated.

--type <switch_type>
    Defines the type the switch should be migrated to. Call ff_swgroup_adm.pl without any parameter to get a list of supported switch types. Migration is allowed according to the migration table above.

--dryrun
    For test purposes you can perform the function without changing the LDAP database.

Example

cn1:/opt/flexframe/bin # ff_swgroup_adm.pl --op migrate-sw --group 1 --switch 2 --type cat3750e-24td

9.15 Adding a Switch Port Configuration

Switch ports are typically configured directly by the maintenance tools. But some tasks, like configuring ports for gateways, backup or migration systems, need a way to do this on a per-port basis. For this type of configuration a special peer type is used. The program ff_swport_adm.pl is used to configure or remove this type of port configuration.

Synopsis

ff_swport_adm.pl --op add --port <swgroup:switch:port> [--10gbit]
    --lan <pool:lan[:lan][,pool:lan[:lan]]> [--native <pool:lan>]
    [--desc <description>] [--dryrun]

Options

--op add
    Adds a switch port configuration and displays some information about processing steps.

--port <swgroup:switch:port>
    Defines the switch group, switch and port ID of the port to be used.

--10gbit
    Defines the port number used in --port to be the number of a TenGigabitEthernet port.

--lan <pool:lan[:lan][,pool:lan[:lan]]>
    Defines the accessible VLANs. For better readability, a VLAN is specified with its pool and LAN name. Use only client, server or storage as LAN names. For more than one LAN per pool, the LAN names may be added to the same pool statement. The VLANs are not restricted to belong to the same pool. To directly add VLAN IDs not used within any pool, use '#' as pool name and the VLAN ID(s) as LAN(s).

    If only a single VLAN is configured, it is accessible as native VLAN. This means the data packets contain no VLAN tag. This is the behavior used by a standard server network interface. If more than one LAN is given, they are configured as tagged. To define which of them should be used as native VLAN, use the option --native.
    Examples:
    --lan poola:client:server,poolb:client:server
    --lan poola:client,poola:server,poolb:client:server
    --lan poola:storage,poolb:storage
    --lan poola:server
    --lan '#:417:891'
    --lan poola:server,'#:417:891'
    --lan 'poola:server,#:417:891'

--native <pool:lan>
    Use this option to define the native VLAN among the accessible VLANs defined with option --lan. To directly add a VLAN ID not used within any pool, use '#' as pool name and the VLAN ID as LAN.
    Examples:
    --native poola:server
    --native '#:417'

--desc <description>
    The description is added to the configuration of the switch port and to the LDAP data of the switch port configuration.

--dryrun
    For test purposes you can perform the function without changing the LDAP database and updating the switch ports.

Example

cn1:/opt/flexframe/bin # ff_swport_adm.pl --op add --port 1:1:15 --lan ip1:storage:server,'#:4000' --native ip1:storage
Execution may take some minutes.
If program is aborted by Ctrl-C or a failure remove leftovers by calling:
    ff_swport_adm.pl --op rem --port 1:1:15
update switch 1/1 configuration
Notice: Update will take about 1 minute.
...+
If no error is reported, the port is configured and LDAP is updated
successfully.

9.16 Removing a Switch Port Configuration

To remove a switch port configuration that was previously configured by ff_swport_adm.pl or as external connectivity with the Management Tool, use the ff_swport_adm.pl command. Other ports are configured with maintenance tools like ff_an_adm.pl, ff_pool_adm.pl or ff_bx_cabinet_adm.pl. The switch port configuration will be removed from the switch and the LDAP database.

Synopsis

ff_swport_adm.pl --op rem --port <swgroup:switch:port> [--10gbit]

Options

--op rem
    Removes the configuration of a switch port and displays some information about processing steps.

--port <swgroup:switch:port>
    Defines the switch group, switch and port ID of the port to be used.

--10gbit
    Defines the port number used in --port to be the number of a TenGigabitEthernet port.

--dryrun

    For test purposes you can perform the function without changing the LDAP database and updating the switch ports.

Example

cn1:/opt/flexframe/bin # ff_swport_adm.pl --op rem --port 1:1:15
Execution may take some minutes.
If program is aborted by Ctrl-C or a failure remove leftovers by calling:
    ff_swport_adm.pl --op rem --port 1:1:15
update switch 1/1 configuration
Notice: Update will take about 1 minute.
...+
If no error is reported, the port is unconfigured and LDAP is updated
successfully.

9.17 Displaying a Switch Port Configuration

To display the configuration of a switch port in detail, as known by the LDAP database, use the command ff_swport_adm.pl with operation mode list.

Synopsis

ff_swport_adm.pl --op list --port <swgroup:switch:port> [--10gbit]

Options

--op list
    Displays the configuration of the switch port.

--port <swgroup:switch:port>
    Defines the switch group, switch and port ID of the port to be used.

--10gbit
    Defines the port number used in --port to be the number of a TenGigabitEthernet port.

Examples

cn1:/opt/flexframe/bin # ff_swport_adm.pl --op list --port 1:1:4
Switch Port Configuration of 1:1:4 (Switch Group : Switch : Port)
assigned VLAN IDs:   24,25,26
assigned VLAN Names: pool1:client,pool1:server,pool1:storage
native VLAN:         26
Port Peer Type:      AN
Peer Node:           rx300-1

Display of an unconfigured port:

ERROR: wrong switch port "1:1:8". Port configuration unknown.

9.18 Displaying the Complete Switch Port Configuration

To display the complete configuration of all switch ports in detail, as known by the LDAP database, use the command ff_swport_adm.pl with operation mode list-all.

Synopsis

ff_swport_adm.pl --op list-all

Options

--op list-all
    Displays the configuration of all switch groups.

Examples

cn1:/opt/flexframe/bin # ff_swport_adm.pl --op list-all
Switch Port Configuration of 1:1:1 (Switch Group : Switch : Port)
assigned VLAN IDs:   1010,1011,1012
assigned VLAN Names: pool1:client,pool1:storage,pool1:server
native VLAN:         1011
Port Peer Type:      GW
Peer Node:           sno1apl5p1

Switch Port Configuration of 1:1:2 (Switch Group : Switch : Port)
assigned VLAN IDs:   1010,1011,1012
assigned VLAN Names: pool1:client,pool1:storage,pool1:server
native VLAN:         1011
Port Peer Type:      AN
Peer Node:           sno1apl2

Switch Port Configuration of 1:1:3 (Switch Group : Switch : Port)
assigned VLAN IDs:   1,90,91,92,93,94,95,1010,1011,1012,1013,1020,1021,1022,1030,1031,1032
assigned VLAN Names: -:defaultvlan,toni:storage,toni:client,toni:server,toni1:storage,toni1:client,toni1:server,pool1:client,pool1:storage,pool1:server,-:control,pool2:client,pool2:storage,pool2:server,pool3:client,pool3:storage,pool3:server
Port Channel ID:     3
Port Peer Type:      SWB

Switch Port Configuration of 1:1:4 (Switch Group : Switch : Port)
assigned VLAN IDs:   1,90,91,92,93,94,95,1010,1011,1012,1013,1020,1021,1022,1030,1031,1032
assigned VLAN Names: -:defaultvlan,toni:storage,toni:client,toni:server,toni1:storage,toni1:client,toni1:server,pool1:client,pool1:storage,pool1:server,-:control,pool2:client,pool2:storage,pool2:server,pool3:client,pool3:storage,pool3:server
Port Channel ID:     4
Port Peer Type:      SWB

9.19 Moving Device Connection to Core Switch

Devices (e.g. Application Nodes or NAS storage) in FlexFrame are usually connected to the FlexFrame internal (LAN) switch groups. When the configuration grows, e.g. if a lot of servers connected to several switch groups use the same NAS system, it may be more convenient to connect devices like this NAS system directly to core switches. Core switches are customer switches the switch groups have uplinks to. Core switches are not under the control of FlexFrame and are not represented in LDAP. Devices directly connected to core switches are still represented in LDAP, with a connection to an imaginary switch group zero. For more detailed information on the core network concept see also: "FlexFrame for SAP Network Design and Configuration Guide".

ff_move_to_core.pl supports moving device connections to core switches. The switch group ports occupied by the device to be moved are displayed, and if the --doit option is used, the configuration of the affected ports is changed and an update of LDAP is performed.

Synopsis

ff_move_to_core.pl --help
ff_move_to_core.pl --version

Options

--help
    Display usage.

--version
    Display script version.

Move Control Center to Core Switch

Synopsis

ff_move_to_core.pl --op cn [--doit]

Options

--op cn
    The switch group ports of both Control Nodes should be released. The affected ports are displayed.

--doit
    The configuration of the affected ports is changed and an update of LDAP is performed.

Example

cn1:/opt/flexframe/bin # ff_move_to_core.pl --op cn

Move Client LAN to Core Switch

Synopsis

ff_move_to_core.pl --op clan [--doit]

Options

--op clan
    The switch group ports of the client LAN should be released. The affected ports are displayed.

--doit
    The configuration of the affected ports is changed and an update of LDAP is performed.

Example

cn1:/opt/flexframe/bin # ff_move_to_core.pl --op clan

Move NAS System to Core Switch

Synopsis

ff_move_to_core.pl --op nas --name <NAS node name> [--doit]

Options

--op nas
    The switch group ports of a NAS system should be released. The affected ports are displayed.

--name <NAS node name>
    The node name of the NAS system which should be moved.

--doit
    The configuration of the affected ports is changed and an update of LDAP is performed.

Example

cn1:/opt/flexframe/bin # ff_move_to_core.pl --op nas --name filer

Move Application Node to Core Switch

Synopsis

ff_move_to_core.pl --op an --name <AN node name> [--doit]

Options

--op an
    The switch group ports of an Application Node should be released. The affected ports are displayed.

--name <AN node name>
    The node name of the Application Node which should be moved.

--doit
    The configuration of the affected ports is changed and an update of LDAP is performed.

Example

cn1:/opt/flexframe/bin # ff_move_to_core.pl --op an --name rx

Move ESX Server to Core Switch

Synopsis

ff_move_to_core.pl --op esx --name <ESX node name> [--doit]

Options

--op esx
    The switch group ports of an ESX server should be released. The affected ports are displayed.

--name <ESX node name>
    The node name of the ESX server which should be moved.

--doit
    The configuration of the affected ports is changed and an update of LDAP is performed.

Example

cn1:/opt/flexframe/bin # ff_move_to_core.pl --op esx --name rx

Move BX Chassis to Core Switch

Synopsis

ff_move_to_core.pl --op bx --name <BX chassis name> [--doit]

Options

--op bx
    The switch group ports of a BX chassis should be released. The affected ports are displayed.

--name <BX chassis name>
    The chassis name of the BX chassis which should be moved.

--doit
    The configuration of the affected ports is changed and an update of LDAP is performed.

Example

cn1:/opt/flexframe/bin # ff_move_to_core.pl --op bx --name bx
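A cautious workflow, common to all operation modes of ff_move_to_core.pl, is to run the command first without --doit to review the affected switch group ports, and only then with --doit to apply the change; the NAS node name below is an assumption for illustration:

cn1:/opt/flexframe/bin # ff_move_to_core.pl --op nas --name filer2
cn1:/opt/flexframe/bin # ff_move_to_core.pl --op nas --name filer2 --doit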

10 SAP System Handling

This chapter describes the management of SAP System IDs (so-called SIDs) and their respective instances within the FlexFrame environment. It further describes how to clone SIDs as well as their instances for a different pool than the one they were installed to.

The tools described below only maintain the LDAP database entries; they do not add or remove the data and binaries of the respective SAP systems. These steps need to be performed manually.

Listing, adding, removing and cloning the above entities in the LDAP server is supported by two tools, ff_sid_adm.pl and ff_clone_sid.pl. Both scripts will take care of keeping the data accessed by the operating system's naming service mechanism in sync with the FlexFrame internal configuration data, both of which reside in the LDAP server. This data should not be manipulated manually.

Preventing port conflicts

Please take into account that, depending on the instance number, ports are reserved for the SAP application. This could mean that you conflict with other non-SAP applications on your nodes (e.g. SAP's ICM uses ports 80nn, which may conflict with an application which uses port 8081). Please refer to the SDN document "TCP/IP Ports Used by SAP Applications" to prevent conflicts; a minimal check example is shown at the end of this introduction.

If a new SAP system is added by ff_sid_adm.pl and the modification of the LDAP database fails, ff_sid_adm.pl always tries to remove all changes in LDAP which were made until the failure. This rollback may cause error messages because the rollback operation does not know exactly where the error occurred (it cannot recognize the failed command). Those messages can be ignored. The script ff_sid_adm.pl tries to generate files to support diagnostics. Those files are:

/tmp/ldap.ldifs.<pool>.<sid>
/tmp/ldap.error.<pool>.<sid>
/tmp/ldap.log.<pool>.<sid>
/tmp/rollback.<pool>.<sid>

In addition to the SAP system handling described in this chapter, there is another possibility to handle SAP systems using ACC (Adaptive Computing Controller - an SAP system for monitoring and controlling SAP environments). For more information see the manual "Installation ACC" and additional documentation from SAP (ACCImplementation.pdf, ACCSecurity.pdf, ACCCustomizing.pdf).
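As an illustration of the 80nn pattern (nn being the two-digit instance number), a minimal check whether the ICM port of a planned instance number 01 is already occupied on an Application Node could look like this; the node name and instance number are assumptions:

an1:~ # netstat -tln | grep ':8001 '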

10.1 Listing SAP SIDs and Instances

Synopsis

ff_sid_adm.pl --op list-sids [--pool <pool_name>]
ff_sid_adm.pl --op list-ids [--pool <pool_name> --sid <sid>]
ff_sid_adm.pl --op list-all [--pool <pool_name> --sid <sid>]

Options

--op list-sids
    The SIDs of the specified pool/all pools are displayed with information on SAP version, database type and database version.

--op list-ids
    The instance numbers are displayed related to the given pool/SID or all pools/all SIDs.

--op list-all
    Shows extended information about a specific SID (instance types, instance numbers, used IP addresses, used hostnames) of a given pool.

--pool <pool_name>
    Specifies the FlexFrame pool to which the operation should be applied. The list of SIDs in <pool_name> is shown.

--sid <SAP_system_id>
    The instance numbers of the specified SID are displayed related to the given pool.

Examples

%> ff_sid_adm.pl --op list-ids --pool Pan --sid SHT
Configuration Data of SID instance number and types:
SHT   05 scs
      DB0 db
      07 ci

%> ff_sid_adm.pl --op list-sids --pool Pan
List of SIDs in pool: Pan
O01 SAP-7.1 Oracle-10
S02 SAP-7.0 MaxDB-76

%> ff_sid_adm.pl --op list-all --pool Pan --sid SHT
Configuration data of SID: SHT (Pool: Pan)
SID global Data:
    SAP version specified: SAP-7.0
    Database specified:    Oracle-10
Instance Data:
    Instance type:        ci
    Instance number:      00
    Client-LAN Host-IP:   nnn.nn.nn.nnn
    Client-LAN Hostname:  cisht
    Server-LAN Host-IP:   nnn.nn.mm.nnn
    Server-LAN Hostname:  cisht-se

10.2 Updating System Configuration Files

Synopsis

ff_sid_adm.pl --op db2adm --pool <pool_name> --sid <SAP_system_id>

Options

--op db2adm
    Updates the system configuration files with DB2-specific data (host name, service names) for a specific SID.

--pool <pool_name>
    Specifies the FlexFrame pool to which the operation should be applied.

--sid <SAP_system_id>
    Specifies the SID being used.
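An invocation could look like the following sketch; the pool and SID names are assumptions, and the operation is only meaningful for SIDs running on DB2:

cn1:~ # ff_sid_adm.pl --op db2adm --pool pool1 --sid LB5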

10.3 Adding/Removing/Modifying SAP SIDs and Instances (Classic SAP Services)

Synopsis

ff_sid_adm.pl --op add --pool <pool_name> --sid <SAP_system_id> \
    --sapversion { } \
    --db {ORACLE{ }|MAXDB{ }|DB2V{ }}:{<db_loghost_ip>|*} \
    --lc {MAXDB75|MAXDB76|MAXDB77}:{<lc_loghost_ip>|*} \
    [--group <groupname1>:<gidnumber1>,<groupname2>:<gidnumber2>,...] \
    [--user <username1>:<uidnumber1>,<username2>:<uidnumber2>,...] \
    --sap {ci|app|jc|j|scs|ascs|ers}:<sysnr>:{<loghost_client_ip>|*}:{<loghost_server_ip>|*} \
    [--os <instance_type>:<os>,<instance_type>:<os>,...] \
    [--ips <old_ip>:<new_ip>,<old_ip>:<new_ip>,...] \
    [--db2srv sapdb2<sid>:<port>,db2_db2<sid>:<port>,DB2_db2<sid>_1:<port>,DB2_db2<sid>_2:<port>,DB2_db2<sid>_END:<port>]

ff_sid_adm.pl --op mod --pool <pool_name> --sid <SAP_system_id> \
    [--os <instance_type>:<os>,<instance_type>:<os>,...]

ff_sid_adm.pl --op mod --pool <pool_name> --sid <SAP_system_id> \
    [--ips <old_ip>:<new_ip>,<old_ip>:<new_ip>,...]

ff_sid_adm.pl --op del --pool <pool_name> --sid <SAP_system_id> \
    [--sysnr <SYSNR>]

Options

--op add
    Determines the add operation.

--op mod
    Determines the mod operation. This option is only used to modify OS specifications and to exchange IP addresses of specific SID instances.

--op del
    Determines the del operation.

--pool <pool_name>
    Specifies the FlexFrame pool to which the operation should be applied.

--sid <SAP_system_id>
    Specifies the SID being used.

--sapversion { }
    Specifies the SAP basis version being used. Please take into account that SAP 7.2 is released only for specific SAP applications. Support of SAP 7.3 depends on a specific patch level.

--db {ORACLE{ }|MAXDB{ }|DB2V{ }}
    Specifies the database type as well as the respective version being used. The specification of the database type is not case-sensitive. Please take into account that there are restrictions on the database type depending on the SAP release (see information at the SAP service sites).

--lc {MAXDB75|MAXDB76|MAXDB77}
    Specifies the LiveCache database type as well as the respective version being used. The specification of the database type is not case-sensitive.

--group <groupname1>:<gidnumber1>,<groupname2>:<gidnumber2>,...
--user <username1>:<uidnumber1>,<username2>:<uidnumber2>,...
    user and group enable specially selected user numbers and group numbers to be assigned to SAP users and SAP groups respectively. In this case a check is made to see whether the user or group has already been defined for the DB system involved. A user or group is created only if they do not already exist. For example, a group dba which already exists cannot be assigned a group number which deviates from the default value.

{<db_loghost_ip>|*}
    The logical host name used for the database (generated automatically: db<sid>-se) as well as the IP address for that host name. Use an asterisk if you want it to be chosen automatically. All the entries need to be specified in a colon-separated format. You can omit the network part of the IP and specify only the last tuple of the IP.

{<lc_loghost_ip>|*}
    The logical host name used for the LiveCache (generated automatically: lc<sid>-se) as well as the IP address for that host name. Use an asterisk if you want it to be chosen automatically. All the entries need to be specified in a colon-separated format. You can omit the network part of the IP and specify only the last tuple of the IP.

--sap {ci|app|jc|j|scs|ascs|ers}:<sysnr>:{<loghost_client_ip>|*}:{<loghost_server_ip>|*}
    Specifies an SAP instance (optionally multiple of those) through its type (ci, app, jc, j, scs, ascs, ers), its SAP system number, the logical host name (generated automatically: <type><sid>) in the client network, the respective IP address, the logical host name (generated automatically: <type><sid>-se) in the server network and the respective IP address. Again, the IP addresses can be replaced with asterisks in order to have them chosen automatically. ERS is only supported with --sapversion 7.0 or higher. For Enqueue Replicated Servers (ERS) the asterisk should be used because no IP address from the client and server LAN is needed (given IP addresses will be ignored). All the entries need to be specified in a colon-separated format.

    Using --sapversion 7.1 you can omit the network part of the IP and specify only the last tuple of the IP.
    With SAP 7.1, SAP systems do not know the instance type jc for JAVA systems. Therefore the hostname used changed from jc<sid> to j<sysnr><sid> and from jc<sid>-se to j<sysnr><sid>-se. ff_sid_adm.pl still requires the type jc in option --sap to make a distinction between central instance and application instance. Please take into account the different hostname specifications depending on the SAP version used.

--sysnr <SYSNR>
    Removes a specific SAP instance instead of the entire system (SID).

--os <instance_type>:<os>,<instance_type>:<os>,...
    Specifies the OS type for the given SID and/or its instances.
    instance_type ::= {default|ci|app|jc|j|scs|ascs|ers}
    os ::= {SLES-10.x86_64|SLES-11.x86_64}
    In combination with the --op add option, default:<os> sets the operating system of the SID itself and of all its instances to which no own operating system is assigned. In combination with the --op mod option, default:<os> sets the operating system of the SID only. The specifications of the instances are not changed.
    Examples:
    default:SLES-10.x86_64 - all instances are set to SLES-10.x86_64
    default:SLES-10.x86_64,ci:SLES-11.x86_64 - all instances are set to SLES-10.x86_64 except instance CI, which is set to SLES-11
    db:SLES-11.x86_64,scs:SLES-10.x86_64,... - for each instance an OS is set explicitly
    The consistency of the given instance type/OS combinations is not checked by FlexFrame. For a list of allowed combinations see the SAP note.

--ips <old_ip>:<new_ip>,<old_ip>:<new_ip>,...
    Allows exchanging the IP addresses of specific SID instances. This option is only used with --op mod. You have to specify the full IP you want to exchange. ff_sid_adm.pl searches for the specific instances within a SID and exchanges all corresponding entries in LDAP concerned by that request. Please be aware that you are making critical changes within your configuration. We strongly recommend saving a backup of the LDAP database before changing any IP addresses.

--db2srv sapdb2<sid>:<port>,db2_db2<sid>:<port>,DB2_db2<sid>_1:<port>,DB2_db2<sid>_2:<port>,DB2_db2<sid>_END:<port>
    This option is only used for DB2, otherwise it is ignored. You can specify a list of services you need. The services are written to LDAP and the pool-specific /etc/services.

Examples

Adding a SID with one Central Instance:

control1:~ # ff_sid_adm.pl --op add --sid SHT --pool Otto --sapversion --db ORACLE9: --sap ci:00:\*:\*

Adding an instance to an existing SAP system:

control1:~ # ff_sid_adm.pl --op add --sid SHT --pool Otto --sapversion --sap app:01::\*:\*

Adding a SID for LiveCache:

control1:~ # ff_sid_adm.pl --op add --pool test --sid LCA --sapversion --lc MAXDB76:\*

Adding a SID with ERS support:

control1:~ # ff_sid_adm.pl --op add --pool pool1 --sid S04 --sapversion --db ORACLE10:dbs04-se: --sap ers:12:\*:\* --sap ci:13: :

Adding a SID with operating system:

control1:~ # ff_sid_adm.pl --op add --pool pool1 --sid OS1 --sapversion --db ORACLE10: --sap ci:57: : --sap ascs:55: : --os default:sles-10.x86_64,ascs:sles-9.x86_64

Adding a SID with DB2 services:

control1:~ # ff_sid_adm.pl --op add --pool pool1 --sid LB5 --sapversion

    --db DB2V91: --sap ci:57: : --sap ascs:55: : --db2srv sapdb2lb5:60000,db2_db2lb5:60001,db2_db2lb5_1:60002,db2_db2lb5_2:60003,DB2_db2lb5_END:60004

Adding a SID:

control1:~ # ff_sid_adm.pl --op add --pool pool1 --sid JA0 --sapversion --db MAXDB77:d --sap jc:57: : --sap j:58: : --sap ascs:55: ::

Adding a SID with old syntax (SAP 7.1, JAVA):

control1:~ # ff_sid_adm.pl --op add --pool pool1 --sid JA1 --sapversion --db ORACLE10: --sap jc:57: : --sap j:58: : --sap ascs:55: :

Removing an entire SID (including its instances):

%> ff_sid_adm.pl --op del --sid SHT --pool Otto

Removing an Application Server:

%> ff_sid_adm.pl --op del --sid SHT --pool Otto --sysnr 01

Modifying IP addresses:

%> ff_sid_adm.pl --op mod --sid SHT --pool Otto --ips : , :

Modifying the OS version of the SID entry:

%> ff_sid_adm.pl --op mod --sid SHT --pool Otto --os "default:sles-10.x86_64,ci:sles-11.x86_64"

10.4 Removing SAP SIDs and Instances

Synopsis

ff_sid_adm.pl --op del --pool <pool_name> --sid <SAP_system_id>
ff_sid_adm.pl --op [del|rem] --pool <pool_name> \
    --sid <SAP_system_id> [--sysnr <SYSNR>]

Options

--op del|rem
    Determines the del operation.

--pool <pool_name>
    Specifies the FlexFrame pool to which the operation should be applied.

--sid <SAP_system_id>
    Specifies the SID being used.

--sysnr <SYSNR>
    Removes a specific SAP instance instead of the entire system (SID).

Examples

Removing an entire SID (including all instances):

%> ff_sid_adm.pl --op del --sid SHT --pool Otto

Removing a specific instance:

%> ff_sid_adm.pl --op del --sid SHT --pool Otto --sysnr 01

10.5 Adding/Removing SAP SIDs (addon services)

BOBJ Business Objects

Synopsis

ff_sid_adm.pl --op add --pool <pool_name> --sid <SAP_system_id> \
    --sapversion { } \
    --bobj <client_lan_hostip> --os <spec>

ff_sid_adm.pl --op del --pool <pool_name> --sid <SAP_system_id>

Options

--op add|del
    Determines the operation, add or delete.

--pool <pool_name>
    Specifies the FlexFrame pool to which the operation should be applied.

--sid <SAP_system_id>
    Specifies the SID being used.

--sapversion { }
    Specifies the SAP basis version being used.

--group <groupname1>:<gidnumber1>,<groupname2>:<gidnumber2>,...
--user <sid>adm:<uidnumber>
    user and group enable specially selected user numbers and group numbers to be assigned to SAP users and SAP groups respectively. In this case a check is made to see whether the user or group has already been defined for the SAP system involved. A user or group is created only if they do not already exist. For example, a group sapsys which already exists cannot be assigned a group number which deviates from the default value.

--bobj <client_lan_hostip>
    The IP address for the client LAN host name of the BOBJ service (generated internally: bobj<sid>). Use an asterisk if you want it to be chosen automatically. You can omit the network part of the IP and specify only the last tuple of the IP.

--os <instance_type>:<os>,<instance_type>:<os>,...
    Specifies the OS type for the given SID and/or its instances.
    instance_type ::= {default|bobj}
    os ::= {SLES-10.x86_64|SLES-11.x86_64}
    In combination with the --op add option, default:<os> sets the operating system of the SID itself and of all its instances to which no own operating system is assigned. In combination with the --op mod option, default:<os> sets the operating system of the SID only. The specifications of the instances are not changed.

    The consistency of the given instance type/OS combinations is not checked by FlexFrame. For a list of allowed combinations see the SAP note.

Examples

Adding a SID with BOBJ service:

control1:~ # ff_sid_adm.pl --op add --sid bob --pool Otto --sapversion 3.2 \
    --bobj 160

Removing a BOBJ service:

control1:~ # ff_sid_adm.pl --op del --sid bob --pool Otto

Content Server (CMS)

Adding a CMS service for the first time means that you need to specify all the instances required by CMS (database and CMS client specification). A CMS service consists of:

- a database instance
- a client instance

Synopsis

ff_sid_adm.pl --op add --pool <pool_name> --sid <SAP_system_id> \
    --db MAXDB76:{<db_loghost_ip>|\*} \
    --cms {<client_lan_ip>|\*} \
    --sapversion 6.40 [--os <spec>]

ff_sid_adm.pl --op del --pool <pool_name> --sid <SAP_system_id>

Options

--op add|del
    Determines the operation, add or delete.

--pool <pool_name>
    Specifies the FlexFrame pool to which the operation should be applied.

--sid <SAP_system_id>

    Specifies the SID being used.

--sapversion {6.40}
    Specifies the SAP basis version being used.

--db MAXDB76:{<db_loghost_ip>|\*}
    Specifies the database version being used, the logical host name (generated internally: db<sid>-se) and the IP address for that host name. Use an asterisk if you want it to be chosen automatically. All the entries need to be specified in a colon-separated format. You can omit the network part of the IP and specify only the last tuple of the IP.

--cms <client_lan_hostip>
    The IP address for the client LAN host name of the CMS service (generated internally: cms<sid>). Use an asterisk if you want it to be chosen automatically. You can omit the network part of the IP and specify only the last tuple of the IP.

--group <groupname1>:<gidnumber1>,<groupname2>:<gidnumber2>,...
--user <username1>:<uidnumber1>,<username2>:<uidnumber2>,...
    user and group enable specially selected user numbers and group numbers to be assigned to SAP users and SAP groups respectively. In this case a check is made to see whether the user or group has already been defined for the SAP system involved. A user or group is created only if they do not already exist. For example, a group sapsys which already exists cannot be assigned a group number which deviates from the default value.

--os <instance_type>:<os>,<instance_type>:<os>,...
    Specifies the OS type for the given SID and/or its instances.
    instance_type ::= {default|cms}
    os ::= {SLES-10.x86_64|SLES-11.x86_64}
    In combination with the --op add option, default:<os> sets the operating system of the SID itself and of all its instances to which no own operating system is assigned.

    In combination with the --op mod option, default:<os> sets the operating system of the SID only. The specifications of the instances are not changed.
    The consistency of the given instance type/OS combinations is not checked by FlexFrame. For a list of allowed combinations see the SAP note.

Examples

Adding a SID with CMS service:

control1:~ # ff_sid_adm.pl --op add --sid cmx --pool Otto --sapversion 6.40 \
    --db MAXDB76:170 \
    --cms 170

Adding a CMS client instance:

control1:~ # ff_sid_adm.pl --op add --sid cmx --pool Otto --sapversion 6.40 \
    --cms 170

Removing a CMS service:

control1:~ # ff_sid_adm.pl --op rem --sid cmx --pool Otto

Removing the CMS client instance:

control1:~ # ff_sid_adm.pl --op rem --sid cmx --pool Otto --sysnr CMS

MDM Master Data Management

Adding an MDM service for the first time means that you need to specify at least the database instance. An MDM service contains:

- a database instance
- a number of services of type 'mds'
- a number of services of type 'mdss'
- a number of services of type 'mdis'

Synopsis

ff_sid_adm.pl --op add --pool <pool_name> --sid <SAP_system_id> \
    --db {ORACLE{9|10}|MAXDB{ }|DB2V{ }}:{<db_loghost_ip>|*} \
    --mdm mds:<nr>:{<client_lan_ip>|\*}:{<server_lan_ip>|\*} \
    --mdm mdss:<nr>:{<client_lan_ip>|\*}:{<server_lan_ip>|\*} \
    --mdm mdis:<nr>:{<client_lan_ip>|\*}:{<server_lan_ip>|\*} \
    --sapversion 7.1 [--os <spec>]

ff_sid_adm.pl --op del --pool <pool_name> --sid <SAP_system_id>

Options

--op add
    Determines the add operation.

--op mod
    Determines the mod operation. This option is only used to modify OS specifications and to exchange IP addresses of specific SID instances.

--op del
    Determines the del operation.

--pool <pool_name>
    Specifies the FlexFrame pool to which the operation should be applied.

--sid <SAP_system_id>
    Specifies the SID being used.

--sapversion {7.1}
    Specifies the SAP basis version being used.

--db {ORACLE{9|10}|MAXDB{ }|DB2V{ }}
    Specifies the database type as well as the respective version being used. The specification of the database type is not case-sensitive.

--group <groupname1>:<gidnumber1>,<groupname2>:<gidnumber2>,...
--user <username1>:<uidnumber1>,<username2>:<uidnumber2>,...
    user and group enable specially selected user numbers and group numbers to be assigned to SAP users and SAP groups respectively. In this case a check is made to see whether the user or group has already been defined for the DB system involved. A user or group is created only if they do not already exist. For example, a group dba which already exists cannot be assigned a group number which deviates from the default value.

{<db_loghost_ip>|*}
    The logical host name used for the database (generated automatically: db<sid>-se) as well as the IP address for that host name. Use an asterisk if you want it to be chosen automatically. All the entries need to be specified in a colon-separated format. You can omit the network part of the IP and specify only the last tuple of the IP.

--mdm {mds|mdss|mdis}:<sysnr>:{<loghost_client_ip>|*}:{<loghost_server_ip>|*}
    Specifies an SAP instance (optionally multiple of those) through its type, its SAP system number, the client network IP address and the server network IP address. Again, the IP addresses can be replaced with asterisks in order to have them chosen automatically. All the entries need to be specified in a colon-separated format. FlexFrame expects that the syntax of loghost-client follows the rule <type><sid> and loghost-server follows <type><sid>-se.

--sysnr <SYSNR>
    Removes a specific SAP instance instead of the entire system (SID).

--os <instance_type>:<os>,<instance_type>:<os>,...
    Specifies the OS type for the given SID and/or its instances.
    instance_type ::= {default|mds|mdss|mdis}
    os ::= {SLES-10.x86_64|SLES-11.x86_64}
    In combination with the --op add option, default:<os> sets the operating system of the SID itself and of all its instances to which no own operating system is assigned. In combination with the --op mod option, default:<os> sets the operating system of the SID only. The specifications of the instances are not changed.

    The consistency of the given instance type/OS combinations is not checked by FlexFrame. For a list of allowed combinations see the SAP note.

--ips <old_ip>:<new_ip>,<old_ip>:<new_ip>,...
    Allows exchanging the IP addresses of specific SID instances. This option is only used with --op mod. You have to specify the full IP you want to exchange. ff_sid_adm.pl searches for the specific instances within a SID and exchanges all corresponding entries in LDAP concerned by that request. Please be aware that you are making critical changes within your configuration. We strongly recommend taking a backup of the LDAP database before changing IP addresses.

Examples

Adding a SID with MDM services:

control1:~ # ff_sid_adm.pl --op add --sid mdm --pool Otto --sapversion 7.1 \
    --db MAXDB76:170 \
    --mdm mds:01:171:171 --mdm mdss:02:172:172 --mdm mdis:03:173:173

Adding an instance of type mdss to an existing MDM SID:

control1:~ # ff_sid_adm.pl --op add --sid mdm --pool Otto --sapversion 7.1 \
    --mdm mdss:04:174:174

Removing a specific instance of an MDM SID:

control1:~ # ff_sid_adm.pl --op rem --sid mdm --pool Otto --sysnr 02

Removing a SID with MDM services:

control1:~ # ff_sid_adm.pl --op rem --sid mdm --pool Otto

SMD Solution Manager Diagnostics

Before SAP 7.1 EHP 1, a Solution Manager Diagnostics agent specification was optional. From SAP 7.1 EHP 1 on, services like 'ci', 'app' etc. require an SMD service SID to continue the installation process. You do not need to create an SMD SID for each SAP system SID you want to install. You can specify a general SMD SID and specify SMDs for different SAP services, even from different SAP service SIDs. Take into account that each SMD instance needs a unique instance number. This may lead to a flood of instance numbers within your specific pool.

Synopsis

ff_sid_adm.pl --op add --pool <pool_name> --sid <SAP_system_id> \
    --smd <nr>:{<client_lan_hostip>|<monitored_host>} \
    --sapversion 7.1 [--os <spec>]

ff_sid_adm.pl --op del --pool <pool_name> --sid <SAP_system_id>

Options

--op add
    Determines the add operation.

--op del
    Determines the del operation.

--pool <pool_name>
    Specifies the FlexFrame pool to which the operation should be applied.

--sid <SAP_system_id>
    Specifies the SID being used.

--sapversion {7.1}
    Specifies the SAP basis version being used.

--group <groupname1>:<gidnumber1>,<groupname2>:<gidnumber2>,...
--user <username1>:<uidnumber1>,<username2>:<uidnumber2>,...
    user and group enable specially selected user numbers and group numbers to be assigned to SAP users and SAP groups respectively. In this case a check is made to see whether the user or group has already been defined for the DB system involved. A user or group is created only if they do not already exist. For example, a group dba which already exists cannot be assigned a group number which deviates from the default value.

--smd <SYSNR>:{<client_lan_hostip>|<monitored_host>}
    Specifies the name of the host which should be monitored. The name of the host depends on the SAP service type:
    app   app<nr><sid>
    j     j<nr><sid>
    ci    ci<sid>
    jc    jc<sid> (before SAP 7.1), j<nr><sid> (SAP 7.1 and higher)

--sysnr <SYSNR>
    Removes a specific SAP instance instead of the entire system (SID).

--os <instance_type>:<os>,<instance_type>:<os>,...
    Specifies the OS type for the given SID and/or its instances.
    instance_type ::= {default|smd}
    os ::= {SLES-10.x86_64|SLES-11.x86_64}
    In combination with the --op add option, default:<os> sets the operating system of the SID itself and of all its instances to which no own operating system is assigned. In combination with the --op mod option, default:<os> sets the operating system of the SID only. The specifications of the instances are not changed.
    The consistency of the given instance type/OS combinations is not checked by FlexFrame. For a list of allowed combinations see the SAP note.

Examples

Adding a SID with SMD services:

control1:~ # ff_sid_adm.pl --op add --sid SMD --pool Otto --sapversion 7.1 \
    --smd 01:j12abc --smd 02:cixyz

Adding an instance to an existing SMD SID:

control1:~ # ff_sid_adm.pl --op add --sid SMD --pool Otto --sapversion 7.1 \
    --smd 04:app12xyz

Removing a specific instance of an SMD SID:

control1:~ # ff_sid_adm.pl --op rem --sid SMD --pool Otto --sysnr 02

Removing a SID with SMD services:

control1:~ # ff_sid_adm.pl --op rem --sid SMD --pool Otto

TREX (Search and Classification Service)

Synopsis

ff_sid_adm.pl --op add --pool <pool_name> --sid <SAP_system_id> \
    --trx <nr>:<client_lan_ip>:<server_lan_ip> \
    --sapversion 7.1 [--os <spec>]

ff_sid_adm.pl --op del --pool <pool_name> --sid <SAP_system_id>

Options

--op add
    Determines the add operation.

--op del
    Determines the del operation.

--op mod

    Determines the mod operation. This option is only used to modify OS specifications and to exchange IP addresses of specific SID instances.

--pool <pool_name>
    Specifies the FlexFrame pool to which the operation should be applied.

--sid <SAP_system_id>
    Specifies the SID being used.

--sapversion {7.1}
    Specifies the SAP basis version being used.

--group <groupname1>:<gidnumber1>,<groupname2>:<gidnumber2>,...
--user <username1>:<uidnumber1>,<username2>:<uidnumber2>,...
    user and group enable specially selected user numbers and group numbers to be assigned to SAP users and SAP groups respectively. In this case a check is made to see whether the user or group has already been defined for the DB system involved. A user or group is created only if they do not already exist. For example, a group dba which already exists cannot be assigned a group number which deviates from the default value.

--trx <SYSNR>:<client_lan_hostip>:<server_lan_hostip>
    The logical host names (generated automatically: trx<nr><sid>, trx<nr><sid>-se) as well as the IP addresses for those host names. Use an asterisk if you want them to be chosen automatically. All the entries need to be specified in a colon-separated format. You can omit the network part of the IP and specify only the last tuple of the IP.

--sysnr <SYSNR>
    Removes a specific SAP instance instead of the entire system (SID).

--os <instance_type>:<os>,<instance_type>:<os>,...
    Specifies the OS type for the given SID and/or its instances.
    instance_type ::= {default|trx}
    os ::= {SLES-10.x86_64|SLES-11.x86_64}
    In combination with the --op add option, default:<os> sets the operating system of the SID itself and of all its instances to which no own operating system is assigned. In combination with the --op mod option, default:<os> sets the operating system of the SID only. The specifications of the instances are not changed.

The consistency of the given instance type/OS combinations is not checked by FlexFrame. For a list of allowed combinations see the corresponding SAP Note.

--ips <old_ip>:<new_ip>,<old_ip>:<new_ip>,...

Allows you to exchange the IP addresses of specific SID instances. This option is only used with -op mod. You have to specify the full IP you want to exchange. ff_sid_adm.pl searches for the specific instances within a SID and exchanges all corresponding entries in LDAP concerned by that request. Please be aware that you are making critical changes to your configuration, so we strongly recommend taking a backup of the LDAP database before changing IP addresses.

Examples

Adding a SID with TRX services:

control1:~ # ff_sid_adm.pl -op add -sid TR1 -pool Otto --sapversion 7.1 \
    --trx 01:201:201 --trx 02:200:200

Adding an instance of type smd to an existing TRX SID:

control1:~ # ff_sid_adm.pl -op add -sid TR1 -pool Otto --sapversion 7.1 \
    --smd 04:202:202

Removing a specific instance of a TRX SID:

control1:~ # ff_sid_adm.pl -op rem -sid tr1 -pool Otto --sysnr 02

Removing a SID with TRX services:

control1:~ # ff_sid_adm.pl -op rem -sid tr1 -pool Otto
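A minimal sketch of the IP exchange described above, assuming one of TR1's addresses, 192.168.20.115, is to be replaced by 192.168.20.125 (both addresses are made up for illustration):

control1:~ # ff_sid_adm.pl -op mod -sid TR1 -pool Otto \
    --ips 192.168.20.115:192.168.20.125

Remember to back up the LDAP database before running such a modification.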

10.6 Cloning a SAP SID into a Different Pool

Script: ff_clone_sid.pl

The script ff_clone_sid.pl allows users to clone (basically copy) an entire SID from one pool to another. It needs to be clearly understood that only the FlexFrame-specific administrational data in the LDAP server as well as the required information for the operating system's naming services are copied and/or added to the LDAP database. Any additional work (like copying SAP/database binaries and database content) is not performed by this tool and needs to be carried out separately.

To get a real clone, the script expects that there are no conflicts in user IDs, group IDs, services and other properties between the origin and the target pool. You can check this by using the --dryrun option (see the sketch at the end of this section). If there are conflicts and you are sure they do not matter, you can clone the SID anyway; the script will then select values other than those defined in the origin pool. Otherwise the cloning could cause inconsistencies in your LDAP database. Be careful!

Synopsis

ff_clone_sid.pl --sid <SID_name> --srcpool <pool_name> --trgtpool <pool_name>

Changing User and Group IDs after Cloning

The ff_change_id.pl script is no longer available. To change the UID of an OS user you can use ff_user_adm.pl. To change the GID of an OS group you can use ff_group_adm.pl.
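As promised above, a minimal sketch of the dry-run check for ff_clone_sid.pl, assuming SID C11 is to be cloned from pool1 into pool2 (all names are illustrative):

control1:~ # ff_clone_sid.pl --sid C11 --srcpool pool1 --trgtpool pool2 --dryrun

If the dry run reports no conflicts, the same command without --dryrun performs the actual clone.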

10.7 Multiple NAS Systems and Multiple Volumes

During the installation process, FlexFrame assumes that there is one NAS system with sapdata and saplog volumes. Larger installations may require more than one NAS system or multiple volumes on the same NAS system. It is possible to distribute SAP databases across multiple NAS systems and multiple volumes under the following conditions:

1. All NAS systems were entered in the FlexFrame Management Tool prior to installation of the FlexFrame landscape.

2. The software for SAP and the database is located in the centralized volume volff of the first NAS system or alternatively (if wanted) in a pool-specific volff volume on an arbitrary NAS system (defined within the FlexFrame environment).

3. For each SID you can assign a sapdata volume for the database's data files. This sapdata volume can be shared with other SIDs or used solely for this SID. Sharing with other SIDs can have the same effect as pool-specific volumes for sapdata.

4. For each SID you can assign a saplog volume for the database's online redo log files. This saplog volume can be shared with other SIDs or used solely for this SID. Sharing with other SIDs can have the same effect as pool-specific volumes for saplog.

NetApp Filer

The volumes must be created manually on the Filer, e.g.:

filer2> vol create datac11 10
filer2> vol create logc11 4

Here, datac11 and logc11 are the names of the new volumes, and 10 and 4 are the numbers of disks to be used. We recommend using the volume names sapdata and saplog (if on a different Filer) or data<sid> and log<sid> for SID-specific volumes on the same Filer. You may use FlexVols (ONTAP 7G) or regular volumes.

The following options must be set for the volumes:

filer2> vol options datac11 nosnap on
filer2> vol options datac11 nosnapdir on
filer2> vol options datac11 minra off
filer2> vol options datac11 no_atime_update on
filer2> vol options logc11 nosnap on

filer2> vol options logc11 nosnapdir on
filer2> vol options logc11 minra off
filer2> vol options logc11 no_atime_update on

Next, create qtrees for each FlexFrame pool which will store data and logs in those volumes:

filer2> qtree create /vol/datac11/pool1
filer2> qtree create /vol/logc11/pool1

If you use more than the first Filer, make sure it is reachable, e.g.:

control1:~ # ping filer2-st
PING filer2-st: 56(84) bytes of data.
64 bytes from filer2-st: icmp_seq=1 ttl=255 time=0.117 ms
64 bytes from filer2-st: icmp_seq=2 ttl=255 time=0.107 ms
64 bytes from filer2-st: icmp_seq=3 ttl=255 time=0.103 ms

Now the LDAP database has to be told that this SID is not using the default (first) Filer and sapdata/saplog volumes:

control1:~ # ff_sid_mnt_adm.pl -op add -pool pool1 -sid C11 \
    --sapdata filer2:/vol/datac11/pool1/c11 --saplog filer2:/vol/logc11/pool1/c11

Now the volumes need to be mounted on the Control Nodes. To do so, add the following lines to each Control Node's /etc/fstab:

filer2-st:/vol/vol0 /FlexFrame/filer2-st/vol0 nfs nfsvers=3,rw,bg,udp,soft,nolock,wsize=32768,rsize=32768
filer2-st:/vol/datac11/pool1 /FlexFrame/filer2-st/pool1/dataC11 nfs nfsvers=3,rw,bg,udp,soft,nolock,wsize=32768,rsize=32768
filer2-st:/vol/logc11/pool1 /FlexFrame/filer2-st/pool1/logC11 nfs nfsvers=3,rw,bg,udp,soft,nolock,wsize=32768,rsize=32768

Repeat the sapdata and saplog lines for each pool if there is more than one pool for those volumes. Use the volume name for the last directory in the mount point.

Now we need the mount points:

control1:~ # mkdir -p /FlexFrame/filer2-st/vol0

control1:~ # mkdir -p /FlexFrame/filer2-st/pool1/dataC11
control1:~ # mkdir -p /FlexFrame/filer2-st/pool1/logC11

(Again, sapdata and saplog for each pool.)

Before mounting the volumes we need to tell the Filer to export them appropriately:

control1:~ # mount /FlexFrame/filer2-st/vol0
control1:~ # ff_exports.pl --op add --nas filer2-st --path /vol/datac11/pool1 \
    --option "-sec=sys,rw=<network>/24,anon=0"
control1:~ # ff_exports.pl --op add --nas filer2-st --path /vol/logc11/pool1 \
    --option "-sec=sys,rw=<network>/24,anon=0"

The network <network>/24 must match the Storage LAN segment of pool pool1.

If you edit the Filer's exports file manually instead of using ff_exports.pl, save the file and make the Filer re-read it. This command is only necessary if you do not use ff_exports.pl to manipulate the Filer's exports:

control1:~ # rsh filer2-st exportfs -a

Now we can mount the volumes:

control1:~ # mount -a

Before you can install SAP and the database on those volumes, some folders for the SID in question have to be created in advance. To do so, run the following command for each SID (replace "pool1" with your pool name and "C11" with your SID):

control1:~ # ff_setup_sid_folder.sh -p pool1 -s C11

Now you can continue with the SAP installation.

EMC Celerra

Consider a Celerra with at least two active data movers: what is to be done if volff is on one data mover and the SAP volumes should be on the other?

Define the volumes as described in the Installation Guide, but export the corresponding file systems on data mover 2 (= server_3), e.g.:

server_export server_3 ... /datac11
server_export server_3 ... /logc11

For the data mover which is to access the SAP volumes, you have to define the storage VLANs of all FlexFrame pools that should have access to these volumes. In the example it is pool1 and its VLAN ID.

Now the LDAP database has to be told that this SID is not using the default NAS and sapdata/saplog volumes:

control1:~ # ff_sid_mnt_adm.pl -op add -pool pool1 -sid C11 \
    --sapdata filer2:/vol/datac11/pool1/c11 --saplog filer2:/vol/logc11/pool1/c11

Now the volumes need to be mounted on the Control Nodes. To do so, add the following lines to each Control Node's /etc/fstab:

<Datamover2-st>:/vol/dataC11/pool1 /FlexFrame/<Datamover2-st>/pool1/dataC11 nfs nfsvers=3,rw,bg,udp,soft,nolock,wsize=32768,rsize=32768
<Datamover2-st>:/vol/logC11/pool1 /FlexFrame/<Datamover2-st>/pool1/logC11 nfs nfsvers=3,rw,bg,udp,soft,nolock,wsize=32768,rsize=32768

Repeat the sapdata and saplog lines for each pool if there is more than one pool for those volumes. Use the volume name for the last directory in the mount point.

Now we need the mount points:

control1:~ # mkdir -p /FlexFrame/<Datamover2-st>/pool1/dataC11
control1:~ # mkdir -p /FlexFrame/<Datamover2-st>/pool1/logC11

(sapdata and saplog for each pool again)

Now we can mount the volumes:

control1:~ # mount -a

Before you can install SAP and the database on those volumes, some folders for the SID in question have to be created in advance. To do so, run the following command for each SID (replace "pool1" with your pool name and "C11" with your SID):

control1:~ # ff_setup_sid_folder.sh -p pool1 -s C11

If you have several Celerras, the FlexFrame administrator has to configure the time service and the SSH keys for each of them (see Installation Guide).

Now you can continue with the SAP installation.

10.8 Upgrading a SAP System

Service Port

If you plan to upgrade your SAP release, you have to add a special service port (for the so-called shadow instance) to LDAP. SAP shadow service ports are required during an SAP release upgrade. To list, add or remove these service ports in the LDAP database, you can use this tool.

Synopsis

ff_sap_shadowport.sh [-d] -l -p <pool_name> [-s <sid>] [-o <port_no>]
ff_sap_shadowport.sh [-d] {-a|-r} -p <pool_name> -s <sid> [-o <port_no>]

Options

-d

Writes debugging information.

-l

Lists all SAP shadow service ports of the pool provided with the -p option. If the option -s is used, only the port of the specified SID is displayed.

-a

Adds an entry.

-r

Removes an entry.

-p <pool_name>

Specifies the name of the pool (e.g. pool1).

-s <sid>

Specifies the SAP System ID (SID) by a 3 character string (e.g. C11).

-o <port_no>

Specifies the service port number. The default is 3694 (optional).
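As a sketch (pool and SID are illustrative), adding a shadow service port with the default port number and then verifying the entry could look like this:

control1:~ # ff_sap_shadowport.sh -a -p pool1 -s C11
control1:~ # ff_sap_shadowport.sh -l -p pool1 -s C11

After the upgrade is finished, the same entry can be removed again with -r.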

FA Agents

Please make sure that the FA Application Agents are stopped on the hosts while you are installing, updating or removing any SAP or database software.

Stop the FA Agent and check the status:

control1:~ # /etc/init.d/myamc.fa_appagent stop
control1:~ # /etc/init.d/myamc.fa_appagent status

There should be no running processes listed.

Support SAP Upgrade

This command is used to support the migration from a previous SAP basis version to a higher one. It can also be useful if you are migrating from an older release of FlexFrame to a newer one. It allows you to change the SID-specific settings in the LDAP database, such as the version of the SAP or database installation, and it also tries to introduce the LDAP data required by newer SAP versions. The script tries to cover all requirements concerning the version of the SAP or database service. Sometimes users want to migrate from one database type to another; the script cannot execute all actions required by such a migration, so manual actions may be required.

Synopsis

ff_sap_upgrade.pl --pool <pool_name> --sid {<sid>|\*}
    --sapvo <old_version> --sapvn <new_version>
    --dbvo <old_version> --dbvn <new_version>
    [--type <service_type>] [--version] [--debug] [--dryrun]

ff_sap_upgrade.pl [--help]

Options

--pool <pool_name>

Name of the pool the SID should belong to.

--sid <sid>

SAP system identifier. Either you specify a specific SID or you use the asterisk. The asterisk means that all SIDs of the specified pool which match sapvo and dbvo are modified.

--sapvo <old_version>

Current value of the SAP version stored in LDAP (without the leading "SAP-"). <old_version> can be one of 4.6, 6.20, 6.40, 7.0, 7.1 or 7.2. The type BOBJ allows only one specific version.

--sapvn <new_version>

SAP version the SID is migrated to. <new_version> is 7.0, 7.1, 7.2 or 7.3. Type BOBJ allows only one specific version.

--dbvo <old_version>

Version of the previously used database. <old_version> can be one of ORACLE9, ORACLE10, SAPDB73, SAPDB74, MAXDB75, MAXDB76, MAXDB77, DB2V91 or DB2V95.

--dbvn <new_version>

Version of the database the SID is migrated to. <new_version> can be one of ORACLE9, ORACLE10, ORACLE11, MAXDB76, MAXDB77, MAXDB78, DB2V91, DB2V95 or DB2V97.

--type <service>

Specifies the SAP service. The default value is SAP; other values are BOBJ, CMS, MDM, SMD and TREX. All services except SAP are new, which is why the --type option was introduced.

--version

Shows the version of the command.

--debug

Sets the debug option to increase logging.

--dryrun

Just shows which changes would be made in LDAP.

--help

Displays usage.

Tips and Tricks

It is not allowed to specify 7.1 with option --sapvo. But in some installations it can be necessary to make system changes (e.g. automount entries). You can do that with a little workaround:

ff_sid_adm.pl -op add -pool pool1 -sid ttt --sapversion <version> \
    --db ORACLE9:dbttt-se:\* --sap ci:90:\*:\*
ff_sap_upgrade.pl --pool pool1 --sid ttt --sapvo 4.6 --sapvn <new_version> \
    --dbvo ORACLE9 --dbvn ORACLE10
ff_sid_adm.pl -op del -pool pool1 -sid ttt
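To see in advance what ff_sap_upgrade.pl would change, the --dryrun option can be combined with the version options. The following sketch assumes an upgrade from SAP 7.0 with Oracle 10 to SAP 7.1 with Oracle 11 (SID, pool and versions are illustrative):

control1:~ # ff_sap_upgrade.pl --pool pool1 --sid C11 \
    --sapvo 7.0 --sapvn 7.1 --dbvo ORACLE10 --dbvn ORACLE11 --dryrun

Once the reported changes look plausible, the same command without --dryrun applies them to LDAP.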

10.9 SAP Kernel Updates and Patches

For an SAP kernel update (binary patches), log on to the Application Node with the CI of the SAP system that is to be updated.

Please make sure that the FA Application Agents are stopped on the host while you are updating the SAP kernel (see also section FA Agents above).

The relevant SAP OSS Note describes where to find kernel patches and how to handle the installation. For the installation of SAP ABAP patches or similar, please refer to the SAP documentation. There are no FlexFrame-specific changes.

Unloading volff

Status Quo/Solution

All SID-specific data are located on the same volume. When diagnostic data is requested, it can happen that the volume space is not sufficient, or that it is hard to find the specific data you need, because all SIDs write to the same location. With FlexFrame for SAP, Release 5.0A, it is possible to separate some directories with SID-specific data to a pool-specific volff volume. The SID-specific data are stored in:

- oracle
- home_sap/<sid>adm or home_sap/sqd<sid>
- sapmnt
- usr_sap
- sapdb

To mitigate this situation we offer:

- a way to move large SID-specific data to its own storage location
- a monitor function which logs the usage of volff
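The monitor function mentioned above logs volff usage automatically. For a quick manual check of the fill level, a plain df against the volff mount point on a Control Node can serve as a sketch (the path below is an assumption and depends on your installation):

control1:~ # df -h /FlexFrame/volFF

If the volume is running full, consider relocating SID-specific data as described next.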

ff_relocate_sid_data.pl

Sometimes you are required to move SID-specific data from volff to a separate volume. ff_relocate_sid_data.pl is used to change the automount information in LDAP so that a separate volume is used for data access.

If you take a snapshot of a configuration and want to use it for an installation on another system, the changes made to the LDAP automount entries with ff_relocate_sid_data.pl are lost. You must repeat the commands you executed in the original installation to recreate the LDAP data for the relocation.

The command does not cover the tasks of configuring a volume and making the corresponding changes in the system files needed (e.g. the exports file).

Synopsis

ff_relocate_sid_data.pl -op add --pool <pool_name> --sid <sid>
    --name <type>:<nas_system>:<volume_path>/<sid>... [--force] [--dryrun]

ff_relocate_sid_data.pl -op del --pool <pool_name> --sid <sid>
    --name <type>... [--dryrun]

ff_relocate_sid_data.pl -op list --pool <pool_name> --sid <sid>

ff_relocate_sid_data.pl -help

Options

--op add

Determines the add operation.

--op del

Determines the del operation.

--pool <pool_name>

Name of the pool the SID belongs to.

--sid <sid>

SAP system identifier.

--name <type>:<nas_system>:<volume_path>/<sid>

Type, name of the NAS system and path to the new storage. This option can be repeated several times within one call; each type can only be used once. Moving data of SIDs from different pools requires a separate command for each SID and pool whose data is moved. The subdirectory on the new volume has to be created manually.

For <type> you can use usr_sap, sapmnt, oracle, sapdb, db2, <sid>adm, sqd<sid>, db2<sid>, sap<sid>, sap<sid>db or oraarch.

oraarch is a special case because it is part of /oracle (a link to /saplog/oraarch/<sid>). You have to take this into account if you are moving the data from the original storage to the new location. A workaround is provided by /FlexFrame/scripts/sapdb: to mount the relocated directory in a shell, use sapdb <SID> mount, and to unmount the relocated data, use sapdb <SID> umount.

--force

The process continues even if errors occur. If --force is not set, the command rolls back the changes made in LDAP up to the point where the error occurred.

--dryrun

Displays which changes would be made in LDAP.

--help

Displays usage.

Hints

Moving sapmnt is a special case. If new SID instances are introduced by ff_sid_adm.pl, the mount points SID and SID_exe are already created. ff_relocate_sid_data.pl renames these two nodes and generates new entries. If you delete sapmnt, the previous entries are restored.

The installation of a Celerra must be done by EMC customer support. Please refer to the Fujitsu Solution Facts document "Guidelines for the Installation of an EMC-Celerra within FlexFrame", which can be downloaded from the public "FlexFrame for SAP Softbooks" site (select "FlexFrame Further Public Documents").

To generate a new volume (on NetApp) use:

rsh <filer_name>-st vol create <volume_name> aggr0 5g

e.g.

rsh jer1na-st vol create z02_vol aggr0 5g

The next step is the insertion into the corresponding exports file by executing ff_exports.pl:

ff_exports.pl --op add --nas <filer> --path <volume_name> \
    --option "-sec=sys,rw=<cn1_ip>:<cn2_ip>:<server_lan>/24,anon=0"

e.g.

ff_exports.pl --op add --nas jer1na1-st --path /vol/z02_vol \
    --option "-sec=sys,rw=<cn1_ip>:<cn2_ip>:<server_lan>/24,anon=0"

If there is already an exports entry available for this volume, read the current options using ff_exports.pl --op list and extend the options with :<server_lan>/24 if necessary.
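Putting the pieces together, relocating the usr_sap data of a SID onto the newly created and exported volume could look like this sketch (the SID Z02 and the path are illustrative, reusing the volume from the example above); --dryrun shows the LDAP changes first:

control1:~ # ff_relocate_sid_data.pl -op add --pool pool1 --sid Z02 \
    --name usr_sap:jer1na1:/vol/z02_vol/Z02 --dryrun

Remember that the subdirectory on the new volume (here Z02) has to be created manually before the relocation.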

LDAP

A new entry in LDAP looks like this: (screenshot of the resulting LDAP entry not reproduced here)
