Establishment of National Agricultural Bioinformatics Grid in ICAR


I.A.S.R.I/T.B. 03/2014

Establishment of National Agricultural Bioinformatics Grid in ICAR

Cluster Architecture Design Document

Centre for Agricultural Bioinformatics
Indian Agricultural Statistics Research Institute
Library Avenue, Pusa, New Delhi, India
2014

CLUSTER ARCHITECTURE DESIGN DOCUMENT

IASRI, New Delhi: Anil Rai, K. K. Chaturvedi, S. B. Lal, Anu Sharma
C-DAC, Pune: Goldi Misra, Prasad Wadlakondwar, Ashish Ranjan, Gourov Choudhary


Preface

The Cluster Architecture Design Document (CADD) describes the design of the High Performance Computing (HPC) platform named Advanced Super Computing Hub for OMICS Knowledge in Agriculture (ASHOKA) at IASRI, New Delhi, under the sub-project Establishment of National Agricultural Bioinformatics Grid in ICAR (NABG) of the National Agricultural Innovation Project (NAIP), New Delhi, India. ASHOKA is a hybrid architecture of four types of clusters that caters to the varied needs of users at IASRI, New Delhi. A mini supercomputing facility has also been created at each of the domain institutions of the project. The document covers the server machines configured in setting up these clusters, the network components used to monitor and manage the data centre, and the high-speed connectivity between the HPC system and storage. It also covers the storage system and its configuration. The complete data centre is monitored remotely through ilo, HP CMU, HP Open View Manager and Network Node Manager-I software. The document gives an overview of the different components of the data centre (HPC clusters, network and storage), the connectivity between the different parts of the HPC cluster, the hardware used in the HPC systems, and the connectivity with the configured storage system. It also describes the monitoring systems provided by the vendors/suppliers of these components: broadly, HP Operation Manager and Network Node Manager-I for infrastructure monitoring, and the HP CMU utility for managing and monitoring the HPC clusters.

AUTHORS

Table of Contents

List of Figures
Abbreviations
1 Introduction
2 Architecture Design of HPC
2.1 High Performance Computing (HPC) System
2.1.1 256 Nodes Linux Based Cluster
2.1.2 16 Nodes Windows Based Cluster
2.1.3 16 Nodes GPGPU Based Linux Cluster
2.1.4 Linux Based SMP System
2.1.5 16 Nodes Linux Cluster at each Domain Site
2.2 Network Architecture of the System
2.2.1 InfiniBand high bandwidth network with low latency (Q-logic QDR InfiniBand switch)
2.2.2 Gigabit Ethernet Network
2.2.3 ilo Management Network on Ethernet
2.3 Storage Infrastructure
2.3.1 Home File System
2.3.2 Scratch File System
2.3.3 Archive Storage
2.4 System Monitoring
2.4.1 HP Operation Manager
2.4.2 HP Network Node Manager-I (NNMI) Software
2.4.3 Cluster Management Utility (CMU) Tools
3 Conclusion

List of Figures

Figure 1: Schematic Diagram of High Performance Computing (HPC) facility
Figure 2: Design of High Performance Computing facility at IASRI
Figure 3: HP ProLiant DL 380G7 Server
Figure 4: SL6500 chassis with 8 compute nodes
Figure 5: HP ProLiant DL980 G7 SMP system
Figure 6: Q-logic QDR switch
Figure 7: General view: Connectivity using IB switch
Figure 8: InfiniBand switch
Figure 9: Connectivity using HP Switch
Figure 10: X9300 gateway
Figure 11: HP P2000 G3 MSA
Figure 12: SAN switch
Figure 13: HP Operation Manager monitored components
Figure 14: Server nodes monitored by HP Operation Manager
Figure 15: Tools associated with HP Operation Manager
Figure 16: NNMI Software Interface
Figure 17: HP CMU welcome page
Figure 18: Global Cluster View
Figure 19: Table view with summary of all nodes
Figure 20: Instant view of Login nodes
Figure 21: Table view of Login nodes
Figure 22: Detailed view of the login node

Abbreviations

ASHOKA: Advanced Supercomputing Hub for Omics Knowledge in Agriculture
CLI: Command Line Interface
CMU: Cluster Management Utility
CPU: Central Processing Unit
DDR: Double Data Rate
FC: Fibre Channel
FLOPS: Floating Point Operations per Second
GDDR5: Graphics Double Data Rate
HA: High Availability
HDD: Hard Disk Drive
HPC: High Performance Computing
IASRI: Indian Agricultural Statistics Research Institute
IB: InfiniBand
IBTA: InfiniBand Trade Association
ICAR: Indian Council of Agricultural Research
ilo: Integrated Lights-Out
IML: Integrated Management Log
IP: Internet Protocol
IPMI: Intelligent Platform Management Interface
IRF: Intelligent Resilient Framework
MAD-BFD: Multi-Active Detection - Bidirectional Forwarding Detection

MPLS: Multiprotocol Label Switching
MSA: Modular Storage Array
NABG: National Agricultural Bioinformatics Grid in ICAR
NAIP: National Agriculture Innovation Project
NAS: Network Attached Storage
NFS: Network File System
NIC: Network Interface Card
NNMI: Network Node Manager-I
PFS: Parallel File System
QDR: Quad Data Rate
RAID: Redundant Array of Independent Disks
RHEL: Red Hat Enterprise Linux
RIBCL: Remote Insight Board Command Language
SAN: Storage Area Network
SFP: Small Form-factor Pluggable
SMP: Symmetric Multiprocessing
TF: Tera Flops
VLAN: Virtual Local Area Network
XML: Extensible Mark-up Language

1. Introduction

The Cluster Architecture Design Document provides a technical view of the new High Performance Computing (HPC) system installed at IASRI, New Delhi and its domain institutions. It explains the architecture of the HPC facility created under the NAIP Component-I sub-project entitled Establishment of National Agricultural Bioinformatics Grid in ICAR (NABG). The main components of the architecture are a combination of different types of HPC clusters, storage and networks. Clusters are collections of computers that are connected together; special sets of software are used to configure the HPC environment. This set-up has been named Advanced Supercomputing Hub for Omics Knowledge in Agriculture (ASHOKA). The importance of HPC is growing rapidly because more and more scientific and technical problems are being studied on huge data sets that also require very high computational power. HPC offers an environment for biologists, scientists, analysts, engineers and students to utilize the computing resources in making vital decisions and to speed up research and development by reducing execution time. The following types of configurations are used in setting up this facility:

A. TYPES OF HPC CLUSTER:
a. 256 Nodes Linux Based Cluster
b. 16 Nodes Windows Based Cluster
c. 16 Nodes GPGPU Based Linux Cluster
d. Linux based SMP system
e. 16 Nodes Linux Based Cluster at each of the five domains

B. TYPES OF NETWORK:
a. High bandwidth network with low latency (Q-logic QDR InfiniBand switch)
b. Gigabit network for cluster administration and management
c. ILO3 Management Network

C. TYPES OF STORAGE:
a. Parallel File System (PFS) for computational purposes
b. Network Attached Storage (NAS) for user Home Directories
c. Archival Storage for backup

The document covers the details of the configuration of servers, nodes, storage units etc. It also covers the storage system used and the networking connectivity.

The complete established data centre is monitored remotely through ilo, HP CMU, HP Open View Manager and Network Node Manager-I software. This document will be useful for anyone interested in understanding and managing the technical details of the HPC system.

2. Architecture Design of HPC

An architectural design is a pre-requisite for configuring any environment. There are three major components of this HPC facility at the Indian Agricultural Statistics Research Institute (IASRI), New Delhi, namely clusters, networks and storage. The architecture and inter-connection of the major components are shown in Figure 1, which depicts the 256-node Linux cluster, the 16-node Windows cluster, the 16-node Linux-based GPGPU cluster, and the application servers and SMP system, linked through the high-bandwidth IB network, the Gigabit network for cluster administration and management, and the IPMI management network to the Parallel File System, Network Attached Storage and Archival Storage.

Figure 1: Schematic Diagram of High Performance Computing (HPC) facility

The broad implementation diagram of ASHOKA is shown in Figure 2. It can be seen that the head, login and compute nodes are linked through the InfiniBand and Gigabit Ethernet networks. It also shows the connection of the storage with the HPC system and application servers through the InfiniBand and Gigabit Ethernet networks.

Figure 2: Implementation design of High Performance Computing (HPC) facility at IASRI, New Delhi

In addition, the clusters at each domain site are configured and linked to this central hub using MPLS connectivity. The architecture of the domain sites is similar to that of the main site.

2.1 High Performance Computing (HPC) System

A cluster is a group of computers connected together by a network and centrally coordinated by a special set of software; this integration and synchronization of multiple nodes lets their combined power be utilized as a single unit. In this case, the clusters are configured using the HP Cluster Management Utility (CMU). Three different types of clusters are configured in the ASHOKA hub at IASRI, New Delhi.

2.1.1 256 Nodes Linux Based Cluster

The 256-node Linux based cluster consists of three main components: (a) Head/Master node, (b) Login node and (c) Compute node, which are discussed in detail in the following sections.

a. Head/Master Node: The Head/Master node typically handles cluster administration functions such as compute node provisioning, image management, cluster monitoring, user management, job scheduling and compilation. It also provides important features to maintain redundancy and reliability; there are two Head nodes in active-passive mode. The HP ProLiant DL380 G7 Server is configured as the Head/Master node, as shown in Figure 3. The hardware configuration of the Head/Master node is as follows:

Server Name : HP ProLiant DL380-G7 Server
Type of Processor : Intel Xeon X GHz
Number of Processors : 2
Cores per Processor : 6
Total Memory (RAM) : 96 GB
Memory per Core : 8 GB
Hard Disk : 6 x 600 GB SAS
OS : RHEL 6.2 (Linux)

Figure 3: HP ProLiant DL 380G7 Server

The processor of the Head/Master node is a dual hex-core, with memory slots each holding an 8 GB module for 96 GB in total. Each node has six SAS-based HDDs of 600 GB capacity. The operating system used for the Head/Master node is RHEL 6.2 (Red Hat Enterprise Linux).

b. Login Node: The Login node typically provides services such as user login and the pre-processing and post-processing of user applications. Four Login nodes are configured to provide better and more reliable service; users can log in to any of the four to submit their jobs. The configuration of the Login nodes is the same as that of the Head/Master node.

c. Compute Node: The compute nodes perform all the computational work. There are 256 compute nodes, mounted in chassis: each SL6500 chassis can hold up to 8 compute nodes, as shown in Figure 4, and the chassis are placed in 42U racks.

Figure 4: SL6500 chassis with 8 compute nodes

The hardware configuration of each compute node is as follows:

Server Name : HP ProLiant SL390-G7 Server
Type of Processor : Intel Xeon X GHz
Number of Processors : 2
Cores per Processor : 6
Total Memory (RAM) : 96 GB
Memory per Core : 8 GB
Hard Disk : 300 GB SAS
OS : RHEL 6.2 (Linux)

The disk capacity of each node is only 300 GB because just the operating system and a few cluster-related software packages need to be installed on these nodes. The peak performance of the cluster can be calculated using the standard formula:

Cluster Performance = (Number of nodes) * (CPUs per node) * (cores per CPU) * (CPU speed in GHz) * (CPU instructions per cycle)
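As a worked example, here is a minimal sketch applying this formula to the 256-node cluster. The 3.06 GHz clock and 4 floating-point operations per cycle are assumptions (the exact Xeon model is not spelled out above); they are chosen to be consistent with the 37.6 TF estimate quoted in the next section.

```python
# Hedged sketch: peak-performance estimate using the formula above.
# The 3.06 GHz clock and 4 FLOPs/cycle are assumptions, not values
# stated in this document; they reproduce the quoted 37.6 TF figure.

def peak_tflops(nodes, cpus_per_node, cores_per_cpu, ghz, flops_per_cycle):
    """Theoretical peak in TF: nodes * CPUs * cores * GHz * FLOPs/cycle / 1000."""
    return nodes * cpus_per_node * cores_per_cpu * ghz * flops_per_cycle / 1000.0

print(f"{peak_tflops(256, 2, 6, 3.06, 4):.1f} TF")  # -> 37.6 TF
```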

The 256-node Linux based cluster thus consists of two head/master nodes in high availability (HA) mode, four login nodes and 256 compute nodes. This cluster delivers an estimated peak performance of 37.6 TF, whereas the Linpack benchmark performance achieved is TF.

2.1.2 16 Nodes Windows Based Cluster

This cluster consists of a Head/Master node and 16 Compute nodes, configured using Microsoft Windows 2008 HPC Edition. The hardware configuration of the Head/Master and Compute nodes is similar to the 256-node Linux based cluster. The Windows based cluster has an estimated peak performance of 2.3 TF, whereas the Linpack performance achieved is 1.9 TF.

2.1.3 16 Nodes GPGPU Based Linux Cluster

This is a 16-node GPGPU cluster running the RHEL 6 operating system. Each compute node holds two Nvidia Tesla M2090 cards. The NVIDIA Tesla M2090 Graphics Processing Unit (GPU) Computing Module is a PCI Express, double-wide, full-height (4.376 x 9.75 x 1.52 inches) computing module based on the NVIDIA Fermi GPU. The module comprises a computing sub-system with a GPU and high-speed memory, offering 6 GB of GDDR5 on-board memory. The hardware configuration of the Head and Compute nodes is the same as the 256-node Linux cluster, with two additional GPU cards per Compute node. The configuration of each GPU card is as follows:

GPU Card Model : Nvidia Tesla M2090
Number of processor cores : 512
Processor core clock : 1.3 GHz
Package size : 42.5 mm x 42.5 mm, 1981-pin ball grid array (BGA)
Memory clock : 1.85 GHz
Memory interface : 384-bit
Memory per card : 6 GB
Memory package : 24 pieces of 128M x 16 GDDR5 SDRAM, 136-pin BGA

This cluster has an estimated peak performance of TF, whereas the Linpack performance achieved is TF.
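The peak figures for the GPGPU cluster are left blank above. As a rough, hedged reconstruction from the card specifications, a Fermi-generation Tesla M2090 is commonly rated at about 665 GFLOPS double precision per card; neither that rating nor the CPU assumptions below come from this document.

```python
# Rough double-precision peak for the 16-node GPGPU cluster.
# ~665 GFLOPS DP per Tesla M2090 is a commonly quoted vendor rating,
# and 3.06 GHz / 4 FLOPs per cycle are the same CPU assumptions used
# earlier -- none of these values appear in this document.
M2090_DP_GFLOPS = 665.0
nodes, cards_per_node = 16, 2

gpu_tf = nodes * cards_per_node * M2090_DP_GFLOPS / 1000.0  # ~21.3 TF
cpu_tf = nodes * 2 * 6 * 3.06 * 4 / 1000.0                  # ~2.35 TF
print(f"GPU {gpu_tf:.1f} TF + CPU {cpu_tf:.1f} TF = {gpu_tf + cpu_tf:.1f} TF")
```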

2.1.4 Linux Based SMP System

Symmetric Multiprocessing (SMP) systems are tightly coupled multiprocessor systems with a pool of homogeneous processors running independently. Each processor can execute different programs and work on different data, while sharing common resources (memory, I/O devices, the interrupt system and so on) connected through a system bus. As the processor count increases, execution time decreases. The HP ProLiant DL980 G7 is used as the SMP system and is shown in Figure 5.

Figure 5: HP ProLiant DL980 G7 SMP system

The hardware and software specifications of this SMP system are as follows:

Server Name : HP ProLiant DL980 G7
Type of Processor : Intel Xeon E series, 2.13 GHz
Number of Processors : 8
Cores per Processor : 8
Total Memory (RAM) : 1.5 TB
Hard Disk : 396 GB
OS : RHEL 6.2

The system has 64 cores in total and 1.5 TB of RAM. The estimated peak performance of this unit is approximately 0.545 TF (= 8 processors * 8 cores * 4 instructions per cycle * 2.13 GHz).

2.1.5 16 Nodes Linux Cluster at each of the Five Domain Sites

Each of the five domain sites has a 16-node Linux based cluster consisting of one Head/Master node, one Login node and 16 Compute nodes. The hardware configuration of these Head, Login and Compute nodes is similar to the Head, Login and Compute nodes implemented at IASRI, New Delhi.

2.2 Network Architecture of the System

Three types of networks are used in ASHOKA: (i) the InfiniBand network, used to ensure the flow of large amounts of data across interconnected systems at low latency, (ii) Gigabit Ethernet, used for cluster operation and management, and (iii) a dedicated Ethernet network, used for remote management of all HPC systems.

2.2.1 InfiniBand high bandwidth network with low latency (Q-logic QDR InfiniBand switch)

InfiniBand (IB), a switched-fabric computer network communications link, is widely used in HPC and enterprise data centres. The InfiniBand architecture specification defines a connection between processor nodes and high performance I/O nodes such as storage devices. The IB switch is shown in Figure 6, and its features are as follows:

Figure 6: Q-logic QDR switch

- 864 ports of IB QDR (40 Gbps) performance with support for DDR and SDR
- Scales to 51.8 Tbps aggregate bandwidth
- True Scale architecture, with scalable, predictable low latency
- Multiple Virtual Lanes (VLs) per physical port
- Supports virtual fabric partitioning
- Fully-redundant system design
- Option to use Ultra High Density (UHD) leafs for maximum connectivity or Ultra High Performance (UHP) leafs for maximum performance
- Integrated chassis management capabilities for installation, configuration, and ongoing monitoring
- Optional InfiniBand Fabric Suite (IFS) management solution that provides expanded fabric views and fabric tools
- RoHS (Restriction of Use of Hazardous Substances) 6 compliant
- Minimal power and cooling requirements
- Complies with the InfiniBand Trade Association (IBTA) version 1.2 standard

Every master, compute and login node is connected to the IB switch on one side and to storage on the other, as shown in Figure 7.

Figure 7: General view of connectivity using IB switch

All nodes in the network are connected to each other through the IB switch. The nodes can exchange data with high bandwidth and high speed, which reduces computation time.
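To make the bandwidth gap concrete, the sketch below compares the ideal wire-speed time to move a dataset over QDR InfiniBand (40 Gbps signalling) and Gigabit Ethernet; real throughput is lower (QDR's 8b/10b encoding alone caps the usable data rate at 32 Gbps), so treat these as best-case figures.

```python
# Best-case (wire-speed) transfer times over the two interconnects.
# Ignores protocol overhead and 8b/10b encoding, so real times are longer.

def transfer_seconds(dataset_tb, link_gbps):
    """Seconds to move dataset_tb terabytes over a link_gbps link."""
    return dataset_tb * 1e12 * 8 / (link_gbps * 1e9)

for name, gbps in [("QDR InfiniBand", 40), ("Gigabit Ethernet", 1)]:
    print(f"1 TB over {name}: {transfer_seconds(1, gbps):,.0f} s")
# QDR InfiniBand: 200 s; Gigabit Ethernet: 8,000 s
```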

Each domain site is provided with a Qlogic QDR InfiniBand switch. This switch provides connectivity between the master, login and compute nodes and the storage systems. Figure 8 shows the switch used in setting up the cluster at each of the five domain sites.

Figure 8: InfiniBand switch

2.2.2 Gigabit Ethernet Network

The main purpose of the Ethernet network in the cluster is to provide services such as cluster management, cluster monitoring and compute node deployment. It is always recommended to keep the application data network separate; this can be seen in Figure 9. The cluster uses two switches in HA mode, which provide connectivity between the IASRI internal network, the HPC network and the external network (MPLS connectivity).

Figure 9: Connectivity using HP Switch

The HPC network connectivity is provided by HP switches, and a separate VLAN has been created for the HPC Ethernet LAN connectivity. Two HP 12518 series switches are installed in the data centre and are used to provide HA functionality. Following best practices, the technical configurations below were implemented.

I. IRF between the chassis:

Intelligent Resilient Framework (IRF) is an HP switch-platform virtualization technology that dramatically simplifies the design and operation of data centre and local Ethernet networks. IRF overcomes the limitations of traditional STP (Spanning Tree Protocol) based and legacy designs by delivering new levels of network performance and resiliency. Here, IRF is configured to make the two physical switches work as a single logical switch.

II. Ports used to build IRF:

Switch-1: Ten-GigabitEthernet 1/2/0/1 and Ten-GigabitEthernet 1/2/0/2
Switch-2: Ten-GigabitEthernet 2/2/0/1 and Ten-GigabitEthernet 2/2/0/2

III. VLAN configuration: As per the requirement, the following VLANs are created:

a) VLAN1 provides access to all the servers for internal users; the X.X/16 network IP range is assigned to VLAN1.
b) VLAN2 provides access to the servers for external users through the internet; the X.X/24 network IP range is assigned to VLAN2.
c) VLAN 500 carries the heartbeat between the chassis, known as MAD BFD.

VLAN1 users cannot access VLAN2 users and vice versa because inter-VLAN routing is not enabled.

IV. MAD BFD configuration: MAD (Multi-Active Detection) protects against an IRF link failure between the switches leaving both configured as master. In that case, MAD shuts down one of the switches according to role selection: the switch with the higher priority becomes the master, and the local interfaces of switch 2 are shut down. The following ports are used:

Switch-1: Ten-GigabitEthernet 1/9/0/1
Switch-2: Ten-GigabitEthernet 2/9/0/1

Two Gigabit Ethernet switches have also been configured at each of the domain sites, likewise in HA mode.

The function of these switches is to provide services like cluster management, cluster monitoring and compute node deployment. Everything is connected in a similar way to the 256-node Linux based cluster.

2.2.3 ilo Management Network on Ethernet

Integrated Lights-Out, or ilo, is an embedded server management technology. ilo serves as an out-of-band management channel, i.e. a dedicated management channel for device maintenance. A dedicated switch carries the ilo management traffic of all the Head/Master, Login and Compute nodes and the storage devices. ilo makes it possible to perform activities on any of these nodes from a remote location. The ilo card has a separate network connection (and its own IP address) to which one can connect via HTTPS. Operations that can be performed using ilo include:

- Reset the server (in case the server no longer responds via the normal network card)
- Power up the server (possible from a remote location, even if the server is shut down)
- Remote console (in some cases an 'Advanced' license may be required for some of the utilities to work)
- Mount a remote physical CD/DVD drive or image
- Access the server's IML (Integrated Management Log)
- Manipulate ilo remotely through the XML-based Remote Insight Board Command Language (RIBCL)
- Full CLI support through the RS-232 port (shared with the system), though the inability to enter function keys prevents certain operations
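Because ilo accepts RIBCL over HTTPS, such operations can be scripted. The sketch below is a minimal, hedged example of querying host power status; the hostname and credentials are placeholders, and the /ribcl endpoint is the documented ilo scripting interface rather than anything specific to this installation.

```python
# Minimal sketch: query host power status via RIBCL over HTTPS.
# Hostname and credentials are placeholders, not values from this document.
import requests

RIBCL_POWER_STATUS = """<RIBCL VERSION="2.0">
  <LOGIN USER_LOGIN="admin" PASSWORD="changeme">
    <SERVER_INFO MODE="read">
      <GET_HOST_POWER_STATUS/>
    </SERVER_INFO>
  </LOGIN>
</RIBCL>"""

resp = requests.post("https://ilo-node001.example/ribcl",  # placeholder host
                     data=RIBCL_POWER_STATUS,
                     verify=False,  # ilo commonly ships a self-signed certificate
                     timeout=30)
print(resp.text)  # XML reply includes HOST_POWER="ON" or "OFF"
```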

2.3 Storage Infrastructure

HP IBRIX Fusion is a scalable parallel file system combined with an integrated logical volume manager and a management interface. The IBRIX storage solution implemented in this infrastructure provides space for user Home Directories, the Parallel File System (PFS) required by parallel applications, and long-term archival storage for backup.

Network Attached Storage (NAS): NAS is a storage unit connected to a network that provides file-based data storage services to other devices on the network. NAS uses standard file protocols such as the Common Internet File System (CIFS) and Network File System (NFS) to allow Microsoft Windows, Linux and UNIX clients to access files, file systems and databases over the IP network.

Storage Area Network (SAN): A Storage Area Network (SAN) is a secure high-speed data transfer network that provides access to consolidated block-level storage. A SAN makes a network of storage devices accessible to multiple servers.

The storage solution consists mainly of three components:

a) HP storage gateway X9300
b) HP SAN SAS storage P2000 G3
c) HP SAN switch

a) HP storage gateway X9300: This is the front end of the storage system, called the storage gateway. The function of the X9300 gateway is to manage and monitor the storage in the Modular Storage Arrays (MSA), and to serve the storage space from the MSAs to other machines.

Figure 10: X9300 gateway

The X9300 gateways are clubbed together into a cluster to provide the High Availability (HA) feature. Figure 10 shows the X9300 gateway system used in the storage system at IASRI, New Delhi. The specifications of the storage gateway are as follows:

Model : X9300
CPU Quantity : 2 Intel Xeon Quad Core
CPU Speed : 2400 MHz
Memory : 48 GB
NIC : 4 + 1 ilo LAN
Storage Controller : E200i, RAID 1
InfiniBand Interface : QLogic 4X QDR IB Dual Port
FC Card : 1 with dual port
HDD Internal : 2 x 300 GB SAS drives
Total Quantity : 34 Nos.
OS : StoreAll OS 6.3

b) HP SAN SAS storage P2000 G3: The P2000 G3 is a Modular Storage Array (MSA). This is the main component that actually holds the data. Several P2000 G3 MSAs can be connected together to meet the required amount of storage. Figure 11 shows a typical P2000 G3 MSA array system.

Figure 11: HP P2000 G3 MSA

The specifications are as follows:

Model : P2000 G3
RAID Level : RAID 6
HDD Internal : 49 x 600 GB 10K RPM SAS drives
HDD Internal for Archive : 49 x 900 GB 10K RPM SAS drives
Firmware : TS240P003
Total MSA Units : 36 Nos.

c) HP SAN switch:

The 8/80 SAN Switch features a non-blocking architecture with as many as 80 ports concurrently active at 8 Gbps full duplex, providing an aggregate bandwidth of 1280 Gbps. Inter-Switch Link trunking can supply up to 64 Gbps of balanced data throughput to reduce congestion and increase bandwidth. The SAN switch is shown in Figure 12.

Figure 12: SAN switch

The specifications of the SAN switch are as follows:

Model : HP StorageWorks 8/80
Serial No : CZC242XXBX, CZC247XZFF, CZC242XXBU, CZC247XZFE
Total FC Ports : 80
Total SFPs : 80
Fabric OS Rev. : V7.0.2c
License : Web, Zoning
Quantity : 4

Different types of file system are created for storing users' data, running parallel jobs and archiving important data. There are three types of storage: (i) Network Attached Storage (NAS), (ii) Parallel File System (PFS) and (iii) Archival Storage.

2.3.1 Home File System

Network Attached Storage (NAS) is used to store the users' Home Directories in a network-shared file system. The hardware used in setting up the scratch and home partitions is as follows:

Model : P2000 G3
RAID Level : RAID 6
HDD Internal : 49 x 600 GB 10K RPM SAS drives
Firmware : TS240P003
Total MSA Units : 28 Nos.

2.3.2 Scratch File System

A Parallel File System (PFS) separates data from metadata and supports multiple storage servers working together in parallel to create a single name space, increasing storage throughput and capacity. Compute nodes access the PFS architecture, which can read and write data from multiple storage servers and devices in parallel, greatly improving performance over a traditional NFS solution. Separating the metadata function from the data path onto dedicated servers, and using faster spindles to match the metadata I/O workload, increases file system performance and scalability. Raw space of approximately 823 TB has been used to create the home and scratch partitions. The effective storage is 250 TB in PFS for the scratch directory and 250 TB in NAS for user Home Directories.

2.3.3 Archive Storage

The archive file system is for the long-term storage of many large files, kept on a tape drive or disk-based file system; in this case, NAS has been used for archiving. It would be possible to keep this data directly accessible online in the HOME directories, but that is too expensive: home storage is much faster than archive storage and is reserved for very large active datasets, so older data is moved to the cheaper/slower archive storage. Raw space of approximately 352 TB has been used in setting up the Archive Storage, giving an effective archival capacity of 200 TB. The hardware used in setting up the archive partition is as follows:

Model : P2000 G3
RAID Level : RAID 6
HDD Internal : 49 x 900 GB 10K RPM SAS drives
Firmware : TS240P003
Total MSA Units : 08 Nos.

Domain sites: The storage solution at each of the domain sites is the same as at IASRI, New Delhi, except for the capacity. Raw PFS space of approximately 129 TB has been used to create the Home and Scratch partitions; the effective storage is 50 TB in PFS for the Scratch directory and 25 TB in Network Attached Storage for user Home Directories.
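The raw-capacity figures above follow directly from the MSA counts and drive sizes listed in section 2.3, as the check below shows; the RAID 6 parity, spares and file-system overhead that reduce raw space to the effective 250 TB and 200 TB figures are not itemised in this document, so only raw space is computed.

```python
# Cross-check of the raw-capacity figures from the listed MSA counts
# and drive sizes. RAID 6 parity, spares and file-system overhead
# (not itemised in this document) explain the raw-to-effective gap.

def raw_tb(msa_units, drives_per_msa, drive_gb):
    return msa_units * drives_per_msa * drive_gb / 1000.0

print(f"Home/scratch raw: {raw_tb(28, 49, 600):.1f} TB")  # ~823 TB, as quoted
print(f"Archive raw:      {raw_tb(8, 49, 900):.1f} TB")   # ~352 TB, as quoted
```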

2.4 System Monitoring

The following tools are configured to manage and monitor the clusters of ASHOKA:

- HP Operation Manager
- HP Network Node Manager-I (NNMI) Software
- HP Cluster Management Utility (CMU) tools

2.4.1 HP Operation Manager

HP Operation Manager monitors the IT infrastructure and consolidates and correlates fault and performance events to help identify the causes of IT incidents. It provides a single monitoring console for virtual and cloud infrastructures, and associates events with views of services, applications and infrastructure to achieve faster resolution of system faults. HP Operation Manager covers different components of the HPC system such as services, nodes, tools, graphs, certificates and policies, as shown in Figure 13.

Figure 13: HP Operation Manager monitored components

HP Operations Manager manages the servers at IASRI, New Delhi, and is connected to the SQL Express database that comes bundled with the tool.

Key benefits:

- Infrastructure monitoring and consolidation tools
- Identify the root cause of IT problems and reduce duplication of effort

- View dependencies among applications, business services, and infrastructure, both physical and virtual
- Leverage integration with ArcSight Logger for context events related to security issues

The server nodes monitored by HP Operation Manager are shown in Figure 14.

Figure 14: Server nodes monitored by HP Operation Manager

The tools provided by HP Operation Manager are shown in Figure 15.

Figure 15: Tools associated with HP Operation Manager

2.4.2 HP Network Node Manager-I (NNMI) Software

NNMI is a solution for managing fault, availability, performance and network services for physical, virtualized, hybrid and cloud network environments. Its features include:

- Consistent presentation of fault and performance information in the context of the network topology
- Extreme scale, unified polling and a single configuration point to reduce cost
- Continuous spiral discovery for up-to-date topology and root-cause analysis
- Flexible architecture for regional control and consolidation of information
- Simple configuration and customization for low administrative overhead

Figure 16 shows the different components and their details as presented by the HP NNMI software.

Figure 16: NNMI Software Interface

2.4.3 Cluster Management Utility (CMU) Tools

HP Insight Cluster Management Utility (CMU) is HP's cluster lifecycle management software, providing GUI-based control and display of a cluster as a single entity. HP Insight CMU enables cluster management and cloning, and offers zoom-out displays of cluster health and performance metrics. To access the CMU utility, type the head node's address in a web browser; the welcome screen shown in Figure 17 appears. [HeadNode1-ip can be either of the two head node addresses, depending on accessibility in the network.]

Figure 17: HP CMU welcome page

Select "Launch Insight Cluster Management Utility GUI" to see the combined state, i.e. the global cluster view of both Linux clusters. Figure 18 shows the global cluster view.

Figure 18: Global Cluster View

The summary of cluster nodes shows a matrix containing information such as CPU load, memory utilization, paging, disk read/write and network utilization for all the Compute nodes belonging to a particular cluster. A sample browser screen is shown in Figure 19.

Figure 19: Table view with the summary of all nodes belonging to the NE-GPU cluster

Instant View provides a quick summary of CPU load, as shown in Figure 20.

Figure 20: Instant view of Login nodes

Table View provides detailed information about nodes, as shown in Figure 21.

Figure 21: Table view of Login nodes

Detailed Value view: Select any node from any cluster to view all its information, as given in Figure 22. This figure also shows the hardware details of the selected node.

Figure 22: Detailed view of the login node

3. Conclusion

An architectural design is a pre-requisite for developing any infrastructure. This design document describes all the peripherals and connectivity required in establishing this HPC infrastructure. The Advanced Supercomputing Hub for Omics Knowledge in Agriculture (ASHOKA) has been established using a state-of-the-art architecture design, and this document provides the details of the hardware and software used in its configuration. It will be useful to HPC architects working out design details for building similar HPC infrastructure, and can be referred to by any user or developer who wants to understand or upgrade the present HPC facility.


More information

Overview. Cisco UCS Manager User Documentation

Overview. Cisco UCS Manager User Documentation Cisco UCS Manager User Documentation, page 1 Infrastructure Management Guide, page 2 Cisco Unified Computing System, page 3 Cisco UCS Building Blocks and Connectivity, page 5 Cisco UCS Manager User Documentation

More information

THE OPEN DATA CENTER FABRIC FOR THE CLOUD

THE OPEN DATA CENTER FABRIC FOR THE CLOUD Product overview THE OPEN DATA CENTER FABRIC FOR THE CLOUD The Open Data Center Fabric for the Cloud The Xsigo Data Center Fabric revolutionizes data center economics by creating an agile, highly efficient

More information

Oracle Database Consolidation on FlashStack

Oracle Database Consolidation on FlashStack White Paper Oracle Database Consolidation on FlashStack with VMware 2017 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 18 Contents Executive Summary Introduction

More information

Highest Levels of Scalability Simplified Network Manageability Maximum System Productivity

Highest Levels of Scalability Simplified Network Manageability Maximum System Productivity InfiniBand Brochure Highest Levels of Scalability Simplified Network Manageability Maximum System Productivity 40/56/100/200Gb/s InfiniBand Switch System Family MELLANOX SMART INFINIBAND SWITCH SYSTEMS

More information

VMware Infrastructure Update 1 for Dell PowerEdge Systems. Deployment Guide. support.dell.com

VMware Infrastructure Update 1 for Dell PowerEdge Systems. Deployment Guide.   support.dell.com VMware Infrastructure 3.0.2 Update 1 for Dell PowerEdge Systems Deployment Guide www.dell.com support.dell.com Notes and Notices NOTE: A NOTE indicates important information that helps you make better

More information

LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance

LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance 11 th International LS-DYNA Users Conference Computing Technology LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance Gilad Shainer 1, Tong Liu 2, Jeff Layton

More information

DELL Reference Configuration Microsoft SQL Server 2008 Fast Track Data Warehouse

DELL Reference Configuration Microsoft SQL Server 2008 Fast Track Data Warehouse DELL Reference Configuration Microsoft SQL Server 2008 Fast Track Warehouse A Dell Technical Configuration Guide base Solutions Engineering Dell Product Group Anthony Fernandez Jisha J Executive Summary

More information

EMC Backup and Recovery for Microsoft Exchange 2007

EMC Backup and Recovery for Microsoft Exchange 2007 EMC Backup and Recovery for Microsoft Exchange 2007 Enabled by EMC CLARiiON CX4-120, Replication Manager, and Hyper-V on Windows Server 2008 using iscsi Reference Architecture Copyright 2009 EMC Corporation.

More information

Intel Select Solutions for Professional Visualization with Advantech Servers & Appliances

Intel Select Solutions for Professional Visualization with Advantech Servers & Appliances Solution Brief Intel Select Solution for Professional Visualization Intel Xeon Processor Scalable Family Powered by Intel Rendering Framework Intel Select Solutions for Professional Visualization with

More information

Active System Manager Release 8.2 Compatibility Matrix

Active System Manager Release 8.2 Compatibility Matrix Active System Manager Release 8.2 Compatibility Matrix Notes, cautions, and warnings NOTE: A NOTE indicates important information that helps you make better use of your computer. CAUTION: A CAUTION indicates

More information

An Oracle White Paper December Accelerating Deployment of Virtualized Infrastructures with the Oracle VM Blade Cluster Reference Configuration

An Oracle White Paper December Accelerating Deployment of Virtualized Infrastructures with the Oracle VM Blade Cluster Reference Configuration An Oracle White Paper December 2010 Accelerating Deployment of Virtualized Infrastructures with the Oracle VM Blade Cluster Reference Configuration Introduction...1 Overview of the Oracle VM Blade Cluster

More information

Overview. About the Cisco UCS S3260 System

Overview. About the Cisco UCS S3260 System About the Cisco UCS S3260 System, on page 1 How to Use This Guide, on page 3 Cisco UCS S3260 System Architectural, on page 5 Connectivity Matrix, on page 7 Deployment Options, on page 7 Management Through

More information

Cisco SFS 7000D InfiniBand Server Switch

Cisco SFS 7000D InfiniBand Server Switch Data Sheet The Cisco SFS 7000D InfiniBand Server Switch sets the standard for cost-effective, low-latency, 4X DDR and SDR InfiniBand switching for building high-performance clusters. High-performance computing

More information

Architecting Storage for Semiconductor Design: Manufacturing Preparation

Architecting Storage for Semiconductor Design: Manufacturing Preparation White Paper Architecting Storage for Semiconductor Design: Manufacturing Preparation March 2012 WP-7157 EXECUTIVE SUMMARY The manufacturing preparation phase of semiconductor design especially mask data

More information

SMART SERVER AND STORAGE SOLUTIONS FOR GROWING BUSINESSES

SMART SERVER AND STORAGE SOLUTIONS FOR GROWING BUSINESSES Jan - Mar 2009 SMART SERVER AND STORAGE SOLUTIONS FOR GROWING BUSINESSES For more details visit: http://www-07preview.ibm.com/smb/in/expressadvantage/xoffers/index.html IBM Servers & Storage Configured

More information

REQUEST FOR PROPOSAL FOR PROCUREMENT OF

REQUEST FOR PROPOSAL FOR PROCUREMENT OF REQUEST FOR PROPOSAL FOR PROCUREMENT OF Upgrade of department RFP No.: SBI/GITC/ATM/2018-19/481 : 18/05/2018 Corrigendum II dated 30/05/2018 to Ref: SBI/GITC/ATM/2018-19/481 : 18/05/2018 State Bank of

More information

IT Certification Exams Provider! Weofferfreeupdateserviceforoneyear! h ps://

IT Certification Exams Provider! Weofferfreeupdateserviceforoneyear! h ps:// IT Certification Exams Provider! Weofferfreeupdateserviceforoneyear! h ps://www.certqueen.com Exam : 000-115 Title : Storage Sales V2 Version : Demo 1 / 5 1.The IBM TS7680 ProtecTIER Deduplication Gateway

More information

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND ISCSI INFRASTRUCTURE

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND ISCSI INFRASTRUCTURE DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND ISCSI INFRASTRUCTURE Design Guide APRIL 2017 1 The information in this publication is provided as is. Dell Inc. makes no representations or warranties

More information

Cisco UCS B440 M1High-Performance Blade Server

Cisco UCS B440 M1High-Performance Blade Server Cisco UCS B440 M1 High-Performance Blade Server Product Overview The Cisco UCS B440 M1 High-Performance Blade Server delivers the performance and reliability to power compute-intensive, enterprise-critical

More information

Sun Dual Port 10GbE SFP+ PCIe 2.0 Networking Cards with Intel GbE Controller

Sun Dual Port 10GbE SFP+ PCIe 2.0 Networking Cards with Intel GbE Controller Sun Dual Port 10GbE SFP+ PCIe 2.0 Networking Cards with Intel 82599 10GbE Controller Oracle's Sun Dual Port 10 GbE PCIe 2.0 Networking Cards with SFP+ pluggable transceivers, which incorporate the Intel

More information

Who says world-class high performance computing (HPC) should be reserved for large research centers? The Cray CX1 supercomputer makes HPC performance

Who says world-class high performance computing (HPC) should be reserved for large research centers? The Cray CX1 supercomputer makes HPC performance Who says world-class high performance computing (HPC) should be reserved for large research centers? The Cray CX1 supercomputer makes HPC performance available to everyone, combining the power of a high

More information

As enterprise organizations face the major

As enterprise organizations face the major Deploying Flexible Brocade 5000 and 4900 SAN Switches By Nivetha Balakrishnan Aditya G. Brocade storage area network (SAN) switches are designed to meet the needs of rapidly growing enterprise IT environments.

More information

Implementing SQL Server 2016 with Microsoft Storage Spaces Direct on Dell EMC PowerEdge R730xd

Implementing SQL Server 2016 with Microsoft Storage Spaces Direct on Dell EMC PowerEdge R730xd Implementing SQL Server 2016 with Microsoft Storage Spaces Direct on Dell EMC PowerEdge R730xd Performance Study Dell EMC Engineering October 2017 A Dell EMC Performance Study Revisions Date October 2017

More information

Performance Optimizations via Connect-IB and Dynamically Connected Transport Service for Maximum Performance on LS-DYNA

Performance Optimizations via Connect-IB and Dynamically Connected Transport Service for Maximum Performance on LS-DYNA Performance Optimizations via Connect-IB and Dynamically Connected Transport Service for Maximum Performance on LS-DYNA Pak Lui, Gilad Shainer, Brian Klaff Mellanox Technologies Abstract From concept to

More information

Advanced Supercomputing Hub for OMICS Knowledge in Agriculture. Help to Access Discovery Studio v- 4.1

Advanced Supercomputing Hub for OMICS Knowledge in Agriculture. Help to Access Discovery Studio v- 4.1 Advanced Supercomputing Hub for OMICS Knowledge in Agriculture Help to Access Discovery Studio v- 4.1 Centre for Agricultural Bioinformatics ICAR - Indian Agricultural Statistics Research Institute Library

More information

Edge for All Business

Edge for All Business 1 Edge for All Business Datasheet Zynstra is designed and built for the edge the business-critical compute activity that takes place outside a large central datacenter, in branches, remote offices, or

More information

Xyratex ClusterStor6000 & OneStor

Xyratex ClusterStor6000 & OneStor Xyratex ClusterStor6000 & OneStor Proseminar Ein-/Ausgabe Stand der Wissenschaft von Tim Reimer Structure OneStor OneStorSP OneStorAP ''Green'' Advancements ClusterStor6000 About Scale-Out Storage Architecture

More information

EMC Celerra CNS with CLARiiON Storage

EMC Celerra CNS with CLARiiON Storage DATA SHEET EMC Celerra CNS with CLARiiON Storage Reach new heights of availability and scalability with EMC Celerra Clustered Network Server (CNS) and CLARiiON storage Consolidating and sharing information

More information

Cray XD1 Supercomputer Release 1.3 CRAY XD1 DATASHEET

Cray XD1 Supercomputer Release 1.3 CRAY XD1 DATASHEET CRAY XD1 DATASHEET Cray XD1 Supercomputer Release 1.3 Purpose-built for HPC delivers exceptional application performance Affordable power designed for a broad range of HPC workloads and budgets Linux,

More information