PART-I (B) (TECHNICAL SPECIFICATIONS & COMPLIANCE SHEET)
Supply and Installation of High Performance Computing System

INSTITUTE FOR PLASMA RESEARCH
(An Autonomous Institute of the Department of Atomic Energy, Government of India)
Near Indira Bridge, Bhat, Gandhinagar-382428, India

IPR Scope for the High Performance Computing System:

Sufficient compute nodes must be quoted to achieve at least 24 TF of CPU theoretical peak performance.
Sufficient compute nodes with GPUs must be quoted to achieve at least 6 TF of GPU theoretical peak performance (pure GPU, not CPU+GPU).
Each system/compute node should be able to run a single system image (SSI) of 64-bit Red Hat Enterprise Linux with 2 CPU processors (28 cores).
The entire system must carry a comprehensive on-site warranty for 3 years. This applies to all compute nodes, compute nodes with GPU, the Head/Master node, all switches and networking components, storage nodes and storage, Ethernet switches, IB switches, etc.
An appropriate number of commercial licenses with 3 years of support/upgrades has to be quoted.
OEM-developed and OEM-owned middleware/management tools in their latest versions should be provided.
The bidder should provide all cables, connectors, optical drives, and components (Ethernet cables, IB cables, KVM cables, etc.) required for the quoted system.
The bidder should provide a block diagram of the quoted system. A technical compliance letter and a detailed cluster diagram with datasheets of the proposed solution are to be submitted along with the bid.

Note: OEM refers to the Original Equipment Manufacturer of the Master/Head nodes, compute nodes, compute nodes with GPU, cluster management software, racks, storage servers, and storage.

Technical Specification of the HPC Cluster

Hardware Specifications:

(1) Compute Node with CPU:
CPU: Two 14-core, 64-bit Intel Xeon E5-series (2.3 GHz) or better processors, each core capable of executing 16 (sixteen) FLOPS or better per cycle.
Cache: 35 MB or better.
Memory: 256 GB DDR4 2133 MHz or better; optionally quote for 512 GB DDR4.

Internal Disk: 900 GB SAS disk, 10,000 RPM or better.
Form Factor: Maximum 2U height, in rack/blade/dense form factor.
InfiniBand: Dual-port 4x FDR ports with 100% non-blocking architecture.
Network: 2 x 1 Gbps ports with PXE boot capability.
Power Supply: The solution should be configured with high-efficiency, hot-plug redundant power supplies.
Serviceability: All compute nodes should be individually serviceable.
Remote Management Port: At least one dedicated port for remote management.
GPU Enabled: All compute nodes must be GPU enabled; each compute node must have at least 2 GPU slots (PCIe Gen 3.0 or better).

(2) Compute Node with CPU+GPU:
CPU: Two 14-core, 64-bit Intel Xeon E5-series (2.3 GHz) or better processors, each core capable of executing 16 (sixteen) FLOPS or better per cycle.
Cache: 35 MB or better.
Memory: 256 GB DDR4 2133 MHz or better; optionally quote for 512 GB DDR4.
Internal Disk: 900 GB SAS disk, 10,000 RPM or better.
Form Factor: Maximum 2U height, in rack/blade/dense form factor.
InfiniBand: Dual-port 4x FDR ports with 100% non-blocking architecture.
Network: 2 x 1 Gbps ports with PXE boot capability.
Power Supply: The solution should be configured with high-efficiency, hot-plug redundant power supplies.
Serviceability: All compute nodes should be individually serviceable.
Remote Management Port: At least one dedicated port for remote management.
GPU Enabled: All compute nodes must be GPU enabled; each compute node must have at least 2 GPU slots (PCIe Gen 3.0 or better).
GPUs: The compute node should be configured with NVIDIA K40x GPUs, with GPU cards in a 1:1 CPU:GPU ratio (two GPUs per node).

(3) Master/Head Node with High Availability:
CPU: Two 14-core, 64-bit Intel Xeon E5-series (2.3 GHz) or better processors, each core capable of executing 16 (sixteen) FLOPS or better per clock cycle.
Cache: 35 MB or better.
Memory: 256 GB DDR4 2133 MHz or better; optionally quote for 512 GB DDR4.

Internal Disk: 2 x 900 GB SAS 10,000 RPM disks configured in RAID 1; software-based RAID is not acceptable.
Form Factor: Maximum 2U height, in rack/blade/dense form factor.
InfiniBand: Dual-port 4x FDR ports with 100% non-blocking architecture.
Network: 4 x 1 Gbps ports with PXE boot capability.
Power Supply: The solution should be configured with hot-plug redundant power supplies.
Serviceability: All nodes should be individually serviceable.
Remote Management Port: At least one dedicated port for remote management.
GPU Enabled: Nodes must be GPU enabled; each node must have at least 2 GPU slots.
GPUs: One NVIDIA K40x GPU in each node.
Nodes: Two nodes must be configured as a high-availability pair to provide 100% redundancy.

(4) Networking:
Primary Interconnect (4x FDR InfiniBand): 4x FDR InfiniBand switch or better, appropriate for the HPC system, with 100% non-blocking architecture. Redundant power supplies should be offered in all switches. An appropriate number of IB switches, IB cables, connectors, and other accessories must be quoted and supplied as per requirement.
Admin and Console Network: A GigE-based admin and console network with appropriate managed L2 switches with redundant power supplies is to be supplied, along with an appropriate number of CAT6 cables of suitable length.

(5) Storage / Storage Nodes (Servers) with Parallel File System:
A minimum of 100 TB usable capacity with RAID 6 should be configured with a parallel file system.
All storage nodes (servers) required to implement the parallel file system should be in pairs for high availability.
All storage servers should have dual-port 4x FDR IB HBAs. Storage should be connected to the cluster's IB network via the storage servers.
The storage should be configured for at least 1 GB/s write and 1 GB/s read performance (with both reads and writes happening concurrently).
The storage should have dual controllers with at least 8 GB of total cache, and must be configured with redundant power supplies.
The storage and storage servers must be from the same OEM and supported for 3 years.
The offered solution must have no single point of failure and must be fully compatible with the quoted system.
(An illustrative usable-capacity calculation is sketched below.)

Note: Optionally, quote for a 150 TB solution with the specifications given in item (5) above.
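For illustration only, the following is a minimal sketch of how usable RAID-6 capacity can be estimated against the 100 TB minimum above. The drive size (4 TB) and RAID-group width (10 disks) are assumptions made for the example, not part of this specification; bidders may propose any layout that meets the usable-capacity and throughput figures.

```python
# Rough RAID-6 usable-capacity estimate (illustrative assumptions only).
# RAID 6 loses two drives' worth of capacity per RAID group to parity.

def raid6_usable_tb(num_groups: int, disks_per_group: int, disk_tb: float) -> float:
    """Usable capacity in TB for num_groups RAID-6 groups of disks_per_group drives."""
    return num_groups * (disks_per_group - 2) * disk_tb

# Assumed layout: 4 TB drives, 10-disk RAID-6 groups (8 data + 2 parity per group).
disk_tb = 4.0
disks_per_group = 10
required_tb = 100.0

groups = 1
while raid6_usable_tb(groups, disks_per_group, disk_tb) < required_tb:
    groups += 1

print(f"{groups} groups x {disks_per_group} disks -> "
      f"{raid6_usable_tb(groups, disks_per_group, disk_tb):.0f} TB usable "
      f"({groups * disks_per_group} drives total)")
# -> 4 groups x 10 disks -> 128 TB usable (40 drives total)
```

Under these assumed values, four 10-disk RAID-6 groups (40 drives) yield about 128 TB usable, comfortably above the 100 TB minimum.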

(6) Management Console: 1U rack-mount LCD, keyboard, and mouse with the required cables and accessories; an 8-port KVM switch with cables compatible with the supplied system.

(7) Rack Enclosure: 42U OEM rack of standard size with all required accessories to be supplied. All rack components, including power sockets etc., must be fully compatible with the quoted system.

Software Specifications:

(8) HPC Operating System and Software:
Operating System: Latest version of 64-bit Red Hat Enterprise Linux with 3 years of subscription and support. Licenses must be provided for all nodes (master nodes, compute nodes, GPU compute nodes, storage server nodes, and others if required).
Job Scheduler: Altair PBS Pro, suitable for all servers, with 3 years of support and subscription. It must also support GPU jobs.
Compilers: Single-user license of Intel Cluster Studio with GPU support, with 3 years of support and subscription.
An appropriate number of licenses for all of the above software, along with patches, upgrades, etc., for 3 years.
Necessary tools for parallel programming on CPU and GPU, such as OpenMP, OpenACC, OpenCL, and CUDA, should be supplied, along with GPU libraries. The supplied compilers and libraries must be suitable for the offered solution.
Cluster Management Software: Management software with appropriate licenses must be from the same OEM. Linux-supported management software for monitoring and management of HPC cluster hardware (CPU, GPU, RAM, etc.) must handle all nodes provided in the solution, support GUI/web-based access, and provide proactive notification alerts.
Parallel File System Software: Licensed Lustre parallel file system sourced from Intel.
Note: All software will be supplied in its latest version. All software licenses must be perpetual.

(9) Delivery: Delivery and installation of the system at IPR, Bhat should be completed within 10 weeks from the date of the purchase order/LoI.

(10) Implementation Schedule: The bidder will provide an implementation schedule for the system.

(11) Documentation: Documentation on the installation and commissioning of the system will be prepared and provided to IPR in soft and hard copy.

(12) Training and Support: HPC cluster system administration (cluster management, configuration, monitoring, job scheduling, etc.), and technical support for administration/maintenance (at both the software and hardware level) of the HPC system.

(13) Application Porting / Migration: The bidder should provide technical help for migrating existing applications to the HPC system.

(14) Warranty and Support: The 3-year warranty with on-site support will be effective from the date on which IPR accepts the system.

A) Mandatory documents to be uploaded by the vendor during tendering:
a) Documents asked for under PART-I (A), Eligibility Criteria.
b) A document with the formula used to achieve 24 TF for the CPU nodes and 6 TF for pure GPU.
c) An undertaking letter mentioning the make and model of the Master/Head nodes, compute nodes, compute nodes with GPU, cluster management software, racks, storage servers, and storage, which must all be from the same OEM.
d) Implementation schedule.
e) Tender form/undertaking as per the format available with the tender/enquiry.

B) Acceptance Criteria: The bidder must demonstrate performance as below:
Theoretical peak performance, along with the method of arriving at the result (a worked example of this arithmetic is sketched after this section).
LINPACK performance of at least 70% of theoretical peak for pure CPU nodes, with Turbo OFF.
LINPACK performance of at least 70% of theoretical peak for CPU+GPU nodes, with Turbo OFF.
IOR/IOzone benchmark for storage with at least 1 GB/s write and 1 GB/s read (with both reads and writes happening concurrently).
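As an illustration of the "method of arriving at the result" requested in item (b) and section B above, the sketch below reproduces the theoretical-peak arithmetic using only figures from this specification (two 14-core 2.3 GHz processors per node, 16 FLOPS per core per cycle, two GPUs per GPU node). The per-GPU double-precision peak used for the NVIDIA K40 (about 1.43 TFLOPS at base clock) is an assumed datasheet value; the bidder should substitute the figure for the GPU actually offered.

```python
# Illustrative theoretical-peak and acceptance-threshold arithmetic.
# CPU figures come from this specification; the K40 peak is an assumed
# datasheet value (~1.43 TFLOPS double precision at base clock).
import math

# Per-node CPU peak: sockets x cores x clock (GHz) x FLOPS per cycle.
sockets, cores, ghz, flops_per_cycle = 2, 14, 2.3, 16
cpu_node_tflops = sockets * cores * ghz * flops_per_cycle / 1000.0   # ~1.03 TF

cpu_target_tf = 24.0
cpu_nodes = math.ceil(cpu_target_tf / cpu_node_tflops)               # 24 nodes

gpu_peak_tflops = 1.43          # assumed NVIDIA K40 DP peak (base clock)
gpus_per_node = 2               # per the 1:1 CPU:GPU requirement
gpu_target_tf = 6.0
gpu_nodes = math.ceil(gpu_target_tf / (gpu_peak_tflops * gpus_per_node))  # 3 nodes

print(f"CPU peak per node : {cpu_node_tflops:.3f} TF -> {cpu_nodes} nodes "
      f"for >= {cpu_target_tf} TF ({cpu_nodes * cpu_node_tflops:.1f} TF quoted)")
print(f"GPU peak per node : {gpu_peak_tflops * gpus_per_node:.2f} TF -> {gpu_nodes} "
      f"GPU nodes for >= {gpu_target_tf} TF pure-GPU peak")

# Acceptance threshold: LINPACK (HPL) Rmax must reach at least 70% of Rpeak.
print(f"Required HPL Rmax (CPU partition) >= {0.70 * cpu_nodes * cpu_node_tflops:.1f} TF")
```

Under these assumptions, roughly 24 CPU nodes (about 24.7 TF) and 3 GPU nodes (about 8.6 TF pure GPU) meet the 24 TF and 6 TF targets, and the corresponding 70% LINPACK acceptance threshold for the CPU partition works out to about 17.3 TF.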