MAHA - Supercomputing System for Bioinformatics
- Aron Flynn
1 MAHA - Supercomputing System for Bioinformatics
2 Outline
1. MAHA HW
2. MAHA SW
3. MAHA Storage System
3 ETRI HPC R&D Area - Overview
Research area: Computing HW
MAHA System HW
- Rpeak: GPGPU/MIC based; GPGPU/MIC/memory nodes
- High-speed/low-latency network
  - Management: Ethernet 1 Gbps
  - Computational: InfiniBand 40 Gbps
  - Computational: PCIe link 128/256 Gbps
- I/O: SSD + HDD
File System SW
- Max. 700 Gbps / 1M IOPS
  - 40 SSD storage servers (equal to 600 HDD servers)
- Power saving
  - Dynamic power control on unused HDDs/servers (speed down / sleep / power off according to access rate)
<MAHA System Architecture> <MAHA System Layout - Plan>
Bio Application
Parallelized genome analysis SW
- Parallelized genome analysis pipeline
  - Optimized genome indexing
  - Parallelized sequence mapping
  - Parallelized SNP extraction and analysis
  - Visualization
Protein folding analysis SW
- 3-dimensional protein mapping
- Protein docking analysis and DB
System SW
- Bio workflow mgmt.: HPC environment supporting bio workflows; ease of use
- Heterogeneous resource mgmt.: resource mgmt. for bio applications; performance improvement
- Integrated cluster mgmt.: single point of mgmt. for the MAHA system; simplified deployment & mgmt.
4 MAHA Supercomputing System (Jan. 2013)
Heterogeneous supercomputer
- 104 TeraFLOPS with CPUs and accelerators (GPGPU, MIC)
  - 53.2 TeraFLOPS of compute nodes based on GPGPU (M2090)
  - 51.3 TeraFLOPS of compute nodes based on MIC (Xeon Phi)
- Number of cores: over 36,000
- 54 compute nodes, 3 management nodes, 19 storage nodes
- 194 TeraBytes of storage (SSD = 34 TB, HDD = 160 TB)
GPGPU building blocks
- CPU: Intel E5, 8 cores @ 2.6 GHz
- GPGPU: Fermi, 512 cores @ 1.3 GHz, 665 GFLOPS/GPGPU
- Node: 1.67 TFLOPS (dual CPU, dual GPGPU, 32 GB memory)
- Subrack: 8.3 TFLOPS (5 nodes, 160 GB memory)
MIC building blocks
- CPU: Intel E5, 8 cores @ 2.6 GHz
- Phi: > 50 cores, 1 TFLOPS/MIC
- Node: 2.3 TFLOPS (dual CPU, dual Phi, 32 GB memory)
- Subrack: 11.5 TFLOPS (5 nodes, 160 GB memory)
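The per-node figures above follow from simple peak-FLOPS arithmetic; a minimal sketch, assuming 8 double-precision FLOPs per cycle per core for the E5 (an assumption, but one that matches the 332 GFLOPS dual-CPU figure on the next slide):

```python
# Peak-FLOPS arithmetic for a GPGPU compute node.
# The 8 FLOPs/cycle/core figure for the E5 is an assumption.
def cpu_peak_gflops(cores=8, ghz=2.6, flops_per_cycle=8):
    return cores * ghz * flops_per_cycle  # ~166.4 GFLOPS per CPU

GPGPU_GFLOPS = 665  # NVIDIA M2090, from the slide

node_gflops = 2 * cpu_peak_gflops() + 2 * GPGPU_GFLOPS
subrack_tflops = 5 * node_gflops / 1000

print(node_gflops)     # ~1662.8 GFLOPS, rounded to 1.67 TFLOPS on the slide
print(subrack_tflops)  # ~8.3 TFLOPS for a 5-node subrack
```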
5 MAHA Supercomputing System (Jan. 2013)
Network
- 10/1 Gbps management network: 1G/10G Ethernet for system management
- 40 Gbps computational network: 40 Gbps QDR InfiniBand
SSD storage server
- Motherboard + RAID controller (4 SATA2 ports) + backplane with SSDs
- PCI-E x4, 3 Gbps SATA port connections
Management node
- Dual CPU (E5): 332 GigaFLOPS
- 1 user login node, 2 management nodes
Accelerated compute node
- Dual CPU (E5): 332 GigaFLOPS, 32 GB memory
- Dual GPGPU (M2090): 1,330 GigaFLOPS
- Dual MIC (Phi): > 2,000 GigaFLOPS
MAID storage server
- Motherboard + RAID controller (16 SATA2 ports) + backplane with HDDs
- PCI-E x4, 3 Gbps SATA port connections
MAHA Supercomputing System (100 TeraFLOPS)
6 MAHA Supercomputing System Performance (Jan. 2012)
Hybrid HPL (High Performance Linpack)
- Average Rmax = 29.9 TeraFLOPS
- System efficiency* = 56.2%
[Chart: per-run Avg./Max. TFLOPS]
7 MAHA Supercomputing Facility
- Server room: 42 m²
- Hybrid cooling: cold outside air or internal air conditioning
[Photos: MAHA system]
8 MAHA Supercomputing Roadmap
- MAHA Supercomputing System: 200 TeraFLOPS in 2013
- In 2015, MAHA will reach 300 TeraFLOPS (Rpeak)
[Roadmap chart: 100 TFs, 200 TFs, 250 TFs, 300 TFs]
9 Outline
1. MAHA HW
2. MAHA SW
3. MAHA Storage System
10 MAHA System Workplace: Objective
HPC software solution specially designed for bioinformatics applications
- For end users (especially in the field of bioinformatics)
  - User-friendly HPC environment supporting bio workflows
  - End users can easily define workflows of bio applications and then efficiently execute them on HPC systems
- For system administrators
  - Integrated cluster management tool
* MIC: product based on the Intel Many Integrated Core architecture
11 MAHA System Workplace: Features & Benefits
Features
- User-friendly HPC environment supporting bio workflows
  - Easy configuration for execution with the aid of workflow analysis
  - Workflow transformation for efficient execution in an HPC environment
- Performance improvement through support for execution of bio applications
- Single point of management for the MAHA system & services
Benefits
- For end users: easy to use even for non-experts; improved performance
- For system administrators: simplified deployment & management
12 MAHA System Workplace: Function (1/3)
Bio Workflow Management for the HPC Environment
- Bio workflow definition & execution management
  - XML-based workflow model
  - Web UI-based workflow lifecycle management
- Bio workflow execution engine
  - Transforms a user-defined workflow into multiple HPC jobs
  - Cooperates with the resource management software
- Bio workflow analysis tool
  - Helps find the characteristics and resource requirements of a workflow
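The slide does not show the XML workflow schema itself; a hypothetical sketch of how a bio workflow might be defined and read out as ordered HPC jobs (element and attribute names are invented for illustration, not MAHA's actual model):

```python
# Hypothetical XML bio-workflow definition; the <workflow>/<step>
# schema is invented for illustration, not MAHA's actual schema.
import xml.etree.ElementTree as ET

WORKFLOW_XML = """
<workflow name="ngs-pipeline">
  <step id="align"   cmd="bwa mem ref.fa reads.fq"/>
  <step id="sort"    cmd="samtools sort aln.bam" after="align"/>
  <step id="mpileup" cmd="samtools mpileup sorted.bam" after="sort"/>
</workflow>
"""

def steps_in_order(xml_text):
    """Parse the workflow and return step ids in execution order
    (steps are assumed to be listed after their prerequisites)."""
    root = ET.fromstring(xml_text)
    return [step.get("id") for step in root.findall("step")]

print(steps_in_order(WORKFLOW_XML))  # ['align', 'sort', 'mpileup']
```

An execution engine in this style would submit one HPC job per step, chaining each job's start on the completion of its `after` dependency.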
13 MAHA System Workplace: Function (2/3)
Resource Management for Bio Applications
- Job scheduling & resource allocation
  - End user's view: process a workflow as fast as possible
  - System's view: process as many workflows as possible in a given time
- Support for execution of bio applications
  - Solve performance problems by analyzing the characteristics of bio applications
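The two scheduling views above pull in different directions: minimizing one workflow's turnaround versus maximizing workflows completed per time window. A toy illustration with hypothetical numbers (stage durations and the one-node-per-workflow model are invented):

```python
# Toy contrast of the two scheduling objectives; all numbers and the
# one-node-per-workflow model are hypothetical.
def makespan(stage_hours):
    """End user's view: total time for one workflow's serial stages."""
    return sum(stage_hours)

def throughput(workflow_hours, nodes, window_hours):
    """System's view: workflows completed in a window, assuming each
    workflow occupies one node for its full duration."""
    per_node = window_hours // workflow_hours
    return nodes * per_node

print(makespan([3, 5, 2]))     # 10 hours for one workflow
print(throughput(10, 54, 24))  # 108 workflows/day across 54 nodes
```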
14 MAHA System Workplace: Function (3/3)
Integrated Cluster Management for the MAHA System & Services
- Provisioning management
- Cluster operation management
- Monitoring management
- Service (including MAHA System Workplace) management
- Web UI for MAHA System Workplace
15 Outline
1. MAHA HW
2. MAHA SW
3. MAHA Storage System
16 Objective of MAHA-FS
Distributed file system for HPC applications, especially genome analysis
- Upgrades the performance of GLORY-FS (developed by ETRI)
- Supports performance competitive with Lustre (700 Gbps, 1 million IOPS)
- Compatible with various existing genome analysis applications
17 Features and Benefits of MAHA-FS
Features
- Hybrid storage
  - High performance/cost with SSDs (700 Gbps, 1 million IOPS)
  - High capacity/cost with HDDs (more than petabytes)
- Low power consumption
  - Reduces storage power consumption by powering off un-accessed HDDs (up to 50%)
Benefits
- For genome analysts
  - Reduced TCO for large-scale storage
  - No need to modify their existing genome analysis applications
- For administrators
  - Easy management of peta-scale storage
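The power-saving feature maps each disk's access rate to a power state (the earlier overview slide names speed-down, sleep, and power-off). A hypothetical sketch of such a MAID-style policy; the thresholds are invented:

```python
# Hypothetical MAID-style power policy: map an HDD's access rate
# (requests/min) to a power state. All thresholds are invented.
def power_state(accesses_per_min):
    if accesses_per_min == 0:
        return "power_off"   # completely un-accessed disk
    if accesses_per_min < 1:
        return "sleep"       # rare accesses: park heads, spin down
    if accesses_per_min < 10:
        return "speed_down"  # light load: reduce rotational speed
    return "active"

for rate in (0, 0.5, 5, 50):
    print(rate, power_state(rate))
```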
18 Performance & Capacity Considerations for NGS Workload
- Peak 933 MB/s is required for one human genome analysis (I/O speed equivalent to 10 SATA HDDs)
- Pipeline on the compute node (total runtime 3d 14h 5m 32s): Align → Sample → Sort → Merge → mpileup
  - Measured per-stage throughput includes a 68 MB/s peak / 19.6 MB/s average stage and a 2.93 MB/s average for the final stage
- Data sets on shared storage (MAHA-FS):
  - Reference genome: 11 GB
  - Source data: 218 GB
  - Temporary data: 96 GB
  - Intermediate result data: ?? GB
  - Final result data: 819 GB
- Total 1.2 TB of capacity is required for one human genome analysis
NGS: Next-Generation Sequencing
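The 1.2 TB total can be sanity-checked by summing the listed data sets; the intermediate-result size is not given on the slide, so it is left out here:

```python
# Capacity check for one human genome analysis, using the sizes the
# slide lists (intermediate results are unspecified and omitted).
sizes_gb = {
    "reference_genome": 11,
    "source_data": 218,
    "temporary_data": 96,
    "final_result": 819,
}
known_gb = sum(sizes_gb.values())
print(known_gb)  # 1144 GB; adding intermediate results reaches ~1.2 TB
```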
19 Storage Architecture Considerations for NGS Workload
MAHA-FS architecture
- MAHA-FS clients (compute nodes) connected over a 1/10G Ethernet or InfiniBand fabric
- MAHA-FS metadata server and storage servers built on commodity parts: server, chassis, RAID controller, 1G/10G/40G NIC, SATA SSDs and/or SATA HDDs
- x86-based storage servers and SATA HDDs for lower TCO
- Resiliency supported by replication/migration built into the MAHA-FS software
Existing HPC architecture (Lustre, from NetApp)
- Lustre clients (compute nodes) connected over a 1/10G Ethernet or InfiniBand fabric
- Lustre metadata server with a metadata storage array (NetApp E2624)
- Lustre storage servers (active/active) with data storage arrays (NetApp E5460/DE, E5424/DE5600) and SAS HDDs over a Fibre Channel SAN
- The external data storage arrays and the Fibre Channel fabric are the main cause of the high cost
- Resiliency supported by redundant server and storage hardware (no resiliency support within Lustre itself)
20 MAHA Storage H/W Test-bed Status (2012)
Commodity SSD storage server
- Motherboard + 3 RAID controllers (8 SATA2 ports each) + backplane with SSDs
- PCI-E x4, 3 Gbps SATA port connections
- 10G Ethernet / 40G InfiniBand (VPI adapter)
- Total capacity built: 34 TB (192 SATA SSDs, 10 servers)
- Total capacity planned: 85 TB (2015)
Commodity HDD storage server
- Motherboard + 1 RAID controller (16 SATA2 ports) + backplane with HDDs
- PCI-E x4, 3 Gbps SATA port connections
- 10G Ethernet / 40G InfiniBand (VPI adapter)
- Total capacity built: 160 TB (160 SATA HDDs, 9 servers)
- Total capacity planned: 400 TB (2015)
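The totals above imply the per-device capacities of the 2012 test-bed, which a quick division confirms:

```python
# Per-device capacities implied by the test-bed totals on the slide.
ssd_total_tb, ssd_count = 34, 192
hdd_total_tb, hdd_count = 160, 160

print(ssd_total_tb * 1000 / ssd_count)  # ~177 GB per SATA SSD
print(hdd_total_tb * 1000 / hdd_count)  # 1000 GB (1 TB) per SATA HDD
```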
21 MAHA File System SW Architecture
MAHA-FS client (user-level file system)
- No kernel patch or dependency; built on the FUSE kernel module of the Linux client (VFS & cache → /dev/fuse → FUSE low-level interface)
- Light-weight metadata access protocol (NFS-like) toward the metadata server
- Hybrid I/O with dynamic selection between two I/O protocols based on workload characteristics
  - Sequential-I/O-optimized protocol
  - Random-I/O-optimized protocol
MAHA-FS metadata server
- Metadata server core with MDS server interface, DS client interface, heartbeat, and a management protocol
- Metadata stored in MySQL or the light-weight metadata engine (NMD), a Berkeley DB-like engine optimized for file system metadata (10 times faster)
MAHA-FS data server
- Data server core with the hybrid I/O protocol and an MDS client interface
- Stores data on ext4 in the Linux kernel; status exposed via /proc
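The slide says the hybrid I/O path picks between the sequential- and random-optimized protocols from workload characteristics, but not how; a hypothetical dispatcher based on offset contiguity (the heuristic and threshold are invented for illustration):

```python
# Hypothetical MAHA-FS-style hybrid I/O dispatcher: classify a request
# stream as sequential or random. The contiguity heuristic is invented.
def classify_stream(offsets, block_size=4096):
    """Return 'sequential' if most requests directly follow the
    previous request's block, else 'random'."""
    if len(offsets) < 2:
        return "sequential"
    contiguous = sum(
        1 for a, b in zip(offsets, offsets[1:]) if b == a + block_size
    )
    ratio = contiguous / (len(offsets) - 1)
    return "sequential" if ratio >= 0.8 else "random"

print(classify_stream([0, 4096, 8192, 12288]))    # sequential
print(classify_stream([0, 999424, 40960, 8192]))  # random
```

A real implementation would make this decision continuously per file or per stream and switch wire protocols accordingly.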
22 Overall Performance of MAHA-FS v1.0
Overall performance results (Jan. 2013) against the 2012 target metrics:
- Aggregate sequential I/O: > 70 Gbps
- Aggregate random I/O: > 20 million IOPS
- Metadata performance: > 100,000 open/sec, > 50,000 create/sec
23 Micro-benchmark Results of MAHA-FS v1.0
Metadata performance (first result, Sep. 2012)
- About 4 times faster than Lustre (but needs more careful examination)
  - File creation: 52,437 ops/sec, 3.4 times better than Lustre
  - File open: 116,005 ops/sec, 4.6 times better than Lustre
- Lustre comparison figures from the chart: 15,000 ops/sec (measured), 25,000 ops/sec (announced), 9,000 ops/sec (measured)
- Looks faster, but needs a closer look
24 Micro-benchmark Results of MAHA-FS v1.0
Data I/O performance (Sep. 2012, Jan. 2013)
- Still struggling to achieve better performance; additional tuning and testing is ongoing.
25 NGS Pipeline Benchmark Results of MAHA-FS v1.0
NGS-pipeline benchmark results (first result, Dec. 2012)
- Slightly faster than Lustre, but slower than NFS
Benchmark storage environments:
- NGS pipeline analysis applications (NGS1-NGS6) against two isolated NFS servers (NFS1, NFS2)
- NGS pipeline analysis applications (NGS1-NGS6) against a parallel/distributed file server (Lustre/MAHA-FS) with two data servers (DS1, DS2)
[Chart: comparison of file systems for the NGS-pipeline workload, in hours]
- Just the first result, no more, no less.
26 Thank You
More informationArchitecting High Performance Computing Systems for Fault Tolerance and Reliability
Architecting High Performance Computing Systems for Fault Tolerance and Reliability Blake T. Gonzales HPC Computer Scientist Dell Advanced Systems Group blake_gonzales@dell.com Agenda HPC Fault Tolerance
More informationThe Last Bottleneck: How Parallel I/O can improve application performance
The Last Bottleneck: How Parallel I/O can improve application performance HPC ADVISORY COUNCIL STANFORD WORKSHOP; DECEMBER 6 TH 2011 REX TANAKIT DIRECTOR OF INDUSTRY SOLUTIONS AGENDA Panasas Overview Who
More informationThe Architecture and the Application Performance of the Earth Simulator
The Architecture and the Application Performance of the Earth Simulator Ken ichi Itakura (JAMSTEC) http://www.jamstec.go.jp 15 Dec., 2011 ICTS-TIFR Discussion Meeting-2011 1 Location of Earth Simulator
More informationGateways to Discovery: Cyberinfrastructure for the Long Tail of Science
Gateways to Discovery: Cyberinfrastructure for the Long Tail of Science ECSS Symposium, 12/16/14 M. L. Norman, R. L. Moore, D. Baxter, G. Fox (Indiana U), A Majumdar, P Papadopoulos, W Pfeiffer, R. S.
More informationDDN About Us Solving Large Enterprise and Web Scale Challenges
1 DDN About Us Solving Large Enterprise and Web Scale Challenges History Founded in 98 World s Largest Private Storage Company Growing, Profitable, Self Funded Headquarters: Santa Clara and Chatsworth,
More informationTHE SUMMARY. CLUSTER SERIES - pg. 3. ULTRA SERIES - pg. 5. EXTREME SERIES - pg. 9
PRODUCT CATALOG THE SUMMARY CLUSTER SERIES - pg. 3 ULTRA SERIES - pg. 5 EXTREME SERIES - pg. 9 CLUSTER SERIES THE HIGH DENSITY STORAGE FOR ARCHIVE AND BACKUP When downtime is not an option Downtime is
More informationNVMFS: A New File System Designed Specifically to Take Advantage of Nonvolatile Memory
NVMFS: A New File System Designed Specifically to Take Advantage of Nonvolatile Memory Dhananjoy Das, Sr. Systems Architect SanDisk Corp. 1 Agenda: Applications are KING! Storage landscape (Flash / NVM)
More informationRefining and redefining HPC storage
Refining and redefining HPC storage High-Performance Computing Demands a New Approach to HPC Storage Stick with the storage status quo and your story has only one ending more and more dollars funneling
More informationPureSystems: Changing The Economics And Experience Of IT
PureSystems: Changing The Economics And Experience Of IT Flex And PureFlex - More For Your Infrastructure Money Copies: http://www.ibm.com/ibm/puresystems/events/assets/index.html Friendly Bank Needs A
More informationNexentaVSA for View. Hardware Configuration Reference nv4v-v A
NexentaVSA for View Hardware Configuration Reference 1.0 5000-nv4v-v0.0-000003-A Copyright 2012 Nexenta Systems, ALL RIGHTS RESERVED Notice: No part of this publication may be reproduced or transmitted
More informationNext-Generation NVMe-Native Parallel Filesystem for Accelerating HPC Workloads
Next-Generation NVMe-Native Parallel Filesystem for Accelerating HPC Workloads Liran Zvibel CEO, Co-founder WekaIO @liranzvibel 1 WekaIO Matrix: Full-featured and Flexible Public or Private S3 Compatible
More informationABySS Performance Benchmark and Profiling. May 2010
ABySS Performance Benchmark and Profiling May 2010 Note The following research was performed under the HPC Advisory Council activities Participating vendors: AMD, Dell, Mellanox Compute resource - HPC
More informationCS500 SMARTER CLUSTER SUPERCOMPUTERS
CS500 SMARTER CLUSTER SUPERCOMPUTERS OVERVIEW Extending the boundaries of what you can achieve takes reliable computing tools matched to your workloads. That s why we tailor the Cray CS500 cluster supercomputer
More informationLustre at the OLCF: Experiences and Path Forward. Galen M. Shipman Group Leader Technology Integration
Lustre at the OLCF: Experiences and Path Forward Galen M. Shipman Group Leader Technology Integration A Demanding Computational Environment Jaguar XT5 18,688 Nodes Jaguar XT4 7,832 Nodes Frost (SGI Ice)
More informationScaling Across the Supercomputer Performance Spectrum
Scaling Across the Supercomputer Performance Spectrum Cray s XC40 system leverages the combined advantages of next-generation Aries interconnect and Dragonfly network topology, Intel Xeon processors, integrated
More informationScaling to Petaflop. Ola Torudbakken Distinguished Engineer. Sun Microsystems, Inc
Scaling to Petaflop Ola Torudbakken Distinguished Engineer Sun Microsystems, Inc HPC Market growth is strong CAGR increased from 9.2% (2006) to 15.5% (2007) Market in 2007 doubled from 2003 (Source: IDC
More informationAlgorithms and Data Structures for Efficient Free Space Reclamation in WAFL
Algorithms and Data Structures for Efficient Free Space Reclamation in WAFL Ram Kesavan Technical Director, WAFL NetApp, Inc. SDC 2017 1 Outline Garbage collection in WAFL Usenix FAST 2017 ACM Transactions
More information