Leonhard: a new cluster for Big Data at ETH

1 Leonhard: a new cluster for Big Data at ETH
Bernd Rinn, Head of Scientific IT Services
Olivier Byrde, Group Leader High Performance Computing

2 Agenda
- Welcome address by Rui Brandao, Director of IT Services (10 min)
- HPC & Big Data strategy of IT Services (10 min)
- Update on Euler III expansion (10 min)
- Overview of the Leonhard cluster (10 min)
- Leonhard Open vs. Leonhard Med (10 min)
- Timeline and prices (10 min)
- Q & A (30 min)

5 Scientific IT vs. Commodity IT

6 Scientific IT Services (SIS)
- SIS is a section of IT Services created 4 years ago to offer scientific computing support and services to researchers at ETH
- Our staff (about 30) are all experts in scientific computing: algorithms, data analysis, visualization, software development, HPC system management, data management, etc.; many have a background in a scientific domain
- Our mission is to take care of scientists' IT problems, so that they can focus on their research
- Our customers can pay for these services either on a project basis or via a subscription for expert services; we provide computing power to researchers who invest in the central clusters of ETH

7 Scientific IT Services (cont.)
- High-Performance Computing: high-performance computing and Big Data analytics
- Scientific Software: writing, porting and optimizing scientific software
- Data Analysis and Management: building data processing pipelines and data management solutions
- Consulting & Training: triaging computing needs in research / for research projects

8 Services / Groups of SIS
- High Performance Computing: the HPC group is responsible for the procurement, installation and operation of the central HPC and Big Data clusters of ETH, and provides support
- Scientific Software and Data Management: the SSDM group develops and maintains bespoke scientific software according to the needs of our customers, including software for data acquisition, management and analysis
- Data Analysis and Management: the Research Informatics (RI) group builds data pipelines and data management solutions, offers consulting services, and provides training on scientific computing topics (e.g. parallel programming, Python, Spark, research data management)

9 What impacts our goals for HPC & Big Data
- Our goals are defined by:
  - Strategic goals set by the ETH Council, the ETH Executive Board, and IT Services
  - The (constantly evolving) needs of our existing customers
  - The wishes of prospective customers
- However, these goals are subject to many constraints:
  - Available budget and manpower
  - Technical (space, power and cooling; technological evolution; security)
  - Legal (WTO procurement rules; Swiss data protection laws)
  - Time (customers' deadlines; delivery and installation time for new hardware)
- Our goals are shaped by these needs and constraints

10 Optimization under boundary conditions
- We work closely with all stakeholders to determine which goals we can fulfill within the imposed constraints
- Some services may be better provided by colleagues, e.g.:
  - Local IT Support (ISG) or another ITS section, e.g. S4D or SD
  - Swiss National Supercomputing Centre (CSCS)
- We must focus on the goals with the highest impact for ETH:
  - Goals that benefit multiple research groups and many ETH researchers
  - Goals of strategic importance

11 From Brutus to Euler and Leonhard
- Brutus (decommissioned last month) was a multi-purpose cluster:
  - Standard compute nodes optimized for energy efficiency (due to power & cooling limitations)
  - GPGPU nodes
- Its duties are being taken over by Euler and the new Leonhard cluster:
  - Euler focuses on high-performance / high-throughput computing (HPC / HTC)
  - Leonhard will be the platform of choice for special applications, especially GPGPUs
- Euler III and Leonhard are extending into new areas:
  - Euler III provides very high single-core performance for serial applications
  - Leonhard Open provides fast, scalable and cost-effective storage for Big Data applications
  - Leonhard Med provides a secure environment (including built-in encryption) for computing on sensitive biomedical data

12 Performance growth and storage growth [charts: peak performance in TF and storage capacity in TB for Brutus, Euler and Leonhard]

14 Motivation for Euler III
- Previous Euler expansions were aimed at large parallel jobs:
  - Compute nodes with 24 cores, connected via a high-speed InfiniBand network
- It turns out that many Euler users (still) run single-core jobs:
  - They do not need InfiniBand, and would rather have smaller, faster CPUs
  - In 2016, single-core jobs represented 84% of all jobs submitted on Euler, but only 17% of all CPU time used on Euler
  - These users have seen little performance improvement since Euler I
- Let's do something different for them!

15 And now for something completely different
- Euler III is based on Hewlett-Packard's Moonshot platform
- It is a radical change from the compute nodes that we have used so far
- In fact it is so radical that Euler III is the largest system of its kind in the world
- Each compute node is an HPE m710x server featuring:
  - A quad-core Intel Xeon E3-1585Lv5 (Skylake) CPU
  - 32 GB of DDR4 RAM clocked at 2133 MHz
  - A 256-GB NVMe SSD
  - A 10G Ethernet network connection
- That's more computing power than an old Brutus node, in 1/10th the volume!

16 [Photo of an HPE m710x cartridge, with callouts: battery, chassis connector, power button, Intel Xeon E3-1585Lv5 (under heat sink), m.2 NVMe SSD (4x), memory modules (4x), m.2 SATA SSD (1x)] Photos courtesy of Hewlett-Packard Enterprise

17 Photo by O. Byrde, ETH Zurich

18 Photo by O. Byrde, ETH Zurich

19 Current status of Euler III
- In total Euler III contains 1215 nodes / 4860 cores / 9720 threads
  - That is enough to handle all single-core jobs currently running on Euler
- It was opened to selected beta users one month ago:
  - Their feedback regarding performance has been very positive
  - The hardware has proven extremely reliable (1 minor incident per week on average)
- Beta testing will now focus on the new software environment (CentOS 7 and LSF 10) that we plan to introduce on Euler as a whole in a few months
- Please let us know if you would like to give it a try
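The headline figures are internally consistent; a quick sanity check (the per-node figures are taken from the Euler III slides: quad-core CPUs with 2 hardware threads per core):

```python
# Euler III sizing check: 1215 nodes, each with a quad-core Xeon
# and 2 hardware threads per core (Hyper-Threading), per the slides.
nodes = 1215
cores_per_node = 4
threads_per_core = 2

cores = nodes * cores_per_node
threads = cores * threads_per_core
print(cores, threads)  # 4860 9720
```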

20 Future Euler expansions & upgrades
- We are already working on Euler IV:
  - The type and number of nodes will be defined in the coming months
  - Please let us know what you need, so that we can take it into account
  - Installation expected in September 2017
- We are also working on a massive storage upgrade:
  - A new storage system with petabytes of high-speed storage
  - Installation expected in June 2017

22 New Leonhard cluster for Big Data applications at ETH
- The main features of Leonhard are:
  - Multiple petabytes of fast parallel storage, scaling to tens of petabytes
  - Different compute nodes for specific use cases (high throughput, large memory, GPUs, etc.)
  - Enhanced security for sensitive applications (e.g. personalized medicine)
  - Innovative tools and methods to analyze these data (Hadoop, R, Spark, TensorFlow, etc.)
- Leonhard was approved by the Executive Board of ETH in September 2016:
  - The Executive Board also provided partial funding for the medical part
  - The rest is financed by shareholders and by IT Services

23 Infrastructure
- A secure computer room has been built just for Leonhard
- It contains 22 racks and 12 heat exchangers arranged in a so-called warm island:
  - Flexible cooling supports low- to high-density systems ( kW per rack)
  - Ideal for the mix of storage, compute and GPU nodes needed for Big Data
- The whole electrical infrastructure is protected against power interruptions (UPS)
- The room is locked and accessible to authorized personnel only

24 Warm island. Picture courtesy of Modulan GmbH

25 Photo by O. Byrde, ETH Zurich

26 Storage systems
- Core storage:
  - Redundant NFS servers for system management, applications and users' homes
  - Usable capacity of 70 TB, stored entirely on SSD
  - Built-in encryption
- Big Data storage:
  - Parallel file system based on IBM Spectrum Scale Advanced Edition (formerly GPFS)
  - Initial capacity of 3.5 PB split between two separate storage systems ( PB)
  - Expandable to 20 PB ( PB) on short notice, and to tens of PB within a few months
  - Performance of ~10 GB/s per system; will grow to GB/s as we expand capacity
  - Built-in encryption
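To put the quoted bandwidth in perspective, a back-of-the-envelope estimate (a sketch, with two assumptions not stated on the slide: decimal units, 1 PB = 10^15 bytes, and both systems streaming at full speed):

```python
# Time to read the initial 3.5 PB Big Data storage at the aggregate
# bandwidth of the two systems (~10 GB/s each, per the slide).
capacity_bytes = 3.5 * 10**15
bandwidth_bytes_per_s = 2 * 10 * 10**9  # two systems x ~10 GB/s

hours = capacity_bytes / bandwidth_bytes_per_s / 3600
print(f"full scan: ~{hours:.0f} hours")  # ~49 hours, i.e. about two days
```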

27 Data protection
- Hardware protection:
  - The storage systems of Leonhard use double-parity RAID to protect against disk failures
  - They use redundant file servers configured for high-availability (HA) operation
- File system protection:
  - All file systems support snapshots to protect against accidental file modification / deletion
  - Snapshot frequency and retention time will be defined according to customers' needs
- Backup:
  - System state, applications and users' homes are backed up daily to a remote system
  - Big Data storage is not backed up (tape backup may be an option; this will need to be discussed and tested together with the Storage Group of IT Services)
  - Until then, critical data on Leonhard Med can be duplicated on Leonhard Open
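Double-parity RAID tolerates two simultaneous disk failures per array at the cost of two disks' worth of capacity. A sketch of the overhead (the 8+2 geometry and the disk size are illustrative assumptions, not figures from the slides):

```python
# RAID 6 (double parity): N data disks + 2 parity disks per array;
# any 2 disks in the array can fail without data loss.
data_disks = 8      # assumed geometry, for illustration only
parity_disks = 2
disk_tb = 10        # hypothetical disk size in TB

total_disks = data_disks + parity_disks
usable_fraction = data_disks / total_disks
usable_tb = usable_fraction * total_disks * disk_tb
print(f"usable: {usable_fraction:.0%} ({usable_tb:.0f} of {total_disks * disk_tb} TB)")
```

With wider stripes the overhead shrinks, but rebuild times grow; the actual geometry is a vendor/configuration choice.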

28 Compute nodes
- Standard / large-memory nodes:
  - 2 x 18-core Intel Xeon CPUs
  - 128 / 512 GB RAM
- GPU nodes (on order; not delivered yet):
  - 2 x 10-core Intel Xeon CPUs
  - 256 GB RAM
  - 8 x Nvidia GTX-1080 GPUs
- Different types of nodes will be added later as needed:
  - Very large memory, Xeon Phi, Hadoop, etc.
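One way to compare the node types is memory per core, computed from the figures on this slide:

```python
# Cores and RAM (GB) per Leonhard node type, from the slide.
node_types = {
    "standard":     (2 * 18, 128),
    "large-memory": (2 * 18, 512),
    "gpu":          (2 * 10, 256),
}
for name, (cores, ram_gb) in node_types.items():
    print(f"{name}: {ram_gb / cores:.1f} GB RAM per core")
# standard: 3.6, large-memory: 14.2, gpu: 12.8
```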

29 Software
- Same software stack as Euler:
  - Same OS, admin tools, batch system, compilers, libraries and applications
  - Allows us to exploit synergies between the two clusters
  - Makes life easier for people using both clusters
- Some software (e.g. the CUDA library for Nvidia GPUs) will be usable only on Leonhard

30 Access restricted to shareholders
- Unlike Euler, which is open to all users of ETH, Leonhard will be accessible to shareholders only
- Shareholders can grant access to collaborators outside their group as needed
- SIS will create temporary accounts on demand for groups who would like to test Leonhard before becoming shareholders

32 Leonhard is actually made of two clusters
- Leonhard Open, for the analysis of open (public) research data:
  - Same security level as Euler (firewall, login, NETHZ user & group file permissions)
  - File system encryption supported but not mandatory
- Leonhard Med, for the analysis of sensitive biomedical research data:
  - Access restricted to people officially authorized to handle these data
  - Enhanced security to protect against both theft and inadvertent leaks of sensitive data
  - Compliant with Swiss law and other regulations for medical data, such as the US Health Insurance Portability and Accountability Act (HIPAA)
  - Security policies defined by a committee of legal, medical and IT experts

33 Leonhard Open vs. Leonhard Med
- Intended for: Leonhard Open is for open Big Data applications; Leonhard Med is for sensitive Big Data applications from biomedical research
- Open to: Open is available to shareholders and their collaborators; Med to specific groups and users on a project basis
- Security: Open is standard for an academic cluster, similar to Euler; Med is enhanced, policy-based and strictly enforced
- File encryption: supported on Open; mandatory for certain applications on Med
- Storage capacity: Open starts at 2 PB, expandable to >10 PB; Med starts at 1.5 PB, expandable to >10 PB

34 Schematic view [diagram: login nodes, open storage and open compute nodes on the Leonhard Open side; login node, key server, Med compute nodes and Med storage behind enhanced security on the Leonhard Med side; one-way data flow (replication) between the two]

35 Flexible architecture
- To simplify hardware maintenance and system administration, Leonhard Open and Leonhard Med are based on identical components:
  - One type of server for admin, login and compute nodes
- The partition between the two clusters is not fixed:
  - Compute nodes can be moved from one cluster to the other as needed
  - For security reasons, this cannot be done on the fly (nodes must be drained and erased, networks must be reconfigured, the OS must be reinstalled, etc.)
- You can thus buy a share in Leonhard Open and convert it to Leonhard Med (or vice versa) later if needed for technical or compliance reasons

37 Timeline
- Leonhard Open is being installed right now:
  - Beta testing will start in late February or early March
  - Normal operation is expected to start in April
- Leonhard Med has been open to alpha users since December 2016:
  - Beta testing will start in March
  - Normal operation will start once security and compliance requirements have been defined, implemented and verified

38 Current Euler price list
- Compute nodes (valid for 4 years):
  - Standard node (24 cores, 64 GB RAM, InfiniBand FDR)
  - Large-memory node (24 cores, 256 GB RAM, InfiniBand FDR)
- Storage (valid for 4 years):
  - High-performance work storage, per TB
  - Long-term project storage, per TB
  - Future high-performance storage (mid-2017), per TB

39 Preliminary Leonhard price list (subject to change)
- Compute nodes (valid for 4 years):
  - Standard node (36 cores, 128 GB RAM, InfiniBand EDR)
  - Large-memory node (36 cores, 512 GB RAM, InfiniBand EDR)
  - GPU node (20 cores, 256 GB RAM, 8 x GTX-1080, InfiniBand EDR)
- Big Data storage (valid for 4 years):
  - First 20 TB
  - Each additional block of 20 TB, up to 100 TB
  - Each additional block of 50 TB, up to 500 TB
  - Larger capacities
- Storage is only offered in combination with compute nodes (and vice versa)
- Minimum investment: 1 compute node and 20 TB of storage
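The storage tiers above can be turned into a small block calculator. This is a sketch that only counts billable blocks (no prices are listed here); the tier boundaries are read from this slide, and the helper name is our own:

```python
def storage_blocks(tb):
    """Count billable storage blocks for a capacity of `tb` TB:
    the base block of 20 TB, then 20 TB blocks up to 100 TB,
    then 50 TB blocks up to 500 TB (larger capacities on request)."""
    if tb > 500:
        raise ValueError("larger capacities are quoted individually")
    covered_by_20tb = max(0, min(tb, 100) - 20)
    covered_by_50tb = max(0, tb - 100)
    n20 = -(-covered_by_20tb // 20)  # ceiling division
    n50 = -(-covered_by_50tb // 50)
    return (1, n20, n50)  # (base 20 TB block, 20 TB blocks, 50 TB blocks)

print(storage_blocks(120))  # (1, 4, 1)
```

For example, 120 TB is billed as the first 20 TB, four 20 TB blocks (to reach 100 TB), and one 50 TB block.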

40 Availability of Leonhard nodes
- Standard and large-memory nodes are available now in limited quantities:
  - We can order additional ones at any time (delivery time: 6-8 weeks)
  - World-wide RAM prices have increased a lot; this may affect the price of our next order
- The first batch of GPU nodes is sold out:
  - All the GPU nodes that we ordered have already been sold to shareholders
  - According to WTO rules, we need to issue a call for tender to order additional nodes
  - We are presently working on it; this process will take several months
  - If you have special wishes (GPU model, number of GPUs per node), please let us know

42 Useful links
- General information: IT Services; Scientific IT Services
- Wiki: Main page; Euler; Leonhard
- Support: IT Services support, servicedesk@id.ethz.ch; cluster support, cluster-support@id.ethz.ch


More information

Emerging Technologies for HPC Storage

Emerging Technologies for HPC Storage Emerging Technologies for HPC Storage Dr. Wolfgang Mertz CTO EMEA Unstructured Data Solutions June 2018 The very definition of HPC is expanding Blazing Fast Speed Accessibility and flexibility 2 Traditional

More information

Smarter Clusters from the Supercomputer Experts

Smarter Clusters from the Supercomputer Experts Smarter Clusters from the Supercomputer Experts Maximize Your Results with Flexible, High-Performance Cray CS500 Cluster Supercomputers In science and business, as soon as one question is answered another

More information

Sun Lustre Storage System Simplifying and Accelerating Lustre Deployments

Sun Lustre Storage System Simplifying and Accelerating Lustre Deployments Sun Lustre Storage System Simplifying and Accelerating Lustre Deployments Torben Kling-Petersen, PhD Presenter s Name Principle Field Title andengineer Division HPC &Cloud LoB SunComputing Microsystems

More information

HPE ProLiant ML350 Gen10 Server

HPE ProLiant ML350 Gen10 Server Digital data sheet HPE ProLiant ML350 Gen10 Server ProLiant ML Servers What's new Support for Intel Xeon Scalable processors full stack. 2600 MT/s HPE DDR4 SmartMemory RDIMM/LRDIMM offering 8, 16, 32,

More information

Preparing GPU-Accelerated Applications for the Summit Supercomputer

Preparing GPU-Accelerated Applications for the Summit Supercomputer Preparing GPU-Accelerated Applications for the Summit Supercomputer Fernanda Foertter HPC User Assistance Group Training Lead foertterfs@ornl.gov This research used resources of the Oak Ridge Leadership

More information

HPE ProLiant ML110 Gen10 Server

HPE ProLiant ML110 Gen10 Server Digital data sheet HPE ProLiant ML110 Gen10 Server ProLiant ML Servers What's new New SMB focused offers regionally released as Smart Buy Express in the U.S. and Canada, Top Value in Europe, and Intelligent

More information

HPE ProLiant ML110 Gen P 8GB-R S100i 4LFF NHP SATA 350W PS DVD Entry Server/TV (P )

HPE ProLiant ML110 Gen P 8GB-R S100i 4LFF NHP SATA 350W PS DVD Entry Server/TV (P ) Digital data sheet HPE ProLiant ML110 Gen10 3104 1P 8GB-R S100i 4LFF NHP SATA 350W PS DVD Entry Server/TV (P03684-425) ProLiant ML Servers What's new New SMB focused offers regionally released as Smart

More information

Veritas NetBackup Appliance Family OVERVIEW BROCHURE

Veritas NetBackup Appliance Family OVERVIEW BROCHURE Veritas NetBackup Appliance Family OVERVIEW BROCHURE Veritas NETBACKUP APPLIANCES Veritas understands the shifting needs of the data center and offers NetBackup Appliances as a way for customers to simplify

More information

TECHNICAL OVERVIEW ACCELERATED COMPUTING AND THE DEMOCRATIZATION OF SUPERCOMPUTING

TECHNICAL OVERVIEW ACCELERATED COMPUTING AND THE DEMOCRATIZATION OF SUPERCOMPUTING TECHNICAL OVERVIEW ACCELERATED COMPUTING AND THE DEMOCRATIZATION OF SUPERCOMPUTING Table of Contents: The Accelerated Data Center Optimizing Data Center Productivity Same Throughput with Fewer Server Nodes

More information

Introducing SUSE Enterprise Storage 5

Introducing SUSE Enterprise Storage 5 Introducing SUSE Enterprise Storage 5 1 SUSE Enterprise Storage 5 SUSE Enterprise Storage 5 is the ideal solution for Compliance, Archive, Backup and Large Data. Customers can simplify and scale the storage

More information

TECHNICAL GUIDELINES FOR APPLICANTS TO PRACE 16 th CALL (T ier-0)

TECHNICAL GUIDELINES FOR APPLICANTS TO PRACE 16 th CALL (T ier-0) PRACE 16th Call Technical Guidelines for Applicants V1: published on 26/09/17 TECHNICAL GUIDELINES FOR APPLICANTS TO PRACE 16 th CALL (T ier-0) The contributing sites and the corresponding computer systems

More information

Making Supercomputing More Available and Accessible Windows HPC Server 2008 R2 Beta 2 Microsoft High Performance Computing April, 2010

Making Supercomputing More Available and Accessible Windows HPC Server 2008 R2 Beta 2 Microsoft High Performance Computing April, 2010 Making Supercomputing More Available and Accessible Windows HPC Server 2008 R2 Beta 2 Microsoft High Performance Computing April, 2010 Windows HPC Server 2008 R2 Windows HPC Server 2008 R2 makes supercomputing

More information

The Stampede is Coming: A New Petascale Resource for the Open Science Community

The Stampede is Coming: A New Petascale Resource for the Open Science Community The Stampede is Coming: A New Petascale Resource for the Open Science Community Jay Boisseau Texas Advanced Computing Center boisseau@tacc.utexas.edu Stampede: Solicitation US National Science Foundation

More information

Data storage services at KEK/CRC -- status and plan

Data storage services at KEK/CRC -- status and plan Data storage services at KEK/CRC -- status and plan KEK/CRC Hiroyuki Matsunaga Most of the slides are prepared by Koichi Murakami and Go Iwai KEKCC System Overview KEKCC (Central Computing System) The

More information

IBM System p5 510 and 510Q Express Servers

IBM System p5 510 and 510Q Express Servers More value, easier to use, and more performance for the on demand world IBM System p5 510 and 510Q Express Servers System p5 510 or 510Q Express rack-mount servers Highlights Up to 4-core scalability with

More information

Specialised Server Technology for HD Surveillance

Specialised Server Technology for HD Surveillance Specialised Server Technology for HD Surveillance With our entry level server technology starting with double the throughput of commercially available IP CCTV standards, Secure Logiq servers have been

More information

An Oracle White Paper December Accelerating Deployment of Virtualized Infrastructures with the Oracle VM Blade Cluster Reference Configuration

An Oracle White Paper December Accelerating Deployment of Virtualized Infrastructures with the Oracle VM Blade Cluster Reference Configuration An Oracle White Paper December 2010 Accelerating Deployment of Virtualized Infrastructures with the Oracle VM Blade Cluster Reference Configuration Introduction...1 Overview of the Oracle VM Blade Cluster

More information

GPUs and Emerging Architectures

GPUs and Emerging Architectures GPUs and Emerging Architectures Mike Giles mike.giles@maths.ox.ac.uk Mathematical Institute, Oxford University e-infrastructure South Consortium Oxford e-research Centre Emerging Architectures p. 1 CPUs

More information

HPC and IT Issues Session Agenda. Deployment of Simulation (Trends and Issues Impacting IT) Mapping HPC to Performance (Scaling, Technology Advances)

HPC and IT Issues Session Agenda. Deployment of Simulation (Trends and Issues Impacting IT) Mapping HPC to Performance (Scaling, Technology Advances) HPC and IT Issues Session Agenda Deployment of Simulation (Trends and Issues Impacting IT) Discussion Mapping HPC to Performance (Scaling, Technology Advances) Discussion Optimizing IT for Remote Access

More information

Hewlett Packard Enterprise HPE GEN10 PERSISTENT MEMORY PERFORMANCE THROUGH PERSISTENCE

Hewlett Packard Enterprise HPE GEN10 PERSISTENT MEMORY PERFORMANCE THROUGH PERSISTENCE Hewlett Packard Enterprise HPE GEN10 PERSISTENT MEMORY PERFORMANCE THROUGH PERSISTENCE Digital transformation is taking place in businesses of all sizes Big Data and Analytics Mobility Internet of Things

More information

Brown County Virtualization Project

Brown County Virtualization Project Brown County Virtualization Project By Nicholas Duncanson Submitted to the Faculty of the Information Technology Program in Partial Fulfillment of the Requirements for the Degree of Bachelor of Science

More information

PowerServe HPC/AI Servers

PowerServe HPC/AI Servers Product Data Sheet PowerServe HPC/AI Servers Realize the power of cutting-edge technology HPC, Machine Learning and AI enterprise servers hand crafted for maximum performance. With 25+ years experience

More information

Welcome to the XSEDE Big Data Workshop

Welcome to the XSEDE Big Data Workshop Welcome to the XSEDE Big Data Workshop John Urbanic Parallel Computing Scientist Pittsburgh Supercomputing Center Copyright 2018 Who are we? Our satellite sites: Tufts University University of Utah Purdue

More information

The rcuda middleware and applications

The rcuda middleware and applications The rcuda middleware and applications Will my application work with rcuda? rcuda currently provides binary compatibility with CUDA 5.0, virtualizing the entire Runtime API except for the graphics functions,

More information

TECHNICAL GUIDELINES FOR APPLICANTS TO PRACE 6 th CALL (Tier-0)

TECHNICAL GUIDELINES FOR APPLICANTS TO PRACE 6 th CALL (Tier-0) TECHNICAL GUIDELINES FOR APPLICANTS TO PRACE 6 th CALL (Tier-0) Contributing sites and the corresponding computer systems for this call are: GCS@Jülich, Germany IBM Blue Gene/Q GENCI@CEA, France Bull Bullx

More information

OpenFOAM Performance Testing and Profiling. October 2017

OpenFOAM Performance Testing and Profiling. October 2017 OpenFOAM Performance Testing and Profiling October 2017 Note The following research was performed under the HPC Advisory Council activities Participating vendors: Huawei, Mellanox Compute resource - HPC

More information

BIG DATA AND HADOOP ON THE ZFS STORAGE APPLIANCE

BIG DATA AND HADOOP ON THE ZFS STORAGE APPLIANCE BIG DATA AND HADOOP ON THE ZFS STORAGE APPLIANCE BRETT WENINGER, MANAGING DIRECTOR 10/21/2014 ADURANT APPROACH TO BIG DATA Align to Un/Semi-structured Data Instead of Big Scale out will become Big Greatest

More information

HP s Performance Oriented Datacenter

HP s Performance Oriented Datacenter HP s Performance Oriented Datacenter and Automation SEAH Kwang Leng Marketing Manager Enterprise Storage and Servers Asia Pacific & Japan 2008 Hewlett-Packard Development Company, L.P. The information

More information

Mission-Critical Lustre at Santos. Adam Fox, Lustre User Group 2016

Mission-Critical Lustre at Santos. Adam Fox, Lustre User Group 2016 Mission-Critical Lustre at Santos Adam Fox, Lustre User Group 2016 About Santos One of the leading oil and gas producers in APAC Founded in 1954 South Australia Northern Territory Oil Search Cooper Basin

More information

Server Success Checklist: 20 Features Your New Hardware Must Have

Server Success Checklist: 20 Features Your New Hardware Must Have Server Success Checklist: 20 Features Your New Hardware Must Have June 2015 x86 rack servers keep getting more powerful, more energy efficient, and easier to manage. This is making it worthwhile for many

More information

The Hyperion Project: Collaboration for an Advanced Technology Cluster Testbed. November 2008

The Hyperion Project: Collaboration for an Advanced Technology Cluster Testbed. November 2008 1 The Hyperion Project: Collaboration for an Advanced Technology Cluster Testbed November 2008 Extending leadership to the HPC community November 2008 2 Motivation Collaborations Hyperion Cluster Timeline

More information

Nimble Storage vs HPE 3PAR: A Comparison Snapshot

Nimble Storage vs HPE 3PAR: A Comparison Snapshot Nimble Storage vs HPE 3PAR: A 1056 Baker Road Dexter, MI 48130 t. 734.408.1993 Nimble Storage vs HPE 3PAR: A INTRODUCTION: Founders incorporated Nimble Storage in 2008 with a mission to provide customers

More information

Xyratex ClusterStor6000 & OneStor

Xyratex ClusterStor6000 & OneStor Xyratex ClusterStor6000 & OneStor Proseminar Ein-/Ausgabe Stand der Wissenschaft von Tim Reimer Structure OneStor OneStorSP OneStorAP ''Green'' Advancements ClusterStor6000 About Scale-Out Storage Architecture

More information

SNAP Performance Benchmark and Profiling. April 2014

SNAP Performance Benchmark and Profiling. April 2014 SNAP Performance Benchmark and Profiling April 2014 Note The following research was performed under the HPC Advisory Council activities Participating vendors: HP, Mellanox For more information on the supporting

More information

Department of Mechanical Engineering, Indian Institute of Technology Madras Chennai 36

Department of Mechanical Engineering, Indian Institute of Technology Madras Chennai 36 Department of Mechanical Engineering, Indian Institute of Technology Madras Chennai 36 Dr. V V S D Ratna Kumar Annabattula & Dr. Manoj Pandey Phone :044-2257 4719 Project Coordinators Fax :044-2257 4652

More information

IBM System p5 550 and 550Q Express servers

IBM System p5 550 and 550Q Express servers The right solutions for consolidating multiple applications on a single system IBM System p5 550 and 550Q Express servers Highlights Up to 8-core scalability using Quad-Core Module technology Point, click

More information

System Design of Kepler Based HPC Solutions. Saeed Iqbal, Shawn Gao and Kevin Tubbs HPC Global Solutions Engineering.

System Design of Kepler Based HPC Solutions. Saeed Iqbal, Shawn Gao and Kevin Tubbs HPC Global Solutions Engineering. System Design of Kepler Based HPC Solutions Saeed Iqbal, Shawn Gao and Kevin Tubbs HPC Global Solutions Engineering. Introduction The System Level View K20 GPU is a powerful parallel processor! K20 has

More information

Guillimin HPC Users Meeting. Bryan Caron

Guillimin HPC Users Meeting. Bryan Caron July 17, 2014 Bryan Caron bryan.caron@mcgill.ca McGill University / Calcul Québec / Compute Canada Montréal, QC Canada Outline Compute Canada News Upcoming Maintenance Downtime in August Storage System

More information

TECHNICAL GUIDELINES FOR APPLICANTS TO PRACE 11th CALL (T ier-0)

TECHNICAL GUIDELINES FOR APPLICANTS TO PRACE 11th CALL (T ier-0) TECHNICAL GUIDELINES FOR APPLICANTS TO PRACE 11th CALL (T ier-0) Contributing sites and the corresponding computer systems for this call are: BSC, Spain IBM System X idataplex CINECA, Italy The site selection

More information

IBM System Storage DCS3700

IBM System Storage DCS3700 IBM System Storage DCS3700 Maximize performance, scalability and storage density at an affordable price Highlights Gain fast, highly dense storage capabilities at an affordable price Deliver simplified

More information

Genius Quick Start Guide

Genius Quick Start Guide Genius Quick Start Guide Overview of the system Genius consists of a total of 116 nodes with 2 Skylake Xeon Gold 6140 processors. Each with 18 cores, at least 192GB of memory and 800 GB of local SSD disk.

More information

Trends in HPC (hardware complexity and software challenges)

Trends in HPC (hardware complexity and software challenges) Trends in HPC (hardware complexity and software challenges) Mike Giles Oxford e-research Centre Mathematical Institute MIT seminar March 13th, 2013 Mike Giles (Oxford) HPC Trends March 13th, 2013 1 / 18

More information

Performance Pack. Benchmarking with PlanetPress Connect and PReS Connect

Performance Pack. Benchmarking with PlanetPress Connect and PReS Connect Performance Pack Benchmarking with PlanetPress Connect and PReS Connect Contents 2 Introduction 4 Benchmarking results 5 First scenario: Print production on demand 5 Throughput vs. Output Speed 6 Second

More information

Advanced Research Compu2ng Informa2on Technology Virginia Tech

Advanced Research Compu2ng Informa2on Technology Virginia Tech Advanced Research Compu2ng Informa2on Technology Virginia Tech www.arc.vt.edu Personnel Associate VP for Research Compu6ng: Terry Herdman (herd88@vt.edu) Director, HPC: Vijay Agarwala (vijaykag@vt.edu)

More information

IBM Power Systems: Open innovation to put data to work Dexter Henderson Vice President IBM Power Systems

IBM Power Systems: Open innovation to put data to work Dexter Henderson Vice President IBM Power Systems IBM Power Systems: Open innovation to put data to work Dexter Henderson Vice President IBM Power Systems 2014 IBM Corporation Powerful Forces are Changing the Way Business Gets Done Data growing exponentially

More information

ACCELERATED COMPUTING: THE PATH FORWARD. Jen-Hsun Huang, Co-Founder and CEO, NVIDIA SC15 Nov. 16, 2015

ACCELERATED COMPUTING: THE PATH FORWARD. Jen-Hsun Huang, Co-Founder and CEO, NVIDIA SC15 Nov. 16, 2015 ACCELERATED COMPUTING: THE PATH FORWARD Jen-Hsun Huang, Co-Founder and CEO, NVIDIA SC15 Nov. 16, 2015 COMMODITY DISRUPTS CUSTOM SOURCE: Top500 ACCELERATED COMPUTING: THE PATH FORWARD It s time to start

More information

HPE ProLiant ML350 Gen P 16GB-R E208i-a 8SFF 1x800W RPS Solution Server (P04674-S01)

HPE ProLiant ML350 Gen P 16GB-R E208i-a 8SFF 1x800W RPS Solution Server (P04674-S01) Digital data sheet HPE ProLiant ML350 Gen10 4110 1P 16GB-R E208i-a 8SFF 1x800W RPS Solution Server (P04674-S01) ProLiant ML Servers What's new Support for Intel Xeon Scalable processors full stack. 2600

More information

Enterprise Architectures The Pace Accelerates Camberley Bates Managing Partner & Analyst

Enterprise Architectures The Pace Accelerates Camberley Bates Managing Partner & Analyst Enterprise Architectures The Pace Accelerates Camberley Bates Managing Partner & Analyst Change is constant in IT.But some changes alter forever the way we do things Inflections & Architectures Solid State

More information

Future Trends in Hardware and Software for use in Simulation

Future Trends in Hardware and Software for use in Simulation Future Trends in Hardware and Software for use in Simulation Steve Feldman VP/IT, CD-adapco April, 2009 HighPerformanceComputing Building Blocks CPU I/O Interconnect Software General CPU Maximum clock

More information