A Converged HPC & Big Data System for Nontraditional and HPC Research

1 A Converged HPC & Big Data System for Nontraditional and HPC Research
Nick Nystrom, Sr. Director of Research
Pittsburgh Supercomputing Center
Overview, March 15

2 Bridges, a new kind of supercomputer at PSC, is designed to converge HPC and Big Data, empower new research communities, bring desktop convenience to high-end computing, expand remote access, and help researchers to work more intuitively.
- Funded by a $17.2M NSF award, Bridges emphasizes usability, flexibility, and interactivity
- Available at no charge for open research and course support, and by arrangement to industry
- Popular programming languages and applications: R, MATLAB, Python, Hadoop, Spark
- 846 compute nodes containing 128GB (800), 3TB (42), and 12TB (4) of RAM each; 96 GPUs
- Dedicated nodes for persistent databases, web servers, and distributed services
- The first deployment of the Intel Omni-Path Architecture fabric

3 Acquisition and operation of Bridges are made possible by the National Science Foundation through award #ACI ($17.2M): Bridges: From Communities and Data to Workflows and Insight.
All trademarks, service marks, trade names, trade dress, product names, and logos appearing herein are the property of their respective owners.

4 Accessing Bridges
For open research: allocations on Bridges are available at no charge through XSEDE.
For industrial affiliates, Pennsylvania-based researchers, and others, to foster discovery and innovation and broaden participation in data-intensive computing: up to 10% of Bridges capacity is available on a discretionary basis. Contact Nick Nystrom, Bridges PI, at nystrom@psc.edu.

5 Motivating Use Cases
- Data-intensive applications & workflows
- Gateways: the power of HPC without the programming
- Shared data collections & related analysis tools
- Cross-domain analytics
- Deep learning
- Graph analytics, machine learning, genome sequence assembly, and other large-memory applications
- Scaling beyond the laptop
- Scaling research to teams and collaborations
- In-memory databases
- Optimization & parameter sweeps
- Distributed & service-oriented architectures
- Data assimilation from large instruments and Internet data
- Leveraging an extensive collection of interoperating software
- Research areas that haven't used HPC
- New approaches to traditional HPC fields (e.g., machine learning)
- Coupling applications in novel ways
- Leveraging large memory, GPUs, and high bandwidth

6 Objectives
Overview:
- Deliver HPC to new users and research communities.
- Bring the power of HPC to Big Data.
- Streamline access and provide burst capability to campuses.
Approach:
- Unify compute and storage on a single, high-performance fabric to unify HPC and Big Data.
- Embrace heterogeneity to enable expression of uniquely efficient application frameworks.
- Very large coherent shared memory: 12TB, 3TB, and 128GB of RAM per server supports productivity, transparent scaling, and important applications in (for example) data analytics, statistics, the life sciences, engineering, and machine learning.
- State-of-the-art GPUs support deep learning and acceleration of a broad range of applications.
- Implement a uniquely flexible environment featuring interactivity, gateways, databases, distributed services, high-productivity programming languages and frameworks, and virtualization and containers.

7 Interactivity
Interactivity is the feature most frequently requested by nontraditional HPC communities. Interactivity provides immediate feedback for doing exploratory data analytics and testing hypotheses. Bridges offers interactivity through a combination of shared and dedicated resources to maximize availability while accommodating different needs.

8 Gateways and Tools for Building Them
Gateways provide easy-to-use access to Bridges' HPC and data resources, allowing users to launch jobs, orchestrate complex workflows, and manage data from their browsers.
- Provide HPC-as-a-Service
- Extensive use of VMs, databases, and distributed services
Examples: Galaxy (PSU, Johns Hopkins), The Causal Web (Pitt, CMU), GenePattern (Broad Institute)

9 Virtualization and Containers
Virtual machines (VMs) enable flexibility, security, customization, reproducibility, ease of use, and interoperability with other services.
User demand is for custom database and web server installations to develop data-intensive, distributed applications, and for containers for custom software stacks and portability.
Bridges leverages OpenStack to provision resources and shift nodes between interactive, batch, Hadoop, and VM uses.

10 High-Productivity Programming
Supporting languages that communities already use is vital for them to apply HPC to their research questions.

11 Spark, Hadoop & Related Approaches
Bridges' large memory offers unique benefits to Spark applications.
Bridges enables workflows that integrate Spark/Hadoop, HPC, GPU, and/or large shared-memory components.
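The Spark model mentioned above can be illustrated in miniature. This is a pure-Python sketch of a map/reduce word count, not Spark itself; a real Bridges job would use PySpark's RDD API (e.g., `textFile(...).flatMap(...).reduceByKey(...)`), and the input lines here are made up for illustration.

```python
# Word count in the map/reduce style popularized by Hadoop and Spark,
# sketched with the Python standard library (illustrative stand-in only).
from collections import Counter
from itertools import chain

lines = ["big data on bridges", "big memory helps spark"]  # toy input
words = chain.from_iterable(line.split() for line in lines)  # "flatMap" step
counts = Counter(words)                                      # "map" + "reduceByKey"

assert counts["big"] == 2
assert counts["spark"] == 1
```

On Bridges' large-memory nodes, the advantage for Spark is that working sets like `counts` can stay resident in RAM rather than spilling to disk between stages.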

12 Conceptual Architecture
Users, XSEDE, campuses, and instruments connect through web server nodes, database nodes, data transfer nodes, and login nodes. The Intel Omni-Path Architecture fabric links these with the parallel file system, management nodes, and the compute nodes:
- ESM nodes: 12TB RAM, 4 nodes
- LSM nodes: 3TB RAM, 42 nodes
- RSM nodes: 128GB RAM, 800 nodes, 48 with GPUs

13 Bridges Virtual Tour
Purpose-built Intel Omni-Path Architecture topology for data-intensive HPC:
- 20 Storage Building Blocks, implementing the parallel Pylon storage system (10 PB usable)
- 4 MDS nodes, 2 front-end nodes, 2 boot nodes, 8 management nodes
- 6 core Intel OPA edge switches: fully interconnected, 2 links per switch
- Intel OPA cables
- 4 HPE Integrity Superdome X (12TB) compute nodes, each with 2 OPA IB gateway nodes
- 42 HPE ProLiant DL580 (3TB) compute nodes
- 12 HPE ProLiant DL380 database nodes
- 6 HPE ProLiant DL360 web server nodes
- 20 leaf Intel OPA edge switches
- 800 HPE Apollo 2000 (128GB) compute nodes
- 32 RSM nodes, each with 2 NVIDIA Tesla P100 GPUs
- 16 RSM nodes, each with 2 NVIDIA Tesla K80 GPUs

14 Node Types
- ESM: 12TB (b), Intel Xeon E v3 (18c, 2.3/3.1 GHz, 45MB LLC); 12TB (c), Intel Xeon E v4 (22c, 2.2/3.3 GHz, 55MB LLC). Server: HPE Integrity Superdome X
- LSM: 3TB (b), Intel Xeon E v3 (16c, 2.2/3.2 GHz, 40MB LLC); 3TB (c), Intel Xeon E v4 (20c, 2.1/3.0 GHz, 50MB LLC). Server: HPE ProLiant DL580
- RSM: 128GB (b), Intel Xeon E v3 (14c, 2.3/3.3 GHz, 35MB LLC). Server: HPE Apollo 2000
- RSM-GPU: 128GB (b), Intel Xeon E v3 + 2 NVIDIA Tesla K80; 128GB (c), Intel Xeon E v4 (16c, 2.1/3.0 GHz, 40MB LLC) + 2 NVIDIA Tesla P100. Server: HPE Apollo 2000
- DB-s: 128GB (b), Intel Xeon E v3 + SSDs. Server: HPE ProLiant DL360
- DB-h: 128GB (b), Intel Xeon E v3 + HDDs. Server: HPE ProLiant DL380
- Web: 128GB (b), Intel Xeon E v3. Server: HPE ProLiant DL360
- Other (a): 128GB (b), Intel Xeon E v3. Servers: HPE ProLiant DL360, HPE ProLiant DL380
- Gateway: 64GB (b), Intel Xeon E v3 (14c, 2.0/3.0 GHz, 35MB LLC); 64GB (c), Intel Xeon E v3. Server: HPE ProLiant DL380
- Storage: 128GB (b), Intel Xeon E v3 (12c, 2.5/3.3 GHz, 30MB LLC); 256GB (c), Intel Xeon E v4 (14c, 2.4/3.3 GHz, 35MB LLC). Server: Supermicro X10DRi
- Total: 908 nodes
a. Other nodes = front end (2) + management/log (8) + boot (4) + MDS (4)
b. DDR; c. DDR

15 System Capacities
Compute nodes only, tabulated per node type (RSM, RSM-GPU, LSM, ESM, and total): peak fp64 (Tf/s), RAM (TB), and local disk (TB).
Parallel file system usable space: 10 PB

16 Intel Omni-Path Architecture (OPA)
Bridges is the first production deployment of Omni-Path. Omni-Path connects all nodes and the shared filesystem, providing Bridges and its users with:
- 100 Gbps line speed per port; 25 GB/s bidirectional bandwidth per port
- Measured 0.93μs latency, GB/s/dir
- 160M MPI messages per second
- 48-port edge switch reduces interconnect complexity and cost
- HPC performance, reliability, and QoS
- OFA-compliant applications supported without modification
- Early access to this new, important, forward-looking technology
Bridges deploys OPA in a two-tier island (leaf-spine) topology developed by PSC for cost-effective, data-intensive HPC.
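The two per-port figures above are consistent with each other, as a quick back-of-the-envelope check shows:

```python
# Sanity check of the quoted OPA per-port figures:
# 100 Gbps line rate = 12.5 GB/s in each direction, 25 GB/s bidirectional.
line_rate_gbps = 100                  # Gbps per port, per direction
gbytes_per_dir = line_rate_gbps / 8   # 12.5 GB/s one way (8 bits per byte)
bidirectional = 2 * gbytes_per_dir    # 25.0 GB/s, matching the slide

assert bidirectional == 25.0
```

(The division by 8 ignores encoding and protocol overhead, which is why measured bandwidth per direction comes in somewhat below 12.5 GB/s.)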

17 Data Management
Pylon: a large, central, high-performance storage system
- 10 PB usable
- Visible to all compute nodes
- Currently implemented as two complementary file systems:
  - /pylon1: Lustre, targeted for $SCRATCH use; non-wiped directories available by request
  - /pylon2: SLASH2, targeted for large datasets, community repositories, and distributed clients
Distributed (node-local) storage
- Enhance application portability
- Improve overall system performance
- Improve performance consistency to the shared filesystem
- Aggregate 7.2 PB (6 PB on non-GPU RSM nodes: Hadoop, Spark)

18 Database and Web Server Nodes
Dedicated database nodes power persistent relational and NoSQL databases
- Support data management and data-driven workflows
- SSDs for high IOPS; HDDs for high capacity
Dedicated web server nodes
- Enable distributed, service-oriented architectures
- High-bandwidth connections to XSEDE and the Internet

19 GPU Nodes
Bridges' GPUs are accelerating both deep learning and simulation codes.
Phase 1: 16 nodes, each with:
- 2 NVIDIA Tesla K80 GPUs (32 total)
- 2 Intel Xeon E v3 (14c, 2.3/3.3 GHz)
- 128GB DDR RAM
Tesla K80: Kepler architecture; 2496 CUDA cores (128/SM); 7.08B transistors on 561mm² die (28nm); 2 × 24 GB GDDR5, GB/s; 562 MHz base, 876 MHz boost; 2.91 Tf/s (64b), 8.73 Tf/s (32b)
Phase 2: +32 nodes, each with:
- 2 NVIDIA Tesla P100 GPUs (64 total)
- 2 Intel Xeon E v4 (16c, 2.1/3.0 GHz)
- 128GB DDR RAM
Tesla P100: Pascal architecture; 3584 CUDA cores (64/SM); 15.3B transistors on 610mm² die (16nm); 16GB CoWoS HBM2 at 720 GB/s with ECC; 1126 MHz base, 1303 MHz boost; 4.7 Tf/s (64b), 9.3 Tf/s (32b), 18.7 Tf/s (16b); page migration engine improves unified memory
64 P100 GPUs: 600 Tf/s (32b)
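The "600 Tf/s (32b)" headline follows directly from the per-GPU peaks listed above, as this arithmetic check shows (595.2 Tf/s, rounded to ~600 on the slide):

```python
# Aggregate single-precision peak from the per-GPU figures on this slide.
k80_fp32_tfs = 8.73    # Tf/s per K80 (32-bit)
p100_fp32_tfs = 9.3    # Tf/s per P100 (32-bit)
n_k80, n_p100 = 32, 64

p100_total = n_p100 * p100_fp32_tfs   # 595.2 Tf/s, the slide's "~600 Tf/s"
k80_total = n_k80 * k80_fp32_tfs      # 279.36 Tf/s from the Phase 1 GPUs

assert round(p100_total) == 595
```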

20 Examples of Research that Bridges is Enabling

21 The Causal Web
Center for Causal Discovery, an NIH Big Data to Knowledge Center of Excellence
Browser-based UI: prepare and upload data, run causal discovery algorithms, and visualize results over the Internet
Architecture, connected by Omni-Path:
- Web node VM: Apache Tomcat, messaging
- Database node VM: authentication, data management, provenance, other DBs
- LSM (3TB) and ESM (12TB) nodes execute the causal discovery algorithms on memory-resident datasets: FGS continuous and discrete, GFCI, and other algorithms, building on TETRAD
- Pylon filesystem: TCGA, fMRI data

22 Galaxy
Galaxy provides a powerful and flexible platform for creating and running reproducible scientific workflows, mostly focused on bioinformatics applications but also adopted by other domains. Some instances specialize, e.g., in metagenomics or proteomics.
Galaxy Main at TACC now routes large-memory Trinity assembly jobs to Bridges.
A second, Docker-based Galaxy instance at PSC serves a local user community.

23 Improved SNP Detection in Metagenomic Populations
Wenxuan Zhong, Xin Xing, and Ping Ma, Univ. of Georgia
- Assembled 378 gigabase pairs (Gbp) of gut microbial DNA from normal and diabetic patients
- The massive metagenome assembly took only 16 hours using Ray, an MPI-based metagenome assembler, on 20 Bridges RM nodes connected by Omni-Path
- Identified 2,480 species-level clusters in the assembled sequence data
- Ran MetaGen clustering software across 10 RM nodes, clustering 500,000 contiguous sequences in only 14 hours
- Currently testing a new likelihood-based statistical method for fast and accurate SNP detection from these metagenomic sequencing data
The team is now using Bridges to test a new statistical method on the sequence data to identify critical differences in gut microbes associated with diabetes.
Figure: Environmental Shotgun Sequencing (ESS). (A) Sampling from habitat; (B) filtering particles, typically by size; (C) DNA extraction and lysis; (D) cloning and library construction; (E) sequencing the clones; (F) sequence assembly. By John C. Wooley, Adam Godzik, Iddo Friedberg, CC BY 2.5

24 Characterizing Diverse Microbial Ecosystems from Terabase-Scale Metagenomic Data
Brian Couger, Oklahoma State University
- Assembled 11 metagenomes sampled from diverse sources, comprising over 3 trillion bases of sequence data
- Includes a recent massive assembly of 1.6 Tbp of metagenomic data from an oil sands tailings pond, a bioremediation target
- Excellent performance of the MPI-based Ray assembler on 90 RM nodes completed the assembly in only 4.25 days
- Analysis of the assembled data is in progress to characterize organisms present in these diverse environments and identify new microbial phyla
Image: oil sands tailings pond. By NASA Earth Observatory, public domain

25 Creation of the World's Largest k-mer Database
Rachid Ounit and Chris Mason, Cornell University
- Created a database of 153 billion species-specific nucleotide sequences (k-mers)
- Analyzed the entire NCBI Reference Sequence (RefSeq) archive, containing over 15k species, to create the k-mer database
- Required a massive in-memory hash table: the computation took 24 days and 4.8 TB of RAM on a 12 TB node of Bridges
- Allows rapid identification and classification of species in metagenomics samples
Image: Madprime, own work, CC BY-SA 3.0
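The in-memory hash table at the heart of this work can be sketched in a few lines. This toy version (made-up 8-base sequence, k=4) only illustrates the data structure; the real database held 153 billion k-mers and needed a 12 TB node precisely because such tables must fit in RAM for fast lookups:

```python
# Minimal k-mer counting with an in-memory hash table (illustrative only).
from collections import defaultdict

def kmers(seq, k):
    """Yield every length-k substring (k-mer) of seq."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

counts = defaultdict(int)
for kmer in kmers("ACGTACGT", 4):   # toy sequence; RefSeq has billions of bases
    counts[kmer] += 1

assert counts["ACGT"] == 2   # "ACGT" occurs at positions 0 and 4
```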

26 Applying Deep Learning to Connectomics
Goal: explore the potential of deep learning to automate segmentation of high-resolution scanning electron microscope (SEM) images of brain tissue and the tracing of neurons through 3D volumes, to automate generation of the connectome, a comprehensive map of neural connections.
Motivation: this project builds on an ongoing collaboration between PSC, Harvard University, and the Allen Institute for Brain Science, through which we have access to high-quality raw and labeled data. The SEM data volume for mouse cortex imaging is ~3TB/day, and data processing is currently human-intensive. Forthcoming camera systems will increase data bandwidth by a factor of 65.
Images courtesy Florian Engert, David Hildebrand, and their students at the Center for Brain Science, Harvard. Datasets: zebrafish larva, mouse. Data volume: mouse brain, 430mm³, ~1.4 exabytes.
Collaborators: Ishtar Nyawĩra, Iris Qian, Annie Zhang, Joel Welling, John Urbanic, Art Wetzel, Nick Nystrom

27 Some of the Deep Learning Projects Using Bridges
- Deep Learning of Game Strategies for RoboCup, Manuela Veloso (CMU)
- Automatic Evaluation of Scientific Writing, Diane Litman (U. of Pittsburgh)
- Image Classification Applied in Economic Studies, Param Singh (CMU)
- Exploring Stability, Cost, and Performance in Adversarial Deep Learning, Matt Fredrikson (CMU)
- Enabling Robust Image Understanding Using Deep Learning, Adriana Kovashka (U. of Pittsburgh)
- Preparing Grounds to Launch All-US Students Kaggle Competition on Drug Prediction, Gil Alterovitz (Harvard Medical School/Boston Children's Hospital)
- Deep Learning the Gene Regulatory Code, Shaun Mahony (Penn State)
- Automatic Building of Speech Recognizers for Non-Experts, Florian Metze (CMU)
- Development of a Hybrid Computational Approach for Macroscale Simulation of Exciton Diffusion in Polymer Thin Films, Based on Combined Machine Learning, Quantum-Classical Simulations and Master Equation Techniques, Peter Rossky (Rice U.)
- Developing Large-Scale Distributed Deep Learning Methods for Protein Bioinformatics, Junbo Xu (Toyota Technological Institute at Chicago)
- Education Allocation for the Course "Unstructured Data & Big Data: Acquisition to Analysis," Dokyun Lee (CMU)
- Deciphering Cellular Signaling System by Deep Mining a Comprehensive Genomic Compendium, Xinghua Lu (U. of Pittsburgh)
- Quantifying California Current Plankton Using Machine Learning, Mark Ohman (Scripps Institution of Oceanography)
- Petuum, a Distributed System for High-Performance Machine Learning, Eric Xing (CMU)
- Automatic Pain Assessment, Michael Reale (SUNY Polytechnic Institute)
- Learning to Parse Images and Videos, Deva Ramanan (CMU)
- Deep Recurrent Models for Fine-Grained Recognition, Michael Lam (Oregon State)

28 Thermal Hydraulics of Next-Generation Gas-Cooled Nuclear Reactors
PI: Mark Kimber, Texas A&M University
- Completed 3 large eddy simulations of turbulent thermal mixing of helium coolant in the lower plenum of the Gen IV High-Temperature Gas-cooled Reactor (HTGR)
- Performed using OpenFOAM, an open-source CFD code
- Simulations are for scaled-down versions of representative sections of the HTGR lower plenum, each containing 7 support posts and 6 high-Reynolds-number jets in a cross-flow
- Each simulation involves a high-quality block-structured mesh with 16+ million cells, and was run on 20 regular Bridges nodes (560 cores) for ~10 million time steps (~1 second of simulated time)
- Helps to understand the turbulent thermal mixing in great detail, as well as temperature distributions and hotspots on support posts
- Anirban Jana (PSC) is co-PI and computational lead for this DOE project
Acknowledgements: DOE-NEUP (grant no. DE-NE), NSF-XSEDE (grant no. CTS160002)
Figure: distribution of vorticity in a representative unit cell of the HTGR lower plenum containing 7 support posts and 6 jets in a crossflow (right to left). Turbulent mixing of core coolant jets in the lower plenum can cause temperature fluctuations and thermal fatigue of the support posts.
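The scale of these runs is implied by the numbers above: ~10 million time steps spanning ~1 second of simulated time gives a time step of roughly 0.1 microsecond, which is why each simulation needed hundreds of cores for an extended period. A quick check of that arithmetic:

```python
# Rough step-size arithmetic implied by the LES description above
# (both inputs are the approximate figures from the slide).
sim_time_s = 1.0            # ~1 second of simulated time
n_steps = 10_000_000        # ~10 million time steps
dt = sim_time_s / n_steps   # time step size in seconds

assert dt == 1e-07          # ~0.1 microsecond per step
```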

29 Simulations of Nanodisc Systems
Jifey Qi and Wonpil Im, Lehigh University
- Using NAMD on Bridges' NVIDIA P100 GPU nodes to simulate a number of nanodisc systems to be used as examples for the Nanodisc Builder module in CHARMM-GUI
- For a system size of 263,888 atoms, the simulation speed on one P100 node (6.25 ns/day) is faster than on three dual-CPU nodes (4.09 ns/day)
- CHARMM-GUI provides a web-based graphical user interface to generate various molecular simulation systems and input files (for CHARMM, NAMD, GROMACS, AMBER, GENESIS, LAMMPS, Desmond, OpenMM, and CHARMM/OpenMM) to facilitate and standardize the usage of common and advanced simulation techniques
Figure: a nanodisc system showing two helical proteins wrapping around a discoidal lipid bilayer.
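The benchmark numbers above imply a substantial per-node advantage for the GPU nodes, which a little arithmetic makes explicit:

```python
# Throughput comparison implied by the NAMD figures on this slide.
p100_node_ns_day = 6.25    # ns/day on one P100 GPU node
cpu_3nodes_ns_day = 4.09   # ns/day on three dual-CPU nodes

overall = p100_node_ns_day / cpu_3nodes_ns_day            # ~1.53x faster outright
per_node = p100_node_ns_day / (cpu_3nodes_ns_day / 3)     # ~4.6x per node

assert round(overall, 2) == 1.53
```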

30 Reconstructing Large Historical Social Networks
Christopher Warren, Carnegie Mellon University
Six Degrees of Francis Bacon employs statistical graph learning and interactive web interfaces to reconstruct and communicate the social networks of early modern Britain.
- Analyzes the 58,625 biographical entries of the Oxford Dictionary of National Biography (ODNB), a 62M-word corpus produced by 10,000 scholars
- Employs a modified Poisson graphical lasso method, confidence estimation, a chronological filter, probabilistic techniques for name disambiguation, and expert validation
- The methods are applicable to other historical and contemporary societies and other document collections
Additional resources: Six Degrees of Francis Bacon; "Using Supercomputers to Illuminate the Renaissance"; methods and R code.
Warren, C. N., et al. (2016). "Six Degrees of Francis Bacon: A Statistical Method for Reconstructing Large Historical Social Networks." Digital Humanities Quarterly 10(3).

31 Investigating Economic Impacts of Images and Natural Language in E-commerce
Dokyun Lee, CMU Tepper School of Business
- Security and uncertain quality create challenges for sharing economies
- Lee et al. studied the impact of high-quality, verified photos for Airbnb hosts: 17,000 properties over 4 months
- Used Bridges GPU nodes and large R jobs
- Difference-in-differences (DD) analysis showed that, on average, rooms with verified photos are booked 9% more often
- Separating the effects of photo verification from photo quality and room reviews indicates that high photo quality results in $2,455 of additional yearly earnings
- They found asymmetric spillover effects: at the neighborhood level, there appears to be higher overall demand when more rooms have verified photos
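The DD estimator behind the 9% figure compares the before/after change for treated listings against the same change for a control group. A minimal sketch on made-up booking rates (the study's actual estimate came from ~17,000 Airbnb properties, not these toy numbers):

```python
# Difference-in-differences in one line, on invented illustrative data.
# Booking rates before/after photo verification:
treated_pre, treated_post = 0.20, 0.31   # listings that got verified photos
control_pre, control_post = 0.22, 0.24   # listings that did not

# Subtracting the control group's change nets out common time trends.
dd = (treated_post - treated_pre) - (control_post - control_pre)

assert round(dd, 2) == 0.09   # the DD estimate of the verification effect
```

In practice the study would estimate this within a regression with listing and time controls; the subtraction above is the core identification idea.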

32 Leveraging Bridges to Support Anton 2
Anton 2, a 2nd-generation molecular dynamics supercomputer, is in production at PSC.
- Performs MD simulations ~100× faster than other resources
- Allows users to run simulations 4× faster, and larger, than the previous 1st-generation Anton system at PSC
- Allocations on the 1st-generation Anton have resulted in over 140 publications so far
Bridges supports Anton 2 users:
- Anton 2 has specific resources for analysis of large MD trajectories; simulations require pre-equilibrated MD systems
- The Anton 2 filesystem will be mounted on Bridges with Omni-Path to facilitate in situ analysis of large trajectories
Images 2016 D.E. Shaw Research. Used with permission.

33 Thank You. Questions?


More information

RECENT TRENDS IN GPU ARCHITECTURES. Perspectives of GPU computing in Science, 26 th Sept 2016

RECENT TRENDS IN GPU ARCHITECTURES. Perspectives of GPU computing in Science, 26 th Sept 2016 RECENT TRENDS IN GPU ARCHITECTURES Perspectives of GPU computing in Science, 26 th Sept 2016 NVIDIA THE AI COMPUTING COMPANY GPU Computing Computer Graphics Artificial Intelligence 2 NVIDIA POWERS WORLD

More information

GROMACS (GPU) Performance Benchmark and Profiling. February 2016

GROMACS (GPU) Performance Benchmark and Profiling. February 2016 GROMACS (GPU) Performance Benchmark and Profiling February 2016 2 Note The following research was performed under the HPC Advisory Council activities Participating vendors: Dell, Mellanox, NVIDIA Compute

More information

Building NVLink for Developers

Building NVLink for Developers Building NVLink for Developers Unleashing programmatic, architectural and performance capabilities for accelerated computing Why NVLink TM? Simpler, Better and Faster Simplified Programming No specialized

More information

University at Buffalo Center for Computational Research

University at Buffalo Center for Computational Research University at Buffalo Center for Computational Research The following is a short and long description of CCR Facilities for use in proposals, reports, and presentations. If desired, a letter of support

More information

irods at TACC: Secure Infrastructure for Open Science Chris Jordan

irods at TACC: Secure Infrastructure for Open Science Chris Jordan irods at TACC: Secure Infrastructure for Open Science Chris Jordan What is TACC? Texas Advanced Computing Center Cyberinfrastructure Resources for Open Science University of Texas System 9 Academic, 6

More information

in Action Fujitsu High Performance Computing Ecosystem Human Centric Innovation Innovation Flexibility Simplicity

in Action Fujitsu High Performance Computing Ecosystem Human Centric Innovation Innovation Flexibility Simplicity Fujitsu High Performance Computing Ecosystem Human Centric Innovation in Action Dr. Pierre Lagier Chief Technology Officer Fujitsu Systems Europe Innovation Flexibility Simplicity INTERNAL USE ONLY 0 Copyright

More information

WVU RESEARCH COMPUTING INTRODUCTION. Introduction to WVU s Research Computing Services

WVU RESEARCH COMPUTING INTRODUCTION. Introduction to WVU s Research Computing Services WVU RESEARCH COMPUTING INTRODUCTION Introduction to WVU s Research Computing Services WHO ARE WE? Division of Information Technology Services Funded through WVU Research Corporation Provide centralized

More information

Cisco and Cloudera Deliver WorldClass Solutions for Powering the Enterprise Data Hub alerts, etc. Organizations need the right technology and infrastr

Cisco and Cloudera Deliver WorldClass Solutions for Powering the Enterprise Data Hub alerts, etc. Organizations need the right technology and infrastr Solution Overview Cisco UCS Integrated Infrastructure for Big Data and Analytics with Cloudera Enterprise Bring faster performance and scalability for big data analytics. Highlights Proven platform for

More information

Preparing GPU-Accelerated Applications for the Summit Supercomputer

Preparing GPU-Accelerated Applications for the Summit Supercomputer Preparing GPU-Accelerated Applications for the Summit Supercomputer Fernanda Foertter HPC User Assistance Group Training Lead foertterfs@ornl.gov This research used resources of the Oak Ridge Leadership

More information

A Breakthrough in Non-Volatile Memory Technology FUJITSU LIMITED

A Breakthrough in Non-Volatile Memory Technology FUJITSU LIMITED A Breakthrough in Non-Volatile Memory Technology & 0 2018 FUJITSU LIMITED IT needs to accelerate time-to-market Situation: End users and applications need instant access to data to progress faster and

More information

Gen-Z Memory-Driven Computing

Gen-Z Memory-Driven Computing Gen-Z Memory-Driven Computing Our vision for the future of computing Patrick Demichel Distinguished Technologist Explosive growth of data More Data Need answers FAST! Value of Analyzed Data 2005 0.1ZB

More information

NVIDIA DGX SYSTEMS PURPOSE-BUILT FOR AI

NVIDIA DGX SYSTEMS PURPOSE-BUILT FOR AI NVIDIA DGX SYSTEMS PURPOSE-BUILT FOR AI Overview Unparalleled Value Product Portfolio Software Platform From Desk to Data Center to Cloud Summary AI researchers depend on computing performance to gain

More information

Accelerating High Performance Computing.

Accelerating High Performance Computing. Accelerating High Performance Computing http://www.nvidia.com/tesla Computing The 3 rd Pillar of Science Drug Design Molecular Dynamics Seismic Imaging Reverse Time Migration Automotive Design Computational

More information

World s most advanced data center accelerator for PCIe-based servers

World s most advanced data center accelerator for PCIe-based servers NVIDIA TESLA P100 GPU ACCELERATOR World s most advanced data center accelerator for PCIe-based servers HPC data centers need to support the ever-growing demands of scientists and researchers while staying

More information

GPU ACCELERATED COMPUTING. 1 st AlsaCalcul GPU Challenge, 14-Jun-2016, Strasbourg Frédéric Parienté, Tesla Accelerated Computing, NVIDIA Corporation

GPU ACCELERATED COMPUTING. 1 st AlsaCalcul GPU Challenge, 14-Jun-2016, Strasbourg Frédéric Parienté, Tesla Accelerated Computing, NVIDIA Corporation GPU ACCELERATED COMPUTING 1 st AlsaCalcul GPU Challenge, 14-Jun-2016, Strasbourg Frédéric Parienté, Tesla Accelerated Computing, NVIDIA Corporation GAMING PRO ENTERPRISE VISUALIZATION DATA CENTER AUTO

More information

GPFS Experiences from the Argonne Leadership Computing Facility (ALCF) William (Bill) E. Allcock ALCF Director of Operations

GPFS Experiences from the Argonne Leadership Computing Facility (ALCF) William (Bill) E. Allcock ALCF Director of Operations GPFS Experiences from the Argonne Leadership Computing Facility (ALCF) William (Bill) E. Allcock ALCF Director of Operations Argonne National Laboratory Argonne National Laboratory is located on 1,500

More information

Hyper Converged Systems 250 and 380

Hyper Converged Systems 250 and 380 Hyper Converged Systems 250 and 380 Martin Brandstetter Information Systems Architect Month day, year Transform to a hybrid infrastructure Accelerate the delivery of apps and services to your enterprise

More information

Accelerating Hadoop Applications with the MapR Distribution Using Flash Storage and High-Speed Ethernet

Accelerating Hadoop Applications with the MapR Distribution Using Flash Storage and High-Speed Ethernet WHITE PAPER Accelerating Hadoop Applications with the MapR Distribution Using Flash Storage and High-Speed Ethernet Contents Background... 2 The MapR Distribution... 2 Mellanox Ethernet Solution... 3 Test

More information

IBM CORAL HPC System Solution

IBM CORAL HPC System Solution IBM CORAL HPC System Solution HPC and HPDA towards Cognitive, AI and Deep Learning Deep Learning AI / Deep Learning Strategy for Power Power AI Platform High Performance Data Analytics Big Data Strategy

More information

HPE ProLiant ML350 Gen10 Server

HPE ProLiant ML350 Gen10 Server Digital data sheet HPE ProLiant ML350 Gen10 Server ProLiant ML Servers What's new Support for Intel Xeon Scalable processors full stack. 2600 MT/s HPE DDR4 SmartMemory RDIMM/LRDIMM offering 8, 16, 32,

More information

OpenPOWER Performance

OpenPOWER Performance OpenPOWER Performance Alex Mericas Chief Engineer, OpenPOWER Performance IBM Delivering the Linux ecosystem for Power SOLUTIONS OpenPOWER IBM SOFTWARE LINUX ECOSYSTEM OPEN SOURCE Solutions with full stack

More information

Abstract. The Challenges. ESG Lab Review InterSystems IRIS Data Platform: A Unified, Efficient Data Platform for Fast Business Insight

Abstract. The Challenges. ESG Lab Review InterSystems IRIS Data Platform: A Unified, Efficient Data Platform for Fast Business Insight ESG Lab Review InterSystems Data Platform: A Unified, Efficient Data Platform for Fast Business Insight Date: April 218 Author: Kerry Dolan, Senior IT Validation Analyst Abstract Enterprise Strategy Group

More information

The Future of High Performance Computing

The Future of High Performance Computing The Future of High Performance Computing Randal E. Bryant Carnegie Mellon University http://www.cs.cmu.edu/~bryant Comparing Two Large-Scale Systems Oakridge Titan Google Data Center 2 Monolithic supercomputer

More information

The BioHPC Nucleus Cluster & Future Developments

The BioHPC Nucleus Cluster & Future Developments 1 The BioHPC Nucleus Cluster & Future Developments Overview Today we ll talk about the BioHPC Nucleus HPC cluster with some technical details for those interested! How is it designed? What hardware does

More information

IBM Power Systems HPC Cluster

IBM Power Systems HPC Cluster IBM Power Systems HPC Cluster Highlights Complete and fully Integrated HPC cluster for demanding workloads Modular and Extensible: match components & configurations to meet demands Integrated: racked &

More information

Oracle Big Data Connectors

Oracle Big Data Connectors Oracle Big Data Connectors Oracle Big Data Connectors is a software suite that integrates processing in Apache Hadoop distributions with operations in Oracle Database. It enables the use of Hadoop to process

More information

HPE ProLiant ML350 Gen P 16GB-R E208i-a 8SFF 1x800W RPS Solution Server (P04674-S01)

HPE ProLiant ML350 Gen P 16GB-R E208i-a 8SFF 1x800W RPS Solution Server (P04674-S01) Digital data sheet HPE ProLiant ML350 Gen10 4110 1P 16GB-R E208i-a 8SFF 1x800W RPS Solution Server (P04674-S01) ProLiant ML Servers What's new Support for Intel Xeon Scalable processors full stack. 2600

More information

Lustre2.5 Performance Evaluation: Performance Improvements with Large I/O Patches, Metadata Improvements, and Metadata Scaling with DNE

Lustre2.5 Performance Evaluation: Performance Improvements with Large I/O Patches, Metadata Improvements, and Metadata Scaling with DNE Lustre2.5 Performance Evaluation: Performance Improvements with Large I/O Patches, Metadata Improvements, and Metadata Scaling with DNE Hitoshi Sato *1, Shuichi Ihara *2, Satoshi Matsuoka *1 *1 Tokyo Institute

More information

Cisco Unified Computing System Delivering on Cisco's Unified Computing Vision

Cisco Unified Computing System Delivering on Cisco's Unified Computing Vision Cisco Unified Computing System Delivering on Cisco's Unified Computing Vision At-A-Glance Unified Computing Realized Today, IT organizations assemble their data center environments from individual components.

More information

Rutgers Discovery Informatics Institute (RDI2)

Rutgers Discovery Informatics Institute (RDI2) Rutgers Discovery Informatics Institute (RDI2) Manish Parashar h+p://rdi2.rutgers.edu Modern Science & Society Transformed by Compute & Data The era of Extreme Compute and Big Data New paradigms and prac3ces

More information

HIGH PERFORMANCE COMPUTING (PLATFORMS) SECURITY AND OPERATIONS

HIGH PERFORMANCE COMPUTING (PLATFORMS) SECURITY AND OPERATIONS HIGH PERFORMANCE COMPUTING (PLATFORMS) SECURITY AND OPERATIONS AT PITT Kim F. Wong Center for Research Computing SAC-PA, June 22, 2017 Our service The mission of the Center for Research Computing is to

More information

MAHA. - Supercomputing System for Bioinformatics

MAHA. - Supercomputing System for Bioinformatics MAHA - Supercomputing System for Bioinformatics - 2013.01.29 Outline 1. MAHA HW 2. MAHA SW 3. MAHA Storage System 2 ETRI HPC R&D Area - Overview Research area Computing HW MAHA System HW - Rpeak : 0.3

More information

TACC s Stampede Project: Intel MIC for Simulation and Data-Intensive Computing

TACC s Stampede Project: Intel MIC for Simulation and Data-Intensive Computing TACC s Stampede Project: Intel MIC for Simulation and Data-Intensive Computing Jay Boisseau, Director April 17, 2012 TACC Vision & Strategy Provide the most powerful, capable computing technologies and

More information

LAMMPS-KOKKOS Performance Benchmark and Profiling. September 2015

LAMMPS-KOKKOS Performance Benchmark and Profiling. September 2015 LAMMPS-KOKKOS Performance Benchmark and Profiling September 2015 2 Note The following research was performed under the HPC Advisory Council activities Participating vendors: Intel, Dell, Mellanox, NVIDIA

More information

Data Movement & Tiering with DMF 7

Data Movement & Tiering with DMF 7 Data Movement & Tiering with DMF 7 Kirill Malkin Director of Engineering April 2019 Why Move or Tier Data? We wish we could keep everything in DRAM, but It s volatile It s expensive Data in Memory 2 Why

More information

Picasso Panel Thinking Beyond 5 G David Corman

Picasso Panel Thinking Beyond 5 G David Corman Picasso Panel Thinking Beyond 5 G David Corman Program Director Directorate for Computer and Information Science and Engineering National Science Foundation June 19, 2018 Some Motivation: Toward Smart

More information

ENABLING NEW SCIENCE GPU SOLUTIONS

ENABLING NEW SCIENCE GPU SOLUTIONS ENABLING NEW SCIENCE TESLA BIO Workbench The NVIDIA Tesla Bio Workbench enables biophysicists and computational chemists to push the boundaries of life sciences research. It turns a standard PC into a

More information

TECHNICAL OVERVIEW ACCELERATED COMPUTING AND THE DEMOCRATIZATION OF SUPERCOMPUTING

TECHNICAL OVERVIEW ACCELERATED COMPUTING AND THE DEMOCRATIZATION OF SUPERCOMPUTING TECHNICAL OVERVIEW ACCELERATED COMPUTING AND THE DEMOCRATIZATION OF SUPERCOMPUTING Accelerated computing is revolutionizing the economics of the data center. HPC enterprise and hyperscale customers deploy

More information

unleashed the future Intel Xeon Scalable Processors for High Performance Computing Alexey Belogortsev Field Application Engineer

unleashed the future Intel Xeon Scalable Processors for High Performance Computing Alexey Belogortsev Field Application Engineer the future unleashed Alexey Belogortsev Field Application Engineer Intel Xeon Scalable Processors for High Performance Computing Growing Challenges in System Architecture The Walls System Bottlenecks Divergent

More information

HPC Storage Use Cases & Future Trends

HPC Storage Use Cases & Future Trends Oct, 2014 HPC Storage Use Cases & Future Trends Massively-Scalable Platforms and Solutions Engineered for the Big Data and Cloud Era Atul Vidwansa Email: atul@ DDN About Us DDN is a Leader in Massively

More information

New Approach to Unstructured Data

New Approach to Unstructured Data Innovations in All-Flash Storage Deliver a New Approach to Unstructured Data Table of Contents Developing a new approach to unstructured data...2 Designing a new storage architecture...2 Understanding

More information

ACCI Recommendations on Long Term Cyberinfrastructure Issues: Building Future Development

ACCI Recommendations on Long Term Cyberinfrastructure Issues: Building Future Development ACCI Recommendations on Long Term Cyberinfrastructure Issues: Building Future Development Jeremy Fischer Indiana University 9 September 2014 Citation: Fischer, J.L. 2014. ACCI Recommendations on Long Term

More information

AI-accelerated HPC Hardware Infrastructure. Francis Lam Huawei Technologies

AI-accelerated HPC Hardware Infrastructure. Francis Lam Huawei Technologies AI-accelerated HPC Hardware Infrastructure Francis Lam Huawei Technologies Contents Huawei HPC Momentum Boundless Computing AI accelerating HPC HPC accelerating AI www.huawei.com Huawei Confidential 2

More information

High Performance Computing and Data Resources at SDSC

High Performance Computing and Data Resources at SDSC High Performance Computing and Data Resources at SDSC "! Mahidhar Tatineni (mahidhar@sdsc.edu)! SDSC Summer Institute! August 05, 2013! HPC Resources at SDSC Hardware Overview HPC Systems : Gordon, Trestles

More information

The GISandbox: A Science Gateway For Geospatial Computing. Davide Del Vento, Eric Shook, Andrea Zonca

The GISandbox: A Science Gateway For Geospatial Computing. Davide Del Vento, Eric Shook, Andrea Zonca The GISandbox: A Science Gateway For Geospatial Computing Davide Del Vento, Eric Shook, Andrea Zonca 1 Paleoscape Model and Human Origins Simulate Climate and Vegetation during the Last Glacial Maximum

More information

Maximize automotive simulation productivity with ANSYS HPC and NVIDIA GPUs

Maximize automotive simulation productivity with ANSYS HPC and NVIDIA GPUs Presented at the 2014 ANSYS Regional Conference- Detroit, June 5, 2014 Maximize automotive simulation productivity with ANSYS HPC and NVIDIA GPUs Bhushan Desam, Ph.D. NVIDIA Corporation 1 NVIDIA Enterprise

More information

THE EMC ISILON STORY. Big Data In The Enterprise. Deya Bassiouni Isilon Regional Sales Manager Emerging Africa, Egypt & Lebanon.

THE EMC ISILON STORY. Big Data In The Enterprise. Deya Bassiouni Isilon Regional Sales Manager Emerging Africa, Egypt & Lebanon. THE EMC ISILON STORY Big Data In The Enterprise Deya Bassiouni Isilon Regional Sales Manager Emerging Africa, Egypt & Lebanon August, 2012 1 Big Data In The Enterprise Isilon Overview Isilon Technology

More information

NAMD Performance Benchmark and Profiling. January 2015

NAMD Performance Benchmark and Profiling. January 2015 NAMD Performance Benchmark and Profiling January 2015 2 Note The following research was performed under the HPC Advisory Council activities Participating vendors: Intel, Dell, Mellanox Compute resource

More information

Dell EMC All-Flash solutions are powered by Intel Xeon processors. Learn more at DellEMC.com/All-Flash

Dell EMC All-Flash solutions are powered by Intel Xeon processors. Learn more at DellEMC.com/All-Flash N O I T A M R O F S N A R T T I L H E S FU FLA A IN Dell EMC All-Flash solutions are powered by Intel Xeon processors. MODERNIZE WITHOUT COMPROMISE I n today s lightning-fast digital world, your IT Transformation

More information

GPFS for Life Sciences at NERSC

GPFS for Life Sciences at NERSC GPFS for Life Sciences at NERSC A NERSC & JGI collaborative effort Jason Hick, Rei Lee, Ravi Cheema, and Kjiersten Fagnan GPFS User Group meeting May 20, 2015-1 - Overview of Bioinformatics - 2 - A High-level

More information

Overview of Tianhe-2

Overview of Tianhe-2 Overview of Tianhe-2 (MilkyWay-2) Supercomputer Yutong Lu School of Computer Science, National University of Defense Technology; State Key Laboratory of High Performance Computing, China ytlu@nudt.edu.cn

More information

Short Talk: System abstractions to facilitate data movement in supercomputers with deep memory and interconnect hierarchy

Short Talk: System abstractions to facilitate data movement in supercomputers with deep memory and interconnect hierarchy Short Talk: System abstractions to facilitate data movement in supercomputers with deep memory and interconnect hierarchy François Tessier, Venkatram Vishwanath Argonne National Laboratory, USA July 19,

More information

Arm in HPC. Toshinori Kujiraoka Sales Manager, APAC HPC Tools Arm Arm Limited

Arm in HPC. Toshinori Kujiraoka Sales Manager, APAC HPC Tools Arm Arm Limited Arm in HPC Toshinori Kujiraoka Sales Manager, APAC HPC Tools Arm 2019 Arm Limited Arm Technology Connects the World Arm in IOT 21 billion chips in the past year Mobile/Embedded/IoT/ Automotive/GPUs/Servers

More information

ACCELERATED COMPUTING: THE PATH FORWARD. Jen-Hsun Huang, Co-Founder and CEO, NVIDIA SC15 Nov. 16, 2015

ACCELERATED COMPUTING: THE PATH FORWARD. Jen-Hsun Huang, Co-Founder and CEO, NVIDIA SC15 Nov. 16, 2015 ACCELERATED COMPUTING: THE PATH FORWARD Jen-Hsun Huang, Co-Founder and CEO, NVIDIA SC15 Nov. 16, 2015 COMMODITY DISRUPTS CUSTOM SOURCE: Top500 ACCELERATED COMPUTING: THE PATH FORWARD It s time to start

More information

HPC Enabling R&D at Philip Morris International

HPC Enabling R&D at Philip Morris International HPC Enabling R&D at Philip Morris International Jim Geuther*, Filipe Bonjour, Bruce O Neel, Didier Bouttefeux, Sylvain Gubian, Stephane Cano, and Brian Suomela * Philip Morris International IT Service

More information

Advanced Research Compu2ng Informa2on Technology Virginia Tech

Advanced Research Compu2ng Informa2on Technology Virginia Tech Advanced Research Compu2ng Informa2on Technology Virginia Tech www.arc.vt.edu Personnel Associate VP for Research Compu6ng: Terry Herdman (herd88@vt.edu) Director, HPC: Vijay Agarwala (vijaykag@vt.edu)

More information

TECHNICAL OVERVIEW ACCELERATED COMPUTING AND THE DEMOCRATIZATION OF SUPERCOMPUTING

TECHNICAL OVERVIEW ACCELERATED COMPUTING AND THE DEMOCRATIZATION OF SUPERCOMPUTING TECHNICAL OVERVIEW ACCELERATED COMPUTING AND THE DEMOCRATIZATION OF SUPERCOMPUTING Accelerated computing is revolutionizing the economics of the data center. HPC and hyperscale customers deploy accelerated

More information

Isilon: Raising The Bar On Performance & Archive Use Cases. John Har Solutions Product Manager Unstructured Data Storage Team

Isilon: Raising The Bar On Performance & Archive Use Cases. John Har Solutions Product Manager Unstructured Data Storage Team Isilon: Raising The Bar On Performance & Archive Use Cases John Har Solutions Product Manager Unstructured Data Storage Team What we ll cover in this session Isilon Overview Streaming workflows High ops/s

More information

ABySS Performance Benchmark and Profiling. May 2010

ABySS Performance Benchmark and Profiling. May 2010 ABySS Performance Benchmark and Profiling May 2010 Note The following research was performed under the HPC Advisory Council activities Participating vendors: AMD, Dell, Mellanox Compute resource - HPC

More information

TESLA V100 PERFORMANCE GUIDE. Life Sciences Applications

TESLA V100 PERFORMANCE GUIDE. Life Sciences Applications TESLA V100 PERFORMANCE GUIDE Life Sciences Applications NOVEMBER 2017 TESLA V100 PERFORMANCE GUIDE Modern high performance computing (HPC) data centers are key to solving some of the world s most important

More information