Jack Dongarra, INNOVATIVE COMPUTING LABORATORY, University of Tennessee / Oak Ridge National Laboratory
1 Computational Science, High Performance Computing, and the IGMCS Program. Jack Dongarra, INNOVATIVE COMPUTING LABORATORY, University of Tennessee / Oak Ridge National Laboratory
2 The Third Pillar of 21st Century Science. Computational science is a rapidly growing multidisciplinary field that uses advanced computing capabilities to understand and solve complex problems. Computational science enables us to: investigate phenomena where economics or constraints preclude experimentation; evaluate complex models and manage massive data volumes; transform business and engineering practices.
3 Computational Science Fuses Three Distinct Elements (figure).
4 Computational Science As An Emerging Academic Pursuit. Many programs in Computational Science: College for Computing (Georgia Tech, NJIT, CMU); Degrees (Rice, Utah, UCSB); Minor (Penn State, U Wisc, SUNY Brockport); Certificate (Old Dominion, U of Georgia, Boston U); Concentration (Cornell, Northeastern, Colorado State); Courses.
5 Graduate Minor in Computational Science. Students in one of the three general areas of Computational Science (Applied Mathematics, Computer-related disciplines, or a Domain Science) will become exposed to and better versed in the two areas outside their home area. A pool of courses covering each of the three main areas has been put together by the participating departments for students to select from.
6 IGMCS: Requirements. The Minor requires a combination of course work from three disciplines: Computer-related, Mathematics/Statistics, and a participating Science/Engineering domain (e.g., Chemical Engineering, Chemistry, Physics). At the Master's level, the minor in Computational Science requires 9 hours (3 courses) from the pools; at least 6 hours (2 courses) must be taken outside the student's home area, and students must take at least 3 hours (1 course) from each of the 2 non-home areas (Computer Related, Applied Mathematics, Domain Sciences). At the Doctoral level, the minor requires 15 hours (5 courses) from the pools; at least 9 hours (3 courses) must be taken outside the student's home area, and students must take at least 3 hours (1 course) from each of the 2 non-home areas.
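As a rough illustration of these distribution rules, a minimal sketch of a plan checker; the course names, hour values, and area labels are hypothetical examples, not part of any official IGMCS tooling.

```python
# Minimal sketch of the IGMCS hour-distribution rules described above.
# Course names, hours, and area labels are hypothetical examples.

def check_igmcs_plan(courses, home_area, level="masters"):
    """courses: list of (name, area, hours); areas are 'computer', 'math', 'domain'."""
    required_total = 9 if level == "masters" else 15
    required_outside = 6 if level == "masters" else 9
    non_home = {"computer", "math", "domain"} - {home_area}

    total = sum(h for _, _, h in courses)
    outside = sum(h for _, a, h in courses if a != home_area)
    per_area = {a: sum(h for _, ar, h in courses if ar == a) for a in non_home}

    ok = (total >= required_total
          and outside >= required_outside
          and all(h >= 3 for h in per_area.values()))   # 1 course per non-home area
    return ok, total, outside, per_area

# Example: a Master's student whose home area is a domain science.
plan = [("Parallel Computing", "computer", 3),
        ("Numerical Linear Algebra", "math", 3),
        ("Molecular Simulation", "domain", 3)]
print(check_igmcs_plan(plan, home_area="domain", level="masters"))
```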
7 IGMCS Process for Students: 1. A student, with guidance from their faculty advisor, lays out a program of courses. 2. Next, discussion with the department's IGMCS liaison. 3. A form is generated with the courses to be taken. 4. The form is submitted for approval by the IGMCS Program Committee.
8 IGMCS Participating Departments (Department: IGMCS Liaison)
Biochemistry & Cellular and Molecular Biology: Dr. Cynthia Peterson
Chemical Engineering: Dr. David Keffer
Chemistry: Dr. Robert Hinde
Earth and Planetary Sciences: Dr. Edmund Perfect
Ecology & Evolutionary Biology: Dr. Louis Gross
Electrical Engineering and Computer Science: Dr. Jack Dongarra
Genome Science & Technology: Dr. Cynthia Peterson
Geography: Dr. Bruce Ralston
Information Science: Dr. Peiling Wang
Materials Science and Engineering: Dr. James Morris (morrisj@ornl.gov)
Mathematics: Dr. Chuck Collins (ccollins@math.utk.edu)
Mechanical, Aerospace and Biomedical Engineering: Dr. A.J. Baker (ajbaker@utk.edu)
Physics: Dr. Thomas Papenbrock (tpapenbr@utk.edu)
Statistics: Dr. Hamparsum Bozdogan (bozdogan@utk.edu)
9 Currently 12 students signed up for the program. One graduate: Daniel Lucio.
10 Students in Departments Not Participating in the IGMCS Program. A student in such a situation can still participate: the student and advisor should submit to the Chair of the IGMCS Program Committee the courses to be taken. The requirement is still the same: the Minor requires a combination of course work from three disciplines - Computer Science related, Mathematics/Statistics, and a participating Science/Engineering domain (e.g., Chemical Engineering, Chemistry, Physics). The student's department should be encouraged to participate in the IGMCS program; this is easy to do, needing only an approved set of courses and a liaison.
11 Internship. This is optional but strongly encouraged. Students in the program can fulfill 3 hrs. of their requirement through an internship with researchers outside the student's major. The internship may be taken offsite, e.g. at ORNL, or on campus by working with a faculty member in another department. Internships must have the approval of the IGMCS Program Committee.
12 IGMCS Seminar Series (date: speaker, dept/affiliation)
Oct 15: Prof. Jack Dongarra, Electrical Eng. and Comp. Science
Oct 29: Prof. Lou Gross, Ecology & Evolutionary Biology
Nov 12: Prof. Jeremy Smith, Biochem/Cell & Molec Biol/ORNL
Nov 26: IGMCS student
Dec 03: Dr. Phil Andrews, Project Director of the UT National Center for Computational Sciences
14 TOP500: H. Meuer, H. Simon, E. Strohmaier, & J. Dongarra. A listing of the 500 most powerful computers in the world. Yardstick: Rmax from LINPACK MPP (Ax=b, dense problem; TPP performance). Updated twice a year: at SC'xy in the States in November and at the meeting in Germany in June. All data available from the TOP500 web site.
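To make the yardstick concrete, here is a minimal LINPACK-style measurement in Python with NumPy; this is an illustrative sketch, not the HPL code actually used for TOP500 submissions, and the problem size is an arbitrary example.

```python
# Minimal LINPACK-style measurement: time a dense Ax=b solve and report Gflop/s.
# Illustrative sketch only; not the HPL benchmark used for the TOP500.
import time
import numpy as np

n = 4000                                   # problem size (arbitrary example)
A = np.random.rand(n, n)
b = np.random.rand(n)

t0 = time.time()
x = np.linalg.solve(A, b)                  # LU factorization + triangular solves
elapsed = time.time() - t0

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # standard LINPACK flop count
print(f"n = {n}: {elapsed:.2f} s, {flops / elapsed / 1e9:.1f} Gflop/s")
```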
15 Performance Development (chart of the #1, #500, and SUM trend lines of the TOP500 over time): the #1 system has grown from the Fujitsu 'NWT' (59.7 GF/s) through the Intel ASCI Red (1.17 TF/s), IBM ASCI White, NEC Earth Simulator, and IBM BlueGene/L to the IBM Roadrunner at 1.02 PF/s; the aggregate SUM is now 11.7 PF/s; the #500 entry has grown from 0.4 GF/s, with a present-day laptop marked for comparison and an annotation showing a lag of roughly 6-8 years between the lines.
16 Performance Development & Projections (chart): the SUM, N=1, and N=500 trend lines of the TOP500, extrapolated from the Mflop/s era toward the Eflop/s range.
17 Performance Development & Projections (chart annotated with the time to complete a fixed problem and the concurrency involved at each scale): 1 Gflop/s, O(1) thread, ~1000 years; 1 Tflop/s, O(10^3) threads, ~1 year; 1 Pflop/s, O(10^6) threads, ~8 hours; 1 Eflop/s, O(10^9) threads, ~1 min.
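The annotations on this chart amount to a simple scaling argument. A sketch of the arithmetic follows; the fixed workload of roughly 3x10^19 floating-point operations is an assumed illustrative figure, chosen so that the times come out near the slide's annotations.

```python
# Time-to-solution for a fixed amount of work at different machine scales.
# The total work of ~3e19 floating-point operations is an assumed figure,
# chosen so the results roughly match the annotations on the slide above.
work = 3e19  # total floating-point operations (illustrative assumption)

rates = {"1 Gflop/s (O(1) thread)":      1e9,
         "1 Tflop/s (O(10^3) threads)":  1e12,
         "1 Pflop/s (O(10^6) threads)":  1e15,
         "1 Eflop/s (O(10^9) threads)":  1e18}

for label, rate in rates.items():
    seconds = work / rate
    years = seconds / (365 * 24 * 3600)
    print(f"{label:30s} {seconds:14.0f} s  (~{years:.3g} years)")
```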
18 LANL Roadrunner: A Petascale System in 2008. Connected Unit cluster: 192 Opteron nodes (180 with 2 dual-Cell blades connected with 4 PCIe x8 links); 17 clusters; second-stage InfiniBand 4x DDR interconnect (18 sets of 12 links to 8 switches). In total: 13,000 Cell HPC chips delivering 1.33 PetaFlop/s (from Cell), 7,000 dual-core Opterons, 122,000 cores, with a Cell chip for each Opteron core. Based on the 100 Gflop/s (DP) Cell chip. Hybrid design (2 kinds of chips & 3 kinds of cores); programming required at 3 levels.
19 Top10 of the June 2008 List (Computer; Installation Site; Country)
1. IBM Roadrunner, BladeCenter QS22/LS21; DOE/NNSA/LANL; USA; Rmax 1,026 TF/s (75% of Rpeak)
2. IBM BlueGene/L, eServer Blue Gene Solution; DOE/NNSA/LLNL; USA
3. IBM Intrepid, Blue Gene/P Solution; DOE/OS/ANL; USA
4. SUN Ranger, SunBlade x6420; NSF/TACC; USA
5. CRAY Jaguar, Cray XT4 QuadCore; DOE/OS/ORNL; USA
6. IBM JUGENE, Blue Gene/P Solution; Forschungszentrum Juelich (FZJ); Germany
7. SGI Encanto, SGI Altix ICE 8200; New Mexico Computing Applications Center; USA
8. HP EKA, Cluster Platform 3000 BL460c; Computational Research Lab, TATA SONS; India
9. IBM Blue Gene/P Solution; IDRIS; France
10. SGI Altix ICE 8200EX; Total Exploration Production; France
21 ORNL/UTK Computer Power Cost Projections. Over the next 5 years ORNL/UTK will deploy 2 large petascale systems: using 4 MW today, going to 15 MW before year end, and by 2012 possibly using more than 50 MW. Cost estimates are based on a fixed price per kWh; cooling adds 30% to the technical load (this is very efficient). Power becomes the architectural driver for future large systems. (Power-per-year figures include both DOE and NSF systems.)
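The projection follows from straightforward arithmetic; a sketch under assumed numbers follows. The $0.07/kWh rate is a placeholder (the actual figure on the slide is not stated here); the 30% cooling overhead and the 4/15/50 MW loads are from the slide.

```python
# Annual electricity cost for a machine room, as sketched on the slide.
# The $/kWh rate is an assumed placeholder; the 30% cooling overhead
# and the 4, 15, and 50 MW load levels come from the slide.
def annual_power_cost(load_mw, dollars_per_kwh=0.07, cooling_overhead=0.30):
    total_kw = load_mw * 1000 * (1 + cooling_overhead)   # technical load + cooling
    hours_per_year = 24 * 365
    return total_kw * hours_per_year * dollars_per_kwh

for mw in (4, 15, 50):   # today, end of year, ~2012
    print(f"{mw:>3} MW -> ${annual_power_cost(mw):,.0f} per year")
```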
22 Something's Happening Here (from K. Olukotun, L. Hammond, H. Sutter, and B. Smith). A hardware issue just became a software problem. In the old days, processors became faster each year; today the clock speed is fixed or getting slower. Things are still doubling, but now it is the number of cores: Moore's Law reinterpreted, with the number of cores doubling roughly every two years.
23 Multicore. What is multicore? A multicore chip is a single chip (socket) that combines two or more independent processing units, each providing an independent thread of control. Why multicore? The race for ever higher clock speeds is over. In the old days, new chips were faster and applications ran faster on them; today new chips are not faster, they just have more processors per chip. Applications and software must use those extra processors to become faster.
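As a minimal illustration of that last point, the sketch below uses Python's standard multiprocessing module to spread independent tasks across the cores of one chip; the work function is an arbitrary stand-in for real computation.

```python
# Minimal example of using multiple cores: the job only gets faster if the
# application explicitly spreads its work across the available cores.
# The work() function is an arbitrary stand-in for real computation.
from multiprocessing import Pool, cpu_count

def work(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [2_000_000] * 8
    with Pool(processes=cpu_count()) as pool:   # one worker per core
        results = pool.map(work, tasks)         # tasks run in parallel
    print(f"{cpu_count()} cores, {len(results)} tasks done")
```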
24 Power Cost of Frequency. Power ∝ Voltage² × Frequency (V²F); Frequency ∝ Voltage; therefore Power ∝ Frequency³.
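A worked consequence of the cube law, as a short sketch: under Power ∝ Frequency³ and idealized perfect parallel scaling (an assumption; shared-resource effects are ignored), several slower cores deliver more aggregate throughput per watt than one full-speed core.

```python
# Idealized consequence of Power ~ Frequency^3: many slower cores can beat
# one fast core on performance per watt. Perfect parallel scaling is assumed.
def relative_power(freq):          # per-core power, normalized to freq = 1.0
    return freq ** 3

def compare(cores, freq):
    perf = cores * freq            # aggregate throughput (idealized)
    power = cores * relative_power(freq)
    return perf, power

for cores, freq in [(1, 1.0), (2, 0.75), (4, 0.5)]:
    perf, power = compare(cores, freq)
    print(f"{cores} core(s) @ {freq:.2f}x clock: "
          f"perf {perf:.2f}x, power {power:.2f}x, perf/watt {perf / power:.2f}x")
```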
26 Today's Multicores. 98% of Top500 systems are based on multicore processors: 282 use quad-core, 204 use dual-core, and 3 use nona-core (the 9-core IBM Cell). Multicore chips of the day include the IBM Cell (9 cores), Intel Clovertown (4 cores), Sun Niagara2 (8 cores), SiCortex (6 cores), Intel Polaris (80 cores), AMD Opteron (4 cores), and IBM BG/P (4 cores).
27 Moore's Law Reinterpreted. The number of cores per chip doubles every two years, while clock speed decreases (not increases). We need to deal with systems with millions of concurrent threads, and future generations will have billions of threads. We need to be able to easily replace inter-chip parallelism with intra-chip parallelism. The number of threads of execution doubles every two years.
28 And then there's the GPGPUs. NVIDIA's Tesla T10P: 240 cores at 1.5 GHz; Tpeak 1 Tflop/s for 32-bit floating point, 100 Gflop/s for 64-bit floating point. S1070 board: 4 T10P devices, 700 Watts. GTX: T10P at 1.3 GHz; Tpeak 864 Gflop/s for 32-bit floating point, 86.4 Gflop/s for 64-bit floating point.
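The peak figures quoted above come from cores × clock × flops per core per cycle. A sketch of that arithmetic follows; the 3 flops/core/cycle value and the 1/10 double-precision ratio are assumptions chosen to roughly reproduce the slide's numbers, not vendor specifications confirmed here.

```python
# Peak-flops arithmetic behind the numbers above: cores x clock x flops/cycle.
# The 3 flops/core/cycle and the 1/10 double-precision ratio are assumptions
# chosen to roughly reproduce the figures quoted on the slide.
def peak_gflops(cores, clock_ghz, flops_per_cycle):
    return cores * clock_ghz * flops_per_cycle

sp = peak_gflops(cores=240, clock_ghz=1.5, flops_per_cycle=3)
print(f"Single precision peak: ~{sp:.0f} Gflop/s")       # ~1080, i.e. ~1 Tflop/s
print(f"Double precision peak: ~{sp / 10:.0f} Gflop/s")  # ~1/10 of SP, per the slide
```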
29 Intel's Larrabee Chip. Many x86 IA cores, scalable to Tflop/s. New cache architecture. New vector instruction set: vector memory operations, conditionals, integer and floating-point arithmetic. New vector processing unit / wide SIMD.
30 Architecture of Interest: a manycore chip composed of hybrid cores, some general purpose, some graphics, some floating point.
31 Architecture of Interest: a board composed of multiple such chips sharing memory.
32 Architecture of Interest: a rack composed of multiple boards.
33 Architecture of Interest: a room full of these racks. Think millions of cores.
34 Major Changes to Software. We must rethink the design of our software; this is another disruptive technology, similar to what happened with cluster computing and message passing. Rethink and rewrite the applications, algorithms, and software.
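For context on the message-passing model referenced above, here is a minimal sketch using mpi4py; it assumes an MPI installation and the mpi4py package are available, and the partial-sum workload is an arbitrary example.

```python
# Minimal message-passing example (the programming model the slide refers to):
# each process computes a partial result, and results are combined explicitly.
# Requires an MPI installation and mpi4py; run with e.g.
#   mpiexec -n 4 python this_script.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank sums its own slice of 0..999 (arbitrary example workload).
local = sum(i for i in range(1000) if i % size == rank)
total = comm.reduce(local, op=MPI.SUM, root=0)   # explicit communication

if rank == 0:
    print(f"{size} ranks, total = {total}")
```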
35 Exascale Computing. Exascale systems (10^18 Flop/s) are likely feasible by 2017±. On the order of 100 million processing elements (cores or mini-cores), with chips perhaps as dense as 1,000 cores per socket; clock rates will grow more slowly. 3D packaging is likely, along with large-scale optics-based interconnects. PB of aggregate memory. Tens of thousands of I/O channels to exabytes of secondary storage, with disk bandwidth-to-storage ratios not optimal for HPC use. Hardware- and software-based fault management. Achievable performance per watt will likely be the primary measure of progress.
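Since performance per watt is named as the primary measure, here is a sketch of the arithmetic connecting an exaflop target to a power budget; the efficiency values used are illustrative assumptions, not projections from the slide.

```python
# Power needed for a 1 Eflop/s system at a given energy efficiency.
# The efficiency values below are illustrative assumptions, not projections.
target_flops = 1e18   # 1 Eflop/s

for gflops_per_watt in (0.5, 5, 50):
    watts = target_flops / (gflops_per_watt * 1e9)
    print(f"{gflops_per_watt:>5} Gflop/s per watt -> {watts / 1e6:,.0f} MW")
```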
36 Conclusions. Moore's Law reinterpreted: the number of cores per chip doubles every two years while clock speed stays roughly stable, so threads of execution double every two years (toward 100 M cores). We need to deal with systems with millions of concurrent threads, and future generations will have billions of threads. MPI and programming languages from the 60s will not make it. Power is limiting clock rate growth, and power becomes the architectural driver for Exascale systems.