Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, University of Manchester


1 Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, University of Manchester. 12/3/09

2
- A look at high performance computing
- What's driving HPC
- Issues with power consumption
- Future trends

3 (chart) TPP performance: Rate vs. Size.
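
The rate in question is the Linpack (HPL) rate used by the Top500: the speed at which a machine solves a dense n x n linear system. A minimal sketch, assuming the standard HPL operation count of 2n^3/3 + 2n^2 flops (the problem size and runtime below are made up for illustration):

```python
def hpl_gflops(n: int, seconds: float) -> float:
    """Rate in Gflop/s for solving an n x n dense system Ax = b,
    using the standard HPL operation count 2/3*n^3 + 2*n^2."""
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / seconds / 1e9

# Illustrative only: a problem of size n = 100,000 solved in 1,000 s
print(f"{hpl_gflops(100_000, 1_000.0):.1f} Gflop/s")  # ~666.7 Gflop/s
```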

4 (chart) Performance development over time, on a log scale from 100 Mflop/s to 100 Pflop/s; roughly 6-8 years separate equivalent performance levels on the trend lines, with "My Laptop" marked for comparison.

5 Looking at the Gordon Bell Prize (recognizes outstanding achievement in high-performance computing applications and encourages development of parallel processing):
- 1 GFlop/s, 1988: Cray Y-MP, 8 processors. Static finite element analysis.
- 1 TFlop/s, 1998: Cray T3E, 1024 processors. Modeling of metallic magnet atoms, using a variation of the locally self-consistent multiple scattering method.
- 1 PFlop/s, 2008: Cray XT5, 1.5x10^5 processors. Superconductive materials.
- 1 EFlop/s, ~2018: ?, 1x10^7 processors (10^9 threads).
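
The milestones imply a thousandfold increase roughly every decade, i.e. about 2x per year, which is the cadence the ~2018 exaflop projection extrapolates. A quick check of that arithmetic:

```python
# Gordon Bell Prize milestones from the slide: (year, flop/s);
# the 2018 entry is the slide's projection, not a result.
milestones = [(1988, 1e9), (1998, 1e12), (2008, 1e15), (2018, 1e18)]

for (y0, r0), (y1, r1) in zip(milestones, milestones[1:]):
    annual = (r1 / r0) ** (1.0 / (y1 - y0))   # implied annual growth factor
    print(f"{y0}->{y1}: {r1 / r0:,.0f}x overall, ~{annual:.2f}x per year")
# Each decade: 1,000x overall, ~2.00x per year
```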

6 (chart) Performance development in the Top500, on a log scale from 100 Mflop/s toward 1 Eflop/s, with the Gordon Bell winners overlaid.

7

8 (chart) Efficiency (0-100%) vs. TOP500 ranking.

9 (chart) Efficiency (0-100%) vs. TOP500 ranking.

10 (chart) Countries / system share: the United States leads with 55%, followed by the United Kingdom at 9%; the remaining shares (7% and below) are spread across France, Germany, Canada, Austria, New Zealand, Sweden, Russia, Italy, and others.

11 In the Netherlands, 3 systems on the Top500:

Rank  Site                Processor           System model         Power (MW)
 93   SARA                POWER6              IBM pSeries 575      0.55
184   Banking             Intel Xeon Nehalem  IBM xSeries Cluster  0.25
 ...  ASTRON/U Groningen  PowerPC 440         IBM BlueGene/L       0.13

12 (chart) Top500 customer segments over time, 1993-2009 (0-500 systems): industry, research, academic, classified, vendor, government, others.

13 Of the 500 fastest supercomputers worldwide, industrial use is > 60%.

14

15 The Top 10:

Rank  Site                                  Computer                                               Country  Cores
  1   DOE / OS, Oak Ridge Nat Lab           Jaguar / Cray XT5, six-core 2.6 GHz                    USA      224,...
  2   DOE / NNSA, Los Alamos Nat Lab        Roadrunner / IBM BladeCenter QS22/LS21                 USA      122,400
  3   NSF / NICS / U of Tennessee           Kraken / Cray XT5, six-core 2.6 GHz                    USA       98,...
  4   Forschungszentrum Juelich (FZJ)       Jugene / IBM Blue Gene/P Solution                      Germany  294,...
  5   National SC Center in Tianjin / NUDT  Tianhe-1 / NUDT TH-1, Intel QC + AMD ATI Radeon 4870   China     71,...
  6   NASA / Ames Research Center / NAS     Pleiades / SGI Altix ICE 8200EX                        USA       56,...
  7   DOE / NNSA, Lawrence Livermore NL     BlueGene/L / IBM eServer Blue Gene Solution            USA      212,...
  8   DOE / OS, Argonne Nat Lab             Intrepid / IBM Blue Gene/P Solution                    USA      163,...
  9   NSF / TACC / U. of Texas              Ranger / Sun SunBlade x6420                            USA       62,...
 10   DOE / NNSA, Sandia Nat Lab            Sun SunBlade 6275                                      USA       41,...

16 (the same Top 10 table, repeated with a MFlops/Watt column)

17 Jaguar: recently upgraded to a 2 Pflop/s system with more than 224K cores, using AMD's six-core chip.

Peak performance:        ~2 PF
System memory:           300 TB
Disk space:              10 PB
Disk bandwidth:          240+ GB/s
Interconnect bandwidth:  374 TB/s
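
These figures also give a feel for the fault-tolerance problem discussed later in the deck: a full-memory checkpoint cannot go faster than memory size divided by disk bandwidth. A back-of-the-envelope sketch using the slide's numbers:

```python
# Lower bound on the time to dump all of system memory to disk,
# ignoring everything but raw aggregate bandwidth.
memory_bytes = 300e12   # 300 TB system memory (from the slide)
disk_bw      = 240e9    # 240 GB/s disk bandwidth (from the slide)

seconds = memory_bytes / disk_bw
print(f"Full-memory dump: >= {seconds:.0f} s (~{seconds / 60:.0f} min)")  # ~1250 s, ~21 min
```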

18

19
- University of Tennessee's National Institute for Computational Sciences
- Housed at ORNL, operated for the NSF, named Kraken
- Number 3 on the Top500
- Just upgraded to 1 Pflop/s peak: 99,072 cores, AMD 2.6 GHz six-core chips, with 129 TB memory

20 Jugene:
- IBM BG/P: 72 racks with 32 nodecards x 32 compute nodes (total 73,728)
- Compute node: 4-way SMP processor
- Processor type: 32-bit PowerPC 450 core, 850 MHz; processors: 294,912
- Overall peak performance: 1 Pflop/s
- Linpack: ... Tflop/s
- Main memory: 2 GB per node (aggregate 144 TB)
- I/O nodes: 600
- Networks: three-dimensional torus (compute nodes)
- Power consumption: max. 35 kW per rack
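
The 1 Pflop/s peak is consistent with the parts list, assuming the usual 4 flops per cycle per core for Blue Gene/P's double FPU (that flops-per-cycle figure is supplied here, not on the slide):

```python
cores           = 294_912   # from the slide
clock_hz        = 850e6     # 850 MHz, from the slide
flops_per_cycle = 4         # assumption: BG/P double FPU, 2 fused multiply-adds/cycle

peak_flops = cores * clock_hz * flops_per_cycle
print(f"Peak: {peak_flops / 1e15:.2f} Pflop/s")   # ~1.00 Pflop/s
```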

21 Tianhe-1:
- Hybrid system, commodity + GPUs
- Theoretical peak 1.21 Pflop/s
- Linpack benchmark at ... Tflop/s
- 2,560 nodes; each node: 2 Intel quad-core Xeons
- 5,120 AMD ATI Radeon 4870 GPUs (each 10 cores)
- 71,680 cores
- InfiniBand connected
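
The transcription drops the leading digit of the GPU count; assuming two GPUs per node gives 5,120, and the slide's own total of 71,680 cores checks out arithmetically:

```python
nodes     = 2_560
cpu_cores = nodes * 2 * 4    # two quad-core Xeons per node -> 20,480
gpus      = nodes * 2        # assumption: two GPUs per node -> 5,120
gpu_cores = gpus * 10        # "each 10 cores", per the slide -> 51,200

print(f"{cpu_cores + gpu_cores:,} cores")   # 71,680, matching the slide
```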

22 (chart) Performance of the Top 20 over 10 years, in Pflop/s.

23 (chart: values 0-1.8 plotted against TOP500 ranks 1-495; no title survives in the transcription)

24 Moore's Law is alive and well (chart: transistor counts, in thousands, on a log scale from 1.E+00 to 1.E+07 over time).

25 But clock frequency scaling has been replaced by scaling cores per chip (chart: transistors in thousands, frequency in MHz, and core counts on the same log scale).

26 Performance has also slowed, along with power (chart: transistors in thousands, frequency in MHz, power in W, and core counts on the same log scale).

27 (chart) Frequency.

28 (chart) Frequency.

29
- The number of cores per chip doubles every two years, while clock speed decreases (not increases).
- Need to deal with systems with millions of concurrent threads; future generations will have billions of threads!
- Need to be able to easily replace inter-chip parallelism with intra-chip parallelism.
- The number of threads of execution doubles every two years.
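
At that doubling cadence, the slide's jump from millions to billions of threads is about ten doublings away (a small sketch; the 10^6 starting point is an assumption for illustration):

```python
threads, doublings = 1_000_000, 0    # assumed starting point: ~10^6 threads
while threads < 1_000_000_000:
    threads *= 2                     # doubles every ~2 years, per the slide
    doublings += 1
print(f"{doublings} doublings (~{2 * doublings} years) from 10^6 to 10^9 threads")
# -> 10 doublings, i.e. roughly two decades at that cadence
```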

30
- Barriers:
  - Fundamental assumptions of system software architecture did not anticipate exponential growth in parallelism
  - The number of components and the MTBF change the game
- Technical focus areas:
  - System hardware scalability
  - System software scalability
  - Applications scalability
- Technical gap:
  - 1000x improvement in system software scaling
  - 100x improvement in system software reliability

(chart: average number of cores per supercomputer, axis 0-100,000)

31
- Have been planning this for years
- Started in June 2008
- Independent from the Green500, but we try to learn from each other
- Collect power consumption for:
  - Linpack as the workload
  - including all essential parts of a system (processor, memory, & interconnect)
  - excluding features related to the machine room (most disk, UPS, ...)
- Analyze these data carefully!
- Rule of thumb: 1 MW ≈ 1000 homes
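
Applying the slide's rule of thumb to the power figures that appear later in this deck (7 MW for a Jaguar-class system today and ~20 MW targeted at exascale, per slide 39):

```python
HOMES_PER_MW = 1_000   # the slide's rule of thumb: 1 MW ~ 1000 homes

for label, mw in [("Jaguar-class today", 7), ("exascale target", 20)]:
    print(f"{label}: {mw} MW ~ {mw * HOMES_PER_MW:,} homes")
# 7 MW ~ 7,000 homes; 20 MW ~ 20,000 homes
```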

32 (chart) Power [kW] vs. TOP500 rank.

33
- To rank objects by size one needs extensive properties:
  - weight or volume
  - performance: Flop/s (Rmax in the TOP500); a larger system should have a larger Rmax
  - power consumption: Watts
- The ratio of two extensive properties is an intensive one:
  - weight / volume = density
  - performance / power consumption = power efficiency
- One cannot rank objects BY SIZE with densities:
  - density does not tell you anything about the size of an object
  - the density of lead compared to the density of wood does not tell you whether one is heavier or larger than the other
- Linpack / power will always sort smaller systems before larger ones!

(chart: power efficiency [Mflops/Watt], 0-600, vs. TOP500 rank)
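
The inversion is easy to demonstrate in a few lines: sort by the extensive property (Rmax) and then by the intensive ratio (Mflops/Watt). The three systems below are invented for illustration:

```python
# Hypothetical systems, invented for illustration: (name, Rmax in Tflop/s, power in kW)
systems = [
    ("big",   1_000.0, 7_000.0),
    ("mid",     100.0,   500.0),
    ("small",     5.0,    10.0),
]

def mflops_per_watt(rmax_tflops: float, power_kw: float) -> float:
    # Tflop/s -> Mflop/s is *1e6; kW -> W is *1e3
    return rmax_tflops * 1e6 / (power_kw * 1e3)

print([name for name, *_ in sorted(systems, key=lambda s: s[1], reverse=True)])
# ['big', 'mid', 'small']   -- ranked by size (Rmax)
print([name for name, *_ in sorted(systems,
        key=lambda s: mflops_per_watt(s[1], s[2]), reverse=True)])
# ['small', 'mid', 'big']   -- the intensive ratio inverts the order
```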

34

35 The same three Dutch systems, with their Green500 standings:

Top500 rank  Green500 rank  Site                Processor           System model         Power (MW)
 93          ...            SARA                POWER6              IBM pSeries 575      0.55
184          ...            Banking             Intel Xeon Nehalem  IBM xSeries Cluster  0.25
...          ...            ASTRON/U Groningen  PowerPC 440         IBM BlueGene/L       0.13

36

37 (chart: (8+1)-core, embedded, quad-core, and dual-core processor categories; axis starting at 0)

38 DOE Exascale Steering Committee
- ANL, LANL, LBNL, LLNL, SNL, ORNL + PNL, BNL
- Charter: decadal plan to provide exascale applications and technologies for DOE missions; ~100 people
- Climate science (11/08)
- High energy physics (12/08)
- Nuclear physics (1/09)
- Fusion energy (3/09)
- Nuclear energy (5/09)
- Biology (8/09)
- Basic energy science (8/09)
- Joint national security (10/09)
- Computer science
- Mathematics
- Computer architecture
- Strong science case for the continued escalation of high-end computing.

39 Projected system characteristics, from today's 2 Pflop/s systems to a 1 Eflop/s system:

System peak           2 Pflop/s     ... Pflop/s   1 Eflop/s
System memory         0.3 PB        5 PB          10 PB
Node performance      125 Gflop/s   400 Gflop/s   1-10 Tflop/s
Node memory BW        25 GB/s       200 GB/s      >400 GB/s
Node concurrency      12            O(100)        O(1000)
Interconnect BW       1.5 GB/s      25 GB/s       50 GB/s
System size (nodes)   18,...        ...,000       O(10^6)
Total concurrency     225,000       O(10^8)       O(10^9)
Storage               15 PB         150 PB        300 PB
IO                    0.2 TB/s      10 TB/s       20 TB/s
MTTI                  days          days          O(1 day)
Power                 7 MW          ~10 MW        ~20 MW
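
One number worth pulling out of this table is the byte-to-flop ratio, which the next slides call the tightening memory/bandwidth bottleneck: memory grows about 30x while peak grows 500x. A quick computation from the table's first and last columns:

```python
# Endpoints of the projection table above
peak_now, mem_now = 2e15, 0.3e15    # 2 Pflop/s, 0.3 PB
peak_exa, mem_exa = 1e18, 10e15     # 1 Eflop/s, 10 PB

print(f"today:    {mem_now / peak_now:.2f} bytes per flop/s")   # 0.15
print(f"exascale: {mem_exa / peak_exa:.3f} bytes per flop/s")   # 0.010
```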

40
- Must rethink the design of our software
- Another disruptive technology, similar to what happened with cluster computing and message passing
- Rethink and rewrite the applications, algorithms, and software

41
- Steepness of the ascent from terascale to petascale to exascale
- Extreme parallelism and hybrid design
  - Preparing for million/billion-way parallelism
- Tightening memory/bandwidth bottleneck
  - Limits on power/clock speed, with implications for multicore
  - The pressure to reduce communication will become much more intense
  - Memory per core changes; the byte-to-flop ratio will change
- Necessary fault tolerance
  - MTTF will drop
  - Checkpoint/restart has limitations
- The software infrastructure does not exist today
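
The checkpoint/restart limitation can be made concrete with Young's classic approximation for the optimal checkpoint interval, sqrt(2 x (checkpoint cost) x MTTF); that formula is standard background, not from the slide. Reusing the ~21-minute full-memory dump estimated for Jaguar earlier:

```python
import math

def young_interval(checkpoint_s: float, mttf_s: float) -> float:
    """Young's approximation for the optimal checkpoint interval."""
    return math.sqrt(2.0 * checkpoint_s * mttf_s)

checkpoint_s = 1_250.0             # ~21-minute full-memory dump (slide 17 estimate)
for mttf_h in (24.0, 6.0, 1.0):    # MTTF shrinking as component counts grow
    tau = young_interval(checkpoint_s, mttf_h * 3600.0)
    print(f"MTTF {mttf_h:4.0f} h: checkpoint every {tau / 60:.0f} min, "
          f"~{checkpoint_s / tau:.0%} of time spent checkpointing")
# 24 h -> ~9% overhead; 6 h -> ~17%; 1 h -> ~42%
```

At a one-hour MTTF the machine would spend close to half its time writing checkpoints, which is the sense in which checkpoint/restart stops scaling.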

42
- Hardware has changed dramatically while the software ecosystem has remained stagnant
- Previous approaches have not looked at co-design of multiple levels in the system software stack (OS, runtime, compiler, libraries, application frameworks)
- Need to exploit new hardware trends (e.g., manycore, heterogeneity) that cannot be handled by the existing software stack, along with memory-per-socket trends
- Emerging software technologies exist but have not been fully integrated with system software (e.g., UPC, Cilk, CUDA, HPCS)
- Community codes are unprepared for the sea change in architectures
- No global evaluation of key missing components

43 Build an international plan for developing the next-generation open source software for scientific high-performance computing.

44 We believe this needs to be an international collaboration for various reasons, including:
- The scale of investment
- The need for international input on requirements: the US, Europeans, Asians, and others are working on their own software that should be part of a larger vision for HPC
- No global evaluation of key missing components
- Hardware features are uncoordinated with software development

45
- SC08 (Austin, TX): meeting to generate interest
- Funding from DOE's Office of Science & NSF Office of Cyberinfrastructure, and sponsorship by Europeans and Asians
- US meeting (Santa Fe, NM), April 6-8, 2009
  - 65 people; NSF's Office of Cyberinfrastructure funding
- European meeting (Paris, France), June 28-29, 2009
  - 70 people; outline report
- Asian meeting (Tsukuba, Japan), October 18-20, 2009
  - Draft roadmap; refine report
- SC09 (Portland, OR): BOF to inform others
  - Public comment; draft report presented
- Oxford, April 2010

46

47

48
- For the last decade or more, the research investment strategy has been overwhelmingly biased in favor of hardware.
- This strategy needs to be rebalanced: barriers to progress are increasingly on the software side.
- Moreover, the return on investment is more favorable for software: hardware has a half-life measured in years, while software has a half-life measured in decades.
- The high performance ecosystem is out of balance: hardware, OS, compilers, software, algorithms, applications.
- There is no Moore's Law for software, algorithms, and applications.
