Compilers for High Performance Computer Systems: Do They Have a Future? Ken Kennedy Rice University
1 Compilers for High Performance Computer Systems: Do They Have a Future? Ken Kennedy Rice University
2 Collaborators: Raj Bandyopadhyay, Zoran Budimlic, Arun Chauhan, Daniel Chavarria-Miranda, Keith Cooper, Jack Dongarra, Rob Fowler, John Garvin, Lennart Johnsson, Chuck Koelbel, Cheryl McCosh, John Mellor-Crummey, Dan Sorensen, Linda Torczon
3 A Bit of History In the beginning, there was machine language (or assembly language). 1957: Fortran (John Backus et al., IBM) made it possible for every scientist to develop (high-performance) applications. Today: programming high-end machines is once again the near-exclusive domain of experts.
4 Making Languages Usable "It was our belief that if FORTRAN, during its first months, were to translate any reasonable scientific source program into an object program only half as fast as its hand-coded counterpart, then acceptance of our system would be in serious danger... I believe that had we failed to produce efficient programs, the widespread use of languages like FORTRAN would have been seriously delayed." (John Backus)
5 Compilers for HPC They have not delivered the needed increases in productivity over the past decade. Why? Platform complexity, application complexity, and lack of investment (and patience!).
6 HPF
7 HPF Goals Programming support for scalable parallel systems, with a focus on data parallelism. An accessible, machine-independent programming model: shared memory, a single thread of control, implicit generation of communication. Performance comparable to hand-coded MPI.
8 HPF Strategy [Diagram: Fortran 90 (sequential machine model) plus data-distribution directives yields HPF; target platforms shown include the CM-2, IBM SP-2, and HP/Convex SPP2000]
9 HPF Details Fortran 90 plus distribution directives:
!HPF$ TEMPLATE D(256,256)
!HPF$ ALIGN A(I,J) WITH D(I+1,J)
!HPF$ DISTRIBUTE D(BLOCK,CYCLIC)
Explicit and implicit parallelism: the owner-computes rule, extended looping and aggregate assignment. HPF Library: scatter, gather, reduction, segmented scan. Extrinsic interface.
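The three directives above can be assembled into a minimal self-contained fragment. This is a sketch only: the array declarations, the second array B, and the smoothing statement are assumptions added for illustration, and the alignment here uses D(I,J) rather than the slide's D(I+1,J) so that every element of the 256x256 arrays falls inside the template.

      REAL A(256,256), B(256,256)
!HPF$ TEMPLATE D(256,256)
!HPF$ ALIGN A(I,J) WITH D(I,J)
!HPF$ ALIGN B(I,J) WITH D(I,J)
!HPF$ DISTRIBUTE D(BLOCK,CYCLIC)
!     Owner computes: each processor updates the elements of A it owns,
!     and the compiler generates the communication for B(I-1,J).
      FORALL (I = 2:256, J = 1:256) A(I,J) = (B(I-1,J) + B(I,J)) / 2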
10 HPF Problems Compilers were slow to mature, with poor performance. Needed features are missing: support for irregular distributions, task parallelism. Library support is lacking: no CMSSL equivalent. Better compiler strategies are now available. There is a complex relationship between program and performance. Developers could not write complex applications with high performance.
11 dhpf Performance [Chart: parallel efficiency on the NAS SP benchmark, class 'C', for MPI vs. dhpf as the number of processors grows]
12 IMPACT-3D HPF application: simulates 3D Rayleigh-Taylor instabilities in plasma using TVD (1334 lines, 45 HPF directives). Problem size: 1024 x 1024 x 2048. Compiled with the HPF/ES compiler: 7.3 TFLOPS on 2048 Earth Simulator processors, ~45% of peak. Compiled with dhpf on PSC's Lemieux (Alpha + Quadrics). [Table: relative speedup, GFLOPS, and % of peak by processor count]
13 Rethinking HPF Language complexity: adopt the OpenMP directives for SMP parallelism. Performance issues: embrace the HPF/JA extensions, open-source the HPF Library, optimize the extrinsic interface. Usability: make it possible to extend the notion of distribution.
14 Encapsulated Distributions HPF's fundamental idea: separate distribution from data structure. Problems: the built-in distributions are not enough for some problems, and expert users want more control over distribution and performance. Solution: make it possible to add new distributions, e.g., DISTRIBUTE A(Hilbert2D), where Hilbert2D is a distribution library.
15 What is a Distribution? A mapping from arrays to storage. It must provide a minimum set of methods: Get(A,I,J), Put(A,I,J); Get(A,iteratorIJ), Put(A,iteratorIJ), where iteratorIJ = (1:N,J) or (1:N,1:M:2) or ((I:I), I = 1:N); Owner(A,I,J), Owners(A,iteratorIJ); Reflect (fill overlap regions); global operators: shift, global sum; Rebalance, Redistribute(Distlib2). It must do what compilers need to achieve performance.
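One way to picture this minimum method set is as an abstract interface that every distribution library must implement. The sketch below uses Fortran 2003 abstract types (which postdate the talk); every name and signature is an illustrative assumption, not an API from the dhpf work.

module dist_iface
  implicit none
  ! Abstract base type: each distribution library extends this and
  ! supplies concrete implementations of the deferred methods.
  type, abstract :: distribution
  contains
    procedure(owner_i), deferred :: owner    ! Owner(A,I,J): process holding element (I,J)
    procedure(get_i),   deferred :: get      ! Get(A,I,J)
    procedure(put_i),   deferred :: put      ! Put(A,I,J)
    procedure(halo_i),  deferred :: reflect  ! Reflect: fill overlap (ghost) regions
  end type distribution
  abstract interface
    integer function owner_i(this, i, j)
      import :: distribution
      class(distribution), intent(in) :: this
      integer, intent(in) :: i, j
    end function owner_i
    real function get_i(this, a, i, j)
      import :: distribution
      class(distribution), intent(in) :: this
      real, intent(in) :: a(:,:)
      integer, intent(in) :: i, j
    end function get_i
    subroutine put_i(this, a, i, j, v)
      import :: distribution
      class(distribution), intent(in) :: this
      real, intent(inout) :: a(:,:)
      integer, intent(in) :: i, j
      real, intent(in) :: v
    end subroutine put_i
    subroutine halo_i(this, a)
      import :: distribution
      class(distribution), intent(in) :: this
      real, intent(inout) :: a(:,:)
    end subroutine halo_i
  end interface
end module dist_iface

A compiler can call owner to prove a reference local and fall back on get, put, and reflect when it cannot; the iterator, global-operator, and redistribution methods from the slide would extend the same type.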
16 Advantages New distributions can be added as needed; current distributions become special cases; this simplifies the view of interesting new technologies such as out-of-core data distribution. Expert users retain more control over performance: they manage their own distributions, provide communication primitives (shift, global sum) as needed, can include and manage ghost regions, and can design an adaptivity strategy.
17 Challenge: Performance Current compilers get mileage from knowing the details of the distribution; the Rice dhpf compiler uses an integer set framework to reason about regions requiring communication. What do we do if the distribution is encapsulated in a collection of methods? Owner(A(I,J)) is a case in point. Solution strategy: extensive preliminary processing of the distribution library.
18 Telescoping Languages
19 The Programming Problem: A Strategy Make it possible for end users to become application developers: users integrate software components using scripting languages (e.g., Matlab, Python, Visual Basic, S+), while professional programmers develop the software components.
20 The Programming Problem: An Obstacle Achieving high performance: translate scripts and components to a common intermediate language, then optimize the resulting program using whole-program compilation.
21 Whole-Program Compilation [Diagram: the script and the component library feed a translator, a global optimizing compiler, and a code generator] Problem: long compilation times, even for short scripts! Problem: expert knowledge on specialization is lost.
22 Telescoping Languages [Diagram: the component library is preprocessed offline by a compiler generator, which could run for many hours, producing an L1 compiler that understands library calls as primitives; a script then flows through the translator, the L1 compiler, and the code generator to yield an optimized application]
23 Telescoping Languages: Advantages Compile times can be reasonable; high-level optimizations can be included; the user retains substantive control over language performance (generate a new language with user-produced libraries); reliability can be improved.
24 Applications
25 Applications Matlab SP (signal processing), with optimizations applied to functions. Statistical calculations in science and medicine in S (R or S-PLUS), with hundred-fold performance improvements. Library generation and maintenance. Component integration systems. Parallelism in Matlab.
26 Library Generator (LibGen) Prof. Dan Sorensen (Rice CAAM) maintains ARPACK, a large-scale eigenvalue solver. He prototypes the algorithms in Matlab, then generates 8 variants in Fortran by hand ({Real, Complex} x {Single, Double} x {Symmetric, Nonsymmetric}), with special handling for {Dense, Sparse}. Could this hand-generation step be avoided?
27 LibGen Performance [Chart: execution times for MATLAB 6.1, MATLAB 6.5, hand-coded ARPACK, and LibGen-generated code on a 1,000,000 x 1,000,000 problem]
28 LibGen Scaling [Chart: scaling of MATLAB 6.1, MATLAB 6.5, hand-coded ARPACK, and LibGen-generated code across increasing problem sizes up to 10000 x 10000]
29 Key Technology: Type Inference Translation of Matlab to C or Fortran requires type inference: at each point in a function, the translator must know types as functions of the input parameter types ("type jump functions"). A constraint-based type inference algorithm runs in polynomial time and can be used to produce different specialized variants based on input types.
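As a hedged illustration of what the inferred type jump functions enable (the routine and names below are invented for this sketch, not LibGen output): from one Matlab-level prototype, the translator can emit one Fortran variant per combination of inferred input types, in the spirit of LibGen's {Real, Complex} x {Single, Double} variants.

      ! Variant emitted when the inputs are inferred double-precision real:
      subroutine scale_add_dr(n, alpha, x, y)
        integer, intent(in) :: n
        double precision, intent(in) :: alpha, x(n)
        double precision, intent(inout) :: y(n)
        y = y + alpha * x
      end subroutine scale_add_dr

      ! Variant emitted when the inputs are inferred double-precision complex:
      subroutine scale_add_dc(n, alpha, x, y)
        integer, intent(in) :: n
        complex(kind(1.0d0)), intent(in) :: alpha, x(n)
        complex(kind(1.0d0)), intent(inout) :: y(n)
        y = y + alpha * x
      end subroutine scale_add_dc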
30 Value of Specialization [Chart: execution time in seconds for the sparse-symmetric-real, dense-symmetric-real, dense-symmetric-complex, and dense-nonsymmetric-complex variants]
31 Parallelism in Matlab Add distributions to Matlab arrays; a distribution can spread an array across multiple processors, e.g., A(1:100), A(101:200), A(201:300), A(301:400). Use distributions to determine parallelism, hide parallelism in library component operations, and specialize for specific distributions.
32 Parallel Matlab D = distributed(n,m,block,block); A = B + C .* D. Where is each operation performed? A = D(1:n-2,1:m) + D(3:n,1:m): some data must be communicated!
33 Parallel Matlab Implementation The Matlab interpreter invokes computation and data-movement routines on the cluster. [Diagram: A distributed in blocks A(1:100), A(101:200), A(201:300), A(301:400); boundary elements such as A(100)/A(101), A(200)/A(201), and A(300)/A(301) are exchanged between neighboring processors for the statement A(2:399) = (A(1:398) + A(3:400))/2]
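The boundary exchange in the diagram is exactly what a data-parallel compiler infers from shifted references. A hedged HPF-style sketch of the same relaxation step (the directive and the four-processor reading of the diagram are our assumptions):

      REAL A(400)
!HPF$ DISTRIBUTE A(BLOCK)
!     On four processors each owns a 100-element block. The shifted
!     references A(1:398) and A(3:400) touch the neighbors' boundary
!     elements (A(100)/A(101), etc.), so the compiler inserts neighbor
!     communication before evaluating the right-hand side.
      A(2:399) = (A(1:398) + A(3:400)) / 2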
34 Parallel Matlab Implementation Produce parallel versions of functional components for each supported distribution, and produce data-movement components for each distribution pair. This sounds like a lot of work!
35 Specialization: HPF Legacy The library is written with a template distribution; useful distributions are specified by the library designer (standard + user-defined), for example multipartitioning, generalized block, and adaptive mesh. The library is specialized using HPF compilation technology to exploit parallelism, performed at language generation time.
36 Implementation
37 Parallel Matlab No expensive HPF compile at script time, since specialization was performed at language generation time.
38 Summary Do compilers for HPC have a future? Yes, but they will focus on the productivity of end users as application developers. Challenge: achieve high productivity without sacrificing application performance. One strategy: telescoping languages.
39 The End