Supercomputing in Plain English Part IV: Henry Neeman, Director

Supercomputing in Plain English, Part IV
Henry Neeman, Director
OU Supercomputing Center for Education & Research, University of Oklahoma
Wednesday, September 19, 2007

Outline
  Dependency Analysis
    What Is Dependency Analysis?
    Control Dependencies
    Data Dependencies
  Stupid Compiler Tricks
    Tricks the Compiler Plays

Borrowed Slides
This entire section of slides was taken from http://www.oscer.ou.edu/. They have a course much like the one offered here at ARSC. I particularly liked their treatment of dependencies and stupid compiler tricks. This is an abridged version of their talk: http://www.oscer.ou.edu/workshops/compilers/sipe_compilers_20070919.ppt

Dependency Analysis

What Is Dependency Analysis?
Dependency analysis describes how different parts of a program affect one another, and how various parts require other parts in order to operate correctly. A control dependency governs how different sequences of instructions affect each other. A data dependency governs how different pieces of data affect each other. Much of this discussion is from references [1] and [5].

Control Dependencies
Every program has a well-defined flow of control that moves from instruction to instruction to instruction. This flow can be affected by several kinds of operations:
  Loops
  Branches (IF, SELECT CASE / switch)
  Function/subroutine calls
  I/O (typically implemented as calls)
Dependencies affect parallelization!

Branch Dependency

  y = 7
  IF (x /= 0) THEN
    y = 1.0 / x
  END IF

Note that (x /= 0) means x not equal to zero. The value of y depends on what the condition (x /= 0) evaluates to:
  If the condition evaluates to .TRUE., then y is set to 1.0 / x (1 divided by x).
  Otherwise, y remains 7.

Loop Carried Dependency

  DO i = 2, length
    a(i) = a(i-1) + b(i)
  END DO

Here, each iteration of the loop depends on the previous: iteration i=3 depends on iteration i=2, iteration i=4 depends on iteration i=3, iteration i=5 depends on iteration i=4, and so on. This is sometimes called a loop carried dependency. There is no way to execute iteration i until after iteration i-1 has completed, so this loop can't be parallelized.
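
For contrast, here is a loop with no carried dependency, sketched in C (this example and its array names are mine, not from the original slides): each iteration reads and writes only its own elements, so the iterations could run in any order, or in parallel.

  /* No loop carried dependency: iteration i touches only a[i], b[i], c[i],
     so the iterations are independent of one another. */
  void add_arrays(float *a, const float *b, const float *c, int length) {
      for (int i = 0; i < length; i++) {
          a[i] = b[i] + c[i];
      }
  }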

Why Do We Care?
Loops are the favorite control structures of High Performance Computing, because compilers know how to optimize their performance using instruction-level parallelism: superscalar execution, pipelining and vectorization can give excellent speedup. Loop carried dependencies affect whether a loop can be parallelized, and how much.

Call Dependency Example

  x = 5
  y = myfunction(7)
  z = 22

The flow of the program is interrupted by the call to myfunction, which transfers execution to somewhere else in the program. It's similar to a branch dependency.

I/O Dependency

  x = a + b
  PRINT *, x
  y = c + d

Typically, I/O is implemented by hidden subroutine calls, so we can think of this as equivalent to a call dependency.

Reductions Aren't Dependencies

  array_sum = 0
  DO i = 1, length
    array_sum = array_sum + array(i)
  END DO

A reduction is an operation that converts an array to a scalar. Other kinds of reductions: product, .AND., .OR., minimum, maximum, index of minimum, index of maximum, number of occurrences of a particular value, etc. Reductions are so common that hardware and compilers are optimized to handle them. Also, they aren't really dependencies, because the order in which the individual operations are performed doesn't matter.
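
To illustrate another reduction from that list, here is a maximum reduction sketched in C (my addition, not from the original slides; the function name is made up). Because max(max(a,b),c) equals max(a,max(b,c)), the combining order doesn't matter, which is exactly why compilers and parallel runtimes are free to reorder or split the work.

  #include <float.h>

  /* Maximum reduction: converts an array to a single scalar.
     The partial maxima may be combined in any order. */
  float array_max(const float *array, int length) {
      float result = -FLT_MAX;
      for (int i = 0; i < length; i++) {
          if (array[i] > result) {
              result = array[i];
          }
      }
      return result;
  }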

Data Dependencies
"A data dependence occurs when an instruction is dependent on data from a previous instruction and therefore cannot be moved before the earlier instruction [or executed in parallel]." [6]

  a = x + y + cos(z);
  b = a * c;

The value of b depends on the value of a, so these two statements must be executed in order.

Output Dependencies

  x = a / b;
  y = x + 2;
  x = d - e;

Notice that x is assigned two different values, but only one of them is retained after these statements are done executing. In this context, the final value of x is the output. Again, we are forced to execute in order.

Why Does Order Matter?
Dependencies can affect whether we can execute a particular part of the program in parallel. If we cannot execute that part of the program in parallel, then it'll be SLOW.

Loop Dependency Example

  if ((dst == src1) && (dst == src2)) {
    for (index = 1; index < length; index++) {
      dst[index] = dst[index-1] + dst[index];
    }
  }
  else if (dst == src1) {
    for (index = 1; index < length; index++) {
      dst[index] = dst[index-1] + src2[index];
    }
  }
  else if (dst == src2) {
    for (index = 1; index < length; index++) {
      dst[index] = src1[index-1] + dst[index];
    }
  }
  else if (src1 == src2) {
    for (index = 1; index < length; index++) {
      dst[index] = src1[index-1] + src1[index];
    }
  }
  else {
    for (index = 1; index < length; index++) {
      dst[index] = src1[index-1] + src2[index];
    }
  }

The various versions of the loop either do have loop carried dependencies, or don't have loop carried dependencies.
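
A related technique, not covered in the original slides and assuming a C99 compiler: if the caller can guarantee that the arrays never overlap, the restrict qualifier lets the programmer promise that to the compiler, so it can assume the no-alias case without run-time pointer tests like the ones above. A minimal sketch (function name is hypothetical):

  /* 'restrict' asserts that dst, src1 and src2 do not alias,
     so the compiler may assume there is no loop carried dependency. */
  void add_shifted(float *restrict dst,
                   const float *restrict src1,
                   const float *restrict src2,
                   int length) {
      for (int index = 1; index < length; index++) {
          dst[index] = src1[index-1] + src2[index];
      }
  }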

Stupid Compiler Tricks

Stupid Compiler Tricks
  Tricks Compilers Play
    Scalar Optimizations
    Loop Optimizations
    Inlining

Scalar Optimizations
  Copy Propagation
  Constant Folding
  Dead Code Removal
  Strength Reduction
  Common Subexpression Elimination
  Variable Renaming
  Loop Optimizations
Not every compiler does all of these, so it sometimes can be worth doing these by hand. Much of this discussion is from [2] and [5].

Copy Propagation

Before (has a data dependency):
  x = y
  z = 1 + x

After compilation (no data dependency):
  x = y
  z = 1 + y

Constant Folding

Before:
  add = 100
  aug = 200
  sum = add + aug

After:
  sum = 300

Notice that sum is actually the sum of two constants, so the compiler can precalculate it, eliminating the addition that otherwise would be performed at runtime.

Dead Code Removal

Before:
  var = 5
  PRINT *, var
  STOP
  PRINT *, var * 2

After:
  var = 5
  PRINT *, var
  STOP

Since the last statement never executes, the compiler can eliminate it.

Strength Reduction

Before:
  x = y ** 2.0
  a = c / 2.0

After:
  x = y * y
  a = c * 0.5

Raising one value to the power of another, or dividing, is more expensive than multiplying. If the compiler can tell that the power is a small integer, or that the denominator is a constant, it'll use multiplication instead. Note: in Fortran, y ** 2.0 means y to the power 2.

Common Subexpression Elimination

Before:
  d = c * (a / b)
  e = (a / b) * 2.0

After:
  adivb = a / b
  d = c * adivb
  e = adivb * 2.0

The subexpression (a / b) occurs in both assignment statements, so there's no point in calculating it twice. This is typically only worth doing if the common subexpression is expensive to calculate.

Variable Renaming

Before:
  x = y * z
  q = r + x * 2
  x = a + b

After:
  x0 = y * z
  q = r + x0 * 2
  x = a + b

The original code has an output dependency, while the new code doesn't, but the final value of x is still correct.

Loop Optimizations
  Hoisting Loop Invariant Code
  Unswitching
  Iteration Peeling
  Index Set Splitting
  Loop Interchange
  Unrolling
  Loop Fusion
  Loop Fission
Not every compiler does all of these, so it sometimes can be worth doing some of these by hand. Much of this discussion is from [3] and [5].

Hoisting Loop Invariant Code
Code that doesn't change inside the loop is called loop invariant. It doesn't need to be calculated over and over.

Before:
  DO i = 1, n
    a(i) = b(i) + c * d
    e = g(n)
  END DO

After:
  temp = c * d
  DO i = 1, n
    a(i) = b(i) + temp
  END DO
  e = g(n)

Unswitching

Before:
  DO i = 1, n
    DO j = 2, n
      IF (t(i) > 0) THEN
        a(i,j) = a(i,j) * t(i) + b(j)
      ELSE
        a(i,j) = 0.0
      END IF
    END DO
  END DO

After:
  DO i = 1, n
    IF (t(i) > 0) THEN
      DO j = 2, n
        a(i,j) = a(i,j) * t(i) + b(j)
      END DO
    ELSE
      DO j = 2, n
        a(i,j) = 0.0
      END DO
    END IF
  END DO

The condition is j-independent, so it can migrate outside the j loop.

Iteration Peeling

Before:
  DO i = 1, n
    IF ((i == 1) .OR. (i == n)) THEN
      x(i) = y(i)
    ELSE
      x(i) = y(i + 1) + y(i - 1)
    END IF
  END DO

After:
  x(1) = y(1)
  DO i = 2, n - 1
    x(i) = y(i + 1) + y(i - 1)
  END DO
  x(n) = y(n)

We can eliminate the IF by peeling off the weird iterations.

Index Set Splitting

Before:
  DO i = 1, n
    a(i) = b(i) + c(i)
    IF (i > 10) THEN
      d(i) = a(i) + b(i - 10)
    END IF
  END DO

After:
  DO i = 1, 10
    a(i) = b(i) + c(i)
  END DO
  DO i = 11, n
    a(i) = b(i) + c(i)
    d(i) = a(i) + b(i - 10)
  END DO

Note that this is a generalization of peeling.

Loop Interchange

Before:
  DO i = 1, ni
    DO j = 1, nj
      a(i,j) = b(i,j)
    END DO
  END DO

After:
  DO j = 1, nj
    DO i = 1, ni
      a(i,j) = b(i,j)
    END DO
  END DO

In Fortran, array elements a(i,j) and a(i+1,j) are near each other in memory, while a(i,j+1) may be far, so it makes sense to make the i loop be the inner loop. (This is reversed in C, C++ and Java.)
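
To illustrate the parenthetical remark, here is the same idea sketched in C (my addition, not from the original slides; the dimensions are placeholders). C stores arrays row-major, so a[i][j] and a[i][j+1] are adjacent in memory, and the j loop belongs innermost.

  #define NI 1000
  #define NJ 1000

  /* Row-major (C) order: the fast-moving index should be the rightmost
     one, so j goes in the inner loop. */
  void copy_array(float a[NI][NJ], float b[NI][NJ]) {
      for (int i = 0; i < NI; i++) {
          for (int j = 0; j < NJ; j++) {
              a[i][j] = b[i][j];
          }
      }
  }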

Unrolling

Before:
  DO i = 1, n
    a(i) = a(i) + b(i)
  END DO

After:
  DO i = 1, n, 4
    a(i)   = a(i)   + b(i)
    a(i+1) = a(i+1) + b(i+1)
    a(i+2) = a(i+2) + b(i+2)
    a(i+3) = a(i+3) + b(i+3)
  END DO

You generally shouldn't unroll by hand.

Why Do Compilers Unroll?
We saw last time that a loop with a lot of operations gets better performance (up to some point), especially if there are lots of arithmetic operations but few main memory loads and stores. Unrolling creates multiple operations that typically load from the same, or adjacent, cache lines. So, an unrolled loop has more operations without increasing the memory accesses by much. Also, unrolling decreases the number of comparisons on the loop counter variable, and the number of branches to the top of the loop.
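
One practical detail the unrolled version above glosses over: if n is not a multiple of the unroll factor, the leftover iterations need a cleanup loop. A minimal C sketch of how a compiler (or a careful human) handles that, assuming an unroll factor of 4 and a made-up function name:

  /* Unroll by 4, then finish the n % 4 leftover elements one at a time. */
  void add_unrolled(float *a, const float *b, int n) {
      int i;
      for (i = 0; i + 3 < n; i += 4) {
          a[i]   += b[i];
          a[i+1] += b[i+1];
          a[i+2] += b[i+2];
          a[i+3] += b[i+3];
      }
      for (; i < n; i++) {   /* cleanup loop for the remainder */
          a[i] += b[i];
      }
  }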

Loop Fusion

Before:
  DO i = 1, n
    a(i) = b(i) + 1
  END DO
  DO i = 1, n
    c(i) = a(i) / 2
  END DO
  DO i = 1, n
    d(i) = 1 / c(i)
  END DO

After:
  DO i = 1, n
    a(i) = b(i) + 1
    c(i) = a(i) / 2
    d(i) = 1 / c(i)
  END DO

As with unrolling, this has fewer branches. It also has fewer total memory references.

Loop Fission

Before:
  DO i = 1, n
    a(i) = b(i) + 1
    c(i) = a(i) / 2
    d(i) = 1 / c(i)
  END DO !! i = 1, n

After:
  DO i = 1, n
    a(i) = b(i) + 1
  END DO !! i = 1, n
  DO i = 1, n
    c(i) = a(i) / 2
  END DO !! i = 1, n
  DO i = 1, n
    d(i) = 1 / c(i)
  END DO !! i = 1, n

Fission reduces the cache footprint and the number of operations per iteration.

To Fuse or to Fizz?
The question of when to perform fusion versus when to perform fission, like many optimization questions, is highly dependent on the application, the platform and a lot of other issues that get very, very complicated. Compilers don't always make the right choices. That's why it's important to examine the actual behavior of the executable.
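
One simple way to examine actual behavior is to time the candidate loop structures directly and keep whichever runs faster. A minimal sketch using the standard C clock() call (this harness and its names are mine, not part of the original talk):

  #include <time.h>

  /* Time one candidate loop structure passed in as a function pointer;
     call this on the fused and fissioned versions and compare. */
  double time_candidate(void (*candidate)(void)) {
      clock_t start = clock();
      candidate();
      clock_t stop = clock();
      return (double)(stop - start) / CLOCKS_PER_SEC;
  }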

Inlining

Before:
  DO i = 1, n
    a(i) = func(i)
  END DO

  REAL FUNCTION func (x)
    func = x * 3
  END FUNCTION func

After:
  DO i = 1, n
    a(i) = i * 3
  END DO

When a function or subroutine is inlined, its contents are transferred directly into the calling routine, eliminating the overhead of making the call.

To Learn More Supercomputing
http://www.oscer.ou.edu/education.php
http://symposium2007.oscer.ou.edu/

References
[1] Steve Behling et al., The POWER4 Processor Introduction and Tuning Guide, IBM, 2001.
[2] Intel 64 and IA-32 Architectures Optimization Reference Manual, Order Number 248966-015, May 2007. http://www.intel.com/design/processor/manuals/248966.pdf
[3] Kevin Dowd and Charles Severance, High Performance Computing, 2nd ed., O'Reilly, 1998.
[4] Code courtesy of Dan Weber, 2001.