Topic 12: Cyclic Instruction Scheduling

COS 320 Compiling Techniques
Princeton University, Spring 2015
Prof. David August

Overlap Iterations Using Pipelining

[Figure: n loop iterations overlapped in time, each iteration starting one stage after the previous]

With hardware pipelining, while one instruction is in fetch, another is in decode, and another is in execute. The same thing happens here: multiple iterations are processed simultaneously, with each instruction in a separate stage. One iteration still takes the same time, but the time to complete n iterations is reduced!

A Software Pipeline

[Figure: loop body with 4 ops (A, B, C, D); the prologue fills the pipe, the kernel runs in steady state, and the epilogue drains the pipe]

Steady state (kernel): 4 iterations execute simultaneously, with 1 operation from each iteration. Once the pipe is full, an iteration starts and an iteration finishes every cycle.

Creating Software Pipelines

There are lots of software pipelining techniques out there. Modulo scheduling is the most widely adopted: it is practical to implement and yields good results.

Conceptual strategy:
* Unroll the loop completely
* Then schedule the code completely, with 2 constraints:
  * All iteration bodies have identical schedules
  * Each iteration is scheduled to start some fixed number of cycles later than the previous iteration
* Initiation Interval (II) = fixed delay between the start of successive iterations
* Given the 2 constraints, the unrolled schedule is repetitive (the kernel) except for the portions at the beginning (prologue) and end (epilogue)
* The kernel can be re-rolled to yield a new loop
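
A minimal sketch (not from the slides) of what the two constraints buy us: with one per-iteration schedule and a fixed II, the issue cycle of every operation in every iteration is fully determined, so the unrolled schedule repeats. The per-iteration schedule below is hypothetical.

    II = 2
    schedule = {"load": 0, "mul": 2, "store": 5}  # hypothetical 1-iteration schedule

    def issue_cycle(op, iteration):
        # Constraints: every iteration uses the identical schedule, and
        # iteration i starts exactly i * II cycles after iteration 0.
        return schedule[op] + iteration * II

    for i in range(4):
        print(i, [(op, issue_cycle(op, i)) for op in schedule])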

Creating Software Pipelines (2)

Create a schedule for 1 iteration of the loop such that, when the same schedule is repeated at intervals of II cycles:
* No intra-iteration dependence is violated
* No inter-iteration dependence is violated
* No resource conflict arises between operations in the same or distinct iterations

We will start out assuming special hardware support for the discussion (e.g., IA-64):
* Rotating registers
* Predicates
* brtop

Terminology

[Figure: iterations 1-3 overlapped in time, each offset by II cycles and divided into stages]

* Initiation Interval (II) = fixed delay between the start of successive iterations
* Each iteration can be divided into stages consisting of II cycles each
* The number of stages in 1 iteration is termed the stage count (SC)
* It takes SC - 1 stages to fill/drain the pipe
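
A consequence worth spelling out (my arithmetic, not on the slide): once the kernel retires one iteration every II cycles, n iterations complete in roughly (SC - 1 + n) * II cycles, versus n * SC * II unpipelined. For example, with II = 2, SC = 3, and n = 100, that is (2 + 100) * 2 = 204 cycles instead of 100 * 6 = 600.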

Resource Usage Legality

Need to guarantee that no resource is used at 2 points in time that are separated by an interval which is a multiple of II. Equivalently: within a single iteration, the same resource is never used more than 1x at the same time modulo II. This is known as the modulo constraint, and it is where the name "modulo scheduling" comes from.

The modulo reservation table (MRT) solves this problem. To schedule an op at time T needing resource R, the entry for R at T mod II must be free.

[Figure: modulo reservation table for II = 3; rows are cycles 0-2, columns are resources alu1, alu2, mem, bus0, bus1, br]
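
A minimal modulo reservation table sketch in Python, using the resource names from the slide's figure and II = 3:

    II = 3
    resources = ["alu1", "alu2", "mem", "bus0", "bus1", "br"]
    mrt = {r: [None] * II for r in resources}  # mrt[r][t % II] = op using r there

    def can_schedule(time, resource):
        # Modulo constraint: the resource must be free at time mod II.
        return mrt[resource][time % II] is None

    def reserve(op, time, resource):
        assert can_schedule(time, resource)
        mrt[resource][time % II] = op

    reserve("load", 0, "mem")
    print(can_schedule(3, "mem"))  # False: 3 mod 3 == 0, and that slot is taken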

Dependences in a Loop

Need to worry about 2 kinds: intra-iteration and inter-iteration. Each dependence edge carries two values:
* Delay = minimum time interval between the start of the two operations (determined by operation read/write times)
* Distance = number of iterations separating the 2 operations involved (a distance of 0 means intra-iteration)

A recurrence manifests itself as a circuit in the dependence graph.

[Figure: dependence graph over ops 1-4, edges annotated with tuples <delay, distance> such as <1,0>, <1,1>, and <1,2>]

Dynamic Single Assignment (DSA) Form

It is impossible to overlap iterations when each iteration writes to the same register, so we remove the anti and output dependences using rotating registers (virtual for now):
* Each register is an infinite push-down array (Expanded Virtual Register, or EVR)
* Writes go to the top element, but any element can be referenced
* The remap operation slides everything down: r[n] changes to r[n+1]

A program is in DSA form if the same virtual register (EVR element) is never assigned more than 1x on any dynamic execution path.

Before DSA conversion:
1: r3 = load(r1)
2: r4 = r3 * 26
3: store (r2, r4)
4: r1 = r1 + 4
5: r2 = r2 + 4
6: p1 = cmpp (r1 < r9)
7: brct p1 Loop

After DSA conversion:
1: r3[-1] = load(r1[0])
2: r4[-1] = r3[-1] * 26
3: store (r2[0], r4[-1])
4: r1[-1] = r1[0] + 4
5: r2[-1] = r2[0] + 4
6: p1[-1] = cmpp (r1[-1] < r9)
remap r1, r2, r3, r4, p1
7: brct p1[-1] Loop
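
A minimal EVR simulation (my sketch, mirroring the slide's semantics): writes go to element -1, and remap slides every element down by one, so the value written as r[-1] in one iteration is read back as r[0] in the next.

    class EVR:
        """Expanded virtual register: an 'infinite' push-down array."""
        def __init__(self):
            self.vals = {}  # index -> value

        def __getitem__(self, i):
            return self.vals[i]

        def __setitem__(self, i, v):
            self.vals[i] = v

        def remap(self):
            # Slide everything down: r[n] becomes r[n+1].
            self.vals = {i + 1: v for i, v in self.vals.items()}

    r3 = EVR()
    r3[-1] = "value produced in iteration 0"
    r3.remap()
    print(r3[0])  # iteration 1 reads iteration 0's value, with no copy needed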

Physical Realization of EVRs

An EVR may contain an unlimited number of values, but only a finite, contiguous set of elements of an EVR is ever live at any point in time. These must be given physical registers. Two options:
* Conventional register file: remaps are essentially copies, so each EVR is realized by a set of physical registers, with copies inserted
* Rotating registers: direct support for EVRs, so no copies are needed; the file is rotated after each loop iteration is completed

Loop Dependence Example

1: r3[-1] = load(r1[0])
2: r4[-1] = r3[-1] * 26
3: store (r2[0], r4[-1])
4: r1[-1] = r1[0] + 4
5: r2[-1] = r2[0] + 4
6: p1[-1] = cmpp (r1[-1] < r9)
remap r1, r2, r3, r4, p1
7: brct p1[-1] Loop

[Figure: dependence graph over ops 1-7, edges annotated with <delay, distance> tuples]

In DSA form, there are no inter-iteration anti or output dependences!

Minimum Initiation Interval (MII)

Remember, II = number of cycles between the start of successive iterations. Modulo scheduling requires that a candidate II be selected before scheduling is attempted: try the candidate II and see if it works; if not, increase it by 1 and try again, repeating until successful.

MII is a lower bound on the II:
MII = MAX(ResMII, RecMII)
* ResMII = resource-constrained MII, from the resource usage requirements of 1 iteration
* RecMII = recurrence-constrained MII, from the latency of the circuits in the dependence graph
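
A minimal sketch of the II search driver, assuming a try_to_schedule(loop, ii) routine (hypothetical name) that attempts modulo scheduling at a fixed II and returns None on failure:

    def modulo_schedule(loop, res_mii, rec_mii, max_ii=64):
        ii = max(res_mii, rec_mii)      # MII = MAX(ResMII, RecMII)
        while ii <= max_ii:
            kernel = try_to_schedule(loop, ii)
            if kernel is not None:      # success: a repeating pattern was found
                return kernel, ii
            ii += 1                     # failure: retry with the next larger II
        return None, None               # give up; fall back to the unpipelined loop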

ResMII

Concept: if there were no dependences between the operations, what is the shortest possible schedule?

Simple resource model: a processor has a set of resources R; for each resource r in R, count(r) specifies the number of identical copies of r.

ResMII = MAX over all r in R of ceil(uses(r) / count(r))

where uses(r) = number of times resource r is used in 1 iteration.

In reality it's more complex than this, because operations can have multiple alternatives (different choices for the resources they could be assigned to).
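
Under this simple model (one resource per op), a Python sketch, using the counts from the example on the next slide:

    from math import ceil
    from collections import Counter

    def res_mii(op_resources, count):
        # op_resources: the resource used by each op in 1 iteration
        # count: resource -> number of identical copies
        uses = Counter(op_resources)
        return max(ceil(uses[r] / count[r]) for r in uses)

    # Next slide's example: ops 2, 4, 5, 6 use an ALU; 1 and 3 use mem; 7 uses br.
    print(res_mii(["mem", "alu", "mem", "alu", "alu", "alu", "br"],
                  {"alu": 2, "mem": 1, "br": 1}))  # -> 2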

ResMII Example

Resources: 4-issue, 2 ALU, 1 mem, 1 br
Latencies: add = 1, mpy = 3, ld = 2, st = 1, br = 1

1: r3 = load(r1)
2: r4 = r3 * 26
3: store (r2, r4)
4: r1 = r1 + 4
5: r2 = r2 + 4
6: p1 = cmpp (r1 < r9)
7: brct p1 Loop

ALU: used by ops 2, 4, 5, 6 -> 4 ops / 2 units = 2
Mem: used by ops 1, 3 -> 2 ops / 1 unit = 2
Br: used by op 7 -> 1 op / 1 unit = 1

ResMII = MAX(2, 2, 1) = 2

RecMII

Approach: enumerate all irredundant elementary circuits in the dependence graph.

RecMII = MAX over all circuits c in C of ceil(delay(c) / distance(c))
* delay(c) = total latency in dependence cycle c (sum of delays)
* distance(c) = total iteration distance of cycle c (sum of distances)

Example: a circuit with edge 1 -> 2 of <1,0> and edge 2 -> 1 of <3,1>:
delay(c) = 1 + 3 = 4
distance(c) = 0 + 1 = 1
RecMII = 4 / 1 = 4

[Figure: cycle-by-cycle schedule from cycle k to k+5 showing that op 1 of the next iteration cannot issue until 4 cycles after op 1 of the current one, so RecMII = 4]
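
A sketch of RecMII by brute-force circuit enumeration (fine for small dependence graphs; production compilers use Johnson's algorithm or shortest-path formulations instead). Edges are (src, dst, delay, distance) tuples with integer node ids:

    from math import ceil

    def rec_mii(edges):
        adj = {}
        for src, dst, delay, dist in edges:
            adj.setdefault(src, []).append((dst, delay, dist))
        best = 0

        def dfs(start, node, d_sum, k_sum, visited):
            nonlocal best
            for dst, d, k in adj.get(node, []):
                if dst == start and k_sum + k > 0:        # closed an elementary circuit
                    best = max(best, ceil((d_sum + d) / (k_sum + k)))
                elif dst > start and dst not in visited:  # count each circuit once,
                    dfs(start, dst, d_sum + d, k_sum + k,  # from its smallest node
                        visited | {dst})

        for v in list(adj):
            dfs(v, v, 0, 0, {v})
        return best

    # The slide's circuit: 1 -> 2 with <1,0>, 2 -> 1 with <3,1>.
    print(rec_mii([(1, 2, 1, 0), (2, 1, 3, 1)]))  # -> 4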

RecMII Example

1: r3 = load(r1)
2: r4 = r3 * 26
3: store (r2, r4)
4: r1 = r1 + 4
5: r2 = r2 + 4
6: p1 = cmpp (r1 < r9)
7: brct p1 Loop

[Figure: dependence graph over ops 1-7, edges annotated with <delay, distance> tuples]

Circuits:
4 -> 4: 1 / 1 = 1
5 -> 5: 1 / 1 = 1
4 -> 1 -> 4: 1 / 1 = 1
5 -> 3 -> 5: 1 / 1 = 1
RecMII = MAX(1, 1, 1, 1) = 1

Then, MII = MAX(ResMII, RecMII) = MAX(2, 1) = 2

Modulo Scheduling Process

Use list scheduling, but with a few twists:
* II is predetermined: it starts at MII, then is incremented on failure
* Cyclic dependences complicate matters: there is a window within which each operation can be scheduled!
* The repeating pattern must be guaranteed, so 2 constraints are enforced on the schedule:
  * Each iteration begins exactly II cycles after the previous one
  * Each time an operation is scheduled in 1 iteration, it is tentatively scheduled in subsequent iterations at intervals of II
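
A minimal sketch of the core placement step, assuming an earliest_start(op, schedule) helper (hypothetical) computed from the dependences already satisfied, and ops that expose a .resource field. Because resource slots repeat modulo II, only a window of II candidate cycles ever needs to be tried, and reserving slot t mod II covers the same op in all subsequent iterations. Real iterative modulo schedulers also backtrack by unscheduling conflicting ops, which this sketch omits:

    def place_ops(ops_in_priority_order, ii, mrt, earliest_start):
        schedule = {}
        for op in ops_in_priority_order:
            early = earliest_start(op, schedule)
            for t in range(early, early + ii):     # beyond II slots, t % ii repeats
                if mrt[op.resource][t % ii] is None:
                    mrt[op.resource][t % ii] = op  # also holds at t + k*ii, all k
                    schedule[op] = t
                    break
            else:
                return None  # no legal slot at this II; caller increments II
        return schedule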

Loop Prolog and Epilog

[Figure: modulo schedule with II = 3, divided into prolog, kernel, and epilog]

Only the kernel involves executing the full width of operations. The prolog and epilog execute a subset (ramp-up and ramp-down).

Separate Code for Prolog and Epilog

[Figure: loop body with 4 ops (A, B, C, D); prolog rows A0 / A1 B0 / A2 B1 C0 fill the pipe, kernel row A B C D, epilog rows Bn Cn-1 Dn-2 / Cn Dn-1 / Dn drain it]

Generate special code before the loop (in the preheader) to fill the pipe, and special code after the loop to drain it: peel off SC - 1 iterations for the prolog and complete the final SC - 1 iterations in the epilog.
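
A small sketch of which stages execute in each prolog/kernel/epilog row, for a loop body split into SC stages (names A-D as in the figure):

    def stage_plan(stages):
        sc = len(stages)
        prolog = [stages[:k] for k in range(1, sc)]  # fill: 1, 2, ..., SC-1 stages
        kernel = stages                              # steady state: all SC stages
        epilog = [stages[k:] for k in range(1, sc)]  # drain: SC-1, ..., 1 stages
        return prolog, kernel, epilog

    print(stage_plan(["A", "B", "C", "D"]))
    # prolog rows [A], [A, B], [A, B, C]; kernel [A, B, C, D];
    # epilog rows [B, C, D], [C, D], [D]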

Removing Prolog/Epilog

[Figure: the same II = 3 schedule; the prolog and epilog portions are disabled using predicated execution]

Execute the loop kernel on every iteration, but for the prolog and epilog selectively disable the appropriate operations to fill/drain the pipeline.

Kernel-only Code Using Rotating Predicates

[Figure: staged execution A0 / A1 B0 / A2 B1 C0 / ... / Bn Cn-1 Dn-2 / Cn Dn-1 / Dn alongside the kernel-only code]

Kernel-only code:
A if P[0]
B if P[1]
C if P[2]
D if P[3]

Predicate values, one row per cycle (fill, steady state, drain):
P[0] P[1] P[2] P[3]
  1    0    0    0
  1    1    0    0
  1    1    1    0
  1    1    1    1
  0    1    1    1
  0    0    1    1
  0    0    0    1

P is referred to as the staging predicate.
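
A minimal simulation of the staging-predicate rotation (my sketch; brtop-style behavior assumed): each cycle a 1 is shifted into P[0] while iterations remain to be started, then 0s shift in to drain the pipe.

    def run_kernel_only(stages, n):
        sc = len(stages)
        p = [0] * sc  # rotating staging predicates P[0..SC-1]
        for cycle in range(n + sc - 1):
            p = [1 if cycle < n else 0] + p[:-1]  # rotate the predicate file
            active = [f"{s}{cycle - i}" for i, s in enumerate(stages) if p[i]]
            print(p, active)

    run_kernel_only(["A", "B", "C", "D"], 4)
    # prints the 7 predicate rows from the table above, e.g.
    # [1, 0, 0, 0] ['A0'] ... [1, 1, 1, 1] ['A3', 'B2', 'C1', 'D0'] ... [0, 0, 0, 1] ['D3']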