CSE 502 Graduate Computer Architecture


Computer Architecture: A Quantitative Approach, Fifth Edition, Chapter 3
CSE 502 Graduate Computer Architecture, Lec 15-19: Instruction-Level Parallelism and Its Exploitation
Larry Wittie, Computer Science, Stony Brook University
http://www.cs.sunysb.edu/~cse502 and ~lw

Introduction
Pipelining became a universal technique around 1985: it overlaps the execution of instructions and exploits Instruction-Level Parallelism (ILP). There are two main approaches to reordering code for speed: hardware-based dynamic (runtime) approaches, used extensively in server and desktop processors but much less in Personal Mobile Device (PMD) processors, and compiler-based static (compiler & profiling) approaches. The static methods have not been very successful except for scientific applications with many matrix-code loops.

Instruction-Level Parallelism
When exploiting instruction-level parallelism, the goal is to minimize CPI (cycles per instruction):
Pipeline CPI = Ideal pipeline CPI + Structural stalls + Data hazard stalls + Control stalls
Parallelism within one basic block is limited (a basic block has no branches in or out); the typical size of a basic block is 3-6 instructions. Must optimize across branches => out-of-order (OoO) and speculative execution.
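As an illustrative worked example (the stall numbers here are made up, not from the slides): with an ideal pipeline CPI of 1.0, 0.10 structural stall cycles per instruction, 0.20 data-hazard stall cycles, and 0.15 control stall cycles, Pipeline CPI = 1.0 + 0.10 + 0.20 + 0.15 = 1.45, so the machine runs at only about 69% of its ideal instruction rate.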

Data Dependence
Loop-level parallelism: easily unroll a loop statically or dynamically; can use SIMD (vector processors and GPUs). The challenge is data dependence. Instruction j is data dependent on instruction i if instruction i produces a result that may be used by instruction j, or if instruction j is data dependent on instruction k and instruction k is data dependent on instruction i (the relation is transitive). Dependent instructions cannot be issued together to be executed simultaneously.

Data Dependence
Dependences are a property of programs; the pipeline organization determines how dependences are detected and which ones must cause a stall. A data dependence conveys: the possibility of a hazard (an event that causes a stall), the order in which some results must be calculated, and an upper bound on exploitable instruction-level parallelism. Dependences that flow through memory locations are difficult to detect (if languages let arrays overlap) but occur very rarely.

Name Dependence
Two instructions use the same register name but there is no flow of information (WAR, WAW). This is not a true data dependence (RAW), but it may cause a problem when reordering instructions. Anti-dependence: instruction j writes a register or memory location that instruction i reads (Write After Read); the initial ordering (i before j) must be preserved. Output dependence: instruction i and instruction j write the same register or memory location (Write After Write); the ordering (i before j) must be preserved. To resolve these, use renaming techniques (the modern way: use hundreds of non-architected CPU-internal registers).

Other Factors
Data hazards: Read After Write (RAW) - a true dependence that passes a value; Write After Write (WAW) - a name dependence; Write After Read (WAR) - a name dependence.
Control dependence: the ordering of instruction i with respect to a branch instruction. An instruction that is control dependent on a branch cannot be moved before the branch, so that its execution is no longer controlled by the branch. An instruction that is not control dependent on a branch cannot be moved after the branch, so that its execution is controlled by the branch.

Examples
Example 1:
      DADDU R1,R2,R3
      BEQZ  R4,L
      DSUBU R1,R1,R6
L:    OR    R7,R1,R8
The OR instruction is data dependent on both DADDU and DSUBU (via R1).
Example 2:
      DADDU R12,R2,R3
      BEQZ  R12,skip
      DSUBU R4,R5,R6
      DADDU R5,R4,R9
skip: OR    R7,R8,R9
Assuming R4 is not used after skip, the DSUBU can be moved up to just before the branch (after DADDU R12 and before BEQZ R12) to avoid a stall.

Compiler Techniques for Exposing ILP
Pipeline scheduling: separate each dependent instruction from its source instruction by the pipeline latency of the source instruction.
Example:
for (i=999; i>=0; i=i-1)
    x[i] = x[i] + s;   // see assembly code on next slide

Pipeline Stalls
for (i=999; i>=0; i=i-1) x[i] = x[i] + s;
Loop: L.D    F0,0(R1)
      stall
      ADD.D  F4,F0,F2    ; value s is in F2
      stall
      stall
      S.D    F4,0(R1)
      DADDUI R1,R1,#-8
      stall              ; (one-cycle latency from the integer DADDUI to the dependent BNE)
      BNE    R1,R2,Loop

Pipeline Scheduling
for (i=999; i>=0; i=i-1) x[i] = x[i] + s;
Re-scheduled code:
Loop: L.D    F0,0(R1)
      DADDUI R1,R1,#-8
      ADD.D  F4,F0,F2
      stall
      stall
      S.D    F4,8(R1)
      BNE    R1,R2,Loop

Loop Unrolling
Unroll by a factor of 4 (assume the number of elements is divisible by 4) and eliminate the unnecessary instructions:
Loop: L.D    F0,0(R1)
      ADD.D  F4,F0,F2      ; 1 stall
      S.D    F4,0(R1)      ; 2 stalls; drop DADDUI & BNE
      L.D    F6,-8(R1)
      ADD.D  F8,F6,F2      ; 1 stall
      S.D    F8,-8(R1)     ; 2 stalls; drop DADDUI & BNE
      L.D    F10,-16(R1)
      ADD.D  F12,F10,F2    ; 1 stall
      S.D    F12,-16(R1)   ; 2 stalls; drop DADDUI & BNE
      L.D    F14,-24(R1)
      ADD.D  F16,F14,F2    ; 1 stall
      S.D    F16,-24(R1)   ; 2 stalls
      DADDUI R1,R1,#-32    ; 1 stall
      BNE    R1,R2,Loop
Note the number of live registers: 11 (= 8 + 3) here vs. 5 (= 2 + 3) in the original loop.
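To make the transformation concrete, here is a minimal C sketch of the same unrolling (an illustration, not the slide's code; the function name and signature are mine):

#include <stddef.h>

/* Sketch: the loop above, unrolled 4x at the source level. One
   induction-variable update and one loop test now serve four element
   updates, exactly as the assembly drops three DADDUI/BNE pairs.
   Assumes n is a multiple of 4, like the slide. */
void add_scalar_unrolled4(double *x, size_t n, double s)
{
    for (size_t i = 0; i < n; i += 4) {
        x[i]     += s;
        x[i + 1] += s;
        x[i + 2] += s;
        x[i + 3] += s;
    }
}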

Loop Unrolling/Pipeline Scheduling
Pipeline-schedule the unrolled loop => no data stalls:
Loop: L.D    F0,0(R1)
      L.D    F6,-8(R1)
      L.D    F10,-16(R1)
      L.D    F14,-24(R1)
      ADD.D  F4,F0,F2
      ADD.D  F8,F6,F2
      ADD.D  F12,F10,F2
      ADD.D  F16,F14,F2
      S.D    F4,0(R1)
      S.D    F8,-8(R1)
      DADDUI R1,R1,#-32
      S.D    F12,16(R1)
      S.D    F16,8(R1)
      BNE    R1,R2,Loop

Strip Mining
What if the number of loop iterations is variable? Let the number of iterations be n and the goal be to make k copies of the loop body. Generate a pair of loops: the first executes n mod k times (the strip), the second executes n / k times with the body unrolled k ways; a C sketch follows below.
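A minimal C sketch of the strip-mining transformation (the function name and the choice k = 4 are mine, for illustration):

#include <stddef.h>

/* Strip mining for arbitrary n: a short cleanup loop runs n mod k
   times, then the main loop runs n/k times with its body unrolled
   k ways. */
void add_scalar_strip_mined(double *x, size_t n, double s)
{
    const size_t k = 4;
    size_t j = n % k;                   /* first loop: n mod k iterations */
    for (size_t i = 0; i < j; i++)
        x[i] += s;
    for (size_t i = j; i < n; i += k) { /* second loop: n/k iterations */
        x[i]     += s;
        x[i + 1] += s;
        x[i + 2] += s;
        x[i + 3] += s;
    }
}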

Strip-Mined Triple Loop, Scheduled
for (i=n; i>=0; i=i-1) x[i] = x[i] + s;
k=3; n=n+1; j=n-(n/k)*k; n=n-1;                  // j is 2, 1, or 0
for (i=j; i>0; i=i-1) x[n-j+i] = x[n-j+i] + s;   // strip: (n+1) mod k iterations
for (i=n-j; i>=0; i=i-1) x[i] = x[i] + s;        // triple loop
Scheduled, strip-mined triple loop => no pipeline data stalls:
// R1 is the address of x[n-j]; R2 is (address of x[0]) - 8
Loop: L.D    F0,0(R1)      // checking for x[999] down to x[0]: j = 1000 mod 3 = 1
      L.D    F6,-8(R1)
      L.D    F10,-16(R1)
      ADD.D  F4,F0,F2      // the strip handled x[999-1+1] += s
      ADD.D  F8,F6,F2      // in the triple loop, the first three are x[999-1] to x[999-3]
      ADD.D  F12,F10,F2
      S.D    F4,0(R1)      // the last three are x[2] to x[0]
      S.D    F8,-8(R1)     // and the final 8+(R1) = address of x[0]
      DADDUI R1,R1,#-24    // so the final (R1) = (address of x[0]) - 8 = R2
      S.D    F12,8(R1)
      BNE    R1,R2,Loop    // the triple loop exits properly

Five Loop Unrolling Decisions
Unrolling requires understanding how one instruction depends on another and how the instructions can be changed or reordered given the dependences:
1. Determine whether loop unrolling can be useful by finding that the loop iterations are independent (except for the loop maintenance code).
2. Use different registers to avoid unnecessary constraints forced by using the same registers for different computations.
3. Eliminate the extra test and branch instructions and adjust the loop termination and iteration increment/decrement code.
4. Determine that the loads and stores in the unrolled loop can be interchanged by observing that loads and stores from different iterations are independent. This transformation requires analyzing memory addresses and finding that no pairs refer to the same address.
5. Schedule (reorder) the code, preserving any dependences needed to yield the same result as the original code.

Three Limits to Loop Unrolling
1. Decrease in the amount of overhead amortized with each extra unrolling (Amdahl's Law).
2. Growth in code size: for larger loops, size is a concern if it increases the instruction cache miss rate.
3. Register pressure: a potential shortfall in registers created by aggressive unrolling and scheduling. If it is not possible to allocate all live values to registers, the code may lose some or all of the advantages of loop unrolling.
Software pipelining is an older compiler technique that symbolically unrolls loops systematically. Loop unrolling reduces the impact of branches on pipelines; another way is branch prediction.

Compiler Software-Pipelining of the V = S*V Loop
The software-pipelining structure tolerates the long latencies of floating-point operations. { l.s, mul.s, s.s are the single-precision (SP) floating-point load, multiply, and store. } { At start: r1 = addr[V(0)], r2 = addr[V(last)]+4, f0 = the scalar SP floating-point multiplier. } The instructions in the iteration box are in reverse order, drawn from different iterations; with separate floating-point function units for Load, Multiply, and Store, the store/multiply/load triples can overlap. Bg: marks the prologue that starts the iterated code; Ep: marks the epilogue that finishes it.
Bg: l.s   f2,0(r1)      // prologue: load V(0)
    mul.s f4,f0,f2      //           multiply V(0)
    l.s   f2,4(r1)      //           load V(1)
    addi  r1,r1,8
Lp: s.s   f4,-8(r1)     // kernel: store for iteration i
    mul.s f4,f0,f2      //         multiply for iteration i+1
    l.s   f2,0(r1)      //         load for iteration i+2
    addi  r1,r1,4
    bne   r1,r2,Lp
Ep: s.s   f4,-8(r1)     // epilogue: store for the next-to-last element
    mul.s f4,f0,f2      //           last multiply
    s.s   f4,-4(r1)     //           store for the last element
[The slide's timing diagram, in which each cycle executes an L, M, and S drawn from three different iterations, is omitted here.]
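The same idea in C, as a hedged sketch (the function and variable names are mine): the store for iteration i overlaps the multiply for iteration i+1, mirroring the Bg/Lp/Ep structure above.

/* Software-pipelined V = s*V: the prologue starts the first multiply,
   the kernel stores result i while computing result i+1, and the
   epilogue drains the final value. */
void scale_sw_pipelined(float *v, int n, float s)
{
    if (n <= 0) return;
    float t = s * v[0];              /* prologue (Bg): first multiply */
    for (int i = 0; i < n - 1; i++) {
        v[i] = t;                    /* kernel (Lp): store iteration i */
        t = s * v[i + 1];            /*              multiply for i+1  */
    }
    v[n - 1] = t;                    /* epilogue (Ep): final store */
}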

Branch Prediction
Basic 2-bit predictor: for each branch, predict taken or not taken; if the prediction is wrong two consecutive times, change the prediction.
Correlating predictor: multiple 2-bit predictors for each branch, one for each possible combination of the outcomes of the preceding n branches at all branch locations (one global n-bit shift register).
Local predictor: multiple 2-bit predictors for each branch, one for each possible combination of outcomes of the last n occurrences of this branch (an n-bit shift register per branch plus 2^n 2-bit counters).
Tournament predictor: combine a correlating ("global") predictor with local predictors, plus statistics on which predictor has been the better one for each branch.
Intel uses two levels of predictors, each with loop-branch counters: a fast first level with a larger second level to resolve first-level conflicts.
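A minimal C sketch of the basic 2-bit predictor (the table size and indexing are assumptions for illustration): counter values 0-1 predict not taken and 2-3 predict taken, so the prediction only flips after two consecutive mispredictions from a strong state.

#include <stdbool.h>

#define PRED_ENTRIES 1024                     /* assumed table size */

static unsigned char counters[PRED_ENTRIES];  /* 2-bit saturating counters */

bool predict_taken(unsigned pc)
{
    return counters[pc % PRED_ENTRIES] >= 2;  /* states 2,3 => taken */
}

void train(unsigned pc, bool taken)
{
    unsigned char *c = &counters[pc % PRED_ENTRIES];
    if (taken) {
        if (*c < 3) (*c)++;                   /* saturate at strongly taken */
    } else {
        if (*c > 0) (*c)--;                   /* saturate at strongly not taken */
    }
}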

Branch Prediction Performance
[Figure: branch-predictor misprediction rates vs. predictor size in bits. Misprediction rates are roughly 4/3 times higher for the integer SPEC benchmarks than for the floating-point ones.]

Dynamic Scheduling
Rearrange the order of instructions to reduce stalls while maintaining data flow: execute out of order.
Advantages: the compiler does not need precise knowledge of the pipeline structure (older compiled code runs efficiently), and it handles cases where dependences are not knowable at compile time (e.g., cache-miss memory stalls).
Disadvantage: a substantial increase in hardware complexity. It also complicates exceptions: the hardware must flush results produced by code executed ahead of the exception point and finish all earlier instructions not yet done at the time of the exception.

Dynamic Scheduling
Dynamic scheduling implies out-of-order execution and out-of-order completion, which creates the possibility of WAR and WAW hazards.
Tomasulo's approach, built for the IBM 360/91 in 1967: tracks when operands are available, introduces register renaming in hardware, and minimizes WAW and WAR hazards. (The 360 had 16 32-bit general-purpose registers; see en.wikipedia.org/wiki/System/360.)

Register Renaming
Example:
      DIV.D F0,F2,F4
      ADD.D F6,F0,F8
      S.D   F6,0(R1)
      SUB.D F8,F10,F14
      MUL.D F6,F10,F8
There are name dependences on F6 and F8: an anti-dependence (WAR) on F8 (ADD.D reads F8; SUB.D then writes it) and an anti-dependence (WAR) on F6 (S.D reads F6; MUL.D then writes it).

Register Renaming: Avoid Reuse
The same example with renamed destinations:
      DIV.D Q,F2,F4
      ADD.D S,Q,F8
      S.D   S,0(R1)
      SUB.D T,F10,F14
      MUL.D U,F10,T
Now only RAW hazards remain, and these can be strictly ordered; there are no WAR or WAW hazards.

Register Renaming
Register renaming is provided by reservation stations (RS) in the 360/91 and by extra registers in modern chip CPUs. An RS contains: the instruction; buffered operand values (as soon as they are available); and, for each operand still outstanding, the reservation-station number of the instruction that will provide it. The RS fetches and buffers an operand as soon as it becomes available (not necessarily from the register file). Pending instructions designate the RS to which they will send their output. Result values are broadcast on a result bus, called the common data bus (CDB); only the last output updates the register file. As instructions are issued, the register specifiers are renamed with reservation-station numbers. There may be more reservation stations than registers.

Tomasulo's Algorithm: 360/91 Hardware
[Figure: top-level design. Load and store buffers hold data and addresses and behave like reservation stations; instructions issue from the instruction queue to the reservation stations.]

Tomasulo's Algorithm: Three Steps (after fetching into the instruction queue)
1. Issue (in original instruction order): get the next instruction from the FIFO queue; if a reservation station (RS) is available, issue the instruction to it along with any operand values that are already available. If some operand values are not yet available, the instruction waits in the RS.
2. Execute: when an operand becomes available on the CDB, store it into any reservation-station operand slots waiting for it; when all of an instruction's operands are ready and its function box is no longer busy, start executing it. Loads and stores are queued in program order and proceed only after their effective address (offset + register value) has been calculated (they stall before that). No instruction is allowed to initiate execution until all branches that precede it in program order have completed.
3. Write result: one result at a time on the CDB (pulled in parallel into waiting function-unit slots, store buffers, and registers). Each memory store waits until both its value and its address have been received.
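A small C sketch of the issue-step bookkeeping (the field and function names are my own, not the 360/91's): each operand slot holds either a value or the tag of the reservation station that will produce it, which is exactly what the later CDB broadcast fills in.

/* One reservation station: an operand is either a buffered value
   (tag == 0) or a tag naming the producing reservation station. */
typedef struct {
    int    busy;
    int    op;                /* operation to perform */
    int    qj, qk;            /* producer tags; 0 = value present */
    double vj, vk;            /* buffered operand values */
} RS;

/* Issue: returns 0 (stall) if the station is busy; otherwise records
   the operation and either each operand's value or its producer's tag. */
int issue(RS *rs, int op, int tag_j, double val_j, int tag_k, double val_k)
{
    if (rs->busy) return 0;          /* no free RS: issue stalls */
    rs->busy = 1;
    rs->op = op;
    rs->qj = tag_j; if (tag_j == 0) rs->vj = val_j;
    rs->qk = tag_k; if (tag_k == 0) rs->vk = val_k;
    return 1;
}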

Example (Figure 3.7)
[Figure: reservation-station and register-status tables for a worked Tomasulo example; not reproduced in this transcription.]

Tomasulo's Algorithm: Advantages
Hazard-detection logic is distributed across the function boxes: each reservation station and load/store buffer checks the conditions for its own instruction to issue, start executing, or access memory, so there is no single site of hazard detection to become a bottleneck. Branches use the fast integer logic, not the slow floating-point boxes. Loop code unrolls automatically across several basic blocks: many floating-point operations are issued, keeping the FP boxes busy, and different function boxes can all execute at the same time. The main limits on parallelism are (1) only one box per cycle can put a result onto the common data bus (CDB) to be copied by all waiting slots, and (2) operations do not execute before earlier branches finish. Each function box has multiple input slots; instructions waiting for a box can start execution when their operands are known and the box is not busy, and after issue the slots are serviced in any order. Instructions have no WAR/WAW naming hazards: each result is stored in the waiting slots as soon as it reaches the CDB, and the ID numbers of awaited result slots replace reused register names.

Hardware-Based Speculation
Execute instructions along predicted execution paths, but commit the results only if the prediction proves to be correct. Instruction commit: an instruction is allowed to update the register file only when it is no longer speculative (all earlier branches have finished). This needs an additional piece of hardware to prevent any irrevocable action (changing a register or memory value) until an instruction commits: a ReOrder Buffer (ROB) for uncommitted results.

Reorder Buffer (ROB)
The reorder buffer holds the results of instructions between completion and commit. Each entry has four fields: instruction type (branch/store/register), destination field (register number), value field (the output value), and ready field (has the instruction completed execution?). The reservation stations are modified so that an operand's source can now be a reorder-buffer entry instead of a functional-unit slot.
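A C sketch of a reorder-buffer entry with the four fields just listed, plus a commit step (the sizes and names are assumptions): results retire strictly from the head, so architectural state is updated in program order.

#define ROB_SIZE 64                       /* assumed ROB depth */

typedef enum { TYPE_BRANCH, TYPE_STORE, TYPE_REGISTER } InstrType;

typedef struct {
    InstrType type;     /* branch / store / register */
    int       dest;     /* destination register number */
    long      value;    /* result value, filled in at completion */
    int       ready;    /* has the instruction finished executing? */
} ROBEntry;

/* Commit: only the oldest entry may retire, and only once it is ready. */
void try_commit(ROBEntry rob[ROB_SIZE], int *head, long regfile[])
{
    ROBEntry *e = &rob[*head];
    if (!e->ready) return;                /* oldest not done: wait */
    if (e->type == TYPE_REGISTER)
        regfile[e->dest] = e->value;      /* architectural update */
    /* stores would write memory here; branches check their prediction */
    e->ready = 0;
    *head = (*head + 1) % ROB_SIZE;
}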

Figure 3.11: The basic structure of an FP unit using Tomasulo's algorithm, extended to handle speculation. The major changes are the addition of the ROB (between issue and commit) and the elimination of the store buffer, whose function is integrated into the ROB.

Reorder Buffer
Register values and memory values are not written until an instruction commits. Each instruction commits when it reaches the head (bottom) of the reorder-buffer queue, at which point it leaves the ROB. Whenever a mispredicted branch tries to commit, all speculated entries in the ROB (everything above it, i.e., younger) are cleared; they should never have been executed. Whenever an instruction that is marked as having raised an exception tries to commit, the exception is recognized for the first time.

Multiple Issue and Static Scheduling
To achieve CPI < 1, the processor must complete multiple instructions per clock. Solutions: statically scheduled superscalar (a.k.a. multi-issue) processors, VLIW (very long instruction word) processors, and dynamically scheduled superscalar processors.

Multiple Issue Processor Variations
[Table: the common multiple-issue processor variations and their issue, hazard-detection, and scheduling characteristics; not reproduced in this transcription.]

Very Long Instruction Word Processors
Package multiple operations into one instruction. An example VLIW processor with five operations per instruction word: one integer instruction (or branch), two independent floating-point operations, and two independent memory references. There must be enough parallelism in the code to fill the available slots, which requires massive compile-time analysis to combine instructions from different basic blocks and move them up to fill empty instruction-word slots.
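A C sketch of the five-slot instruction word described above (the encoding is hypothetical): the compiler must fill each slot with an independent operation or a no-op, and the hardware launches all five slots each cycle with no dependence checking.

/* One VLIW instruction word with five operation slots. Any slot the
   compiler cannot fill holds NOP, which is one reason code size grows. */
#define NOP 0u

typedef struct {
    unsigned int_or_branch;   /* one integer ALU operation or branch */
    unsigned fp_op[2];        /* two independent FP operations */
    unsigned mem_op[2];       /* two independent memory references */
} VLIWWord;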

VLIW Processors: Disadvantages
Parallelism must be found statically (i.e., at compile time). Code size: there are usually many partially filled instruction words. No hazard-detection hardware: the compiler must be perfectly tuned to the hardware, or bad results are created without warning. Binary code compatibility: even slight hardware changes mean revising the compiler and recompiling.

Dynamic Scheduling, Multiple Issue, and Speculation
Modern microarchitectures combine dynamic scheduling + multiple issue + speculation. Two approaches: (1) assign reservation stations and update the pipeline control table in half clock cycles, which only supports 2 instructions/clock; (2) design logic to handle any possible dependencies between the instructions issued together. Hybrid approaches also exist. The issue logic can become the bottleneck.

Multi-Issue Design Overview
The main changes are more function boxes, to support the execution of more instructions per cycle, and a wider CDB, to allow multiple function units to post results at the same time.

IBM Power4
Single-threaded predecessor to the Power5. Eight execution units in an out-of-order engine; each unit may issue one instruction each cycle. Instruction pipeline stages: IF: instruction fetch, IC: instruction cache, BP: branch predict, D0: decode stage 0, Xfer: transfer, GD: group dispatch, MP: mapping, ISS: instruction issue, RF: register file read, EX: execute, EA: compute address, DC: data cache, F6: six-cycle floating-point execution pipe, Fmt: data format, WB: write back, CP: group commit. The eight execution units: 2 LSU (load/store), 2 FXU (integer fixed-point arithmetic), 2 FPU (floating-point arithmetic), BXU (branch execution), and CRL (condition-register logical).

Multiple Issue
Limit the number of instructions of a given class that can be issued in a bundle (i.e., on the same cycle), for example one FP, one integer, one load, and one store. Determine all dependencies among the instructions in the bundle; the cost rises with the square of the bundle size (see the sketch below). If no dependencies exist within the bundle, issue all of its instructions into reservation stations for entry into the execution units. The processor must also have multiple completion/commit buffers.
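A C sketch of the in-bundle dependence check (the register-field names are assumed): every later instruction is compared against every earlier one, which is where the bundle-size-squared cost comes from.

/* Simplified instruction: one destination and two source registers. */
typedef struct { int dest, src1, src2; } Inst;

/* Returns 1 if the bundle has no internal RAW or WAW dependence and
   can therefore issue together; the nested loops give the n^2 cost. */
int bundle_is_independent(const Inst *b, int n)
{
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (b[j].src1 == b[i].dest ||      /* RAW on first source  */
                b[j].src2 == b[i].dest ||      /* RAW on second source */
                b[j].dest == b[i].dest)        /* WAW on destination   */
                return 0;
    return 1;
}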

Example
Loop: LD     R2,0(R1)    ; R2 = array element (64-bit value)
      DADDIU R2,R2,#1    ; increment R2 by 1
      SD     R2,0(R1)    ; store result
      DADDIU R1,R1,#8    ; increment array address by 8 bytes
      BNE    R2,R3,Loop  ; branch if this was not the last element
(DADDIU is the double-word integer add-immediate-unsigned instruction.)

Example (No Speculation)
[Table: cycle-by-cycle issue, execute, and write times for three iterations of the loop without speculation; not reproduced in this transcription.]

Example (With Speculation)
[Table: cycle-by-cycle issue, execute, write, and commit times for three iterations of the loop with speculation; not reproduced in this transcription.]

Branch-Target Buffer
Speculative, multiple-issue execution needs high instruction bandwidth. A branch-target buffer is a next-PC prediction buffer indexed by the current PC: it supplies the predicted fetch address before the instruction has even been decoded.
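A minimal C sketch of a direct-mapped branch-target buffer (the sizes and tag scheme are assumptions): a tag-matched hit returns the stored target as the next fetch PC, while a miss predicts fall-through.

#define BTB_ENTRIES 512                 /* assumed buffer size */

typedef struct {
    unsigned tag;        /* full PC of the branch stored here */
    unsigned target;     /* predicted next PC */
    int      valid;
} BTBEntry;

static BTBEntry btb[BTB_ENTRIES];

/* Lookup happens in the fetch stage, before decode. */
unsigned btb_next_pc(unsigned pc, unsigned fall_through)
{
    BTBEntry *e = &btb[(pc >> 2) % BTB_ENTRIES];
    if (e->valid && e->tag == pc)
        return e->target;               /* hit: predicted-taken branch */
    return fall_through;                /* miss: predict not taken */
}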

Branch Folding
Optimization: a larger branch-target buffer that also stores the target instruction itself, hiding the longer access time a larger buffer requires. This enables branch folding: the fetched target instruction can replace the branch, making unconditional branches effectively free.

Return Address Predictor
Most unconditional indirect branches come from function returns, and the same procedure can be called from multiple sites, which causes a branch-target buffer to forget the correct return address between calls. Solution: a return-address buffer organized as a stack.
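A C sketch of the return-address stack (the depth is an assumption): calls push, returns pop, so each return predicts correctly no matter which site called the procedure.

#define RAS_DEPTH 16                       /* assumed stack depth */

static unsigned ras[RAS_DEPTH];
static int      ras_top;                   /* number of pushes so far */

void ras_push(unsigned return_pc)          /* on a predicted call */
{
    ras[ras_top % RAS_DEPTH] = return_pc;  /* oldest entry overwritten */
    ras_top++;
}

unsigned ras_pop(void)                     /* on a predicted return */
{
    if (ras_top > 0) ras_top--;
    return ras[ras_top % RAS_DEPTH];
}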

Integrated Instruction Fetch Unit
Design a monolithic unit that performs: branch prediction; instruction prefetch (fetching ahead); and instruction memory access and buffering (dealing with fetches that cross cache lines).

Register Renaming vs. Reorder Buffers
Instead of virtual registers from reservation stations and the reorder buffer, create a single register pool containing both the architecturally visible registers and the virtual (physical) registers, and use a hardware-based map to rename registers during issue. WAW and WAR hazards are avoided, and speculation recovery occurs by copying the map during commit; a ROB-like queue is still needed to update the map table in order. This simplifies commit: record that the mapping between an architectural register and its physical register is no longer speculative, and free the physical register that held the older value. In other words, physical registers are swapped at commit. Physical-register de-allocation is the more difficult part.
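A C sketch of renaming with a single physical-register pool (the sizes and names are assumed): sources are translated through the map table and the destination gets a fresh physical register from the free list, which is what removes the WAW and WAR hazards.

#define NUM_ARCH 32
#define NUM_PHYS 128                     /* assumed pool size */

static int map_table[NUM_ARCH];          /* architectural -> physical */
static int free_list[NUM_PHYS];          /* unallocated physical regs */
static int free_count;

/* Rename one instruction: translate its sources, then allocate a new
   physical register for its destination. Assumes a free register is
   available; real hardware stalls issue otherwise. The register that
   held the destination's previous value is freed later, at commit. */
int rename(int dest, int src1, int src2, int *psrc1, int *psrc2)
{
    *psrc1 = map_table[src1];            /* read current mappings */
    *psrc2 = map_table[src2];
    int p = free_list[--free_count];     /* fresh physical register */
    map_table[dest] = p;                 /* later readers see new value */
    return p;
}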

Integrated Issue and Renaming
Combining instruction issue with register renaming: the issue logic pre-reserves enough physical registers for the bundle (a fixed number?), finds the dependencies within the bundle and maps registers as necessary, then finds the dependencies between the current bundle and the bundles already in flight and maps registers as necessary.

How Much to Speculate?
Mis-speculation degrades performance and power relative to not speculating, and it may cause additional misses (cache, TLB); prevent speculative code from causing the more costly misses (e.g., in L2). Speculating through multiple branches complicates speculation recovery, and no processor can resolve multiple branches per cycle.

Energy Efficiency
Speculation is only energy efficient when it significantly improves performance. Value prediction: useful for loads that read from a constant pool and for instructions that produce a value from a small set of values; it has not been incorporated into modern processors. A similar idea, address-aliasing prediction (guessing whether two load/store instructions can be reordered, failing if they reference the same memory address), is used in some processors.