ELEC 5200/6200 Computer Architecture and Design Fall 2016 Lecture 9: Instruction Level Parallelism


ELEC 5200/6200 Computer Architecture and Design Fall 2016 Lecture 9: Instruction Level Parallelism Ujjwal Guin, Assistant Professor Department of Electrical and Computer Engineering Auburn University, Auburn, AL 36849 http://www.auburn.edu/~uzg0005/ Adapted from Dr. Chen-Huan Chiang (Intel) [Adapted from Computer Architecture: A Quantitative Approach, Patterson & Hennessy, 2012] 4/28/2017 ELEC 5200-001/6200-001 Lecture 9 1

Instruction Level Parallelism. Compiler techniques to increase ILP: loop unrolling, static branch prediction (Lecture 5), dynamic branch prediction (Lecture 5). Overcoming data hazards with dynamic scheduling: Tomasulo's Algorithm.

Recall from Pipelining Review. Pipeline CPI = Ideal pipeline CPI + Structural stalls + Data hazard stalls + Control stalls. Ideal pipeline CPI: a measure of the maximum performance attainable by the implementation. Structural hazards: the hardware cannot support this combination of instructions. Data hazards: an instruction depends on the result of a prior instruction still in the pipeline. Control hazards: caused by the delay between the fetching of instructions and decisions about changes in control flow (branches and jumps).

Instruction Level Parallelism Instruction-Level Parallelism (ILP): overlap the execution of instructions to improve performance 2 approaches to exploit ILP: 1) Rely on hardware to help discover and exploit the parallelism dynamically (e.g., Pentium 4, AMD Opteron, IBM Power), and 2) Rely on software technology to find parallelism, statically at compile-time (e.g., Itanium 2)

Data Dependencies and Hazards. HW/SW must preserve program order: the order in which instructions would execute if run sequentially, as determined by the original source program. Dependences are a property of programs; the presence of a dependence indicates the potential for a hazard, but the actual hazard and the length of any stall are properties of the pipeline. Importance of data dependences: 1) indicates the possibility of a hazard, 2) determines the order in which results must be calculated, 3) sets an upper bound on how much parallelism can possibly be exploited. HW/SW goal: exploit parallelism by preserving program order only where it affects the outcome of the program.

Hazard vs. Dependence. Dependence: a fixed property of the instruction stream (i.e., the program). Hazard: a property of both the program and the processor organization. Definition: a hazard is created whenever there is a dependence between instructions, and they are close enough that the overlap during execution would change the order of access to the operand involved in the dependence. A dependence implies the potential for executing things in the wrong order; the potential only exists if the instructions can be in flight simultaneously (i.e., in the pipeline at the same time). It is a property of the dynamic distance between instructions vs. the pipeline depth. For example, a RAW dependence can exist with or without a hazard: when the distance between the two instructions is larger than the pipeline depth, no hazard occurs.

Data Dependence. Instr J is data dependent (also called a true dependence) on Instr I if:
1. Instr J tries to read an operand before Instr I writes it
I: add r1,r2,r3
J: sub r4,r1,r3
2. or Instr J is data dependent on Instr K, which is dependent on Instr I (i.e., a chain of dependences).
If two instructions are data dependent, they cannot execute simultaneously or be completely overlapped. A data dependence in the instruction sequence reflects a data dependence in the source code, and the effect of the original data dependence must be preserved. If a data dependence causes a hazard in the pipeline, it is called a Read After Write (RAW) hazard.

Name Dependence. A name dependence occurs when 2 instructions use the same register or memory location (called a name), but there is no flow of data between the instructions associated with that name. Two types of name dependence: anti-dependence and output dependence.

Name Dependence: Anti-dependence. Instr J writes an operand before Instr I reads it:
I: sub r4,r1,r3
J: add r1,r2,r3
K: mul r6,r1,r7
This is called an anti-dependence by compiler writers and results from reuse of the name r1. If an anti-dependence causes a hazard in the pipeline, it is called a Write After Read (WAR) hazard.

Name Dependence: Output Dependence. Instr K writes an operand before Instr I writes it:
I: sub r1,r4,r3
J: mul r6,r1,r7
K: add r1,r2,r3
This is called an output dependence by compiler writers and also results from reuse of the name r1. If an output dependence causes a hazard in the pipeline, it is called a Write After Write (WAW) hazard.

Register Renaming Name dependence is not a true dependence No value being transmitted between instructions Instructions involved in a name dependence can execute simultaneously if the name (register number or memory location) used in instructions is changed so instructions do not conflict Register renaming resolves name dependence for register operands Done either statically by compiler, or dynamically by HW

Let's Re-visit Data Hazards. Data hazards are classified as one of three types, depending on the order of read and write accesses in the instructions. Consider two instructions i and j, with i preceding j in program order. Read-after-write (RAW): j tries to read a source before i writes it; the most common case; corresponds to a true data dependence. Write-after-read (WAR): j tries to write a destination before it is read by i; corresponds to an anti-dependence. Write-after-write (WAW): j tries to write an operand before it is written by i; corresponds to an output dependence. Read-after-read (RAR) is never a hazard.

Loop Unrolling: Software Techniques. This C code adds a scalar to a vector: for (i=1000; i>0; i=i-1) x[i] = x[i] + s; Assume the following latencies for all examples, and ignore the delayed branch in these examples:

Instruction producing result   Instruction using result   Latency (cycles)   Stalls (cycles)
FP ALU op                      Another FP ALU op          4                  3
FP ALU op                      Store double               3                  2
Load double                    FP ALU op                  1                  1
Load double                    Store double               1                  0

Note: the latency of FP load to store is 0, since the result of the load can be bypassed without stalling the store; the value loaded from memory is stored to a new location, as used for memory copying.

Assumption of FP Latency. About the FP latency assumption: in the MIPS FP pipeline, there are parallel pipelines for different types of instructions. If the FP ALU takes, say, 4 EXE phases (F1-F4), then 2 stalls are required between an FP ALU op and a dependent store (assuming no forwarding), and 3 stalls between two dependent FP ALU ops. Note that the register write takes place in the first half of a cycle and the read in the second half.

FP ALU1: IF ID F1 F2 F3 F4 MEM WB
FP ALU2: IF ID stall stall stall F1 F2 F3 F4 MEM

FP ALU:  IF ID F1 F2 F3 F4 MEM WB
Store:   IF ID EXE stall stall MEM WB

FP Loop: Where are the Hazards? First translate the C code into MIPS assembly. R1 is initialized to the address of the array element with the highest address; F2 contains the scalar value s. To simplify, assume 8 is the lowest address of the array (8 bytes, one double word, per element); hence when the array index reaches 0, the end of the loop is reached.

Loop: L.D    F0,0(R1)    ;F0 = vector element
      ADD.D  F4,F0,F2    ;add scalar from F2
      S.D    F4,0(R1)    ;store result
      DADDUI R1,R1,#-8   ;decrement pointer by 8 bytes (DW)
      BNEZ   R1,Loop     ;branch if R1 != zero

FP Loop Showing Stalls: 9 clock cycles.

1 Loop: L.D    F0,0(R1)   ;F0 = vector element
2       stall
3       ADD.D  F4,F0,F2   ;add scalar in F2
4       stall
5       stall
6       S.D    F4,0(R1)   ;store result
7       DADDUI R1,R1,#-8  ;decrement pointer, 8 bytes (DW)
8       stall             ;assumes we cannot forward to the branch
9       BNEZ   R1,Loop    ;branch if R1 != zero

Stalls used (from the latency table): FP ALU op to store double, 2 cycles; load double to FP ALU op, 1 cycle. Can we rewrite the code to minimize stalls?

Revised FP Loop Minimizing Stalls. Swap DADDUI and S.D, changing the address offset of S.D because DADDUI now executes first:

1 Loop: L.D    F0,0(R1)
2       DADDUI R1,R1,#-8
3       ADD.D  F4,F0,F2
4       stall
5       stall
6       S.D    F4,8(R1)   ;was 0(R1); offset altered because DADDUI moved
7       BNEZ   R1,Loop

7 clock cycles, but just 3 for execution (L.D, ADD.D, S.D) and 4 for loop overhead (DADDUI, BNEZ, and 2 stalls). How to make it even faster? Loop unrolling.

Loop Unrolling. A simple scheme to increase the number of useful instructions relative to the branch and loop-overhead instructions. It replicates the loop body multiple times, adjusting the loop-termination code. It also improves scheduling: branches are eliminated, and instructions from different iterations can be scheduled together, eliminating data-use stalls by creating additional independent instructions within the loop body. However, we want to use different registers for each iteration, which increases the required number of registers; otherwise, reuse of the same registers in an unrolled loop could prevent scheduling the loop effectively.
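The transformation can be sketched at the C level. A minimal sketch, assuming (as the slides do) that the iteration count is a multiple of the unroll factor; the function names are ours, not from the slides:

```c
#define N 1000  /* iteration count from the slides; assumed multiple of 4 */

/* Original loop: add scalar s to each element x[1..N]. */
void add_scalar(double *x, double s) {
    for (int i = N; i > 0; i = i - 1)
        x[i] = x[i] + s;
}

/* Unrolled four times: one loop-count update and one branch now serve
   four element updates, and the four statements are independent, so a
   scheduler (compiler or hardware) can overlap them. */
void add_scalar_unrolled(double *x, double s) {
    for (int i = N; i > 0; i = i - 4) {
        x[i]     = x[i]     + s;
        x[i - 1] = x[i - 1] + s;
        x[i - 2] = x[i - 2] + s;
        x[i - 3] = x[i - 3] + s;
    }
}
```

Both versions touch exactly the elements x[1]..x[1000]; the unrolled one simply pays the loop overhead a quarter as often.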

Unroll Loop Four Times (straightforward way). The leading numbers are issue cycles, so the gaps show the stalls: 1 cycle after each L.D (load double to FP ALU op), 2 cycles after each ADD.D (FP ALU op to store double), and 1 cycle before BNEZ, assuming no bypass to the branch.

1  Loop: L.D    F0,0(R1)
3        ADD.D  F4,F0,F2
6        S.D    F4,0(R1)     ;drop DADDUI & BNEZ
7        L.D    F6,-8(R1)
9        ADD.D  F8,F6,F2
12       S.D    F8,-8(R1)    ;drop DADDUI & BNEZ
13       L.D    F10,-16(R1)
15       ADD.D  F12,F10,F2
18       S.D    F12,-16(R1)  ;drop DADDUI & BNEZ
19       L.D    F14,-24(R1)
21       ADD.D  F16,F14,F2
24       S.D    F16,-24(R1)
25       DADDUI R1,R1,#-32   ;altered to 4*8
27       BNEZ   R1,LOOP

27 clock cycles, or 6.75 per iteration. This assumes R1 (the size of the array; recall that 8 is its lowest address) is a multiple of 32: each pass decreases R1 by 8, so the number of original iterations is a multiple of 4, hence the unrolling by 4. Note that different registers are used in each copy to eliminate name dependences and avoid stalls. Can we rewrite the loop to minimize stalls?

Unroll Loop Four Times (cont.) Unrolling eliminates 3 branches and 3 decrements of R1; the addresses on the loads and stores are compensated to eliminate the dropped decrements (i.e., DADDUI R1,R1,#-8). This optimization requires symbolic substitution and simplification to rearrange expressions so that constants can be collapsed, such as ((i+1)+1) => (i+(1+1)) => i+2 (more in Appendix G). There is no scheduling in the above example: every instruction in the unrolled loop is followed by a dependent instruction, which causes stalls. The loop could be rewritten to minimize stalls, for example by swapping instructions.

Unrolled Loop with Scheduling that Minimizes Stalls. Scheduling, i.e., swapping instructions while preserving any dependences needed to yield the same result as the original, is made possible by register renaming.

1  Loop: L.D    F0,0(R1)
2        L.D    F6,-8(R1)
3        L.D    F10,-16(R1)
4        L.D    F14,-24(R1)
5        ADD.D  F4,F0,F2
6        ADD.D  F8,F6,F2
7        ADD.D  F12,F10,F2
8        ADD.D  F16,F14,F2
9        S.D    F4,0(R1)
10       S.D    F8,-8(R1)
11       DADDUI R1,R1,#-32
12       S.D    F12,16(R1)   ;16 - 32 = -16
13       S.D    F16,8(R1)    ;8 - 32 = -24
14       BNEZ   R1,LOOP

14 clock cycles, or 3.5 per iteration; it was 6.75 per iteration when unrolled but not scheduled, 7 when scheduled but not unrolled, and 9 without unrolling or scheduling.

Limits to Loop Unrolling. 1. Decrease in the amount of overhead amortized with each extra unrolling (Amdahl's Law): the performance gain shrinks because less overhead remains to remove. Example: in the loop above (unrolled 4 times), only 2 of the 14 clock cycles are loop overhead (DADDUI for the index value and BNEZ for loop termination), i.e., 2/4 = 1/2 cycle per original iteration; if unrolled 8 times, the overhead is still 2 cycles, now 2/8 = 1/4 cycle per original iteration. 2. Growth in code size: memory usage for large codes; for larger loops, the concern is that unrolling increases the instruction-cache miss rate. 3. Compiler limitations. Register pressure: a potential shortfall in registers created by aggressive unrolling and scheduling; if it is not possible to allocate all live values to registers, unrolling may lose some or all of its advantage. Loop unrolling reduces the impact of branches on the pipeline; another way is branch prediction (see Lecture 5).
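The amortization arithmetic in point 1 is easy to check. A trivial sketch (the function name is ours, used only to mirror the slide's numbers):

```c
/* Loop overhead amortized by unrolling: the fixed overhead cycles
   (here 2: DADDUI + BNEZ) are shared by `unroll` original iterations,
   so doubling the unroll factor halves the per-iteration overhead. */
double overhead_per_iteration(double overhead_cycles, int unroll) {
    return overhead_cycles / unroll;
}
```

With 2 overhead cycles, unrolling 4 times gives 0.5 cycle per original iteration and unrolling 8 times gives 0.25, matching the slide; the diminishing return is visible in how little the second doubling saves.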

Dynamic Scheduling. With dynamic scheduling, the hardware rearranges the instruction execution to reduce stalls while maintaining data flow and exception behavior. It handles cases where dependences are unknown at compile time, for example because they involve memory references. It allows the processor to tolerate unpredictable delays, such as cache misses, by executing other code while waiting for the miss to resolve. It allows code compiled with one pipeline in mind to run efficiently on a different pipeline, and it simplifies the compiler. Hardware speculation, a technique with significant performance advantages, builds on dynamic scheduling.

HW Schemes: Instruction Parallelism. Key idea: allow instructions behind a stall to proceed.

DIVD F0,F2,F4
ADDD F10,F0,F8
SUBD F12,F8,F14

SUBD does not depend on DIVD, so it need not wait behind the stalled ADDD. This enables out-of-order execution and allows out-of-order completion (e.g., SUBD). In a dynamically scheduled pipeline, all instructions still pass through the issue stage (i.e., ID) in order (in-order issue). We distinguish when an instruction begins execution and when it completes execution; between the two times, the instruction is in execution. Note: dynamic execution creates WAR and WAW hazards and makes exceptions harder.

Dynamic Scheduling Step 1. The simple pipeline had one stage to check both structural and data hazards: Instruction Decode (ID), also called Instruction Issue. To allow out-of-order execution, split the ID stage of the simple 5-stage pipeline into 2 stages. Issue: decode instructions, check for structural hazards. Read operands: wait until no data hazards, then read operands. The IF stage, prior to ID, fetches instructions into an instruction register or a queue of pending instructions; instructions are then issued from the register or queue (no longer from instruction memory). The EX stage follows Read operands, continuing the 5-stage pipeline.

A Dynamic Algorithm: Tomasulo's. Developed for the IBM 360/91 (before caches!), which had long memory latency. Goal: high performance without special compilers. The small number of floating-point registers (only 4 in the IBM 360) prevented interesting compiler scheduling of operations; this led Tomasulo to figure out how to get more effective registers: renaming in hardware! Why study a 1966 computer? The descendants of this design have flourished: Alpha 21264, Pentium 4, AMD Opteron, Power 5, and more.

Tomasulo's Algorithm. Control and buffers are distributed with the function units (FUs). The FU buffers, called reservation stations (RS), hold pending operands; an RS fetches and buffers an operand as soon as it is available, eliminating the need to get the operand from a register. Register names in instructions are replaced by values or pointers to an RS; this is called register renaming. Renaming avoids WAR and WAW hazards. There are more reservation stations than real registers in the register file, so the hardware can do optimizations that compilers can't.

Tomasulo's Algorithm (cont.) Two other important properties follow from the use of distributed RS rather than a centralized register file. First, hazard detection and execution control are distributed: the information held in the RS at each FU determines when an instruction can begin execution at that unit. Second, results are passed directly to FUs from the RS, where they are buffered, rather than going through the register file. RAW hazards are avoided by executing an instruction only when its operands are available. This bypass is done via a common result bus that allows all units waiting for an operand to load it simultaneously; on the IBM 360/91 it is the Common Data Bus (CDB). In pipelines with multiple execution units that issue multiple instructions per clock, more than one common result bus is needed.

Tomasulo's Algorithm (cont.) Memory: load buffers and store buffers hold data and addresses coming from and going to memory, and are treated as FUs with RS as well. Integer instructions can go past branches (predicted taken), allowing FP ops beyond the basic block into the FP queue. The FP registers are connected by a pair of buses to the FUs and by a single bus to the store buffers. All results from the FUs and from memory are sent on the CDB, which connects to everything except the load buffers.

Tomasulo's Organization of the FP Unit. [Figure: the FP Op Queue and FP Registers feed the reservation stations (Add1-Add3 in front of the FP adders, Mult1-Mult2 in front of the FP multipliers); six load buffers (Load1-Load6) bring data from memory and store buffers send data to memory; all results are broadcast on the Common Data Bus (CDB).]

Fields of a Reservation Station. Op: the operation to perform in the unit (e.g., + or -). Vj, Vk: the values of the source operands (store buffers have a V field holding the result to be stored). Qj, Qk: the reservation stations that will produce the corresponding source operands; Qj, Qk = 0 means the operand is ready (store buffers only have a Q field for the RS producing the result to be stored). A: the address field, used to hold information for the memory-address calculation of a load or store (load and store buffers each have an A field). Busy: indicates that the reservation station and its FU are busy. The register file has one added field, Qi (register result status), indicating which functional unit will write each register, if any; it is blank (i.e., 0) when no pending instruction will write that register. When an instruction tries to read the register file, it checks this field first: if the entry is empty, it can read the value from the register file; if the entry is full, it reads the name of the reservation station that holds the producing instruction.
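These fields can be sketched as C structs (a paper sketch, not from the slides; types and sizes are illustrative). The convention from the slides is kept: a Q tag of 0 means the corresponding V value is ready.

```c
/* One reservation-station entry, with the fields named on this slide. */
typedef struct {
    int    busy;      /* Busy: entry (and its FU) is in use             */
    char   op[8];     /* Op: operation to perform, e.g. "ADD" or "MUL"  */
    double vj, vk;    /* Vj, Vk: source operand values                  */
    int    qj, qk;    /* Qj, Qk: RS numbers producing operands; 0=ready */
    long   a;         /* A: address field for a load or store           */
} ResStation;

/* Per-register status field added to the register file. */
typedef struct {
    double value;
    int    qi;        /* Qi: RS that will write this register; 0 = none */
} RegStatus;

/* Execution may begin only when both operands are ready (Qj == Qk == 0). */
int ready_to_execute(const ResStation *rs) {
    return rs->busy && rs->qj == 0 && rs->qk == 0;
}
```

The ready check is exactly the dispatch condition used in the three-stage description that follows: an issued instruction sits in its station until both Q tags have been cleared by CDB broadcasts.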

Three Stages of an Instruction in the FP Unit Using Tomasulo's Algorithm. 1. Issue: get the instruction from the FP op queue (instruction queue); if a reservation station is free (no structural hazard), control issues the instruction and sends the operands (renaming registers). 2. Execute (EX): operate on the operands; when both operands are ready, execute; if not ready, watch the Common Data Bus for the result. 3. Write result (WB): finish execution; write the result on the Common Data Bus to all awaiting units and mark the reservation station available.

Common Data Bus (CDB). A regular data bus carries data + destination (a "go to" bus); the common data bus carries data + source (a "come from" bus): 64 bits of data + 4 bits of functional-unit source address. A unit captures the value if the source matches the functional unit expected to produce its result. The CDB does a broadcast because multiple instructions may be waiting for the result: any instruction that already has its other operand can be released as soon as the needed result appears on the CDB, and the broadcast releases all such instructions simultaneously. In contrast, with a centralized register file, the units would have to wait to read their results from the registers when the register buses become available. In the following example, whenever an instruction completes: broadcast to check who is waiting; update the corresponding register status; and update the corresponding reservation-station entries (move from Q to V) or move entries out of the load/store buffers. Assumptions used in the following example: 2 clocks for a load, 2 clocks for floating-point + and -, 10 clocks for *, 40 clocks for / (division).
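The broadcast-and-capture step can be sketched in C (a self-contained sketch with minimal stand-in types; the names are ours, not from the slides). Every reservation station and the register file snoop the bus, and any consumer whose tag matches the producing station captures the value:

```c
/* Minimal stand-ins: Qj/Qk (or Qi) holds the number of the producing
   reservation station; 0 means the value is already present. */
typedef struct { int busy; double vj, vk; int qj, qk; } RS;
typedef struct { double value; int qi; } Reg;

/* Station `src` finishes with `result` and drives the CDB: every
   matching consumer moves its tag out of Q and the value into V, and
   every register waiting on `src` is written -- all in one broadcast. */
void cdb_broadcast(RS *rs, int nrs, Reg *regs, int nregs,
                   int src, double result) {
    for (int i = 0; i < nrs; i++) {            /* every RS snoops the bus */
        if (!rs[i].busy) continue;
        if (rs[i].qj == src) { rs[i].vj = result; rs[i].qj = 0; }
        if (rs[i].qk == src) { rs[i].vk = result; rs[i].qk = 0; }
    }
    for (int r = 0; r < nregs; r++)            /* register file snoops too */
        if (regs[r].qi == src) { regs[r].value = result; regs[r].qi = 0; }
}
```

Note that one broadcast can wake several waiting instructions at once, which is exactly the advantage over reading results back through a centralized register file.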

Tomasulo's Example (initial state). The instruction stream:

Instruction        Issue  ExecComp  WriteResult
LD    F6,34+R2
LD    F2,45+R3
MULTD F0,F2,F4
SUBD  F8,F6,F2
DIVD  F10,F0,F6
ADDD  F6,F8,F2

Load buffers (3): Load1 No, Load2 No, Load3 No (Busy / Address).

Reservation stations (3 FP adder RS, 2 FP multiplier RS); Time is the FU count-down to completion:

Time  Name   Busy  Op  Vj  Vk  Qj  Qk
      Add1   No
      Add2   No
      Add3   No
      Mult1  No
      Mult2  No

Register result status (Qi); Clock is the cycle counter:

Clock  F0  F2  F4  F6  F8  F10  F12 ... F30
0      (all FU entries empty)

Tomasulo's Example, Cycle 57.

Instruction status:   Issue  ExecComp  WriteResult
LD    F6,34+R2          1       3          4
LD    F2,45+R3          2       4          5
MULTD F0,F2,F4          3      15         16
SUBD  F8,F6,F2          4       7          8
DIVD  F10,F0,F6         5      56         57
ADDD  F6,F8,F2          6      10         11

Load buffers: Load1 No, Load2 No, Load3 No.

Reservation stations:

Time  Name   Busy  Op    Vj    Vk     Qj  Qk
      Add1   No
      Add2   No
      Add3   No
      Mult1  No
      Mult2  Yes   DIVD  M*F4  M(A1)

Register result status (Clock 56):

F0    F2     F4  F6       F8     F10     F12 ... F30
M*F4  M(A2)      (M-M)+M  (M-M)  Result

Once again: in-order issue, out-of-order execution, and out-of-order completion.

Why can Tomasulo overlap iterations of loops? Register renaming: multiple iterations use different physical destinations for their registers (dynamic loop unrolling). Reservation stations: they permit instruction issue to advance past integer control-flow operations, and they buffer old values of registers, totally avoiding the WAR stall. Another perspective: Tomasulo builds the data-flow dependency graph on the fly.

Tomasulo's Loop Example.

Loop: LD    F0,0(R1)
      MULTD F4,F0,F2
      SD    F4,0(R1)
      SUBI  R1,R1,#8
      BNEZ  R1,Loop

This time assume multiply takes 4 clocks; the 1st load takes 8 clocks (due to an L1 cache miss) and the 2nd load takes 1 clock (cache hit). Assume R1 = 80 at the beginning. To be clear, clocks will be shown for SUBI and BNEZ; in reality, the integer instructions run ahead of the floating-point instructions. Two iterations are shown.

Loop Example (initial state). Store buffers are added to the organization.

Instruction status (ITER = iteration count):

ITER  Instruction      Issue  ExecComp  WriteResult
1     LD    F0,0(R1)
1     MULTD F4,F0,F2
1     SD    F4,0(R1)
2     LD    F0,0(R1)
2     MULTD F4,F0,F2
2     SD    F4,0(R1)

Load/store buffers (Busy / Addr / Fu): Load1 No, Load2 No, Load3 No; Store1 No, Store2 No, Store3 No.

Reservation stations:

Time  Name   Busy  Op  Vj  Vk  Qj  Qk
      Add1   No
      Add2   No
      Add3   No
      Mult1  No
      Mult2  No

Register result status (R1 is the register used for addressing and iteration control):

Clock  R1  F0  F2  F4  F6  F8  F10  F12 ... F30
0      80  (all Fu entries empty)

Loop Example, Cycle 20 (end of 2 iterations).

Instruction status:

ITER  Instruction      Issue  ExecComp  WriteResult
1     LD    F0,0(R1)     1       9         10
1     MULTD F4,F0,F2     2      14         15
1     SD    F4,0(R1)     3      18         19
2     LD    F0,0(R1)     6      10         11
2     MULTD F4,F0,F2     7      15         16
2     SD    F4,0(R1)     8      19         20

Load/store buffers:

Name    Busy  Addr  Fu
Load1   Yes   56
Load2   No
Load3   Yes   64
Store1  No
Store2  No
Store3  Yes   64    Mult1

Reservation stations:

Time  Name   Busy  Op     Vj  Vk     Qj     Qk
      Add1   No
      Add2   No
      Add3   No
      Mult1  Yes   Multd      R(F2)  Load3
      Mult2  No

Register result status:

Clock  R1  F0     F2  F4     F6  F8  F10  F12 ... F30
20     56  Load1      Mult1

Once again: in-order issue, out-of-order execution, and out-of-order completion.

Tomasulo's scheme offers 2 major advantages. 1. Distribution of the hazard-detection logic, via distributed reservation stations and the CDB: if multiple instructions are waiting on a single result, and each already has its other operand, then the instructions can be released simultaneously by the broadcast on the CDB; if a centralized register file were used, the units would have to read their results from the registers when the register buses became available. 2. Elimination of stalls for WAW and WAR hazards.

Tomasulo Drawbacks. Complexity: the delays of the 360/91, MIPS 10000, Alpha 21264, and IBM PPC 620 are discussed in CA:AQA 2/e, but not realized in silicon! Many associative stores (CDB) must operate at high speed. Performance is limited by the Common Data Bus: each CDB must go to multiple functional units, giving high capacitance and high wiring density, and the number of functional units that can complete per cycle is limited to one! Multiple CDBs require more FU logic for parallel associative stores. Non-precise interrupts! We will address this later.

Conclusion. Leverage implicit parallelism for performance: instruction-level parallelism. Loop unrolling by the compiler increases ILP; branch prediction increases ILP; dynamic hardware exploits ILP. Dynamic scheduling works when dependences cannot be known at compile time, can hide L1 cache misses, and lets code compiled for one machine run well on another.

Conclusion (cont.) Reservation stations: renaming to a larger set of registers plus buffering of source operands. This prevents the registers from becoming a bottleneck, avoids WAR and WAW hazards, and allows loop unrolling in hardware. Execution is not limited to basic blocks (the integer units get ahead, beyond branches), and it helps with cache misses as well. Lasting contributions: dynamic scheduling, register renaming, and load/store disambiguation. The 360/91's descendants include the Intel Pentium 4, IBM Power 5, and AMD Athlon/Opteron.

Thank You