CS252 Graduate Computer Architecture
Lecture 15: Instruction Level Parallelism and Dynamic Execution
March 11, 2002
Prof. David E. Culler, Computer Science 252, Spring 2002

Recall from Pipelining Review

Pipeline CPI = Ideal pipeline CPI + Structural stalls + Data hazard stalls + Control stalls
- Ideal pipeline CPI: a measure of the maximum performance attainable by the implementation
- Structural hazards: the HW cannot support this combination of instructions
- Data hazards: an instruction depends on the result of a prior instruction still in the pipeline
- Control hazards: caused by the delay between the fetching of instructions and decisions about changes in control flow (branches and jumps)

Recall Data Hazard Resolution: in-order issue with in-order completion vs. in-order issue with out-of-order completion

[Pipeline diagram: the sequence lw r1,0(r2); sub r4,r1,r6; and r6,r2,r7; or r8,r2,r9 flows through the Ifetch/ALU/DMem stages; the load-use dependence on r1 inserts bubbles before the sub.]

Extend this to multiple instruction issue? What if the load had a longer delay? Can the and issue early? Which hazards are present: RAW? WAR? WAW?
load r3 <- r1, r2
add  r1 <- r5, r2
sub  r3 <- r3, r1
or   r3 <- r2, r1

Register reservations: when an instruction issues, mark its destination register busy until it completes; check all register reservations before issue.

Ideas to Reduce Stalls (Chapters 3 and 4)

Technique -> stalls it reduces:
- Dynamic scheduling -> data hazard stalls
- Dynamic branch prediction -> control stalls
- Issuing multiple instructions per cycle -> ideal CPI
- Speculation -> data and control stalls
- Dynamic memory disambiguation -> data hazard stalls involving memory
- Loop unrolling -> control hazard stalls
- Basic compiler pipeline scheduling -> data hazard stalls
- Compiler dependence analysis -> ideal CPI and data hazard stalls
- Software pipelining and trace scheduling -> ideal CPI and data hazard stalls
- Compiler speculation -> ideal CPI, data and control stalls

Instruction-Level Parallelism (ILP)

ILP within a Basic Block (BB) is quite small. A BB is a straight-line code sequence with no branches in except to the entry and no branches out except at the exit. With an average dynamic branch frequency of 15% to 25%, only 4 to 7 instructions execute between a pair of branches, and the instructions within a BB are likely to depend on each other. To obtain substantial performance enhancements, we must exploit ILP across multiple basic blocks. Simplest: loop-level parallelism, exploiting parallelism among iterations of a loop. Vector processing is one way; if not vector, then either dynamic via branch prediction or static via loop unrolling by the compiler.
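The basic-block size claim follows directly from the branch frequency; a quick sketch (the 15%-25% range is from the slide, the rest is just the reciprocal):

```python
# Back-of-envelope check of the slide's claim: if 15%-25% of dynamically
# executed instructions are branches, a branch appears on average every
# 1/frequency instructions, bounding the typical basic-block size.
for freq in (0.15, 0.25):
    print(f"branch frequency {freq:.0%}: ~{1/freq:.1f} instructions per branch")
# -> about 6.7 and 4.0, matching the "4 to 7 instructions" on the slide
```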

Data Dependence and Hazards

Instr J is data dependent on Instr I if J tries to read an operand before I writes it:
  I: add r1,r2,r3
  J: sub r4,r1,r3
or if J is data dependent on Instr K, which is in turn dependent on I. This is caused by a True Dependence (the compiler term). If a true dependence causes a hazard in the pipeline, it is called a Read After Write (RAW) hazard.

Dependences are a property of programs. The presence of a dependence indicates the potential for a hazard, but the actual hazard and the length of any stall are properties of the pipeline. Importance of data dependences: 1) they indicate the possibility of a hazard; 2) they determine the order in which results must be calculated; 3) they set an upper bound on how much parallelism can possibly be exploited. Today we look at HW schemes to avoid hazards.

Name Dependence #1: Anti-dependence

A name dependence occurs when 2 instructions use the same register or memory location, called a name, but there is no flow of data between the instructions associated with that name. There are 2 versions. In the first, Instr J writes an operand before Instr I reads it:
  I: sub r4,r1,r3
  J: add r1,r2,r3
  K: mul r6,r1,r7
This is called an anti-dependence in compiler work; it results from reuse of the name r1. If an anti-dependence causes a hazard in the pipeline, it is called a Write After Read (WAR) hazard.

Name Dependence #2: Output dependence

In the second, Instr J writes an operand before Instr I writes it:
  I: sub r1,r4,r3
  J: add r1,r2,r3
  K: mul r6,r1,r7
This is called an output dependence by compiler writers; it also results from reuse of the name r1. If an output dependence causes a hazard in the pipeline, it is called a Write After Write (WAW) hazard.

ILP and Data Hazards

Program order: the order the instructions would execute in if executed sequentially, one at a time, as determined by the original source program. The HW/SW goal: exploit parallelism while preserving the appearance of program order — modify the order only in ways that cannot be observed by the program and do not affect its outcome. For example, instructions involved in a name dependence can execute simultaneously if the name used in the instructions is changed so that they do not conflict. Register renaming resolves name dependences for registers, either by the compiler or by HW:
  add r1, r2, r3
  sub r2, r4, r5
  and r3, r2, 1

Control Dependencies

Every instruction is control dependent on some set of branches, and, in general, these control dependencies must be preserved to preserve program order:
  if p1 { S1; };
  if p2 { S2; }
S1 is control dependent on p1, and S2 is control dependent on p2 but not on p1.
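A minimal sketch of the renaming idea applied to the three-instruction sequence above. The renamer itself, with its unlimited pool of fresh names t0, t1, ..., is an illustrative assumption, not hardware the slides describe:

```python
# A compiler-style register renamer. Instructions are (dest, src1, src2)
# tuples; every write gets a fresh name, so WAR and WAW dependences vanish
# while true (RAW) dependences are preserved through the name map.
from itertools import count

def rename(insts):
    fresh = (f"t{i}" for i in count())
    current = {}                        # architectural name -> newest physical name
    out = []
    for dest, s1, s2 in insts:
        s1 = current.get(s1, s1)        # read the newest version of each source
        s2 = current.get(s2, s2)
        current[dest] = next(fresh)     # fresh destination per write
        out.append((current[dest], s1, s2))
    return out

prog = [("r1", "r2", "r3"),   # add r1, r2, r3
        ("r2", "r4", "r5"),   # sub r2, r4, r5  (WAR on r2 against the add)
        ("r3", "r2", "1")]    # and r3, r2, 1   (RAW on r2 against the sub)
print(rename(prog))   # [('t0','r2','r3'), ('t1','r4','r5'), ('t2','t1','1')]
```

After renaming, the add and sub no longer conflict on r2; only the true dependence (the and reading the sub's result, now t1) remains.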

Control Dependence Ignored

Control dependence need not always be preserved: we are willing to execute instructions that should not have been executed, thereby violating the control dependences, if we can do so without affecting the correctness of the program. Instead, 2 properties are critical to program correctness: exception behavior and data flow.

Exception Behavior

Preserving exception behavior means any change in instruction execution order must not change how exceptions are raised in the program (=> no new exceptions). Example:
  DADDU R2,R3,R4
  BEQZ  R2,L1
  LW    R1,0(R2)
L1:
What is the problem with moving the LW before the BEQZ? If the branch is taken, the LW should never execute; hoisting it could raise a memory exception the original program never raises.

Data Flow

Data flow: the actual flow of data values among the instructions that produce results and those that consume them. Branches make the flow dynamic: they determine which instruction is the supplier of a value. Example:
  DADDU R1,R2,R3
  BEQZ  R4,L
  DSUBU R1,R5,R6
L: OR   R7,R1,R8
Does the OR depend on the DADDU or the DSUBU? That depends on the branch outcome, so we must preserve data flow during execution.

CS 252 Administrivia

Final project proposals due 3/17: send the URL of a page containing the title & participants, a problem statement, and an annotated bibliography; we'll monitor progress through the pages. Assignment 3 is out, due 3/19. Quiz 3/21.

Advantages of Dynamic Scheduling

It handles cases where dependences are unknown at compile time (e.g., because they may involve a memory reference). It simplifies the compiler. It allows code compiled for one pipeline to run efficiently on a different pipeline. And hardware speculation, a technique with significant performance advantages, builds on dynamic scheduling.

HW Schemes: Instruction Parallelism

Key idea: allow instructions behind a stall to proceed:
  DIVD F0,F2,F4
  ADDD F10,F0,F8
  SUBD F12,F8,F14
This enables out-of-order execution and allows out-of-order completion. We distinguish when an instruction begins execution and when it completes execution; between those 2 times, the instruction is in execution. In a dynamically scheduled pipeline, all instructions still pass through the issue stage in order (in-order issue).

Dynamic Scheduling Step 1

The simple pipeline had 1 stage to check both structural and data hazards: Instruction Decode (ID), also called Instruction Issue. Split the ID stage of the simple 5-stage pipeline into 2 stages:
- Issue: decode instructions, check for structural hazards
- Read operands: wait until there are no data hazards, then read the operands

A Dynamic Algorithm: Tomasulo's Algorithm

Developed for the IBM 360/91 (before caches!). Goal: high performance without special compilers. The small number of floating-point registers (4 in the 360) prevented interesting compiler scheduling of operations; this led Tomasulo to figure out how to get more effective registers — renaming in hardware! Why study a 1966 computer? The descendants of this machine have flourished: Alpha 21264, HP 8000, MIPS 10000, Pentium III, PowerPC 604, ...

Tomasulo Algorithm

Control & buffers are distributed with the Function Units (FU); the FU buffers are called "reservation stations" and hold pending operands. Registers in instructions are replaced by values or by pointers to reservation stations (RS) — a form of register renaming that avoids WAR and WAW hazards. There are more reservation stations than registers, so the hardware can do optimizations compilers can't. Results go to the FUs from the RSs, not through the registers, over a Common Data Bus that broadcasts results to all FUs. Loads and stores are treated as FUs with RSs as well. Integer instructions can go past branches, allowing FP ops beyond the basic block in the FP queue.

Tomasulo Organization

[Figure: the FP op queue and FP registers feed the reservation stations — Add1-Add3 in front of the FP adders, Mult1-Mult2 in front of the FP multipliers; load buffers (Load1-Load6) bring data from memory and store buffers send it to memory; the Common Data Bus (CDB) broadcasts results to all of them.]

Reservation Station Components
- Op: the operation to perform in the unit (e.g., + or -)
- Vj, Vk: the values of the source operands (store buffers have a V field for the result to be stored)
- Qj, Qk: the reservation stations producing the source values. Note: Qj,Qk = 0 => ready. (Store buffers only have Qi, for the RS producing the result.)
- Busy:
indicates whether the reservation station or FU is busy.
The register result status indicates which functional unit will write each register, if one exists; it is blank when no pending instruction will write that register.

Three Stages of Tomasulo Algorithm

1. Issue — get an instruction from the FP Op Queue. If a reservation station is free (no structural hazard), control issues the instruction & sends the operands (renaming the registers).
2. Execute — operate on the operands (EX). When both operands are ready, execute; if not ready, watch the Common Data Bus for the result.
3. Write result — finish execution (WB). Write on the Common Data Bus to all awaiting units; mark the reservation station available.

A normal data bus is data + destination (a "go to" bus); the Common Data Bus is data + source (a "come from" bus): 64 bits of data + 4 bits of Functional Unit source address. A unit writes the operand if the broadcast matches the Functional Unit it expects (the one producing the result); the CDB does the broadcast. Example speeds: 3 clocks for FP +/-; 10 for *; 40 clocks for /.
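A toy model of the station fields named above (Op, Vj, Vk, Qj, Qk, Busy) and of stages 1 and 3: issue with renaming, and the write-result broadcast on the CDB. The station and register names, and using a dict as the register file, are illustrative assumptions, not the 360/91's hardware:

```python
from dataclasses import dataclass

@dataclass
class RS:
    op: str = ""
    Vj: float = None          # value of source j, once known
    Vk: float = None
    Qj: str = ""              # name of the RS producing source j ("" => ready)
    Qk: str = ""
    busy: bool = False

def issue(name, op, dst, s1, s2, stations, regs):
    """Stage 1: fill a free station; each source becomes a value or an RS tag."""
    rs = stations[name]
    rs.op, rs.busy = op, True
    for src, vfield, qfield in ((s1, "Vj", "Qj"), (s2, "Vk", "Qk")):
        producer = regs[src]
        if isinstance(producer, str):      # still being computed: record the tag
            setattr(rs, qfield, producer)
        else:                              # value already in the register file
            setattr(rs, vfield, producer)
    regs[dst] = name                       # register result status: renamed to RS

def write_result(name, value, stations, regs):
    """Stage 3: broadcast (tag, value) on the CDB; every waiter grabs it."""
    for rs in stations.values():
        if rs.Qj == name: rs.Vj, rs.Qj = value, ""
        if rs.Qk == name: rs.Vk, rs.Qk = value, ""
    for r in regs:
        if regs[r] == name:
            regs[r] = value
    stations[name] = RS()                  # the station becomes free again

stations = {"Add1": RS(), "Mult1": RS()}
regs = {"F0": 0.0, "F2": 2.0, "F4": 4.0, "F6": 6.0, "F8": 0.0}
issue("Mult1", "MULTD", "F0", "F2", "F4", stations, regs)
issue("Add1", "ADDD", "F8", "F0", "F6", stations, regs)   # F0 renamed to Mult1
print(stations["Add1"].Qj)                                 # Mult1
write_result("Mult1", 8.0, stations, regs)                 # CDB broadcast
print(stations["Add1"].Vj, regs["F0"])                     # 8.0 8.0
```

Note how the ADDD never names F0 once issued: it waits on the tag Mult1, exactly the renaming the slides describe.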

Tomasulo Example

Instruction stream (3 load buffers, 3 FP adder reservation stations, 2 FP multiplier reservation stations; the RS column counts down the remaining execute cycles, and a clock cycle counter advances with each snapshot):
  LD    F6, 34(R2)
  LD    F2, 45(R3)
  MULTD F0, F2, F4
  SUBD  F8, F6, F2
  DIVD  F10, F0, F6
  ADDD  F6, F8, F2

[The original slides step through this example with a full table per cycle — instruction status, reservation stations, and register result status; only the per-cycle annotations are reproduced here.]

Cycle 1: the first LD issues to Load1 (address 34+R2).
Cycle 2: the second LD issues to Load2 (45+R3). Note: we can have multiple loads outstanding.
Cycle 3: MULTD issues to Mult1 with the operand R(F4) and the tag Load2 — register names are removed ("renamed") in the reservation stations. Load1 is completing; what is waiting for Load1?
Cycle 4: SUBD issues to Add1, picking up M(A1) and waiting on Load2. Load2 is completing; what is waiting for Load2?
Cycle 5: DIVD issues to Mult2. The timers start counting down for Add1 and Mult1.

Cycle 6: ADDD issues to Add2, waiting on Add1 for one operand. Should we issue ADDD here despite the name dependence on F6?
Cycle 7: Add1 (SUBD) is completing; what is waiting for it?
Cycles 8-10: ADDD counts down in Add2 while MULTD continues in Mult1.
Cycle 11: Add2 (ADDD) is completing. Should we write the result of ADDD here? All the quick instructions complete in this cycle!

Cycles 12-15: MULTD counts down in Mult1; at cycle 15, Mult1 (MULTD) is completing — what is waiting for it?
Cycle 16: DIVD begins executing in Mult2 (40 cycles). Faster-than-light computation (skip a couple of cycles): everything is now just waiting for Mult2 (DIVD) to complete.

Cycles 55-56: at cycle 56, Mult2 (DIVD) is completing; what is waiting for it?
Cycle 57: DIVD writes its result. Once again: in-order issue, out-of-order execution and out-of-order completion.

Tomasulo Drawbacks

- Complexity: the delays of the 360/91, MIPS 10000, Alpha 21264, and IBM PPC 620 appear in CA:AQA 2/e, but not in silicon!
- Many associative stores (CDB) at high speed.
- Performance is limited by the Common Data Bus: each CDB must go to multiple functional units (high capacitance, high wiring density), and the number of functional units that can complete per cycle is limited to one! Multiple CDBs would mean more FU logic for the parallel associative stores.
- Non-precise interrupts! We will address this later.

Tomasulo Loop Example

Loop: LD    F0, 0(R1)
      MULTD F4, F0, F2
      SD    F4, 0(R1)
      SUBI  R1, R1, #8
      BNEZ  R1, Loop

This time assume multiply takes 4 clocks, the 1st load takes 8 clocks (L1 cache miss), and the 2nd load takes 1 clock (hit). To be clear, we will show clocks for SUBI and BNEZ; in reality the integer instructions run ahead of the floating-point instructions. We show 2 iterations, with store buffers added and an iteration-count column; R1 starts at 80 and is the value of the register used for the address and iteration control.
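As an aside, the loop body is easy to read off: it scales an array in place, walking backwards by 8-byte elements until R1 hits 0. A sketch in Python (the memory contents and the scale factor are illustrative assumptions):

```python
# What the slide's loop computes: mem[r1] *= f2 for r1 = 80, 72, ..., 8.
def scale_loop(mem, r1, f2):
    while True:
        f0 = mem[r1]          # LD    F0, 0(R1)
        f4 = f0 * f2          # MULTD F4, F0, F2
        mem[r1] = f4          # SD    F4, 0(R1)
        r1 -= 8               # SUBI  R1, R1, #8
        if r1 == 0:           # BNEZ  R1, Loop
            break
    return mem

mem = {addr: float(addr) for addr in range(8, 88, 8)}   # 10 elements
scale_loop(mem, 80, 3.0)
print(mem[80], mem[72])       # 240.0 216.0 -- every element tripled
```

The slides only trace the first two iterations (addresses 80 and 72), which is why those addresses keep appearing in the tables below.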

Loop Example Cycle 1: iteration 1's LD issues to Load1 (address 80).
Cycle 2: MULTD issues to Mult1, waiting on Load1.
Cycle 3: SD issues to Store1 (address 80), waiting on Mult1.
Cycle 4: implicit renaming sets up the data flow graph. SUBI is dispatched (it is not in the FP queue).
Cycle 5: R1 becomes 72.
Cycle 6: iteration 2's LD issues to Load2 (address 72), and BNEZ is dispatched (also not in the FP queue). Notice that F0 never sees the load from location 80.

Cycle 7: iteration 2's MULTD issues to Mult2, waiting on Load2. The register file is completely detached from the computation; the first and second iterations are completely overlapped.
Cycle 8: iteration 2's SD issues to Store2, waiting on Mult2.
Cycle 9: Load1 is completing: who is waiting? Note: dispatching the next SUBI.
Cycle 10: Load2 is completing: who is waiting? Note: dispatching the next BNEZ.
Cycle 11: the next load in the sequence issues to Load3 (address 64).
Cycle 12: why not issue the third multiply?

Cycle 13: why not issue the third store?
Cycle 14: Mult1 (iteration 1's MULTD) is completing. Who is waiting?
Cycle 15: Mult2 (iteration 2's MULTD) is completing. Who is waiting?
Cycles 16-18: iteration 3's MULTD issues to the now-free Mult1 (waiting on Load3), Store3 is allocated for address 64, and the stores drain as their data arrives.

Cycles 19-20: the stores drain to memory as their values ([80]*R(F2), [72]*R(F2)) arrive, while iteration 3's LD issues to the recycled Load1 (address 56). Once again: in-order issue, out-of-order execution and out-of-order completion.

Why can Tomasulo overlap iterations of loops?

- Register renaming: multiple iterations use different physical destinations for their registers (dynamic loop unrolling).
- Reservation stations: they permit instruction issue to advance past integer control-flow operations, and they also buffer old values of registers — totally avoiding the WAR stall that we saw in the scoreboard.
- Another perspective: Tomasulo builds the data flow dependency graph on the fly.

Tomasulo's scheme offers 2 major advantages. (1) The distribution of the hazard detection logic: with distributed reservation stations and the CDB, if multiple instructions are waiting on a single result and each already has its other operand, all of them can be released simultaneously by the broadcast on the CDB; if a centralized register file were used instead, the units would have to read their results from the registers when the register buses become available. (2) The elimination of stalls for WAW and WAR hazards.

What about Precise Interrupts?

A precise interrupt means the state of the machine looks as if no instruction beyond the faulting instruction has issued. Tomasulo had in-order issue, out-of-order execution, and out-of-order completion; we need to fix the out-of-order completion aspect so that we can find a precise breakpoint in the instruction stream.
Relationship between precise interrupts and speculation: speculation is "guess and check". It is central to branch prediction: we take our best shot at predicting the branch direction, and if we speculate and are wrong, we must back up and restart execution at the point where we predicted incorrectly — which is exactly the same problem as precise exceptions! The technique for both precise interrupts/exceptions and speculation: in-order completion, or commit.
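The first answer to "why can Tomasulo overlap iterations of loops" — different physical destinations per iteration — can be sketched with a fresh-name generator. The p0, p1, ... names are an illustrative stand-in for reservation-station tags:

```python
# Each iteration's writes to F0 and F4 get fresh physical names, so
# iteration 2's LD need not wait behind iteration 1's SD: there are no
# WAR/WAW hazards across iterations, only the true LD -> MULTD -> SD chains.
from itertools import count

fresh = (f"p{i}" for i in count())
latest = {}                                # architectural -> newest physical name
for it in (1, 2):
    f0 = latest["F0"] = next(fresh)        # LD    F0, 0(R1)
    f4 = latest["F4"] = next(fresh)        # MULTD F4, F0, F2  (reads this f0)
    print(f"iteration {it}: LD writes {f0}, MULTD writes {f4}, SD stores {f4}")
```

Each iteration's SD reads its own iteration's multiply result, never a stale or future F4 — the dynamic loop unrolling the slides describe.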

HW support for precise interrupts

We need a HW buffer for the results of uncommitted instructions: the reorder buffer, with 3 fields: instruction, destination, value. Use the reorder buffer number instead of the reservation station name when execution completes; the reorder buffer supplies operands between execution-complete and commit. (The reorder buffer can be an operand source => more registers, like the RSs.) Instructions commit in order; once an instruction commits, its result is put into the register. As a result, it is easy to undo speculated instructions on mispredicted branches or exceptions.

[Figure: the FP op queue feeds the reservation stations and FP adders as before, but a reorder buffer now sits between the functional units and the FP registers.]

Four Steps of Speculative Tomasulo Algorithm

1. Issue — get an instruction from the FP Op Queue. If a reservation station and a reorder buffer slot are free, issue the instruction & send the operands & the reorder buffer number for the destination (this stage is sometimes called "dispatch").
2. Execution — operate on the operands (EX). When both operands are ready, execute; if not ready, watch the CDB for the result; when both are in the reservation station, execute. This checks RAW (sometimes called "issue").
3. Write result — finish execution (WB). Write on the Common Data Bus to all awaiting FUs & the reorder buffer; mark the reservation station available.
4. Commit — update the register with the reorder buffer result. When the instruction at the head of the reorder buffer has its result present, update the register with the result (or store to memory) and remove the instruction from the reorder buffer. A mispredicted branch flushes the reorder buffer (this stage is sometimes called "graduation").
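The four steps above can be sketched with a toy reorder buffer carrying the slide's three fields (instruction, destination, value) plus a done flag. The instruction names and values are illustrative; commit is strictly in order from the head, and a flush models the mispredicted-branch case:

```python
from collections import deque

class ReorderBuffer:
    def __init__(self):
        self.buf = deque()             # entries: [instr, dest, value, done]
        self.regs = {}                 # architectural state: committed results only

    def allocate(self, instr, dest):   # at issue: claim a slot, in program order
        self.buf.append([instr, dest, None, False])

    def write_result(self, instr, value):   # out-of-order completion via the CDB
        for entry in self.buf:
            if entry[0] == instr:
                entry[2], entry[3] = value, True

    def commit(self):                  # in-order commit from the head only
        while self.buf and self.buf[0][3]:
            _, dest, value, _ = self.buf.popleft()
            self.regs[dest] = value

    def flush(self):                   # mispredicted branch or exception:
        self.buf.clear()               # uncommitted results simply vanish

rob = ReorderBuffer()
rob.allocate("MULTD", "F0")
rob.allocate("ADDD", "F6")
rob.write_result("ADDD", 10.0)     # ADDD finishes first...
rob.commit()
print(rob.regs)                    # {} -- cannot commit past the older MULTD
rob.write_result("MULTD", 8.0)
rob.commit()
print(rob.regs)                    # {'F0': 8.0, 'F6': 10.0}
rob.allocate("DIVD", "F10")        # speculated down a wrong path
rob.flush()                        # branch resolves: discard it; regs intact
```

The key property on display: even though ADDD completed before MULTD, the register file only ever reflects a prefix of program order, which is exactly what a precise interrupt requires.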
(As specified by Smith paper) need associative comparison network Could use future file or just use the register result status buffer to track which specific reorder buffer has received the value Need as many ports on ROB as register file Summary Reservations stations: implicit register renaming to larger set of registers + buffering source operands Prevents registers as bottleneck Avoids WAR, WAW hazards of Scoreboard Allows loop unrolling in HW t limited to basic blocks (integer units gets ahead, beyond branches) Today, helps cache misses as well Don t stall for L1 Data cache miss (insufficient ILP for L2 miss?) Lasting Contributions Dynamic scheduling ister renaming Load/store disambiguation 360/91 descendants are Pentium III; PowerPC 604; MIPS R10000; HP-PA 8000; Alpha 21264 Lec 15.75 Lec 15.76 Page 13