
Instruction-Level Parallelism and its Exploitation: PART 1 ILP concepts (2.1) Basic compiler techniques (2.2) Reducing branch costs with prediction (2.3) Dynamic scheduling (2.4 and 2.5)

Project and Case Studies Project: Lecture Nov 9 Case studies Presentation of pipeline case studies on Nov 12 This week: Division into groups (5-6 in each) Selection of case studies: MIPS R10000 Intel Pentium 4 AMD Opteron Sun Rock (UltraSparc) IBM Power 6

Lectures 1. Introduction 2. Instruction-level Parallelism, part 1 3. Instruction-level Parallelism, part 2 4. Memory Hierarchies 5. Multiprocessors and Thread-Level Parallelism 6. System Aspects and Virtualization 7. Summary and Review

Bottlenecks in Simple Pipelines [Figure: five-stage pipeline with PC, IF/ID, ID/EX, EX/MEM, MEM/WB latches and data memory] Three classes of hazards that limit parallelism: Data hazards (lost cycles due to dependences) Control hazards (lost cycles due to branches) Structural hazards (lost cycles due to lack of resources)

Unlocking Instruction-Level Parallelism: A First Set of Techniques Improve parallel execution in the basic pipeline to avoid stalls: Static scheduling of instructions (compiler) Dynamic branch prediction (run-time) Exploit parallelism in programs dynamically (run-time), e.g. for (i=1000; i>0; i=i-1) x[i] = x[i] + 10.0; Use many execution units to run things in parallel

Instruction-Level Parallelism Basic Concepts (Ch 2.1) Two instructions must be independent in order to execute in parallel Three classes of dependences that limit parallelism: Data dependences Name dependences Control dependences Dependences are properties of the program Can lead to hazards, which are properties of the pipeline organization

Data Dependences An instruction j is data dependent on instruction i if: instruction i produces a result used by instr. j, or instruction j is data dependent on instruction k and instr. k is data dependent on instr. i Example [notation: OP Rx, Ry, Rz means Rx <- Ry OP Rz]:
loop: LD   F0, 0(R1)
      ADDD F4, F0, F2
      SD   0(R1), F4
Easy to detect dependences for registers; trickier for memory locations (memory ambiguity/alias problem)
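The register dependences above are explicit in the operand fields; the memory-alias problem is easier to see at the source level. A minimal C sketch (a hypothetical function, not from the slides):

    /* Whether the store to a[i] and the load from b[i] touch the same location
       depends on the pointer values at run time, so neither the compiler nor the
       hardware can always prove the iterations independent. */
    void scale(double *a, double *b, int n) {
        for (int i = 0; i < n; i++)
            a[i] = b[i] * 2.0;   /* if a and b overlap, there is a hidden dependence */
    }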

Name Dependences Two instructions use the same name (register or memory address) but don't exchange data Anti dependence (WAR if hazard in pipeline) Instruction j writes to a register or memory location that instruction i reads from, and instruction i is executed first Output dependence (WAW if hazard in pipeline) Instruction i and instruction j write to the same register or memory location; ordering between instructions must be preserved Name dependences are not fundamental and can sometimes be eliminated through hardware or software techniques (renaming)

Control Dependences Example: if Test1 then { S1 } if Test2 then { S2 } S1 is control dependent on Test1 S2 is control dependent on Test2, but not on Test1 We can't move an instruction that is control dependent on a branch before the branch instruction We can't move an instruction that is not control dependent on a branch after the branch instr.

Compiler Techniques to Expose ILP: Example (Ch 2.2) for (i=1000; i>0; i=i-1) x[i] = x[i] + 10.0; Iterations are independent => parallel execution
loop: LD   F0, 0(R1)   ; F0 = array element
      ADDD F4, F0, F2  ; Add scalar constant
      SD   0(R1), F4   ; Save result
      SUBI R1, R1, #8  ; decrement array ptr.
      BNEZ R1, loop    ; reiterate if R1 != 0
      NOP              ; delayed branch
(RAW dependences: LD -> ADDD -> SD) Can we eliminate all penalties in each iteration?

Static Scheduling within each Loop Iteration
Original loop (four stall cycles, counting the branch-delay NOP):
loop: LD   F0, 0(R1)
      stall
      ADDD F4, F0, F2
      stall
      stall
      SD   0(R1), F4
      SUBI R1, R1, #8
      BNEZ R1, loop
      NOP
Statically scheduled loop (one stall cycle):
loop: LD   F0, 0(R1)
      stall
      ADDD F4, F0, F2
      SUBI R1, R1, #8
      BNEZ R1, loop
      SD   8(R1), F4
Can we do better by scheduling across iterations?

Loop Unrolling (four copies of the loop body; RAW dependences link each LD -> ADDD -> SD group)
loop: LD   F0, 0(R1)
      ADDD F4, F0, F2
      SD   0(R1), F4     ; drop SUBI & BNEZ
      LD   F6, -8(R1)    ; adjust displacement
      ADDD F8, F6, F2
      SD   -8(R1), F8    ; drop SUBI & BNEZ
      LD   F10, -16(R1)  ; adjust displacement
      ADDD F12, F10, F2
      SD   -16(R1), F12  ; drop SUBI & BNEZ
      LD   F14, -24(R1)  ; adjust displacement
      ADDD F16, F14, F2
      SD   -24(R1), F16
      SUBI R1, R1, #32   ; alter to 4*8
      BNEZ R1, loop
      NOP
Registers must be renamed to avoid WAR hazards. A larger chunk of sequential code simplifies scheduling.

Statically Scheduled Unrolled Loop
loop: LD   F0, 0(R1)
      LD   F6, -8(R1)
      LD   F10, -16(R1)
      LD   F14, -24(R1)
      ADDD F4, F0, F2
      ADDD F8, F6, F2
      ADDD F12, F10, F2
      ADDD F16, F14, F2
      SD   0(R1), F4
      SD   -8(R1), F8
      SD   -16(R1), F12
      SUBI R1, R1, #32
      BNEZ R1, loop
      SD   8(R1), F16
All penalties are eliminated. CPI = 1! Important steps: hoist the loads, push the stores down. Note: the displacement of the last store must be changed (SUBI has already decremented R1). Effects of loop unrolling: provides a larger sequential instruction window, which makes it easier for both static and dynamic methods to extract ILP, but makes the code bigger.
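At the source level, this transformation corresponds to unrolling the C loop by four. A minimal sketch, assuming (as the example does) that the trip count is a multiple of four:

    /* Unrolled-by-four version of: for (i=1000; i>0; i=i-1) x[i] = x[i] + 10.0;
       Four independent statements per iteration give the scheduler more to work with. */
    void add_scalar(double x[1001]) {
        for (int i = 1000; i > 0; i -= 4) {
            x[i]     += 10.0;
            x[i - 1] += 10.0;
            x[i - 2] += 10.0;
            x[i - 3] += 10.0;
        }
    }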

Dynamic Methods to Improve ILP Not all potential hazards can be resolved by static scheduling at compile time For static scheduling to work, it is necessary for the compiler to know enough about the processor implementation to predict hazards Therefore, we will also explore dynamic techniques for hazard resolution

General Processor Organization [Figure: fetch instruction -> get operands & issue -> execution units (integer & logic, floating point, memory access) -> update state] Major bottlenecks: Control hazards, memory performance => Fetch bottleneck Data hazards, structural hazards, control hazards => Issue bottleneck

Fetch Bottleneck Control hazards Dynamic branch prediction: Predict outcome of branches and jumps Branch target buffers Issue (and execute) beyond branches Do not update state until prediction verified Memory bottleneck Memory performance improvement (memory hierarchy) Prefetch, multiple fetch

Dynamic Branch Prediction (Ch. 2.3) Branches limit performance because: Branch penalties are high Prevent a lot of ILP from being exploited Solution: Dynamic branch prediction to predict the outcome of conditional branches. Benefits: Reduce time to determine branch condition Reduce time to calculate the branch target address

Branch History Table A simple branch prediction scheme [Figure: a table of one-bit entries indexed by bits of the branch-instruction PC; entries with value 1 mean the branch is predicted as taken] The idea: Use the last branch outcome as the prediction The branch-prediction buffer is indexed by bits from the branch-instruction PC If the prediction is wrong, then invert the prediction Problem: a loop causes two mispredictions in a row
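A minimal C sketch of such a one-bit table (the table size and branch address are made-up values), illustrating the stated problem: the loop branch mispredicts on loop exit and again on the first iteration of the next visit.

    #include <stdio.h>

    #define BHT_ENTRIES 1024
    static unsigned char bht[BHT_ENTRIES];          /* 0 = predict not taken, 1 = predict taken */

    static int  predict(unsigned pc)            { return bht[pc % BHT_ENTRIES]; }
    static void update (unsigned pc, int taken) { bht[pc % BHT_ENTRIES] = (unsigned char)taken; }

    int main(void) {
        unsigned pc = 0x400120;                     /* hypothetical branch address */
        int mispred = 0;
        for (int visit = 0; visit < 2; visit++)     /* enter the loop twice */
            for (int iter = 0; iter < 10; iter++) {
                int taken = (iter < 9);             /* taken nine times, then falls through */
                if (predict(pc) != taken) mispred++;
                update(pc, taken);
            }
        printf("mispredictions: %d\n", mispred);    /* two per loop visit */
        return 0;
    }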

A Two-bit Prediction Scheme Requires the prediction to be wrong twice in order to change the prediction => better performance Performance: 0%-18% misprediction frequency for SPEC92 Integer programs have a higher misprediction frequency than floating point (FP) programs
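A sketch of the per-entry state this scheme uses: a two-bit saturating counter (the typedef and helper names are mine, not from the slides). Replacing the one-bit entries of the table sketched above with these counters means a single wrong outcome no longer flips the prediction.

    typedef unsigned char counter2;                    /* saturating counter, values 0..3 */

    static int predict_taken(counter2 c) { return c >= 2; }   /* 2,3 = predict taken */

    static counter2 train(counter2 c, int taken) {
        if (taken) return c < 3 ? c + 1 : 3;           /* saturate at strongly taken */
        else       return c > 0 ? c - 1 : 0;           /* saturate at strongly not taken */
    }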

Correlating Branch Predictors Correlating ((m,n)) predictors take the outcomes of the last m branches into account: the global history selects one of 2^m separate n-bit predictors for each branch Performs better than the 2-bit predictor
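A sketch of a (2,2) predictor built from the counter2/train/predict_taken helpers above, with assumed sizes: two bits of global branch history select one of four two-bit counters in each table entry.

    #define CORR_ENTRIES 1024
    static counter2 corr_table[CORR_ENTRIES][4];       /* [PC index][global history] */
    static unsigned ghist;                             /* outcomes of the last two branches */

    static int corr_predict(unsigned pc) {
        return predict_taken(corr_table[pc % CORR_ENTRIES][ghist & 3]);
    }

    static void corr_update(unsigned pc, int taken) {
        counter2 *c = &corr_table[pc % CORR_ENTRIES][ghist & 3];
        *c = train(*c, taken);
        ghist = ((ghist << 1) | (unsigned)(taken != 0)) & 3;   /* shift in the new outcome */
    }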

Issue Bottleneck
RAW hazards: dynamic scheduling (out-of-order execution)
WAR & WAW hazards: remove name dependences (register renaming)
Structural hazards: dynamic scheduling (out-of-order execution), memory performance improvement (memory hierarchy, prefetch, non-blocking, load/store buffers), multiple and pipelined functional units
Control hazards: speculative execution
Single issue: issue multiple instructions per cycle (superscalar, VLIW)

Dynamic Instruction Scheduling (Ch. 2.4) Key idea: Allow subsequent independent instructions to proceed
DIVD F0,F2,F4   ; takes a long time
ADDD F10,F0,F8  ; gets stuck here, waiting for F0
SUBD F12,F8,F13 ; let this instr. bypass the ADDD
Enables out-of-order execution => out-of-order completion Two historical schemes used in recent machines: Scoreboarding dates back to the CDC 6600 in 1963 Tomasulo's algorithm appeared in the IBM 360/91 in 1967

Tomasulo's Algorithm: Hardware Organization (Ch. 2.5) Note: Tomasulo's algorithm is of course applicable also to other types of instructions than floating point.

Basic Ideas Decouple issue from operand fetch Prevents stalls due to RAW hazards Register renaming: Translate result register references to instruction (functional unit) references Prevents WAR and WAW hazards Example: renaming to registers S and T removes the WAW/WAR hazards
Original:            Renamed:
DIV.D F0,F2,F4       DIV.D F0,F2,F4
ADD.D F6,F0,F8       ADD.D S,F0,F8
S.D   F6,0(R1)       S.D   S,0(R1)
SUB.D F8,F10,F8      SUB.D T,F10,F8
MUL.D F6,F10,F8      MUL.D F6,F10,T
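A C sketch of the bookkeeping behind such renaming (the structure and function names are illustrative assumptions, not the IBM 360/91 design): a map from architectural registers to the unit that will produce their value. Each new writer claims the destination entry, so later readers wait on the producer rather than on the register name, and WAR/WAW hazards on the name disappear.

    enum { NUM_REGS = 32, NO_RS = -1 };
    static int rename_map[NUM_REGS];        /* NO_RS means the value is in the register file */

    typedef struct { int dst, src1, src2; } Instr;
    typedef struct { int q1, q2; } Renamed; /* producing reservation station, or NO_RS */

    static void rename_init(void) {
        for (int r = 0; r < NUM_REGS; r++) rename_map[r] = NO_RS;
    }

    /* Issue-time renaming: read the map for the sources, then claim the destination. */
    static Renamed rename(Instr in, int my_rs) {
        Renamed out = { rename_map[in.src1], rename_map[in.src2] };
        rename_map[in.dst] = my_rs;
        return out;
    }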

Three Stages of Tomasulo's Algorithm 1. Issue: get instruction from the FP Op Queue; issue if there is no structural hazard, i.e. a reservation station is free 2. Execution: operate on operands (EX); execute when both operands are available; if not ready, watch the Common Data Bus (CDB) for the result 3. Write result: finish execution (WB); write on the CDB to all awaiting functional units; mark the reservation station available Normal bus: data + destination Common Data Bus: data + source
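A simplified C sketch of the state these stages manipulate, under assumed names (RS, issue, write_result); real hardware does all of this in parallel every cycle, and loads/stores are omitted.

    #define NUM_RS   5
    #define NUM_REGS 32

    typedef struct {
        int    busy;
        int    op;
        double vj, vk;   /* operand values, valid once the matching q field is 0 */
        int    qj, qk;   /* 0 = value present, otherwise index of the producing station */
    } RS;

    static RS     rs[NUM_RS + 1];          /* station 0 unused so that tag 0 means "ready" */
    static int    reg_stat[NUM_REGS];      /* 0 = value in register file, else producing station */
    static double regs[NUM_REGS];

    /* 1. Issue: needs a free station; fetch ready operands now, otherwise record the tag. */
    static int issue(int st, int op, int src1, int src2, int dst) {
        if (rs[st].busy) return 0;                         /* structural hazard: stall */
        rs[st].busy = 1; rs[st].op = op;
        rs[st].qj = reg_stat[src1]; if (rs[st].qj == 0) rs[st].vj = regs[src1];
        rs[st].qk = reg_stat[src2]; if (rs[st].qk == 0) rs[st].vk = regs[src2];
        reg_stat[dst] = st;                                /* rename the destination */
        return 1;
    }

    /* 2. Execute: station st may start once rs[st].qj == 0 && rs[st].qk == 0. */

    /* 3. Write result: broadcast value + source tag on the CDB; waiting stations and
       the register file pick it up by matching the tag. */
    static void write_result(int st, double value) {
        for (int i = 1; i <= NUM_RS; i++) {
            if (!rs[i].busy) continue;
            if (rs[i].qj == st) { rs[i].vj = value; rs[i].qj = 0; }
            if (rs[i].qk == st) { rs[i].vk = value; rs[i].qk = 0; }
        }
        for (int r = 0; r < NUM_REGS; r++)
            if (reg_stat[r] == st) { regs[r] = value; reg_stat[r] = 0; }
        rs[st].busy = 0;                                   /* the station becomes free */
    }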

Tomasulo example, cycle 0
Instruction status (Issue / Exec. compl. / Write result):
  LD    F6, 34+R2    - / - / -
  LD    F2, 45+R3    - / - / -
  MULTD F0, F2, F4   - / - / -
  SUBD  F8, F6, F2   - / - / -
  DIVD  F10, F0, F6  - / - / -
  ADDD  F6, F8, F2   - / - / -
Load buffers (Time, Busy, Address): Load1, Load2, Load3 all not busy
Reservation stations (Time, Busy, Op, Vj/Vk = source operand values, Qj/Qk = stations producing them): Add1, Add2, Add3, Mult1, Mult2 all not busy
Register result status (FU for F0 F2 F4 F6 F8 F10 ... F30): all empty
Clock: 0

Tomasulo example, cycle 1
Instruction status (Issue / Exec. compl. / Write result):
  LD    F6, 34+R2    1 / - / -
  LD    F2, 45+R3    - / - / -
  MULTD F0, F2, F4   - / - / -
  SUBD  F8, F6, F2   - / - / -
  DIVD  F10, F0, F6  - / - / -
  ADDD  F6, F8, F2   - / - / -
Load buffers: Load1 busy, Address R2+34; Load2 and Load3 not busy
Reservation stations: Add1, Add2, Add3, Mult1, Mult2 all not busy
Register result status (FU): F6 = Load1
Clock: 1

Tomasulo example, cycle 2
Instruction status (Issue / Exec. compl. / Write result):
  LD    F6, 34+R2    1 / - / -
  LD    F2, 45+R3    2 / - / -
  MULTD F0, F2, F4   - / - / -
  SUBD  F8, F6, F2   - / - / -
  DIVD  F10, F0, F6  - / - / -
  ADDD  F6, F8, F2   - / - / -
Load buffers: Load1 busy, Time 1, Address R2+34; Load2 busy, Address R3+45; Load3 not busy
Reservation stations: Add1, Add2, Add3, Mult1, Mult2 all not busy
Register result status (FU): F2 = Load2, F6 = Load1
Clock: 2

Tomasulo example, cycle 3
Instruction status (Issue / Exec. compl. / Write result):
  LD    F6, 34+R2    1 / 3 / -
  LD    F2, 45+R3    2 / - / -
  MULTD F0, F2, F4   3 / - / -
  SUBD  F8, F6, F2   - / - / -
  DIVD  F10, F0, F6  - / - / -
  ADDD  F6, F8, F2   - / - / -
Load buffers: Load1 busy, Time 0, Address R2+34; Load2 busy, Time 1, Address R3+45; Load3 not busy
Reservation stations:
  Mult1: busy, Op Mult, Vk F4, Qj Load2
  (Add1, Add2, Add3, Mult2 not busy)
Register result status (FU): F0 = Mult1, F2 = Load2, F6 = Load1
Clock: 3

Tomasulo example, cycle 4
Instruction status (Issue / Exec. compl. / Write result):
  LD    F6, 34+R2    1 / 3 / 4
  LD    F2, 45+R3    2 / 4 / -
  MULTD F0, F2, F4   3 / - / -
  SUBD  F8, F6, F2   4 / - / -
  DIVD  F10, F0, F6  - / - / -
  ADDD  F6, F8, F2   - / - / -
Load buffers: Load2 busy, Time 0, Address R3+45; Load1 and Load3 not busy
Reservation stations:
  Add1:  busy, Op Sub,  Vj M(R2+34), Qk Load2
  Mult1: busy, Op Mult, Vk F4, Qj Load2
  (Add2, Add3, Mult2 not busy)
Register result status (FU): F0 = Mult1, F2 = Load2, F8 = Add1 (F6 cleared this cycle)
Clock: 4

Tomasulo example, cycle 5
Instruction status (Issue / Exec. compl. / Write result):
  LD    F6, 34+R2    1 / 3 / 4
  LD    F2, 45+R3    2 / 4 / 5
  MULTD F0, F2, F4   3 / - / -
  SUBD  F8, F6, F2   4 / - / -
  DIVD  F10, F0, F6  5 / - / -
  ADDD  F6, F8, F2   - / - / -
Load buffers: Load1, Load2, Load3 all not busy
Reservation stations:
  Add1:  busy, Time 2,  Op Sub,  Vj M(R2+34), Vk M(R3+45) (Qk just cleared)
  Mult1: busy, Time 10, Op Mult, Vj M(R3+45), Vk F4 (Qj just cleared)
  Mult2: busy, Op Div, Vk F6, Qj Mult1
  (Add2, Add3 not busy)
Register result status (FU): F0 = Mult1, F8 = Add1, F10 = Mult2 (F2 cleared this cycle)
Clock: 5

Tomasulo example, cycle 6
Instruction status (Issue / Exec. compl. / Write result):
  LD    F6, 34+R2    1 / 3 / 4
  LD    F2, 45+R3    2 / 4 / 5
  MULTD F0, F2, F4   3 / - / -
  SUBD  F8, F6, F2   4 / - / -
  DIVD  F10, F0, F6  5 / - / -
  ADDD  F6, F8, F2   6 / - / -
Load buffers: Load1, Load2, Load3 all not busy
Reservation stations:
  Add1:  busy, Time 1, Op Sub,  Vj M(R2+34), Vk M(R3+45)
  Add2:  busy, Op Add, Vk F2, Qj Add1
  Mult1: busy, Time 9, Op Mult, Vj M(R3+45), Vk F4
  Mult2: busy, Op Div, Vk F6, Qj Mult1
  (Add3 not busy)
Register result status (FU): F0 = Mult1, F6 = Add2, F8 = Add1, F10 = Mult2
Clock: 6

Tomasulo example, cycle 7
Instruction status (Issue / Exec. compl. / Write result):
  LD    F6, 34+R2    1 / 3 / 4
  LD    F2, 45+R3    2 / 4 / 5
  MULTD F0, F2, F4   3 / - / -
  SUBD  F8, F6, F2   4 / 7 / -
  DIVD  F10, F0, F6  5 / - / -
  ADDD  F6, F8, F2   6 / - / -
Load buffers: Load1, Load2, Load3 all not busy
Reservation stations:
  Add1:  busy, Time 0, Op Sub,  Vj M(R2+34), Vk M(R3+45)
  Add2:  busy, Op Add, Vk F2, Qj Add1
  Mult1: busy, Time 8, Op Mult, Vj M(R3+45), Vk F4
  Mult2: busy, Op Div, Vk F6, Qj Mult1
  (Add3 not busy)
Register result status (FU): F0 = Mult1, F6 = Add2, F8 = Add1, F10 = Mult2
Clock: 7

Tomasulo example, cycle 8
Instruction status (Issue / Exec. compl. / Write result):
  LD    F6, 34+R2    1 / 3 / 4
  LD    F2, 45+R3    2 / 4 / 5
  MULTD F0, F2, F4   3 / - / -
  SUBD  F8, F6, F2   4 / 7 / 8
  DIVD  F10, F0, F6  5 / - / -
  ADDD  F6, F8, F2   6 / - / -
Load buffers: Load1, Load2, Load3 all not busy
Reservation stations:
  Add2:  busy, Time 2, Op Add, Vj F6-F2 (the SUBD result), Vk F2 (Qj just cleared)
  Mult1: busy, Time 7, Op Mult, Vj M(R3+45), Vk F4
  Mult2: busy, Op Div, Vk F6, Qj Mult1
  (Add1, Add3 not busy)
Register result status (FU): F0 = Mult1, F6 = Add2, F10 = Mult2 (F8 cleared this cycle)
Clock: 8

Tomasulo example, cycle 10
Instruction status (Issue / Exec. compl. / Write result):
  LD    F6, 34+R2    1 / 3 / 4
  LD    F2, 45+R3    2 / 4 / 5
  MULTD F0, F2, F4   3 / - / -
  SUBD  F8, F6, F2   4 / 7 / 8
  DIVD  F10, F0, F6  5 / - / -
  ADDD  F6, F8, F2   6 / 10 / -
Load buffers: Load1, Load2, Load3 all not busy
Reservation stations:
  Add2:  busy, Time 0, Op Add, Vj F6-F2, Vk F2
  Mult1: busy, Time 5, Op Mult, Vj M(R3+45), Vk F4
  Mult2: busy, Op Div, Vk F6, Qj Mult1
  (Add1, Add3 not busy)
Register result status (FU): F0 = Mult1, F6 = Add2, F10 = Mult2
Clock: 10

Tomasulo example, cycle 11
Instruction status (Issue / Exec. compl. / Write result):
  LD    F6, 34+R2    1 / 3 / 4
  LD    F2, 45+R3    2 / 4 / 5
  MULTD F0, F2, F4   3 / - / -
  SUBD  F8, F6, F2   4 / 7 / 8
  DIVD  F10, F0, F6  5 / - / -
  ADDD  F6, F8, F2   6 / 10 / 11
Load buffers: Load1, Load2, Load3 all not busy
Reservation stations:
  Mult1: busy, Time 4, Op Mult, Vj M(R3+45), Vk F4
  Mult2: busy, Op Div, Vk F6, Qj Mult1
  (Add1, Add2, Add3 not busy)
Register result status (FU): F0 = Mult1, F10 = Mult2 (F6 cleared this cycle)
Clock: 11

Tomasulo example, cycle 15
Instruction status (Issue / Exec. compl. / Write result):
  LD    F6, 34+R2    1 / 3 / 4
  LD    F2, 45+R3    2 / 4 / 5
  MULTD F0, F2, F4   3 / 15 / -
  SUBD  F8, F6, F2   4 / 7 / 8
  DIVD  F10, F0, F6  5 / - / -
  ADDD  F6, F8, F2   6 / 10 / 11
Load buffers: Load1, Load2, Load3 all not busy
Reservation stations:
  Mult1: busy, Time 0, Op Mult, Vj M(R3+45), Vk F4
  Mult2: busy, Op Div, Vk F6, Qj Mult1
  (Add1, Add2, Add3 not busy)
Register result status (FU): F0 = Mult1, F10 = Mult2
Clock: 15

Tomasulo example, cycle 16
Instruction status (Issue / Exec. compl. / Write result):
  LD    F6, 34+R2    1 / 3 / 4
  LD    F2, 45+R3    2 / 4 / 5
  MULTD F0, F2, F4   3 / 15 / 16
  SUBD  F8, F6, F2   4 / 7 / 8
  DIVD  F10, F0, F6  5 / - / -
  ADDD  F6, F8, F2   6 / 10 / 11
Load buffers: Load1, Load2, Load3 all not busy
Reservation stations:
  Mult2: busy, Time 40, Op Div, Vj F0, Vk F6 (Qj just cleared)
  (Add1, Add2, Add3, Mult1 not busy)
Register result status (FU): F10 = Mult2 (F0 cleared this cycle)
Clock: 16

Tomasulo example, cycle 56
Instruction status (Issue / Exec. compl. / Write result):
  LD    F6, 34+R2    1 / 3 / 4
  LD    F2, 45+R3    2 / 4 / 5
  MULTD F0, F2, F4   3 / 15 / 16
  SUBD  F8, F6, F2   4 / 7 / 8
  DIVD  F10, F0, F6  5 / 56 / -
  ADDD  F6, F8, F2   6 / 10 / 11
Load buffers: Load1, Load2, Load3 all not busy
Reservation stations:
  Mult2: busy, Time 0, Op Div, Vj F0, Vk F6
  (Add1, Add2, Add3, Mult1 not busy)
Register result status (FU): F10 = Mult2
Clock: 56

Tomasulo example, cycle 57
Instruction status (Issue / Exec. compl. / Write result):
  LD    F6, 34+R2    1 / 3 / 4
  LD    F2, 45+R3    2 / 4 / 5
  MULTD F0, F2, F4   3 / 15 / 16
  SUBD  F8, F6, F2   4 / 7 / 8
  DIVD  F10, F0, F6  5 / 56 / 57
  ADDD  F6, F8, F2   6 / 10 / 11
Load buffers: Load1, Load2, Load3 all not busy
Reservation stations: Add1, Add2, Add3, Mult1, Mult2 all not busy
Register result status (FU): all empty (F10 cleared this cycle)
Clock: 57

Summary Data, name, and control dependences Hazards and static instruction order prevent ILP exploitation Static methods Let the compiler reorder instructions to remove hazards Dynamic methods Modify the processor organization to remove hazards and dynamically reorder instructions Fetch bottleneck: Dynamic branch prediction Issue bottleneck: Tomasulo's algorithm Next Speculative execution Multiple issue Register renaming Increased instruction fetch bandwidth