
18-447 Computer Architecture Lecture 13: State Maintenance and Recovery Prof. Onur Mutlu Carnegie Mellon University Spring 2013, 2/15/2013

Reminder: Homework 3
Due Feb 25
Topics: REP MOVS in Microprogrammed LC-3b, Pipelining, Delay Slots, Interlocking, Branch Prediction

Reminder: Lab Assignment 2 Due Today
Due today, Feb 15
Single-cycle MIPS implementation in Verilog
All labs are individual assignments: no collaboration; please respect the honor code
Do not forget the extra credit portion!

Lab Assignment 3 Out
Due Friday, March 1
Pipelined MIPS implementation in Verilog
All labs are individual assignments: no collaboration; please respect the honor code
Extra credit: Optimize for execution time! The top assignments with the lowest execution times will get extra credit. And it will be fun to optimize.

A Note on Testing Your Code
Testing is critical in developing any system.
You are responsible for creating your own test programs and ensuring your designs work for all possible cases.
That is how real life works, too: no one gives you all possible test cases, workloads, users, etc. beforehand.

Feedback Sheet
Due today (Feb 15), in class.
We would like your honest feedback on the course. Please turn it in even if you are late.

Readings for Today
Smith and Pleszkun, "Implementing Precise Interrupts in Pipelined Processors," IEEE Trans. on Computers, 1988 (earlier version in ISCA 1985)
Smith and Sohi, "The Microarchitecture of Superscalar Processors," Proceedings of the IEEE, 1995
Topics covered: more advanced pipelining, interrupt and exception handling, out-of-order and superscalar execution concepts

Last Lecture
Wrap-up of control dependence handling: predicated execution, wish branches, multipath execution, call/return prediction, indirect branch prediction, latency issues in branch prediction
Interrupts vs. exceptions
Reorder buffer as a state maintenance/recovery mechanism

Today
State maintenance and recovery mechanisms
(Maybe) begin out-of-order execution

Pipelining and Precise Exceptions: Preserving Sequential Semantics

Review: Issues in Pipelining: Multi-Cycle Execute
Instructions can take different numbers of cycles in the EXECUTE stage, e.g., integer ADD versus FP MULtiply.
Example sequence: FMUL R4 <- R1, R2; ADD R3 <- R1, R2; more ADDs; FMUL R2 <- R5, R6; ADD R4 <- R5, R6.
[Pipeline timing diagram: each FMUL occupies EXECUTE for 8 cycles while the single-cycle ADDs around it complete and write back earlier, so instructions write back out of program order.]
What is wrong with this picture? What if FMUL incurs an exception? The sequential semantics of the ISA are NOT preserved!

Review: Precise Exceptions/Interrupts
The architectural state should be consistent when the exception/interrupt is ready to be handled:
1. All previous instructions should be completely retired.
2. No later instruction should be retired.
Retire = commit = finish execution and update the architectural state.

Review: Solutions
Reorder buffer
History buffer
Future register file
Checkpointing
Recommended reading:
Smith and Pleszkun, "Implementing Precise Interrupts in Pipelined Processors," IEEE Trans. on Computers, 1988 (and ISCA 1985)
Hwu and Patt, "Checkpoint Repair for Out-of-order Execution Machines," ISCA 1987

Review: Solution I: Reorder Buffer (ROB)
Idea: Complete instructions out of order, but reorder them before making results visible to the architectural state.
When an instruction is decoded, it reserves an entry in the ROB.
When an instruction completes, it writes its result into its ROB entry.
When an instruction is the oldest in the ROB and has completed without exceptions, its result is moved to the register file or memory.
[Block diagram: instruction cache feeding functional units, which write results into the reorder buffer; the reorder buffer updates the register file at retirement.]

Review: What's in a ROB Entry?
Fields: V (valid), DestRegID, DestRegVal, StoreAddr, StoreData, PC, valid bits for reg/data plus control bits, Exc? (exception)
Need valid bits to keep track of the readiness of the result(s).
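As a concrete illustration, the fields above map naturally onto a struct. This is a simulator-style sketch with assumed names and field widths, not a description of any particular processor's hardware layout.

```c
#include <stdint.h>
#include <stdbool.h>

/* One reorder buffer entry, mirroring the fields listed above.
   Field widths are illustrative assumptions. */
typedef struct {
    bool     valid;          /* V: entry is allocated                          */
    uint8_t  dest_reg_id;    /* architectural destination register             */
    uint64_t dest_reg_val;   /* result value, once the instruction completes   */
    bool     dest_val_ready; /* valid bit for the result                       */
    uint64_t store_addr;     /* for stores: memory address                     */
    uint64_t store_data;     /* for stores: data to write                      */
    bool     store_ready;    /* valid bit for store address/data               */
    uint64_t pc;             /* PC of the instruction                          */
    bool     exception;      /* did the instruction raise an exception?        */
} rob_entry_t;

/* The ROB itself is a circular FIFO: allocate at the tail at decode,
   retire from the head in program order. */
#define ROB_SIZE 64
typedef struct {
    rob_entry_t entries[ROB_SIZE];
    int head;   /* oldest in-flight instruction */
    int tail;   /* next free entry              */
} rob_t;
```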

Review: Reorder Buffer: Independent Operations
Results are first written to the ROB, then to the register file at commit time.
[Pipeline timing diagram: long-latency operations occupy EXECUTE for many cycles; all instructions pass through a completion (R) stage that writes the ROB and then retire (W) in program order.]
What if a later operation needs a value that is still in the reorder buffer? Read the reorder buffer in parallel with the register file. How?

Reorder Buffer: How to Access?
A register value can be in the register file, the reorder buffer, or the bypass/forwarding paths.
[Block diagram: the reorder buffer is a content addressable memory searched with the register ID, accessed alongside the register file; functional units also supply values over bypass paths.]

Simplifying Reorder Buffer Access
Idea: Use indirection.
Access the register file first. If the register is not valid, the register file stores the ID of the reorder buffer entry that contains (or will contain) the value of the register, i.e., a mapping of the register to a ROB entry.
Access the reorder buffer next.
What is in a reorder buffer entry? V, DestRegID, DestRegVal, StoreAddr, StoreData, PC/IP, control/valid bits, Exc?
Can it be simplified further?
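A minimal sketch of this indirection scheme, reusing the assumed rob_t/rob_entry_t types from the earlier sketch: each register file entry carries a valid bit plus a ROB tag, and a source read consults the register file first and the ROB second.

```c
/* Per-architectural-register state for the indirection scheme.
   Assumes rob_t / rob_entry_t from the earlier ROB sketch. */
typedef struct {
    uint64_t value;     /* committed value                            */
    bool     valid;     /* true: value is up to date in the reg file  */
    int      rob_tag;   /* if !valid: ROB entry that will produce it  */
} regfile_entry_t;

/* Read a source operand. Returns true if the value is available now;
   otherwise the consumer must wait for ROB entry *tag_out. */
bool read_source(const regfile_entry_t *rf, const rob_t *rob,
                 int reg_id, uint64_t *val_out, int *tag_out)
{
    if (rf[reg_id].valid) {                 /* value already committed      */
        *val_out = rf[reg_id].value;
        return true;
    }
    int tag = rf[reg_id].rob_tag;           /* indirection: follow the tag  */
    if (rob->entries[tag].dest_val_ready) { /* produced but not yet retired */
        *val_out = rob->entries[tag].dest_reg_val;
        return true;
    }
    *tag_out = tag;                         /* still in flight: wait on tag */
    return false;
}
```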

Aside: Register Renaming with a Reorder Buffer
Output and anti dependencies are not true dependencies. WHY? The same register refers to values that have nothing to do with each other; they exist due to the lack of register IDs (i.e., names) in the ISA.
The register ID is renamed to the reorder buffer entry that will hold the register's value:
Register ID -> ROB entry ID
Architectural register ID -> Physical register ID
After renaming, the ROB entry ID is used to refer to the register. This eliminates anti- and output-dependencies and gives the illusion that there are a large number of registers.
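A sketch of the rename step at decode time, again reusing the assumed regfile_entry_t/rob_t types from the earlier sketches: the destination register's register file entry is redirected to the newly allocated ROB entry, which is what eliminates the name (anti/output) dependences.

```c
/* Rename the destination of a newly decoded instruction.
   Allocates a ROB entry at the tail and points the register file at it. */
int rename_destination(regfile_entry_t *rf, rob_t *rob,
                       int dest_reg_id, uint64_t pc)
{
    int tag = rob->tail;                       /* allocate at the tail               */
    rob->tail = (rob->tail + 1) % ROB_SIZE;    /* (assume caller checked for space)  */

    rob_entry_t *e = &rob->entries[tag];
    e->valid          = true;
    e->dest_reg_id    = (uint8_t)dest_reg_id;
    e->dest_val_ready = false;
    e->exception      = false;
    e->pc             = pc;

    /* Later readers of dest_reg_id now follow this tag instead of the stale
       register value: anti- and output-dependences disappear. */
    rf[dest_reg_id].valid   = false;
    rf[dest_reg_id].rob_tag = tag;
    return tag;
}
```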

In-Order Pipeline with Reorder Buffer
Decode (D): Access regfile/ROB, allocate an entry in the ROB, check if the instruction can execute; if so, dispatch the instruction.
Execute (E): Instructions can complete out of order.
Completion (R): Write the result to the reorder buffer.
Retirement/Commit (W): Check for exceptions; if none, write the result to the architectural register file or memory; else, flush the pipeline and start from the exception handler.
In-order dispatch/execution, out-of-order completion, in-order retirement.
[Pipeline diagram: common F and D stages feeding execution units of different latencies (integer add, integer mul, FP mul, load/store), followed by shared R and W stages.]
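The retirement (W) step above can be sketched as follows, using the assumed types from the earlier sketches. On an exception at the head of the ROB, the caller (not shown here) is expected to flush the pipeline and redirect fetch to the handler; store retirement to the cache is omitted.

```c
typedef enum { RETIRE_NONE, RETIRE_OK, RETIRE_EXCEPTION } retire_result_t;

/* Try to retire the oldest instruction in the ROB (the W stage above).
   On RETIRE_EXCEPTION the caller flushes the pipeline and redirects fetch;
   stores would instead update memory using store_addr/store_data (omitted). */
retire_result_t retire_one(regfile_entry_t *rf, rob_t *rob)
{
    if (rob->head == rob->tail) return RETIRE_NONE;   /* ROB empty            */
    rob_entry_t *e = &rob->entries[rob->head];
    if (!e->dest_val_ready) return RETIRE_NONE;       /* oldest not done yet  */

    if (e->exception)
        return RETIRE_EXCEPTION;   /* precise point: all older insts retired,
                                      nothing younger has touched arch state  */

    rf[e->dest_reg_id].value = e->dest_reg_val;
    /* Mark the register valid only if no younger instruction has renamed it
       to a different ROB entry in the meantime. */
    if (!rf[e->dest_reg_id].valid && rf[e->dest_reg_id].rob_tag == rob->head)
        rf[e->dest_reg_id].valid = true;

    e->valid  = false;
    rob->head = (rob->head + 1) % ROB_SIZE;
    return RETIRE_OK;
}
```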

Reorder Buffer Tradeoffs
Advantages: Conceptually simple for supporting precise exceptions. Can eliminate false dependencies.
Disadvantages: The reorder buffer needs to be accessed to get results that are yet to be written to the register file; the CAM or indirection adds latency and complexity.
Other solutions aim to eliminate the disadvantages: history buffer, future file, checkpointing.

Solution II: History Buffer (HB)
Idea: Update the register file when an instruction completes, but UNDO the updates when an exception occurs.
When an instruction is decoded, it reserves an HB entry.
When the instruction completes, it stores the old value of its destination in the HB.
When the instruction is the oldest and there are no exceptions/interrupts, the HB entry is discarded.
When the instruction is the oldest and an exception needs to be handled, the old values in the HB are written back into the architectural state from tail to head.
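A standalone sketch of the undo mechanism (names and sizes are assumptions, and for simplicity the HB entry is allocated at completion rather than reserved at decode): completing instructions log the old destination value, and an exception rolls the log back from the tail toward the head.

```c
#include <stdint.h>
#include <stdbool.h>

#define HB_SIZE   64
#define NUM_REGS  32

typedef struct {
    uint8_t  dest_reg_id;   /* register that was overwritten               */
    uint64_t old_value;     /* its value before the instruction wrote it   */
} hb_entry_t;

typedef struct {
    hb_entry_t entries[HB_SIZE];
    int head, tail;         /* head = oldest, tail = next free */
} history_buffer_t;

/* On completion: write the register file immediately, but log the old value. */
void hb_complete(history_buffer_t *hb, uint64_t regs[NUM_REGS],
                 int dest, uint64_t new_value)
{
    hb->entries[hb->tail].dest_reg_id = (uint8_t)dest;
    hb->entries[hb->tail].old_value   = regs[dest];
    hb->tail = (hb->tail + 1) % HB_SIZE;
    regs[dest] = new_value;            /* register file is always up to date */
}

/* On an exception at the oldest instruction: unwind from youngest to oldest,
   restoring old values one by one (this is the added recovery latency). */
void hb_unwind(history_buffer_t *hb, uint64_t regs[NUM_REGS])
{
    while (hb->tail != hb->head) {
        hb->tail = (hb->tail - 1 + HB_SIZE) % HB_SIZE;
        hb_entry_t *e = &hb->entries[hb->tail];
        regs[e->dest_reg_id] = e->old_value;
    }
}
```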

History Buffer
[Block diagram: instruction cache feeding functional units; the functional units update the register file directly and log old values into the history buffer, which is used only on exceptions.]
Advantage: The register file contains up-to-date values, so history buffer access is not on the critical path.
Disadvantage: Need to read the old value of the destination register, and need to unwind the history buffer upon an exception -> increased exception/interrupt handling latency.

Solution III: Future File (FF) + ROB
Idea: Keep two register files (speculative and architectural).
Arch reg file: Updated in program order for precise exceptions; use a reorder buffer to ensure in-order updates.
Future reg file: Updated as soon as an instruction completes (if the instruction is the youngest one to write to that register).
The future file is used for fast access to the latest register values (speculative state): the frontend register file.
The architectural file is used for state recovery on exceptions (architectural state): the backend register file.

Future File
[Block diagram: instruction cache feeding functional units; results update the future file immediately and flow through the ROB (holding data or a tag plus a valid bit) into the architectural file, which is used only on exceptions.]
Advantage: No need to read values from the ROB (no CAM or indirection).
Disadvantage: Multiple register files, and the architectural register file must be copied to the future file on an exception.
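A standalone sketch of the two-register-file arrangement (names are illustrative, and the "youngest writer" check at completion is omitted): completions update the future file right away, retirement updates the architectural file in program order, and exception recovery copies the architectural file back over the future file.

```c
#include <stdint.h>
#include <string.h>

#define NUM_REGS 32

typedef struct {
    uint64_t future[NUM_REGS];  /* speculative (frontend) state: latest values */
    uint64_t arch[NUM_REGS];    /* architectural (backend) state: precise      */
} futurefile_state_t;

/* On completion: the writer updates the future file immediately. */
void ff_complete(futurefile_state_t *s, int dest, uint64_t value)
{
    s->future[dest] = value;
}

/* On in-order retirement (via the ROB): update the architectural file. */
void ff_retire(futurefile_state_t *s, int dest, uint64_t value)
{
    s->arch[dest] = value;
}

/* On an exception: discard speculative state by copying arch -> future. */
void ff_recover(futurefile_state_t *s)
{
    memcpy(s->future, s->arch, sizeof(s->future));
}
```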

In-Order Pipeline with Future File and Reorder Buffer
Decode (D): Access the future file, allocate an entry in the ROB, check if the instruction can execute; if so, dispatch the instruction.
Execute (E): Instructions can complete out of order.
Completion (R): Write the result to the reorder buffer and the future file.
Retirement/Commit (W): Check for exceptions; if none, write the result to the architectural register file or memory; else, flush the pipeline, copy the architectural file to the future file, and start from the exception handler.
In-order dispatch/execution, out-of-order completion, in-order retirement.
[Pipeline diagram: same structure as before, with execution units of different latencies followed by shared R and W stages.]

Checking for and Handling Exceptions in Pipelining
When the oldest instruction that is ready to be retired is detected to have caused an exception, the control logic:
Recovers the architectural state (register file, IP, and memory)
Flushes all younger instructions in the pipeline
Saves the IP and registers (as specified by the ISA)
Redirects the fetch engine to the exception handling routine (vectored exceptions)

Pipelining Issues: Branch Mispredictions
A branch misprediction resembles an exception, except it is not visible to software.
What about branch misprediction recovery? It is similar to exception handling, except recovery can be initiated before the branch is the oldest instruction. All three state recovery methods can be used.
Difference between exceptions and branch mispredictions? Branch mispredictions are much more common -> we need fast state recovery to minimize the performance impact of mispredictions.

How Fast Is State Recovery?
The latency of state recovery affects:
Exception service latency
Interrupt service latency
The latency to supply the correct data to instructions fetched after a branch misprediction
Which of the above need to be fast?
How do the three state maintenance methods (reorder buffer, history buffer, future file) fare in terms of recovery latency?

Branch State Recovery Actions and Latency
Reorder buffer: Wait until the branch is the oldest instruction in the machine; flush the entire pipeline.
History buffer: Undo all instructions after the branch by rewinding from the tail of the history buffer until the branch and restoring old values one by one into the register file; flush instructions in the pipeline younger than the branch.
Future file: Wait until the branch is the oldest instruction in the machine; copy the arch. reg. file to the future file; flush the entire pipeline.

Can We Do Better?
Goal: Restore the frontend state (future file) such that the correct next instruction after the branch can execute right away once the branch misprediction is resolved.
Idea: Checkpoint the frontend register state at the time a branch is fetched, and keep the checkpointed state updated with the results of instructions older than the branch.
Hwu and Patt, "Checkpoint Repair for Out-of-order Execution Machines," ISCA 1987.

Checkpointing
When a branch is decoded: Make a copy of the future file and associate it with the branch.
When an instruction produces a register value: All future file checkpoints that are younger than the instruction are updated with the value.
When a branch misprediction is detected: Restore the checkpointed future file for the mispredicted branch when the misprediction is resolved; flush instructions in the pipeline younger than the branch; deallocate checkpoints younger than the branch.
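A standalone sketch of this checkpoint bookkeeping. The structure names, the fixed number of checkpoints, and the assumption that the caller knows which checkpoints are younger than a completing instruction are all simplifications for illustration.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define NUM_REGS        32
#define MAX_CHECKPOINTS 8

typedef struct {
    bool     in_use;
    uint64_t regs[NUM_REGS];   /* copy of the future file when the branch was decoded */
} checkpoint_t;

typedef struct {
    uint64_t     future[NUM_REGS];
    checkpoint_t cps[MAX_CHECKPOINTS];
    int head, tail;            /* checkpoints kept in branch (program) order */
} ckpt_state_t;

/* When a branch is decoded: snapshot the current future file. Returns its ID. */
int ckpt_allocate(ckpt_state_t *s)
{
    int id = s->tail;
    s->tail = (s->tail + 1) % MAX_CHECKPOINTS;   /* (assume space was checked) */
    s->cps[id].in_use = true;
    memcpy(s->cps[id].regs, s->future, sizeof(s->future));
    return id;
}

/* When an instruction produces a register value: update the future file and
   every checkpoint younger than the instruction (the caller passes that set;
   real hardware tracks it with the branch ordering). */
void ckpt_writeback(ckpt_state_t *s, int dest, uint64_t value,
                    const bool younger[MAX_CHECKPOINTS])
{
    s->future[dest] = value;
    for (int i = 0; i < MAX_CHECKPOINTS; i++)
        if (s->cps[i].in_use && younger[i])
            s->cps[i].regs[dest] = value;
}

/* On a misprediction of the branch owning checkpoint `id`: restore the future
   file from it and deallocate all younger checkpoints. */
void ckpt_recover(ckpt_state_t *s, int id)
{
    memcpy(s->future, s->cps[id].regs, sizeof(s->future));
    for (int i = (id + 1) % MAX_CHECKPOINTS; i != s->tail;
         i = (i + 1) % MAX_CHECKPOINTS)
        s->cps[i].in_use = false;
    s->tail = (id + 1) % MAX_CHECKPOINTS;
}
```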

Checkpointing
Advantages? Disadvantages?

Registers versus Memory
So far, we have considered mainly registers as part of state. What about memory?
What are the fundamental differences between registers and memory?
Register dependences are known statically; memory dependences are determined dynamically.
Register state is small; memory state is large.
Register state is not visible to other threads/processors; memory state is shared between threads/processors (in a shared memory multiprocessor).

Maintaining Speculative Memory State: Stores
Handling out-of-order completion of memory operations: UNDOing a memory write is more difficult than UNDOing a register write. Why?
One idea: Keep the store address/data in the reorder buffer. How does a load instruction find its data?
Store/write buffer: Similar to a reorder buffer, but used only for store instructions; a program-order list of un-committed store operations.
When a store is decoded: Allocate a store buffer entry.
When the store address and data become available: Record them in the store buffer entry.
When the store is the oldest instruction in the pipeline: Update the memory address (i.e., the cache) with the store data.
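A standalone sketch of such a store buffer, including one possible answer to "how does a load find its data": search the un-committed stores from youngest to oldest for a matching address. The word-granularity matching and the conservative "wait if an older store's address is unknown" policy are illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

#define SB_SIZE 16

typedef struct {
    bool     addr_ready;   /* address has been computed   */
    bool     data_ready;   /* data has been produced      */
    uint64_t addr;
    uint64_t data;
} sb_entry_t;

typedef struct {
    sb_entry_t entries[SB_SIZE];
    int head, tail;        /* head = oldest un-committed store, tail = next free */
} store_buffer_t;

/* Result of searching the store buffer on behalf of a load. */
typedef enum { SB_MISS = 0, SB_FORWARD = 1, SB_WAIT = -1 } sb_result_t;

sb_result_t sb_load_forward(const store_buffer_t *sb, uint64_t load_addr,
                            uint64_t *data_out)
{
    int count = (sb->tail - sb->head + SB_SIZE) % SB_SIZE;
    for (int n = 0; n < count; n++) {                  /* youngest to oldest     */
        int i = (sb->tail - 1 - n + SB_SIZE) % SB_SIZE;
        const sb_entry_t *e = &sb->entries[i];
        if (!e->addr_ready) return SB_WAIT;            /* unknown older address:
                                                          conservatively wait    */
        if (e->addr == load_addr) {
            if (!e->data_ready) return SB_WAIT;        /* match, data not ready  */
            *data_out = e->data;                       /* youngest match wins    */
            return SB_FORWARD;
        }
    }
    return SB_MISS;   /* no older un-committed store matches: read the cache */
}
```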

We did not cover the following slides in lecture. These are for your preparation for the next lecture.

Out-of-Order Execution (Dynamic Instruction Scheduling)

An In-order Pipeline
[Pipeline diagram: F and D stages feeding execution units of different latencies (single-cycle integer add, multi-cycle integer mul, longer FP mul, and a load/store whose latency grows on a cache miss), followed by writeback (W).]
Problem: A true data dependency stalls dispatch of younger instructions into the functional (execution) units.
Dispatch: the act of sending an instruction to a functional unit.

Can We Do Better?
What do the following two pieces of code have in common (with respect to execution in the previous design)?
Code A: IMUL R3 <- R1, R2; ADD R3 <- R3, R1; ADD R1 <- R6, R7; IMUL R5 <- R6, R8; ADD R7 <- R3, R5
Code B: LD R3 <- R1(0); ADD R3 <- R3, R1; ADD R1 <- R6, R7; IMUL R5 <- R6, R8; ADD R7 <- R3, R5
Answer: The first ADD stalls the whole pipeline! The ADD cannot dispatch because its source registers are unavailable, so later independent instructions cannot get executed.
How are the above code portions different? Answer: The load latency is variable (unknown until runtime). What does this affect? Think compiler vs. microarchitecture.

Preventing Dispatch Stalls
There are multiple ways of doing it. You have already seen THREE:
1. Fine-grained multithreading
2. Value prediction
3. Compile-time instruction scheduling/reordering
What are the disadvantages of the above three? Any other way to prevent dispatch stalls?
Actually, you have briefly seen the basic idea before. Dataflow: fetch and fire an instruction when its inputs are ready.
Problem: in-order dispatch (scheduling, or execution). Solution: out-of-order dispatch (scheduling, or execution).

Out-of-order Execution (Dynamic Scheduling)
Idea: Move dependent instructions out of the way of independent ones. Rest areas for dependent instructions: reservation stations.
Monitor the source values of each instruction in the resting area. When all source values of an instruction are available, fire (i.e., dispatch) the instruction. Instructions are dispatched in dataflow (not control-flow) order.
Benefit: Latency tolerance: allows independent instructions to execute and complete in the presence of a long-latency operation.

In-order vs. Out-of-order Dispatch
Example code: IMUL R3 <- R1, R2; ADD R3 <- R3, R1; ADD R1 <- R6, R7; IMUL R5 <- R6, R8; ADD R7 <- R3, R5
[Timing diagrams: with in-order dispatch, the dependent ADD stalls in decode behind the IMUL and the later independent instructions stall behind it; with Tomasulo's algorithm plus precise exceptions, dependent instructions wait in reservation stations while independent ones dispatch and execute.]
Result: 16 vs. 12 cycles.

Enabling OoO Execution
1. Need to link the consumer of a value to the producer. Register renaming: associate a tag with each data value.
2. Need to buffer instructions until they are ready to execute. Insert the instruction into a reservation station after renaming.
3. Instructions need to keep track of the readiness of their source values. Broadcast the tag when the value is produced; instructions compare their source tags to the broadcast tag, and on a match the source value becomes ready.
4. When all source values of an instruction are ready, dispatch the instruction to its functional unit (FU). What if more instructions become ready than there are available FUs?
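A standalone sketch of steps 2 through 4: reservation station entries hold either a value or a producer tag per source, a broadcast of (tag, value) wakes up matching sources, and a simple select picks a ready entry. The two-source format and fixed sizes are assumptions for illustration.

```c
#include <stdint.h>
#include <stdbool.h>

#define RS_SIZE 16

typedef struct {
    bool     ready;   /* true: value is available            */
    int      tag;     /* if !ready: tag of the producer      */
    uint64_t value;   /* if ready: the source operand value  */
} rs_source_t;

typedef struct {
    bool        busy;       /* entry holds a waiting instruction    */
    int         opcode;
    rs_source_t src[2];
    int         dest_tag;   /* tag this instruction will broadcast  */
} rs_entry_t;

/* Step 3: when a value is produced, broadcast (tag, value); every waiting
   source that matches the tag captures the value and becomes ready. */
void rs_broadcast(rs_entry_t rs[RS_SIZE], int tag, uint64_t value)
{
    for (int i = 0; i < RS_SIZE; i++) {
        if (!rs[i].busy) continue;
        for (int s = 0; s < 2; s++) {
            if (!rs[i].src[s].ready && rs[i].src[s].tag == tag) {
                rs[i].src[s].value = value;
                rs[i].src[s].ready = true;
            }
        }
    }
}

/* Step 4: pick one ready instruction to dispatch to a functional unit.
   (A real scheduler also arbitrates when more are ready than FUs exist.) */
int rs_select_ready(const rs_entry_t rs[RS_SIZE])
{
    for (int i = 0; i < RS_SIZE; i++)
        if (rs[i].busy && rs[i].src[0].ready && rs[i].src[1].ready)
            return i;           /* index of a dispatchable entry */
    return -1;                  /* nothing ready this cycle      */
}
```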

Tomasulo's Algorithm
OoO execution with register renaming, invented by Robert Tomasulo. Used in the IBM 360/91 floating point units.
Read: Tomasulo, "An Efficient Algorithm for Exploiting Multiple Arithmetic Units," IBM Journal of R&D, Jan. 1967.
What is the major difference today? Precise exceptions: the IBM 360/91 did NOT have them.
Patt, Hwu, Shebanow, "HPS, a new microarchitecture: rationale and introduction," MICRO 1985.
Patt et al., "Critical issues regarding HPS, a high performance microarchitecture," MICRO 1985.
Variants are used in most high-performance processors: initially in Intel Pentium Pro, AMD K5, Alpha 21264, MIPS R10000, IBM POWER5, IBM z196, Oracle UltraSPARC T4, ARM Cortex A15.

Two Humps in a Modern Pipeline
[Pipeline diagram: fetch and decode (in order), a SCHEDULE stage feeding execution units of different latencies (out of order), and a REORDER stage before writeback (in order); a TAG and VALUE broadcast bus connects the execution units back to the scheduler.]
Hump 1: Reservation stations (scheduling window)
Hump 2: Reordering (reorder buffer, aka instruction window or active window)

General Organization of an OOO Processor
[Figure: Smith and Sohi, "The Microarchitecture of Superscalar Processors," Proc. IEEE, Dec. 1995.]

Tomasulo's Machine: IBM 360/91
[Block diagram: the FP registers and load buffers (fed from memory and the instruction unit) feed an operation bus into reservation stations in front of the FP functional units; store buffers connect to memory; the common data bus carries results back to the registers, reservation stations, and store buffers.]

Register Renaming
Output and anti dependencies are not true dependencies. WHY? The same register refers to values that have nothing to do with each other; they exist because there are not enough register IDs (i.e., names) in the ISA.
The register ID is renamed to the reservation station entry that will hold the register's value:
Register ID -> RS entry ID
Architectural register ID -> Physical register ID
After renaming, the RS entry ID is used to refer to the register. This eliminates anti- and output-dependencies and approximates the performance effect of a large number of registers even though the ISA has a small number.

Tomasulo's Algorithm: Renaming
Register rename table (register alias table): one entry per architectural register (R0-R9 shown), with fields tag, value, and valid?. Initially every entry is valid (valid? = 1), i.e., the value field holds the register's architectural value and there is no pending producer.
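A standalone sketch of such a register alias table and of how decode uses it, assuming (as on the slide) that reservation station entry IDs serve as the rename tags; the names are illustrative.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_REGS 10   /* R0..R9, as in the slide's table */

typedef struct {
    int      tag;     /* RS entry that will produce the value (if !valid)  */
    uint64_t value;   /* latest value of the register (if valid)           */
    bool     valid;   /* true: the value field holds the register's value  */
} rat_entry_t;

/* Renaming a source operand: read either a value or a producer tag. */
void rename_source(const rat_entry_t rat[NUM_REGS], int reg,
                   bool *ready, uint64_t *value, int *tag)
{
    if (rat[reg].valid) { *ready = true;  *value = rat[reg].value; }
    else                { *ready = false; *tag   = rat[reg].tag;   }
}

/* Renaming a destination: later readers wait on this RS entry's tag. */
void rename_dest(rat_entry_t rat[NUM_REGS], int reg, int rs_tag)
{
    rat[reg].valid = false;
    rat[reg].tag   = rs_tag;
}
```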

Tomasulo's Algorithm
If a reservation station is available before renaming: the instruction plus its renamed operands (source value/tag) are inserted into the reservation station. Only rename if a reservation station is available; else stall.
While in the reservation station, each instruction watches the common data bus (CDB) for the tags of its sources. When a tag is seen, it grabs the value for that source and keeps it in the reservation station. When both operands are available, the instruction is ready to be dispatched.
Dispatch the instruction to the functional unit when it is ready.
After the instruction finishes in the functional unit: arbitrate for the CDB and put the tagged value onto the CDB (tag broadcast).
The register file is connected to the CDB. Each register contains a tag indicating the latest writer to that register. If the tag in the register file matches the broadcast tag, write the broadcast value into the register (and set the valid bit), and reclaim the rename tag -> no valid copy of the tag remains in the system!

An Exercise
MUL R3 <- R1, R2
ADD R5 <- R3, R4
ADD R7 <- R2, R6
ADD R10 <- R8, R9
MUL R11 <- R7, R10
ADD R5 <- R5, R11
Pipeline stages: F D E W. Assume ADD takes 4 cycles to execute and MUL takes 6 cycles; assume one adder and one multiplier.
How many cycles does this code take:
in a non-pipelined machine?
in an in-order-dispatch pipelined machine with imprecise exceptions (no forwarding and full forwarding)?
in an out-of-order dispatch pipelined machine with imprecise exceptions (full forwarding)?