Lecture 9: Dynamic ILP. Topics: out-of-order processors (Sections 2.3-2.6)


An Out-of-Order Processor Implementation

[Figure: block diagram. Branch prediction and instruction fetch feed the Instr Fetch Queue; Decode & Rename places Instr 1-6 into the Reorder Buffer (ROB) and the Issue Queue (IQ); a Register File (R1-R32) and three ALUs execute instructions; results are written to the ROB and tags are broadcast to the IQ.]

  Original code          Renamed code (ROB temporaries T1-T6)
  R1 <- R1+R2            T1 <- R1+R2
  R2 <- R1+R3            T2 <- T1+R3
  BEQZ R2                BEQZ T2
  R3 <- R1+R2            T4 <- T1+T2
  R1 <- R3+R2            T5 <- T4+T2

Design Details - I

- Instructions enter the pipeline in order
  - no need for branch delay slots if prediction happens in time
- Instructions leave the pipeline in order
  - all instructions that enter also get placed in the ROB
  - the process of an instruction leaving the ROB (in order) is called commit
  - an instruction commits only if it and all instructions before it have completed successfully (without an exception)
- To preserve precise exceptions, a result is written into the register file only when the instruction commits
  - until then, the result is saved in a temporary register in the ROB
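The in-order commit rule can be made concrete with a small simulation. The following is a minimal Python sketch, not the hardware described on the slide; the names (ROBEntry, commit) are illustrative. Each ROB entry records a destination register, a completion flag, and an exception flag; commit drains entries from the head only while the head has completed without an exception, and only then is the value copied into the architectural register file.

    # Minimal sketch of in-order commit from a reorder buffer (ROB).
    # Entry layout and function names are illustrative, not from the lecture.
    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class ROBEntry:
        dest: str              # architectural register written by this instr
        value: int = 0         # result held in the ROB until commit
        completed: bool = False
        exception: bool = False

    def commit(rob, regfile):
        """Retire instructions from the ROB head, strictly in order."""
        while rob and rob[0].completed:
            entry = rob[0]
            if entry.exception:
                # Precise exception: the regfile holds exactly the results of
                # all older instructions at this point.
                raise RuntimeError(f"exception at instruction writing {entry.dest}")
            regfile[entry.dest] = entry.value   # write-back happens at commit
            rob.popleft()

    # Usage: two finished instructions and one still executing.
    regfile = {"R1": 0, "R2": 0, "R3": 0}
    rob = deque([ROBEntry("R1", 7, completed=True),
                 ROBEntry("R2", 9, completed=True),
                 ROBEntry("R3")])            # not completed yet -> blocks commit
    commit(rob, regfile)
    print(regfile)   # {'R1': 7, 'R2': 9, 'R3': 0}; one entry remains in the ROB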

Design Details - II

- Instructions get renamed and placed in the issue queue
  - some operands are already available (T1-T6; R1-R32), while others are still being produced by instructions in flight (T1-T6)
- As instructions finish, they write results into the ROB (T1-T6) and broadcast the operand tag (T1-T6) to the issue queue
  - instructions now know whether their operands are ready
- When a ready instruction issues, it reads its operands from T1-T6 and R1-R32 and executes (out-of-order execution)
- Can you have WAW or WAR hazards? By using more names (T1-T6), name dependences can be avoided
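The tag-broadcast step can also be sketched in a few lines. Below is a minimal Python model, with made-up class and method names, of the slide's example: each issue-queue entry holds two source tags with ready bits; when a producer finishes, its tag is broadcast and matching sources are marked ready; any entry with both sources ready may issue, regardless of program order.

    # Minimal sketch of issue-queue wakeup via tag broadcast (illustrative names).
    class IQEntry:
        def __init__(self, name, src1, src2, ready1, ready2):
            self.name = name
            self.src = [src1, src2]          # operand tags (e.g. "T1", "R3")
            self.ready = [ready1, ready2]    # True if the operand is available

        def wakeup(self, tag):
            """A finished producer broadcasts its tag: mark matching sources ready."""
            for i in range(2):
                if self.src[i] == tag:
                    self.ready[i] = True

        def can_issue(self):
            return all(self.ready)

    iq = [IQEntry("I2: T2 <- T1+R3", "T1", "R3", False, True),
          IQEntry("I4: T4 <- T1+T2", "T1", "T2", False, False)]

    for tag in ["T1", "T2"]:                 # producers finish and broadcast tags
        for e in iq:
            e.wakeup(tag)
        ready = [e.name for e in iq if e.can_issue()]
        print(f"after {tag} broadcast, ready to issue: {ready}")

After the T1 broadcast only I2 is ready; after T2 both are, so I4 can issue even though an older instruction may still be waiting.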

Design Details - III

- If instr-3 raises an exception, wait until it reaches the top of the ROB
  - at this point, R1-R32 contain the results of all instructions up to instr-3
  - save the registers, save the PC of instr-3, and service the exception
- If a branch is a mispredict, flush all instructions after the branch and restart on the correct path
  - mispredicted instrs will not have updated the registers (the branch cannot commit until it has completed, and the flush happens as soon as the branch completes)
- Potential problems?
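The recovery action on a mispredict can be sketched as follows; this is an assumed model for illustration, not the specific mechanism of any machine discussed here. The ROB is kept in program order, so squashing simply discards every entry younger than the branch, and the architectural registers need no repair because squashed instructions never wrote them.

    # Minimal sketch of squashing instructions younger than a mispredicted branch.
    # The list models the ROB in program order; indices are ROB positions.
    def flush_after_branch(rob, branch_index):
        """Keep the branch and everything older; discard all younger entries."""
        squashed = rob[branch_index + 1:]
        del rob[branch_index + 1:]
        return squashed

    rob = ["I1: T1 <- R1+R2", "I2: T2 <- T1+R3", "I3: BEQZ T2",
           "I4: T4 <- T1+T2", "I5: T5 <- T4+T2"]
    gone = flush_after_branch(rob, branch_index=2)   # I3 is the mispredicted branch
    print(rob)    # I1, I2, I3 survive
    print(gone)   # I4, I5 are squashed; they never updated the register file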

Managing Register Names

- Temporary values are stored in the register file and not the ROB
- Logical registers: R1-R32; physical registers: P1-P64
- At the start, R1-R32 can be found in P1-P32
- Instructions stop entering the pipeline when P64 is assigned

  R1 <- R1+R2        P33 <- P1+P2
  R2 <- R1+R3        P34 <- P33+P3
  BEQZ R2            BEQZ P34
  R3 <- R1+R2        P35 <- P33+P34

- What happens on commit?
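The renaming on this slide can be reproduced with a small map-table model. The sketch below is a Python illustration under the slide's assumptions (32 logical registers initially mapped to P1-P32, P33-P64 on a free list); the helper name rename is made up for the example.

    # Minimal sketch of register renaming with a map table and a free list.
    from collections import deque

    map_table = {f"R{i}": f"P{i}" for i in range(1, 33)}   # R1-R32 -> P1-P32
    free_list = deque(f"P{i}" for i in range(33, 65))      # P33-P64 are free

    def rename(dst, src1, src2=None):
        """Read the source mappings, then allocate a new physical reg for dst."""
        s1 = map_table[src1]
        s2 = map_table[src2] if src2 else None
        if not free_list:
            raise RuntimeError("no free physical registers: stall the front end")
        p = free_list.popleft()
        map_table[dst] = p
        return p, s1, s2

    print(rename("R1", "R1", "R2"))   # ('P33', 'P1', 'P2')    R1 <- R1+R2
    print(rename("R2", "R1", "R3"))   # ('P34', 'P33', 'P3')   R2 <- R1+R3
    print(map_table["R2"])            # BEQZ R2 reads P34
    print(rename("R3", "R1", "R2"))   # ('P35', 'P33', 'P34')  R3 <- R1+R2
    print(rename("R1", "R3", "R2"))   # ('P36', 'P35', 'P34')  R1 <- R3+R2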

The Commit Process

- On commit, no copy is required; the register map table is updated
  - the committed value of R1 is now in P33 and not P1
  - on an exception, P33 is copied to memory and not P1
- An instruction in the issue queue need not modify its input operand when the producer commits
- When instruction-1 commits, we no longer have any use for P1; it is put in a free pool, and a new instruction can now enter the pipeline
  - for every instr that commits, a new instr can enter the pipeline
  - the number of in-flight instrs is a constant = the number of extra (rename) registers
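The bookkeeping at commit can be sketched as a continuation of the renaming model above; again this is only an illustration with made-up names. At rename time each instruction remembers the physical register its destination previously mapped to; at commit, that older register can no longer be needed and is returned to the free pool, which is what bounds the number of in-flight instructions.

    # Minimal sketch of recycling physical registers at commit (assumed model).
    from collections import deque

    map_table = {"R1": "P1", "R2": "P2", "R3": "P3"}
    free_list = deque(["P33", "P34", "P35"])
    rob = deque()                      # entries: (dest, new_preg, old_preg)

    def rename(dst):
        new_p = free_list.popleft()
        rob.append((dst, new_p, map_table[dst]))   # remember the old mapping
        map_table[dst] = new_p

    def commit():
        dst, new_p, old_p = rob.popleft()
        free_list.append(old_p)        # the old physical register is recycled;
                                       # no value is copied anywhere at commit

    rename("R1")                       # R1: P1 -> P33
    rename("R2")                       # R2: P2 -> P34
    commit()                           # instr 1 commits: P1 returns to the pool
    print(map_table["R1"], list(free_list))   # P33 ['P35', 'P1']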

The Alpha 21264 Out-of-Order Implementation

[Figure: the same pipeline diagram, now with a Register Map Table (R1 -> P1, R2 -> P2, ...) in the Decode & Rename stage and a Register File of P1-P64; results are written to the register file and tags are broadcast to the IQ.]

  Original code          Renamed code
  R1 <- R1+R2            P33 <- P1+P2
  R2 <- R1+R3            P34 <- P33+P3
  BEQZ R2                BEQZ P34
  R3 <- R1+R2            P35 <- P33+P34
  R1 <- R3+R2            P36 <- P35+P34

Out-of-Order Loads/Stores

[Figure: a short sequence of load and store instructions, beginning with St R1, [R2], followed by memory accesses involving R3 [R4], R5 [R6], R7 [R8], and R9 [R10].]

- What if the issue queue also had load/store instructions?
- Can we continue executing instructions out-of-order?

Memory Dependence Checking

[Figure: a queue of stores and loads to the addresses 0xabcdef, 0xabcdef, 0xabcd00, 0xabc000, 0xabcd00.]

- The issue queue checks for register dependences and executes instructions as soon as their registers are ready
- Loads/stores access memory as well; we must check for RAW, WAW, and WAR hazards through memory too
- Hence, first resolve register dependences to compute effective addresses; then check for memory dependences

Memory Dependence Checking (continued)

[Figure: the same queue of stores and loads to the addresses 0xabcdef, 0xabcdef, 0xabcd00, 0xabc000, 0xabcd00.]

- Load and store addresses are maintained in program order in the Load/Store Queue (LSQ)
- Loads can issue if they are guaranteed not to have true dependences on earlier stores
- Stores can issue only when we are ready to modify memory (we cannot recover if an earlier instr raises an exception)
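A conservative version of the load-issue check can be sketched as follows; this is an illustration, not the policy of any particular processor, and the entry layout and function names are assumptions. A load may issue only when every older store in the LSQ has a resolved address that does not match the load's address; stores themselves wait for commit, as the slide notes.

    # Minimal sketch of a conservative load/store queue (LSQ) check.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LSQEntry:
        kind: str                     # "load" or "store"
        addr: Optional[int] = None    # None until the effective address is computed

    def load_can_issue(lsq, load_index):
        """A load may issue only if no older store might write its address."""
        load_addr = lsq[load_index].addr
        if load_addr is None:
            return False
        for older in lsq[:load_index]:
            if older.kind == "store" and (older.addr is None or older.addr == load_addr):
                return False          # unknown or matching store address: wait
        return True

    lsq = [LSQEntry("store", 0xabcdef),
           LSQEntry("load",  0xabcdef),   # matches the older store: must wait
           LSQEntry("load",  0xabcd00),   # no conflict with older stores: may issue
           LSQEntry("store", None),       # address not computed yet
           LSQEntry("load",  0xabc000)]   # blocked by the unresolved older store

    for i, e in enumerate(lsq):
        if e.kind == "load":
            print(i, load_can_issue(lsq, i))   # 1 False, 2 True, 4 False

Real designs can be more aggressive (e.g. speculating that a load does not conflict and recovering later), but this conservative check matches the guarantee stated on the slide.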

The Alpha 21264 Out-of-Order Implementation (with LSQ)

[Figure: the same pipeline diagram, extended with a Load/Store Queue (LSQ) holding renamed memory operations such as P37 [P34 + 4] and P35 [P36 + 4], an additional ALU for address generation, and a D-Cache; results are written to the register file and tags are broadcast to the IQ.]
