CS 252 Graduate Computer Architecture. Lecture 4: Instruction-Level Parallelism

CS 252 Graduate Computer Architecture
Lecture 4: Instruction-Level Parallelism
Krste Asanovic
Electrical Engineering and Computer Sciences, University of California, Berkeley
http://www.eecs.berkeley.edu/~krste
http://inst.cs.berkeley.edu/~cs252

Review: Pipeline Performance

Pipeline CPI = Ideal pipeline CPI + Structural Stalls + Data Hazard Stalls + Control Stalls
- Ideal pipeline CPI: a measure of the maximum performance attainable by the implementation
- Structural hazards: the hardware cannot support this combination of instructions
- Data hazards: an instruction depends on the result of a prior instruction still in the pipeline
- Control hazards: caused by the delay between the fetching of instructions and decisions about changes in control flow (branches, jumps, exceptions)
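As a quick worked example of how the terms compose (the stall rates below are illustrative, not figures from the lecture):

```latex
\text{CPI} \;=\; \underbrace{1.0}_{\text{ideal}} \;+\; \underbrace{0.05}_{\text{structural}} \;+\; \underbrace{0.30}_{\text{data}} \;+\; \underbrace{0.15}_{\text{control}} \;=\; 1.5
```

With these assumed stall rates the pipeline delivers 1.0/1.5, i.e. about two thirds of its ideal throughput.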

Review: Types of Data Hazards

Consider executing a sequence of instructions of the form r_k <- (r_i) op (r_j):
- Data dependence:   r3 <- (r1) op (r2);  r5 <- (r3) op (r4)   => Read-after-Write (RAW) hazard
- Anti-dependence:   r3 <- (r1) op (r2);  r1 <- (r4) op (r5)   => Write-after-Read (WAR) hazard
- Output dependence: r3 <- (r1) op (r2);  r3 <- (r6) op (r7)   => Write-after-Write (WAW) hazard

Data Hazards: An Example

Operand order is dest, src1, src2:
  I1  DIVD  f6,  f6, f4
  I2  LD    f2,  45(r3)
  I3  MULTD f0,  f2, f4
  I4  DIVD  f8,  f6, f2
  I5  SUBD  f10, f0, f6
  I6  ADDD  f6,  f8, f2

Identify the RAW, WAR, and WAW hazards in this sequence.
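A minimal sketch (not from the lecture) of how these hazard classes can be found mechanically from the register sets of each instruction; the instruction encoding below is illustrative only, and the scan reports every pairwise dependence without checking for intervening redefinitions:

```python
# Each instruction is (dest, [sources]); the sequence mirrors the slide's example.
seq = [
    ("f6",  ["f6", "f4"]),   # I1 DIVD  f6, f6, f4
    ("f2",  ["r3"]),         # I2 LD    f2, 45(r3)
    ("f0",  ["f2", "f4"]),   # I3 MULTD f0, f2, f4
    ("f8",  ["f6", "f2"]),   # I4 DIVD  f8, f6, f2
    ("f10", ["f0", "f6"]),   # I5 SUBD  f10, f0, f6
    ("f6",  ["f8", "f2"]),   # I6 ADDD  f6, f8, f2
]

for i, (d_i, s_i) in enumerate(seq):
    for j in range(i + 1, len(seq)):
        d_j, s_j = seq[j]
        if d_i in s_j:            # later instruction reads an earlier result
            print(f"RAW: I{j+1} reads {d_i} written by I{i+1}")
        if d_j in s_i:            # later instruction overwrites an earlier source
            print(f"WAR: I{j+1} writes {d_j} read by I{i+1}")
        if d_j == d_i:            # two writes to the same register
            print(f"WAW: I{j+1} and I{i+1} both write {d_i}")
```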

Complex Pipelining

[Datapath figure: IF, ID, and Issue stages feeding an ALU, a memory unit, and Fadd/Fmul/Fdiv units, reading GPRs and FPRs, all writing back through WB.]

Pipelining becomes complex when we want high performance in the presence of:
- Long-latency or partially pipelined floating-point units
- Multiple function and memory units
- Memory systems with variable access time
- Precise exceptions

Complex In-Order Pipeline

[Datapath figure: PC -> Inst Mem -> Decode -> execute stages X1-X3 (integer path with Data Mem, plus Fadd, Fmul, and an unpipelined FDiv reading the FPRs) -> W. The commit point is at the W stage.]

- Delay writeback so all operations have the same latency to the W stage
  - Write ports are never oversubscribed (one instruction in and one instruction out every cycle)
  - Instructions commit in order, which simplifies the precise-exception implementation
- How do we prevent the increased writeback latency from slowing down single-cycle integer operations? Bypassing.

Complex Pipeline

[Same datapath figure as above, highlighting the unpipelined Fdiv unit.]

Can we solve write hazards without equalizing all pipeline depths and without bypassing?

When is it Safe to Issue an Instruction?

Suppose a data structure keeps track of all the instructions in all the functional units. The following checks need to be made before the Issue stage can dispatch an instruction:
- Is the required function unit available?
- Is the input data available? (RAW?)
- Is it safe to write the destination? (WAR? WAW?)
- Is there a structural conflict at the WB stage?

Scoreboard for In-Order Issue

- Busy[FU#]: a bit-vector indicating each functional unit's availability (FU = Int, Add, Mult, Div); these bits are hardwired to the FUs
- WP[reg#]: a bit-vector recording the registers for which writes are pending; these bits are set by the Issue stage and cleared by the WB stage
- Issue checks the instruction (opcode, dest, src1, src2) against the scoreboard (Busy & WP) to dispatch:
  - FU available?  Busy[FU#]
  - RAW?           WP[src1] or WP[src2]
  - WAR?           cannot arise
  - WAW?           WP[dest]

(A minimal scoreboard sketch follows the example below.)

In-Order Issue Limitations: An Example

  #  Instruction          Latency
  1  LD    F2, 34(R2)     1
  2  LD    F4, 45(R3)     long
  3  MULTD F6, F4, F2     3
  4  SUBD  F8, F2, F2     1
  5  DIVD  F4, F2, F8     4
  6  ADDD  F10, F6, F4    1

In-order: 1 (2,1) 2 3 4 4 3 5 5 6 6
(In the original slide, underlining marks the cycle in which an instruction writes back.)

The in-order restriction prevents instruction 4 from being dispatched earlier.
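A minimal sketch, assuming a simplified scoreboard with only the Busy and WP bit-vectors described above (the WB structural check is omitted); the functional-unit names and the dispatch interface are illustrative, not the lecture's exact design:

```python
# Simplified in-order issue scoreboard: Busy[FU#] and WP[reg#] only.
class Scoreboard:
    def __init__(self, fus=("Int", "Add", "Mult", "Div"), num_regs=32):
        self.busy = {fu: False for fu in fus}   # Busy[FU#]
        self.wp = [False] * num_regs            # WP[reg#]

    def can_issue(self, fu, dest, src1, src2):
        if self.busy[fu]:                       # FU available?
            return False
        if self.wp[src1] or self.wp[src2]:      # RAW on a pending write?
            return False
        if self.wp[dest]:                       # WAW on a pending write?
            return False
        return True                             # WAR cannot arise with in-order issue

    def issue(self, fu, dest):
        self.busy[fu] = True
        self.wp[dest] = True                    # set by the Issue stage

    def writeback(self, fu, dest):
        self.busy[fu] = False
        self.wp[dest] = False                   # cleared by the WB stage
```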

Out-of-Order Issue

[Datapath figure: IF and ID feed an issue-stage buffer, which dispatches to the ALU, memory unit, Fadd, and Fmul; all write back through WB.]

- The issue-stage buffer holds multiple instructions waiting to issue
- Decode adds the next instruction to the buffer if there is space and the instruction does not cause a WAR or WAW hazard
- Any instruction in the buffer whose RAW hazards are satisfied can be issued (for now, at most one dispatch per cycle)
- On a writeback (WB), new instructions may become enabled

In-Order Issue Limitations: An Example (continued)

Same instruction sequence and latencies as before:
  In-order:     1 (2,1) 2 3 4 4 3 5 5 6 6
  Out-of-order: 1 (2,1) 4 4 2 3 3 5 5 6 6

Out-of-order execution did not allow any significant improvement!

How Many Instructions Can Be in the Pipeline?

- Which features of an ISA limit the number of instructions in the pipeline? The number of registers.
- Which features of a program limit the number of instructions in the pipeline? Control transfers.
- Out-of-order dispatch by itself does not provide any significant performance improvement!

Overcoming the Lack of Register Names

- Floating-point pipelines often cannot be kept filled with a small number of registers; the IBM 360 had only 4 floating-point registers
- Can a microarchitecture use more registers than specified by the ISA without loss of ISA compatibility?
- Robert Tomasulo of IBM suggested an ingenious solution in 1967, based on on-the-fly register renaming

Little's Law

Throughput (T) = Number in Flight (N) / Latency (L)

[Figure: instructions flow from Issue through Execution to WB; the number in flight is bounded by the registers available to hold in-flight results.]

Example: with 4 floating-point registers and 8 cycles per floating-point operation, at most 4 operations can be in flight, so the maximum issue rate is 1/2 issue per cycle (worked out after the next slide).

Instruction-Level Parallelism via Renaming

Same instruction sequence and latencies as before, but the antidependent write to F4 by instruction 5 (DIVD) is renamed to a new physical name:

  In-order:     1 (2,1) 2 3 4 4 3 5 5 6 6
  Out-of-order: 1 (2,1) 4 4 5 2 (3,5) 3 6 6

- Any antidependence can be eliminated by renaming (renaming requires additional storage)
- Can it be done in hardware? Yes!
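The Little's Law example written out with the slide's numbers:

```latex
T \;=\; \frac{N}{L} \;=\; \frac{4 \text{ ops in flight}}{8 \text{ cycles per op}} \;=\; 0.5 \text{ issues per cycle}
```

Conversely, sustaining one issue per cycle at an 8-cycle latency would require N = T x L = 8 operations in flight, and hence at least 8 distinct destination names, which is exactly what renaming provides beyond the 4 architectural registers.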

Register Renaming

[Datapath figure: IF and ID feed an issue-stage reorder buffer, which dispatches to the ALU, memory unit, Fadd, and Fmul; all write back through WB.]

- Decode performs register renaming and adds instructions to the issue-stage reorder buffer (ROB)
  => renaming makes WAR and WAW hazards impossible
- Any instruction in the ROB whose RAW hazards have been satisfied can be dispatched
  => out-of-order, or dataflow, execution

Dataflow Execution

Reorder buffer: each entry t1 ... tn holds <Ins#, use, exec, op, p1, src1, p2, src2>;
ptr2 points to the next entry to deallocate, ptr1 to the next available entry.

An instruction slot is a candidate for execution when:
- It holds a valid instruction (the use bit is set)
- It has not already started execution (the exec bit is clear)
- Both operands are available (p1 and p2 are set)
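A minimal sketch of the candidate test above, using a hypothetical entry record; the field names follow the slide, everything else is illustrative:

```python
from dataclasses import dataclass

@dataclass
class RobEntry:
    use: bool = False    # slot holds a valid instruction
    exec_: bool = False  # instruction has started execution (the slide's "exec" bit)
    op: str = ""         # opcode
    p1: bool = False     # operand 1 present?
    src1: object = None  # value or tag of operand 1
    p2: bool = False     # operand 2 present?
    src2: object = None  # value or tag of operand 2

def execution_candidates(rob):
    """Return indices of entries that may be dispatched this cycle."""
    return [i for i, e in enumerate(rob)
            if e.use and not e.exec_ and e.p1 and e.p2]
```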

Renaming & Out-of-Order Issue: An Example

  1  LD    F2, 34(R2)
  2  LD    F4, 45(R3)
  3  MULTD F6, F4, F2
  4  SUBD  F8, F2, F2
  5  DIVD  F4, F2, F8
  6  ADDD  F10, F6, F4

[Worked slide example: a renaming table maps each architectural register F1-F8 to either a value (presence bit set) or a reorder-buffer tag t1-t5, and the reorder buffer holds the six instructions with their use/exec bits and presence bits on their sources.]

- When are names in sources replaced by data? Whenever an FU produces data.
- When can a name be reused? Whenever an instruction completes.

Data-Driven Execution

[Figure: a renaming table & register file and a reorder buffer (entries <Ins#, use, exec, op, p1, src1, p2, src2>, tags t1 ... tn) feed a load unit, functional units, and a store unit; completed results return as <t, result> on a common bus.]

- Replacing a tag by its value is an expensive operation
- An instruction template (i.e., its tag t) is allocated by the Decode stage, which also stores the tag in the register file
- When an instruction completes, its tag is deallocated
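A minimal sketch of the decode-time renaming step described above: tag allocation in the map, and the <t, result> broadcast shown as a simple scan. The class and method names are illustrative, not the actual 360/91 design:

```python
# Rename table: maps an architectural register to either ("val", data)
# or ("tag", t), where t names the in-flight instruction producing it.
class RenameTable:
    def __init__(self, regs):
        self.map = {r: ("val", 0.0) for r in regs}

    def read_source(self, reg):
        return self.map[reg]            # either a value or a tag to wait on

    def rename_dest(self, reg, tag):
        self.map[reg] = ("tag", tag)    # Decode stores the tag for later readers

    def broadcast(self, tag, value):
        # When an FU produces <tag, value>, every name mapped to that
        # tag is replaced by the data.
        for reg, entry in self.map.items():
            if entry == ("tag", tag):
                self.map[reg] = ("val", value)

# Example: rt = RenameTable(["F2", "F4", "F6"]); rt.rename_dest("F2", "t1");
# rt.broadcast("t1", 3.5) makes F2's value available to all waiting readers.
```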

Simplifying Allocation/Deallocation

Reorder buffer: entries t1 ... tn with <Ins#, use, exec, op, p1, src1, p2, src2>;
ptr2 points to the next entry to deallocate, ptr1 to the next available entry.

- The instruction buffer is managed circularly (a small sketch of this rule follows the next slide)
- The exec bit is set when an instruction begins execution
- When an instruction completes, its use bit is marked free
- ptr2 is incremented only if the use bit is marked free

IBM 360/91 Floating Point Unit (R. M. Tomasulo, 1967)

[Figure: six load buffers (from memory) and four floating-point registers (each with a presence bit and data) feed instruction templates distributed by functional unit: three reservation stations in front of the adder and two in front of the multiplier, plus store buffers (to memory). Completed results are broadcast as <t, result>.]

The common bus ensures that data is made available immediately to all the instructions waiting for it.
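A minimal sketch of the circular management rule above (two pointers plus a use bit per slot). The class is illustrative, and a count field is added here to distinguish a full buffer from an empty one, which the slide leaves implicit:

```python
class CircularROB:
    def __init__(self, n):
        self.n = n
        self.use = [False] * n   # use bit: slot holds an instruction not yet completed
        self.ptr1 = 0            # next available slot (allocation pointer)
        self.ptr2 = 0            # next slot to deallocate
        self.count = 0           # occupied slots, to distinguish full from empty

    def allocate(self):
        if self.count == self.n:
            return None          # buffer full
        slot = self.ptr1
        self.use[slot] = True
        self.ptr1 = (self.ptr1 + 1) % self.n
        self.count += 1
        return slot

    def complete(self, slot):
        self.use[slot] = False   # completion marks the slot free ...

    def deallocate(self):
        # ... but ptr2 only advances past slots whose use bit is clear,
        # so slots are reclaimed in allocation order.
        while self.count > 0 and not self.use[self.ptr2]:
            self.ptr2 = (self.ptr2 + 1) % self.n
            self.count -= 1
```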

Effectiveness?

Renaming and out-of-order execution were first implemented in 1969 in the IBM 360/91, but did not show up in subsequent models until the mid-nineties. Why?

Reasons:
1. Effective on only a very small class of programs
2. Memory latency was a much bigger problem
3. Exceptions were not precise!

One more problem needed to be solved: control transfers.

CS252 Administrivia

- Prereq quiz will be handed back Thursday at the end of class
- bSpace experience?
- First reading assignment is split into two parts; the second part is due Thursday
- Next reading assignment distributed Thursday, due Tuesday
- Please see me if you're currently waitlisted

In the News

AMD quad-core released 9/10/2007:
- First single-die quad-core x86 processor
- Major marketing points include energy efficiency and virtualization features (not clock rate)

Precise Interrupts

It must appear as if an interrupt is taken between two instructions (say I_i and I_i+1):
- the effect of all instructions up to and including I_i is totally complete
- no effect of any instruction after I_i has taken place

The interrupt handler either aborts the program or restarts it at I_i+1.

Effect on Interrupts: Out-of-Order Completion

  I1  DIVD  f6,  f6, f4
  I2  LD    f2,  45(r3)
  I3  MULTD f0,  f2, f4
  I4  DIVD  f8,  f6, f2
  I5  SUBD  f10, f0, f6
  I6  ADDD  f6,  f8, f2

Out-of-order completion: 1 2 2 3 1 4 3 5 5 4 6 6

Now consider interrupts: do we restore f2? restore f10? Precise interrupts are difficult to implement at high speed, because we want to start execution of later instructions before the exception checks are finished on earlier instructions.

Exception Handling (In-Order Five-Stage Pipeline)

[Figure: PC -> Inst Mem -> Decode -> Execute -> Data Mem -> Writeback, with the commit point at the M stage. Exception sources: PC address exceptions (F), illegal opcode (D), overflow (E), data address exceptions (M), plus asynchronous interrupts; the Cause and EPC registers are written, the F/D/E stages are killed, and the handler PC is selected.]

- Hold exception flags in the pipeline until the commit point (M stage)
- Exceptions in earlier pipe stages override later exceptions
- Inject external interrupts at the commit point (overriding others)
- If there is an exception at commit: update the Cause and EPC registers, kill all stages, and inject the handler PC into the fetch stage
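A minimal sketch of the commit-point rule above for one instruction carrying per-stage exception flags; the flag encoding, state names, and priority interpretation are illustrative, not the lecture's exact hardware:

```python
# Per-instruction exception flags collected as the instruction flows down the
# pipe. At commit (M stage), the earliest-stage exception wins, and an external
# interrupt injected at the commit point overrides them all.
STAGE_ORDER = ["F", "D", "E", "M"]   # earlier stage = higher priority

def commit(instr_pc, exc_flags, external_interrupt, state):
    """exc_flags: dict mapping stage -> cause code (or None)."""
    cause = external_interrupt
    if cause is None:
        for stage in STAGE_ORDER:
            if exc_flags.get(stage) is not None:
                cause = exc_flags[stage]
                break
    if cause is None:
        return "commit"                     # no exception: update arch state normally
    state["Cause"] = cause                  # record why we trapped
    state["EPC"] = instr_pc                 # where the handler may restart
    state["kill_all_stages"] = True         # squash younger in-flight instructions
    state["next_pc"] = state["handler_pc"]  # redirect fetch to the handler
    return "exception"
```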

Phases of Instruction Execution

[Figure: PC -> I-cache -> Fetch Buffer -> Issue Buffer -> Functional Units -> Result Buffer -> Architectural State.]

- Fetch: instruction bits are retrieved from the cache
- Decode: instructions are placed in the appropriate issue (aka "dispatch") stage buffer
- Execute: instructions and operands are sent to the execution units; when execution completes, all results and exception flags are available
- Commit: the instruction irrevocably updates architectural state (aka "graduation" or "completion")

In-Order Commit for Precise Exceptions

[Figure: in-order Fetch and Decode feed a reorder buffer; execution is out-of-order; in-order Commit drains the buffer. An exception kills the in-flight instructions and injects the handler PC.]

- Instructions are fetched and decoded into the instruction reorder buffer in order
- Execution is out-of-order (and hence completion is out-of-order)
- Commit (write-back to architectural state, i.e., regfile & memory) is in order
- Temporary storage is needed to hold results before commit (shadow registers and store buffers)

Extensions for Precise Exceptions

Reorder buffer: entries <Inst#, use, exec, op, p1, src1, p2, src2, pd, dest, data, cause>;
ptr2 points to the next entry to commit, ptr1 to the next available entry.

- Add <pd, dest, data, cause> fields to the instruction template
- Commit instructions to the register file and memory in program order => the buffers can be maintained circularly
- On an exception, clear the reorder buffer by resetting ptr1 = ptr2 (stores must wait for commit before updating memory)

Rollback and Renaming

[Figure: the register file (which now holds only committed state) and the reorder buffer feed a load unit, functional units, and a store unit; results return as <t, result> on the common bus, and Commit writes the register file.]

- The register file does not contain renaming tags any more
- How does the decode stage find the tag of a source register? Search the dest fields in the reorder buffer.
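A minimal sketch of in-order commit and exception rollback for a circular reorder buffer like the one above; the entry record, field names, and state layout are illustrative:

```python
def commit_step(rob, ptrs, regfile, memory):
    """Commit the oldest completed entry in program order; roll back on an exception."""
    entry = rob[ptrs["ptr2"]]
    if not entry["use"] or not entry["pd"]:
        return                                    # oldest instruction not finished yet
    if entry["cause"] is not None:
        ptrs["ptr1"] = ptrs["ptr2"]               # exception: clear the reorder buffer
        return                                    # (handler PC injection not shown)
    if entry["op"] == "store":
        memory[entry["dest"]] = entry["data"]     # stores update memory only at commit
    else:
        regfile[entry["dest"]] = entry["data"]    # write committed value to arch state
    entry["use"] = False
    ptrs["ptr2"] = (ptrs["ptr2"] + 1) % len(rob)  # advance the next-to-commit pointer
```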

Renaming Table

[Figure: a rename table (one <tag, valid bit> entry per architectural register r1, r2, ...) sits in front of the register file and the reorder buffer; the load unit, functional units, and store unit return <t, result> on the common bus, and Commit writes the register file.]

- The renaming table is a cache to speed up register-name lookup
- It needs to be cleared after each exception is taken
- When else are valid bits cleared? On control transfers.

Branch Penalty

[Figure: PC -> I-cache -> Fetch Buffer -> Issue Buffer -> Functional Units -> Result Buffer -> Architectural State; the next fetch starts long before the branch is executed.]

- How many instructions need to be killed on a misprediction?
- Modern processors may have more than 10 pipeline stages between next-PC calculation and branch resolution!

Average Run-Length Between Branches

Average dynamic instruction mix from SPEC92:

             SPECint92   SPECfp92
  ALU           39 %        13 %
  FPU Add        -          20 %
  FPU Mult       -          13 %
  Load          26 %        23 %
  Store          9 %         9 %
  Branch        16 %         8 %
  Other         10 %        12 %

SPECint92: compress, eqntott, espresso, gcc, li
SPECfp92: doduc, ear, hydro2d, mdljdp2, su2cor

What is the average run length between branches? (A back-of-the-envelope answer appears after the next slide.)

Next lecture: branch prediction & speculative execution.

Paper Discussion: B5000 vs. IBM 360

IBM set the foundations for ISAs from the 1960s onward:
- 8-bit byte
- Byte-addressable memory (as opposed to word-addressable memory)
- 32-bit words
- Two's complement arithmetic (though not the first processor to use it)
- 32-bit (SP) / 64-bit (DP) floating-point format and registers
- Commercial use of microcoded CPUs
- Binary compatibility / computer family

The B5000 was a very different model: HLL-only, stack-based, segmented virtual memory. The IBM paper made the case that ISAs should be good targets for microcoded processors, leading to CISC.
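Back-of-the-envelope answer to the run-length question above, using only the branch frequencies in the slide's table (simple arithmetic, not a figure quoted from the lecture):

```latex
\text{run length} \approx \frac{1}{\text{branch fraction}}:\qquad
\text{SPECint92: } \frac{1}{0.16} \approx 6 \text{ instructions},\qquad
\text{SPECfp92: } \frac{1}{0.08} \approx 12 \text{ instructions}
```

With only a handful of instructions between branches, a deep out-of-order pipeline cannot stay full unless it predicts branches and executes speculatively past them, which is the subject of the next lecture.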