Lecture 05: Pipelining: Basic/Intermediate Concepts and Implementation


Lecture 05: Pipelining: Basic/Intermediate Concepts and Implementation
CSE 564 Computer Architecture, Summer 2017
Department of Computer Science and Engineering
Yonghong Yan, yan@oakland.edu, www.secs.oakland.edu/~yan

Contents
1. Pipelining Introduction
2. The Major Hurdle of Pipelining: Pipeline Hazards
3. RISC-V ISA and its Implementation
Reading:
- Textbook: Appendix C
- RISC-V ISA
- Chisel Tutorial

Pipelining: It's Natural! Laundry Example
Ann, Brian, Cathy, and Dave each have one load of clothes to wash, dry, and fold:
- Washer takes 30 minutes
- Dryer takes 40 minutes
- Folder takes 20 minutes
One load: 90 minutes

Sequential Laundry
[Timeline figure, 6 PM to midnight: loads A-D each run 30 + 40 + 20 minutes back to back.]
Sequential laundry takes 6 hours for 4 loads. If they learned pipelining, how long would laundry take?

Pipelined Laundry: Start Work ASAP
[Timeline figure, 6 PM to midnight: load A starts washing at 6 PM; after the first wash, the dryer (the slowest stage) finishes a load every 40 minutes: 30, 40, 40, 40, 40, 20.]
Pipelined laundry takes 3.5 hours for 4 loads, versus 6 hours sequentially.
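As a quick check of the laundry arithmetic above (not part of the slides), a few lines of Python reproduce the 6-hour and 3.5-hour figures; with unbalanced stages, the slowest stage sets the pipeline rate:

```python
# Laundry example from the slides: washer 30 min, dryer 40 min,
# folder 20 min, four loads (A, B, C, D).
stages = [30, 40, 20]
loads = 4

# Sequential: each load runs all stages before the next load begins.
sequential = loads * sum(stages)                     # 4 * 90 = 360 min

# Pipelined: the first load takes the full 90 min; every later load
# finishes one slowest-stage time (40 min) after the previous one.
pipelined = sum(stages) + (loads - 1) * max(stages)  # 90 + 3 * 40 = 210 min

print(sequential / 60, "hours vs", pipelined / 60, "hours")  # 6.0 vs 3.5
```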

The Basics of a RISC Instruction Set (1/2)
RISC (Reduced Instruction Set Computer) architecture, or load-store architecture:
- All operations on data apply to data in registers and typically change the entire register (32 or 64 bits per register).
- The only operations that affect memory are load and store operations.
- The instruction formats are few in number, with all instructions typically being one size.
These three simple properties lead to dramatic simplifications in the implementation of pipelining.

The Basics of a RISC Instruction Set (2/2)
MIPS64 / RISC-V:
- 32 registers, with R0 = 0
- Three classes of instructions:
  - ALU instructions: add (DADD), subtract (DSUB), and logical operations (such as AND or OR)
  - Load and store instructions
  - Branches and jumps
Formats:
- R-format (add, sub, ...): OP rs rt rd sa funct
- I-format (lw, sw, ...): OP rs rt immediate
- J-format (j): OP jump target

RISC Instruction Set
Every instruction can be implemented in at most 5 clock cycles/stages:
- Instruction fetch cycle (IF): send the PC to memory, fetch the current instruction from memory, and update the PC to the next sequential PC by adding 4.
- Instruction decode/register fetch cycle (ID): decode the instruction and read the registers named by the register source specifiers from the register file.
- Execution/effective address cycle (EX): perform the memory address calculation for load/store instructions, or the ALU operation for register-register and register-immediate instructions.
- Memory access (MEM): perform the memory access for load/store instructions.
- Write-back cycle (WB): write results back to the destination operands for register-register and load instructions.

Classic 5-Stage Pipeline for a RISC
Each cycle the hardware initiates a new instruction and is executing some part of five different instructions.
- Simple;
- however, we must ensure that the overlap of instructions in the pipeline cannot cause a conflict (also called a hazard).
Clock number per instruction:
Instruction i:   IF ID EX MEM WB (cycles 1-5)
Instruction i+1:    IF ID EX MEM WB (cycles 2-6)
Instruction i+2:       IF ID EX MEM WB (cycles 3-7)
Instruction i+3:          IF ID EX MEM WB (cycles 4-8)
Instruction i+4:             IF ID EX MEM WB (cycles 5-9)
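The staircase pattern in the table above follows a simple rule; this small sketch (my illustration, not from the slides) computes which stage each instruction occupies in a given cycle, assuming no hazards:

```python
# Ideal 5-stage pipeline: with no hazards, instruction i (0-based)
# occupies stage s (0-based) at cycle i + s + 1 (1-based cycles).
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def stage_at(instr, cycle):
    """Stage occupied by instruction `instr` at `cycle`, or None if idle."""
    s = cycle - 1 - instr
    return STAGES[s] if 0 <= s < len(STAGES) else None

# Instruction i completes WB at cycle i + 5: five instructions
# finish in 9 cycles instead of the 25 an unpipelined machine needs.
assert stage_at(0, 1) == "IF" and stage_at(0, 5) == "WB"
assert stage_at(4, 9) == "WB"
```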

Computer Pipelines
Pipeline properties:
- Processors execute billions of instructions, so throughput is what matters.
- Pipelining doesn't help the latency of a single task; it helps the throughput of the entire workload.
- The pipeline rate is limited by the slowest pipeline stage.
- Multiple tasks operate simultaneously.
- Potential speedup = number of pipe stages.
- Unbalanced lengths of pipe stages reduce the speedup.
- Time to fill the pipeline and time to drain it reduce the speedup.
Under ideal conditions, the time per instruction on the pipelined processor equals:
Time per instruction (unpipelined) / Number of pipe stages
However, the stages may not be perfectly balanced. Pipelining yields a reduction in the average execution time per instruction.

Review: Components of a Computer
[Figure: Processor (Control + Datapath, with Program Counter (PC), Registers, and Arithmetic & Logic Unit (ALU)) connected to Memory (program bytes and data) through address, write-data, and read-data lines with enable and read/write control, plus Input/Output devices via the processor-memory and I/O-memory interfaces.]

CPU: Datapath vs Control
- Datapath: storage, functional units, and interconnect sufficient to perform the desired functions.
  - Inputs are control points
  - Outputs are signals
- Controller: a state machine that orchestrates operation of the datapath, based on the desired function and the signals.

5 Stages of MIPS Pipeline
Instruction Fetch | Instr. Decode / Reg. Fetch | Execute / Addr. Calc | Memory Access | Write Back
[Datapath figure: PC with +4 adder, instruction memory, register file (RS1, RS2, RD), sign-extended immediate, ALU with zero detect, data memory, and write-back MUX.]
IF: IR <= mem[PC]; PC <= PC + 4
EX/WB: Regs[IRrd] <= Regs[IRrs] op Regs[IRrt]

Making RISC Pipelining Real
Functional units are used in different cycles, hence we can overlap the execution of multiple instructions.
Important things to make it real:
- Separate instruction and data memories (e.g., I-cache and D-cache, or banking) eliminate the conflict over a single memory port.
- The register file is used in two stages (two reads and one write every cycle): read in ID (second half of the clock cycle) and write in WB (first half of the clock cycle).
- PC: increment and store the PC every clock, during the IF stage. A branch does not change the PC until the ID stage (which has an adder to compute the potential branch target).
- Staging data between pipeline stages: pipeline registers.

Pipeline Datapath
- The register file is used in the ID and WB stages: read in ID (second half of the clock cycle), write in WB (first half of the clock cycle).
- Separate instruction memory (IM) and data memory (DM).

Pipeline Registers
[Figure: the 5-stage datapath (Instruction Fetch | Instr. Decode / Reg. Fetch | Execute / Addr. Calc | Memory Access | Write Back) with pipeline registers between stages.]
IF: IR <= mem[PC]; PC <= PC + 4
ID: A <= Regs[IRrs]; B <= Regs[IRrt]
EX: rslt <= A op B (operation given by IR)
MEM: WB <= rslt
WB: Regs[IRrd] <= WB
Pipeline registers stage data between pipeline stages; they are named IF/ID, ID/EX, EX/MEM, and MEM/WB.

Pipeline Registers
The edge-triggered property of the registers is critical.

Instruction Set Processor Controller
A branch requires 3 cycles, a store requires 4 cycles, and all other instructions require 5 cycles.
IF: IR <= mem[PC]; PC <= PC + 4
ID (opfetch-dcd): A <= Regs[IRrs]; B <= Regs[IRrt]
Then, by instruction class (br, jmp, JR, JSR, RR, RI, LD, ST):
- Branch (br): if bop(A, B) then PC <= PC + IRim
- Jump (jmp): PC <= IRjaddr
- Register-register (RR): r <= A op B; WB <= r; Regs[IRrd] <= WB
- Register-immediate (RI): r <= A op IRim; WB <= r; Regs[IRrd] <= WB
- Load (LD): r <= A + IRim; WB <= Mem[r]; Regs[IRrd] <= WB
- Store (ST): r <= A + IRim; Mem[r] <= B

Events on Every Pipeline Stage
IF (any instruction):
  IF/ID.IR <= MEM[PC]; IF/ID.NPC <= PC + 4
  PC <= if ((EX/MEM.opcode == branch) & EX/MEM.cond) then EX/MEM.ALUoutput else PC + 4
ID (any instruction):
  ID/EX.A <= Regs[IF/ID.IR[rs]]; ID/EX.B <= Regs[IF/ID.IR[rt]]
  ID/EX.NPC <= IF/ID.NPC; ID/EX.Imm <= extend(IF/ID.IR[imm]); ID/EX.Rw <= IF/ID.IR[rt or rd]
EX:
  ALU instruction: EX/MEM.ALUoutput <= ID/EX.A func ID/EX.B, or EX/MEM.ALUoutput <= ID/EX.A op ID/EX.Imm
  Load/store: EX/MEM.ALUoutput <= ID/EX.A + ID/EX.Imm; EX/MEM.B <= ID/EX.B
  Branch: EX/MEM.ALUoutput <= ID/EX.NPC + (ID/EX.Imm << 2); EX/MEM.cond <= branch condition
MEM:
  ALU instruction: MEM/WB.ALUoutput <= EX/MEM.ALUoutput
  Load: MEM/WB.LMD <= MEM[EX/MEM.ALUoutput]; Store: MEM[EX/MEM.ALUoutput] <= EX/MEM.B
WB:
  ALU instruction: Regs[MEM/WB.Rw] <= MEM/WB.ALUoutput
  Load only: Regs[MEM/WB.Rw] <= MEM/WB.LMD

Pipelining Performance (1/2)
Pipelining increases throughput; it does not reduce the execution time of an individual instruction.
- In fact, it slightly increases the execution time of an instruction due to overhead in the control of the pipeline.
- The practical depth of a pipeline is limited by this increasing execution time.
Pipeline overhead:
- Unbalanced pipeline stages;
- Pipeline stage overhead;
- Pipeline register delay;
- Clock skew.

Processor Performance
CPU Time = (Instructions / Program) × (Cycles / Instruction) × (Time / Cycle)
- Instructions per program depends on source code, compiler technology, and the ISA.
- Cycles per instruction (CPI) depends on the ISA and the microarchitecture.
- Time per cycle depends on the microarchitecture and the underlying technology.

CPI for Different Instructions
Example: three instructions take 7, 5, and 10 cycles, respectively.
Total clock cycles = 7 + 5 + 10 = 22
Total instructions = 3
CPI = 22/3 = 7.33
CPI is always an average over a large number of instructions.
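The averaging above is worth internalizing: CPI is total cycles over total instructions, not an average of per-class CPIs weighted arbitrarily. A one-liner check:

```python
# CPI as an average over executed instructions, as in the slide.
cycles = [7, 5, 10]                  # cycles taken by three instructions
cpi = sum(cycles) / len(cycles)      # 22 / 3
print(round(cpi, 2))                 # 7.33
```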

Pipeline Performance (2/2)
Example 1 (p. C-10): Consider the unpipelined processor of the previous section. Assume it has a 1 ns clock cycle, uses 4 cycles for ALU operations and branches and 5 cycles for memory operations, and that the relative frequencies of these operations are 40%, 20%, and 40%, respectively. Suppose that, due to clock skew and setup, pipelining adds 0.2 ns of overhead to the clock. Ignoring any latency impact, how much speedup in the instruction execution rate will we gain from a pipeline?
Answer: The average instruction execution time on the unpipelined processor is
Average instruction execution time = Clock cycle × Average CPI = 1 ns × ((40% + 20%) × 4 + 40% × 5) = 1 ns × 4.4 = 4.4 ns
In the pipeline, the clock must run at the speed of the slowest stage plus overhead, which is 1 + 0.2 = 1.2 ns. Thus
Speedup from pipelining = Average instruction time unpipelined / Average instruction time pipelined = 4.4 ns / 1.2 ns ≈ 3.7 times
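Example 1's arithmetic can be replayed in a few lines (same numbers as the slide, just spelled out):

```python
# Example 1 (Appendix C, p. C-10): unpipelined 1 ns clock; ALU ops
# and branches take 4 cycles (40% + 20% of the mix), memory ops take
# 5 cycles (40%); pipelining adds 0.2 ns of clock overhead.
clock = 1.0                                                # ns
avg_unpipelined = clock * ((0.40 + 0.20) * 4 + 0.40 * 5)   # 4.4 ns
pipelined_cycle = clock + 0.2                              # slowest stage + overhead
speedup = avg_unpipelined / pipelined_cycle
print(round(speedup, 1))                                   # 3.7
```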

Performance with Pipeline Stalls (1/2)
Speedup from pipelining = Average instruction time unpipelined / Average instruction time pipelined
= (CPI unpipelined × Clock cycle unpipelined) / (CPI pipelined × Clock cycle pipelined)
CPI pipelined = Ideal CPI + Pipeline stall clock cycles per instruction = 1 + Pipeline stall clock cycles per instruction

Performance with Pipeline Stalls (2/2)
Speedup from pipelining = (CPI unpipelined / CPI pipelined) × (Clock cycle unpipelined / Clock cycle pipelined)
= (1 / (1 + Pipeline stall cycles per instruction)) × (Clock cycle unpipelined / Clock cycle pipelined)
If the stages are perfectly balanced, Clock cycle unpipelined / Clock cycle pipelined = Pipeline depth, so
Speedup from pipelining = Pipeline depth / (1 + Pipeline stall cycles per instruction)
Pipelining speedup is proportional to the pipeline depth and to 1/(1 + stall cycles).
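The final formula is easy to sanity-check numerically; this tiny helper (an illustration, not from the slides) shows how quickly stalls erode the ideal speedup:

```python
# Idealized speedup formula derived above:
# speedup = pipeline depth / (1 + stall cycles per instruction).
def pipeline_speedup(depth, stalls_per_instr):
    return depth / (1 + stalls_per_instr)

assert pipeline_speedup(5, 0.0) == 5.0              # ideal case: full depth
assert round(pipeline_speedup(5, 0.25), 2) == 4.0   # 0.25 stalls/instr already costs 20%
```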

Pipeline Hazards
Hazards prevent the next instruction in the instruction stream from executing in its designated clock cycle:
- Structural hazards: resource conflicts, e.g., two instructions needing the same unit.
- Data hazards: an instruction depends on the result of a previous instruction.
- Control hazards: arise from the pipelining of branches and other instructions that change the PC.
Hazards can make it necessary to stall the pipeline, and stalls reduce pipeline performance.

Structural Hazards
Occur when some combination of instructions cannot be accommodated because of a resource conflict (the remedies are pipelining of functional units and duplication of resources):
- some functional unit is not fully pipelined, or
- some resource is not sufficiently duplicated.

One Memory Port / Structural Hazard
[Pipeline diagram, cycles 1-7: Load followed by Instr 1-4; in cycle 4 the Load's MEM access and Instr 3's IF both need the single memory port.]

One Memory Port / Structural Hazard
[Pipeline diagram: Instr 3 is stalled for one cycle; the bubble propagates down the pipeline.]
How do you bubble the pipe?

Performance with a Structural Hazard
Example 2 (p. C-14): Let's see how much the load structural hazard might cost. Suppose data references constitute 40% of the mix, and that the ideal CPI of the pipelined processor, ignoring the structural hazard, is 1. Assume the processor with the structural hazard has a clock rate 1.05 times higher than the clock rate of the processor without the hazard. Disregarding any other performance losses, is the pipeline with or without the structural hazard faster, and by how much?
Answer:
Average instruction time (ideal) = CPI × Clock cycle time = 1 × Clock cycle time(ideal)
Average instruction time (with hazard) = (1 + 0.4 × 1) × (Clock cycle time(ideal) / 1.05) ≈ 1.3 × Clock cycle time(ideal)
So the processor without the structural hazard is about 1.3 times faster.
How do we solve structural hazards? (Next slide)
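Example 2's comparison, replayed numerically (same assumptions as the slide):

```python
# Example 2 (p. C-14): data references are 40% of the mix, the
# structural hazard costs 1 stall cycle each, and the conflicted
# design compensates with a 1.05x faster clock.
ideal_cycle = 1.0                                    # arbitrary time units
with_hazard = (1 + 0.4 * 1) * (ideal_cycle / 1.05)   # CPI 1.4, shorter cycle
without_hazard = 1 * ideal_cycle                     # CPI 1, ideal cycle
slowdown = with_hazard / without_hazard
print(round(slowdown, 2))  # ~1.33: the hazard-free design still wins
```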

Summary of Structural Hazards
As an alternative to this structural hazard, the designer could provide separate memory access for instructions:
- split the cache into separate instruction and data caches, or
- use a set of buffers, usually called instruction buffers, to hold instructions.
However, this increases cost:
- pipelining functional units or duplicating resources is expensive;
- separate caches require twice the memory bandwidth, and often higher bandwidth at the pins, to support an instruction and a data access every cycle;
- a floating-point multiplier consumes lots of gates.
If the structural hazard is rare, it may not be worth the cost to avoid it.

Data Hazards
Occur when the pipeline changes the order of read/write accesses to operands so that the order differs from the order seen by sequentially executing instructions on an unpipelined processor.
An example of pipelined execution:
DADD R1, R2, R3
DSUB R4, R1, R5
AND R6, R1, R7
OR R8, R1, R9
XOR R10, R1, R11

Data Hazard on R1
[Pipeline diagram (IF ID/RF EX MEM WB): DADD R1,R2,R3 writes R1 in WB, but DSUB R4,R1,R5, AND R6,R1,R7, and OR R8,R1,R9 read R1 in earlier cycles, before the write completes.]

Solution #1: Insert Stalls
[Pipeline diagram: DADD R1,R2,R3 is followed by two stall cycles (bubbles), so DSUB R4,R1,R5 and the following AND, OR, and XOR read R1 only after it is written.]

Three Generic Data Hazards (1/3)
Read After Write (RAW): instruction J tries to read an operand before instruction I writes it.
I: ADD R1,R2,R3
J: SUB R4,R1,R3
Caused by a true dependence (in compiler nomenclature). This hazard results from an actual need for communication.

Three Generic Data Hazards (2/3)
Write After Read (WAR): instruction J writes an operand before instruction I reads it.
I: SUB R4,R1,R3
J: ADD R1,R2,R3
K: MUL R6,R1,R7
Called an anti-dependence by compiler writers; it results from reuse of the name R1.
Can't happen in the MIPS 5-stage pipeline because:
- all instructions take 5 stages,
- reads are always in stage 2, and
- writes are always in stage 5.

Three Generic Data Hazards (3/3)
Write After Write (WAW): instruction J writes an operand before instruction I writes it (making I's result useless).
I: SUB R1,R4,R3
J: ADD R1,R2,R3
K: MUL R6,R1,R7
This hazard also results from reuse of the name R1; it occurs when writes happen in the wrong order.
Can't happen in our basic 5-stage pipeline because all writes are ordered and take place in stage 5.
WAR and WAW hazards occur in more complex pipelines. Note that Read After Read (RAR) is NOT a hazard.

Solution #2: Forwarding (aka Bypassing) to Avoid Data Hazards
[Pipeline diagram: DADD R1,R2,R3's ALU result is fed from the pipeline registers directly back to the ALU inputs of DSUB R4,R1,R5, AND R6,R1,R7, and OR R8,R1,R9, avoiding stalls.]

Another Example of a RAW Data Hazard
[Pipeline diagram, cycles CC1-CC8: sub r2,r1,r3 followed by and r4,r2,r5; or r6,r3,r2; add r7,r2,r2; sw r8,10(r2). The value of r2 is 10 until it becomes 20 during CC5.]
- The result of sub is needed by the and, or, add, and sw instructions.
- Instructions and & or would read the old value of r2 from the register file.
- During CC5, r2 is written (first half of the cycle) and the new value is read (second half), so add reads the new value.

Solution #1: Stalling the Pipeline
- The and instruction cannot read r2 until CC5, so it remains in the IF/ID register until CC5.
- Two bubbles are inserted into ID/EX at the end of CC3 and CC4.
- Bubbles are NOP instructions: they do not modify registers or memory; they delay instruction execution and waste clock cycles.
[Pipeline diagram, cycles CC1-CC8: sub r2,r1,r3; and r4,r2,r5 held for two bubble cycles; or r6,r3,r2; r2 changes from 10 to 20 during CC5.]

Solution #2: Forwarding the Result
- The ALU result is forwarded (fed back) to the ALU input; no bubbles are inserted into the pipeline and no cycles are wasted.
- The ALU result exists in either the EX/MEM or the MEM/WB register.
[Pipeline diagram, cycles CC1-CC8: sub r2,r1,r3 forwards its result to and r4,r2,r5, or r6,r3,r2, add r7,r2,r2, and sw r8,10(r2).]

Double Data Hazard
Consider the sequence:
add r1,r1,r2
sub r1,r1,r3
and r1,r1,r4
Both hazards occur, and we want to use the most recent value:
when executing the AND, forward the result of the SUB, i.e., ForwardA = 01 (from the EX/MEM pipe stage).

Data Hazard Even with Forwarding
[Pipeline diagram: LD R1,0(R2) followed immediately by DSUB R4,R1,R6, AND R6,R1,R7, and OR R8,R1,R9; the load's data arrives too late to forward to DSUB's EX stage.]

Data Hazard Even with Forwarding
[Pipeline diagram: a bubble is inserted so DSUB R4,R1,R6 stalls one cycle; AND R6,R1,R7 and OR R8,R1,R9 follow.]
How is this detected?

Load Delay
Not all RAW data hazards can be forwarded: a load has a delay that cannot be eliminated by forwarding alone.
In the example below:
- the LW instruction does not have the data until the end of CC4,
- but AND wants the data at the beginning of CC4 — not possible.
[Pipeline diagram, cycles CC1-CC8: lw r2,20(r1); and r4,r2,r5; or r6,r3,r2; add r7,r2,r2.]
However, the load can forward its data to the second next instruction.

Stall the Pipeline for One Cycle
- Freeze the PC and the IF/ID register: no new instruction is fetched, and the instruction after the load is stalled.
- Allow the load in the ID/EX register to proceed; introduce a bubble into the ID/EX register.
- The load can then forward its data after stalling the next instruction.
[Pipeline diagram, cycles CC1-CC8: lw r2,20(r1); and r4,r2,r5 stalled one cycle by a bubble; or r6,r3,r2.]
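The load-use stall condition described above can be sketched as a small predicate. This is my illustration of the classic check (field names mirror the pipeline registers, not any particular HDL):

```python
# Load-use hazard check: stall when the instruction now in ID/EX is a
# load (MemRead asserted) whose destination (rt) matches a source
# register of the instruction still in IF/ID.
def must_stall(idex_mem_read, idex_rt, ifid_rs, ifid_rt):
    return idex_mem_read and idex_rt in (ifid_rs, ifid_rt)

# lw r2, 20(r1) followed by and r4, r2, r5 -> stall one cycle.
assert must_stall(True, 2, 2, 5)
# lw followed by an independent instruction -> no stall needed.
assert not must_stall(True, 2, 3, 5)
# A non-load producer can be handled by forwarding instead.
assert not must_stall(False, 2, 2, 5)
```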

Forwarding to Avoid a LW-SW Data Hazard
[Pipeline diagram: DADD R1,R2,R3; LD R4,0(R1); SD R4,12(R1); OR R8,R6,R9; XOR R10,R9,R11 — the loaded value is forwarded from MEM/WB to the store's MEM stage.]

Compiler Scheduling
Compilers can schedule code to avoid load stalls. Consider the following statements:
a = b + c; d = e - f;
Slow code (2 stall cycles):
lw r10, 0(r1)      # r1 = addr b
lw r11, 0(r2)      # r2 = addr c
add r12, r10, r11  # stall
sw r12, 0(r3)      # r3 = addr a
lw r13, 0(r4)      # r4 = addr e
lw r14, 0(r5)      # r5 = addr f
sub r15, r13, r14  # stall
sw r15, 0(r6)      # r6 = addr d
Fast code (no stalls):
lw r10, 0(r1)
lw r11, 0(r2)
lw r13, 0(r4)
lw r14, 0(r5)
add r12, r10, r11
sw r12, 0(r3)
sub r15, r13, r14
sw r15, 0(r6)
The compiler optimizes for performance; the hardware checks for safety.

Hardware Support for Forwarding

Detecting RAW Hazards
Pass register numbers along the pipeline:
- ID/EX.RegisterRs = register number for Rs in ID/EX
- ID/EX.RegisterRt = register number for Rt in ID/EX
- ID/EX.RegisterRd = register number for Rd in ID/EX
The current instruction is in the ID/EX register, the previous instruction is in EX/MEM, and the second previous is in MEM/WB.
RAW data hazards occur when:
1a. EX/MEM.RegisterRd = ID/EX.RegisterRs (forward from the EX/MEM pipeline register)
1b. EX/MEM.RegisterRd = ID/EX.RegisterRt (forward from the EX/MEM pipeline register)
2a. MEM/WB.RegisterRd = ID/EX.RegisterRs (forward from the MEM/WB pipeline register)
2b. MEM/WB.RegisterRd = ID/EX.RegisterRt (forward from the MEM/WB pipeline register)

Detecting the Need to Forward
But only if the forwarding instruction will write to a register:
- EX/MEM.RegWrite, MEM/WB.RegWrite
And only if Rd for that instruction is not R0:
- EX/MEM.RegisterRd ≠ 0
- MEM/WB.RegisterRd ≠ 0

Forwarding Conditions
Detecting a RAW hazard with the previous instruction:
- if (EX/MEM.RegWrite and (EX/MEM.RegisterRd ≠ 0) and (EX/MEM.RegisterRd = ID/EX.RegisterRs)) then ForwardA = 01 (forward from the EX/MEM pipe stage)
- if (EX/MEM.RegWrite and (EX/MEM.RegisterRd ≠ 0) and (EX/MEM.RegisterRd = ID/EX.RegisterRt)) then ForwardB = 01 (forward from the EX/MEM pipe stage)
Detecting a RAW hazard with the second previous instruction:
- if (MEM/WB.RegWrite and (MEM/WB.RegisterRd ≠ 0) and (MEM/WB.RegisterRd = ID/EX.RegisterRs)) then ForwardA = 10 (forward from the MEM/WB pipe stage)
- if (MEM/WB.RegWrite and (MEM/WB.RegisterRd ≠ 0) and (MEM/WB.RegisterRd = ID/EX.RegisterRt)) then ForwardB = 10 (forward from the MEM/WB pipe stage)
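These conditions translate directly into a small function. The sketch below (my illustration; field names follow the slide's pipeline-register fields) also shows why the EX/MEM check must win over MEM/WB in the double-hazard case:

```python
# Forwarding-unit sketch. Returns (ForwardA, ForwardB) mux selects:
# 0b00 = value from the register file, 0b01 = EX/MEM ALU result,
# 0b10 = MEM/WB result.
def forward(exmem_regwrite, exmem_rd, memwb_regwrite, memwb_rd,
            idex_rs, idex_rt):
    fwd_a = fwd_b = 0b00
    # Check the second-previous instruction (MEM/WB) first so that the
    # more recent EX/MEM result can override it below.
    if memwb_regwrite and memwb_rd != 0:
        if memwb_rd == idex_rs: fwd_a = 0b10
        if memwb_rd == idex_rt: fwd_b = 0b10
    if exmem_regwrite and exmem_rd != 0:
        if exmem_rd == idex_rs: fwd_a = 0b01
        if exmem_rd == idex_rt: fwd_b = 0b01
    return fwd_a, fwd_b

# Double data hazard: add r1,r1,r2 ; sub r1,r1,r3 ; and r1,r1,r4.
# When AND is in EX, both ADD (MEM/WB) and SUB (EX/MEM) wrote r1;
# the most recent value, SUB's, must be chosen: ForwardA = 01.
assert forward(True, 1, True, 1, idex_rs=1, idex_rt=4) == (0b01, 0b00)
# Writes to R0 never forward.
assert forward(True, 0, False, 0, idex_rs=0, idex_rt=0) == (0b00, 0b00)
```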

Control Hazard on Branches: Three-Stage Stall
10: BEQ R1,R3,36
14: AND R2,R3,R5
18: OR R6,R1,R7
22: ADD R8,R1,R9
36: XOR R10,R1,R11
What do you do with the 3 instructions in between? How do you do it? Where is the commit?

Branch/Control Hazards
Branch instructions can cause great performance loss. A branch instruction needs two things:
- Branch result: taken or not taken
- Branch target:
  - PC + 4 if the branch is NOT taken
  - PC + 4 + 4×imm if the branch is taken
For our pipeline: a 3-cycle branch delay
- The PC is updated 3 cycles after fetching the branch instruction.
- The branch target address is calculated in the EX stage.
- The branch result is also computed in the EX stage.
- What to do with the next 3 instructions after the branch?

Branch Stall Impact
If CPI = 1 without branch stalls and the branch frequency is 30%, then stalling 3 cycles per branch gives a new CPI = 1 + 0.3 × 3 = 1.9.
Two-part solution:
- determine whether the branch is taken sooner, and
- compute the taken-branch address earlier.
MIPS solution:
- move the branch test to the ID stage (second stage);
- add an adder to calculate the new PC in the ID stage;
- the branch delay is reduced from 3 cycles to just 1.
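The CPI arithmetic above, including the payoff of the MIPS fix, in two lines:

```python
# CPI impact of branch stalls (slide's numbers: base CPI 1, 30% branches).
base_cpi, branch_freq = 1.0, 0.30
cpi_3_cycle = base_cpi + branch_freq * 3   # 3-cycle penalty -> 1.9
cpi_1_cycle = base_cpi + branch_freq * 1   # after moving the test to ID -> 1.3
print(round(cpi_3_cycle, 1), round(cpi_1_cycle, 1))
```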

Pipelined MIPS Datapath
[Datapath figure: the branch adder and zero test are moved into the ID stage, so the next PC is selected before EX; pipeline registers IF/ID, ID/EX, EX/MEM, and MEM/WB as before.]

Four Branch Hazard Alternatives
#1: Stall until the branch direction is clear.
#2: Predict branch not taken:
- execute successor instructions in sequence, and squash instructions in the pipeline if the branch is actually taken;
- advantage: late pipeline state update;
- 47% of MIPS branches are not taken on average;
- PC+4 is already calculated, so use it to fetch the next instruction.
#3: Predict branch taken:
- 53% of MIPS branches are taken on average;
- but the branch target address is not yet calculated in MIPS, so MIPS still incurs a 1-cycle branch penalty;
- other machines: branch target known before outcome.

Four Branch Hazard Alternatives
#4: Delayed branch: define the branch to take place AFTER a following instruction:
branch instruction
sequential successor 1
sequential successor 2
...
sequential successor n
branch target if taken
A 1-slot delay allows a proper decision and branch target address in a 5-stage pipeline; MIPS uses this.

Scheduling Branch Delay Slots
A. From before the branch:
add $1,$2,$3
if $2=0 then (delay slot)
becomes:
if $2=0 then
add $1,$2,$3 (in delay slot)
B. From the branch target:
sub $4,$5,$6
...
add $1,$2,$3
if $1=0 then (delay slot)
becomes:
add $1,$2,$3
if $1=0 then
sub $4,$5,$6 (in delay slot)
C. From fall through:
add $1,$2,$3
if $1=0 then (delay slot)
sub $4,$5,$6
becomes:
add $1,$2,$3
if $1=0 then
sub $4,$5,$6 (in delay slot)
- A is the best choice: it fills the delay slot and reduces instruction count (IC).
- In B, the sub instruction may need to be copied, increasing IC.
- In B and C, it must be okay to execute sub when the branch fails.

Delayed Branch
Compiler effectiveness for a single branch delay slot:
- fills about 60% of branch delay slots;
- about 80% of instructions executed in branch delay slots are useful in computation;
- so about 50% (60% × 80%) of slots are usefully filled.
Delayed branch downside: as processors go to deeper pipelines and multiple issue, the branch delay grows and needs more than one delay slot.
- Delayed branching has lost popularity compared to more expensive but more flexible dynamic approaches.
- Growth in available transistors has made dynamic approaches relatively cheaper.

Evaluating Branch Alternatives
The effective pipeline speedup with branch penalties, assuming an ideal CPI of 1, is
Pipeline speedup = Pipeline depth / (1 + Pipeline stall cycles from branches)
Because
Pipeline stall cycles from branches = Branch frequency × Branch penalty
we obtain
Pipeline speedup = Pipeline depth / (1 + Branch frequency × Branch penalty)

Performance on Control Hazards (1/2)
Example 3 (p. A-25): For a deeper pipeline, such as that in a MIPS R4000, it takes at least three pipeline stages before the branch-target address is known and an additional cycle before the branch condition is evaluated, assuming no stalls on the registers in the conditional comparison. A three-stage delay leads to the branch penalties for the three simplest prediction schemes listed in Figure A.15. Find the effective addition to the CPI arising from branches for this pipeline, assuming the following frequencies:
Unconditional branch: 4%
Conditional branch, untaken: 6%
Conditional branch, taken: 10%
Figure A.15:
Branch scheme     | Penalty unconditional | Penalty untaken | Penalty taken
Flush pipeline    | 2                     | 3               | 3
Predicted taken   | 2                     | 3               | 2
Predicted untaken | 2                     | 0               | 3

Performance on Control Hazards (2/2)
Answer: additions to the CPI from branch costs (frequency × penalty):
Branch scheme     | Unconditional (4%) | Untaken conditional (6%) | Taken conditional (10%) | All branches (20%)
Stall pipeline    | 0.08               | 0.18                     | 0.30                    | 0.56
Predicted taken   | 0.08               | 0.18                     | 0.20                    | 0.46
Predicted untaken | 0.08               | 0.00                     | 0.30                    | 0.38
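The whole table reduces to one sum per scheme — frequency times penalty over the three branch classes. A short script recomputes it from the Figure A.15 inputs:

```python
# Example 3: CPI added by branches = sum over branch classes of
# (frequency x penalty), using the penalties from Figure A.15.
freqs = {"uncond": 0.04, "cond_untaken": 0.06, "cond_taken": 0.10}
penalties = {
    "flush pipeline":    {"uncond": 2, "cond_untaken": 3, "cond_taken": 3},
    "predicted taken":   {"uncond": 2, "cond_untaken": 3, "cond_taken": 2},
    "predicted untaken": {"uncond": 2, "cond_untaken": 0, "cond_taken": 3},
}
for scheme, pen in penalties.items():
    added = sum(freqs[k] * pen[k] for k in freqs)
    print(scheme, round(added, 2))   # 0.56, 0.46, 0.38
```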

Branch Prediction
Longer pipelines can't readily determine the branch outcome early, so the stall penalty becomes unacceptable. Instead, predict the outcome of the branch and stall only when the prediction is wrong.
In the MIPS pipeline: predict branches not taken, and fetch the instruction after the branch with no delay.

MIPS with Predict Not Taken
[Pipeline diagrams: when the prediction is correct, execution continues with no penalty; when incorrect, the wrongly fetched instructions are squashed.]

More-Realistic Branch Prediction
Static branch prediction:
- based on typical branch behavior;
- example: loop and if-statement branches — predict backward branches taken, forward branches not taken.
Dynamic branch prediction:
- hardware measures actual branch behavior, e.g., records the recent history of each branch (BPB or BHT);
- assumes future behavior will continue the trend;
- when wrong, stall while re-fetching, and update the history.

Static Branch Prediction

Dynamic Branch Prediction
1-bit prediction scheme:
- the low-order bits of the branch address index a one-bit flag recording whether the branch was last taken or not taken;
- simple.
2-bit prediction:
- the prediction must miss twice before it changes.
