
CPU performance equation: T = I x CPI x C. Both effective CPI and clock cycle time C are heavily influenced by CPU design. For a single-cycle CPU: CPI = 1 (good), but the cycle time is long (bad). On the other hand, for a multi-cycle CPU: CPI increases to 3-5 (bad), but the cycle is shorter (good). How do we lower effective CPI without increasing C? Solution: CPU pipelining. #1 Lec # 7 Winter 2011 1-19-2012
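As a quick numeric sketch of this trade-off (illustrative Python; the 8 ns and 2 ns cycle times and the 4.1 effective CPI are the figures used later in these slides):

```python
def exec_time_ns(instr_count, cpi, cycle_ns):
    # CPU performance equation: T = I x CPI x C
    return instr_count * cpi * cycle_ns

# Single-cycle CPU: CPI = 1, but a long 8 ns cycle
t_single = exec_time_ns(1000, 1, 8)    # 8000 ns
# Multi-cycle CPU: effective CPI ~4.1, but a short 2 ns cycle
t_multi = exec_time_ns(1000, 4.1, 2)   # 8200 ns
```

Neither design wins decisively: shrinking C raised CPI. Pipelining aims to get both CPI near 1 and the short cycle.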

Generic CPU Processing Steps (implied by the Von Neumann computing model):
Fetch: obtain the instruction from program storage (memory); the Program Counter (PC) points to the next instruction to be processed.
Decode: determine the required actions and the instruction size.
Operand Fetch: locate and obtain the operand data.
Execute: compute the result value or status.
Result Store: deposit the results in storage for later use.
Next Instruction: determine the successor instruction (i.e., update the PC to fetch the next instruction to be processed).
T = I x CPI x C. Major CPU performance limitation: the Von Neumann computing model implies sequential execution, one instruction at a time.

MIPS CPU Design: What do we have so far? Multi-Cycle Datapath (Textbook Version). CPI: R-Type = 4, Load = 5, Store = 4, Jump/Branch = 3. Only one instruction is being processed in the datapath at a time: processing an instruction starts only when the previous instruction is completed. One ALU, one memory. How do we lower CPI further without increasing the CPU clock cycle time, C? T = I x CPI x C

Operations (Dependent RTN) for Each Cycle, multi-cycle datapath (textbook version):
IF (Instruction Fetch), all instruction types: IR <- Mem[PC]; PC <- PC + 4
ID (Instruction Decode), all instruction types: A <- R[rs]; B <- R[rt]; ALUout <- PC + (SignExt(imm16) x 4)
EX (Execution): R-Type: ALUout <- A funct B. Load/Store: ALUout <- A + SignExt(imm16). Branch: Zero <- A - B; if Zero: PC <- ALUout. Jump: PC <- Jump Address
MEM (Memory): Load: MDR <- Mem[ALUout]. Store: Mem[ALUout] <- B
WB (Write Back): R-Type: R[rd] <- ALUout. Load: R[rt] <- MDR
CPI = 3-5, C = 2 ns. T = I x CPI x C. Fetch (IF) and Decode (ID) cycles are common to all instructions. Reducing the CPI by combining cycles increases the CPU clock cycle.

FSM State Transition Diagram (from the textbook): the multi-cycle control finite state machine, with the same per-cycle register transfers as the preceding table (IF/ID common to all instructions, then EX/MEM/WB branching by instruction type). CPI = 3-5; T = I x CPI x C. 3rd Edition Figure 5.37, page 338 (see handout). Reducing the CPI by combining cycles increases the CPU clock cycle.

Multi-cycle Datapath (Our Version): Next PC (npc_sel) -> Instruction Fetch (PC, IR) -> Operand Fetch (Register File) -> ALU (ExtOp, ALUSrc, ALUctr) -> Memory Access (MemRd, MemWr) -> Result Store (MemtoReg, RegDst, RegWr). Three ALUs, two memories. Registers added:
IR: Instruction Register.
A, B: two registers to hold the operands read from the register file.
R (or ALUOut): holds the output of the ALU.
M: Memory Data Register (MDR), holds the data read from data memory.

Operations (Dependent RTN) for Each Cycle, multi-cycle datapath (our version):
IF (Instruction Fetch), all instruction types: IR <- Mem[PC]; PC <- PC + 4
ID (Instruction Decode), all instruction types: A <- R[rs]; B <- R[rt]
EX (Execution): R-Type: R <- A funct B. Logic Immediate: R <- A OR ZeroExt(imm16). Load/Store: R <- A + SignExt(imm16). Branch: Zero <- A - B; if Zero = 1: PC <- PC + (SignExt(imm16) x 4)
MEM (Memory): Load: M <- Mem[R]. Store: Mem[R] <- B
WB (Write Back): R-Type: R[rd] <- R. Logic Immediate: R[rt] <- R. Load: R[rt] <- M
CPI = 3-5, C = 2 ns. Fetch (IF) and Decode (ID) cycles are common to all instructions.

Multi-cycle Datapath CPI. R-Type/Immediate: require four cycles, CPI = 4 (IF, ID, EX, WB). Loads: require five cycles, CPI = 5 (IF, ID, EX, MEM, WB). Stores: require four cycles, CPI = 4 (IF, ID, EX, MEM). Branches: require three cycles, CPI = 3 (IF, ID, EX). Average program CPI: 3 <= CPI <= 5, depending on the program profile (instruction mix). Non-overlapping processing: processing an instruction starts only when the previous instruction is completed.

MIPS Multi-cycle Datapath Performance Evaluation. What is the average CPI? The state diagram gives the CPI for each instruction type; the workload (program) below gives the frequency of each type.
Type         CPI_i   Frequency   CPI_i x freq_i
Arith/Logic    4       40%          1.6
Load           5       30%          1.5
Store          4       10%          0.4
Branch         3       20%          0.6
Average CPI = 4.1. Better than CPI = 5, which would apply if all instructions took the same number of clock cycles (5).
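The weighted-average CPI in the table can be sketched as a short Python calculation (illustrative, using the slide's instruction mix):

```python
# (CPI_i, frequency) per instruction type, from the workload above
mix = {
    "Arith/Logic": (4, 0.40),
    "Load":        (5, 0.30),
    "Store":       (4, 0.10),
    "Branch":      (3, 0.20),
}

# Average CPI = sum over types of CPI_i x freq_i
average_cpi = sum(cpi * freq for cpi, freq in mix.values())
# 1.6 + 1.5 + 0.4 + 0.6 = 4.1
```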

CPU Pipelining. Pipelining is a CPU implementation technique in which multiple operations on a number of instructions are overlapped. For example: the next instruction is fetched in the next cycle without waiting for the current instruction to complete. An instruction execution pipeline involves a number of steps, where each step completes a part of an instruction. Each step is called a pipeline stage (or pipeline segment). The stages are connected in a linear fashion, one stage to the next, to form the pipeline (or pipelined CPU datapath): instructions enter at one end, progress through the stages, and exit at the other end. The time to move an instruction one step down the pipeline is equal to the machine (CPU) cycle and is determined by the stage with the longest processing delay. Pipelining increases the CPU instruction throughput: the number of instructions completed per cycle. Pipeline throughput: the instruction completion rate of the pipeline, determined by how often an instruction exits the pipeline. Under ideal conditions (no stall cycles), instruction throughput is one instruction per machine cycle, i.e. ideal effective CPI = 1 (or ideal IPC = 1). Pipelining does not reduce the execution time of an individual instruction: the time needed to complete all processing steps of an instruction (also called instruction completion latency). Pipelining may actually increase individual instruction latency; minimum instruction latency = n cycles, where n is the number of pipeline stages (5 in this lecture's pipeline). T = I x CPI x C. 4th Edition: Chapter 4.5-4.8; 3rd Edition: Chapter 6.1-6.6.

Single Cycle vs. Pipelining (timing diagram for three lw instructions: lw $1, 100($0); lw $2, 200($0); lw $3, 300($0)).
Single-cycle implementation: each lw performs instruction fetch, register read, ALU, data access, and register write in one long cycle; C = 8 ns. Time for 1000 instructions = 8 x 1000 = 8000 ns.
5-stage pipeline: a new instruction is fetched every 2 ns; C = 2 ns, with 4 pipeline fill cycles. Time for 1000 instructions = time to fill pipeline + cycle time x 1000 = 8 + 2 x 1000 = 2008 ns.
Pipelining speedup = 8000/2008 = 3.98.
Assumed datapath/control hardware component delays: memory units: 2 ns; ALU and adders: 2 ns; register file: 1 ns; control unit: < 1 ns.

CPU Pipelining: Design Goals. The length of the machine clock cycle is determined by the time required for the slowest pipeline stage, so an important pipeline design consideration is to balance the length of each pipeline stage. If all stages are perfectly balanced, then the effective time per instruction on the pipelined machine (assuming ideal conditions with no stalls) is: time per instruction on the unpipelined machine / number of pipeline stages. Under these ideal conditions: speedup from pipelining = the number of pipeline stages = n. Goal: one instruction is completed every cycle (CPI = 1), while keeping the clock cycle C short. T = I x CPI x C.

From the MIPS multi-cycle datapath: 5 steps, or 5 cycles, or 5 stages (n = 5). The five stages of a Load (cycles 1-5: IF, ID, EX, MEM, WB):
1. Instruction Fetch (IF): fetch the instruction from the instruction memory, and update the PC (PC <- PC + 4).
2. Instruction Decode (ID): registers fetch and instruction decode.
3. Execute (EX): calculate the memory address.
4. Memory (MEM): read the data from the data memory.
5. Write Back (WB): write the data back to the register file.
n = number of pipeline stages (5 in this case). The number of pipeline stages is determined by the instruction that needs the largest number of cycles.

Ideal Pipelined Processing Timing Representation (no stall cycles, CPI = 1 ideal), n = 5 stage pipeline, in program order:
Clock cycle:  1    2    3    4    5    6    7    8    9
I             IF   ID   EX   MEM  WB
I+1                IF   ID   EX   MEM  WB
I+2                     IF   ID   EX   MEM  WB
I+3                          IF   ID   EX   MEM  WB
I+4                               IF   ID   EX   MEM  WB
Pipeline stages: IF = Instruction Fetch, ID = Instruction Decode, EX = Execution, MEM = Memory Access, WB = Write Back. Any individual instruction goes through all five pipeline stages, taking 5 cycles to complete; thus instruction latency = 5 cycles. Pipeline fill cycles (no instructions completed yet): number of fill cycles = number of pipeline stages - 1 = 5 - 1 = 4; the first instruction, I, completes in cycle 5 and I+4 completes in cycle 9. After the fill cycles, one instruction is completed per cycle, giving the ideal pipeline CPI = 1 (ignoring fill cycles), or instructions per cycle = IPC = 1/CPI = 1.
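The timing table above can be reproduced with a small sketch (illustrative Python; `pipeline_timeline` is a hypothetical helper, not from the slides):

```python
STAGES = ("IF", "ID", "EX", "MEM", "WB")

def pipeline_timeline(n_instructions):
    """Cycle in which each instruction occupies each stage of an ideal
    (no-stall) 5-stage pipeline; instruction i enters IF in cycle i + 1."""
    return [{stage: i + 1 + k for k, stage in enumerate(STAGES)}
            for i in range(n_instructions)]

t = pipeline_timeline(5)
# Instruction I completes WB in cycle 5 and I+4 in cycle 9, so the fill
# is 5 - 1 = 4 cycles and one instruction completes per cycle afterward.
```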

Ideal Pipelined Processing Representation (no stall cycles), 5-stage pipeline (IF, ID, EX, MEM, WB), program flow I1 through I6 over clock cycles 1-10: each instruction passes through the five stages in consecutive cycles, starting one cycle after the previous instruction. Any individual instruction goes through all five pipeline stages, taking 5 cycles to complete; thus instruction latency = 5 cycles. Here n = 5 pipeline stages; number of pipeline fill cycles = number of stages - 1 = 5 - 1 = 4. After the fill cycles: one instruction is completed every cycle (effective CPI = 1, ideally).

Single Cycle, Multi-Cycle, vs. Pipelined CPU (timing diagrams).
Single-cycle implementation: every instruction takes one 8 ns cycle; a Store finishes early, wasting 2 ns of the cycle.
Multiple-cycle implementation (cycles 1-10): Load takes 5 short cycles (IF, ID, EX, MEM, WB), then Store takes 4 (IF, ID, EX, MEM), then an R-type begins its IF.
Pipelined implementation: Load (IF ID EX MEM WB), Store (IF ID EX MEM WB), and R-type (IF ID EX MEM WB) overlap, each starting one cycle after the previous, with 4 pipeline fill cycles.
Assumed datapath/control hardware component delays: memory units: 2 ns; ALU and adders: 2 ns; register file: 1 ns; control unit: < 1 ns.

Single Cycle, Multi-Cycle, Pipeline: Performance Comparison Example. For 1000 instructions, execution time (T = I x CPI x C):
Single-cycle machine: CPI = 1, C = 8 ns: 8 ns/cycle x 1 CPI x 1000 instructions = 8000 ns.
Multi-cycle machine: 3 <= CPI <= 5 (depends on the program instruction mix), C = 2 ns: 2 ns/cycle x 4.6 CPI (due to instruction mix) x 1000 instructions = 9200 ns.
Ideal pipelined machine, 5 stages: effective CPI = 1, C = 2 ns: 2 ns/cycle x (1 CPI x 1000 instructions + 4 fill cycles) = 2008 ns.
Speedup = 8000/2008 = 3.98 times faster than the single-cycle CPU. Speedup = 9200/2008 = 4.58 times faster than the multi-cycle CPU.
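The three execution times and both speedups can be checked with a short sketch (illustrative Python; the fill-cycle handling mirrors the slide's arithmetic):

```python
def total_time_ns(instr, cpi, cycle_ns, fill_cycles=0):
    # T = I x CPI x C, plus the pipeline fill cycles where applicable
    return (instr * cpi + fill_cycles) * cycle_ns

single    = total_time_ns(1000, 1.0, 8)                  # 8000 ns
multi     = total_time_ns(1000, 4.6, 2)                  # 9200 ns
pipelined = total_time_ns(1000, 1.0, 2, fill_cycles=4)   # 2008 ns

speedup_vs_single = single / pipelined   # ~3.98
speedup_vs_multi  = multi / pipelined    # ~4.58
```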

Basic Pipelined CPU Design Steps:
1. Analyze instruction set operations using independent RTN => datapath requirements.
2. Select the required datapath components and connections.
3. Assemble an initial datapath meeting the ISA requirements.
4. Identify the pipeline stages based on operation, balancing stage delays, and ensuring that no hardware conflicts exist when common hardware is used by two or more stages simultaneously in the same cycle.
5. Divide the datapath into the stages identified above by adding buffers (i.e. registers) between the stages, of sufficient width to hold: instruction fields, the remaining control lines needed for the remaining pipeline stages, and all results produced by a stage along with any unused results of previous stages.
6. Analyze the implementation of each instruction to determine the setting of control points that effects the register transfer, taking pipeline hazard conditions into account (more on this a bit later).
7. Assemble the control logic.

MIPS Pipeline Stage Identification, 5-stage pipeline (stages 1-5):
IF: instruction fetch. ID: instruction decode / register file read. EX: execute / address calculation. MEM: memory access. WB: write back.
The datapath (PC, instruction memory, registers, sign extend, main ALU with Zero output, data memory, the PC+4 adder and branch-target adder, shift-left-2, and the multiplexers) is divided across these five stages. What is needed to divide the datapath into pipeline stages? Start with the initial datapath with 3 ALUs and 2 memories.

MIPS: An Initial Pipelined Datapath. Buffers (registers) between pipeline stages are added: IF/ID, ID/EX, EX/MEM, MEM/WB. Everything an instruction needs for the remaining processing stages must be saved in buffers so that it travels with the instruction from one CPU pipeline stage to the next. Stages 1-5: IF (Instruction Fetch), ID (Instruction Decode), EX (Execution), MEM (Memory), WB (Write Back); n = 5 pipeline stages. This design has a problem: can you find it, even if there are no dependencies? What instructions can we execute to manifest the problem? Hint: any values an instruction requires must travel with it as it goes through the pipeline stages, including instruction fields still needed in later stages.

A Corrected Pipelined Datapath: the destination register number (rt/rd) is now carried along through the pipeline registers so that it is still available when the instruction reaches the WB stage. Classic five-stage integer MIPS pipeline: IF (Instruction Fetch), ID (Instruction Decode), EX (Execution), MEM (Memory), WB (Write Back); stages 1-5, n = 5 pipeline stages. 4th Edition Figure 4.41, page 355; 3rd Edition Figure 6.17, page 395.

Read/Write Access to the Register Bank. Two instructions need to access the register bank in the same cycle: one instruction reads operands in its instruction decode (ID) cycle, while the other writes to a destination register in its write back (WB) cycle. This represents a potential hardware conflict over access to the register bank. Solution: coordinate register reads and writes within the same cycle as follows: operand register reads in the decode (ID) cycle occur in the second half of the cycle, and register writes in the write back (WB) cycle occur in the first half of the cycle (indicated in the slide's diagram by dark shading of the corresponding half-cycle). Example (clock cycles CC 1 - CC 6): lw $10, 20($1) runs IF ID EX MEM WB, followed by sub $11, $2, $3 one cycle behind.

Operation of the ideal integer in-order 5-stage pipeline: instructions 1-5 each proceed IF ID EX MEM WB, one starting per cycle. Write the destination register in the first half of the WB cycle; read the operand registers in the second half of the ID cycle.

Adding Pipeline Control Points (MIPS Pipeline Version #1: no forwarding, branches resolved in the MEM stage, i.e. stage 4). Control signals added to the pipelined datapath: PCSrc, RegWrite, ALUSrc, ALUOp (with the ALU control unit), RegDst, Branch, MemWrite, MemRead, MemtoReg. Classic five-stage integer MIPS pipeline: IF, ID, EX, MEM, WB (stages 1-5), with IF/ID, ID/EX, EX/MEM, MEM/WB pipeline registers. 4th Edition Figure 4.46, page 359; 3rd Edition Figure 6.22, page 400.

Pipeline Control. Pass the needed control signals along from one stage to the next as the instruction travels through the pipeline, just like the needed data. All control line values for the remaining stages are generated in ID (from the opcode) and latched through the IF/ID, ID/EX, EX/MEM, and MEM/WB registers:
Instruction | EX stage: RegDst, ALUOp1, ALUOp0, ALUSrc | MEM stage: Branch, MemRead, MemWrite | WB stage: RegWrite, MemtoReg
R-format    | 1 1 0 0 | 0 0 0 | 1 0
lw          | 0 0 0 1 | 0 1 0 | 1 1
sw          | X 0 0 1 | 0 0 1 | 0 X
beq         | X 0 1 0 | 1 0 0 | 0 X
(5-stage pipeline: IF, ID, EX, MEM, WB; stages 1-5.)
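The control table can be expressed as a lookup that the ID stage consults (a sketch in Python; the field grouping follows the table above, with X/don't-care encoded as None):

```python
# (EX: RegDst, ALUOp1, ALUOp0, ALUSrc), (MEM: Branch, MemRead, MemWrite),
# (WB: RegWrite, MemtoReg); None = X (don't care)
CONTROL = {
    "R-format": ((1, 1, 0, 0),    (0, 0, 0), (1, 0)),
    "lw":       ((0, 0, 0, 1),    (0, 1, 0), (1, 1)),
    "sw":       ((None, 0, 0, 1), (0, 0, 1), (0, None)),
    "beq":      ((None, 0, 1, 0), (1, 0, 0), (0, None)),
}

def control_for(opcode):
    """All control line values for the remaining stages, generated in ID."""
    ex, mem, wb = CONTROL[opcode]
    return {"EX": ex, "MEM": mem, "WB": wb}
```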

Pipeline Control Signal Generation/Latching/Propagation. The main control generates the control signals during ID. Control signals for EX (ALUSrc, ALUOp, RegDst) are used 1 cycle later; control signals for MEM (MemWr/MemRd, Branch) are used 2 cycles later; control signals for WB (MemtoReg, RegWr) are used 3 cycles later. The EX, MEM, and WB signal groups are therefore latched into the ID/EX register and then propagated through the EX/MEM and MEM/WB registers until the corresponding stage consumes them.

Pipelined Datapath with Control Added (MIPS Pipeline Version #1: no forwarding, branch resolved in the MEM stage). The control unit in ID produces the WB, MEM, and EX control fields, which travel through the ID/EX, EX/MEM, and MEM/WB registers alongside the datapath values (PCSrc, RegWrite, ALUSrc, ALUOp, RegDst, Branch, MemWrite, MemRead, MemtoReg). The target address of a branch is determined in EX, but the PC is updated in the MEM stage (i.e. the branch is resolved in MEM, stage 4). Classic five-stage integer MIPS pipeline. 4th Edition Figure 4.51, page 362; 3rd Edition Figure 6.27, page 404.

Basic Performance Issues in Pipelining. Pipelining increases the CPU instruction throughput: the number of instructions completed per unit time. Under ideal conditions (i.e. no stall cycles), pipelined CPU instruction throughput is one instruction completed per machine cycle, or CPI = 1 (ideally, ignoring pipeline fill cycles); equivalently, throughput in instructions per cycle is IPC = 1. T = I x CPI x C. Pipelining does not reduce the execution time of an individual instruction: the time needed to complete all processing steps of an instruction (also called instruction completion latency). It usually slightly increases the execution time of individual instructions over unpipelined CPU implementations, due to: the increased control overhead of the pipeline plus the delays of the pipeline stage registers, and the fact that every instruction goes through every stage in the pipeline even if the stage is not needed (e.g. the MEM pipeline stage in the case of R-type instructions). Here n = 5 stages.

Pipelining Performance Example. For an unpipelined multicycle CPU: clock cycle = 10 ns; 4 cycles for ALU operations and branches and 5 cycles for memory operations, with instruction frequencies of 40%, 20%, and 40%, respectively. If pipelining adds 1 ns to the CPU clock cycle (i.e. pipelined C = 11 ns), then the speedup in instruction execution from pipelining is:
Unpipelined average execution time per instruction = clock cycle x average CPI = 10 ns x ((40% + 20%) x 4 + 40% x 5) = 10 ns x 4.4 = 44 ns (CPI = 4.4).
In the pipelined CPU implementation, ideal CPI = 1: pipelined execution time per instruction = clock cycle x CPI = (10 ns + 1 ns) x 1 = 11 ns.
Speedup from pipelining = average time per instruction unpipelined / average time per instruction pipelined = 44 ns / 11 ns = 4 times faster. (In T = I x CPI x C terms: I did not change, C increased slightly, and CPI dropped from 4.4 to 1.)

Pipeline Hazards. Hazards are situations in pipelined CPUs that prevent the next instruction in the instruction stream from executing during its designated clock cycle, possibly resulting in one or more stall (or wait) cycles. Hazards reduce the ideal speedup gained from pipelining (they increase CPI above 1: CPI = 1 + average stalls per instruction). They are classified into three classes, according to which resource the instruction requires for correct execution is not available in the cycle needed:
Structural hazards (a hardware component is not available): arise from hardware resource conflicts when the available hardware cannot support all possible combinations of instructions.
Data hazards (the correct operand value is not ready when needed in EX): arise when an instruction depends on the result of a previous instruction in a way that is exposed by the overlapping of instructions in the pipeline.
Control hazards (the correct PC is not available when needed in IF): arise from the pipelining of conditional branches and other instructions that change the PC.

Performance of Pipelines with Stalls. Hazard conditions in pipelines may make it necessary to stall the pipeline by a number of cycles, degrading performance from the ideal pipelined CPU CPI of 1. Average CPI pipelined = ideal CPI + pipeline stall clock cycles per instruction = 1 + pipeline stall clock cycles per instruction. If pipelining overhead is ignored and we assume that the stages are perfectly balanced, then the speedup from pipelining is given by: Speedup = CPI unpipelined / CPI pipelined = CPI unpipelined / (1 + pipeline stall cycles per instruction). When all instructions in the multicycle CPU take the same number of cycles, equal to the number of pipeline stages, then: Speedup = pipeline depth / (1 + pipeline stall cycles per instruction).

Structural (or Hardware) Hazards. In pipelined machines, overlapped instruction execution requires pipelining of functional units and duplication of resources to allow all possible combinations of instructions in the pipeline (i.e. to prevent hardware structure conflicts). If a resource conflict arises because a hardware resource is required by more than one instruction in a single cycle, and one or more such instructions cannot be accommodated, then a structural hazard has occurred. In other words, a hardware component the instruction requires for correct execution is not available in the cycle needed. For example: when a pipelined machine has a single shared memory for both data and instructions, the pipeline must stall for one cycle on each memory data access.

MIPS with a Memory Unit Structural Hazard: with one shared memory for instructions and data, a load (or store) instruction's MEM-stage data access falls in the same cycle as a later instruction's IF-stage instruction fetch, so the two contend for the single memory. In the slide's diagram, the load (or store) is followed in program order by instructions 1-4, which are assumed to be instructions other than loads/stores.

Resolving a Structural Hazard with Stalling: the instruction whose fetch would coincide with the load/store's data access is delayed by one stall (or wait) cycle. CPI = 1 + stall clock cycles per instruction = 1 + fraction of loads and stores x 1. One shared memory for instructions and data; in the slide's diagram, instructions 1-3 following the load (or store) are assumed to be instructions other than loads/stores.

A Structural Hazard Example. Given that data references (i.e. loads/stores) are 40% for a specific instruction mix or program, and that the ideal pipelined CPI ignoring hazards is equal to 1: a machine with the data-memory-access structural hazard requires a single stall cycle for data references, but has a clock rate 1.05 times higher than the ideal machine. Ignoring other performance losses for this machine: Average instruction time = CPI x clock cycle time = (1 + 0.4 x 1) x (clock cycle time ideal / 1.05) = (1.4 / 1.05) x clock cycle time ideal = 1.3 x clock cycle time ideal. That is, the CPU without the structural hazard is 1.3 times faster. (CPI = 1 + average stalls per instruction.)
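The arithmetic in this example can be sketched as (illustrative Python, with cycle times normalized to the ideal machine):

```python
# Ideal machine: CPI = 1, normalized cycle time 1.0
ideal_time = 1.0 * 1.0

# Hazard machine: 40% data references each cost 1 stall cycle,
# but its clock rate is 1.05x higher (so its cycle time is 1/1.05)
hazard_cpi = 1 + 0.40 * 1           # = 1.4
hazard_time = hazard_cpi * (1.0 / 1.05)

ratio = hazard_time / ideal_time    # ~1.33; the slide rounds this to 1.3
```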

Data Hazards. Data hazards occur when the pipeline changes the order of read/write accesses to instruction operands in such a way that the resulting access order differs from the original sequential instruction operand access order of the unpipelined CPU, resulting in incorrect execution (i.e. the correct operand data is not ready yet when needed in the EX cycle). Data hazards may require one or more instructions to be stalled in the pipeline to ensure correct execution (CPI = 1 + stall clock cycles per instruction). Example:
1  sub $2, $1, $3
2  and $12, $2, $5
3  or  $13, $6, $2
4  add $14, $2, $2
5  sw  $15, 100($2)
All the instructions after the sub use its result data in register $2. As part of pipelining, these instructions are started before sub is completed; due to this data hazard, instructions need to be stalled for correct execution (as shown next). Arrows in the slide represent data dependencies between instructions. Instructions that have no dependencies among them are said to be parallel or independent; a high degree of instruction-level parallelism (ILP) is present in a given code sequence if it has a large number of parallel instructions.

Data Hazards Example

Problem with starting the next instruction before the first is finished: data dependencies that go backward in time create data hazards.

[Pipeline timing diagram, clock cycles CC1-CC9: the five instructions (sub $2,$1,$3; and $12,$2,$5; or $13,$6,$2; add $14,$2,$2; sw $15,100($2)) each pass through IF ID EX MEM WB one cycle apart. Register $2 holds 10 until sub writes 20 during CC5, so instructions reading $2 before CC5 get the stale value.]

Data Hazard Resolution: Stall Cycles

Stall the pipeline by a number of cycles. The control unit must detect the need to insert stall cycles; in this case two stall cycles are needed.

[Pipeline timing diagram, CC1-CC11: sub proceeds normally; two stall cycles are inserted before and's ID so that it reads $2 (value 20) after sub's write-back; the remaining instructions follow one cycle apart.]

CPI = 1 + stall clock cycles per instruction

The above timing is for MIPS Pipeline Version #1.

Data Hazard Resolution/Stall Reduction: Data Forwarding

Observation: why not use the temporary results produced by the ALU or memory directly, instead of waiting for them to be written back to the register bank?

Data forwarding is a hardware-based technique (also called register bypassing or register short-circuiting) that makes use of this observation to eliminate or minimize data hazard stalls. Using forwarding hardware, the result of an instruction (i.e. data) is copied directly (i.e. forwarded) from where it is produced (ALU, memory read port, etc.) to where subsequent instructions need it (ALU input register, memory write port, etc.).

Forwarding in the MIPS Pipeline

The ALU result from the EX/MEM register may be forwarded or fed back to the ALU input latches as needed, instead of using the register operand value read in the ID stage. Similarly, the Data Memory Unit result from the MEM/WB register may be fed back to the ALU input latches as needed.

If the forwarding hardware detects that a previous ALU operation is to write the register corresponding to a source for the current ALU operation, control logic selects the forwarded result as the ALU input rather than the value read from the register file.

[Datapath diagram: forwarding paths added from the EX/MEM and MEM/WB pipeline registers back to the ALU inputs (stages ID, EX, MEM, WB). This diagram shows better forwarding paths than the one in the textbook.]

Data Hazard Resolution: Forwarding

The forwarding unit compares the operand register numbers of the instruction in the EX stage with the destination register numbers of the previous two instructions (now in MEM and WB). If there is a match, one or both operands are obtained from the forwarding paths, bypassing the register file.

[Datapath diagram: 4th Ed. Fig. 4.54 page 368; 3rd Ed. Fig. 6.30 page 409]
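The comparisons the forwarding unit performs can be sketched in a few lines of Python (an illustrative model, not the lecture's hardware; signal names follow the EX/MEM and MEM/WB pipeline registers, and register $0 is never forwarded):

```python
def forward_select(ex_rs, ex_rt, exmem_regwrite, exmem_rd,
                   memwb_regwrite, memwb_rd):
    """Return (forward_a, forward_b) mux selects for the two ALU inputs:
    'EX/MEM' = forward the ALU result, 'MEM/WB' = forward the
    memory/write-back value, 'REG' = use the register-file value."""
    def select(src):
        if exmem_regwrite and exmem_rd != 0 and exmem_rd == src:
            return "EX/MEM"          # most recent producer wins
        if memwb_regwrite and memwb_rd != 0 and memwb_rd == src:
            return "MEM/WB"
        return "REG"
    return select(ex_rs), select(ex_rt)

# and $12, $2, $5 in EX while sub $2, $1, $3 is in MEM: forward $2 from EX/MEM
print(forward_select(ex_rs=2, ex_rt=5, exmem_regwrite=True, exmem_rd=2,
                     memwb_regwrite=False, memwb_rd=0))  # ('EX/MEM', 'REG')
```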

Pipelined Datapath With Forwarding

[Full datapath diagram (IF ID EX MEM WB): Main Control, the ID/EX, EX/MEM and MEM/WB pipeline registers, forwarding muxes at the ALU inputs, and a forwarding unit fed by EX/MEM.RegisterRd and MEM/WB.RegisterRd. 4th Ed. Fig. 4.56 page 370; 3rd Ed. Fig. 6.32 page 411]

The forwarding unit compares the operand registers of the instruction in the EX stage with the destination registers of the previous two instructions in MEM and WB. If there is a match, one or both operands are obtained from the forwarding paths, bypassing the registers.

Data Hazard Example With Forwarding

[Pipeline timing diagram, CC1-CC9, for the same five-instruction sequence (sub, and, or, add, sw). Register $2 changes from 10 to 20 during CC5; the EX/MEM register holds 20 in CC4 and the MEM/WB register holds 20 in CC5, so the dependent instructions obtain the correct value through the forwarding paths with no stalls.]

What register numbers are being compared by the forwarding unit during cycle 5? What about in cycle 6?

A Data Hazard Requiring a Stall

A load followed immediately by an R-type instruction that uses the loaded value (or any other type of instruction that needs the loaded value in its EX stage):

    1  lw  $2, 20($1)
    2  and $4, $2, $5
    3  or  $8, $2, $6
    4  add $9, $4, $2
    5  slt $1, $6, $7

Even with forwarding in place, a stall cycle is needed (shown next). This condition must be detected by hardware.

A Data Hazard Requiring a Stall

A load followed immediately by an R-type instruction that uses the loaded value results in a single stall cycle even with forwarding: first stall one cycle, then forward the data of the lw instruction to the and instruction.

[Pipeline timing diagram, CC1-CC10: a bubble is inserted so that and, and all following instructions, are delayed one cycle before lw's data is forwarded.]

CPI = 1 + stall clock cycles per instruction

We can stall the pipeline by keeping all instructions following the lw instruction in the same pipeline stage for one cycle. What is the hazard detection unit (shown on the next slide) doing during cycle 3?

Datapath With Hazard Detection Unit

A load followed by an instruction that uses the loaded value is detected by the hazard detection unit, and a stall cycle is inserted. The hazard detection unit checks whether the instruction in the EX stage is a load by checking its MemRead control line value (ID/EX.MemRead). If that instruction is a load, it also checks whether any of the operand registers of the instruction in the decode stage (ID) match the destination register of the load. In case of a match it inserts a stall cycle (delaying decode and fetch by one cycle).

A stall, if needed, is created by disabling instruction write in IF/ID (keeping the last instruction, via PCWrite and IF/IDWrite) and by inserting a set of control values with zero values into ID/EX.

[Datapath diagram: the hazard detection unit is fed by ID/EX.MemRead and ID/EX.RegisterRt plus the IF/ID source registers, alongside the forwarding unit. 4th Edition Figure 4.60 page 375; 3rd Edition Figure 6.36 page 416]

MIPS Pipeline Version #2: with forwarding, branch still resolved in the MEM stage.
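The load-use check described above is a small amount of logic. A Python sketch (an illustrative model under the signal names above, not the lecture's hardware):

```python
def must_stall(idex_memread, idex_rt, ifid_rs, ifid_rt):
    """Stall when the instruction in EX is a load (ID/EX.MemRead asserted)
    whose destination (ID/EX.RegisterRt) matches a source register of the
    instruction currently in ID."""
    return idex_memread and idex_rt in (ifid_rs, ifid_rt)

# lw $2, 20($1) in EX, and $4, $2, $5 in ID -> insert one stall cycle
print(must_stall(True, 2, 2, 5))   # True
# same load, but the instruction in ID does not read $2 -> no stall
print(must_stall(True, 2, 6, 7))   # False
```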

Hazard Detection Unit Operation

[Diagram: the two cases handled by the unit -- stall + forward (load-use hazard) versus forward only.]

Compiler Scheduling (Re-ordering) Example

Reorder the instructions to avoid as many pipeline stalls as possible (original code):

    lw  $15, 0($2)
    lw  $16, 4($2)
    add $14, $5, $16
    sw  $16, 4($2)

The data hazard occurs on register $16 between the second lw and the add instruction, resulting in a stall cycle even with forwarding (i.e. pipeline version #2). With forwarding, we (or the compiler) need to find only one independent instruction to place between them; swapping the lw instructions works (scheduled code, with forwarding -- no stalls):

    lw  $16, 4($2)
    lw  $15, 0($2)
    add $14, $5, $16
    sw  $16, 4($2)

Without forwarding (i.e. pipeline version #1), we need two independent instructions to place between them, so in addition a nop is added (or the hardware will insert a stall cycle):

    lw  $16, 4($2)
    lw  $15, 0($2)
    nop
    add $14, $5, $16
    sw  $16, 4($2)
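The effect of the swap can be verified with a toy stall counter (a sketch, not the lecture's tool; instruction tuples are an assumed representation, and only the one-stall load-use case of pipeline version 2 is modeled):

```python
def load_use_stalls(instrs):
    """instrs: list of (op, dest, srcs). With forwarding, one stall is
    charged when a load's destination is read by the instruction
    immediately following it."""
    stalls = 0
    for (op, dest, _), (_, _, next_srcs) in zip(instrs, instrs[1:]):
        if op == "lw" and dest in next_srcs:
            stalls += 1
    return stalls

original = [
    ("lw",  "$15", ("$2",)),
    ("lw",  "$16", ("$2",)),
    ("add", "$14", ("$5", "$16")),
    ("sw",  None,  ("$16", "$2")),
]
scheduled = [original[1], original[0], original[2], original[3]]  # swap the loads
print(load_use_stalls(original), load_use_stalls(scheduled))  # 1 0
```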

Control Hazards

When a conditional branch is executed it may change the PC (when taken), and without any special measures it leads to stalling the pipeline for a number of cycles until the branch condition is known and the PC is updated (the branch is resolved). Otherwise the PC may not be correct when needed in IF.

In the current MIPS pipeline (versions 1 or 2), the conditional branch is resolved in stage 4 (the MEM stage), resulting in three stall cycles as shown below:

    Branch instruction   IF ID EX MEM WB
    Branch successor        stall stall stall IF ID EX MEM WB
    Branch successor + 1                      IF ID EX MEM WB
    Branch successor + 2                         IF ID EX MEM ...

Assuming we stall or flush the pipeline on a branch instruction, three clock cycles are wasted for every branch in the current MIPS pipeline.

Branch penalty = stage number where branch is resolved - 1; here branch penalty = 4 - 1 = 3 cycles (the correct PC is available at the end of the MEM cycle or stage).

Basic Branch Handling in Pipelines

1. One scheme discussed earlier is to always stall (flush or freeze) the pipeline whenever a conditional branch is decoded, by holding or deleting any instructions in the pipeline until the branch destination is known (zero the pipeline registers and control lines).

   Pipeline stall cycles from branches = frequency of branches x branch penalty

   Example: branch frequency = 20%, branch penalty = 3 cycles:
   CPI = 1 + 0.2 x 3 = 1.6

2. Another method is to assume or predict that the branch is not taken: the state of the machine is not changed until the branch outcome is definitely known. Execution continues with the next (PC + 4) instruction; a stall occurs only when the branch is taken.

   Pipeline stall cycles from branches = frequency of taken branches x branch penalty

   Example: branch frequency = 20%, of which 45% are taken, branch penalty = 3 cycles:
   CPI = 1 + 0.2 x 0.45 x 3 = 1.27

CPI = 1 + Average Stalls Per Instruction
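Both CPI formulas above are one-liners; a Python sketch (function names are illustrative) reproduces the two examples:

```python
def cpi_always_stall(branch_freq, penalty, ideal_cpi=1.0):
    """Scheme 1: every branch pays the full penalty."""
    return ideal_cpi + branch_freq * penalty

def cpi_predict_not_taken(branch_freq, taken_frac, penalty, ideal_cpi=1.0):
    """Scheme 2: only taken branches pay the penalty."""
    return ideal_cpi + branch_freq * taken_frac * penalty

print(round(cpi_always_stall(0.2, 3), 2))             # 1.6
print(round(cpi_predict_not_taken(0.2, 0.45, 3), 2))  # 1.27
```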

Control Hazards: Example

Three other instructions are in the pipeline before the branch target decision is made when BEQ is in the MEM stage. The branch is resolved in stage 4 (MEM), thus the taken-branch penalty = 4 - 1 = 3 stall cycles.

    40 beq $1, $3, 7        (if taken, go to the target at 72)
    44 and $12, $2, $5      (not-taken direction)
    48 or  $13, $6, $2
    52 add $14, $2, $2
    ...
    72 lw  $4, 50($7)       (taken direction: target)

In the above diagram we are assuming the branch is not taken. We need to add hardware for flushing the three following instructions if we are wrong, losing three cycles (the taken-branch penalty) whenever the branch is resolved as taken in the MEM stage.

Hardware Reduction of Branch Stall Cycles

Pipeline hardware measures to reduce taken-branch stall cycles (i.e. pipeline redesign to resolve the branch in an earlier stage):

1. Find out whether the branch is taken earlier in the pipeline.
2. Compute the taken PC earlier in the pipeline.

In MIPS, the branch instructions BEQ and BNE compare two registers for equality. This can be completed in the ID cycle by moving the equality test into that stage. Both PCs (taken and not taken) must be computed early, which requires an additional adder in ID because the current ALU is not usable until the EX cycle.

This results in just a single stall cycle on taken branches: branch penalty when taken = stage resolved - 1 = 2 - 1 = 1, as opposed to a branch penalty of 3 cycles before (pipeline versions 1 and 2).

MIPS Pipeline Version #3: with forwarding, branch resolved in the ID stage.

Reducing the Delay (Penalty) of Taken Branches

So far, the next PC of a branch is known or resolved in the MEM stage, costing three lost cycles if the branch is taken. If the next PC were resolved in the EX stage, one cycle would be saved. Branch address calculation and the register comparison can instead be moved to the ID stage (stage 2), costing only one cycle if the branch is taken:

Branch penalty = stage 2 - 1 = 1 cycle

[Datapath diagram: MIPS Pipeline Version #3 -- with forwarding, branch resolved in the ID stage. An IF.Flush signal, a shift-left-2 target adder, and an equality comparator (=) on the register-file outputs are added in ID. 4th Edition Figure 4.65 page 384; 3rd Edition Figure 6.41 page 427]

Pipeline Performance Example

Assume the following MIPS instruction mix:

    Type         Frequency
    Arith/Logic  40%
    Load         30%   of which 25% are followed immediately by an instruction using the loaded value (1 stall)
    Store        10%
    Branch       20%   of which 45% are taken (1 stall)

What is the resulting CPI for the pipelined MIPS with forwarding and branch address calculation in the ID stage (i.e. version 3, branch penalty = 1 cycle) when using the branch not-taken scheme?

    CPI = ideal CPI + pipeline stall clock cycles per instruction
        = 1 + stalls by loads + stalls by branches
        = 1 + 0.3 x 0.25 x 1 + 0.2 x 0.45 x 1
        = 1 + 0.075 + 0.09
        = 1.165

CPI = 1 + Average Stalls Per Instruction
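The same arithmetic, as a reusable Python sketch (function and parameter names are illustrative, not from the lecture):

```python
def pipeline_cpi(load_freq, load_use_frac, branch_freq, taken_frac,
                 load_penalty=1, branch_penalty=1, ideal_cpi=1.0):
    """CPI for pipeline version 3: forwarding, branch resolved in ID,
    predict not-taken. Arith/logic and store instructions add no stalls."""
    load_stalls = load_freq * load_use_frac * load_penalty
    branch_stalls = branch_freq * taken_frac * branch_penalty
    return ideal_cpi + load_stalls + branch_stalls

cpi = pipeline_cpi(load_freq=0.30, load_use_frac=0.25,
                   branch_freq=0.20, taken_frac=0.45)
print(round(cpi, 3))  # ≈ 1.165
```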

ISA Reduction of Branch Penalties: Delayed Branch (Action)

When delayed branch is used in an ISA, the branch action is delayed by n cycles (or instructions), following this execution pattern (in program order):

    conditional branch instruction
    sequential successor 1
    sequential successor 2
    ..
    sequential successor n      } n branch delay slots
    branch target (if taken)

The sequential successor instructions are said to be in the branch delay slots; these instructions are always executed, regardless of the branch direction. In practice, all ISAs that utilize delayed branching, including MIPS (all RISC ISAs), use a single-instruction branch delay slot. The job of the compiler is to make the successor instruction in the delay slot a valid and useful instruction.
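A toy Python sketch (not from the lecture; program layout and labels are assumptions) makes the single-delay-slot semantics concrete -- the instruction after the branch executes either way:

```python
def execute_with_delay_slot(program, taken):
    """program: [branch, delay-slot instr, fall-through instr, target instr].
    The delay-slot instruction always executes; only then does control
    transfer to the target (taken) or fall through (not taken)."""
    trace = [program[0], program[1]]                    # branch, then delay slot
    trace.append(program[3] if taken else program[2])   # target vs. fall-through
    return trace

prog = ["beq", "delay-slot", "fall-through", "target"]
print(execute_with_delay_slot(prog, taken=True))   # ['beq', 'delay-slot', 'target']
print(execute_with_delay_slot(prog, taken=False))  # ['beq', 'delay-slot', 'fall-through']
```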

Delayed Branch Example

(Single branch delay slot -- one instruction or cycle -- as used in all RISC ISAs.)

[Timing diagrams: a not-taken branch (no stall) and a taken branch (no stall). The instruction in the branch delay slot is executed whether the branch is taken or not. The MIPS pipeline (version 3) with reduced branch penalty = 1 is assumed here.]

Delayed Branch: Delay-Slot Scheduling Strategies

The branch-delay slot instruction can be chosen from three cases:

A. An independent instruction from before the branch: always improves performance when used. The branch must not depend on the rescheduled instruction. (Hard to find.)

B. An instruction from the target of the branch: improves performance if the branch is taken and may require instruction duplication. This instruction must be safe to execute if the branch is not taken. (Most common; e.g. from the body of a loop.)

C. An instruction from the fall-through instruction stream: improves performance when the branch is not taken. The instruction must be safe to execute when the branch is taken.

Scheduling the Branch Delay Slot

[Example: filling the delay slot with an instruction from the body of a loop (the most common choice).]

Compiler Scheduling Example With Branch Delay Slot

Schedule the following MIPS code for the pipelined MIPS CPU (version 3: with forwarding and reduced branch delay, using a single branch delay slot) to reduce or eliminate stall cycles:

    loop: lw   $1, 0($2)     # $1 = array element
          add  $1, $1, $3    # add constant in $3
          sw   $1, 0($2)     # store result array element
          addi $2, $2, -4    # decrement address by 4
          bne  $2, $4, loop  # branch if $2 != $4

Assuming the initial value of $2 = $4 + 40 (i.e. it loops 10 times): what is the CPI, and what is the total number of cycles needed to run the code with and without scheduling?

Compiler Scheduling Example (With Branch Delay Slot)

Target CPU: MIPS Pipeline Version 3 (with forwarding, branch resolved in the ID stage).

Without compiler scheduling (three stalls per iteration):

    loop: lw   $1, 0($2)
          Stall              # lw result needed by add
          add  $1, $1, $3
          sw   $1, 0($2)
          addi $2, $2, -4
          Stall              # new value of $2 not produced yet for bne
          bne  $2, $4, loop
          Stall (or NOP)     # taken-branch penalty / empty delay slot

Ignoring the initial 4 cycles to fill the pipeline, each iteration takes 8 cycles:
CPI = 8/5 = 1.6, total cycles = 8 x 10 = 80 cycles.

With compiler scheduling (move addi between lw and add; move sw into the branch delay slot, adjusting its address offset):

    loop: lw   $1, 0($2)
          addi $2, $2, -4
          add  $1, $1, $3
          bne  $2, $4, loop
          sw   $1, 4($2)     # in the delay slot; offset adjusted for the earlier addi

No stalls: each iteration takes 5 cycles, CPI = 5/5 = 1, total cycles = 5 x 10 = 50.
Speedup = 80/50 = 1.6
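The cycle counts above follow from a one-line model (a Python sketch of the slide's arithmetic; it ignores the initial pipeline-fill cycles, as the slide does):

```python
def loop_cycles(instr_per_iter, stalls_per_iter, iterations):
    """Return (cycles per iteration, CPI, total cycles) for a loop,
    ignoring the cycles needed to fill the pipeline."""
    cycles_per_iter = instr_per_iter + stalls_per_iter
    return cycles_per_iter, cycles_per_iter / instr_per_iter, cycles_per_iter * iterations

unscheduled = loop_cycles(5, 3, 10)  # 3 stalls per iteration
scheduled = loop_cycles(5, 0, 10)    # scheduled code: no stalls
print(unscheduled)                   # (8, 1.6, 80)
print(scheduled)                     # (5, 1.0, 50)
print(unscheduled[2] / scheduled[2])  # speedup = 1.6
```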

The MIPS R4000 Integer Pipeline

Implements MIPS64 but uses an 8-stage pipeline instead of the classic 5-stage pipeline to achieve a higher clock speed. Pipeline stages:

    1  IF: First half of instruction fetch; start instruction cache access.
    2  IS: Second half of instruction fetch; complete instruction cache access.
    3  RF: Instruction decode and register fetch, hazard checking.
    4  EX: Execution, including branch-target and condition evaluation.
    5  DF: Data fetch, first half of data cache access; data available if a hit.
    6  DS: Second half of data fetch; complete data cache access (data available if a cache hit).
    7  TC: Tag check; determine whether the data cache access hit.
    8  WB: Write back for loads and register-register operations.

The branch is resolved in stage 4 (EX). Branch penalty = 3 cycles if taken (2 with the branch delay slot).

MIPS R4000 Example

[Timing diagram: forwarding of LW data -- the loaded data becomes available at the end of the DS stage.]

Even with forwarding, the deeper pipeline leads to a 2-cycle load delay (2 stall cycles), as opposed to 1 cycle in the classic 5-stage pipeline. Thus: deeper pipelines = more stall cycles.

T = I x CPI x C
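The 2-cycle load delay follows from the stage numbers alone. A simplified Python model (my sketch, assuming full forwarding and a cache hit, with the loaded data forwarded from the end of DS):

```python
def load_use_delay(data_ready_stage, ex_stage):
    """Stall cycles for an instruction immediately following a load it
    depends on: the consumer's EX stage lines up with the producer's
    stage (ex_stage + 1), so it must wait until the data-ready stage."""
    return max(0, data_ready_stage - ex_stage)

# R4000 8-stage pipeline: data ready end of DS (stage 6), needed in EX (stage 4)
print(load_use_delay(data_ready_stage=6, ex_stage=4))  # 2 stall cycles
# Classic 5-stage pipeline: data ready end of MEM (stage 4), needed in EX (stage 3)
print(load_use_delay(data_ready_stage=4, ex_stage=3))  # 1 stall cycle
```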