CPE Computer Architecture. Appendix A: Pipelining: Basic and Intermediate Concepts


CPE 110408443 Computer Architecture. Appendix A: Pipelining: Basic and Intermediate Concepts. Sa'ed R. Abed [Computer Engineering Department, Hashemite University]

Outline: Basic concept of pipelining; the basic pipeline for MIPS; the major hurdles of pipelining: pipeline hazards.

What Is Pipelining? Laundry example: Ann, Betty, Cathy, and Dave (A, B, C, D) each have one load of clothes to wash, dry, and fold. The washer takes 30 minutes, the dryer takes 40 minutes, and folding takes 20 minutes.

What Is Pipelining? [Figure: sequential laundry timeline from 6 PM to midnight; each load runs 30 + 40 + 20 minutes back to back.] Sequential laundry takes 6 hours for 4 loads. Want to reduce the time? Pipelining!

What Is Pipelining? [Figure: pipelined laundry timeline from 6 PM to about 9:30 PM; a new load starts every 40 minutes.] Start work ASAP: pipelined laundry takes only 3.5 hours for 4 loads.

What Is Pipelining? Pipelining is an implementation technique whereby multiple instructions are overlapped in execution. It takes advantage of parallelism that exists among instructions (instruction-level parallelism), and it is the key implementation technique used to make fast CPUs. Pipelining doesn't help the latency of a single task; it helps the throughput of the entire workload. The pipeline rate is limited by the slowest pipeline stage. With multiple tasks operating simultaneously, the potential speedup equals the number of pipe stages; unbalanced lengths of pipe stages reduce the speedup.
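To make the last two points concrete, a minimal Python sketch, assuming a hypothetical 50 ns unpipelined instruction split into five unequal stages (illustrative numbers only, not from these slides):

# A minimal sketch: the slowest stage sets the clock, so unbalanced stages
# reduce the speedup below the ideal "number of stages" bound.
unpipelined_latency_ns = 50.0
stage_latencies_ns = [10.0, 10.0, 15.0, 10.0, 5.0]   # hypothetical 5-stage split

clock_ns = max(stage_latencies_ns)          # the pipeline clock is set by the slowest stage
ideal_speedup = len(stage_latencies_ns)     # upper bound: the number of pipe stages
actual_speedup = unpipelined_latency_ns / clock_ns

print(f"pipeline clock = {clock_ns} ns")
print(f"ideal speedup  = {ideal_speedup}x")
print(f"actual speedup = {actual_speedup:.2f}x (lost to unbalanced stages)")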

MIPS Without Pipelining. The execution of instructions is controlled by the CPU clock: one specific function is performed in one clock cycle. Every MIPS instruction takes 5 clock cycles, corresponding to five different stages. Several temporary registers are introduced to implement the 5-stage structure.

MIPS Functions (we consider only load/store, BEQZ, and integer ALU operations). Passed to next stage: IR <- Mem[PC]; NPC <- PC + 4. Instruction Fetch (IF): Send out the PC and fetch the instruction from memory into the instruction register (IR); increment the PC by 4 to address the next sequential instruction and store it in the next program counter register (NPC). IR holds the instruction that will be used in the next stage; NPC holds the value of the next PC.

MIPS Functions. Passed to next stage: A <- Regs[rs]; B <- Regs[rt]; Imm <- ((IR16)^48 ## IR16..31), the sign-extended immediate. Instruction Decode/Register Fetch (ID): Decode the instruction and access the register file to read the registers. The outputs of the general-purpose registers are read into two temporary registers (A and B) for use in later clock cycles. We sign-extend the lower 16 bits of the instruction register into another temporary register, Imm.

MIPS Functions. Passed to next stage: ALUOutput <- A + Imm (memory reference); ALUOutput <- A func B (register-register ALU); ALUOutput <- A op Imm (register-immediate ALU); ALUOutput <- NPC + (Imm << 2), Cond <- (A == 0) (branch). Execution/Effective Address Calculation (EX): We perform an ALU operation (for an ALU instruction) or an address calculation (if the instruction is a load/store or branch). If an ALU instruction, actually do the operation. If an address calculation, figure out the address and store it for the next cycle.

MIPS Functions. Passed to next stage: LMD <- Mem[ALUOutput] or Mem[ALUOutput] <- B; if (Cond) PC <- ALUOutput. Memory Access/Branch Completion (MEM): If it is an ALU instruction, do nothing. If it is a load/store instruction, access memory. If it is a branch instruction, update the PC if the condition calls for it.

MIPS Functions. Passed to next stage: Regs[rd] <- ALUOutput; or Regs[rt] <- ALUOutput; or, for a load, Regs[rt] <- LMD. Write-Back (WB): Update the register file from either the ALU output or from the data loaded from memory.

The classic five-stage pipeline for MIPS. We can pipeline the execution with almost no changes by simply starting a new instruction on each clock cycle. Each clock cycle becomes a pipe stage, a cycle in the pipeline, which results in the execution pattern below, the typical way a pipeline is drawn. Although each instruction takes 5 clock cycles to complete, the hardware initiates a new instruction during each clock cycle and is executing some part of each of the five different instructions already in the pipeline. It may be hard to believe that pipelining is as simple as this.

Instruction number   Clock number
                     1    2    3    4    5    6    7    8    9
Instruction i        IF   ID   EX   MEM  WB
Instruction i+1           IF   ID   EX   MEM  WB
Instruction i+2                IF   ID   EX   MEM  WB
Instruction i+3                     IF   ID   EX   MEM  WB
Instruction i+4                          IF   ID   EX   MEM  WB
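The pattern is regular enough to generate mechanically. A small Python sketch that prints the chart above for any number of instructions, assuming the ideal case of one new instruction per clock and no stalls:

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_chart(n_instructions):
    total_cycles = n_instructions + len(STAGES) - 1
    print("instruction".ljust(16) + "".join(f"{c:>5}" for c in range(1, total_cycles + 1)))
    for i in range(n_instructions):
        row = [""] * total_cycles
        for s, name in enumerate(STAGES):
            row[i + s] = name       # instruction i occupies stage s in clock cycle i + s + 1
        print(f"i + {i}".ljust(16) + "".join(f"{cell:>5}" for cell in row))

pipeline_chart(5)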

Figure A.2: The pipeline can be thought of as a series of data paths shifted in time.

Simple MIPS Pipeline. The MIPS pipeline data path must deal with the problems that pipelining introduces in a real implementation. It is critical to ensure that instructions at different stages in the pipeline do not attempt to use the same hardware resources at the same time (in the same clock cycle), e.g., perform different operations with the same functional unit, such as the ALU, on the same clock cycle. Instruction and data memories are separated into different caches (IM/DM). The register file is used in two stages: one read in ID and one write in WB. To handle a read and a write to the same register, we perform the register write in the first half of the clock and the read in the second.

Pipeline implementation for MIPS. In order to ensure that instructions in different stages of the pipeline do not interfere with each other, the data path is pipelined by adding a set of registers, one between each pair of pipe stages. The registers serve to convey values and control information from one stage to the next. Most of the data paths flow from left to right, which is from earlier in time to later. The paths flowing from right to left (which carry the register write-back information and PC information on a branch) introduce complications into the pipeline.

Events on Pipe Stages of the MIPS Pipeline (Figure A.19)

IF (any instruction):
  IF/ID.IR <- Mem[PC];
  IF/ID.NPC, PC <- (if ((EX/MEM.opcode == branch) & EX/MEM.cond) {EX/MEM.ALUOutput} else {PC + 4});
ID (any instruction):
  ID/EX.A <- Regs[IF/ID.IR[rs]]; ID/EX.B <- Regs[IF/ID.IR[rt]];
  ID/EX.NPC <- IF/ID.NPC; ID/EX.IR <- IF/ID.IR;
  ID/EX.Imm <- sign-extend(IF/ID.IR[immediate field]);
EX, ALU instruction:
  EX/MEM.IR <- ID/EX.IR;
  EX/MEM.ALUOutput <- ID/EX.A func ID/EX.B; or EX/MEM.ALUOutput <- ID/EX.A op ID/EX.Imm;
EX, load or store:
  EX/MEM.IR <- ID/EX.IR;
  EX/MEM.ALUOutput <- ID/EX.A + ID/EX.Imm;
  EX/MEM.B <- ID/EX.B;
EX, branch:
  EX/MEM.ALUOutput <- ID/EX.NPC + (ID/EX.Imm << 2);
  EX/MEM.cond <- (ID/EX.A == 0);
MEM, ALU instruction:
  MEM/WB.IR <- EX/MEM.IR;
  MEM/WB.ALUOutput <- EX/MEM.ALUOutput;
MEM, load or store:
  MEM/WB.IR <- EX/MEM.IR;
  MEM/WB.LMD <- Mem[EX/MEM.ALUOutput]; or Mem[EX/MEM.ALUOutput] <- EX/MEM.B;
WB, ALU instruction:
  Regs[MEM/WB.IR[rd]] <- MEM/WB.ALUOutput; or Regs[MEM/WB.IR[rt]] <- MEM/WB.ALUOutput;
WB, load only:
  Regs[MEM/WB.IR[rt]] <- MEM/WB.LMD;
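As a rough illustration of how these latch updates look in software, here is a simplified Python sketch following the latch names above. The decoded-dictionary instruction format, the field names, and the add-only ALU are assumptions of this sketch (not the real MIPS encoding), and only the register-register ALU row of the table is modeled:

regs = [0] * 32
imem = {0: {"op": "dadd", "rs": 1, "rt": 2, "rd": 3, "imm": 0}}   # assumed decoded form
IF_ID, ID_EX, EX_MEM, MEM_WB = {}, {}, {}, {}
pc = 0

def if_stage():
    global pc
    IF_ID.update(IR=imem[pc], NPC=pc + 4)
    pc += 4                                   # branch redirection from EX/MEM omitted here

def id_stage():
    ir = IF_ID["IR"]
    ID_EX.update(IR=ir, NPC=IF_ID["NPC"],
                 A=regs[ir["rs"]], B=regs[ir["rt"]], Imm=ir["imm"])

def ex_stage():                               # EX, ALU instruction: ALUOutput <- A func B
    EX_MEM.update(IR=ID_EX["IR"], ALUOutput=ID_EX["A"] + ID_EX["B"])

def mem_stage():                              # MEM, ALU instruction: just pass the result on
    MEM_WB.update(IR=EX_MEM["IR"], ALUOutput=EX_MEM["ALUOutput"])

def wb_stage():                               # WB, ALU instruction: Regs[rd] <- ALUOutput
    regs[MEM_WB["IR"]["rd"]] = MEM_WB["ALUOutput"]

# Drive one instruction end to end. In the real pipeline all five stages run
# every cycle on five different instructions (later stages first in a simulator,
# so each stage reads its input latch before the producing stage overwrites it).
regs[1], regs[2] = 7, 35
for stage in (if_stage, id_stage, ex_stage, mem_stage, wb_stage):
    stage()
print(regs[3])                                # 42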

Basic Performance Issues for Pipelining. Example: Assume that an unpipelined processor has a 1 ns clock cycle and that it uses 4 cycles for ALU operations and branches and 5 cycles for memory operations. Assume that the relative frequencies of these operations are 40%, 20%, and 40%, respectively. Suppose that, due to clock skew and setup, pipelining the processor adds 0.2 ns of overhead to the clock. Ignoring any latency impact, how much speedup in the instruction execution time will we gain from the pipeline implementation? Solution: Average instruction execution time unpipelined = Clock cycle time x Average CPI = 1 ns x (40% x 4 + 20% x 4 + 40% x 5) = 4.4 ns. In the ideal situation without any stalls, the average CPI is just 1 cycle for all kinds of instructions, and the clock cycle time is 1.0 ns + 0.2 ns = 1.2 ns, so: Average instruction execution time pipelined = 1.2 ns x 1 = 1.2 ns. The speedup from pipelining is therefore 4.4 ns / 1.2 ns, or about 3.7 times. What is the result if there is no overhead when implementing pipelining?
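The same arithmetic in a few lines of Python, including the closing question (with no clock overhead the pipelined time is 1.0 ns, so the speedup would be the full 4.4 times):

cycle_unpipelined_ns = 1.0
freq   = {"alu": 0.40, "branch": 0.20, "memory": 0.40}
cycles = {"alu": 4,    "branch": 4,    "memory": 5}

avg_unpipelined_ns = cycle_unpipelined_ns * sum(freq[k] * cycles[k] for k in cycles)
avg_pipelined_ns = (cycle_unpipelined_ns + 0.2) * 1      # ideal CPI = 1, 0.2 ns clock overhead

print(f"{avg_unpipelined_ns:.1f} ns per instruction unpipelined")        # 4.4 ns
print(f"{avg_unpipelined_ns / avg_pipelined_ns:.2f}x speedup with overhead")   # ~3.67x
print(f"{avg_unpipelined_ns / cycle_unpipelined_ns:.1f}x speedup with no overhead")  # 4.4x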

A.2 The Major Hurdle of Pipelining: Pipeline Hazards. Limits to pipelining: there are situations, called hazards, that prevent the next instruction from executing during its designated clock cycle, and thus reduce performance below the ideal speedup. The three classes of hazards are: Structural hazards: arise from resource conflicts when the hardware cannot support all possible combinations of instructions simultaneously in overlapped execution; two different instructions use the same hardware in the same cycle. Data hazards: arise when an instruction depends on the result of a prior instruction still in the pipeline (RAW, WAR, and WAW). Control hazards: arise from pipelining branches and other instructions that change the PC. The common solution is to stall the pipeline until the hazard is cleared, i.e., to insert one or more bubbles in the pipeline.

Performance of Pipelining with Stalls. The pipelined CPI is: CPI_pipelined = Ideal CPI + Pipeline stall cycles per instruction = 1 + Pipeline stall cycles per instruction. Ignoring the cycle-time overhead of pipelining, and assuming the stages are perfectly balanced (each occupies one clock cycle) and all instructions take the same number of cycles, the speedup from pipelining is:

Speedup = CPI_unpipelined / CPI_pipelined = Pipeline depth / (1 + Pipeline stall cycles per instruction)
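The formula as a small Python helper, with a made-up stall rate to show how quickly stalls erode the ideal speedup:

def pipeline_speedup(depth, stall_cycles_per_instr):
    # Ideal CPI = 1, balanced stages, clock overhead of pipelining ignored.
    return depth / (1 + stall_cycles_per_instr)

print(pipeline_speedup(5, 0.0))    # 5.0: the ideal five-stage pipeline
print(pipeline_speedup(5, 0.9))    # ~2.6: stalls eat much of the benefit (example numbers)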

Structural Hazards. [Figure: pipeline diagram of a load followed by four instructions over seven clock cycles; the load's MEM and Instr 3's IF collide.] A structural hazard arises when two or more different instructions want to use the same hardware resource in the same cycle, e.g., MEM uses the same memory port as IF, as shown in this figure. Solution: stall.

Structural Hazards. [Figure: the same sequence with Instr 3 stalled for one cycle; the bubble propagates down the pipeline.] This is another way of looking at the effect of a stall.

Structural Hazards. [Figure: the stall shown in pipeline-table form.] This is another way to represent the stall.

Dealing With Structural Hazards. Option 1, stall: low cost and simple, but it increases CPI; use it for rare cases, since stalling hurts performance. Option 2, replicate the resource: good performance, but it increases cost (and possibly interconnect delay); useful for cheap or divisible resources, e.g., the separate instruction and data memories used in the MIPS pipeline.

Data Hazards. Data hazards occur when the pipeline changes the order of read/write accesses to operands (registers) so that the order differs from the order seen by sequentially executing instructions on an unpipelined processor. The real trouble arises when we have instruction A followed by instruction B, and B manipulates (reads or writes) data before A does. This violates the order of the instructions, since the architecture implies that A completes entirely before B is executed.

Data Hazards: Read After Write (RAW). Execution order is Instr I, then Instr J. Instr J tries to read an operand before Instr I writes it: I: dadd r1,r2,r3; J: dsub r4,r1,r3. Caused by a dependence (in compiler nomenclature); this hazard results from an actual need for communication.

Data Hazards: Write After Read (WAR). Execution order is Instr I, then Instr J. Instr J tries to write an operand before Instr I reads it, so I gets the wrong operand: I: dsub r4,r1,r3; J: dadd r1,r2,r3; K: mul r6,r1,r7. Called an antidependence by compiler writers; it results from reuse of the name r1. It can't happen in the MIPS 5-stage pipeline because all instructions take 5 stages, reads are always in stage 2, and writes are always in stage 5.

Data Hazards: Write After Write (WAW). Execution order is Instr I, then Instr J. Instr J tries to write an operand before Instr I writes it, leaving the wrong result (Instr I's, not Instr J's): I: dsub r1,r4,r3; J: dadd r1,r2,r3; K: mul r6,r1,r7. Called an output dependence by compiler writers; this also results from reuse of the name r1. It can't happen in the MIPS 5-stage pipeline because all instructions take 5 stages and writes are always in stage 5. We will see WAR and WAW in later, more complicated pipeline implementations.
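The three cases reduce to set intersections between the registers each instruction reads and writes. A Python sketch, using an assumed read/write-set form of the instructions (not a real MIPS encoding):

def classify_hazards(i, j):
    # i precedes j in program order; each is {"reads": set, "writes": set}.
    hazards = []
    if i["writes"] & j["reads"]:
        hazards.append("RAW")      # J reads what I writes: true dependence
    if i["reads"] & j["writes"]:
        hazards.append("WAR")      # J writes what I reads: antidependence (name reuse)
    if i["writes"] & j["writes"]:
        hazards.append("WAW")      # both write the same register: output dependence
    return hazards

dadd = {"writes": {"r1"}, "reads": {"r2", "r3"}}   # dadd r1,r2,r3
dsub = {"writes": {"r4"}, "reads": {"r1", "r3"}}   # dsub r4,r1,r3
print(classify_hazards(dadd, dsub))                # ['RAW']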

Solutions to Data Hazards. A simple solution to RAW: hardware detects the RAW hazard and stalls until the result is written into the register. This is low cost and simple to implement, but it reduces the number of instructions executed per cycle. Minimizing RAW stalls: forwarding (also called bypassing). The key insight is that the result is not really needed by the current instruction until after the previous instruction actually produces it. The result from both the EX/MEM and MEM/WB pipeline registers is always fed back to the ALU inputs. If the forwarding hardware detects that the previous operation has written the register corresponding to a source for the current operation, control logic selects the forwarded result as the ALU input rather than the value read from the register file.
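A sketch of that selection logic for one ALU source operand; the latch field names used here (regwrite, rd, alu_output, value) are simplifications I am assuming, not the actual pipeline-register fields:

def forward_operand(src_reg, reg_file_value, ex_mem, mem_wb):
    """Pick the youngest in-flight result for src_reg, else the register-file value."""
    if ex_mem and ex_mem["regwrite"] and ex_mem["rd"] == src_reg:
        return ex_mem["alu_output"]        # forward from EX/MEM (one instruction ahead)
    if mem_wb and mem_wb["regwrite"] and mem_wb["rd"] == src_reg:
        return mem_wb["value"]             # forward from MEM/WB (two instructions ahead)
    return reg_file_value                  # no hazard: use the value read in ID

# dadd r1,r2,r3 sits in EX/MEM when dsub r4,r1,r3 reaches EX:
ex_mem = {"regwrite": True, "rd": "r1", "alu_output": 42}
print(forward_operand("r1", 0, ex_mem, None))   # 42, not the stale register-file value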

Data Hazards. [Figure: pipeline diagram of dadd r1,r2,r3; dsub r4,r1,r3; and r6,r1,r7; or r8,r1,r9; xor r10,r1,r11 over nine clock cycles.] The use of the result of the DADD instruction in the next two instructions causes a hazard, since the register is not written until after those instructions read it.

Forwarding to Avoid Data Hazards. Forwarding is the concept of making data available to the ALU input for subsequent instructions, even though the generating instruction hasn't gotten to WB to write the memory or registers. [Figure: the same sequence (dadd r1,r2,r3; dsub r4,r1,r3; and r6,r1,r7; or r8,r1,r9; xor r10,r1,r11) with forwarding paths feeding the ALU inputs.]

Data Hazards Requiring Stalls. [Figure: pipeline diagram of LD R1,0(R2); DSUB R4,R1,R6; AND R6,R1,R7; OR R8,R1,R9.] There are some instances where hazards occur even with forwarding, e.g., the data isn't loaded until after the MEM stage.

Data Hazards Requiring Stalls. [Figure: the same sequence with a bubble inserted so that DSUB's EX follows the load's MEM.] The stall is necessary in this case.

Another Representation of the Stall

Without the stall:
LD   R1, 0(R2)    IF  ID  EX  MEM  WB
DSUB R4, R1, R5       IF  ID  EX   MEM  WB
AND  R6, R1, R7           IF  ID   EX   MEM  WB
OR   R8, R1, R9               IF   ID   EX   MEM  WB

With the stall:
LD   R1, 0(R2)    IF  ID  EX     MEM  WB
DSUB R4, R1, R5       IF  ID     stall  EX   MEM  WB
AND  R6, R1, R7           IF     stall  ID   EX   MEM  WB
OR   R8, R1, R9                  stall  IF   ID   EX   MEM  WB

In the top table we can see why a stall is needed: the MEM cycle of the load produces a value that is needed in the EX cycle of the DSUB, which occurs at the same time. This problem is solved by inserting a stall, as shown in the bottom table.
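A sketch of the load-use interlock that inserts this stall; the latch fields (is_load, rt) and the control response printed below are simplified assumptions, not the actual hardware signals:

def needs_load_use_stall(id_ex, id_source_regs):
    # Stall if the instruction in EX is a load whose destination (rt) is a
    # source register of the instruction currently being decoded in ID.
    return (id_ex is not None
            and id_ex["is_load"]
            and id_ex["rt"] in id_source_regs)

# LD R1,0(R2) is in EX (ID/EX latch) while DSUB R4,R1,R5 is being decoded:
id_ex = {"is_load": True, "rt": "R1"}
if needs_load_use_stall(id_ex, {"R1", "R5"}):
    print("stall: hold PC and IF/ID for a cycle, send a bubble (no-op) into ID/EX")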

Control Hazards. A control hazard happens when we need to find the destination of a branch and can't fetch any new instructions until we know that destination. If instruction i is a taken branch, then in the basic pipeline the PC is normally not changed until late in the pipeline: as the table of pipe-stage events shows, the condition and target come out of the EX/MEM register, so the fetch is redirected only when the branch reaches MEM. Control hazards can cause a greater performance loss than data hazards do.

Control Hazard on Branches: Three-Cycle Stall. [Figure: pipeline diagram of 12: beq r1,r3,36; 16: and r2,r3,r5; 20: or r6,r1,r7; 24: add r8,r1,r9; 36: xor r10,r1,r11, with the three instructions after the branch stalled.]

Branch Stall Impact. If CPI = 1 and 30% of instructions are branches, a 3-cycle stall gives a new CPI of 1.9! There are two parts to the solution for this dramatic increase: determine whether the branch is taken or not sooner, AND compute the target address earlier. MIPS branches test whether a register = 0 or != 0, so the MIPS solution is to move the zero test to the ID stage and add an adder to calculate the target address in the ID stage. This gives a 1-clock-cycle penalty for branches instead of 3.
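The CPI arithmetic behind both numbers, in a couple of lines of Python:

base_cpi, branch_frequency = 1.0, 0.30
print(f"{base_cpi + branch_frequency * 3:.1f}")   # 1.9 with a 3-cycle branch stall
print(f"{base_cpi + branch_frequency * 1:.1f}")   # 1.3 once the test and target move into ID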

The Pipeline with a 1-Cycle Stall for Branches. [Figure]

Four Solutions to Branch Hazards. #1: Stall until the branch direction is clear. Simple for both software and hardware; the branch penalty is fixed (a 1-cycle penalty for the revised MIPS pipeline).

Branch instr.        IF  ID  EX  MEM  WB
Branch successor         IF  IF  ID   EX   MEM  WB
Branch successor+1               IF   ID   EX   MEM  WB
Branch successor+2                    IF   ID   EX   MEM  WB

Four Solutions to Branch Hazards. #2: Predict branch not taken. Continue to fetch instructions as if the branch were a normal instruction. If the branch is taken, turn the fetched instruction into a no-op and restart the fetch at the target address.

Untaken branch instr.  IF  ID  EX  MEM  WB
Branch successor           IF  ID  EX   MEM  WB
Branch successor+1             IF  ID   EX   MEM  WB
Branch successor+2                 IF   ID   EX   MEM  WB
Branch successor+3                      IF   ID   EX   MEM  WB

Taken branch instr.    IF  ID  EX  MEM  WB
Branch successor           IF  idle idle idle idle
Branch target                  IF   ID   EX   MEM  WB
Branch successor+1                  IF   ID   EX   MEM  WB
Branch successor+2                       IF   ID   EX   MEM  WB
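A sketch of the squash-on-taken policy in the taken case above; the function name and return convention are invented here for illustration (the addresses reuse the beq r1,r3,36 example from the earlier slide):

def after_branch_resolves(taken, target_pc, fall_through_pc):
    if not taken:
        return fall_through_pc, False   # prediction was right: keep the fetched instruction
    return target_pc, True              # squash the fetched instruction (no-op), refetch at target

print(after_branch_resolves(False, 36, 16))   # (16, False): fall-through instruction kept
print(after_branch_resolves(True, 36, 16))    # (36, True): one fetched instruction squashed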

Four Solutions to Branch Hazards. #3: Predict branch taken. As soon as the branch is decoded and the target address is computed, we assume the branch to be taken and begin fetching and executing at the target. But in MIPS the target address is not calculated before the branch outcome is known, so MIPS still incurs the 1-cycle branch penalty. This scheme is useful on other machines in which the target address is known before the branch outcome.

Four Solutions to Branch Hazards. #4: Delayed branch. The execution cycle with a branch delay of one is: branch instruction; sequential successor 1; branch target if taken. The sequential successor is in the branch delay slot. The instruction in the branch delay slot is executed whether or not the branch is taken (for a zero-cycle penalty). Where do we get instructions to fill the branch delay slot? From before the branch instruction; from the target address (only valuable when the branch is taken); or from the fall-through (only valuable when the branch is not taken). Canceling or nullifying branches allow more slots to be filled (with a nonzero cycle penalty whose value depends on the rate of correct prediction): the delay-slot instruction is turned into a no-op if the branch was incorrectly predicted.

Four Solutions to Branch Hazards. [Figure: scheduling the branch delay slot.]

Pipelining Introduction Summary. Pipelining just overlaps tasks, and it is easy if the tasks are independent. Speedup vs. pipeline depth; if the ideal CPI is 1, then:

Speedup = (Pipeline depth / (1 + Pipeline stall CPI)) x (Clock cycle unpipelined / Clock cycle pipelined)

Hazards limit performance on computers: structural hazards need more hardware resources; data hazards (RAW, WAR, WAW) need forwarding and compiler scheduling; control hazards need delayed branches and prediction.