Lecture 9: Case Study MIPS R4000 and Introduction to Advanced Pipelining Professor Randy H. Katz Computer Science 252 Spring 1996

Review: Evaluating Branch Alternatives

Two-part solution: determine whether the branch is taken or not sooner, AND compute the taken-branch address earlier.

Pipeline speedup = Pipeline depth / (1 + Branch frequency × Branch penalty)

Scheduling scheme    Branch penalty   CPI    Speedup v. unpipelined   Speedup v. stall
Stall pipeline       3                1.42   3.5                      1.0
Predict taken        1                1.14   4.4                      1.26
Predict not taken    1                1.09   4.5                      1.29
Delayed branch       0.5              1.07   4.6                      1.31
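As a worked check of the formula (assuming a 14% branch frequency and a 5-stage base pipeline; the slide does not state these, but they are consistent with the table): with the stall-pipeline scheme, CPI = 1 + 0.14 × 3 = 1.42, so speedup v. unpipelined = 5 / 1.42 ≈ 3.5; with predict taken, CPI = 1 + 0.14 × 1 = 1.14, giving 5 / 1.14 ≈ 4.4.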

Review: Evaluating Branch Prediction Strategies

Two strategies:
- Direction-based: predict backward branches taken, forward branches not taken
- Profile-based prediction: record branch behavior on a prior run, and predict branches based on that profile

Instructions between mispredicted branches is a better metric than the misprediction rate.

[Figure: instructions per mispredicted branch, log scale from 1 to 100,000, for alvinn, compress, doduc, espresso, gcc, hydro2d, mdljsp2, ora, swm256, and tomcatv, comparing direction-based and profile-based prediction.]

Review: Summary of Pipelining Basics

Hazards limit performance:
- Structural: need more HW resources
- Data: need forwarding, compiler scheduling
- Control: early condition evaluation & PC update, delayed branch, prediction

Increasing the length of the pipe increases the impact of hazards; pipelining helps instruction bandwidth, not latency. Interrupts, the instruction set, and FP all make pipelining harder. Compilers reduce the cost of data and control hazards:
- Load delay slots
- Branch delay slots
- Branch prediction

Today: longer pipelines (R4000) => better branch prediction, more instruction parallelism?

Case Study: MIPS R4000 (100 MHz to 200 MHz)

8-stage pipeline:
- IF: first half of instruction fetch; PC selection happens here, as well as initiation of instruction cache access
- IS: second half of instruction cache access
- RF: instruction decode and register fetch, hazard checking, and instruction cache hit detection
- EX: execution, which includes effective address calculation, ALU operation, and branch target computation and condition evaluation
- DF: data fetch, first half of data cache access
- DS: second half of data cache access
- TC: tag check, to determine whether the data cache access hit
- WB: write back for loads and register-register operations

8 stages: what is the impact on load delay? On branch delay? Why?

Case Study: MIPS R4000 Load and Branch Latency

TWO-cycle load latency: load data is available only at the end of DS, but a dependent instruction needs it entering EX, so an immediately following use stalls two cycles:

Load:  IF IS RF EX DF DS TC WB
Use:      IF IS RF ** ** EX ...

THREE-cycle branch latency (conditions are evaluated during the EX phase): the branch resolves at the end of EX, costing the delay slot plus two stall cycles. A branch-likely instruction cancels the delay slot if the branch is not taken:

Branch:      IF IS RF EX DF DS TC WB
Delay slot:     IF IS RF EX ...
(then two stall cycles before the target instruction can proceed)

MIPS R4000 Floating Point

FP adder, FP multiplier, FP divider; the last step of the FP multiplier/divider uses the FP adder hardware.

8 kinds of stages in the FP units:

Stage   Functional unit   Description
A       FP adder          Mantissa ADD stage
D       FP divider        Divide pipeline stage
E       FP multiplier     Exception test stage
M       FP multiplier     First stage of multiplier
N       FP multiplier     Second stage of multiplier
R       FP adder          Rounding stage
S       FP adder          Operand shift stage
U                         Unpack FP numbers

MIPS FP Pipe Stages

FP instruction   Pipe stages, cycle by cycle (^n = stage repeated n cycles)
Add, subtract    U, S+A, A+R, R+S
Multiply         U, E+M, M, M, M, N, N+A, R
Divide           U, A, R, D^28, D+A, D+R, D+R, D+A, D+R, A, R
Square root      U, E, (A+R)^108, A, R
Negate           U, S
Absolute value   U, S
FP compare       U, A, R

Stages:
A Mantissa ADD stage          N Second stage of multiplier
D Divide pipeline stage       R Rounding stage
E Exception test stage        S Operand shift stage
M First stage of multiplier   U Unpack FP numbers

Not the Ideal CPI of 1: R4000 Performance

- Load stalls (1 or 2 clock cycles)
- Branch stalls (2 cycles + unfilled delay slots)
- FP result stalls: RAW data hazards (latency)
- FP structural stalls: not enough FP hardware (parallelism)

[Figure: pipeline CPI, 0 to 4.5, for eqntott, espresso, gcc, li, doduc, nasa7, ora, spice2g6, su2cor, and tomcatv, broken down into base CPI, load stalls, branch stalls, FP result stalls, and FP structural stalls.]

Advanced Pipelining and Instruction-Level Parallelism

- In gcc, 17% of instructions are control transfers: on average, 5 instructions + 1 branch per basic block
- Must go beyond a single basic block to get more instruction-level parallelism
- Loop-level parallelism is one opportunity, exploitable in SW and HW
- Do examples first, then explain the nomenclature
- DLX floating point is used as the example
- Measurements of R4000 performance suggest FP execution has room for improvement

FP Loop: Where Are the Hazards?

Loop: LD   F0,0(R1)  ;F0 = vector element
      ADDD F4,F0,F2  ;add scalar in F2
      SD   0(R1),F4  ;store result
      SUBI R1,R1,8   ;decrement pointer 8B (DW)
      BNEZ R1,Loop   ;branch if R1 != zero
      NOP            ;delayed branch slot

Instruction producing result   Instruction using result   Latency (clock cycles)
FP ALU op                      Another FP ALU op           3
FP ALU op                      Store double                2
Load double                    FP ALU op                   1
Load double                    Store double                0
Integer op                     Integer op                  0
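For reference, a minimal C sketch of what this DLX loop computes: adding a scalar to each element of a vector, walking the pointer down by 8 bytes (one double) per iteration. The function and variable names are hypothetical; the slide gives only the assembly.

    /* Hedged sketch: x[] is the vector R1 walks through,
       s is the scalar held in F2. */
    void add_scalar(double *x, int n, double s) {
        for (int i = n - 1; i >= 0; i--)
            x[i] = x[i] + s;   /* the LD / ADDD / SD of one element */
    }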

FP Loop Hazards: same code and latency table as the previous slide. Where are the stalls?

FP Loop Showing Stalls

1 Loop: LD   F0,0(R1)  ;F0 = vector element
2       stall
3       ADDD F4,F0,F2  ;add scalar in F2
4       stall
5       stall
6       SD   0(R1),F4  ;store result
7       SUBI R1,R1,8   ;decrement pointer 8B (DW)
8       BNEZ R1,Loop   ;branch if R1 != zero
9       stall          ;delayed branch slot

9 clock cycles per iteration (latencies as above: FP ALU -> FP ALU = 3, FP ALU -> store double = 2, load double -> FP ALU = 1). Rewrite the code to minimize stalls?

Revised FP Loop Minimizing Stalls

1 Loop: LD   F0,0(R1)
2       stall
3       ADDD F4,F0,F2
4       SUBI R1,R1,8
5       BNEZ R1,Loop   ;delayed branch
6       SD   8(R1),F4  ;offset altered from 0 to 8 when SD moved past SUBI

6 clock cycles per iteration. Unroll the loop 4 times to make it faster?

Unroll Loop Four Times (straightforward way)

1  Loop: LD   F0,0(R1)
2        ADDD F4,F0,F2
3        SD   0(R1),F4    ;drop SUBI & BNEZ
4        LD   F6,-8(R1)
5        ADDD F8,F6,F2
6        SD   -8(R1),F8   ;drop SUBI & BNEZ
7        LD   F10,-16(R1)
8        ADDD F12,F10,F2
9        SD   -16(R1),F12 ;drop SUBI & BNEZ
10       LD   F14,-24(R1)
11       ADDD F16,F14,F2
12       SD   -24(R1),F16
13       SUBI R1,R1,#32   ;alter to 4*8
14       BNEZ R1,LOOP
15       NOP

15 instructions + 4 x (1 + 2) stall cycles = 27 clock cycles, or 6.8 per iteration (each LD still stalls its ADDD by 1 cycle, and each ADDD stalls its SD by 2). Assumes the number of loop iterations is a multiple of 4. Rewrite the loop to minimize stalls?

Unrolled Loop That Minimizes Stalls

1  Loop: LD   F0,0(R1)
2        LD   F6,-8(R1)
3        LD   F10,-16(R1)
4        LD   F14,-24(R1)
5        ADDD F4,F0,F2
6        ADDD F8,F6,F2
7        ADDD F12,F10,F2
8        ADDD F16,F14,F2
9        SD   0(R1),F4
10       SD   -8(R1),F8
11       SD   -16(R1),F12
12       SUBI R1,R1,#32
13       BNEZ R1,LOOP
14       SD   8(R1),F16   ;8 - 32 = -24

14 clock cycles, or 3.5 per iteration.

What assumptions were made when the code was moved?
- OK to move the store past SUBI even though SUBI changes the register it uses (compensated by adjusting the offset)
- OK to move loads before stores: do we still get the right data?
- When is it safe for the compiler to make such changes?
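In C terms, the unrolling (though not the instruction scheduling, which happens below source level) looks like this hedged sketch, continuing the hypothetical add_scalar example from above. The four element updates are independent, which is exactly what lets the scheduler interleave the loads, adds, and stores.

    /* Assumes n is a multiple of 4, mirroring the slide's
       assumption about the iteration count. */
    void add_scalar_unrolled(double *x, int n, double s) {
        for (int i = n - 1; i >= 3; i -= 4) {
            x[i]     += s;   /* four independent updates: the    */
            x[i - 1] += s;   /* compiler may reorder their loads */
            x[i - 2] += s;   /* and stores freely, as the        */
            x[i - 3] += s;   /* scheduled DLX version does       */
        }
    }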

Compiler Perspectives on Code Movement

Definitions: the compiler is concerned with dependences in the program; whether a dependence turns into a HW hazard depends on the given pipeline.

(True) data dependence (RAW if a hazard for HW):
- Instruction i produces a result used by instruction j, or
- Instruction j is data dependent on instruction k, and instruction k is data dependent on instruction i

Easy to determine for registers (fixed names). Hard for memory:
- Does 100(R4) = 20(R6)?
- From different loop iterations, does 20(R6) = 20(R6)?
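A hedged C illustration of why the memory case is hard (the function and names are hypothetical): given only this code, the compiler cannot tell whether the store through p and the load through q touch the same location, so it must conservatively assume a dependence.

    /* May p[25] and q[5] alias? In the slide's terms: does
       100(R4) = 20(R6)? (byte offsets 100 and 20 with 4-byte ints).
       Without knowing what p and q point to, the compiler cannot
       reorder the store and the load. */
    int maybe_alias(int *p, int *q) {
        p[25] = 1;       /* store to 100(R4) */
        return q[5];     /* load from 20(R6): possibly the same word */
    }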

Compiler Perspectives on Code Movement

Another kind of dependence, called name dependence: two instructions use the same name but don't exchange data.

- Antidependence (WAR if a hazard for HW): instruction j writes a register or memory location that instruction i reads, and instruction i is executed first.
- Output dependence (WAW if a hazard for HW): instruction i and instruction j write the same register or memory location; the ordering between the instructions must be preserved.
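A small hedged C example showing all three dependence kinds through one variable (names are hypothetical, not from the slide):

    int dependence_demo(int b, int c, int e) {
        int a, d;
        a = b + c;   /* i: writes a                                  */
        d = a + e;   /* j: reads a -> true dependence on i (RAW)     */
        a = b - c;   /* k: writes a -> antidependence on j (WAR)     */
                     /*    and output dependence on i (WAW)          */
        return a + d;
    }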

Compiler Perspectives on Code Movement

Again, dependences are hard to determine for memory accesses:
- Does 100(R4) = 20(R6)?
- From different loop iterations, does 20(R6) = 20(R6)?

Our example required the compiler to know that if R1 doesn't change, then the addresses 0(R1), -8(R1), -16(R1), and -24(R1) are all distinct. There were then no dependences between some of the loads and stores, so they could be moved past each other.

Compiler Perspectives on Code Movement

The final kind of dependence is called control dependence. Example:

if p1 {S1;};
if p2 {S2;}

S1 is control dependent on p1, and S2 is control dependent on p2 but not on p1.

Compiler Perspectives on Code Movement

Two (obvious) constraints on control dependences:
- An instruction that is control dependent on a branch cannot be moved before the branch, so that its execution is no longer controlled by the branch.
- An instruction that is not control dependent on a branch cannot be moved after the branch, so that its execution becomes controlled by the branch.

Control dependences can be relaxed to get parallelism: we get the same effect if we preserve the order of exceptions and the data flow, as in the sketch below.
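A hedged C sketch of the first constraint (hypothetical function): the division is control dependent on the branch, and hoisting it above the if would change exception behavior, because the divide could trap in exactly the executions where b == 0 and the branch would have skipped it.

    int guarded_div(int a, int b) {
        int x = 0;
        if (b != 0)
            x = a / b;   /* control dependent on the branch: moving */
                         /* it above the if could trap when b == 0  */
        return x;
    }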

When Is It Safe to Unroll a Loop?

Example: where are the data dependences? (A, B, C are distinct and non-overlapping.)

for (i = 1; i <= 100; i = i + 1) {
    A[i+1] = A[i] + C[i];    /* S1 */
    B[i+1] = B[i] + A[i+1];  /* S2 */
}

1. S2 uses the value A[i+1] computed by S1 in the same iteration.
2. S1 uses a value computed by S1 in an earlier iteration: iteration i computes A[i+1], which is read in iteration i+1. The same is true of S2 for B[i] and B[i+1].

The second case is a loop-carried dependence: a dependence between iterations. It implies that the iterations are dependent and cannot be executed in parallel. That was not the case for our running example, where each iteration was distinct.
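For contrast, a hedged sketch of the running example's structure (hypothetical names): each iteration reads and writes only its own element, so there is no loop-carried dependence and the iterations can safely be unrolled or executed in parallel.

    void independent_iterations(double *x, double s) {
        for (int i = 1; i <= 100; i = i + 1)
            x[i] = x[i] + s;   /* iteration i touches only x[i] */
    }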

Summary

- Instruction-level parallelism can be exploited in SW or HW
- Loop-level parallelism is the easiest to see
- SW view: dependences are defined by the program; hazards arise only if the HW cannot resolve them
- The dependences in the code and the compiler's sophistication determine whether the compiler can unroll loops