Lecture 6 MIPS R4000 and Instruction Level Parallelism Computer Architectures 521480S

Case Study: MIPS R4000 (200 MHz, 64-bit instructions, MIPS-3 instruction set)

8-Stage Pipeline:
- IF: first half of instruction fetch; PC selection happens here, as well as initiation of the instruction cache access.
- IS: second half of the instruction cache access.
- RF: instruction decode and register fetch, hazard checking, and instruction cache hit detection.
- EX: execution, which includes effective address calculation, ALU operation, and branch target computation and condition evaluation.
- DF: data fetch, first half of the data cache access.
- DS: second half of the data cache access.
- TC: tag check, to determine whether the data cache access hit.
- WB: write back for loads and register-register operations.

8 stages: what is the impact on load delay? On branch delay? Why?

Case Study: MIPS R4000

TWO cycle load latency.

THREE cycle branch latency (conditions are evaluated during the EX phase): the delay slot plus two stalls. A branch-likely instruction cancels the delay slot if the branch is not taken.

(Pipeline timing diagrams showing the overlapped IF/IS/RF/EX/DF/DS/TC/WB stages are omitted here.)

MIPS R4000 Floating Point

FP adder, FP multiplier, and FP divider; these require multiple execution stages.

8 kinds of stages in the FP units:

  Stage  Functional unit   Description
  A      FP adder          Mantissa ADD stage
  D      FP divider        Divide pipeline stage
  E      FP multiplier     Exception test stage
  M      FP multiplier     First stage of multiplier
  N      FP multiplier     Second stage of multiplier
  R      FP adder          Rounding stage
  S      FP adder          Operand shift stage
  U                        Unpack FP numbers

There is a single copy of each stage: different instructions may use a stage zero or more times, and in different orders.

MIPS FP Pipe Stages

Stage usage per clock cycle (cycle 1, 2, 3, ...):

  FP instruction    Stages used
  Add, subtract     U, S+A, A+R, R+S
  Multiply          U, E+M, M, M, M, N, N+A, R
  Divide            U, A, R, D^28, D+A, D+R, D+R, D+A, D+R, A, R
  Square root       U, E, (A+R)^108, A, R
  Negate            U, S
  Absolute value    U, S
  FP compare        U, A, R

(D^28 means the D stage is occupied for 28 cycles; (A+R)^108 means A+R is repeated 108 times.)

Stage legend: A = mantissa ADD stage, D = divide pipeline stage, E = exception test stage, M = first stage of multiplier, N = second stage of multiplier, R = rounding stage, S = operand shift stage, U = unpack FP numbers.

Note: this is very non-conventional (FP operations share common computational resources).

Latency and Initiation Intervals

The latency is the number of cycles between an instruction that produces a result and an instruction that uses the result (book definition). The initiation interval is the number of cycles that must elapse between issuing two operations of a given type.

  FP instruction   Latency   Initiation interval
  Add/Subtract        4           3
  Multiply            8           4
  Divide             36          35
  Square root       112         111
  Negate              2           1
  Absolute value      2           1
  FP compare          3           2

Not the ideal CPI of 1: R4000 Performance

Sources of stalls:
- Load stalls (1 or 2 clock cycles)
- Branch stalls (2 cycles + unfilled slots)
- FP result stalls: RAW data hazard (latency)
- FP structural stalls: not enough FP hardware (parallelism)

(Chart omitted: pipeline CPI for the benchmarks eqntott, espresso, gcc, li, doduc, nasa7, ora, spice2g6, su2cor, and tomcatv, broken down into base CPI, load stalls, branch stalls, FP result stalls, and FP structural stalls; the vertical axis runs from 0 to 4.5.)

DLX Floating Point Pipeline

The DLX floating point pipeline is more conventional. Add, subtract, and multiply are fully pipelined; divide is not pipelined.

  FP instruction   Latency   Initiation interval
  Add/Subtract        3           1
  Multiply            6           1
  Divide             14          15

High-performance machines tend to have even shorter latencies:

  FP instruction   Latency   Initiation interval
  Add/Subtract        2           1
  Multiply            2           1
  Divide (DP)        12          13

DLX Floating Point Pipeline

(Diagram omitted: the ID stage feeds four parallel execution units, namely the integer unit (EX), the FP/integer multiplier, the FP adder, and the FP/integer divider, all of which feed into MEM and WB.)

Instruction Level Parallelism (ILP)

ILP: overlap the execution of unrelated instructions.

In gcc, 17% of instructions are control transfers, i.e. on average 5 instructions + 1 branch, or one branch for every 6 instructions.

Go beyond a single basic block to get more instruction level parallelism: a longer sequential block of code exposes more computation that can be scheduled to minimize stalls.

Loop-level parallelism is one opportunity. DLX floating point is used as the example.

Instruction Latencies with Forwarding

  Instruction producing result   Instruction using result   Latency (clock cycles)
  FP ALU op                      Another FP ALU op              3
  FP ALU op                      Store double                   2
  Load double                    FP ALU op                      1
  Load double                    Store double                   0
  Integer op                     Integer op                     0

Note: for simplicity, these latencies are used in the examples. Furthermore, the functional units are assumed to be fully pipelined, so that an operation of any type can be issued on every clock cycle and there are no structural hazards. These modifications can be made without loss of generality.

Simple Loop and Assembly Code

for (i=1; i<=1000; i++)
    x[i] = x[i] + s;

Loop: LD   F0,0(R1)   ;F0=vector element
      ADDD F4,F0,F2   ;add scalar from F2
      SD   0(R1),F4   ;store result
      SUBI R1,R1,8    ;decrement pointer by 8B (DW)
      BNEZ R1,Loop    ;branch R1!=zero
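A hedged C sketch of what the assembly above computes (the function and variable names are illustrative, not from the slides): the code walks the array from the last element downward, decrementing a pointer by one 8-byte double per iteration, just as SUBI decrements R1 by 8.

/* hypothetical C rendering of the DLX loop: p plays the role of R1,
   x[1..n] holds the vector (x[0] is unused), s is the scalar in F2 */
void add_scalar(double *x, int n, double s)
{
    double *p = &x[n];        /* R1 initially points at the last element */
    do {
        *p = *p + s;          /* LD F0,0(R1); ADDD F4,F0,F2; SD 0(R1),F4 */
        p = p - 1;            /* SUBI R1,R1,8: back up one 8-byte double */
    } while (p != x);         /* BNEZ R1,Loop; the assembly compares R1
                                 against 0, here we compare against the base */
}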

FP Loop: Where are the Hazards?

Loop: LD   F0,0(R1)   ;F0=vector element
      ADDD F4,F0,F2   ;add scalar from F2
      SD   0(R1),F4   ;store result
      SUBI R1,R1,8    ;decrement pointer 8B (DW)
      BNEZ R1,Loop    ;branch R1!=zero

  Instruction producing result   Instruction using result   Latency (clock cycles)
  FP ALU op                      Another FP ALU op              3
  FP ALU op                      Store double                   2
  Load double                    FP ALU op                      1
  Load double                    Store double                   0
  Integer op                     Integer op                     0

Where are the stalls?

FP Loop Showing Stalls

1 Loop: LD   F0,0(R1)   ;F0=vector element
2       stall           ;stall one cycle between LD and FP ALU op
3       ADDD F4,F0,F2   ;add scalar in F2
4       stall           ;stall two cycles between FP ALU op and SD
5       stall
6       SD   0(R1),F4   ;store result
7       SUBI R1,R1,8    ;decrement pointer 8B (DW)
8       BNEZ R1,Loop    ;branch R1!=zero
9       stall           ;delayed branch slot

9 clock cycles per loop iteration. Rewrite the code to minimize stalls?

Revised FP Loop Minimizing Stalls

1 Loop: LD   F0,0(R1)
2       SUBI R1,R1,8
3       ADDD F4,F0,F2
4       NOP             ;need 2 instructions between FP ALU op and SD
5       BNEZ R1,Loop    ;delayed branch
6       SD   8(R1),F4   ;offset altered when moved past SUBI

Perform the SUBI earlier in the loop. Swap BNEZ and SD by changing the address of the SD.

The code now takes 6 clock cycles per loop iteration. Speedup = 9/6 = 1.5.

Unroll the loop 4 times to make it faster?

Unroll Loop Four Times (straightforward way)

 1 Loop: LD   F0,0(R1)
 2       ADDD F4,F0,F2
 3       SD   0(R1),F4    ;drop SUBI & BNEZ
 4       LD   F6,-8(R1)
 5       ADDD F8,F6,F2
 6       SD   -8(R1),F8   ;drop SUBI & BNEZ
 7       LD   F10,-16(R1)
 8       ADDD F12,F10,F2
 9       SD   -16(R1),F12 ;drop SUBI & BNEZ
10       LD   F14,-24(R1)
11       ADDD F16,F14,F2
12       SD   -24(R1),F16
13       SUBI R1,R1,#32   ;alter to 4*8
14       BNEZ R1,LOOP
15       NOP

15 + 4 x 3 + 1 = 28 clock cycles, or 7 cycles per iteration.

Rewrite the loop to minimize stalls? What does this assume about the number of iterations? (See the C sketch below.)
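As an illustration of what the unrolling assumes (the function and variable names are hypothetical, not from the slides): the unrolled body handles four elements per trip, so it only works directly when the trip count is a multiple of 4; otherwise a cleanup loop must handle the leftover iterations.

/* sketch of 4-way unrolling with a cleanup loop */
void add_scalar_unrolled(double *x, int n, double s)
{
    int i = 0;
    /* unrolled body: four independent element updates per trip,
       and one loop-maintenance update instead of four */
    for (; i + 4 <= n; i += 4) {
        x[i]     = x[i]     + s;
        x[i + 1] = x[i + 1] + s;
        x[i + 2] = x[i + 2] + s;
        x[i + 3] = x[i + 3] + s;
    }
    /* cleanup loop: needed whenever n is not a multiple of 4 */
    for (; i < n; i++)
        x[i] = x[i] + s;
}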

Unrolled Loop That Minimizes Stalls

 1 Loop: LD   F0,0(R1)
 2       LD   F6,-8(R1)
 3       LD   F10,-16(R1)
 4       LD   F14,-24(R1)
 5       ADDD F4,F0,F2
 6       ADDD F8,F6,F2
 7       ADDD F12,F10,F2
 8       ADDD F16,F14,F2
 9       SD   0(R1),F4
10       SD   -8(R1),F8
11       SUBI R1,R1,#32
12       SD   16(R1),F12  ;16-32 = -16 (book incorrect - pg 227)
13       BNEZ R1,LOOP
14       SD   8(R1),F16   ;8-32 = -24, fills the branch delay slot

14 clock cycles, or 3.5 clock cycles per iteration. What was the cost of this speedup?

What assumptions were made when the code was moved?
- It is OK to move the store past the SUBI even though SUBI changes the register (as long as the offset is adjusted).
- It is OK to move the loads before the stores: do we still get the right data?
- When is it safe for the compiler to make such changes?

Summary of Loop Unrolling Example

- Determine that it was legal to move the SD after the SUBI and BNEZ, and find the amount by which to adjust the SD offset.
- Determine that unrolling the loop would be useful by finding that the loop iterations are independent, except for the loop maintenance code.
- Use extra registers to avoid unnecessary constraints that would be forced by using the same registers for different computations.
- Eliminate the extra tests and branches and adjust the loop maintenance code.
- Determine that the loads and stores in the unrolled loop can be interchanged, by observing that the loads and stores from different iterations are independent. This requires analyzing the memory addresses and finding that they do not refer to the same address, which would create data dependences between iterations.
- Schedule the code, preserving any dependences needed to yield the same result as the original code.

Compiler Perspectives on Code Movement

Definitions: the compiler is concerned with dependences in the program; whether a dependence becomes a hardware hazard depends on the given pipeline. The compiler tries to schedule code to avoid hazards.

True data dependence (RAW if it is a hazard for the HW): instruction i produces a result used by instruction j, or instruction j is data dependent on instruction k and instruction k is data dependent on instruction i (a chain of dependences).

If two instructions are dependent, they can't execute in parallel (i.e. the computation order matters).

Dependences are easy to determine for registers (fixed names), but hard for memory: does 100(R4) = 20(R6)? From different loop iterations, does 20(R6) = 20(R6)? (See the C sketch below.)
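A small C sketch of why memory dependences are hard (the function and pointer names are illustrative, not from the slides): with two incoming pointers the compiler generally cannot prove whether a store through one aliases a load through the other, so it must conservatively preserve their order.

/* hypothetical example mirroring "does 100(R4) = 20(R6)?":
   the compiler cannot tell whether a and b point into the same array */
void f(double *a, double *b)
{
    a[12] = 1.0;          /* store through a */
    double t = b[2];      /* load through b: may or may not read the value just stored */
    b[3] = t * 2.0;       /* order must be kept unless non-aliasing is proven */
}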

Where are the data dependencies?

Loop: LD   F0,0(R1)   ;F0=vector element
      ADDD F4,F0,F2   ;add scalar from F2
      SD   0(R1),F4   ;store result
      SUBI R1,R1,8    ;decrement pointer 8B (DW)
      BNEZ R1,Loop    ;branch R1!=zero

Compiler Perspectives on Code Movement

Another kind of dependence is called a name dependence: two instructions use the same name (register or memory location) but don't exchange data.

- Antidependence (WAR if it is a hazard for the HW): instruction j writes a register or memory location that instruction i reads, and instruction i is executed first.
- Output dependence (WAW if it is a hazard for the HW): instruction i and instruction j write the same register or memory location; the ordering between the instructions must be preserved.

Name dependences involving registers can be removed by register renaming (see the C sketch below).
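A minimal C sketch of a name dependence and its removal by renaming (variable and function names are illustrative): the two computations don't exchange data, they merely reuse the same temporary, so giving each value its own name removes the WAR/WAW ordering constraints.

/* the names x, y, s, t are hypothetical */
void rename_example(const double *x, double *y, double s)
{
    /* reusing the temporary t creates name dependences only */
    double t;
    t = x[0] + s;        /* S1: writes t */
    y[0] = t;            /* S2: reads t */
    t = x[1] + s;        /* S3: writes t again -> WAR with S2, WAW with S1 */
    y[1] = t;            /* S4: reads t */

    /* renaming at the source level: each value gets its own name,
       so the two computations are independent and can be overlapped */
    double t0 = x[0] + s;
    double t1 = x[1] + s;
    y[0] = t0;
    y[1] = t1;
}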

Where are the name dependencies?

 1 Loop: LD   F0,0(R1)
 2       ADDD F4,F0,F2
 3       SD   0(R1),F4    ;drop SUBI & BNEZ
 4       LD   F0,-8(R1)
 5       ADDD F4,F0,F2
 6       SD   -8(R1),F4   ;drop SUBI & BNEZ
 7       LD   F0,-16(R1)
 8       ADDD F4,F0,F2
 9       SD   -16(R1),F4  ;drop SUBI & BNEZ
10       LD   F0,-24(R1)
11       ADDD F4,F0,F2
12       SD   -24(R1),F4
13       SUBI R1,R1,#32   ;alter to 4*8
14       BNEZ R1,LOOP
15       NOP

How can we remove them?

Where are the name dependencies?

 1 Loop: LD   F0,0(R1)
 2       ADDD F4,F0,F2
 3       SD   0(R1),F4    ;drop SUBI & BNEZ
 4       LD   F6,-8(R1)
 5       ADDD F8,F6,F2
 6       SD   -8(R1),F8   ;drop SUBI & BNEZ
 7       LD   F10,-16(R1)
 8       ADDD F12,F10,F2
 9       SD   -16(R1),F12 ;drop SUBI & BNEZ
10       LD   F14,-24(R1)
11       ADDD F16,F14,F2
12       SD   -24(R1),F16
13       SUBI R1,R1,#32   ;alter to 4*8
14       BNEZ R1,LOOP
15       NOP

The name dependences are removed by register renaming.

Compiler Perspectives on Code Movement

Again, name dependences are hard for memory accesses: does 100(R4) = 20(R6)? From different loop iterations, does 20(R6) = 20(R6)?

Our example required the compiler to know that if R1 doesn't change, then
  0(R1) ≠ -8(R1) ≠ -16(R1) ≠ -24(R1)
so there were no dependences between some loads and stores, and they could be moved past each other. (Within one iteration the store does write the same address the load read, so that pair must keep its order.) A C sketch of asserting such non-aliasing follows.
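As an aside (not from the original slides): in C, the restrict qualifier is one way the programmer can assert this kind of non-aliasing to the compiler, which is exactly what frees it to interchange loads and stores from different iterations. A minimal sketch, assuming C99:

/* with restrict, the compiler may assume dst and src never overlap,
   so loads from src can be moved past stores to dst when scheduling */
void add_scalar_restrict(double *restrict dst,
                         const double *restrict src,
                         int n, double s)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] + s;
}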

Compiler Perspectives on Code Movement

The final kind of dependence is called control dependence. Example:

  if (p1) { S1; }
  if (p2) { S2; }

S1 is control dependent on p1, and S2 is control dependent on p2 but not on p1.

Two (obvious) constraints on control dependences:
- An instruction that is control dependent on a branch cannot be moved before the branch in such a way that its execution is no longer controlled by the branch.
- An instruction that is not control dependent on a branch cannot be moved to after the branch in such a way that its execution is controlled by the branch.

Sometimes it is possible to violate these constraints and still get correct execution (see the sketch below).
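A minimal C sketch (illustrative, not from the slides) of when violating a control dependence is still correct: an operation that cannot fault and whose result is dead when the branch goes the other way can be hoisted above the branch, i.e. executed speculatively.

/* control-dependent form: the add executes only if p is true */
unsigned guarded(int p, unsigned a, unsigned b, unsigned x)
{
    if (p)
        x = a + b;        /* control dependent on p */
    return x;
}

/* hoisted form: a + b is computed unconditionally; safe here because the
   unsigned add cannot fault and t is unused when p is false, so the
   results are identical */
unsigned hoisted(int p, unsigned a, unsigned b, unsigned x)
{
    unsigned t = a + b;   /* moved above the branch (speculation) */
    if (p)
        x = t;
    return x;
}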

Where are the control dependencies?

 1 Loop: LD   F0,0(R1)
 2       ADDD F4,F0,F2
 3       SD   0(R1),F4
 4       SUBI R1,R1,8
 5       BEQZ R1,exit
 6       LD   F0,0(R1)
 7       ADDD F4,F0,F2
 8       SD   0(R1),F4
 9       SUBI R1,R1,8
10       BEQZ R1,exit
11       LD   F0,0(R1)
12       ADDD F4,F0,F2
13       SD   0(R1),F4
14       SUBI R1,R1,8
15       BEQZ R1,exit
         ...
exit:

What allowed us to remove these control dependencies?

When is it Safe to Unroll a Loop?

Example: where are the data dependences? (A, B, C are distinct and non-overlapping.)

for (i=1; i<=100; i=i+1) {
    A[i+1] = A[i] + C[i];      /* S1 */
    B[i+1] = B[i] + A[i+1];    /* S2 */
}

1. S2 uses the value A[i+1] computed by S1 in the same iteration.
2. S1 uses a value computed by S1 in an earlier iteration, since iteration i computes A[i+1], which is read in iteration i+1. The same is true of S2 for B[i] and B[i+1].

This is a loop-carried dependence: a dependence between iterations. It implies that the iterations are dependent and can't be executed in parallel (a short sketch follows). That was not the case for our prior example, where each iteration was distinct.
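A small illustration (the helper name and array sizes are only for the sketch) of why the recurrence in S1 serializes the iterations: each iteration reads the A element the previous iteration just wrote, so the additions form a chain.

/* writing out the first iterations of S1 shows the serial chain through A */
void s1_chain(double A[102], const double C[101])
{
    A[2] = A[1] + C[1];   /* iteration 1 writes A[2]                     */
    A[3] = A[2] + C[2];   /* iteration 2 must wait for the new A[2]      */
    A[4] = A[3] + C[3];   /* iteration 3 must wait for the new A[3], ... */
}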

When is it Safe to Unroll a Loop?

Example: where are the data dependences? (A, B, C, D are distinct and non-overlapping.)

for (i=1; i<=100; i=i+1) {
    A[i]   = A[i] + B[i];      /* S1 */
    B[i+1] = C[i] + D[i];      /* S2 */
}

1. There is no dependence from S1 to S2. If there were, there would be a cycle in the dependences and the loop would not be parallel. Since this dependence is absent, interchanging the two statements will not affect the execution of S2.
2. On the first iteration of the loop, statement S1 depends on the value of B[1] computed prior to initiating the loop.

Now Safe to Unroll the Loop? (p. 240)

OLD:
for (i=1; i<=100; i=i+1) {
    A[i]   = A[i] + B[i];      /* S1 */
    B[i+1] = C[i] + D[i];      /* S2 */
}

NEW:
A[1] = A[1] + B[1];
for (i=1; i<=99; i=i+1) {
    B[i+1] = C[i] + D[i];
    A[i+1] = A[i+1] + B[i+1];
}
B[101] = C[100] + D[100];

For elements 2-100, S2 now produces a value that is used by S1 in the same iteration, so there is no loop-carried dependence any more.
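As a sanity check (a standalone test program, not from the slides), the two forms can be run on the same inputs and compared; the arrays use 1-based indexing as in the example, so they are declared one element larger than needed.

#include <stdio.h>

#define N 100

int main(void)
{
    /* 1-based arrays: indices 1..N are used, B also uses index N+1 */
    double A1[N + 2], B1[N + 2], A2[N + 2], B2[N + 2], C[N + 2], D[N + 2];
    int i;

    for (i = 0; i <= N + 1; i++) {           /* identical arbitrary initial values */
        A1[i] = A2[i] = i * 0.5;
        B1[i] = B2[i] = i * 0.25;
        C[i] = i * 1.5;
        D[i] = i * 2.0;
    }

    /* OLD form: loop-carried dependence from S2 to S1 of the next iteration */
    for (i = 1; i <= N; i++) {
        A1[i] = A1[i] + B1[i];               /* S1 */
        B1[i + 1] = C[i] + D[i];             /* S2 */
    }

    /* NEW form: the dependence is now within a single iteration */
    A2[1] = A2[1] + B2[1];
    for (i = 1; i <= N - 1; i++) {
        B2[i + 1] = C[i] + D[i];
        A2[i + 1] = A2[i + 1] + B2[i + 1];
    }
    B2[N + 1] = C[N] + D[N];

    for (i = 1; i <= N + 1; i++)
        if (A1[i] != A2[i] || B1[i] != B2[i]) {
            printf("mismatch at i=%d\n", i);
            return 1;
        }
    printf("OLD and NEW forms produce identical results\n");
    return 0;
}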