CS152 Computer Architecture and Engineering - Lecture 15: Advanced Pipelining / Compiler Scheduling

CS152 Computer Architecture and Engineering
Lecture 15: Advanced Pipelining / Compiler Scheduling
October 24, 2001
John Kubiatowicz (http.cs.berkeley.edu/~kubitron)
Lecture slides: http://www-inst.eecs.berkeley.edu/~cs152/
(Lec15.1)

Review: Pipelining (Lec15.2)
- Key to pipelining: smooth flow. Making all instructions the same length can increase performance!
- Hazards limit performance:
  - Structural: need more HW resources
  - Data: need forwarding, compiler scheduling
  - Control: early evaluation & PC, delayed branch, prediction
- Data hazards must be handled carefully:
  - RAW (Read-After-Write) data hazards are handled by forwarding
  - WAW (Write-After-Write) and WAR (Write-After-Read) hazards don't exist in the 5-stage pipeline
- The MIPS I instruction set architecture made the pipeline visible (delayed branch, delayed load)
  - A change in programmer semantics to make the hardware simpler

Recap: Data Hazards (Lec15.3)
[Pipeline diagram: examples of a structural hazard (I-Fetch vs. MemOpFetch), a control hazard (jump), a RAW (read-after-write) data hazard, a WAW (write-after-write) data hazard, and a WAR (write-after-read) data hazard.]

Recap: Data Stationary Control (Lec15.4)
- The Main Control generates the control signals during Reg/Dec:
  - Control signals for Exec (ExtOp, ALUSrc, ...) are used 1 cycle later
  - Control signals for Mem (MemWr, Branch) are used 2 cycles later
  - Control signals for Wr (MemtoReg, RegWr) are used 3 cycles later
[Datapath diagram: the control signals travel with the instruction through the IF/ID, ID/Ex, Ex/Mem, and Mem/Wr pipeline registers, each stage dropping the signals it has already consumed.]

Review: Resolve RAW by forwarding (or bypassing) (Lec15.5)
- Detect the nearest valid write of an operand register and forward it into the op latches, bypassing the remainder of the pipe
- Increase the muxes to add paths from the pipeline registers
- Data Forwarding = Data Bypassing
[Datapath diagram: forward muxes feeding the ALU and data-memory inputs from later pipeline stages.]

Question: Critical Path??? (Lec15.6)
- The bypass path is invariably trouble. Options?
  - Make the logic really fast
  - Move forwarding after the muxes
    - Problem: screws up branches that require forwarding!
    - Use the same tricks as a carry-skip adder to fix this? This option may just push the delay around!
  - Insert an extra cycle for branches that need forwarding?
    - Or: hit the common case of forwarding from the EX stage and stall for a forward from memory?

What about Interrupts, Traps, Faults? (Lec15.7)
- External interrupts:
  - Allow the pipeline to drain, fill with NOPs
  - Load the PC with the interrupt address
- Faults (within an instruction, restartable):
  - Force a trap instruction into IF
  - Disable writes until the trap hits WB
  - Must save multiple PCs, or PC + state
- Recall: Precise Exceptions => the state of the machine is preserved as if the program executed up to the offending instruction
  - All previous instructions completed
  - The offending instruction and all following instructions act as if they have not even started
  - The same system code will work on different implementations

Exception/Interrupts: Implementation questions (Lec15.8)
- 5 instructions, executing in 5 different pipeline stages!
- Who caused the interrupt?
  - IF: page fault on instruction fetch; misaligned memory access; memory-protection violation
  - ID: undefined or illegal opcode
  - EX: arithmetic exception
  - MEM: page fault on data fetch; misaligned memory access; memory-protection violation; memory error
- How do we stop the pipeline? How do we restart it?
- Do we interrupt immediately or wait?
- How do we sort all of this out to maintain preciseness?
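The "detect the nearest valid write and forward" rule from Lec15.5 can be sketched in a few lines of C. This is only a conceptual model of the standard two-source bypass check, not the CS152 datapath itself; the type and field names (PipeReg, reg_write, rd) are made up for the illustration.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Minimal sketch of the RAW-forwarding check for one ALU source operand.
     * Field names are illustrative only. */
    typedef struct {
        bool     reg_write;   /* will this instruction write a register?    */
        uint8_t  rd;          /* destination register number                */
        int32_t  value;       /* result sitting in the pipeline latch       */
    } PipeReg;

    typedef enum { FWD_NONE, FWD_FROM_EXMEM, FWD_FROM_MEMWB } Forward;

    /* Decide where source operand 'rs' of the instruction now in EX should
     * come from.  The nearest (youngest) valid write wins, exactly as the
     * slide says: detect the nearest valid write and forward it. */
    Forward forward_select(uint8_t rs, PipeReg ex_mem, PipeReg mem_wb)
    {
        if (ex_mem.reg_write && ex_mem.rd != 0 && ex_mem.rd == rs)
            return FWD_FROM_EXMEM;          /* bypass from end of EX  */
        if (mem_wb.reg_write && mem_wb.rd != 0 && mem_wb.rd == rs)
            return FWD_FROM_MEMWB;          /* bypass from end of MEM */
        return FWD_NONE;                    /* read the register file */
    }

    int main(void)
    {
        PipeReg ex_mem = { true, 1, 42 };   /* e.g. add r1,r2,r3 just left EX */
        PipeReg mem_wb = { true, 1, 7  };   /* an older write to r1           */
        printf("forward = %d\n", forward_select(1, ex_mem, mem_wb)); /* 1 = EX/MEM */
        return 0;
    }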

Exception Handling (Lec15.9)
[Pipeline diagram for the instruction "lw $2,20($...": each stage has its own exception detector - bad instruction address (IF), bad instruction (ID), overflow (EX), bad data address (MEM) - and only afterwards is the exception allowed to take effect.]

Another look at the exception problem (Lec15.10)
- In flight at the same time: a data TLB fault, a bad instruction, an instruction TLB fault, an overflow (program flow vs. time)
- Use the pipeline to sort this out!
  - Pass the exception status along with the instruction
  - Keep track of the PCs for every instruction in the pipeline
  - Don't act on an exception until it reaches the WB stage
- Handle interrupts through a faulting noop in the IF stage
- When the instruction reaches the end of the MEM stage:
  - Save PC => EPC, interrupt vector addr => PC
  - Turn all instructions in earlier stages into noops!

Resolution: Freeze above & Bubble below (Lec15.11)
- Flush is accomplished by setting the invalid bit in the pipeline
[Datapath diagram: stages above the faulting instruction are frozen, stages below it are bubbled.]

FYI: MIPS R3000 clocking discipline (Lec15.12)
- 2-phase non-overlapping clocks (phi1, phi2)
- A pipeline stage is two (level-sensitive) latches; the phi1/phi2 pair behaves like an edge-triggered register
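The "pass the exception status along with the instruction and act only at WB" idea from Lec15.10 can also be sketched in C. Everything here is illustrative: the ExcCode values are not the real R3000 cause codes, and the structures are stand-ins for the pipeline registers.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative exception causes (not the real R3000 encoding). */
    typedef enum { EXC_NONE, EXC_IFETCH_FAULT, EXC_ILLEGAL_OP,
                   EXC_OVERFLOW, EXC_DFETCH_FAULT } ExcCode;

    /* State an instruction drags down the pipe: its PC plus any exception
     * detected so far.  Nothing is done about 'exc' until WB. */
    typedef struct {
        uint32_t pc;
        bool     valid;      /* false => bubble / noop */
        ExcCode  exc;
    } InstState;

    static uint32_t EPC;     /* exception PC, written when the fault is taken */

    /* Called once per cycle for the instruction entering WB.  Returns true
     * if an exception is taken; younger instructions are squashed. */
    bool writeback(InstState *wb, InstState *earlier_stages, int n_earlier)
    {
        if (!wb->valid || wb->exc == EXC_NONE)
            return false;                     /* commit normally              */
        EPC = wb->pc;                         /* save PC of faulting inst     */
        for (int i = 0; i < n_earlier; i++)   /* squash younger instructions  */
            earlier_stages[i].valid = false;  /* turn them into noops         */
        /* next PC <- interrupt vector address (not modeled here) */
        return true;
    }

    int main(void)
    {
        InstState wb = { 0x400100, true, EXC_OVERFLOW };
        InstState younger[3] = { { 0x400104, true, EXC_NONE },
                                 { 0x400108, true, EXC_DFETCH_FAULT },
                                 { 0x40010c, true, EXC_NONE } };
        if (writeback(&wb, younger, 3))
            printf("took exception, EPC = 0x%x\n", (unsigned)EPC);
        return 0;
    }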

MIPS R3000 Instruction Pipeline (Lec15.13)
- Stages: Inst Fetch | Decode / Reg. Read | ALU / E.A. | Memory | Write Reg
- Resource usage: TLB + I-Cache (fetch), RF (decode/read), ALU (E.A./operation), TLB + D-Cache (memory), register file write (WB)
- Register file: write in phase 1, read in phase 2 => eliminates the bypass from WB
- With the MIPS R3000 pipeline there is no need to forward from the WB stage

Recall: Data Hazard on r1 (Lec15.14)
[Pipeline diagram over time (IF, ID/RF, EX, MEM, WB) for:
    add r1,r2,r3
    sub r4,r1,r3
    and r6,r1,r7
    or  r8,r1,r9
    xor r10,r1,r11
showing where the new value of r1 must be forwarded to the dependent instructions.]

MIPS R3000 Multicycle Operations (Lec15.15)
- Use the control word of the local stage to step through a multicycle operation
- Stall all stages above the multicycle operation in the pipeline; drain (bubble) the stages below it
- Examples: Multiply, Divide, Cache Miss
- Alternatively, launch the multiply/divide into an autonomous unit and only stall the pipe on an attempt to get the result before it is ready
  - This means stalling mflo/mfhi in the decode stage if a multiply/divide is still executing
  - The extra credit in Lab 5 does this
[Figure: a multicycle "mul Rd,Ra,Rb" stepping through the pipeline and writing Rd back to the register file.]

Is CPI = 1 for our pipeline? (Lec15.16)
- Remember that CPI is an average: # cycles / instruction
- CPI here is 1, since the average throughput is 1 instruction every cycle
- What if there are stalls or multi-cycle execution? Usually CPI > 1
- How close can we get to 1??

Recall: Compute CPI? (Lec15.17)
- Start with the base CPI, then add stalls:
    CPI = CPI_base + CPI_stall
    CPI_stall = STALL_type1 x freq_type1 + STALL_type2 x freq_type2 + ...
- Suppose: CPI_base = 1, freq_branch = 20%, freq_load = 30%
  - Branches always cause a 1-cycle stall
  - Loads cause a 100-cycle stall 1% of the time
  - Then: CPI = 1 + (1 x 0.20) + (100 x 0.30 x 0.01) = 1.5
- Multicycle operations? Could treat them as: CPI_stall = (CYCLES - CPI_base) x freq_inst

Administrivia (Lec15.18)
- Get moving on Lab 5! This lab is even harder than Lab 4. Trickier to debug!
  - Start with the unpipelined version? Interesting thought - may or may not help
- Division of labor due tonight at midnight!
- Dynamic scheduling techniques are discussed in the other Hennessy & Patterson book: Computer Architecture: A Quantitative Approach, Chapter 4

Administrivia: Be careful about clock edges in Lab 5! (Lec15.19)
- Since the register file has an edge-triggered write, everything must be set up at the end of the memory stage
- This means that the M register here is not necessary!
[Datapath diagram: Next PC, PC, Inst. Mem, IR, Dcd Ctrl, Reg File, Exec, Mem Access, Data Mem, and the IRex / IRmem / IRwb control pipeline.]

Case Study: MIPS R4000 (200 MHz) (Lec15.20)
8-stage pipeline:
- IF: first half of instruction fetch; PC selection happens here, as well as initiation of the instruction cache access
- IS: second half of the access to the instruction cache
- RF: instruction decode and register fetch, hazard checking, and also instruction cache hit detection
- EX: execution, which includes effective address calculation, ALU operation, and branch target computation and condition evaluation
- DF: data fetch, first half of the access to the data cache
- DS: second half of the access to the data cache
- TC: tag check, determine whether the data cache access hit
- WB: write back for loads and register-register operations
8 stages: what is the impact on load delay? On branch delay? Why?
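The stall-CPI arithmetic on Lec15.17 is easy to check with a few lines of C; the function below is just the CPI_base + sum(stall x frequency) formula from the slide, with the slide's example numbers plugged in.

    #include <stdio.h>

    /* CPI = CPI_base + sum over stall types of (stall cycles x frequency). */
    double cpi(double cpi_base, const double stalls[], const double freqs[], int n)
    {
        double total = cpi_base;
        for (int i = 0; i < n; i++)
            total += stalls[i] * freqs[i];
        return total;
    }

    int main(void)
    {
        /* Slide example: branches (20% of instructions) always stall 1 cycle;
         * loads (30% of instructions) stall 100 cycles 1% of the time.      */
        double stalls[] = { 1.0,  100.0 };
        double freqs[]  = { 0.20, 0.30 * 0.01 };
        printf("CPI = %.2f\n", cpi(1.0, stalls, freqs, 2));  /* prints 1.50 */
        return 0;
    }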

Case Study: MIPS R4000 (Lec15.21)
- TWO-cycle load latency
- THREE-cycle branch latency (conditions evaluated during the EX phase)
  - Delay slot plus two stalls
  - Branch-likely cancels the delay slot if not taken
[Pipeline diagrams showing where the loaded value (after TC) and the branch condition become available relative to the following instructions.]

MIPS R4000 Floating Point (Lec15.22)
- FP Adder, FP Multiplier, FP Divider
- The last step of the FP Multiplier/Divider uses the FP Adder hardware
- 8 kinds of stages in the FP units:
    Stage   Functional unit    Description
    A       FP adder           Mantissa ADD stage
    D       FP divider         Divide pipeline stage
    E       FP multiplier      Exception test stage
    M       FP multiplier      First stage of multiplier
    N       FP multiplier      Second stage of multiplier
    R       FP adder           Rounding stage
    S       FP adder           Operand shift stage
    U                          Unpack FP numbers

MIPS FP Pipe Stages (Lec15.23)
    FP instruction     Stage sequence (cycles 1, 2, 3, ...)
    Add, Subtract      U, S+A, A+R, R+S
    Multiply           U, E+M, M, M, M, N, N+A, R
    Divide             U, A, R, D^28, D+A, D+R, D+R, D+A, D+R, A, R
    Square root        U, E, (A+R)^108, A, R
    Negate             U, S
    Absolute value     U, S
    FP compare         U, A, R
  (Stage legend: M = first multiplier stage, N = second multiplier stage, A = mantissa ADD, R = rounding, D = divide pipeline stage, S = operand shift, E = exception test, U = unpack FP numbers; D^28 and (A+R)^108 denote repeated stages.)

R4000 Performance (Lec15.24)
- Not the ideal CPI of 1:
  - FP structural stalls: not enough FP hardware (parallelism)
  - FP result stalls: RAW data hazards (latency)
  - Branch stalls (2 cycles + unfilled slots)
  - Load stalls (1 or 2 clock cycles)
[Bar chart: CPI broken down into base, load stalls, branch stalls, FP result stalls, and FP structural stalls for eqntott, espresso, gcc, li, doduc, nasa7, ora, spice2g6, su2cor, tomcatv.]

Can we somehow make CPI closer to 1? (Lec15.25)
- Let's assume full pipelining:
  - If we have a 4-cycle instruction, then we need 3 instructions between a producing instruction and its use:
        multf $F0,$F2,$F4
        delay-1
        delay-2
        delay-3
        addf  $F6,$F10,$F0
[Pipeline diagram: earliest forwarding for 4-cycle instructions vs. earliest forwarding for 1-cycle instructions (Fetch, Decode, Ex, Ex, Ex, Ex, WB).]

FP Loop: Where are the Hazards? (Lec15.26)
    Loop: LD    F0,0(R1)   ;F0 = vector element
          ADDD  F4,F0,F2   ;add scalar from F2
          SD    0(R1),F4   ;store result
          SUBI  R1,R1,8    ;decrement pointer 8B (DW)
          BNEZ  R1,Loop    ;branch if R1 != zero
          NOP              ;delayed branch slot
- Assumed FP latencies (clock cycles between producing and using instruction):
    Instruction producing result   Instruction using result   Latency
    FP ALU op                      Another FP ALU op           3
    FP ALU op                      Store double                2
    Load double                    FP ALU op                   1
    Load double                    Store double                0
    Integer op                     Integer op                  0
- Where are the stalls?

FP Loop Showing Stalls (Lec15.27)
    1 Loop: LD    F0,0(R1)   ;F0 = vector element
    2       stall
    3       ADDD  F4,F0,F2   ;add scalar in F2
    4       stall
    5       stall
    6       SD    0(R1),F4   ;store result
    7       SUBI  R1,R1,8    ;decrement pointer 8B (DW)
    8       BNEZ  R1,Loop    ;branch if R1 != zero
    9       stall            ;delayed branch slot
- 9 clocks: rewrite the code to minimize stalls? (latencies as in the table above)

Revised FP Loop Minimizing Stalls (Lec15.28)
    1 Loop: LD    F0,0(R1)
    2       stall
    3       ADDD  F4,F0,F2
    4       SUBI  R1,R1,8
    5       BNEZ  R1,Loop    ;delayed branch
    6       SD    8(R1),F4   ;altered when moved past SUBI
- Swap BNEZ and SD by changing the address of SD
- 6 clocks: unroll the loop 4 times to make it faster? (latencies as in the table above)
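For context, the DLX loop on Lec15.26 is the usual "add a scalar to every element of an array" kernel. A plausible C-level source is sketched below; the function name, argument types, and trip count are assumptions, since the slide only shows the assembly.

    /* Plausible source for the FP loop on Lec15.26 (names and trip count
     * are assumptions; only the assembly appears on the slide). */
    void add_scalar(double *x, int n, double s)
    {
        for (int i = n - 1; i >= 0; i--)   /* R1 steps backwards 8 bytes/trip */
            x[i] = x[i] + s;               /* LD / ADDD / SD per iteration    */
    }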

Unroll Loop Four Times (straightforward way) (Lec15.29)
    1 Loop: LD    F0,0(R1)       ;1 cycle stall follows
    2       ADDD  F4,F0,F2       ;2 cycle stall follows
    3       SD    0(R1),F4       ;drop SUBI & BNEZ
    4       LD    F6,-8(R1)
    5       ADDD  F8,F6,F2
    6       SD    -8(R1),F8      ;drop SUBI & BNEZ
    7       LD    F10,-16(R1)
    8       ADDD  F12,F10,F2
    9       SD    -16(R1),F12    ;drop SUBI & BNEZ
    10      LD    F14,-24(R1)
    11      ADDD  F16,F14,F2
    12      SD    -24(R1),F16
    13      SUBI  R1,R1,#32      ;alter to 4*8
    14      BNEZ  R1,LOOP
    15      NOP
- 15 + 4 x (1 + 2) = 27 clock cycles, or about 6.8 per iteration (assumes R1 is a multiple of 4)
- Rewrite the loop to minimize stalls?

Unrolled Loop That Minimizes Stalls (Lec15.30)
    1 Loop: LD    F0,0(R1)
    2       LD    F6,-8(R1)
    3       LD    F10,-16(R1)
    4       LD    F14,-24(R1)
    5       ADDD  F4,F0,F2
    6       ADDD  F8,F6,F2
    7       ADDD  F12,F10,F2
    8       ADDD  F16,F14,F2
    9       SD    0(R1),F4
    10      SD    -8(R1),F8
    11      SD    -16(R1),F12
    12      SUBI  R1,R1,#32
    13      BNEZ  R1,LOOP
    14      SD    8(R1),F16      ;8 - 32 = -24
- 14 clock cycles, or 3.5 per iteration
- When is it safe to move instructions? What assumptions were made when the code was moved?
  - OK to move the store past SUBI even though SUBI changes a register it uses
  - OK to move loads before stores: do we get the right data?
  - When is it safe for the compiler to do such changes?

Getting CPI < 1: Issuing Multiple Instructions/Cycle (Lec15.31)
- Two main variations: Superscalar and VLIW
- Superscalar: a varying number of instructions/cycle (1 to 6)
  - Parallelism and dependencies determined/resolved by HW
  - IBM PowerPC 604, Sun UltraSparc, DEC Alpha 21164, HP 7100
- Very Long Instruction Words (VLIW): a fixed number of instructions (16)
  - Parallelism determined by the compiler
  - The pipeline is exposed; the compiler must schedule delays to get the right result
- Explicitly Parallel Instruction Computing (EPIC) / Intel:
  - 128-bit packets containing 3 instructions (can execute sequentially)
  - Can link 128-bit packets together to allow more parallelism
  - The compiler determines parallelism; the HW checks dependencies and forwards/stalls

Getting CPI < 1: Issuing Multiple Instructions/Cycle (Lec15.32)
- Superscalar DLX: 2 instructions, 1 FP & 1 anything else
  - Fetch 64 bits/clock cycle; Int on the left, FP on the right
  - Can only issue the 2nd instruction if the 1st instruction issues
  - More ports on the FP registers to do an FP load & an FP op as a pair
    Type              Pipe stages
    Int. instruction  IF ID EX MEM WB
    FP  instruction   IF ID EX MEM WB
    Int. instruction     IF ID EX MEM WB
    FP  instruction      IF ID EX MEM WB
    Int. instruction        IF ID EX MEM WB
    FP  instruction         IF ID EX MEM WB
- The 1-cycle load delay expands to 3 instructions in SS: the instruction in the right half can't use it, nor can the instructions in the next slot
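At the source level, the 4x unrolling on Lec15.29/Lec15.30 corresponds to something like the C sketch below (same assumed kernel as above; n is assumed to be a multiple of 4, matching the slide's assumption that R1 is a multiple of 4).

    /* Loop unrolled 4x, mirroring Lec15.29/Lec15.30.  Assumes n % 4 == 0. */
    void add_scalar_unrolled4(double *x, int n, double s)
    {
        for (int i = n - 4; i >= 0; i -= 4) {
            /* the scheduler can now interleave these four independent
             * load/add/store chains to hide the FP latencies */
            x[i + 3] += s;
            x[i + 2] += s;
            x[i + 1] += s;
            x[i]     += s;
        }
    }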

Loop Unrolling in Superscalar (Lec15.33)
    Integer instruction      FP instruction       Clock cycle
    Loop: LD  F0,0(R1)                             1
          LD  F6,-8(R1)                            2
          LD  F10,-16(R1)    ADDD F4,F0,F2         3
          LD  F14,-24(R1)    ADDD F8,F6,F2         4
          LD  F18,-32(R1)    ADDD F12,F10,F2       5
          SD  0(R1),F4       ADDD F16,F14,F2       6
          SD  -8(R1),F8      ADDD F20,F18,F2       7
          SD  -16(R1),F12                          8
          SD  -24(R1),F16                          9
          SUBI R1,R1,#40                           10
          BNEZ R1,LOOP                             11
          SD  -32(R1),F20                          12
- Unrolled 5 times to avoid delays (+1 due to SS)
- 12 clocks, or 2.4 clocks per iteration

Limits of Superscalar (Lec15.34)
- While the Integer/FP split is simple for the HW, we get a CPI of 0.5 only for programs with:
  - Exactly 50% FP operations
  - No hazards
- If more instructions issue at the same time, decode and issue become harder
  - Even 2-scalar => examine 2 opcodes and 6 register specifiers, and decide if 1 or 2 instructions can issue
- VLIW: trade off instruction space for simple decoding
  - The long instruction word has room for many operations
  - By definition, all the operations the compiler puts in the long instruction word can execute in parallel
  - E.g., 2 integer operations, 2 FP ops, 2 memory refs, 1 branch
    - 16 to 24 bits per field => 7*16 = 112 bits to 7*24 = 168 bits wide
  - Needs a compiling technique that schedules across several branches

Loop Unrolling in VLIW (Lec15.35)
    Memory ref 1     Memory ref 2     FP operation 1    FP operation 2    Int. op/branch   Clock
    LD F0,0(R1)      LD F6,-8(R1)                                                           1
    LD F10,-16(R1)   LD F14,-24(R1)                                                         2
    LD F18,-32(R1)   LD F22,-40(R1)   ADDD F4,F0,F2     ADDD F8,F6,F2                       3
    LD F26,-48(R1)                    ADDD F12,F10,F2   ADDD F16,F14,F2                     4
                                      ADDD F20,F18,F2   ADDD F24,F22,F2                     5
    SD 0(R1),F4      SD -8(R1),F8     ADDD F28,F26,F2                                       6
    SD -16(R1),F12   SD -24(R1),F16                                                         7
    SD -32(R1),F20   SD -40(R1),F24                                       SUBI R1,R1,#48    8
    SD -0(R1),F28                                                         BNEZ R1,LOOP      9
- Unrolled 7 times to avoid delays
- 7 results in 9 clocks, or 1.3 clocks per iteration
- Need more registers in VLIW (EPIC => 128 int + 128 FP)

Software Pipelining (Lec15.36)
- Observation: if the iterations of a loop are independent, then we can get more ILP by taking instructions from different iterations
- Software pipelining: reorganizes loops so that each iteration is made from instructions chosen from different iterations of the original loop (~ Tomasulo in SW)
[Figure: a software-pipelined iteration assembled from instructions of iterations 0 through 4.]
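One way to picture the VLIW word format described on Lec15.34/Lec15.35 is as a record with one slot per functional unit, where every filled slot issues in the same cycle. The sketch below is purely illustrative; the slot type, NOP encoding, and field names are assumptions, not a real ISA.

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t Op;                  /* one operation slot (0 = NOP) */
    #define VLIW_NOP ((Op)0)

    /* Illustrative VLIW "long instruction word": 2 memory ops, 2 FP ops,
     * 1 integer/branch op - the example format from Lec15.34.  With 16-24
     * bits per slot this is the 112- to 168-bit word the slide mentions. */
    typedef struct {
        Op mem1, mem2;                    /* two memory references   */
        Op fp1,  fp2;                     /* two FP operations       */
        Op int_or_branch;                 /* one integer op / branch */
    } VliwWord;

    /* The compiler guarantees all non-NOP slots are independent, so the
     * hardware issues every filled slot of a word in the same cycle.   */
    static int ops_issued(const VliwWord *w)
    {
        const Op slots[5] = { w->mem1, w->mem2, w->fp1, w->fp2, w->int_or_branch };
        int n = 0;
        for (int i = 0; i < 5; i++)
            n += (slots[i] != VLIW_NOP);
        return n;
    }

    int main(void)
    {
        /* e.g. cycle 3 of the unrolled VLIW loop above: 2 loads + 2 FP adds */
        VliwWord w = { 1, 2, 3, 4, VLIW_NOP };   /* placeholder encodings */
        printf("%d ops issue this cycle\n", ops_issued(&w));
        return 0;
    }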

Software Pipelining Example (Lec15.37)
- Before: unrolled 3 times
    1  LD    F0,0(R1)
    2  ADDD  F4,F0,F2
    3  SD    0(R1),F4
    4  LD    F6,-8(R1)
    5  ADDD  F8,F6,F2
    6  SD    -8(R1),F8
    7  LD    F10,-16(R1)
    8  ADDD  F12,F10,F2
    9  SD    -16(R1),F12
    10 SUBI  R1,R1,#24
    11 BNEZ  R1,LOOP
- After: software pipelined (overlapped ops)
    1  SD    0(R1),F4    ;stores M[i]
    2  ADDD  F4,F0,F2    ;adds to M[i-1]
    3  LD    F0,-16(R1)  ;loads M[i-2]
    4  SUBI  R1,R1,#8
    5  BNEZ  R1,LOOP
- Symbolic loop unrolling:
  - Maximize the result-use distance
  - Less code space than unrolling
  - Fill & drain the pipe only once per loop, vs. once per unrolled iteration in loop unrolling
[Figure: SW-pipelined schedule vs. unrolled-loop schedule over time.]

Software Pipelining with Loop Unrolling in VLIW (Lec15.38)
    Memory ref 1     Memory ref 2    FP operation 1    FP operation 2   Int. op/branch   Clock
    LD F0,-48(R1)    ST 0(R1),F4     ADDD F4,F0,F2                                        1
    LD F6,-56(R1)    ST -8(R1),F8    ADDD F8,F6,F2                      SUBI R1,R1,#24    2
    LD F10,-40(R1)   ST 8(R1),F12    ADDD F12,F10,F2                    BNEZ R1,LOOP      3
- Software pipelined across 9 iterations of the original loop
- In each iteration of the above loop, we:
  - Store to m, m-8, m-16 (iterations i-3, i-2, i-1)
  - Compute for m-24, m-32, m-40 (iterations i, i+1, i+2)
  - Load from m-48, m-56, m-64 (iterations i+3, i+4, i+5)
- 9 results in 9 cycles, or 1 clock per iteration
- Average: 3.3 ops per clock, 66% efficiency
- Note: software pipelining needs fewer registers (only 7 registers here, vs. 15 before)

Can we use HW to get CPI closer to 1? (Lec15.39)
- Why do it in HW at run time?
  - Works when the real dependence can't be known at compile time
  - The compiler is simpler
  - Code for one machine runs well on another
- Key idea: allow instructions behind a stall to proceed:
    DIVD  F0,F2,F4
    ADDD  F10,F0,F8
    SUBD  F12,F8,F14
- Out-of-order execution => out-of-order completion

Problems? (Lec15.40)
- Disadvantages?
  - Complexity
  - Precise interrupts are harder! (Talk about this next time)
- How do we prevent WAR and WAW hazards?
- How do we deal with variable latency? Forwarding for RAW hazards gets harder
[Clock-cycle diagram: two LDs, a MULTD, a SUBD, a DIVD, and an ADDD completing out of order, with the WAR and RAW hazards marked.]
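The before/after transformation on Lec15.37 can also be written at the C level: after a short prologue, each steady-state trip stores the result for element i, adds for element i-1, and loads element i-2, and an epilogue drains the last two elements. This is an illustrative sketch of the same assumed kernel as earlier (it requires n >= 2).

    /* Software-pipelined version of the add-scalar kernel (cf. Lec15.37).
     * Illustrative only; assumes n >= 2.  The load->add->store chain of
     * each element is spread across three consecutive loop trips. */
    void add_scalar_swp(double *x, int n, double s)
    {
        double loaded = x[n - 1];      /* prologue: load for first element   */
        double summed = loaded + s;    /* prologue: add for first element    */
        loaded = x[n - 2];             /* prologue: load for second element  */

        for (int i = n - 1; i >= 2; i--) {
            x[i]   = summed;           /* SD   : store M[i]      */
            summed = loaded + s;       /* ADDD : add for M[i-1]  */
            loaded = x[i - 2];         /* LD   : load M[i-2]     */
        }
        /* epilogue: drain the last two in-flight elements */
        x[1] = summed;
        x[0] = loaded + s;
    }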

Summary (Lec15.41)
- Precise interrupts are easy to implement in a 5-stage pipeline:
  - Mark the exception on the pipeline state rather than acting on it
  - Handle exceptions at one place in the pipeline (memory stage / beginning of writeback)
- Loop unrolling => multiple iterations of the loop in software:
  - Amortizes loop overhead over several iterations
  - Gives more opportunity for scheduling around stalls
- Software pipelining => take one instruction from each of several iterations of the loop:
  - Software overlapping of loop iterations
- Very Long Instruction Word machines (VLIW) => multiple operations coded in a single, long instruction:
  - Requires the compiler to decide which ops can be done in parallel
- Trace scheduling => find the common path and schedule code as if branches didn't exist (+ add "fixup code")