Advanced Computer Architecture


ECE 563 Advanced Computer Architecture, Fall 2010
Lecture 2: Review of Metrics and Pipelining

Review from Last Time
- Computer Architecture >> instruction sets
- Computer Architecture skill sets are different:
  - 5 quantitative principles of design
  - Quantitative approach to design
  - Solid interfaces that really work
  - Technology tracking and anticipation

Review (continued)
- Other fields often borrow ideas from architecture
- Quantitative Principles of Design:
  1. Take advantage of parallelism
  2. Principle of locality
  3. Focus on the common case
  4. Amdahl's Law
  5. The Processor Performance Equation
- Careful, quantitative comparisons:
  - Define, quantify, and summarize relative performance
  - Define and quantify relative cost
  - Define and quantify dependability
  - Define and quantify power
- Culture of anticipating and exploiting advances in technology
- Culture of well-defined interfaces that are carefully implemented and thoroughly checked

Metrics Used to Compare Designs
- Cost: die cost and system cost
- Execution time: average and worst-case; latency vs. throughput
- Energy and power: also peak power and peak switching current
- Reliability: resiliency to electrical noise and part failure; robustness to bad software and operator error
- Maintainability: system administration costs
- Compatibility: software costs dominate

Cost of Processor
- Design cost (non-recurring engineering cost, NRE): dominated by engineer-years (~$200K per engineer-year); also mask costs (exceeding $1M per spin)
- Cost of die: die area, die yield (maturity of manufacturing process, redundancy features), cost/size of wafers; die cost ~= f(die area^4) with no redundancy
- Cost of packaging: number of pins (signal + power/ground pins), power dissipation
- Cost of testing: built-in test features? logical complexity of design, choice of circuits (minimum clock rates, leakage currents, I/O drivers)
- The architect affects all of these
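Taken at face value, the slide's rule of thumb that die cost grows roughly with the fourth power of die area implies, for example, that doubling the die area multiplies the cost per good die by about 2^4 = 16. This is an illustrative consequence of the approximation, not a figure from the lecture.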

System-Level Cost Impacts
- Power supply and cooling
- Support chipset
- Off-chip SRAM/DRAM/ROM
- Off-chip peripherals

What is Performance?
- Latency (or response time, or execution time): time to complete one task
- Bandwidth (or throughput): tasks completed per unit time
- What does pipelining improve?

Definition: Performance
- Performance is in units of things per second, so bigger is better
- Can you think of a lower-is-better metric?
- If we are primarily concerned with response time:
  performance(X) = 1 / execution_time(X)
- "X is n times faster than Y" means:
  n = Performance(X) / Performance(Y) = Execution_time(Y) / Execution_time(X)
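As a quick worked example of the definition above (the numbers are illustrative, not from the lecture): if machine X runs a program in 10 seconds and machine Y runs it in 15 seconds, then n = Execution_time(Y) / Execution_time(X) = 15 / 10 = 1.5, so X is 1.5 times faster than Y.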

Performance Guarantees
- (Figure: execution rate of systems A, B, and C across a range of inputs)
- Average rate: A > B > C
- Worst-case rate: A < B < C

Types of Benchmark
- Synthetic benchmarks: fake programs invented to try to match the profile and behavior of real applications, e.g., Dhrystone, Whetstone
- Toy programs: 100-line programs from beginning programming assignments, e.g., N-queens, quicksort, Towers of Hanoi
- Kernels: small, key pieces of real applications, e.g., matrix multiply, FFT, sorting, Livermore Loops, Linpack
- Simplified applications: extract the main computational skeleton of a real application to simplify porting, e.g., NAS parallel benchmarks, TPC
- Real applications: things people actually use their computers for, e.g., car crash simulations, relational databases, Photoshop, Quake

Performance: What to Measure
- Usually rely on benchmarks vs. real workloads
- To increase predictability, collections of benchmark applications (benchmark suites) are popular
- SPECCPU: popular desktop benchmark suite
  - CPU only, split between integer and floating-point programs
  - SPECint2000 has 12 integer programs, SPECfp2000 has 14 floating-point programs
  - SPECCPU2006 to be announced Spring 2006
  - SPECSFS (NFS file server) and SPECWeb (web server) added as server benchmarks
- Transaction Processing Council measures server performance and cost-performance for databases
  - TPC-C: complex queries for online transaction processing
  - TPC-H: models ad hoc decision support
  - TPC-W: a transactional web benchmark
  - TPC-App: application server and web services benchmark

Summarizing Performance
System | Rate (Task 1) | Rate (Task 2)
A      | 10            | 20
B      | 20            | 10
Which system is faster?

...depends who's selling.
Average throughput:
System | Rate (Task 1) | Rate (Task 2) | Average
A      | 10            | 20            | 15
B      | 20            | 10            | 15
Throughput relative to B:
System | Rate (Task 1) | Rate (Task 2) | Average
A      | 0.50          | 2.00          | 1.25
B      | 1.00          | 1.00          | 1.00
Throughput relative to A:
System | Rate (Task 1) | Rate (Task 2) | Average
A      | 1.00          | 1.00          | 1.00
B      | 2.00          | 0.50          | 1.25

Summarizing Performance over a Set of Benchmark Programs
- Arithmetic mean of execution times t_i (in seconds): (1/n) * sum_i(t_i)
- Harmonic mean of execution rates r_i (MIPS/MFLOPS): n / sum_i(1/r_i)
- Both are equivalent to a workload where each program is run the same number of times
- Can add weighting factors to model other workload distributions
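A quick worked example of the two summaries (illustrative numbers, not from the lecture): for execution times of 2 s, 4 s, and 6 s, the arithmetic mean is (2 + 4 + 6) / 3 = 4 s; for execution rates of 100, 200, and 400 MIPS, the harmonic mean is 3 / (1/100 + 1/200 + 1/400) = 3 / 0.0175, which is about 171 MIPS. Note that the harmonic mean is pulled toward the slowest rate, just as the arithmetic mean of times is pulled toward the longest time.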

Normalized Execution Time
- Measure speedup relative to a reference machine: ratio = t_Ref / t_A
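For instance (numbers chosen for illustration), if the reference machine runs a benchmark in 100 s and machine A runs it in 25 s, the normalized ratio is t_Ref / t_A = 100 / 25 = 4, i.e., A is reported as 4 times faster than the reference.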

How to Mislead with Performance Reports
- Select pieces of the workload that work well on your design, ignore others
- Use unrealistic data set sizes for the application (too big or too small)
- Report throughput numbers for a latency benchmark
- Report latency numbers for a throughput benchmark
- Report performance on a kernel and claim it represents an entire application
- Use 16-bit fixed-point arithmetic (because it's fastest on your system) even though the application requires 64-bit floating-point arithmetic
- Use a less efficient algorithm on the competing machine
- Report speedup for an inefficient algorithm (bubblesort)
- Compare hand-optimized assembly code with unoptimized C code
- Compare your design using next year's technology against a competitor's year-old design (1% performance improvement per week)
- Ignore the relative cost of the systems being compared
- Report averages and not individual results
- Report speedup over an unspecified base system, not absolute times
- Report efficiency, not absolute times
- Report MFLOPS, not absolute times (use an inefficient algorithm)
[David Bailey, "Twelve ways to fool the masses when giving performance results for parallel supercomputers"]

Benchmarking for Future Machines
- Variance in performance for parallel architectures is going to be much worse than for serial processors
- SPECcpu means only really work across very similar machine configurations
- What is a good benchmarking methodology?

Why Power Matters
- Packaging costs
- Power supply rail design
- Chip and system cooling costs
- Noise immunity and system reliability
- Battery life (in portable systems)
- Environmental concerns
  - Office equipment accounted for 5% of total US commercial energy usage in 1993
  - Energy Star compliant systems

Power and Energy Figures of Merit
- Power consumption in Watts
  - Determines battery life in hours
  - Rate at which energy is delivered
- Peak power
  - Determines power/ground wiring designs
  - Sets packaging limits
  - Impacts signal noise margin and reliability analysis
- Energy efficiency in Joules
  - Power consumed over time: Energy = power * delay (Joules = Watts * seconds)
  - A lower energy number means less power is needed to perform a computation at the same frequency
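As a small worked example of Energy = power * delay (illustrative numbers, not from the lecture): a design that draws 2 W and finishes a task in 5 s uses 2 * 5 = 10 J, while a "lower-power" design that draws 1 W but takes 12 s uses 1 * 12 = 12 J, so the lower-power design actually consumes more energy for the same computation. The next two slides make this point graphically.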

Power versus Energy
- (Figure: power in Watts plotted against time for Approach 1 and Approach 2)
- Power is the height of the curve: a lower-power design could simply be slower
- Energy is the area under the curve: the two approaches require the same energy

Peak Power versus Lower Energy
- (Figure: power-versus-time curves for systems A and B with their peaks marked; integrate the power curve to get energy)
- System A has higher peak power, but lower total energy
- System B has lower peak power, but higher total energy

A "Typical" RISC ISA 32-bit fixed format instruction (3 formats) 32 32-bit GPR (R0 contains zero) 3-address, reg-reg arithmetic instruction Single address mode for load/store: base + displacement no indirection Simple branch conditions see: SPARC, MIPS, HP PA-Risc, DEC Alpha, IBM PowerPC, CDC 6600, CDC 7600, Cray-1, Cray-2, Cray-3 563 L02.22 Fall 2010

Example: MIPS Instruction Formats
Register-Register:
  bits 31-26: Op | 25-21: Rs1 | 20-16: Rs2 | 15-11: Rd | 10-6: (unused) | 5-0: Opx
Register-Immediate:
  bits 31-26: Op | 25-21: Rs1 | 20-16: Rd | 15-0: immediate
Branch:
  bits 31-26: Op | 25-21: Rs1 | 20-16: Rs2/Opx | 15-0: immediate
Jump / Call:
  bits 31-26: Op | 25-0: target

5 Steps of the MIPS Datapath (Figure A.3, Page A-9)
- Stages: Instruction Fetch, Instruction Decode / Register Fetch, Execute / Address Calc, Memory Access, Write Back
- (Figure: datapath with pipeline registers IF/ID, ID/EX, EX/MEM, MEM/WB; next-PC adder, register file, sign extend, ALU input muxes, data memory, write-back mux)
- Per-stage operation:
  IF:  IR <= mem[PC]; PC <= PC + 4
  ID:  A <= [IR_rs]; B <= [IR_rt]
  EX:  rslt <= A op_IRop B
  WB:  WB <= rslt; [IR_rd] <= WB

Inst. Set Processor Controller
- (Figure: controller state diagram)
- Fetch/decode: IR <= mem[PC]; PC <= PC + 4; A <= [IR_rs]; B <= [IR_rt]
- Then, by instruction type:
  br:  if bop(A,B) then PC <= PC + IR_im
  jmp: PC <= IR_jaddr
  RR:  r <= A op_IRop B;      WB <= r;       [IR_rd] <= WB
  RI:  r <= A op_IRop IR_im;  WB <= r;       [IR_rd] <= WB
  LD:  r <= A + IR_im;        WB <= Mem[r];  [IR_rd] <= WB

5 Steps of the MIPS Datapath (Figure A.3, Page A-9), continued
- (Figure: the same pipelined datapath with pipeline registers IF/ID, ID/EX, EX/MEM, MEM/WB)
- Data-stationary control: local decode for each instruction phase / pipeline stage

Visualizing Pipelining (Figure A.2, Page A-8)
- (Figure: instructions, in program order, flowing through the pipeline stages over clock cycles 1-7)

Pipelining is not quite that easy!
Limits to pipelining: hazards prevent the next instruction from executing during its designated clock cycle.
- Structural hazards: HW cannot support this combination of instructions (single person to fold and put clothes away)
- Data hazards: instruction depends on the result of a prior instruction still in the pipeline (missing sock)
- Control hazards: caused by the delay between the fetching of instructions and decisions about changes in control flow (branches and jumps)

One Memory Port / Structural Hazards (Figure A.4, Page A-14)
- (Figure: a Load followed by Instr 1-4 over clock cycles 1-7; with a single memory port, the Load's MEM access and a later instruction's IF need the memory in the same cycle)

One Memory Port / Structural Hazards (similar to Figure A.5, Page A-15)
- (Figure: the same sequence, but Instr 3 is stalled for one cycle and a bubble propagates through the pipeline)
- How do you bubble the pipe?

Speed Up Equation for Pipelining
CPI_pipelined = Ideal CPI + Average stall cycles per instruction
Speedup = (Ideal CPI x Pipeline depth) / (Ideal CPI + Pipeline stall CPI) x (Cycle time_unpipelined / Cycle time_pipelined)
For the simple RISC pipeline, Ideal CPI = 1:
Speedup = Pipeline depth / (1 + Pipeline stall CPI) x (Cycle time_unpipelined / Cycle time_pipelined)

Example: Dual-port vs. Single-port
- Machine A: dual-ported memory ("Harvard architecture")
- Machine B: single-ported memory, but its pipelined implementation has a 1.05 times faster clock rate
- Ideal CPI = 1 for both
- Loads are 40% of instructions executed
- Machine A is 1.33 times faster
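The 1.33x figure follows from the speedup equation on the previous slide, assuming (as is standard for this example) that each load costs Machine B one structural stall cycle because its single memory port is busy:
  Average instruction time (A) = 1 x cycle_A = cycle_A
  CPI (B) = 1 + 0.4 x 1 = 1.4
  Average instruction time (B) = 1.4 x (cycle_A / 1.05) = 1.33 x cycle_A
  Speedup of A over B = 1.33 / 1 = 1.33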

Data Hazard on R1 (Figure A.6, Page A-17)
- (Figure: pipeline diagram, stages IF ID/RF EX MEM WB, for the sequence:)
  add r1,r2,r3
  sub r4,r1,r3
  and r6,r1,r7
  or  r8,r1,r9
  xor r10,r1,r11
- Among these 4 arrows (uses of r1 after the add), which are hazards and which are not?

Three Generic Data Hazards
- Read After Write (RAW): Instr J tries to read an operand before Instr I writes it
  I: add r1,r2,r3
  J: sub r4,r1,r3
- Caused by a "dependence" (in compiler nomenclature). This hazard results from an actual need for communication.

Three Generic Data Hazards
- Write After Read (WAR): Instr J writes an operand before Instr I reads it
  I: sub r4,r1,r3
  J: add r1,r2,r3
  K: mul r6,r1,r7
- Called an "anti-dependence" by compiler writers. This results from reuse of the name r1.
- Can't happen in the MIPS 5-stage pipeline because:
  - All instructions take 5 stages, and
  - Reads are always in stage 2, and
  - Writes are always in stage 5

Three Generic Data Hazards
- Write After Write (WAW): Instr J writes an operand before Instr I writes it
  I: sub r1,r4,r3
  J: add r1,r2,r3
  K: mul r6,r1,r7
- Called an "output dependence" by compiler writers. This also results from reuse of the name r1.
- Can't happen in the MIPS 5-stage pipeline because:
  - All instructions take 5 stages, and
  - Writes are always in stage 5
- We will see WAR and WAW in more complicated pipes

Forwarding to Avoid Data Hazard (Figure A.7, Page A-19)
- (Figure: pipeline diagram with forwarding paths for the sequence:)
  add r1,r2,r3
  sub r4,r1,r3
  and r6,r1,r7
  or  r8,r1,r9
  xor r10,r1,r11

HW Change for Forwarding (Figure A.23, Page A-37)
- (Figure: the ALU output at the end of EX and the memory output at the end of MEM are fed back through muxes at the ALU inputs, bypassing the register file; pipeline registers ID/EX, EX/MEM, MEM/WB)
- What circuit detects and resolves this hazard?

Forwarding to Avoid LW-SW Data Hazard (Figure A.8, Page A-20)
- (Figure: pipeline diagram with forwarding for the sequence:)
  add r1,r2,r3
  lw  r4, 0(r1)
  sw  r4,12(r1)
  or  r8,r6,r9
  xor r10,r9,r11

Data Hazard Even with Forwarding (Figure A.9, Page A-21)
- (Figure: pipeline diagram for the sequence:)
  lw  r1, 0(r2)
  sub r4,r1,r6
  and r6,r1,r7
  or  r8,r1,r9

Data Hazard Even with Forwarding (similar to Figure A.10, Page A-21)
- (Figure: the same sequence, but the sub, and, and or instructions are stalled one cycle behind the lw; a bubble is inserted)
  lw  r1, 0(r2)
  sub r4,r1,r6
  and r6,r1,r7
  or  r8,r1,r9
- Is this the same bubble?

Software Scheduling to Avoid Load Hazards
Try producing fast code for
  a = b + c;
  d = e - f;
assuming a, b, c, d, e, and f are in memory.
Slow code:
  LW  Rb,b
  LW  Rc,c
  ADD Ra,Rb,Rc
  SW  a,Ra
  LW  Re,e
  LW  Rf,f
  SUB Rd,Re,Rf
  SW  d,Rd
The compiler optimizes for performance; the hardware checks for safety. (One possible rescheduling is sketched below.)
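One possible "fast code" ordering, shown for illustration (the particular interleaving is our sketch, using only the instructions from the slow code above): hoisting the later loads above the first use separates each load from the instruction that consumes its result, removing the load-use stalls.
  LW  Rb,b
  LW  Rc,c
  LW  Re,e
  ADD Ra,Rb,Rc
  LW  Rf,f
  SW  a,Ra
  SUB Rd,Re,Rf
  SW  d,Rd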

Control Hazard on Branches: Three-Stage Stall
  10: beq r1,r3,36
  14: and r2,r3,r5
  18: or  r6,r1,r7
  22: add r8,r1,r9
  36: xor r10,r1,r11
How to deal with the 3 instructions in between?

Branch Stall Impact
- If CPI = 1, 30% branches, and a 3-cycle stall => new CPI = 1.9!
- Two-part solution:
  - Determine whether the branch is taken or not sooner, AND
  - Compute the taken-branch address earlier
- MIPS branches test whether a register = 0 or != 0
- MIPS solution:
  - Move the zero test to the ID/RF stage (use xor)
  - Add an adder to calculate the new PC in the ID/RF stage
  - 1 clock cycle penalty for a branch versus 3
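The CPI figures above follow directly from CPI = ideal CPI + (branch frequency x branch penalty): with a 3-cycle stall, CPI = 1 + 0.30 x 3 = 1.9; after moving branch resolution into ID/RF so the penalty drops to 1 cycle, CPI = 1 + 0.30 x 1 = 1.3 (the 1.3 figure is implied by the slide rather than stated on it).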

Pipelined MIPS Datapath (Figure A.24, Page A-38)
- (Figure: the five-stage datapath with the branch-target adder and zero test moved into the Instr. Decode / Register Fetch stage)
- Interplay of instruction set design and cycle time.

Four Branch Hazard Alternatives
#1: Stall until branch direction is clear
#2: Predict branch not taken
- Execute successor instructions in sequence
- Squash instructions in the pipeline if the branch is actually taken
- Advantage of late pipeline state update
- 47% of MIPS branches are not taken on average
- PC+4 is already calculated, so use it to get the next instruction
#3: Predict branch taken
- 53% of MIPS branches are taken on average
- But the branch target address has not yet been calculated in MIPS
  - MIPS still incurs a 1-cycle branch penalty
  - Other machines: branch target known before outcome

Four Branch Hazard Alternatives
#4: Delayed branch
- Define the branch to take place AFTER a following instruction:
    branch instruction
    sequential successor 1
    sequential successor 2
    ...
    sequential successor n
    branch target if taken
  (branch delay of length n)
- A 1-slot delay allows a proper decision and branch target address in the 5-stage pipeline
- MIPS uses this

Scheduling Branch Delay Slots (Fig A.14)
A. From before the branch:
     add $1,$2,$3
     if $2=0 then
       (delay slot)
   becomes
     if $2=0 then
       add $1,$2,$3
B. From the branch target:
     sub $4,$5,$6
     add $1,$2,$3
     if $1=0 then
       (delay slot)
   becomes
     add $1,$2,$3
     if $1=0 then
       sub $4,$5,$6
C. From fall-through:
     add $1,$2,$3
     if $1=0 then
       (delay slot)
     or  $7,$8,$9
     sub $4,$5,$6
   becomes
     add $1,$2,$3
     if $1=0 then
       or $7,$8,$9
     sub $4,$5,$6
- A is the best choice: it fills the delay slot and reduces instruction count (IC)
- In B, the sub instruction may need to be copied, increasing IC

Delayed Branch
- Compiler effectiveness for a single branch delay slot:
  - Fills about 60% of branch delay slots
  - About 80% of instructions executed in branch delay slots are useful in computation
  - So about 50% (60% x 80%) of slots are usefully filled
- Delayed-branch downside: as processors go to deeper pipelines and multiple issue, the branch delay grows and more than one delay slot is needed
  - Delayed branching has lost popularity compared to more expensive but more flexible dynamic approaches
  - Growth in available transistors has made dynamic approaches relatively cheaper

Evaluating Branch Alternatives
Pipeline speedup = Pipeline depth / (1 + Branch frequency x Branch penalty)
Assume 4% unconditional branches, 6% conditional branches untaken, 10% conditional branches taken.

Scheduling scheme  | Branch penalty | CPI  | Speedup vs. unpipelined | Speedup vs. stall
Stall pipeline     | 3              | 1.60 | 3.1                     | 1.0
Predict taken      | 1              | 1.20 | 4.2                     | 1.33
Predict not taken  | 1              | 1.14 | 4.4                     | 1.40
Delayed branch     | 0.5            | 1.10 | 4.5                     | 1.45
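The CPI column can be reproduced from CPI = 1 + (penalized branch fraction x penalty), using the branch mix above (20% total branches); the derivation is not on the slide, but it matches the table: stall pipeline, 1 + 0.20 x 3 = 1.60; predict taken (every branch still pays 1 cycle in MIPS), 1 + 0.20 x 1 = 1.20; predict not taken (only taken conditional and unconditional branches pay), 1 + (0.10 + 0.04) x 1 = 1.14; delayed branch (average penalty 0.5), 1 + 0.20 x 0.5 = 1.10. The speedup-vs-unpipelined column is then the pipeline depth (5) divided by CPI.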

Problems with Pipelining
- Exception: an unusual event happens to an instruction during its execution
  - Examples: divide by zero, undefined opcode
- Interrupt: a hardware signal to switch the processor to a new instruction stream
  - Example: a sound card interrupts when it needs more audio output samples (an audio click happens if it is left waiting)
- Problem: the exception or interrupt must appear between two instructions (I_i and I_i+1)
  - The effect of all instructions up to and including I_i is totally complete
  - No effect of any instruction after I_i can take place
- The interrupt (exception) handler either aborts the program or restarts at instruction I_i+1

Precise Exceptions in In-Order Pipelines
Key observation: architected state only changes in the memory-access and register-write stages.

Summary: Metrics and Pipelining
- Machines are compared over many metrics: cost, performance, power, reliability, compatibility, ...
- Difficult to compare widely differing machines on a benchmark suite
- Control via state machines and microprogramming
- Pipelining: just overlap tasks; easy if tasks are independent
- Speedup is at most the pipeline depth; if ideal CPI is 1, then:
  Speedup = Pipeline depth / (1 + Pipeline stall CPI) x (Cycle time_unpipelined / Cycle time_pipelined)
- Hazards limit performance on computers:
  - Structural: need more HW resources
  - Data (RAW, WAR, WAW): need forwarding, compiler scheduling
  - Control: delayed branch, prediction
- Exceptions and interrupts add complexity
- Next time: read Appendix C!

About ECE 563
- Next week's class is cancelled!!!
- The class web site is up: http://www.ece.rutgers.edu/~yyzhang/fall10/
  - Syllabus, notes, links, project descriptions
- Please check the sakai page as well