15-740, November 7, 2014: Prediction

So far (instruction-throughput bounds):
- 1 cycle per instruction (no pipelining): I <= 1
- p-stage pipelining: I <= p
- bypassing/forwarding: I <= p
- superscalar, issue width w: I <= wp
- OoO superscalar, issue width w: I <= wp
- VLIW, f function units: I <= fp
- fine-grained MT: I <= fp
- SMT, issue width w: I <= wp
- multicore (c cores): I <= wpc

We are limited by control hazards. The next steps require speculation: prediction plus recovery.

Control Flow Penalty

Modern processors may have more than 10 pipeline stages between next-PC calculation and branch resolution. How much work is lost if the pipeline doesn't follow the correct instruction flow? Roughly (loop length x pipeline width) + buffers. [Figure: next fetch started vs. branch executed, across I-cache, Fetch Buffer, Issue Buffer, Func. Units, Result Buffer, and Arch. State in the Fetch, Decode, Execute, and Commit stages.]

Reducing Control Flow Penalty

Software solutions:
- Eliminate branches: loop unrolling, software pipelining; increases the run length
- Reduce resolution time: instruction scheduling; compute the branch condition as early as possible (of limited value, because branches are often on the critical path through the code)

Hardware solutions:
- Find something else to do: delay slots replace pipeline bubbles with useful work (requires software cooperation), but quickly see diminishing returns
- Speculate: branch prediction, i.e., speculative execution of instructions beyond the branch; many advances in accuracy

Branch Prediction

Motivation: branch penalties limit the performance of deeply pipelined processors. Modern branch predictors have high accuracy (>95%) and can reduce branch penalties significantly.

Required hardware support:
- Prediction structures: branch history tables, branch target buffers, etc.
- Mispredict recovery mechanisms: keep result computation separate from commit, kill instructions following the branch in the pipeline, and restore state to that following the branch

Importance of Branch Prediction

Consider a 4-way superscalar with 8 pipeline stages from fetch to dispatch, an 80-entry ROB, and 3 cycles from issue to branch resolution. On a mispredict, it could throw away 8*4 + (80-1) = 111 instructions. Improving from 90% to 95% prediction accuracy removes 50% of branch mispredicts: if 1/6 of instructions are branches, we move from 60 instructions between mispredicts to 120 instructions between mispredicts.

Static Branch Prediction

The overall probability that a branch is taken is ~60-70%, but: backward branches are ~90% taken, forward branches ~50%. An ISA can attach preferred-direction semantics to branches, e.g., Motorola MC88110 bne0 (preferred taken) and beq0 (not taken). An ISA can also allow an arbitrary choice of statically predicted direction, e.g., HP PA-RISC, Intel IA-64; this is typically reported as ~80% accurate.

Dynamic Branch Prediction: learning based on past behavior

- Temporal correlation: the way a branch resolves may be a good predictor of the way it will resolve at its next execution
- Spatial correlation: several branches may resolve in a highly correlated manner (a preferred path of execution)
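The arithmetic in the importance example above is worth checking; a quick sketch using the slide's numbers:

```python
# Worked numbers from the example above: a 4-way superscalar with
# 8 fetch-to-dispatch stages and an 80-entry ROB.
fetch_stages = 8
width = 4
rob_entries = 80

# Instructions that could be thrown away on a mispredict
flushed = fetch_stages * width + (rob_entries - 1)
print(flushed)  # 111

# Instructions between mispredicts if 1/6 of instructions are branches
branch_freq = 1 / 6
for accuracy in (0.90, 0.95):
    mispredict_rate = branch_freq * (1 - accuracy)
    print(round(1 / mispredict_rate))  # 60 at 90% accuracy, 120 at 95%
```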

One-Bit Branch History Predictor

For each branch, remember the last way the branch went. This has a problem with loop-closing backward branches, as two mispredicts occur on every loop execution:
1. the first iteration predicts the loop backward branch not taken (the loop was exited last time)
2. the last iteration predicts the loop backward branch taken (the loop continued last time)

Branch Prediction Bits

Assume 2 BP bits per instruction, and change the prediction only after two consecutive mistakes. The BP state is (predict taken / predict not taken) x (last prediction right / wrong): a single wrong outcome moves the predictor to a "take, wrong" or "not take, wrong" state, and only a second consecutive mistake flips the predicted direction.

Branch History Table (BHT)

During fetch, the instruction opcode tells us whether it is a branch; k low-order PC bits index a 2^k-entry BHT with 2 bits per entry, which supplies the taken / not-taken prediction. A 4K-entry BHT with 2 bits per entry gives ~80-90% correct predictions.

Exploiting Spatial Correlation (Yeh and Patt, 1992)

if (x[i] < 7) then y += 1;
if (x[i] < 5) then c -= 4;

If the first condition is false, the second condition is also false. A history register, H, records the direction of the last N branches executed by the processor.
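The 2-bit scheme above can be sketched as a table of saturating counters. This is a minimal illustrative model, not the slides' hardware; the class name and default size are assumptions (the 4K-entry/12-bit index matches the BHT on the slide):

```python
# Branch history table of 2-bit saturating counters, indexed by
# low-order PC bits. Counter values 0-1 predict not-taken, 2-3 predict
# taken; the counter must be wrong twice in a row before the
# prediction flips (the "two consecutive mistakes" rule).

class TwoBitBHT:
    def __init__(self, index_bits=12):           # 4K entries, as on the slide
        self.mask = (1 << index_bits) - 1
        self.table = [1] * (1 << index_bits)     # start weakly not-taken

    def predict(self, pc):
        return self.table[pc & self.mask] >= 2   # True = predict taken

    def update(self, pc, taken):
        i = pc & self.mask
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)
```

For a loop-closing branch, the hysteresis means the single not-taken outcome at loop exit does not flip the prediction, fixing the one-bit predictor's double-mispredict problem.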

Two-Level Branch Predictor

The Pentium Pro uses the result of the last two branches to select one of four sets of BHT bits (~95% correct): a 2-bit global branch history shift register shifts in the taken / not-taken result of each branch and, together with k PC bits, indexes the BHT.

Speculating Both Directions

An alternative to branch prediction is to execute both directions of a branch speculatively:
- the resource requirement is proportional to the number of concurrent speculative executions
- only half the resources engage in useful work when both directions of a branch are executed speculatively
- branch prediction takes fewer resources than speculative execution of both paths

With accurate branch prediction, it is more cost-effective to dedicate all resources to the predicted direction!

Limitations of BHTs

A BHT only predicts the branch direction, so it cannot redirect the fetch stream until after the branch target is determined. There is therefore a penalty even for a correctly predicted taken branch, and a larger jump register penalty. [Figure: UltraSPARC III fetch pipeline, stages A (PC generation/mux), P (instruction fetch stage 1), F (instruction fetch stage 2), B (branch address calc / begin decode), I (complete decode), J (steer instructions to functional units), R (register file read), E (integer execute), plus another 6 stages of execute pipeline.]

Branch Target Buffer (BTB)

A 2^k-entry direct-mapped BTB (it can also be associative) is indexed by k PC bits; each entry holds a valid bit, a tag, and the predicted target. Keep both the branch PC and the target PC in the BTB; PC+4 is fetched if the match fails. Only taken branches and jumps are held in the BTB, so the next PC is determined before the branch is fetched and decoded.
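The two-level idea can be sketched as four banks of 2-bit counters selected by a 2-bit global history register. This is an illustrative model with assumed parameters, not the Pentium Pro's actual tables:

```python
# Two-level predictor: a 2-bit global history shift register selects
# one of four BHT banks; low-order PC bits index within the bank.
class TwoLevelPredictor:
    def __init__(self, index_bits=10):
        self.mask = (1 << index_bits) - 1
        self.history = 0                                # last two outcomes
        self.banks = [[1] * (1 << index_bits) for _ in range(4)]

    def predict(self, pc):
        return self.banks[self.history][pc & self.mask] >= 2

    def update(self, pc, taken):
        bank = self.banks[self.history]
        i = pc & self.mask
        bank[i] = min(3, bank[i] + 1) if taken else max(0, bank[i] - 1)
        # shift the actual outcome into the global history
        self.history = ((self.history << 1) | int(taken)) & 3
```

A strictly alternating branch (taken, not-taken, taken, ...) defeats any single 2-bit counter, but here each history context sees a consistent next outcome, so the predictor learns the pattern.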

Combining BTB and BHT

BTB entries are considerably more expensive than BHT entries, but the BTB can redirect fetch at an earlier pipeline stage and can accelerate indirect branches (JR). The BHT can hold many more entries and is more accurate; sitting in a later pipeline stage, it corrects the BTB when the BTB misses a predicted taken branch. The BTB and BHT are only updated after the branch resolves in the E stage.

Uses of Jump Register (JR)

- Switch statements (jump to the address of the matching case): the BTB works well if the same case is used repeatedly
- Dynamic function calls (jump to a run-time function address): the BTB works well if the same function is usually called (e.g., in C++ programming, when objects have the same type in a virtual function call)
- Subroutine returns (jump to the return address): the BTB works well only if the caller usually returns to the same place, but often one function is called from many distinct call sites!

Subroutine Return Stack

A small structure to accelerate JR for subroutine returns, typically much more accurate than a BTB. Example: fa() { fb(); } fb() { fc(); } fc() { fd(); }. Push the return address when a function call executes (the stack holds the return points in fa, fb, and fc in turn); pop the return address when a subroutine return is decoded. Typically k = 8-16 entries.

Return Stack in Pipeline

How do we use the return stack (RS) in a deep fetch pipeline? We only know whether an instruction is a subroutine call/return at decode, and pushing/popping the RS after decode gives a large bubble in the fetch stream.
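The return stack above can be sketched in a few lines. The overflow policy (drop the oldest entry) is one common choice, assumed here rather than taken from the slides:

```python
# Fixed-depth return-address stack (k entries, as above). On overflow
# the oldest entry is dropped, so deep recursion degrades gracefully
# instead of corrupting younger predictions.
class ReturnStack:
    def __init__(self, k=8):
        self.k = k
        self.stack = []

    def push(self, return_addr):        # on call: push the return point
        if len(self.stack) == self.k:
            self.stack.pop(0)           # drop the oldest entry
        self.stack.append(return_addr)

    def pop(self):                      # on return: predicted target
        return self.stack.pop() if self.stack else None
```

The LIFO discipline is exactly why this beats a BTB for returns: a BTB entry remembers one target per instruction, while the stack tracks the matching call site even when a function is called from many places.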

Return Stack in Pipeline (continued)

Alternatively, we can remember whether an instruction is a subroutine call/return using a BTB-like structure: instead of a target, just store a push/pop bit. The RS can then be pushed/popped before instructions are decoded, with the RS prediction checked later in the pipeline.

In-Order vs. Out-of-Order Branch Prediction

In-order issue (Fetch, Decode, Execute, Commit; branch prediction at fetch, resolution at execute): speculative fetch but not speculative execution. The branch resolves before later instructions complete, and completed values are held in the bypass network until commit.

Out-of-order issue (Fetch, Decode, ROB/Execute, Commit): speculative execution, with branches resolved after later instructions complete. Completed values are held in rename registers in the ROB or in a unified physical register file until commit.

Both styles of machine can use the same branch predictors in the front-end fetch pipeline, and both can execute multiple instructions per cycle. It is common to have 10-30 pipeline stages in either style of design.

InO vs. OoO Mispredict Recovery

In-order execution:
- Design so that no instruction issued after the branch can write back before the branch resolves
- Kill all instructions in the pipeline behind a mispredicted branch, and inject the correct path into fetch

Out-of-order execution:
- Multiple instructions following the branch in program order can complete before the branch resolves
- A simple solution would be to handle a mispredict like a precise trap
- The problem: that means waiting until the mispredicted branch reaches the head of the ROB before recovering, needlessly delaying the restart

Branch Misprediction in Pipeline

We can have multiple unresolved branches in the ROB, and can resolve branches out of order by killing all the instructions in the ROB that follow a mispredicted branch. The MIPS R10K uses four mask bits to tag instructions that are dependent on up to four speculative branches; a mask bit is cleared as its branch resolves, and reused for the next branch.
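The R10K-style mask-bit scheme can be sketched as follows. The data structure is an illustrative assumption; only the mechanism (one bit per unresolved branch, cleared on correct resolution, used to kill on mispredict) comes from the slides:

```python
# Each in-flight instruction carries a mask with one bit per
# unresolved speculative branch it depends on (four tags in the R10K).
def resolve_branch(instructions, tag, mispredicted):
    """instructions: list of dicts, each with a 'mask' bitmap of the
    speculative branches it depends on. Returns the surviving list."""
    bit = 1 << tag
    if mispredicted:
        # kill every instruction dependent on the mispredicted branch
        return [i for i in instructions if not (i["mask"] & bit)]
    # branch resolved correctly: clear its bit so the tag can be reused
    for i in instructions:
        i["mask"] &= ~bit
    return instructions
```

Because each branch has its own bit, branches can resolve out of order: killing the dependents of one mispredicted branch leaves instructions that depend only on other, still-unresolved branches intact.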

Rename Table Recovery

We have to quickly recover the rename table on branch mispredicts. The MIPS R10K has only four snapshots, one for each of up to four outstanding speculative branches; the Alpha 21264 has 80 snapshots, one per ROB instruction.

Improving Instruction Fetch

Performance of speculative out-of-order machines is often limited by instruction fetch bandwidth:
- speculative execution can fetch 2-3x more instructions than are committed
- mispredict penalties are dominated by the time to refill the instruction window
- taken branches are particularly troublesome

Increasing Taken Branch Bandwidth (Alpha 21264 I-Cache)

Fold the 2-way tags and BTB into a predicted next block: each cached block (4 instructions) stores a line prediction and a way prediction, giving a fast fetch path. Tag checks, instruction decode, and branch prediction are taken out of the critical loop, so the critical loop runs at raw RAM speed (1 cycle at ~1 GHz). A 2-bit hysteresis counter per block prevents overtraining. [Figure: PC generation, branch prediction, instruction decode, and validity checks around a 2-way I-cache with line-predict and way-predict fields and hit/miss/way outputs.]

Tournament Branch Predictor (Alpha 21264)

A choice predictor learns whether it is best to use local or global branch history in predicting the next branch. The components are a local history table (1,024 x 10b) feeding a local prediction table (1,024 x 3b), a global prediction table (4,096 x 2b), and a choice prediction table (4,096 x 2b), all driven by a 12-bit global history. The global history is speculatively updated but restored on a mispredict. The claim is 90-100% success on a range of applications.
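The tournament structure can be sketched with simplified components. Table sizes and the exact counter policies here are assumptions for illustration; the 21264's real local predictor uses per-branch history patterns rather than a plain counter table:

```python
# Tournament predictor sketch: a per-PC counter table ("local"), a
# global-history-indexed counter table ("global"), and a choice table
# that learns, per history, which component to trust.
class Tournament:
    def __init__(self, pc_bits=10, hist_bits=12):
        self.mask = (1 << pc_bits) - 1
        self.hmask = (1 << hist_bits) - 1
        self.local = [1] * (1 << pc_bits)
        self.glob = [1] * (1 << hist_bits)
        self.choice = [1] * (1 << hist_bits)   # <2: use local, >=2: global
        self.history = 0

    def predict(self, pc):
        l = self.local[pc & self.mask] >= 2
        g = self.glob[self.history] >= 2
        return g if self.choice[self.history] >= 2 else l

    def update(self, pc, taken):
        h, i = self.history, pc & self.mask
        l, g = self.local[i] >= 2, self.glob[h] >= 2
        if l != g:  # train the chooser only when the components disagree
            if g == taken:
                self.choice[h] = min(3, self.choice[h] + 1)
            else:
                self.choice[h] = max(0, self.choice[h] - 1)
        self.local[i] = min(3, self.local[i] + 1) if taken else max(0, self.local[i] - 1)
        self.glob[h] = min(3, self.glob[h] + 1) if taken else max(0, self.glob[h] - 1)
        self.history = ((h << 1) | int(taken)) & self.hmask
```

Training the chooser only on disagreement is the key design point: when both components are right (or both wrong), there is no information about which one to prefer.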

Taken Branch Limit

Integer codes have a taken branch every 6-9 instructions. To avoid a fetch bottleneck, we must execute multiple taken branches per cycle when increasing performance. This implies:
- predicting multiple branches per cycle
- fetching multiple non-contiguous blocks per cycle

Branch Address Cache (Yeh, Marr, Patt)

Extend the BTB to return multiple branch predictions per cycle: each of the 2^k entries holds a valid bit, a tag, predicted target #1 with its length, and predicted target #2, so a single lookup yields the next two predicted fetch blocks.

Fetching Multiple Basic Blocks

Fetching multiple non-contiguous blocks requires either:
- a multiported cache: expensive
- interleaving: bank conflicts will occur

Merging multiple blocks to feed to the decoders also adds latency, increasing the mispredict penalty and reducing branch throughput.

Trace Cache

Key idea: pack multiple non-contiguous basic blocks into one contiguous trace cache line, so a single fetch brings in multiple basic blocks. The trace cache is indexed by the start address and the next n branch predictions. It was used in the Intel Pentium 4 processor to hold decoded uops.

So far (instruction-throughput bounds):
- 1 cycle per instruction (no pipelining): I <= 1
- p-stage pipelining: I <= p
- bypassing/forwarding: I <= p
- superscalar, issue width w: I <= wp
- OoO superscalar, issue width w: I <= wp
- VLIW, f function units: I <= fp
- fine-grained MT: I <= fp
- SMT, issue width w: I <= wp
- multicore (c cores): I <= wpc

Having dealt with control hazards, the next steps require speculation about memory.

Load-Store Queue Design

After control hazards, data hazards through memory are probably the next most important bottleneck to superscalar performance. Modern superscalars use very sophisticated load-store reordering techniques to reduce effective memory latency by allowing loads to be issued speculatively.

Speculative Store Buffer

Just like register updates, stores should not modify memory until after the instruction is committed; a speculative store buffer is a structure introduced to hold speculative store data.
- During decode, a store buffer slot is allocated in program order
- Stores are split into store-address and store-data micro-operations
- Store-address execution writes the tag; store-data execution writes the data
- The store commits when it is the oldest instruction and both address and data are available: clear the speculative bit and eventually move the data to the cache along the store commit path into the L1 cache
- On store abort: clear the valid bit

Loads can bypass from the speculative store buffer while also probing the L1 cache. If the data is in both the store buffer and the cache, which should we use? The speculative store buffer. If the same address is in the store buffer twice, which should we use? The youngest store older than the load.
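The load-bypass priority rules above can be sketched directly. The function and its representation of the buffer are illustrative assumptions, not the hardware's actual CAM search:

```python
# Load lookup against a speculative store buffer: the youngest store
# older than the load wins; the cache is used only if no store matches.
def load_value(load_addr, store_buffer, cache):
    """store_buffer: list of (addr, value) for uncommitted stores older
    than the load, in program order (oldest first)."""
    for addr, value in reversed(store_buffer):   # youngest-first scan
        if addr == load_addr:
            return value                         # bypass from store buffer
    return cache.get(load_addr)                  # fall back to the L1 cache
```

Scanning youngest-first encodes both answers from the slide: the store buffer beats the cache (its data is newer), and among duplicate addresses the youngest older store beats the rest.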

Memory Dependencies

sd x1, (x2)
ld x3, (x4)

When can we execute the load?

In-Order Memory Queue

Execute all loads and stores in program order: a load or store cannot leave the ROB for execution until all previous loads and stores have completed execution. We can still execute loads and stores speculatively, and out of order with respect to other instructions, but we need a structure to handle memory ordering.

Conservative OoO Load Execution

sd x1, (x2)
ld x3, (x4)

We can execute the load before the store if both addresses are known and x4 != x2. Each load address is compared with the addresses of all previous uncommitted stores; a partial conservative check (e.g., on the bottom 12 bits of the address) can save hardware. Don't execute the load if any previous store address is not known (MIPS R10K, 16-entry address queue).

Address Speculation

sd x1, (x2)
ld x3, (x4)

Guess that x4 != x2 and execute the load before the store address is known. This requires holding all completed but uncommitted load/store addresses in program order. If we subsequently find x4 == x2, squash the load and all following instructions. There is a large penalty for inaccurate address speculation.
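The conservative check above can be sketched as a predicate. The function is illustrative; the 12-bit width follows the slide's example of a partial address compare:

```python
# Partial conservative address check: compare only the low 12 bits of
# the load address against each older, uncommitted store. A low-bits
# match (or any unknown store address) conservatively delays the load;
# only a guaranteed mismatch lets it issue early.
LOW_MASK = (1 << 12) - 1

def may_execute_load(load_addr, older_store_addrs):
    """older_store_addrs: addresses of older uncommitted stores, with
    None for a store whose address is not yet known."""
    for s in older_store_addrs:
        if s is None:                      # unknown address: must wait
            return False
        if (s & LOW_MASK) == (load_addr & LOW_MASK):
            return False                   # possible alias: must wait
    return True
```

Comparing fewer bits can only create false matches, never false mismatches, so the partial check trades some lost reordering opportunity for a smaller comparator without ever breaking correctness.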

Memory Dependence Prediction (Alpha 21264)

sd x1, (x2)
ld x3, (x4)

Guess that x4 != x2 and execute the load before the store. If we later find x4 == x2, squash the load and all following instructions, but mark the load instruction as store-wait. Subsequent executions of the same load instruction will wait for all previous stores to complete. The store-wait bits are periodically cleared.

What else slows us down?

Cache misses:
- value prediction
- prefetching
- reconfigurable ISA
- fine-grained multithreading

Resource contention:
- better scheduling
- fine-grained multithreading
- reconfigurable ISA

Fetch bandwidth / I-cache misses:
- reconfigurable ISA
- fine-grained multithreading
- vector processing (SIMD)