CMSC 411 Computer Systems Architecture Lecture 13 Instruction Level Parallelism 6 (Limits to ILP & Threading)

Limits to ILP

Conflicting studies of the amount of available ILP disagree because they differ in:
- Benchmarks: vectorized Fortran FP programs vs. integer C programs
- Hardware sophistication
- Compiler sophistication

The questions this raises:
- How much ILP is available using existing mechanisms with increasing HW budgets?
- Do we need to invent new HW/SW mechanisms to stay on the processor performance curve?
  - Intel MMX, SSE (Streaming SIMD Extensions): 64-bit ints
  - Intel SSE2: 128-bit, including 2 64-bit FP operations per clock
  - Motorola AltiVec: 128-bit ints and FPs
  - SuperSPARC multimedia ops, etc.
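For a concrete taste of the SIMD extensions just listed, here is a minimal SSE2 sketch in C, assuming an x86-64 compiler with the <emmintrin.h> intrinsics header: a single 128-bit operation performs two 64-bit FP additions, the capability the slide attributes to SSE2.

```c
#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stdio.h>

int main(void) {
    __m128d a = _mm_set_pd(1.5, 2.5);    /* pack two 64-bit doubles      */
    __m128d b = _mm_set_pd(10.0, 20.0);
    __m128d c = _mm_add_pd(a, b);        /* one instruction, two FP adds */

    double out[2];
    _mm_storeu_pd(out, c);
    printf("%f %f\n", out[0], out[1]);   /* prints 22.500000 11.500000   */
    return 0;
}
```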

Overcoming Limits

- Advances in compiler technology plus significantly new and different hardware techniques may be able to overcome the limitations assumed in these studies.
- However, it is unlikely that such advances, when coupled with realistic hardware, will overcome these limits in the near future.

Limits to ILP

Initial HW model here; MIPS compilers. Assumptions for the ideal/perfect machine to start:
1. Register renaming: infinite virtual registers, so all register WAW and WAR hazards are avoided.
2. Branch prediction: perfect; no mispredictions.
3. Jump prediction: all jumps perfectly predicted (returns, case statements). Together, assumptions 2 and 3 mean no control dependencies: perfect speculation and an unbounded buffer of instructions available.
4. Memory-address alias analysis: addresses are known, and a load can be moved before a store provided the addresses are not equal. Together, assumptions 1 and 4 eliminate all hazards but RAW.
Also: perfect caches; 1-cycle latency for all instructions (even FP multiply and divide); unlimited instructions issued per clock cycle.
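To make assumption 1 concrete, here is a minimal register-renaming sketch in C (an illustration, not code from the ILP study): every destination gets a fresh physical register, so WAW and WAR hazards vanish and only true RAW dependences constrain the schedule.

```c
#include <stdio.h>

#define NUM_ARCH 8

int map[NUM_ARCH];   /* architectural -> physical register mapping        */
int next_phys = 0;   /* next free physical register (assumed unlimited)   */

/* Rename one instruction "rd = rs op rt" and print the physical names. */
void rename(int rd, int rs, int rt) {
    int ps = map[rs], pt = map[rt];   /* sources read the current mapping */
    int pd = next_phys++;             /* fresh physical dest: no WAW/WAR  */
    map[rd] = pd;
    printf("p%d = p%d op p%d\n", pd, ps, pt);
}

int main(void) {
    for (int i = 0; i < NUM_ARCH; i++) map[i] = next_phys++;
    rename(1, 2, 3);  /* r1 = r2 op r3                                    */
    rename(2, 1, 3);  /* r2 = r1 op r3: WAR on r2 removed by fresh p-reg  */
    rename(1, 4, 5);  /* r1 = r4 op r5: WAW on r1 removed the same way    */
    return 0;
}
```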

Limits to ILP: HW Model Comparison

                              Ideal Model   Power 5
  Instructions issued/clock   Infinite      4
  Instruction window size     Infinite      200
  Renaming registers          Infinite      48 integer + 40 FP
  Branch prediction           Perfect       2% to 6% misprediction (tournament branch predictor)
  Cache                       Perfect       64KB I, 32KB D, 1.92MB L2, 36MB L3
  Memory alias analysis       Perfect       ??

Upper Limit to ILP: Ideal Machine (SPEC92)

[Figure: instructions per clock on the ideal machine. Integer programs: 18-60 IPC (gcc 54.8, espresso 62.6, li 17.9); FP programs: 75-150 IPC (fpppp 75.2, doduc 118.7, tomcatv 150.1).]

Limits to ILP: HW Model Comparison

                              New Model                    Ideal Model   Power 5
  Instructions issued/clock   Infinite                     Infinite      4
  Instruction window size     Infinite, 2K, 512, 128, 32   Infinite      200
  Renaming registers          Infinite                     Infinite      48 integer + 40 FP
  Branch prediction           Perfect                      Perfect       2% to 6% misprediction (tournament branch predictor)
  Cache                       Perfect                      Perfect       64KB I, 32KB D, 1.92MB L2, 36MB L3
  Memory alias analysis       Perfect                      Perfect       ??

More Realistic HW: Window Impact (SPEC92)

[Figure: IPC as the instruction window shrinks from infinite to 2048, 512, 128, and 32 entries. Integer programs: 8-63 IPC; FP programs: 9-150 IPC. IPC falls steeply as the window shrinks.]

Limits to ILP: HW Model Comparison

                              New Model                        Ideal Model   Power 5
  Instructions issued/clock   64                               Infinite      4
  Instruction window size     2048                             Infinite      200
  Renaming registers          Infinite                         Infinite      48 integer + 40 FP
  Branch prediction           Perfect / 8K tournament /        Perfect       2% to 6% misprediction
                              512-entry 2-bit / profile / none               (tournament branch predictor)
  Cache                       Perfect                          Perfect       64KB I, 32KB D, 1.92MB L2, 36MB L3
  Memory alias analysis       Perfect                          Perfect       ??

More Realistic HW: Branch Impact (SPEC92)

[Figure: IPC with the window fixed at 2048 entries and a maximum issue of 64 instructions per clock, varying the branch predictor: perfect, tournament (8K entries), standard 2-bit (512-entry BHT), static profile-based, and no prediction. Integer programs: 6-12 IPC; FP programs: 15-45 IPC.]

Misprediction Rates (SPEC92)

[Figure: branch misprediction rates of the profile-based, 2-bit counter, and tournament predictors on tomcatv, doduc, fpppp, li, espresso, and gcc, ranging from 0% up to 30%. The tournament predictor is consistently the most accurate and the profile-based scheme the least.]

Limits to ILP: HW Model Comparison

                              New Model                               Ideal Model   Power 5
  Instructions issued/clock   64                                      Infinite      4
  Instruction window size     2048                                    Infinite      200
  Renaming registers          Infinite / 256 / 128 / 64 / 32 / none   Infinite      48 integer + 40 FP
  Branch prediction           8K 2-bit                                Perfect       Tournament branch predictor
  Cache                       Perfect                                 Perfect       64KB I, 32KB D, 1.92MB L2, 36MB L3
  Memory alias analysis       Perfect                                 Perfect       Perfect
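For reference, the "standard 2-bit" scheme in the figure above works as sketched below in C; the table size follows the slide (512 entries), while the PC hashing is an assumption. Counters 0-1 predict not taken and 2-3 predict taken, so two wrong predictions are needed to flip a strongly biased counter.

```c
#include <stdint.h>
#include <stdio.h>

#define TABLE_BITS 9                  /* 512 entries, as on the slide  */
static uint8_t bht[1 << TABLE_BITS];  /* each entry: 2-bit counter 0..3 */

int predict(uint32_t pc) {            /* 1 = predict taken              */
    return bht[(pc >> 2) & ((1 << TABLE_BITS) - 1)] >= 2;
}

void update(uint32_t pc, int taken) {
    uint8_t *c = &bht[(pc >> 2) & ((1 << TABLE_BITS) - 1)];
    if (taken && *c < 3) (*c)++;      /* saturate at strongly taken     */
    if (!taken && *c > 0) (*c)--;     /* saturate at strongly not taken */
}

int main(void) {
    /* A loop branch taken 9 times, then not taken: 2 warm-up
       mispredictions plus 1 at the loop exit, so this prints 3. */
    int misses = 0;
    for (int trip = 0; trip < 10; trip++) {
        int taken = trip < 9;
        if (predict(0x400100) != taken) misses++;
        update(0x400100, taken);
    }
    printf("mispredictions: %d\n", misses);
    return 0;
}
```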

More Realistic HW: Renaming Register Impact (N int + N fp) (SPEC92)

[Figure: IPC with a 2048-entry instruction window, 64-instruction issue, and 8K two-level branch prediction, varying the number of renaming registers: infinite, 256, 128, 64, 32, and none. Integer programs: 5-15 IPC; FP programs: 11-45 IPC.]

Limits to ILP: HW Model Comparison

                              New Model                          Ideal Model   Power 5
  Instructions issued/clock   64                                 Infinite      4
  Instruction window size     2048                               Infinite      200
  Renaming registers          256 int + 256 FP                   Infinite      48 integer + 40 FP
  Branch prediction           8K 2-bit                           Perfect       Tournament branch predictor
  Cache                       Perfect                            Perfect      64KB I, 32KB D, 1.92MB L2, 36MB L3
  Memory alias analysis       Perfect / stack / inspect / none   Perfect       Perfect

More Realistic HW: Memory Address Alias Impact (SPEC92)

[Figure: IPC with a 2048-entry window, 64-instruction issue, 8K two-level branch prediction, and 256 renaming registers, varying the alias analysis: perfect; global/stack perfect with inspection for heap conflicts; inspection; none. Integer programs: 4-9 IPC; FP programs: 4-45 IPC (Fortran, no heap).]

How to Exceed the ILP Limits of This Study?

- These are not laws of physics, just practical limits for today, and they can perhaps be overcome via research.
- Compiler and ISA advances could change the results.
- WAR and WAW hazards through memory remain: the study eliminated WAW and WAR hazards through register renaming, but not in memory usage.
  - Conflicts can still arise through the allocation of stack frames, as a called procedure reuses the memory addresses of a previous frame on the stack.
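The alias-analysis models above all answer the same question: may this load be hoisted above an earlier store? Here is a minimal sketch of that decision in C, under the simplifying assumption that analysis either resolves an address exactly or not at all; when it cannot, the only safe answer is "no".

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool addr_known;   /* did analysis resolve the address? */
    long addr;         /* byte address, valid if addr_known */
} mem_ref;

/* May the load move above the store? Perfect analysis knows both
   addresses; realistic analysis must be conservative when it doesn't. */
bool can_hoist_load(mem_ref store, mem_ref load) {
    if (!store.addr_known || !load.addr_known)
        return false;                   /* possible alias: must wait  */
    return store.addr != load.addr;     /* provably disjoint: safe    */
}

int main(void) {
    mem_ref st  = { true, 0x1000 };
    mem_ref ld1 = { true, 0x1008 };     /* disjoint -> hoist allowed  */
    mem_ref ld2 = { false, 0 };         /* unresolved -> must wait    */
    printf("%d %d\n", can_hoist_load(st, ld1), can_hoist_load(st, ld2));
    return 0;
}
```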

HW vs. SW to Increase ILP

- Memory disambiguation: HW is best.
- Speculation: HW is best when dynamic branch prediction beats compile-time prediction.
  - Exceptions are easier to handle in HW.
  - HW doesn't need bookkeeping code or compensation code, which is very complicated to get right.
- Scheduling: SW can look far ahead to schedule better.
- Compiler independence: HW approaches do not require a new compiler or recompilation to run well.

Performance Beyond Single-Thread ILP

- There can be much higher natural parallelism in some applications (e.g., database or scientific codes): explicit thread-level parallelism or data-level parallelism.
- A thread is a process with its own instructions and data. It may be one process within a parallel program of multiple processes, or it may be an independent program. Each thread has all the state (instructions, data, PC, register state, and so on) necessary to allow it to execute.
- Data(-level) parallelism: perform identical operations on data, and lots of data.

Thread-Level Parallelism (TLP)

- ILP exploits implicit parallel operations within a loop or straight-line code segment.
- TLP is explicitly represented by the use of multiple threads of execution that are inherently parallel.
- Goal: use multiple instruction streams to improve
  1. Throughput of computers that run many programs
  2. Execution time of multi-threaded programs
- TLP could be more cost-effective to exploit than ILP.

New Approach: Multithreaded Execution

- Multithreading: multiple threads share the functional units of one processor via overlapping.
- The processor must duplicate the independent state of each thread: e.g., a separate copy of the register file, a separate PC, and, for running independent programs, a separate page table.
- Memory is shared through the virtual memory mechanisms, which already support multiple processes.
- HW provides fast thread switching, much faster than a full process switch (100s to 1000s of clocks).
- When to switch?
  - Alternate instructions per thread (fine-grained)
  - When a thread is stalled, perhaps on a cache miss, another thread can be executed (coarse-grained)

Fine-Grained Multithreading

- Switches between threads on each instruction, causing the execution of multiple threads to be interleaved.
- Usually done in a round-robin fashion, skipping any stalled threads; the CPU must be able to switch threads every clock (see the sketch after this list).
- Advantage: it can hide both short and long stalls, since instructions from other threads execute while one thread stalls.
- Disadvantage: it slows down the execution of individual threads, since a thread ready to execute without stalls is still delayed by instructions from other threads.
- Used on Sun's Niagara (covered later).
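As referenced above, a minimal sketch in C of the round-robin issue policy; the per-thread stall flags are stand-ins (an assumption, not a pipeline model).

```c
#include <stdbool.h>
#include <stdio.h>

#define NTHREADS 4

bool stalled[NTHREADS] = { false, true, false, false }; /* t1 on a miss */
int next = 0;   /* round-robin pointer */

/* Pick the thread to issue from this cycle; -1 if all are stalled. */
int pick_thread(void) {
    for (int i = 0; i < NTHREADS; i++) {
        int t = (next + i) % NTHREADS;
        if (!stalled[t]) {
            next = (t + 1) % NTHREADS;  /* resume after this thread */
            return t;
        }
    }
    return -1;
}

int main(void) {
    /* Prints threads 0, 2, 3, 0, 2, 3, ...: thread 1 is skipped. */
    for (int cycle = 0; cycle < 8; cycle++)
        printf("cycle %d: issue from thread %d\n", cycle, pick_thread());
    return 0;
}
```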

Coarse-Grained Multithreading

- Switches threads only on costly stalls, such as L2 cache misses.
- Used in the IBM AS/400.
- Advantages:
  - Relieves the need for very fast thread switching.
  - Doesn't slow down an individual thread, since instructions from other threads issue only when that thread encounters a costly stall.
- Disadvantage: hard to overcome throughput losses from shorter stalls, due to pipeline start-up costs.
  - Since the CPU issues instructions from one thread, when a stall occurs the pipeline must be emptied or frozen, and the new thread must fill the pipeline before its instructions can complete.
  - Because of this start-up overhead, coarse-grained multithreading is better at reducing the penalty of high-cost stalls, where pipeline refill time << stall time. For example, refilling a 20-stage pipeline is worthwhile against a several-hundred-cycle memory stall, but not against a short L1 miss.

For Most Apps, Most Execution Units Lie Idle

[Figure: issue-slot utilization of an 8-way superscalar, showing most execution units idle for most applications. From: Tullsen, Eggers, and Levy, "Simultaneous Multithreading: Maximizing On-chip Parallelism," ISCA 1995.]

Do Both ILP and TLP?

- TLP and ILP exploit two different kinds of parallel structure in a program.
- Could a processor oriented at ILP also exploit TLP? Functional units are often idle in a datapath designed for ILP because of either stalls or dependences in the code.
- Could TLP be used as a source of independent instructions that might keep the processor busy during stalls?
- Could TLP be used to employ the functional units that would otherwise lie idle when insufficient ILP exists?

Simultaneous Multithreading

[Figure 3.8 (almost): issue slots per cycle for one thread vs. two threads on a machine with 8 units: two load/store (M), two fixed point (FX), two floating point (FP), one branch (BR), and one condition code (CC). With one thread, many slots go unused each cycle; with two threads, slots left empty by one thread are filled by the other.]
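To quantify the intuition behind the figure, here is a toy slot-accounting sketch in C; the per-cycle "ready instruction" counts are invented for illustration, not measured data.

```c
#include <stdio.h>

#define SLOTS  8   /* issue slots per cycle, as in Figure 3.8 */
#define CYCLES 5

int main(void) {
    int ready_t1[CYCLES] = { 3, 0, 5, 2, 4 };  /* thread 1 (0 = stalled) */
    int ready_t2[CYCLES] = { 4, 6, 2, 5, 3 };  /* thread 2               */
    int used_one = 0, used_smt = 0;

    for (int c = 0; c < CYCLES; c++) {
        int t1 = ready_t1[c] > SLOTS ? SLOTS : ready_t1[c];
        used_one += t1;                        /* single-threaded issue  */
        int spare = SLOTS - t1;
        int t2 = ready_t2[c] > spare ? spare : ready_t2[c];
        used_smt += t1 + t2;                   /* SMT fills leftover slots */
    }
    /* Prints: slots used: 1 thread 14/40, SMT 34/40 */
    printf("slots used: 1 thread %d/%d, SMT %d/%d\n",
           used_one, SLOTS * CYCLES, used_smt, SLOTS * CYCLES);
    return 0;
}
```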

Simultaneous Multithreading (SMT)

- SMT builds on the insight that a dynamically scheduled processor already has many of the HW mechanisms needed to support multithreading:
  - A large set of virtual registers that can hold the register sets of independent threads.
  - Register renaming, which provides unique register identifiers, so instructions from multiple threads can be mixed in the datapath without confusing sources and destinations across threads.
  - Out-of-order completion, which allows the threads to execute out of order and achieve better utilization of the HW.
- Just add a per-thread renaming table and keep separate PCs (sketched in code after the figure below).
- Independent commitment can be supported by logically keeping a separate reorder buffer for each thread.

Multithreaded Categories

[Figure: issue slots over time (processor cycles) for five organizations: superscalar, fine-grained multithreading, coarse-grained multithreading, multiprocessing, and simultaneous multithreading, with five threads and idle slots distinguished.]
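As referenced in the SMT slide above, a minimal sketch in C of the per-thread additions: a per-thread rename table and PC over one shared physical register pool. The bump allocator is a simplification (an assumption), not a real free list.

```c
#include <stdio.h>

#define NTHREADS 2
#define NUM_ARCH 8

int pc[NTHREADS];                   /* separate PC per thread        */
int rename_map[NTHREADS][NUM_ARCH]; /* per-thread rename table       */
int next_phys = 0;                  /* shared physical register pool */

void rename_dest(int tid, int rd) {
    rename_map[tid][rd] = next_phys++;  /* fresh phys reg: threads can
                                           never clobber each other's
                                           architectural registers   */
}

int main(void) {
    for (int t = 0; t < NTHREADS; t++)
        for (int r = 0; r < NUM_ARCH; r++)
            rename_map[t][r] = next_phys++;

    rename_dest(0, 1);  /* thread 0 writes r1                        */
    rename_dest(1, 1);  /* thread 1 writes r1: different phys reg    */
    printf("t0.r1 -> p%d, t1.r1 -> p%d\n",
           rename_map[0][1], rename_map[1][1]);
    return 0;
}
```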

Design Challenges in SMT

- Since SMT makes sense only with a fine-grained implementation, what is the impact of fine-grained scheduling on single-thread performance?
  - Could a preferred-thread approach preserve both throughput and single-thread performance?
  - Unfortunately, with a preferred thread, the processor is likely to sacrifice some throughput when the preferred thread stalls.
- A larger register file is needed to hold multiple contexts.
- The additions must not affect the clock cycle time, especially in:
  - Instruction issue: more candidate instructions need to be considered.
  - Instruction completion: choosing which instructions to commit may be challenging.
- Cache and TLB conflicts generated by SMT must not degrade performance.