1 Instruction-Level Parallelism and its Exploitation: PART 1 ILP concepts (2.1) Basic compiler techniques (2.2) Reducing branch costs with prediction (2.3) Dynamic scheduling (2.4 and 2.5)
2 Project and Case Studies Project: Lecture Nov 9. Case studies: presentation of pipeline case studies on Nov 12. This week: division into groups (5-6 in each) and selection of case studies: MIPS R10000, Intel Pentium 4, AMD Opteron, Sun Rock (UltraSPARC), IBM POWER6
3 Lectures 1. Introduction 2. Instruction-level Parallelism, part 1 3. Instruction-level Parallelism, part 2 4. Memory Hierarchies 5. Multiprocessors and Thread-Level Parallelism 6. System Aspects and Virtualization 7. Summary and Review
4 Bottlenecks in Simple Pipelines [Figure: five-stage pipeline with PC, IF/ID, ID/EX, EX/MEM, and MEM/WB latches and a data memory] Three classes of hazards limit parallelism: data hazards (lost cycles due to dependences), control hazards (lost cycles due to branches), structural hazards (lost cycles due to lack of resources)
5 Unlocking Instruction-Level Parallelism: A First Set of Techniques Improve parallel execution in the basic pipeline to avoid stalls: static scheduling of instructions (compiler) and dynamic branch prediction (run-time). Exploit parallelism in programs dynamically (run-time), e.g. for (i=1000; i>0; i=i-1) x[i] = x[i] + s; Use many execution units to run things in parallel
6 Instruction-Level Parallelism Basic Concepts (Ch 2.1) Two instructions must be independent in order to execute in parallel. Three classes of dependences limit parallelism: data dependences, name dependences, control dependences. Dependences are properties of the program; they can lead to hazards, which are properties of the pipeline organization
7 Data Dependences An instruction j is data dependent on instruction i if: instruction i produces a result used by instruction j, or instruction j is data dependent on instruction k and instruction k is data dependent on instruction i. Example [notation: OP Rx, Ry, Rz means Rx <- Ry OP Rz]:
loop: LD   F0, 0(R1)
      ADDD F4, F0, F2
      SD   0(R1), F4
Easy to detect dependences for registers; trickier for memory locations (the memory ambiguity/alias problem)
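A minimal C sketch (my own illustration, not from the slides) of the alias problem just mentioned: whether the store and the load below form a data dependence depends on runtime pointer values, which the compiler generally cannot prove.

#include <stdio.h>

/* Whether the write through p and the read through q form a data
 * dependence depends on whether p and q alias -- in general unknowable
 * at compile time (the memory ambiguity/alias problem). */
void update(double *p, double *q) {
    *p = 1.0;          /* store */
    double x = *q;     /* load: data dependent on the store iff p == q */
    printf("%f\n", x);
}

int main(void) {
    double a = 0.0, b = 2.0;
    update(&a, &b);    /* no dependence: p != q */
    update(&a, &a);    /* RAW dependence through memory: p == q */
    return 0;
}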
8 Name Dependences Two instructions use the same name (register or memory address) but don't exchange data. Antidependence (a WAR hazard in the pipeline): instruction j writes to a register or memory location that instruction i reads, and instruction i must execute first. Output dependence (a WAW hazard in the pipeline): instructions i and j write to the same register or memory location; the ordering between the instructions must be preserved. Name dependences are not fundamental and can sometimes be eliminated through hardware or software techniques (renaming); a sketch follows below
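A hedged C-level analogue (mine, not the slides'): the two computations below merely reuse the name t, so they must stay ordered; renaming the second use removes the WAR/WAW constraints and makes the halves independent.

#include <stdio.h>

int main(void) {
    int a = 3, b = 4;

    /* Before renaming: both computations reuse the name t. */
    int t = a + 1;
    printf("%d\n", t);   /* reads t: the write below must wait (WAR) */
    t = b * 2;           /* rewrites t: ordering forced (WAW and WAR) */
    printf("%d\n", t);

    /* After renaming: no name is shared, the halves are independent. */
    int t1 = a + 1;
    int t2 = b * 2;      /* t2 replaces the reused t */
    printf("%d %d\n", t1, t2);
    return 0;
}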
9 Control Dependences Example: if Test1 then { S1 }; if Test2 then { S2 }. S1 is control dependent on Test1; S2 is control dependent on Test2, but not on Test1. We can't move an instruction that is control dependent on a branch to before the branch instruction, and we can't move an instruction that is not control dependent on a branch to after the branch instruction
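A small C illustration (my own, not from the slides) of why the first constraint matters: hoisting S1 above its controlling test can change behavior, for example by raising an exception the original program avoided.

#include <stdio.h>

void safe_div(int x) {
    int y = 0;
    if (x != 0)        /* Test1 */
        y = 100 / x;   /* S1: control dependent on Test1; moving this
                          divide above the test would trap when x == 0 */
    printf("%d\n", y);
}

int main(void) {
    safe_div(5);   /* prints 20 */
    safe_div(0);   /* prints 0; the divide never executes */
    return 0;
}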
10 Compiler Techniques to Expose ILP: Example (Ch 2.2) for (i=1000; i>0; i=i-1) x[i] = x[i] + s; Iterations are independent => parallel execution.
loop: LD   F0, 0(R1)   ; F0 = array element
      ADDD F4, F0, F2  ; add scalar constant (in F2)
      SD   0(R1), F4   ; save result
      SUBI R1, R1, #8  ; decrement array pointer
      BNEZ R1, loop    ; reiterate if R1 != 0
      NOP              ; delayed branch
(RAW dependences: LD -> ADDD on F0, ADDD -> SD on F4.) Can we eliminate all penalties in each iteration?
11 Static Scheduling within each Loop Iteration
Original loop (four stall cycles):
loop: LD   F0, 0(R1)
      stall
      ADDD F4, F0, F2
      stall
      stall
      SD   0(R1), F4
      SUBI R1, R1, #8
      stall
      BNEZ R1, loop
      NOP              ; branch delay slot
Statically scheduled loop (one stall cycle):
loop: LD   F0, 0(R1)
      stall
      ADDD F4, F0, F2
      SUBI R1, R1, #8
      BNEZ R1, loop
      SD   8(R1), F4   ; fills the delay slot; displacement adjusted
Can we do better by scheduling across iterations?
12 Loop Unrolling
loop: LD   F0, 0(R1)
      ADDD F4, F0, F2
      SD   0(R1), F4     ; drop SUBI & BNEZ
      LD   F6, -8(R1)    ; adjust displacement
      ADDD F8, F6, F2
      SD   -8(R1), F8    ; drop SUBI & BNEZ
      LD   F10, -16(R1)  ; adjust displacement
      ADDD F12, F10, F2
      SD   -16(R1), F12  ; drop SUBI & BNEZ
      LD   F14, -24(R1)  ; adjust displacement
      ADDD F16, F14, F2
      SD   -24(R1), F16
      SUBI R1, R1, #32   ; decrement altered to 4*8
      BNEZ R1, loop
      NOP
Each LD -> ADDD -> SD chain is a RAW dependence. Registers must be renamed to avoid WAR hazards. A larger chunk of sequential code simplifies scheduling.
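For reference, a source-level C view of the same 4x unrolling (a sketch assuming, as the slides do, that the trip count of 1000 is divisible by 4, so no cleanup loop is needed); the assembly above is what a compiler would emit for this body.

/* Original: for (i = 1000; i > 0; i = i - 1) x[i] = x[i] + s; */
void add_scalar_unrolled(double *x, double s) {
    for (int i = 1000; i > 0; i -= 4) {
        x[i]     += s;
        x[i - 1] += s;
        x[i - 2] += s;
        x[i - 3] += s;
    }
}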
13 Statically Scheduled Unrolled Loop
loop: LD   F0, 0(R1)
      LD   F6, -8(R1)
      LD   F10, -16(R1)
      LD   F14, -24(R1)
      ADDD F4, F0, F2
      ADDD F8, F6, F2
      ADDD F12, F10, F2
      ADDD F16, F14, F2
      SD   0(R1), F4
      SD   -8(R1), F8
      SD   -16(R1), F12
      SUBI R1, R1, #32
      BNEZ R1, loop
      SD   8(R1), F16   ; displacement changed after SUBI
All penalties are eliminated: CPI = 1! Important steps: hoist the loads, push the stores down; note that the displacement of the last store instruction must be changed. Effects of loop unrolling: provides a larger sequential instruction window; simplifies both static and dynamic methods for extracting ILP; but makes code bigger.
14 Dynamic Methods to Improve ILP Not all potential hazards can be resolved by static scheduling at compile time. For static scheduling to work, the compiler must know enough about the processor implementation to predict hazards. Therefore, we will also explore dynamic techniques for hazard resolution
15 General Processor Organization [Figure: fetch instruction -> get operands & issue -> execution units (integer & logic, floating point, memory access) -> update state] Major bottlenecks: control hazards and memory performance => fetch bottleneck; data hazards, structural hazards, and control hazards => issue bottleneck
16 Fetch Bottleneck Control hazards: dynamic branch prediction (predict the outcome of branches and jumps; branch target buffers; issue and execute beyond branches; do not update state until the prediction is verified). Memory bottleneck: memory performance improvement (memory hierarchy); prefetch; multiple fetch
17 Dynamic Branch Prediction (Ch. 2.3) Branches limit performance because branch penalties are high and branches prevent a lot of ILP from being exploited. Solution: dynamic branch prediction to predict the outcome of conditional branches. Benefits: reduced time to determine the branch condition; reduced time to calculate the branch target address
18 Branch History Table A simple branch prediction scheme. [Figure: pipeline (IF ID EX MEM WB) with a branch-prediction buffer consulted at fetch] The idea: use the last branch outcome as the prediction; the branch-prediction buffer is indexed by low-order bits of the branch instruction's PC; if the prediction is wrong, invert it. Problem: a loop causes two mispredictions in a row (one at the loop exit, one at re-entry); see the sketch below
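A toy C model of such a one-bit branch-history table (my own sketch; the 10-bit index and the PC values are made up): each entry simply replays the last outcome seen for branches mapping to that entry.

#include <stdint.h>
#include <stdio.h>

#define BHT_BITS 10
static uint8_t bht[1 << BHT_BITS];   /* one bit of state per entry */

static int predict(uint32_t pc) {
    return bht[(pc >> 2) & ((1 << BHT_BITS) - 1)];   /* 1 = taken */
}

static void update(uint32_t pc, int taken) {
    bht[(pc >> 2) & ((1 << BHT_BITS) - 1)] = (uint8_t)taken;
}

int main(void) {
    uint32_t pc = 0x400100;
    /* A loop branch: taken 3 times, then not taken (exit), twice over. */
    int outcomes[] = {1, 1, 1, 0, 1, 1, 1, 0};
    int miss = 0;
    for (int i = 0; i < 8; i++) {
        miss += (predict(pc) != outcomes[i]);
        update(pc, outcomes[i]);
    }
    printf("mispredictions: %d\n", miss);   /* prints 4: two per loop visit */
    return 0;
}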
19 A Two-bit Prediction Scheme Requires the prediction to be wrong twice in order to change the prediction => better performance. Performance: 0%-18% misprediction frequency for SPEC92; integer programs have a higher misprediction frequency than floating-point (FP) programs
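The same toy model (again my own sketch) with two-bit saturating counters: predict taken when the counter is in the upper half and move one step toward each observed outcome, so a single anomalous outcome, such as a loop exit, no longer flips the prediction.

#include <stdint.h>
#include <stdio.h>

#define BHT_BITS 10
static uint8_t ctr[1 << BHT_BITS];   /* 2-bit saturating counters, 0..3 */

static int predict2(uint32_t pc) {
    return ctr[(pc >> 2) & ((1 << BHT_BITS) - 1)] >= 2;   /* 2,3 = taken */
}

static void update2(uint32_t pc, int taken) {
    uint8_t *c = &ctr[(pc >> 2) & ((1 << BHT_BITS) - 1)];
    if (taken  && *c < 3) (*c)++;    /* saturate at strongly taken */
    if (!taken && *c > 0) (*c)--;    /* saturate at strongly not taken */
}

int main(void) {
    uint32_t pc = 0x400100;
    int outcomes[] = {1, 1, 1, 0, 1, 1, 1, 0};   /* same loop branch as above */
    int miss = 0;
    for (int i = 0; i < 8; i++) {
        miss += (predict2(pc) != outcomes[i]);
        update2(pc, outcomes[i]);
    }
    printf("mispredictions: %d\n", miss);   /* prints 4: two warm-up misses,
                                               then only the loop exits miss */
    return 0;
}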
20 Correlating Branch Predictors Correlating predictors ((m,n) predictors) take the outcomes of the last m branches into account, using them to choose among 2^m separate n-bit predictors for each branch. This exploits correlation between branches and performs better than the plain 2-bit predictor
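A rough (1,2) instance in the same toy C model (the sizes and indexing are my assumptions): one global history bit selects between two 2-bit counters per table entry, so the prediction can depend on how the previous branch went.

#include <stdint.h>
#include <stdio.h>

#define BHT_BITS 10
static uint8_t ctab[1 << BHT_BITS][2];  /* two 2-bit counters per entry,  */
static unsigned ghist;                  /* selected by 1 global history bit */

static int predict_corr(uint32_t pc) {
    return ctab[(pc >> 2) & ((1 << BHT_BITS) - 1)][ghist & 1] >= 2;
}

static void update_corr(uint32_t pc, int taken) {
    uint8_t *c = &ctab[(pc >> 2) & ((1 << BHT_BITS) - 1)][ghist & 1];
    if (taken  && *c < 3) (*c)++;
    if (!taken && *c > 0) (*c)--;
    ghist = (ghist << 1) | (unsigned)taken;  /* record outcome in history */
}

int main(void) {
    /* An alternating branch: a plain 2-bit counter keeps mispredicting,
       but with one history bit each context quickly learns its outcome. */
    uint32_t pc = 0x400200;
    int miss = 0;
    for (int i = 0; i < 16; i++) {
        int taken = i & 1;
        miss += (predict_corr(pc) != taken);
        update_corr(pc, taken);
    }
    printf("mispredictions: %d\n", miss);   /* prints 2: warm-up only */
    return 0;
}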
21 Issue Bottleneck RAW hazards: dynamic scheduling (out-of-order execution). WAR & WAW hazards: remove name dependences (register renaming). Structural hazards: dynamic scheduling (out-of-order execution); memory performance improvement (memory hierarchy, prefetch, non-blocking caches, load/store buffers); multiple and pipelined functional units. Control hazards: speculative execution. Single issue: issue multiple instructions per cycle (superscalar, VLIW)
22 Dynamic Instruction Scheduling (Ch. 2.4) Key idea: allow subsequent independent instructions to proceed.
DIVD F0, F2, F4    ; takes a long time
ADDD F10, F0, F8   ; stalls waiting for F0 -- the instruction gets stuck here
SUBD F12, F8, F13  ; let this instruction bypass the ADDD
Enables out-of-order execution => out-of-order completion. Two historical schemes used in recent machines: scoreboarding dates back to the CDC 6600 in 1963; Tomasulo's algorithm appeared in the IBM 360/91 in 1967
23 Tomasulo's Algorithm: Hardware Organization (Ch. 2.5) Note: Tomasulo's algorithm is of course applicable to other types of instructions than floating point.
24 Basic Ideas Decouple issue from operand fetch: prevents stalls due to RAW hazards. Register renaming: translate result register references to instruction (functional unit) references; prevents WAR and WAW hazards. Example: temporary registers S and T remove the WAW/WAR hazards on F6 and F8:
Before renaming:     After renaming:
DIV.D F0,F2,F4       DIV.D F0,F2,F4
ADD.D F6,F0,F8       ADD.D S,F0,F8
S.D   F6,0(R1)       S.D   S,0(R1)
SUB.D F8,F10,F8      SUB.D T,F10,F8
MUL.D F6,F10,F8      MUL.D F6,F10,T
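A hedged C sketch of renaming as a table lookup (my own model; the tag numbering is arbitrary): architectural register names are mapped to fresh physical tags at issue, so each later writer of F6 gets a new tag while earlier readers keep the old one, which is what the S/T example above does by hand.

#include <stdio.h>

#define NUM_ARCH 32
static int rename_map[NUM_ARCH];   /* arch reg -> current physical tag */
static int next_tag;

/* Rename one instruction "dst <- src1 op src2": read the sources under
 * the current mapping, then give the destination a fresh tag. */
static void rename(int dst, int src1, int src2) {
    int p1 = rename_map[src1], p2 = rename_map[src2];
    rename_map[dst] = next_tag++;
    printf("p%d <- p%d op p%d\n", rename_map[dst], p1, p2);
}

int main(void) {
    for (int i = 0; i < NUM_ARCH; i++) rename_map[i] = i;
    next_tag = NUM_ARCH;
    rename(6, 0, 8);    /* ADD.D F6,F0,F8  -> fresh tag for F6 ("S")        */
    rename(8, 10, 8);   /* SUB.D F8,F10,F8 -> fresh tag for F8 ("T");
                           the read of the old F8 keeps its old tag (no WAR) */
    rename(6, 10, 8);   /* MUL.D F6,F10,F8 -> another fresh tag for F6 (no WAW) */
    return 0;
}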
25 Three Stages of Tomasulo's Algorithm 1. Issue: get an instruction from the FP op queue; issue it if a reservation station is free (no structural hazard). 2. Execute: operate on the operands (EX); execute when both operands are available; if one is not ready, watch the Common Data Bus (CDB) for the result. 3. Write result: finish execution (WB); write the result on the CDB to all awaiting functional units and mark the reservation station available. Normal bus: data + destination. Common Data Bus: data + source
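A compact C sketch of the data structures these stages manipulate (a toy model under my own naming, not the IBM 360/91 hardware): a reservation station holds either operand values (Vj/Vk) or the tags of the stations that will produce them (Qj/Qk), and a CDB broadcast fills in every waiting copy.

#include <stdio.h>

typedef struct {
    int busy;
    double Vj, Vk;     /* operand values, valid when Qj/Qk == 0 */
    int Qj, Qk;        /* producing-station tags; 0 = value present */
} RS;

#define NSTATIONS 8
static RS rs[NSTATIONS + 1];   /* station tags are 1..NSTATIONS */

/* Write result: broadcast (tag, value) on the common data bus.
 * Every station waiting on this tag captures the value. */
static void cdb_broadcast(int tag, double value) {
    for (int i = 1; i <= NSTATIONS; i++) {
        if (!rs[i].busy) continue;
        if (rs[i].Qj == tag) { rs[i].Vj = value; rs[i].Qj = 0; }
        if (rs[i].Qk == tag) { rs[i].Vk = value; rs[i].Qk = 0; }
    }
    rs[tag].busy = 0;          /* the producing station becomes free */
}

/* Execute: a station may start once both operands are values. */
static int ready(const RS *r) { return r->busy && r->Qj == 0 && r->Qk == 0; }

int main(void) {
    /* An ADD in station 2 waits for a value produced by station 1. */
    rs[1] = (RS){ .busy = 1, .Vj = 2.0, .Vk = 4.0 };          /* e.g. a MUL */
    rs[2] = (RS){ .busy = 1, .Qj = 1, .Vk = 1.0, .Qk = 0 };   /* waits on tag 1 */
    printf("station 2 ready? %d\n", ready(&rs[2]));           /* 0 */
    cdb_broadcast(1, 8.0);                                    /* MUL writes result */
    printf("station 2 ready? %d (Vj=%.1f)\n", ready(&rs[2]), rs[2].Vj);  /* 1, 8.0 */
    return 0;
}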
26 Tomasulo example, cycle 0
Program: LD F6,34(R2); LD F2,45(R3); MULTD F0,F2,F4; SUBD F8,F6,F2; DIVD F10,F0,F6; ADDD F6,F8,F2. (Latencies in this example: load 2 cycles, add/sub 2, multiply 10, divide 40.) Nothing has issued yet; load buffers Load1-3 and reservation stations Add1-3, Mult1-2 are all free; the register result status is empty.
27 Tomasulo example, cycle 1
LD F6,34(R2) issues to Load1 (address R2+34). Register status: F6 <- Load1.
28 Tomasulo example, cycle 2
LD F2,45(R3) issues to Load2 (address R3+45) while Load1 accesses memory. Register status: F2 <- Load2, F6 <- Load1.
29 Tomasulo example, cycle 3
MULTD F0,F2,F4 issues to Mult1 with Vk = F4 and Qj = Load2 (waiting for F2). The first load completes execution. Register status: F0 <- Mult1, F2 <- Load2, F6 <- Load1.
30 Tomasulo example, cycle 4
The first load writes M(R2+34) on the CDB; Load1 is freed and F6 is updated. SUBD F8,F6,F2 issues to Add1 with Vj = M(R2+34) and Qk = Load2. The second load completes execution. Register status: F0 <- Mult1, F2 <- Load2, F8 <- Add1.
31 Tomasulo example, cycle 5
The second load writes M(R3+45) on the CDB; Load2 is freed; Mult1 and Add1 capture the value and begin executing (10 and 2 cycles remaining). DIVD F10,F0,F6 issues to Mult2 with Vk = M(R2+34) and Qj = Mult1. Register status: F0 <- Mult1, F8 <- Add1, F10 <- Mult2.
32 Tomasulo example, cycle 6
ADDD F6,F8,F2 issues to Add2 with Vk = M(R3+45) and Qj = Add1. All six instructions are now issued. Add1 has 1 cycle left, Mult1 has 9. Register status: F6 <- Add2.
33 Tomasulo example, cycle 7
SUBD completes execution in Add1. Mult1 has 8 cycles left; Add2 and Mult2 still wait on their tags.
34 Tomasulo example, cycle 8
SUBD writes its result (M(R2+34) - M(R3+45)) on the CDB; Add1 is freed and F8 is updated; Add2 captures the value and begins executing (2 cycles). Mult1 has 7 cycles left.
35 Tomasulo example, cycle 10
ADDD completes execution in Add2 while Mult1 still has 5 cycles left: the ADDD finishes long before the earlier DIVD, i.e. out-of-order completion.
36 Tomasulo example, cycle 11
ADDD writes its result on the CDB; Add2 is freed and F6 is updated. Mult1 has 4 cycles left.
37 Tomasulo example, cycle 15
MULTD completes its 10-cycle execution in Mult1. Mult2 still waits for F0.
38 Tomasulo example, cycle 16
MULTD writes its result on the CDB; Mult1 is freed and F0 is updated; Mult2 captures the value and starts the 40-cycle divide.
39 Tomasulo example, cycle 56
DIVD completes execution in Mult2.
40 Tomasulo example, cycle 57
DIVD writes its result on the CDB; Mult2 is freed and F10 is updated. All reservation stations and load buffers are free: the example is complete.
41 Summary Data, name, and control dependences; hazards and the static instruction order prevent ILP exploitation. Static methods: let the compiler reorder instructions to remove hazards. Dynamic methods: modify the processor organization to remove hazards and dynamically reorder instructions. Fetch bottleneck: dynamic branch prediction. Issue bottleneck: Tomasulo's algorithm. Next: speculative execution, multiple issue, register renaming, increased instruction fetch bandwidth
Instruction-Level Parallelism and its Exploitation: PART 2 Hardware-based speculation (2.6) Multiple-issue plus static scheduling = VLIW (2.7) Multiple-issue, dynamic scheduling, and speculation (2.8)
More information