TDT 4260 ILP Chap 2, App. C
1 TDT 4260 ILP Chap 2, App. C
2 Intro Ian Bratt (NTNU)
3 Instruction level parallelism (ILP) A program is a sequence of instructions, typically written to be executed one after the other Poor usage of CPU resources! (Why?) Better: Execute instructions in parallel 1: Pipeline Partial overlap of instruction execution 2: Multiple issue Total overlap of instruction execution Today: Pipelining and multiple issue (if time)
4 Pipelining (1/3)
5 Pipelining (2/3) Multiple different stages executed in parallel Laundry in 4 different stages Fetch / Execute / Store Assumptions: Task can be split into stages Storage of temporary data Stages synchronized Next operation known before last finished?
6 Pipelining (3/3) Good Utilization: The whole CPU is always in use Fetch unit, decoder, ALU,... Great usage of CPU resources! Most common form of ILP, used everywhere CPU, GPU, digital circuits,... Ideal: time_stage = time_instruction / stages But stages are not perfectly balanced But transfer between stages takes time But pipeline may have to be emptied...
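A quick illustration of those last three caveats (my sketch; the stage latencies are made-up numbers): the clock is set by the slowest stage plus the inter-stage transfer time, so real speedup falls short of the stage count.

    # Hypothetical 5-stage pipeline with unbalanced stage latencies (ns)
    stage_latency = [1.0, 1.2, 0.8, 1.5, 1.0]
    latch_overhead = 0.1                           # transfer between stages (ns)

    t_unpipelined = sum(stage_latency)             # one instruction, no pipelining
    t_cycle = max(stage_latency) + latch_overhead  # clock = slowest stage + overhead

    ideal_speedup = len(stage_latency)             # time_stage = time_instruction / stages
    real_speedup = t_unpipelined / t_cycle         # steady-state throughput gain
    print(ideal_speedup, round(real_speedup, 2))   # 5 vs. 3.44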
7 Example: MIPS64 (1/2) RISC Load/store Few instruction formats Fixed instruction length 64-bit: DADD = 64-bit ADD, LD = 64-bit L(oad) 32 registers (R0 = 0) EA = offset(register) Pipeline IF: Instruction fetch ID: Instruction decode / register fetch EX: Execute / effective address (EA) MEM: Memory access WB: Write back (reg)
8 Example: MIPS64 (2/2) [Pipeline diagram: instructions in program order flowing through the Ifetch, ALU and DMem stages across cycles 1-7, with a new instruction entering the pipeline every cycle]
9 Big Picture: What are some real world examples of pipelining? Why do we pipeline? Does pipelining increase or decrease instruction throughput? Does pipelining increase or decrease instruction latency?
10 Big Picture (continued): Computer Architecture is the study of design tradeoffs! There is no philosophy of architecture and no perfect architecture. This is engineering, not science. What are the costs of pipelining? For what types of devices is pipelining not a good choice?
11 Improve speedup? Why not perfect speedup? Sequential programs One instruction dependent on another Not enough CPU resources What can be done? Forwarding (HW) Scheduling (SW / HW) Prediction (SW / HW) Both hardware (dynamic) and compiler (static) can help
12 Dependencies and hazards Dependencies Parallel instructions can be executed in parallel Dependent instructions are not parallel I1: DADD R1, R2, R3 I2: DSUB R4, R1, R5 Property of the instructions Hazards Situation where a dependency causes an instruction to give a wrong result Property of the pipeline Not all dependencies give hazards Dependencies must be close enough in the instruction stream to cause a hazard
13 Dependencies (True) data dependencies One instruction reads what an earlier one has written Name dependencies Two instructions use the same register / mem loc But no flow of data between them Two types: Anti and output dependencies Control dependencies Instructions dependent on the result of a branch Again: Independent of pipeline implementation
14 Hazards Data hazards Overlap will give different result from sequential RAW / WAW / WAR Control hazards Branches Ex: Started executing the wrong instruction Structural hazards Pipeline does not support this combination of instr. Ex: Register file with one port, two stages want to read
15 Data dependency = hazard? (Figure A.6, Page A-16) [Pipeline diagram: add r1,r2,r3 followed by sub r4,r1,r3; and r6,r1,r7; or r8,r1,r9; xor r10,r1,r11 — the instructions reading r1 reach their register-read stage before add has written r1 back]
16 Data Hazards (1/3) Read After Write (RAW) Instr J tries to read operand before Instr I writes it I: add r1,r2,r3 J: sub r4,r1,r3 Caused by a true data dependency This hazard results from an actual need for communication.
17 Data Hazards (2/3) Write After Read (WAR) Instr J writes operand before Instr I reads it I: sub r4,r1,r3 J: add r1,r2,r3 Caused by an anti dependency This results from reuse of the name r1 Can't happen in MIPS 5 stage pipeline because: All instructions take 5 stages, and Reads are always in stage 2, and Writes are always in stage 5
18 Data Hazards (3/3) Write After Write (WAW) Instr J writes operand before Instr I writes it. I: sub r1,r4,r3 J: add r1,r2,r3 Caused by an output dependency Can't happen in MIPS 5 stage pipeline because: All instructions take 5 stages, and Writes are always in stage 5 WAR and WAW can occur in more complicated pipes
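How the three hazard types fall out of the read/write sets is easy to state in code (a sketch of mine, not course material): given an earlier instruction I and a later instruction J, compare what each one reads and writes.

    def classify(i_write, i_reads, j_write, j_reads):
        """Hazards a naive overlap of I (earlier) and J (later) risks."""
        hazards = []
        if i_write in j_reads:    # J reads what I writes: true dependency
            hazards.append("RAW")
        if j_write in i_reads:    # J writes what I reads: anti dependency
            hazards.append("WAR")
        if j_write == i_write:    # both write the same register: output dependency
            hazards.append("WAW")
        return hazards

    # I: add r1,r2,r3   J: sub r4,r1,r3
    print(classify("r1", {"r2", "r3"}, "r4", {"r1", "r3"}))  # ['RAW']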
19 Forwarding (Figure A.7, Page A-18) [Pipeline diagram: the same add/sub/and/or/xor sequence, but ALU results are forwarded from the EX/MEM and MEM/WB pipeline latches straight to the dependent instructions' ALU inputs, removing the stalls]
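The forwarding logic behind this figure reduces to a pair of comparisons per operand (a sketch in the style of Appendix A; the latch field names are illustrative, not a defined API):

    def forward_src(ex_mem, mem_wb, rs):
        """Pick where the ALU should take source register rs from."""
        # Newest producer wins; rd == 0 is excluded because R0 is hardwired to 0.
        if ex_mem["reg_write"] and ex_mem["rd"] != 0 and ex_mem["rd"] == rs:
            return "EX/MEM"   # result computed last cycle, not yet written back
        if mem_wb["reg_write"] and mem_wb["rd"] != 0 and mem_wb["rd"] == rs:
            return "MEM/WB"   # result being written back this cycle
        return "REGFILE"      # no hazard: value already in the register file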
20 Structural Hazards (Memory Port) (Figure A.4, Page A-14) [Pipeline diagram: a Load followed by Instr 1-4; with a single memory port, the Load's DMem access collides with a later instruction's Ifetch in the same cycle]
21 Hazards, Bubbles (similar to Figure A.5, Page A-15) [Pipeline diagram: the same conflict resolved by stalling — the later instruction's fetch is delayed one cycle and a bubble travels down the pipeline] How do you bubble the pipe? How can we avoid this hazard?
22 Control hazards (1/2) Sequential execution is predictable, (conditional) branches are not May have fetched instructions that should not be executed Simple solution (figure): Stall the pipeline (bubble) Performance loss depends on number of branches in the program and pipeline implementation Branch penalty Possibly wrong instruction Correct instruction
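The performance loss can be put in one line (assumed example numbers, not figures from the course):

    # Effective CPI when every branch stalls the pipeline
    base_cpi = 1.0
    branch_freq = 0.20     # fraction of instructions that are branches (assumed)
    branch_penalty = 3     # stall cycles per branch (pipeline dependent, assumed)
    cpi = base_cpi + branch_freq * branch_penalty
    print(cpi)             # 1.6 -> a 60% slowdown from control hazards alone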
23 Control hazards (2/2) What can be done? Always stop (previous slide) Also called freeze or flushing of the pipeline Assume no branch (=assume sequential) Must not change state before branch instr. is complete Assume branch Only smart if the target address is ready early Delayed branch Execute a different instruction while branch is evaluated Static techniques (fixed rule or compiler)
24 Dynamic scheduling So far: Static scheduling Instructions executed in program order Any reordering is done by the compiler Dynamic scheduling CPU reorders to get a more optimal order Fewer hazards, fewer stalls,... Must preserve order of operations where reordering could change the result Covered by TDT 4255 Hardware design
25 Dataflow execution (from MIT 6.823, October 19, 2011) [Reorder buffer diagram: entries t1 ... tn, each with fields Ins#, use, exec, op, p1, src1, p2, src2; ptr2 points to the next entry to deallocate, ptr1 to the next available entry] Instruction slot is a candidate for execution when: It holds a valid instruction (use bit is set) It has not already started execution (exec bit is clear) Both operands are available (p1 and p2 are set)
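The candidate-for-execution test translates almost directly into code (a sketch mirroring the fields named on the slide):

    from dataclasses import dataclass

    @dataclass
    class RobEntry:
        use: bool    # slot holds a valid instruction
        exec: bool   # instruction has already started executing
        p1: bool     # first operand present
        p2: bool     # second operand present

    def candidate(e):
        return e.use and not e.exec and e.p1 and e.p2

    print(candidate(RobEntry(use=True, exec=False, p1=True, p2=True)))  # True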
26 Renaming & Out-of-order Issue: an example (from MIT 6.823, October 19, 2011) [Diagram: renaming table mapping F1-F8 to either data values (v1, v4) or in-flight tags (t1-t5), alongside a reorder buffer whose LD, LD, MUL, SUB, DIV entries carry p/src fields holding values or tags] Code: 1 LD F2, 34(R2) 2 LD F4, 45(R3) 3 MULTD F6, F4, F2 4 SUBD F8, F2, F2 5 DIVD F4, F2, F8 6 ADDD F10, F6, F4 When are names in sources replaced by data? Whenever an FU produces data When can a name be reused? Whenever an instruction completes
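A toy version of the renaming step (my sketch, not the 6.823 hardware): each destination gets a fresh tag, and each source is looked up in the current map, so later instructions name the producing tag instead of the architectural register.

    rename = {}    # architectural register -> in-flight tag (else use the value)
    next_tag = 0

    def issue(dst, srcs):
        """Rename one instruction; return (destination tag, renamed sources)."""
        global next_tag
        renamed = [rename.get(s, s) for s in srcs]  # tag if in flight, else name
        next_tag += 1
        tag = "t%d" % next_tag
        rename[dst] = tag                           # later readers see this tag
        return tag, renamed

    issue("F2", ["R2"])                # 1: LD F2, 34(R2)    -> t1
    issue("F4", ["R3"])                # 2: LD F4, 45(R3)    -> t2
    print(issue("F6", ["F4", "F2"]))   # 3: MULTD F6, F4, F2 -> ('t3', ['t2', 't1'])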
27 Compiler techniques for ILP For a given pipeline and superscalarity How can these be best utilized? As few stalls from hazards as possible Dynamic scheduling Tomasulo's algorithm etc. (TDT4255) Makes the CPU much more complicated What can be done by the compiler? Has ages to spend, but less knowledge Static scheduling, but what else?
28 Example Source code: for (i = 1000; i > 0; i = i - 1) x[i] = x[i] + s; Notice: Lots of dependencies No dependencies between iterations High loop overhead Loop unrolling MIPS:
Loop: L.D    F0,0(R1)     ; F0 = x[i]
      ADD.D  F4,F0,F2     ; F4 = x[i] + s (s is in F2)
      S.D    F4,0(R1)     ; Store x[i] + s
      DADDUI R1,R1,#-8    ; x[i] is 8 bytes
      BNE    R1,R2,Loop   ; repeat until R1 = R2
29 Static scheduling
Before scheduling:
Loop: L.D    F0,0(R1)
      stall
      ADD.D  F4,F0,F2
      stall
      stall
      S.D    F4,0(R1)
      DADDUI R1,R1,#-8
      stall
      BNE    R1,R2,Loop
After scheduling:
Loop: L.D    F0,0(R1)
      DADDUI R1,R1,#-8
      ADD.D  F4,F0,F2
      stall
      stall
      S.D    F4,8(R1)
      BNE    R1,R2,Loop
Result: From 9 cycles per iteration to 7 (delays from the table in Figure 2.2)
30 Loop unrolling
Original:
Loop: L.D    F0,0(R1)
      ADD.D  F4,F0,F2
      S.D    F4,0(R1)
      DADDUI R1,R1,#-8
      BNE    R1,R2,Loop
Unrolled (4 iterations):
Loop: L.D    F0,0(R1)
      ADD.D  F4,F0,F2
      S.D    F4,0(R1)
      L.D    F6,-8(R1)
      ADD.D  F8,F6,F2
      S.D    F8,-8(R1)
      L.D    F10,-16(R1)
      ADD.D  F12,F10,F2
      S.D    F12,-16(R1)
      L.D    F14,-24(R1)
      ADD.D  F16,F14,F2
      S.D    F16,-24(R1)
      DADDUI R1,R1,#-32
      BNE    R1,R2,Loop
Reduced loop overhead Requires number of iterations divisible by n (here n = 4) Register renaming Offsets have changed Stalls not shown
31 Unrolled only:
Loop: L.D    F0,0(R1)
      ADD.D  F4,F0,F2
      S.D    F4,0(R1)
      L.D    F6,-8(R1)
      ADD.D  F8,F6,F2
      S.D    F8,-8(R1)
      L.D    F10,-16(R1)
      ADD.D  F12,F10,F2
      S.D    F12,-16(R1)
      L.D    F14,-24(R1)
      ADD.D  F16,F14,F2
      S.D    F16,-24(R1)
      DADDUI R1,R1,#-32
      BNE    R1,R2,Loop
Unrolled and scheduled:
Loop: L.D    F0,0(R1)
      L.D    F6,-8(R1)
      L.D    F10,-16(R1)
      L.D    F14,-24(R1)
      ADD.D  F4,F0,F2
      ADD.D  F8,F6,F2
      ADD.D  F12,F10,F2
      ADD.D  F16,F14,F2
      S.D    F4,0(R1)
      S.D    F8,-8(R1)
      DADDUI R1,R1,#-32
      S.D    F12,16(R1)   ; 16 - 32 = -16
      S.D    F16,8(R1)    ; 8 - 32 = -24
      BNE    R1,R2,Loop
Avoids stall after: L.D (1), ADD.D (2), DADDUI (1)
32 Loop unrolling: Summary
Original code: 9 cycles per element
Scheduling: 7 cycles per element
Loop unrolling (4 iterations): 6.75 cycles per element
Combination (avoids stalls entirely): 3.5 cycles per element
Compiler reduced execution time by 61%
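A quick sanity check of these numbers (4 unrolled iterations, cycle counts implied by the per-element figures above):

    # 4 iterations: 27 cycles unrolled-only (27/4), 14 cycles scheduled (14/4)
    print(27 / 4, 14 / 4)          # 6.75 and 3.5 cycles per element
    print(f"{1 - 3.5 / 9:.0%}")    # 61% reduction vs. the original 9 cycles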
33 Loop unrolling in practice Do not usually know upper bound of loop Suppose it is n, and we would like to unroll the loop to make k copies of the body Instead of a single unrolled loop, we generate a pair of consecutive loops: 1st executes (n mod k) times and has a body that is the original loop 2nd is the unrolled body surrounded by an outer loop that iterates (n/k) times For large values of n, most of the execution time will be spent in the unrolled loop
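At the source level the transformation looks like this (a hand-written Python illustration of the two generated loops, not compiler output):

    def add_s(x, s, n, k=4):
        """x[i] = x[i] + s for i = n..1, unrolled by k. x is indexed 1..n."""
        i = n
        for _ in range(n % k):       # 1st loop: (n mod k) copies of the original body
            x[i] = x[i] + s
            i -= 1
        for _ in range(n // k):      # 2nd loop: unrolled body, iterates n/k times
            x[i]     = x[i]     + s
            x[i - 1] = x[i - 1] + s
            x[i - 2] = x[i - 2] + s
            x[i - 3] = x[i - 3] + s
            i -= k                   # for large n, almost all time is spent here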
34 Getting CPI below 1 CPI >= 1 if we issue only 1 instruction every clock cycle Multiple-issue processors come in 3 flavors: 1. Statically-scheduled superscalar processors In-order execution Varying number of instructions issued (compiler) 2. Dynamically-scheduled superscalar processors Out-of-order execution Varying number of instructions issued (CPU) 3. VLIW (very long instruction word) processors In-order execution Fixed number of instructions issued
35 VLIW: Very Large Instruction Word (1/2) Each VLIW has explicit coding for multiple operations Several instructions combined into packets Possibly with parallelism indicated Tradeoff instruction space for simple decoding Room for many operations Independent operations => execute in parallel E.g., 2 integer operations, 2 FP ops, 2 Memory refs, 1 branch
36 VLIW: Very Large Instruction Word (2/2) Assume 2 load/store, 2 FP, 1 int/branch VLIW with 0-5 operations. Why 0? Important to avoid empty instruction slots Loop unrolling Local scheduling Global scheduling Scheduling across branches Difficult to find all dependencies in advance Solution 1: Block on memory accesses Solution 2: CPU detects some dependencies
37 Recall: Unrolled loop that minimizes stalls for scalar Source code: for (i = 1000; i > 0; i = i - 1) x[i] = x[i] + s;
Loop: L.D    F0,0(R1)
      L.D    F6,-8(R1)
      L.D    F10,-16(R1)
      L.D    F14,-24(R1)
      ADD.D  F4,F0,F2
      ADD.D  F8,F6,F2
      ADD.D  F12,F10,F2
      ADD.D  F16,F14,F2
      S.D    F4,0(R1)
      S.D    F8,-8(R1)
      DADDUI R1,R1,#-32
      S.D    F12,16(R1)
      S.D    F16,8(R1)
      BNE    R1,R2,Loop
Register mapping: s -> F2, i -> R1
38 Loop Unrolling in VLIW
Clock | Memory reference 1 | Memory reference 2 | FP operation 1   | FP operation 2   | Int. op / branch
1     | L.D F0,0(R1)       | L.D F6,-8(R1)      |                  |                  |
2     | L.D F10,-16(R1)    | L.D F14,-24(R1)    |                  |                  |
3     | L.D F18,-32(R1)    | L.D F22,-40(R1)    | ADD.D F4,F0,F2   | ADD.D F8,F6,F2   |
4     | L.D F26,-48(R1)    |                    | ADD.D F12,F10,F2 | ADD.D F16,F14,F2 |
5     |                    |                    | ADD.D F20,F18,F2 | ADD.D F24,F22,F2 |
6     | S.D 0(R1),F4       | S.D -8(R1),F8      | ADD.D F28,F26,F2 |                  |
7     | S.D -16(R1),F12    | S.D -24(R1),F16    |                  |                  |
8     | S.D -32(R1),F20    | S.D -40(R1),F24    |                  |                  | DSUBUI R1,R1,#48
9     | S.D -0(R1),F28     |                    |                  |                  | BNEZ R1,LOOP
Unrolled 7 iterations to avoid delays 7 results in 9 clocks, or 1.3 clocks per iteration (1.8X) Average: 2.5 ops per clock, 50% efficiency Note: Need more registers in VLIW (15 vs. 6 in SS)
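The efficiency figure is just slot counting (a quick check): 23 operations spread over 9 clocks with 5 issue slots each.

    ops = 7 + 7 + 7 + 2              # 7 loads, 7 adds, 7 stores, DSUBUI + BNEZ
    clocks, slots = 9, 5
    print(round(ops / clocks, 1))            # 2.6 ~ 2.5 ops per clock
    print(round(ops / (clocks * slots), 2))  # 0.51 ~ 50% of issue slots used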
39 Problems with 1st Generation VLIW Increase in code size Loop unrolling Partially empty VLIW Operated in lock-step; no hazard detection HW A stall in any functional unit pipeline causes entire processor to stall, since all functional units must be kept synchronized Compiler might predict function units, but caches hard to predict Binary code compatibility Strict VLIW => different numbers of functional units and unit latencies require different versions of the code
40 IA-64 and EPIC 64 bit instruction set architecture Not a CPU, but an architecture Itanium and Itanium 2 are CPUs based on IA-64 Made by Intel and Hewlett-Packard Uses EPIC: Explicitly Parallel Instruction Computing Departure from the x86 architecture Details in Appendix G.6
41 Instruction bundle (VLIW)
42 Functional units and template Functional units: I (Integer), M (Integer + Memory), F (FP), B (Branch), L + X (64 bit operands + special inst.) Template field: Maps instruction to functional unit Indicates stops: Limitations to ILP
43 Code example (1/2)
44 Code example (2/2)
45 Instruction fetching Want to issue >1 instruction every cycle This means fetching >1 instruction E.g. 4-8 instructions fetched every cycle Several problems Bandwidth / Latency Determining which instructions Jumps Branches Integrated instruction fetch unit
46 Branch Target Buffer (BTB) Predicts next instruction address, sends it out before decoding instruction PC of branch sent to BTB When match is found, Predicted PC is returned If branch predicted taken, instruction fetch continues at Predicted PC
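Functionally a BTB is a small cache keyed by the fetch PC (a sketch; a real BTB is a set-associative hardware table, not a dictionary):

    btb = {}   # branch PC -> (predicted target, predicted taken?)

    def next_fetch_pc(pc):
        """Predict the next fetch address before the instruction is decoded."""
        if pc in btb:
            target, taken = btb[pc]
            if taken:
                return target      # predicted-taken branch: redirect fetch
        return pc + 4              # miss or predicted not taken: fall through

    def resolve(pc, target, taken):
        btb[pc] = (target, taken)  # updated when the branch actually resolves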
47 Return Address Predictor Small buffer of return addresses acts as a stack Caches most recent return addresses Call: Push a return address on stack Return: Pop an address off stack & predict as new PC [Figure: misprediction frequency (0-70%) vs. number of return address buffer entries, for go, m88ksim, cc1, compress, xlisp, ijpeg, perl, vortex]
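The predictor is a small bounded stack (sketch; the entry count is fixed in hardware, which is what the x-axis of the figure varies):

    class ReturnAddressStack:
        def __init__(self, entries=8):
            self.stack, self.entries = [], entries

        def call(self, return_pc):
            if len(self.stack) == self.entries:
                self.stack.pop(0)           # bounded: the oldest entry is lost
            self.stack.append(return_pc)    # push the return address

        def ret(self):
            # Pop an address and predict it as the new PC (None = no prediction)
            return self.stack.pop() if self.stack else None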
48 Integrated Instruction Fetch Units Recent designs have implemented the fetch stage as a separate, autonomous unit Multiple-issue in one simple pipeline stage is too complex An integrated fetch unit provides: Branch prediction Instruction prefetch Instruction memory access and buffering
49 Limits to ILP Chapter 3 Advances in compiler technology + significantly new and different hardware techniques may be able to overcome limitations assumed in studies However, it is unlikely that such advances, coupled with realistic hardware, will overcome these limits in the near future How much ILP is available using existing mechanisms with increasing HW budgets?
50 Ideal HW Model 1. Register renaming infinite virtual registers all register WAW & WAR hazards are avoided 2. Branch prediction perfect; no mispredictions 3. Jump prediction all jumps perfectly predicted 2 & 3 => no control dependencies; perfect speculation & an unbounded buffer of instructions available 4. Memory-address alias analysis addresses known & a load can be moved before a store provided addresses not equal 1 & 4 eliminate all but RAW 5. Perfect caches; 1 cycle latency for all instructions; unlimited instructions issued/clock cycle
51 Upper Limit to ILP: Ideal Machine (Figure 3.1) [Figure: instructions per clock on the ideal machine for integer programs (gcc, espresso, li) and FP programs (fpppp, doduc, tomcatv)]
52 Instruction window Ideal HW would need to know the entire code Obviously not practical Register dependence checking scales quadratically Window: The set of instructions examined for simultaneous execution How does the size of the window affect IPC? Too small window => Can't see whole loops Too large window => Hard to implement
53 More Realistic HW: Window Impact (Figure 3.2) [Figure: instructions per clock vs. window size, from infinite down to small windows, for integer (gcc, espresso, li) and FP (fpppp, doduc, tomcatv) programs]
54 Questions?
55 Thread Level Parallelism (TLP) ILP exploits implicit parallel operations within a loop or straight-line code segment TLP explicitly represented by the use of multiple threads of execution that are inherently parallel Use multiple instruction streams to improve: 1. Throughput of computers that run many programs 2. Execution time of a single application implemented as a multi-threaded (parallel) program
56 Multi-threaded execution Multi-threading: multiple threads share the functional units of 1 processor via overlapping Must duplicate independent state of each thread e.g., a separate copy of register file, PC and page table Memory shared through virtual memory mechanisms HW for fast thread switch; much faster than a full process switch (100s to 1000s of clocks) When switch? Alternate instruction per thread (fine grain) When a thread is stalled, perhaps for a cache miss, another thread can be executed (coarse grain)
57 Fine-Grained Multithreading Switches between threads on each instruction Multiple threads interleaved Usually round-robin fashion, skipping stalled threads CPU must be able to switch threads every clock Hides both short and long stalls Other threads executed when one thread stalls But slows down execution of individual threads Thread ready to execute without stalls will be delayed by instructions from other threads Used on Sun's Niagara
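Round-robin with stall skipping, in miniature (my sketch): each clock the next ready thread issues, so stalls are hidden, but any single thread's own instructions are spaced out.

    def pick_thread(ready, last):
        """ready: list of booleans, True = thread not stalled this cycle."""
        n = len(ready)
        for step in range(1, n + 1):
            t = (last + step) % n   # rotate starting from the last issuer
            if ready[t]:
                return t            # first ready thread in round-robin order
        return None                 # every thread stalled: issue a bubble

    print(pick_thread([True, False, True, True], last=0))  # 2 (thread 1 skipped)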
58 Coarse-Grained Multithreading Switch threads only on costly stalls (L2 cache miss) Advantages No need for very fast thread-switching Doesn't slow down thread, since switches only when thread encounters a costly stall Disadvantage: hard to overcome throughput losses from shorter stalls, due to pipeline start-up costs Since CPU issues instructions from 1 thread, when a stall occurs, the pipeline must be emptied or frozen New thread must fill pipeline before instructions can complete => Better for reducing penalty of high-cost stalls, where pipeline refill << stall time
59 Do both ILP and TLP? TLP and ILP exploit two different kinds of parallel structure in a program Can a high-ilp processor also exploit TLP? Functional units often idle because of stalls or dependences in the code Can TLP be a source of independent instructions that might reduce processor stalls? Can TLP be used to employ functional units that would otherwise lie idle with insufficient ILP? => Simultaneous Multi-threading (SMT) Intel: Hyper-Threading
60 Simultaneous Multi-threading [Diagram: issue slots per cycle across 8 units (M M FX FX FP FP BR CC) — with one thread many slots stay idle; with two threads the slots are filled by instructions from both] M = Load/Store, FX = Fixed Point, FP = Floating Point, BR = Branch, CC = Condition Codes
61 Simultaneous Multi-threading (SMT) A dynamically scheduled processor already has many HW mechanisms to support multi-threading Large set of virtual registers Virtual = not all visible at ISA level Register renaming Dynamic scheduling Just add a per-thread renaming table and keep separate PCs Independent commitment can be supported by logically keeping a separate reorder buffer for each thread
62 Multi-threaded categories [Diagram: issue slots over time (processor cycles) for superscalar, fine-grained multithreading, coarse-grained multithreading, multiprocessing, and simultaneous multithreading; slots belong to thread 1-5 or are idle]
63 Design Challenges in SMT SMT makes sense only with fine-grained implementation How to reduce the impact on single thread performance? Give priority to one or a few preferred threads Large register file needed to hold multiple contexts Not affecting clock cycle time, especially in Instruction issue - more candidate instructions need to be considered Instruction completion - choosing which instructions to commit may be challenging Ensuring that cache and TLB conflicts generated by SMT do not degrade performance