CS 614 COMPUTER ARCHITECTURE II FALL 2004


CS 614 COMPUTER ARCHITECTURE II FALL 2004

DUE : October, 2005

HOMEWORK II

READ : - Portions of Chapters 5, 7, 8 and 9 of the Sima book and
       - Portions of Chapters 3, 4 and Appendix A of the Hennessy book

ASSIGNMENT : There are three problems from the Hennessy book. Solve all homework and exam problems as shown in class and in past exam solutions.

1) Consider the piece of code in Problem 4.8 of the Hennessy book. This code is for the DAXPY application we discussed in class. Note that, according to Figure 2.3 of the Hennessy book, there is no DSUBUI instruction, even though the code in the problem uses it; the code has to use the available DADDI instruction instead. See the past exam questions below for the usage of the DADDI instruction.

Assume that this is machine model number 2: the MIPS uses the Tomasulo algorithm of Sections 3.2 and 3.3 of the Hennessy book, as discussed in class, with enough CDB buses to eliminate bottlenecks. In addition, there is a perfect memory with no stalls, and the functional unit timings are as listed on page A-74 of the Hennessy book: the double-precision FP operations ADD.D, MUL.D and DIV.D take 3, and 4 clock periods, respectively. Next, assume that there are enough functional units for integer instructions not to cause stalls, and that branch predictions are correct for the duration of the loop execution discussed below. As we know, a branch instruction takes two clock periods to run. Finally, store instructions complete in the WR stage.

In which clock period will the first iteration of the loop be completed? That is, what is the last clock period in which the Write-Result stage of an instruction from the first iteration is done? To answer the question, continue with the following table :

    Instruction           IF    ID    EX    WR
    L.D   F0, 0(R1)        1     2     3     4
    MUL.D F0, F0, F2       2     3    ...
    (Continue)
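Before filling in the table, it may help to see the bookkeeping mechanically. The following Python fragment is a minimal sketch under assumed conventions (one-cycle IF and ID, EX beginning the cycle after all source operands have been broadcast on a CDB, WR the cycle after EX ends, unlimited CDBs and a perfect memory); the latencies in the dictionary are placeholders to be replaced by the page A-74 values.

    # Tomasulo-style timing bookkeeping for a dependent instruction chain.
    LATENCY = {"L.D": 1, "S.D": 1, "ADD.D": 3, "MUL.D": 7, "DIV.D": 24}  # placeholders

    def schedule(program):
        """program: list of (op, dest, sources); returns per-instruction timings."""
        ready = {}        # register -> clock period its value is broadcast (WR)
        rows = []
        for n, (op, dest, sources) in enumerate(program):
            IF = n + 1    # scalar machine: one instruction fetched per clock
            ID = IF + 1   # issue to a reservation station
            operands = max([ready.get(r, 0) for r in sources] + [ID])
            ex_start = operands + 1          # EX begins once operands are in hand
            ex_end = ex_start + LATENCY[op] - 1
            WR = ex_end + 1                  # result broadcast on the CDB
            if dest:
                ready[dest] = WR
            rows.append((op, IF, ID, f"{ex_start}-{ex_end}", WR))
        return rows

    for row in schedule([("L.D",   "F0", []),        # 1 2 3 4
                         ("MUL.D", "F0", ["F0"]),    # waits on the load
                         ("ADD.D", "F4", ["F0"])]):  # waits on the multiply
        print(row)

Extending each row with a CM column and an in-order, one-per-clock commit would give tables of the form asked for in Problem 2 below.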

2) Consider the same DAXPY code given in Problem 4.8 of the Hennessy book again; note the DSUBUI instruction case mentioned in Problem 1. Assume that the MIPS is implemented as the scalar hardware-speculative Tomasulo algorithm machine as discussed in class; that is, this is machine model number 3. There are enough CDB buses to eliminate bottlenecks. In addition, assume that there is a perfect memory with no stalls and that the functional unit timings are as listed on page A-74 of the Hennessy book: the double-precision FP operations ADD.D, MUL.D and DIV.D take 3, and 4 clock periods, respectively. Another assumption is that there are enough functional units for integer instructions not to cause stalls. Finally, branch predictions are correct for the duration of the loop execution discussed below. As we know, a branch instruction takes two clock periods to run.

In which clock period will the first iteration of the loop be completed? That is, what is the last clock period in which the Commit stage of an instruction from the first iteration is done? Show also which instructions are flushed from the pipeline. To answer the question, continue with the following table, without showing the hardware tables :

    Instruction           IF    ID    EX    WR    CM
    L.D   F0, 0(R1)        1     2     3     4
    MUL.D F0, F0, F2       2     3    ...
    (Continue)

3) Solve Problem 3.(b) of the Hennessy book. The question is on machine model number 3 discussed in class, specifically on the process of fetching the operands of an instruction. As discussed in class, there are two alternatives :

- Fetch the operands of the instruction at issue time and place them in an RS: issue-bound fetch. Eventually, the instruction is scheduled for execution on a functional unit. Obviously, the RSs must have long value fields (Vj/Vk) to keep the operand values until the instruction is scheduled. This is what the MIPS machine in Section 3.7 does.

- Issue the instruction without fetching the operands: schedule-bound fetch. Eventually, the instruction is scheduled for execution, and at that moment the operands are fetched. Therefore, there is no need for long value fields in the RSs to keep the operands; only the shorter Qj/Qk fields are needed.

The scheme explored in Problem 3.(b) is that operands are still fetched during issue and there are still Qj/Qk fields in the RSs, but there are no value fields (Vj/Vk) in the RSs.
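For concreteness, here is a minimal sketch of the two reservation-station layouts contrasted above. The field names Vj, Vk, Qj and Qk follow the Hennessy book; the Python classes themselves are an illustrative assumption, not the book's hardware.

    # Reservation-station entries for the two operand-fetch policies.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class IssueBoundRS:
        """Issue-bound fetch: operand values are captured at issue time."""
        op: str
        Vj: Optional[float] = None   # long value fields, filled at issue
        Vk: Optional[float] = None   # (or later, from the CDB)
        Qj: Optional[str] = None     # producing-FU tag if the value is not ready
        Qk: Optional[str] = None

        def ready(self) -> bool:
            return self.Qj is None and self.Qk is None

    @dataclass
    class ScheduleBoundRS:
        """Schedule-bound fetch: only the short tags are stored; the values
        are read when a functional unit accepts the instruction, so no
        Vj/Vk fields are needed."""
        op: str
        Qj: Optional[str] = None
        Qk: Optional[str] = None

The Problem 3.(b) scheme would keep the issue-time fetch but drop the Vj/Vk fields from the first layout while keeping the Qj/Qk fields.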

RELEVANT QUESTIONS AND ANSWERS

Q1) The Section 3.2 Tomasulo algorithm we discussed in class (machine model number 2) is for a scalar processor with dynamic scheduling. It has drawbacks, two of which are that i) the CDB bus can carry only one value at a time, becoming a bottleneck, and ii) while an instruction is issued to a reservation station, it is possible that the issued instruction fails to take an operand along with it, since the operand may have just been put on the CDB by a functional unit and the issue logic is not aware of that. Thus, the instruction would wait indefinitely or would get an incorrect value. Suggest reasonable solutions for these two cases.

A1) i) In the first Tomasulo algorithm, seven functional units are used: 1 Load, 1 Store, 3 FP Add/Sub and 2 FP Mul/Div. We are not told the number of other integer functional units (other than the Load and Store integer functional units), so we will ignore them for this discussion. All, except the Store unit, need the CDB. Technically, the Store unit needs the CDB too, since a store in transit to memory can respond to a subsequent Load from the same location. All seven units can complete simultaneously and want to connect their results to the CDB. So, the CDB needs a width of 7 (seven) words as opposed to 1 (one) word; this way, it can carry up to seven values at the same time. This, however, means that there must be six write ports to the FP registers, six write ports to each of the FP Reservation Stations, and six write ports for the Load unit and the Store unit, so that six different values can be written to the FP registers, the RSs, and the load buffer and store buffer entries at the same time. Similarly, for the integer registers (the GPRs), the number of write ports should be increased while the width of the integer CDB is increased.

ii) In our algorithm, the operand fetch is made during issue, not during scheduling. So, when the ID stage decodes an instruction in a particular clock period, it checks the Register Status table to see if the Qi field of an operand register needed by the instruction is blank. In the original implementation, the field is not blank and the value of Qi is FUn (Functional Unit n). This means that the current instruction in ID waits in its RS until that value is computed by FUn. However, in the clock period in which the ID stage is working on that particular instruction, that same FUn places its result on the CDB and is in the process of updating the Register Status table to indicate that it has finished, i.e. the Qi entry is being written blank and the result is being written to the register. As you remember, all hardware tables as well as all registers are written at the end of the clock period. Thus, in that critical clock period, this Qi is not blank. So, when the ID stage issues the instruction, it places FUn in the Qx field of the reservation station entry instead of placing the operand value in the Vx field, even though the value is now on the CDB. The instruction would then get a wrong value when the same FUn computes a result for another instruction. In the worst case, if that FUn is never needed again by the program, the instruction will wait indefinitely; that is, the program will wait indefinitely.

There are a number of solutions! One solution is that each functional unit has a new output called valid, which is 1 in the clock period its result is placed on the CDB. The issue stage would then have to check the Register Status table together with these special output lines.
If the Qi value of the needed register matches the functional unit number whose valid output line is 1, the ID stage instructs the RS to store the result on the CDB into the Vx field for the instruction. Finally, it must be noted that this solution works with any number of CDBs, one or seven.
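A minimal sketch of this issue-time check follows, written in Python for clarity. The table and signal names (reg_status for the Register Status table, cdb_valid and cdb_value for the per-unit valid lines and CDB data) are illustrative assumptions, not the handout's hardware.

    # Issue-time operand fetch with the "valid" fix described above.
    # reg_status maps a register name to the FU tag (Qi) that will produce it,
    # or to None if the register file already holds the value.  cdb_valid[fu]
    # is 1 only in the clock period in which fu's result is on the CDB.

    def fetch_operand(reg, reg_status, cdb_valid, cdb_value, registers):
        """Return (Vx, Qx) for one source operand at issue time."""
        producer = reg_status.get(reg)         # the Qi field for this register
        if producer is None:
            return registers[reg], None        # value is in the register file
        if cdb_valid.get(producer):            # result is on the CDB this cycle:
            return cdb_value[producer], None   # capture the value, not the tag
        return None, producer                  # otherwise wait on FUn in the RS

Because every functional unit drives its own valid line, the check is the same whether there is one CDB or seven, matching the closing remark above.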

Q2) Consider the following piece of MIPS code, written for the unpipelined version of the MIPS :

          DADDI  R1, R0, #(64)           ; loop count; memory accesses are commented below
          L.D    F0, 0(Rk)               ; Rk points at constant k
    loop: LW     Ra, 0(Rindexa)          ; Rindexa points at the index vector for vector A
          L.D    F2, 0(Rb)               ; Rb points at vector B
          L.D    F4, 0(Rd)               ; Rd points at vector D
          DADD.D F6, F2, F0
          MUL.D  F8, F6, F4
          S.D    0(Ra), F6               ; Stores to vector A
          S.D    0(Rc), F8               ; Stores to vector C
          DADDI  R1, R1, #(-1)
          DADDI  Rindexa, Rindexa, #4    ; Rindexa is advanced
          DADDI  Rb, Rb, #8              ; Rb is advanced
          DADDI  Rc, Rc, #8              ; Rc is advanced
          DADDI  Rd, Rd, #8              ; Rd is advanced
          BNEZ   R1, loop

Assume that the MIPS is scalar and uses the hardware-speculative Tomasulo algorithm as discussed in class; that is, this is machine model number 3. It has enough buses to eliminate bottlenecks. In addition, there is a perfect memory with no stalls, and the functional unit timings are as listed on page 304 of the textbook. There are enough functional units for integer instructions not to cause stalls. Finally, assume that only one instruction per clock period is committed from the Reorder Buffer.

Show the execution of the loop for the first two (2) iterations as we did in class. Show when (in which clock period) the loop will end. Finally, how many iterations of instructions are flushed out of the pipeline?

A2) The execution is as follows. (The original table lists the IF, ID, EX, WR and CM clock periods of every instruction of iterations 1 and 2; its numeric entries did not survive transcription.)

Iterations 2 and 3 begin in the same staggered fashion, and the table then jumps ahead to iteration 64; its BNEZ R1, loop commits in clock period 844, completing the execution.

At that point, iteration 65 has already been speculatively fetched: its LW, both L.D instructions, the DADD.D, the MUL.D, both S.D instructions, and the DADDI instructions for R1, Rindexa and Rb are in the pipeline. These instructions are flushed out in the 844th clock period.

The loop ends at clock period 844. At that point in time, there are 10 speculatively executed instructions from iteration 65 in the pipeline. They are discarded (flushed out). Thus, only one iteration's worth of instructions is flushed out. Since the FP latencies are short and dependencies between successive instructions are few, not too many instructions accumulate in the Reservation Stations and the ROB.

Q3) Consider the following piece of MIPS code :

          LW    R8, 0(R9)          ; R8 is loaded from the memory
    loop: ADD.D F0, F2, F4         ; F2 and F4 are already initialized
          DIV.D F6, F8, F10        ; F8 and F10 are already initialized
          MUL.D F12, F4, F0        ; F4 is already initialized
          DADDI R8, R8, #(-1)
          SUB.D F8, F12, F6
          BNEZ  R8, loop
          S.D   0(R10), F8         ; R10 is already initialized

The code is old code, written for the unpipelined MIPS, i.e. there are no delayed loads and no delayed branches: this is machine model number 0. The old code is now run on the hardware-speculative MIPS with the Tomasulo algorithm; this is machine model number 3. The latencies of the functional units are as listed on page A-74 of the Hennessy book. Show which instructions are flushed out of the pipeline. Show the timing of the instructions run until the loop is completed.

A3) Due to the long FP latencies and back-to-back instruction dependencies, many instructions accumulate in the Reservation Stations and the ROB. They are eventually flushed out of the ROB: an undesirable situation. This MIPS ROB has to have at least 8 entries in order not to stall any of the loop instructions in the ID stage due to the ROB-full structural hazard.

Note that 8 is a large number! A solution to the large ROB size seems to be to retire more than one instruction at a time, but unfortunately that will not help us in this application. The only effective solution for this code is the reduction of the long FP latencies... Try this code with the functional unit latencies listed on page 304 of the Hennessy book.

(The original table lists the IF, ID, EX, WR and CM clock periods of each instruction for iterations 1 through 3, notes that iterations 4 through 15 continue in the same manner, and then shows iteration 16 starting; its numeric entries did not survive transcription.)

The speculatively fetched instructions of iteration 16 — ADD.D F0, F2, F4, DIV.D F6, F8, F10, MUL.D F12, F4, F0 and DADDI R8, R8, #(-1) — are flushed out in the 95th clock period. S.D 0(R10), F8 then commits, and the execution completes (the exact completion clock period did not survive transcription).
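To make the ROB-full structural hazard concrete, here is a minimal sketch of in-order commit with a bounded Reorder Buffer; the size and field names are illustrative assumptions, not the handout's hardware.

    # In-order issue into, and commit from, a bounded Reorder Buffer.
    # An instruction occupies its ROB entry from issue until commit, so a
    # long-latency instruction at the head holds everything behind it.
    from collections import deque

    ROB_SIZE = 8              # illustrative; the answer argues this must be large

    rob = deque()             # ROB entries, head (oldest) first

    def try_issue(instr):
        """ID stage: returns False (stall) when the ROB has no free entry."""
        if len(rob) == ROB_SIZE:
            return False      # ROB-full structural hazard: ID stalls this cycle
        rob.append({"instr": instr, "done_at": None})  # done_at is set at WR
        return True

    def commit(clock, per_clock=1):
        """Retire up to per_clock finished instructions, in program order."""
        retired = 0
        while rob and retired < per_clock:
            head = rob[0]
            if head["done_at"] is None or head["done_at"] > clock:
                break         # head not finished: everything behind it waits
            rob.popleft()     # commit frees the entry
            retired += 1
        return retired

Raising per_clock drains finished instructions at the head faster, but when the head is a still-executing long-latency DIV.D, nothing behind it can retire; this is why the answer above notes that retiring more than one instruction at a time would not help this code.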

Q4) Consider the following piece of old MIPS code, written for the unpipelined MIPS processor; that is, code without delayed loads, without delayed branches, without any consideration of the latencies of the functional units, etc. :

    loop: L.D   F0, 0(R2)        ; Load from M
          MUL.D F0, F0, F0       ; M[i] * M[i]
          L.D   F1, 0(R3)        ; Load from N
          L.D   F2, 0(R4)        ; Load from Q
          MUL.D F3, F1, F2       ; N[i] * Q[i]
          ADD.D F4, F0, F3       ; M[i] * M[i] + N[i] * Q[i]
          S.D   0(R1), F4        ; Store to K
          DADDI R1, R1, #8       ; Advance the K pointer
          DADDI R2, R2, #8       ; Advance the M pointer
          DADDI R3, R3, #8       ; Advance the N pointer
          DADDI R4, R4, #8       ; Advance the Q pointer
          DADDI R5, R5, #(-1)    ; Decrement the loop counter
          BNEZ  R5, loop         ; Branch back if not the end

Assume that the MIPS is implemented as the scalar hardware-speculative Tomasulo algorithm machine as discussed in class; that is, this is machine model number 3. Assume that there are enough CDB buses to eliminate bottlenecks. In addition, assume that there is a perfect memory with no stalls and that the functional unit timings are as listed on page 304 of the Hennessy book. Another assumption is that there are enough functional units for integer instructions not to cause stalls. There are separate address and branch units. A branch instruction takes two clock periods to run if its operands are ready (IF and ID stages); otherwise, it is issued to the EX stage and waits there until its operands are ready. Finally, assume that only one instruction per clock period is committed from the Reorder Buffer.

Assume that the loop has two (2) iterations. In which clock period will the second iteration of the loop be completed? That is, what is the last clock period in which the Commit stage of an instruction from the second iteration is done? Show which instructions are flushed out of the pipeline. If a new situation is encountered, indicate the assumption made and/or how it is handled.

A4) The execution of the loop for the two iterations and the flushed-out instructions are as follows. (The original table lists the IF, ID, EX, WR and CM clock periods of each instruction of iterations 1, 2 and 3; its numeric entries did not survive transcription.)

The speculatively fetched instructions of iteration 3 — L.D F0, MUL.D F0, L.D F1, L.D F2, MUL.D F3, ADD.D F4, S.D 0(R1), F4 and the four pointer-advancing DADDI instructions — are flushed out at the end of the 37th clock period.

The second iteration of the loop ends at clock period 37. The 11 instructions of the third iteration are flushed out of the ROB when the loop completes. Running the old code on the new processor shows why some old code runs slower than new code for the same application: instructions wait for each other due to data dependencies (the FP instructions above) while other instructions could have been executed in the meantime (the DADDI instructions above). A contemporary compiler would move the DADDI instructions up, between the FP instructions, so that the stall cycles are reduced and useful work is done.
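As an illustration of this closing remark, here is a toy greedy list scheduler; it sketches the idea only and is not any particular compiler's algorithm. The instruction tuples in the example are a hand-written fragment of the Q4 loop body.

    # Toy list scheduling: hoist independent integer updates between
    # dependent FP instructions so that FP latency is hidden by useful work.
    def schedule(instrs):
        """instrs: (name, reads, writes) tuples in program order."""
        def depends(i, j):           # RAW, WAR or WAW between instrs i and j
            _, ri, wi = instrs[i]
            _, rj, wj = instrs[j]
            return bool(set(ri) & set(wj) or set(wi) & set(rj)
                        or set(wi) & set(wj))

        pending, out = list(range(len(instrs))), []
        while pending:
            # an instruction is ready once no earlier unscheduled
            # instruction still conflicts with it
            ready = [i for i in pending
                     if not any(depends(i, j) for j in pending if j < i)]
            ints = [i for i in ready if instrs[i][0].startswith("DADDI")]
            prev = out[-1] if out else ""
            # after an FP/store instruction, prefer an integer update
            pick = ints[0] if ints and prev and not prev.startswith("DADDI") \
                else ready[0]
            out.append(instrs[pick][0])
            pending.remove(pick)
        return out

    body = [("MUL.D F0,F0,F0", ["F0"], ["F0"]),
            ("ADD.D F4,F0,F3", ["F0", "F3"], ["F4"]),
            ("S.D 0(R1),F4",   ["R1", "F4"], []),
            ("DADDI R1,R1,#8", ["R1"], ["R1"]),
            ("DADDI R2,R2,#8", ["R2"], ["R2"])]
    print(schedule(body))   # DADDI R2,R2,#8 is hoisted between the FP ops

The scheduler never moves a DADDI above an instruction that still reads the old pointer value (a WAR dependence), which is exactly the constraint a compiler must respect when it moves the pointer updates up.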
