Computer Architecture Spring 2016


Computer Architecture, Spring 2016
Lecture 02: Introduction II
Shuai Wang, Department of Computer Science and Technology, Nanjing University

Pipeline Hazards
Major hurdle to pipelining: hazards prevent the next instruction from executing during its designated clock cycle.
Structural hazards: the hardware cannot support all possible combinations of instructions simultaneously.
Data hazards: an instruction depends on the result of a previous instruction still in the pipeline.
Control hazards: caused by the delay between the fetching of instructions and decisions about changes in control flow.

One Memory Port / Structural Hazards
[Pipeline timing diagram, cycles 1-7: a Load followed by Instr 1-4; with a single memory port, the Load's MEM stage needs memory in the same cycle as a later instruction's fetch.]

One Memory Port / Structural Hazards (cont.)
[Same diagram with the conflict resolved: the later instruction's fetch is stalled one cycle, inserting a bubble that propagates down the pipeline.]

Data Hazard on R1
[Pipeline diagram, stages IF, ID/RF, EX, MEM, WB, showing later instructions reading r1 before the add has written it back:]
add r1,r2,r3
sub r4,r1,r3
and r6,r1,r7
or r8,r1,r9
xor r10,r1,r11

Three Generic Data Hazards: Read After Write (RAW)
Instr J tries to read an operand before Instr I writes it:
I: add r1,r2,r3
J: sub r4,r1,r3
Caused by a dependence; this hazard results from an actual need for communication.

Three Generic Data Hazards: Write After Read (WAR)
Instr J tries to write an operand before Instr I reads it:
I: sub r4,r1,r3
J: add r1,r2,r3
K: mul r6,r1,r7
Called an anti-dependence by compiler writers; it results from reuse of the name r1.
Can't happen in the MIPS 5-stage pipeline because:
all instructions take 5 stages,
reads are always in stage 2, and
writes are always in stage 5.

Three Generic Data Hazards: Write After Write (WAW)
Instr J writes an operand before Instr I writes it:
I: sub r1,r4,r3
J: add r1,r2,r3
K: mul r6,r1,r7
Called an output dependence by compiler writers; it also results from reuse of the name r1.
Can't happen in the MIPS 5-stage pipeline because:
all instructions take 5 stages, and
writes are always in stage 5.
We will see WAR and WAW in later, more complicated pipelines.

Forwarding to Avoid Data Hazard
[Pipeline diagram: the add's result is forwarded from the pipeline latches directly to the ALU inputs of the dependent instructions:]
add r1,r2,r3
sub r4,r1,r3
and r6,r1,r7
or r8,r1,r9
xor r10,r1,r11

HW Change for Forwarding
[Datapath diagram: muxes at the ALU inputs select among the register file (ID/EX), the EX/MEM latch, and the MEM/WB latch, implementing the forwarding paths; NextPC, the immediate path, and data memory are unchanged.]

Data Hazard Even with Forwarding
[Pipeline diagram: the loaded value is not available until the end of the lw's MEM stage, too late for the sub's EX stage even with forwarding:]
lw r1, 0(r2)
sub r4,r1,r6
and r6,r1,r7
or r8,r1,r9

Data Hazard Even with Forwarding (cont.)
[Same diagram with a one-cycle bubble inserted after the lw, so the sub can receive r1 through the MEM/WB forwarding path:]
lw r1, 0(r2)
sub r4,r1,r6
and r6,r1,r7
or r8,r1,r9

Control Hazard on Branches: Three-Stage Stall
[Pipeline diagram: the three instructions after the beq are fetched before the branch outcome is known:]
10: beq r1,r3,36
14: and r2,r3,r5
18: or r6,r1,r7
22: add r8,r1,r9
36: xor r10,r1,r11

Example: Branch Stall Impact
If 30% of instructions are branches and each one stalls 3 cycles, the performance loss is significant (see the sketch below).
Two-part solution:
determine whether the branch is taken or not sooner, AND
compute the taken-branch address earlier.
MIPS branches test whether a register is = 0 or ≠ 0.
MIPS solution:
Move the zero test to the ID/RF stage.
Add an adder to calculate the new PC in the ID/RF stage.
Result: a 1-clock-cycle branch penalty instead of 3.
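As a quick sanity check of these numbers, a minimal sketch of the CPI impact (the 30% branch frequency and the 3-cycle vs. 1-cycle penalties are from the slide; the base CPI of 1 is an assumption):

```python
base_cpi = 1.0       # assumed ideal pipelined CPI
branch_freq = 0.30   # 30% of instructions are branches (from the slide)

for penalty in (3, 1):   # 3-cycle stall vs. 1 cycle after the ID/RF fix
    cpi = base_cpi + branch_freq * penalty
    print(f"penalty={penalty}: CPI={cpi:.2f} ({cpi/base_cpi:.2f}x slower)")

# penalty=3: CPI=1.90 (1.90x slower)
# penalty=1: CPI=1.30 (1.30x slower)
```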

Pipelined MIPS Datapath
[Datapath diagram: the five stages (Instruction Fetch, Instr. Decode / Reg. Fetch, Execute / Addr. Calc, Memory Access, Write Back) separated by the IF/ID, ID/EX, EX/MEM, and MEM/WB pipeline registers; the zero test and an extra adder in the decode stage compute the branch decision and target early, and a mux selects the next PC.]

Four Branch Hazard Alternatives
#1: Stall until the branch direction is clear.
#2: Predict branch not taken.
Execute successor instructions in sequence; squash instructions in the pipeline if the branch is actually taken.
Advantage of late pipeline state update.
47% of MIPS branches are not taken on average.
PC+4 is already calculated, so use it to fetch the next instruction.
#3: Predict branch taken.
53% of MIPS branches are taken on average.
But the branch target address hasn't yet been calculated in MIPS, so MIPS still incurs a 1-cycle branch penalty.
On other machines the branch target may be known before the branch outcome.
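A minimal sketch of how these frequencies translate into an average per-branch penalty under the two static predictions (the 53% taken rate is from the slide; the 1-cycle penalty assumes the optimized pipeline from the earlier slide):

```python
p_taken = 0.53   # fraction of MIPS branches taken (from the slide)
penalty = 1      # cycles lost on a wrong guess (zero test in ID/RF)

# Predict not taken: pay the penalty only when the branch IS taken.
print(f"predict not taken: {p_taken * penalty:.2f} cycles/branch")  # 0.53

# Predict taken: MIPS has no target address early enough, so every
# branch still pays the full penalty (no benefit on this pipeline).
print(f"predict taken (MIPS): {1.0 * penalty:.2f} cycles/branch")   # 1.00
```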

Four Branch Hazard Alternatives (cont.)
#4: Delayed branch.
Define the branch to take place AFTER a following instruction:
branch instruction
sequential successor_1
sequential successor_2
...
sequential successor_n
branch target if taken
A 1-slot delay allows the proper decision and branch target address in a 5-stage pipeline; MIPS uses this.

Scheduling the Branch-Delay Slot
Three branch-scheduling schemes: from before the branch, from the target, and from the fall-through path.
In (a) the delay slot is scheduled with an independent instruction from before the branch; this is the best choice. Strategies (b) and (c) are used when (a) is not possible. To make the optimization legal for (b) and (c), it must be OK to execute the moved SUB instruction when the branch goes in the unexpected direction: the work might be wasted, but the program will still execute correctly.

Delayed Branch
Where do we get instructions to fill the branch-delay slot?
From before the branch instruction.
From the target address: only valuable when the branch is taken.
From fall-through: only valuable when the branch is not taken.
Canceling branches allow more slots to be filled.
Compiler effectiveness for a single branch-delay slot:
Fills about 60% of branch-delay slots.
About 80% of instructions executed in branch-delay slots are useful in computation.
So about 50% (60% x 80%) of slots are usefully filled.
Downside of delayed branches: with deeper (7-8 stage) pipelines and multiple instructions issued per clock (superscalar), a single delay slot no longer hides the branch latency.

Today's Lecture
Review of the following topics:
Pipelining
Caches
Virtual memory
Fundamentals of computer design: performance, cost, technology

The Principle of Locality
Principle of locality: programs tend to reuse data and instructions they have used recently, and access a relatively small portion of the address space at any instant of time.
Two different types of locality:
Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon (e.g., loops, reuse).
Spatial locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon (e.g., straight-line code, array access).
For the last 15 years, hardware has relied on locality for speed.
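A tiny illustration of both kinds of locality in code (a sketch; the array and loop are made up for illustration):

```python
data = list(range(1024))

total = 0
for i in range(len(data)):
    total += data[i]   # spatial locality: data[0], data[1], ... live at
                       # consecutive addresses, so one cached block serves
                       # several iterations.
                       # temporal locality: 'total' and 'i' are reused on
                       # every iteration, so they stay in the fastest level.
```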

Memory Hierarchy: Terminology
Hit: the data appears in some block in the upper level (example: Block X).
Hit rate: the fraction of memory accesses found in the upper level.
Hit time: time to access the upper level, which consists of the RAM access time plus the time to determine hit/miss.
Miss: the data must be retrieved from a block in the lower level (Block Y).
Miss rate = 1 - (hit rate).
Miss penalty: time to replace a block in the upper level plus time to deliver the block to the processor.
Hit time << miss penalty (a miss costs the equivalent of 500 instructions on the 21264!).
[Diagram: the processor exchanging data with an upper-level memory holding Blk X, backed by a lower-level memory holding Blk Y.]

Measuring Cache Performance
Hit rate: the fraction of accesses found in that level. It is usually so high that we talk about the miss rate instead.
Miss-rate fallacy: miss rate is to average memory access time as MIPS is to CPU performance, i.e., an easy number that can badly mislead.
Average memory access time = Hit time + Miss rate × Miss penalty (in ns or clocks).
Miss penalty: time to replace a block from the lower level, including the time to deliver it to the CPU:
access time: time to reach the lower level = f(latency to lower level)
transfer time: time to transfer the block = f(bandwidth between upper and lower levels)
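A minimal sketch of the AMAT formula in use (the hit times, miss rates, and penalties are assumed example values, not from the slide):

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time = hit time + miss rate * miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Assumed example values: 1-cycle hit, 5% miss rate, 100-cycle penalty.
print(amat(1, 0.05, 100))        # 6.0 cycles

# With an L2 between L1 and memory, the L1 miss penalty is itself the
# AMAT of L2 (a standard generalization, not stated on the slide).
l2 = amat(10, 0.20, 200)         # 50.0 cycles
print(amat(1, 0.05, l2))         # 3.5 cycles
```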

Cache Organization: Direct Mapped
[Diagram: a 4-byte direct-mapped cache (indices 0-3) in front of a 16-location memory (addresses 0-F).]
Cache location 0 can be occupied by data from memory locations 0, 4, 8, etc. In general: any memory location whose 2 LSBs are 0s maps there, and address<1:0> gives the cache index.
Which one should we place in the cache? How can we tell which one is in the cache?

1 KB Direct Mapped Cache, 32 B Blocks
For a 2^N-byte direct-mapped cache with 2^M-byte blocks (here N = 10, M = 5):
the uppermost (32 - N) bits of the address are always the cache tag,
the lowest M bits are the byte select, and
the (N - M) bits in between are the cache index.
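A minimal sketch of this address breakdown for the 1 KB, 32 B-block cache (the 32-bit address width is from the slide; the example address is made up):

```python
CACHE_BYTES = 1024                        # 2**10 bytes -> N = 10
BLOCK_BYTES = 32                          # 2**5 bytes  -> M = 5
NUM_BLOCKS = CACHE_BYTES // BLOCK_BYTES   # 32 lines -> 5 index bits

M = BLOCK_BYTES.bit_length() - 1          # 5
N = CACHE_BYTES.bit_length() - 1          # 10

def split_address(addr):
    """Split a 32-bit address into (tag, index, byte select)."""
    offset = addr & (BLOCK_BYTES - 1)         # lowest M bits
    index = (addr >> M) & (NUM_BLOCKS - 1)    # next N - M bits
    tag = addr >> N                           # uppermost 32 - N bits
    return tag, index, offset

print(split_address(0x1234))  # (4, 17, 20) -- made-up example address
```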

Two-Way Set Associative Cache
N-way set associative: N entries for each cache index, i.e., N direct-mapped caches operating in parallel (N is typically 2 to 4).
Example: a two-way set associative cache.
The cache index selects a set from the cache.
The two tags in the set are compared in parallel.
Data is selected based on the tag comparison result.
[Diagram: two valid/tag/data arrays indexed in parallel; two comparators feed an OR gate for Hit, and Sel0/Sel1 drive a mux that picks the hit way's cache block.]

Disadvantage of Set Associative Caches
N-way set associative cache vs. direct-mapped cache:
N comparators vs. 1.
Extra MUX delay for the data.
Data arrives AFTER the hit/miss decision.
In a direct-mapped cache the cache block is available BEFORE hit/miss: it is possible to assume a hit and continue, recovering later if it was a miss.
[Diagram: the same two-way structure as on the previous slide, highlighting the data mux on the critical path.]

Four Questions for the Memory Hierarchy
Q1: Where can a block be placed in the upper level? (Block placement)
Q2: How is a block found if it is in the upper level? (Block identification)
Q3: Which block should be replaced on a miss? (Block replacement)
Q4: What happens on a write? (Write strategy)

Q1: Where can a block be placed in the upper level?
Block 12 placed in an 8-block cache, under three organizations:
Fully associative: block 12 can go in any of the 8 blocks.
Direct mapped: block 12 can go only in block (12 mod 8) = 4.
2-way set associative: block 12 can go anywhere in set (12 mod 4) = 0.
Set-associative mapping: set = block number modulo the number of sets.
[Diagram: the three cache organizations over blocks 0-7, and a 32-block memory with block 12 highlighted.]
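A minimal sketch of the placement rule, reproducing the slide's block-12 example:

```python
def placement_set(block_num, num_blocks, ways):
    """Set = block number mod number of sets."""
    num_sets = num_blocks // ways
    return block_num % num_sets

# Block 12 in an 8-block cache (the slide's example):
print(placement_set(12, 8, ways=1))   # direct mapped: block 4
print(placement_set(12, 8, ways=2))   # 2-way: set 0
print(placement_set(12, 8, ways=8))   # fully associative: the single set 0
```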

Q2: How is a block found if it is in the upper level?
There is a tag on each block, and there is no need to check the index or block offset when comparing tags.
Increasing associativity shrinks the index and expands the tag.
Address layout: | Tag | Index | Block Offset |, where tag + index form the block address.

Q3: Which block should be replaced on a miss?
Easy for direct mapped: there is only one candidate.
For set associative or fully associative: Random, or LRU (least recently used).
Miss rates, LRU vs. random:

          2-way           4-way           8-way
Size      LRU     Ran     LRU     Ran     LRU     Ran
16 KB     5.2%    5.7%    4.7%    5.3%    4.4%    5.0%
64 KB     1.9%    2.0%    1.5%    1.7%    1.4%    1.5%
256 KB    1.15%   1.17%   1.13%   1.13%   1.12%   1.12%
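A minimal sketch of LRU replacement for a single set (a pure-Python illustration; the tag reference string is made up):

```python
from collections import OrderedDict

def simulate_set(refs, ways):
    """Count (hits, misses) for one cache set under LRU replacement."""
    lru = OrderedDict()            # tags, ordered oldest -> newest
    hits = 0
    for tag in refs:
        if tag in lru:
            hits += 1
            lru.move_to_end(tag)   # now the most recently used
        else:
            if len(lru) == ways:
                lru.popitem(last=False)   # evict least recently used
            lru[tag] = None
    return hits, len(refs) - hits

# Made-up tag reference string for a 2-way set:
print(simulate_set([1, 2, 1, 3, 1, 2], ways=2))  # (2, 4)
```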

Q4: What happens on a write?
Write through: the information is written both to the block in the cache and to the block in lower-level memory.
Write back: the information is written only to the block in the cache; the modified block is written to main memory only when it is replaced. Requires tracking whether each block is clean or dirty.
Pros and cons of each:
WT: read misses cannot result in writes.
WB: no repeated writes to the same location.
WT is always combined with write buffers so that the processor doesn't wait for the lower-level memory.

Write Buffer for Write Through
[Diagram: Processor -> Cache, with a Write Buffer between the cache and DRAM.]
A write buffer is needed between the cache and memory:
The processor writes data into the cache and the write buffer.
The memory controller writes the contents of the buffer to memory.
The write buffer is just a FIFO; a typical number of entries is 4.
This works fine as long as the store frequency (with respect to time) << 1 / DRAM write cycle.
The memory system designer's nightmare: the store frequency approaches 1 / DRAM write cycle, and the write buffer saturates.
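A rough queueing sketch of write-buffer saturation (the 4-entry depth is from the slide; the store gaps and DRAM write cycle are made-up example values, and the memory controller is modeled as a single serial server):

```python
def stall_cycles(n_stores, store_gap, dram_cycle, depth=4):
    """Cycles the processor stalls because the write buffer is full."""
    pending = []       # completion times of writes still in the buffer
    last_done = 0      # when the controller finishes its newest write
    t = stalled = 0
    for _ in range(n_stores):
        t += store_gap                            # next store issues
        pending = [c for c in pending if c > t]   # drain finished writes
        if len(pending) == depth:                 # buffer full: wait for one
            stalled += pending[0] - t
            t = pending[0]
            pending = pending[1:]
        last_done = max(last_done, t) + dram_cycle
        pending.append(last_done)
    return stalled

# Store every 12 cycles, 10-cycle DRAM write: the buffer keeps up.
print(stall_cycles(1000, store_gap=12, dram_cycle=10))  # 0
# Store every 8 cycles, 10-cycle DRAM write: saturation, steady stalling.
print(stall_cycles(1000, store_gap=8, dram_cycle=10))   # grows with n_stores
```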

The Cache Design Space
Several interacting dimensions:
cache size
block size
associativity
replacement policy
write-through vs. write-back
The optimal choice is a compromise: it depends on access characteristics (workload; use as I-cache, D-cache, or TLB) and on technology and cost. Simplicity often wins.
[Diagram: qualitative curves of goodness vs. cache size, block size, and associativity.]