ECE 30 Introduction to Computer Engineering


ECE 30 Introduction to Computer Engineering Study Problems, Set #9 Spring 01

1. Given the following series of address references, as word addresses: ,,, 1, 1, 1,, 8, 19,,,,, 7,, and . Assuming a direct-mapped cache with 16 one-word blocks that is initially empty, label each reference in the list as a hit or a miss and show the final contents of the cache (make sure to represent the cache structure correctly: show the index for every line of the cache and include the valid and tag bits as well as the data).

Address Index Tag Hit/Miss 1 1 1 8 19 7

Solution: For a direct-mapped cache with 16 one-word blocks, index = address mod 16 and tag = address div 16 (integer division).

Address Index Tag Hit/Miss 0 M 0 M 0 M 1 0 1 M 1 5 1 M 1 1 0 M 0 M 8 0 M 19 1 M 0 H 0 M 1 M 0 M 7 1 M 0 M 0 M

Index Valid Tag Data 0 1 8 1 0 1 0 1 0 1 0 5 1 1 1 1 0 7 0 8 0 9 0 10 0 1 0 1 0 1 1 0 1 1 0 15 0

The Data column contains the word address, in memory, of the content stored as data in that cache line.
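The bookkeeping in this solution can be sketched in code. The simulator below models the Problem 1 cache (direct-mapped, 16 one-word blocks); the short address trace is a hypothetical example, not the series from the problem, chosen so that one block is evicted by a conflicting address and the last reference hits.

```python
# Sketch of the Problem 1 bookkeeping: a direct-mapped cache with 16
# one-word blocks. The address trace is a hypothetical example, not the
# trace from the problem.

def simulate_direct_mapped(addresses, num_blocks=16):
    """Return (address, index, tag, 'H' or 'M') for each reference."""
    cache = {}  # index -> tag; an index is valid iff it is present
    trace = []
    for addr in addresses:
        index = addr % num_blocks   # index = address mod 16
        tag = addr // num_blocks    # tag = address div 16
        hit = cache.get(index) == tag
        if not hit:
            cache[index] = tag      # miss: replace the block at this index
        trace.append((addr, index, tag, 'H' if hit else 'M'))
    return trace

# Hypothetical trace: 20 evicts 4 (both map to index 4); 17 repeats and hits.
for row in simulate_direct_mapped([1, 4, 8, 5, 20, 17, 19, 17]):
    print("addr=%2d index=%2d tag=%d %s" % row)
```

With one-word blocks there is no block offset, so only temporal reuse of the exact same word can hit, as the repeated reference to 17 shows.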

2. Using the series of references given in Problem 1, show the hits and misses and the final cache contents for a direct-mapped cache with four-word blocks and a total size of 16 words (make sure to represent the cache structure correctly; again, show the index for every line of the cache and include the valid and tag bits).

Address Block offset Index Tag Hit/Miss 1 1 1 8 19 7

Solution: block offset = address mod 4; index = (address mod 16) div 4; tag = address div 16.

Address Block Offset Index Tag Hit/Miss 0 0 M 0 0 H 0 M 1 0 0 1 M 1 1 1 1 M 1 1 0 M 0 0 M 8 0 0 M 19 0 1 M 0 H 0 0 M 1 1 H 0 1 0 M 7 1 M 1 0 H 0 M

Index Valid Tag Data 0 1 0 1 0 0 1 1 1 0 5 7 1 0 8 9 10 1 0 1 1 1 15

The Data column contains the word addresses, in memory, of the contents stored as data in the cache.
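The same bookkeeping with four-word blocks can be sketched as follows. The trace is again a hypothetical example, not the series from Problem 1, chosen to show spatial locality: neighboring words hit in the block fetched by an earlier miss.

```python
# Sketch of the Problem 2 bookkeeping: a direct-mapped cache of 16 words
# total, organized as 4 four-word blocks. Hypothetical example trace.

def simulate_block_cache(addresses, block_words=4, total_words=16):
    """Return (address, offset, index, tag, 'H' or 'M') per reference."""
    cache = {}  # index -> tag
    trace = []
    for addr in addresses:
        offset = addr % block_words                  # block offset = addr mod 4
        index = (addr % total_words) // block_words  # index = (addr mod 16) div 4
        tag = addr // total_words                    # tag = addr div 16
        hit = cache.get(index) == tag
        if not hit:
            cache[index] = tag  # miss: the whole four-word block is fetched
        trace.append((addr, offset, index, tag, 'H' if hit else 'M'))
    return trace

# Words 1, 2, 3 share block 0, so only the first reference misses;
# 20 then evicts the block holding word 5 (both map to index 1).
for row in simulate_block_cache([1, 2, 3, 5, 20]):
    print("addr=%2d offset=%d index=%d tag=%d %s" % row)
```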

3. Consider a memory hierarchy using one of the three organizations for main memory shown in the following figure. Assume that the cache block size is 16 words, that the width of organization (b) in the figure is four words, and that the number of banks in organization (c) is four. If the main memory latency for a new access is 10 memory bus clock cycles and the transfer time is 1 memory bus clock cycle, what is the miss penalty for each of these organizations?

Solution: The cache block size is 16 words.

(a) One-word-wide memory. Miss penalty = 16 × memory access time + 1 transfer time (for sending the address) + 16 × transfer time (for sending 16 words, one at a time) = 160 + 1 + 16 = 177 memory bus clock cycles.

(b) Four-word-wide memory. Miss penalty = 4 × memory access time + 1 transfer time (for sending the address) + 4 × transfer time (for sending four sets of four words) = 40 + 1 + 4 = 45 memory bus clock cycles.

(c) Four banks of one-word-wide interleaved memory. The four banks are accessed in parallel, so only four rounds of accesses are needed, but every word still crosses the one-word-wide bus individually. Miss penalty = 4 × memory access time + 1 transfer time (for sending the address) + 16 × transfer time (for sending 16 words) = 40 + 1 + 16 = 57 memory bus clock cycles.
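The three miss penalties reduce to simple arithmetic; the sketch below just encodes the parameters from the problem statement (16-word blocks, 10-cycle access latency, 1-cycle word transfer, 1 cycle to send the address).

```python
# Miss-penalty arithmetic for the three main-memory organizations of
# Problem 3. Parameters are taken from the problem statement.

BLOCK_WORDS = 16  # words per cache block
ACCESS = 10       # memory access latency, in bus clock cycles
TRANSFER = 1      # cycles to transfer one word (or one bus-width of data)
ADDRESS = 1       # cycles to send the address

# (a) One-word-wide memory: 16 sequential accesses, 16 word transfers.
penalty_a = ADDRESS + BLOCK_WORDS * ACCESS + BLOCK_WORDS * TRANSFER

# (b) Four-word-wide memory: 4 accesses, 4 four-word-wide transfers.
penalty_b = ADDRESS + 4 * ACCESS + 4 * TRANSFER

# (c) Four interleaved one-word banks: accesses overlap (4 rounds),
#     but all 16 words still cross the one-word bus individually.
penalty_c = ADDRESS + 4 * ACCESS + BLOCK_WORDS * TRANSFER

print(penalty_a, penalty_b, penalty_c)  # 177 45 57
```

Interleaving buys most of the benefit of a wide memory (overlapped accesses) without widening the bus, which is why (c) lands between (a) and (b).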

4. Using the series of references given in Problem 1 and assuming a two-way set-associative cache (replacement scheme: least recently used) with two-word blocks and a total size of 16 words that is initially empty, label each reference in the table below as a hit or a miss and show the final contents of the cache (make sure to represent the cache structure correctly; show the index for every set of the cache and include all special bits as well as the data).

Address Block offset Set index Tag Set entry Hit/Miss 1 1 1 8 19 7

Solution: Two-way set-associative cache with a total of 16 words and two-word blocks (four sets); LRU replacement policy. block offset = address mod 2; set index = (address mod 8) div 2; tag = address div 8.

Address Block offset Set index Tag Set entry Hit/Miss 0 1 0 0 M 1 1 0 0 H 1 1 1 1 M 1 0 0 0 M 1 1 0 M 1 1 1 1 M 0 0 8 1 M 8 0 0 0 M 19 1 1 0 M 1 1 1 1 H 1 1 0 0 M 0 0 M 0 0 0 M 7 1 1 1 M 0 0 1 M 1 1 1 0 M

Set index Entry 0 (Valid Tag Data) Entry 1 (Valid Tag Data) 0 1 0 1 0 1 8 9 1 8 5 1 1 1 10 1 7 1 0 5 1 1 1 1 1 1 0 7

The Data column contains the word addresses, in memory, of the contents stored as data in the cache.
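The set-associative bookkeeping, including the LRU ordering within each set, can be sketched with an ordered dictionary per set. The trace is a hypothetical example (not the series from Problem 1) in which three blocks compete for the same two-way set, so the LRU choice is visible.

```python
# Sketch of the Problem 4 bookkeeping: a two-way set-associative cache,
# 16 words total with two-word blocks (4 sets), LRU replacement.
# The address trace is a hypothetical example, not the problem's trace.
from collections import OrderedDict

def simulate_two_way(addresses, ways=2, block_words=2, total_words=16):
    """Return (address, set index, tag, 'H' or 'M') per reference."""
    num_sets = total_words // (block_words * ways)   # 4 sets
    sets = [OrderedDict() for _ in range(num_sets)]  # tags kept in LRU order
    trace = []
    for addr in addresses:
        block = addr // block_words
        index = block % num_sets   # set index = (addr mod 8) div 2
        tag = block // num_sets    # tag = addr div 8
        s = sets[index]
        if tag in s:
            s.move_to_end(tag)     # hit: mark most recently used
            hm = 'H'
        else:
            if len(s) == ways:
                s.popitem(last=False)  # full set: evict the LRU way
            s[tag] = None
            hm = 'M'
        trace.append((addr, index, tag, hm))
    return trace

# Addresses 0, 8, 16 all map to set 0. The hit on 0 makes 8 the LRU
# entry, so 16 evicts 8 and the final reference to 8 misses again.
for row in simulate_two_way([0, 8, 0, 16, 8]):
    print("addr=%2d set=%d tag=%d %s" % row)
```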

5. Using the series of references given in Problem 1 and assuming a fully associative cache (replacement scheme: least recently used) with four-word blocks and a total size of 16 words that is initially empty, label each reference in the table below as a hit or a miss and show the final contents of the cache (make sure to represent the cache structure correctly; show every line of the cache and include all special bits as well as the data).

Address Block offset Tag Set Entry Hit/Miss 1 1 1 8 19 7

Solution: Fully associative cache (one set) with a total of 16 words and four words per block, i.e., four blocks; LRU replacement policy. block offset = address mod 4; the tag is the entire block address (tag = address div 4), so the number of tag bits equals the number of block-address bits. There is no index: every block must be searched for a match, and each block must store the tag bits alongside the data bits.

Address Block offset Tag Set Entry Hit/Miss 0 0 M 0 0 H 1 M 1 0 M 1 1 5 M 1 1 0 M 0 1 1 M 8 0 1 M 19 M 0 M 0 1 M 5 M 0 1 M 7 0 M 1 H 1 M

Set entry Valid Tag Data 0 1 0 1 5 7 1 1 8 9 10 1 5 0 1 1 1 5 7

The Data column contains the word addresses, in memory, of the contents stored as data in the cache.
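Since there is only one set, the fully associative case is the set-associative bookkeeping with a single LRU list covering all four blocks. The sketch below uses a hypothetical trace (not the series from Problem 1) in which a hit refreshes a block's LRU position and the fifth distinct block forces an eviction.

```python
# Sketch of the Problem 5 bookkeeping: a fully associative cache of
# 16 words (four four-word blocks) with LRU replacement. The address
# trace is a hypothetical example, not the problem's trace.
from collections import OrderedDict

def simulate_fully_assoc(addresses, block_words=4, num_blocks=4):
    """Return (address, block offset, tag, 'H' or 'M') per reference."""
    cache = OrderedDict()  # tag -> None, kept in LRU order (oldest first)
    trace = []
    for addr in addresses:
        tag = addr // block_words  # the tag is the whole block address
        if tag in cache:
            cache.move_to_end(tag)     # hit: mark most recently used
            hm = 'H'
        else:
            if len(cache) == num_blocks:
                cache.popitem(last=False)  # cache full: evict the LRU block
            cache[tag] = None
            hm = 'M'
        trace.append((addr, addr % block_words, tag, hm))
    return trace

# The hit on word 1 refreshes block 0; the fifth distinct block (tag 5)
# then evicts the LRU block, which by that point is tag 1 (words 4-7).
for row in simulate_fully_assoc([0, 5, 13, 1, 17, 21, 2]):
    print("addr=%2d offset=%d tag=%d %s" % row)
```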