Computer Science 324
Computer Architecture
Mount Holyoke College
Fall 2007

Topic Notes: Building Memory

We'll next look at how we can use the devices we've been looking at to construct memory.

Tristate Buffer

We've seen and used inverters. We also mentioned when we first saw inverters that we can put two together in series and have a buffer. We can draw one of these as a triangular symbol but with no circle at the end.

What use is that? It will slow down but also boost the current in the circuit. Recall that large fan-out can cause trouble with not enough current to drive subsequent gates. A buffer can take care of that.

We think about a wire or an output as having values of 0 or 1. It can be easy to think about the value 0 as the absence of a value, but that's not true. Connected to ground or to a low gate output is different than not connected. You can think of the wire as transmitting some value. A 0 value means the wire is like a pipe spilling out 0s, and a 1 value is like a pipe spilling out 1s. Or like a pipe with hot water flowing vs. a pipe with cold water flowing. But a disconnected wire, or a pipe with no water source, isn't spilling out anything. So even in our circuits so far, there could be a wire or a gate not connected to anything.

Now, we'll look at a new device called a tristate buffer.

[Figure: a tristate buffer with input "in", output "out", and a "control" line.]

When the control line is high, we have a wire (with some amplifying properties) like a buffer. When control is low, it looks like a broken wire: physically disconnected.
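
To make the two behaviors concrete, here is a minimal behavioral sketch in C (not from the original notes; the signal_t type and the function name are invented for illustration). A wire is modeled as carrying a 0, a 1, or nothing at all, and the tristate buffer either passes its input through or disconnects:

    #include <stdio.h>

    /* A wire can carry a 0, a 1, or be disconnected (high impedance, "Z"). */
    typedef enum { LOW = 0, HIGH = 1, HI_Z = 2 } signal_t;

    /* Tristate buffer: when control is high, the output follows the input
       (a plain buffer); when control is low, the output is disconnected. */
    signal_t tristate(signal_t in, signal_t control) {
        return (control == HIGH) ? in : HI_Z;
    }

    int main(void) {
        printf("control=1, in=1 -> %d\n", tristate(HIGH, HIGH)); /* 1: drives a 1 */
        printf("control=0, in=1 -> %d\n", tristate(HIGH, LOW));  /* 2: HI_Z, disconnected */
        return 0;
    }

The third value, HI_Z ("high impedance"), is exactly the "disconnected pipe" case described above; an ordinary gate output always drives a 0 or a 1.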

Don't worry how it's built for now.

What use is a device like this? Well, if we have a point in a circuit where we know that exactly one of a number of outputs will be high, as in a MUX:

[Figure: a MUX built from AND gates whose outputs feed an OR gate.]

We couldn't just tie the outputs of the ANDs together, because we can't feed back the output of some gates through the output lines of others. We had to feed them through an OR gate to the output.

If we instead put a tristate buffer on each of those outputs, then since only one of the tristate buffers will be connected, we can safely do what looks like a wired OR.

Driving the Bus

We can use this idea to allow one of multiple inputs to be placed onto a wire for transmission somewhere else. Such a wire is called a bus. A bus must have exactly 0 or 1 drivers.

This will be useful when we want to connect up our CPUs to a bank of memory. In that case, tristate buffers can be used to determine whether the bus is being used to send data from the CPU to memory, from the memory to the CPU, or neither. Both can't be done at once. If we activated multiple tristate buffers and had multiple bus drivers, we could make some toasters (sending the signal backwards through our gates).
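
Here is a small C sketch of the rule that a bus must have exactly 0 or 1 drivers (again invented for illustration, not part of the notes): it resolves the value on a shared wire from the outputs of several tristate buffers and flags contention when more than one of them is driving.

    #include <stdio.h>

    typedef enum { LOW = 0, HIGH = 1, HI_Z = 2 } signal_t;

    /* Resolve the value seen on a shared bus wire given the outputs of all
       potential drivers.  A well-behaved bus has zero or one non-HI_Z driver;
       two or more active drivers is contention (the "toaster" case). */
    signal_t resolve_bus(const signal_t drivers[], int n, int *contention) {
        signal_t value = HI_Z;
        *contention = 0;
        for (int i = 0; i < n; i++) {
            if (drivers[i] == HI_Z) continue;
            if (value == HI_Z) value = drivers[i];   /* first active driver wins */
            else *contention = 1;                    /* a second active driver: error */
        }
        return value;
    }

    int main(void) {
        signal_t ok[]  = { HI_Z, HIGH, HI_Z };   /* exactly one driver: fine */
        signal_t bad[] = { LOW,  HIGH, HI_Z };   /* two drivers: contention  */
        int c;
        signal_t v = resolve_bus(ok, 3, &c);
        printf("ok bus  -> value %d, contention %d\n", v, c);
        v = resolve_bus(bad, 3, &c);
        printf("bad bus -> value %d, contention %d\n", v, c);
        return 0;
    }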

Building Memory Devices

Suppose we wish to build a device that holds 4 bits of memory.

[Figure: a 4x1 memory device with a data line D, log n address lines A, an R/WRT line, and a chip select CS.]

What does all this mean?

1. 4x1: 4 is the number of units of memory, 1 is the addressability, the smallest number of bits we can address.

2. D is a bidirectional data bus: when we want to store a value in the memory, we put that value on D; when we want to retrieve a value from memory, its value is put on D. D consists of a number of wires equal to the addressability of the memory.

3. A is the address lines: we need to be able to select from among all n units of memory, so we need log2(n) address lines. A specifies which bits are being read from or written to the data bus.

4. R/WRT selects whether the chip should be reading or writing: who's driving the bus?

5. CS is chip select: are we doing anything with this chip right now?

How can we build this out of components we have seen? We use 4 D-type flip-flops and connect them up to our inputs, as appropriate.

[Figure: the 4x1 memory built from a 2-to-4 decoder (outputs 00, 01, 10, 11), four D flip-flops (each with a CLK input and Q output), AND gates, and tristate buffers, connected to A, D, CS, and R/WRT.]

How it works:

1. The correct address line is set high by the decoder. This will disable 3 of the 4 CLK inputs and 3 of the 4 AND gates that mask the Q outputs of the flip-flops.

2. The data line D is fed as input, unconditionally, to all flip-flops.

3. When CS (chip select) is low, all 4 input AND gates are low, inhibiting any CLK input. Also, the AND gate at the output is low, disconnecting the tristate buffer.

4. When R/WRT is high, an input to all 4 input AND gates is low, so CLK is inhibited.

5. When R/WRT is low, the AND gate controlling the tristate buffer is low, meaning the output is not being sent to D.

6. When both CS and R/WRT are high, the output being sampled from one of the flip-flops is being passed through to D. One of the tristate buffers is driving the bus.

7. When CS is high and R/WRT is low, we assume someone else is driving the bus and is putting the values on the bus that we wish to store in one of our flip-flops.
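
The following is a behavioral (not gate-level) C sketch of one cycle of this 4x1 device, assuming, as in the list above, that R/WRT high means read and low means write; the struct and function names are made up for illustration.

    #include <stdio.h>
    #include <stdbool.h>

    /* Behavioral model of the 4x1 memory device: 4 one-bit cells (the four
       D flip-flops), 2 address bits, a chip select, and R/WRT (high = read,
       low = write).  Returns the bit driven onto D on a read; on a write or
       when the chip is not selected, the device does not drive D (modeled
       here by returning -1 for "not driving"). */
    typedef struct { bool cell[4]; } mem4x1_t;

    int mem4x1_cycle(mem4x1_t *m, unsigned addr, bool d_in, bool cs, bool read) {
        if (!cs) return -1;             /* chip not selected: clocks and output inhibited */
        addr &= 0x3;                    /* 2 address bits select one of the 4 flip-flops  */
        if (read) return m->cell[addr]; /* read: the selected Q drives the data bus       */
        m->cell[addr] = d_in;           /* write: the selected flip-flop latches D        */
        return -1;                      /* writing: someone else is driving the bus       */
    }

    int main(void) {
        mem4x1_t m = { { false, false, false, false } };
        mem4x1_cycle(&m, 2, true, true, false);                             /* write 1 to address 2 */
        printf("read addr 2 -> %d\n", mem4x1_cycle(&m, 2, false, true, true));  /* 1 */
        printf("read addr 0 -> %d\n", mem4x1_cycle(&m, 0, false, true, true));  /* 0 */
        return 0;
    }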

We have a 4-bit memory!

4x4 Memory

We normally don't think about storing our memory just one bit at a time. How about building a device that can store 4 4-bit values?

[Figure: a 4x4 memory device with 4 data lines, 2 address lines, R/WRT, and CS.]

Here, we have the same 2 address lines, an R/WRT line and a chip select, but instead of a single data line, we have 4 data lines.

We will build our 4x4 device out of 4 4x1 devices:

[Figure: four 4x1 devices sharing the 2 address lines, R/WRT, and CS; data lines 0-3 each connect to one 4x1 device.]

Note that the data bus is actually 4 wires, and we only connect one of the 4 to each of the 4x1s.

4x8 Memory

So far, we've gone from the D-type flip-flop (which we can think of as a 1x1 memory device) to a 4x1 by adding 2 address lines and using a decoder to have those address lines activate one of 4 1x1 devices. Then we went from 4x1 to 4x4 by having 4 data lines, each of which goes to one of the 4x1 devices.
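
As a behavioral sketch of that arrangement (again invented, not from the notes), a 4x4 memory can be modeled as four 1-bit planes that share the address and control lines, with data line i wired only to plane i:

    #include <stdio.h>

    /* Behavioral sketch of a 4x4 memory as four 1-bit-wide planes that share
       the address, chip select, and R/WRT lines; data line D[i] connects only
       to plane i, mirroring the wiring described above. */
    typedef struct { unsigned char plane[4][4]; } mem4x4_t;   /* plane[bit][addr] */

    unsigned read4(mem4x4_t *m, unsigned addr) {
        unsigned value = 0;
        for (int bit = 0; bit < 4; bit++)             /* each plane drives one D line */
            value |= (unsigned)m->plane[bit][addr & 3] << bit;
        return value;
    }

    void write4(mem4x4_t *m, unsigned addr, unsigned value) {
        for (int bit = 0; bit < 4; bit++)             /* each plane latches one D line */
            m->plane[bit][addr & 3] = (value >> bit) & 1;
    }

    int main(void) {
        mem4x4_t m = { { { 0 } } };
        write4(&m, 3, 0xA);                           /* store 1010 at address 3 */
        printf("addr 3 -> 0x%X\n", read4(&m, 3));     /* prints 0xA */
        return 0;
    }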

We could do the same thing for a 4x8 device: 8 data lines, each of which is fed into a separate 4x1 device. But if we already have 4x4 devices, we can expand to 4x8:

[Figure: two 4x4 devices sharing the 2 address lines, R/WRT, and CS; data lines 0-3 go to one device and data lines 4-7 go to the other.]

That's 4 bytes of memory.

1KB Memory

Now let's consider how we might build a kilobyte of memory out of our 4x8 devices. The device we're trying to build will look like this:

[Figure: a 1Kx8 memory device with 8 data lines, 10 address lines, R/WRT, and CS.]

We'll need 256 4x8 devices, each of which looks like this:

[Figure: a 4x8 device with 8 data lines, 2 address lines, R/WRT, and CS.]

We refer to each of these 4x8 devices as a bank of memory. Of our 10 address lines, 8 are used to select among the 256 banks and the other 2 select among the 4 bytes on the bank chosen.

If we think of our address lines as bits A9 A8 ... A0, two main possibilities come to mind for how to organize things:

Option 1: A1 and A0 select the byte within a bank and A9 ... A2 select the bank.

Option 2: A9 and A8 select the byte within a bank and A7 ... A0 select the bank.

In either case, we want all 8 data lines wired to each bank, and we want the R/WRT line wired to each bank. Our two lines to select a byte within a bank are wired to the address lines of the bank. The other 8 address lines are passed through a decoder, and the decoder outputs are ANDed with the CS input of the whole circuit, then wired to the CS inputs of each bank:

[Figure: the 10 address lines split: 2 go to every bank's address inputs, and 8 go to an 8-to-256 decoder whose outputs select among Bank 0 through Bank 255, each a 4x8 device sharing the 8 data lines and R/WRT.]
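
A quick C sketch (hypothetical, for illustration only) of how the two options split a 10-bit address into a bank number and a byte offset within the bank:

    #include <stdio.h>

    /* Splitting a 10-bit address into a bank number and a byte-within-bank
       offset for the two layouts described above (4-byte banks, 256 banks). */
    void option1(unsigned addr, unsigned *bank, unsigned *offset) {
        *bank   = (addr >> 2) & 0xFF;   /* A9..A2 select the bank          */
        *offset = addr & 0x3;           /* A1, A0 select the byte in a bank */
    }

    void option2(unsigned addr, unsigned *bank, unsigned *offset) {
        *bank   = addr & 0xFF;          /* A7..A0 select the bank           */
        *offset = (addr >> 8) & 0x3;    /* A9, A8 select the byte in a bank */
    }

    int main(void) {
        unsigned bank, off;
        option1(517, &bank, &off);
        printf("Option 1: address 517 -> bank %u, byte %u\n", bank, off); /* bank 129, byte 1 */
        option2(517, &bank, &off);
        printf("Option 2: address 517 -> bank %u, byte %u\n", bank, off); /* bank 5, byte 2 */
        return 0;
    }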

Which bytes are stored on which chips with the two layout options?

    Bank   Option 1      Option 2
    0      0-3           0, 256, 512, 768
    1      4-7           1, 257, 513, 769
    2      8-11          2, 258, 514, 770
    ...    ...           ...
    254    1016-1019     254, 510, 766, 1022
    255    1020-1023     255, 511, 767, 1023

Each possible configuration has advantages and disadvantages. Option 1 means we can lose a chip and still have large chunks of contiguous memory available. Option 2 has advantages for chip layout.

SIMM Layout

You have probably seen and maybe used memory chips of the SIMM (or single in-line memory module) type.

Suppose we have 1 KB of memory of 8-bit bytes. This requires 10 bits for full addressing and 8 data lines. But this is not normally how it would be set up. More likely, you would find 64 data lines and only 7 address lines. What does this mean?

[Figure: a SIMM with 7 address lines and 64 data lines.]

It's really an 8-byte-addressable memory. We load/store memory in chunks of 8 bytes at a time, so there are only 128 addressable chunks. When memory is requested, say address 1:

    A = A9 A8 ... A0 = 0000000001

The 7 high bits of the address, A9 ... A3 = 0000000, are sent to the SIMM on the address lines. We get back bytes 0-7, even though we only wanted byte 1. It's up to the CPU, which still has the full address (including A2 A1 A0), to pick out the byte it's interested in.

To access location 517 = 1000000101, we request 8-byte chunk 64, and take byte 5 of those that come back.
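
In C-like terms (a sketch, not from the notes), the chunk number is the address shifted right by 3 and the byte within the chunk is the low 3 bits:

    #include <stdio.h>

    /* The SIMM returns 8-byte chunks: the top 7 address bits (A9..A3) name the
       chunk, and the CPU uses the low 3 bits (A2..A0) to pick the byte it wanted. */
    int main(void) {
        unsigned addr  = 517;           /* 1000000101 in binary     */
        unsigned chunk = addr >> 3;     /* chunk 64 is requested    */
        unsigned byte  = addr & 0x7;    /* byte 5 of the returned 8 */
        printf("address %u -> chunk %u, byte %u within the chunk\n", addr, chunk, byte);
        return 0;
    }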

This memory is organized using Option 2 from our discussion about how to arrange memory among multiple banks.

This may at first seem wasteful, but the memory chip can get all 8 bytes easily, and more wires in and out means we can transfer more data more quickly. Plus, there's a good chance that any memory access will be followed by additional memory accesses to nearby locations (think local variables, an array, sequential program text). This locality is a natural feature of most programs.

All machines today have memory that is addressable in some chunk larger than one byte. The decisions about how we break this down have ripple effects throughout the architecture. We will soon see this in much detail.

Error Detection and Correction

You have probably heard about error-correcting memory. If we want to do this, we need to build in some redundancy.

We can detect a single-bit memory error by adding a parity bit. For example, if we have an 8-bit data value and we want to maintain even parity, we add a 9th bit that makes the total number of bits that are set an even number:

    byte       even parity bit
    00000000   0
    00000001   1
    01110100   0
    11000111   1
    11111111   0

From this, we can check, each time we retrieve a byte, that it has even parity. If not, we know that something is wrong. But that's all it tells us. Since we don't know which of the 9 bits is wrong, we can't fix it.

Some of you may have had computers that crash with a Blue Screen of Death saying that a memory parity error was detected. This is not just a Windows thing - I have had Unix systems crash with a kernel error that a memory parity error was detected.
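
The parity bit is just the XOR of the data bits. A short C sketch (illustrative only, not part of the notes) that computes the even-parity bit for a byte:

    #include <stdio.h>

    /* Even parity bit for an 8-bit value: 1 if the byte has an odd number of
       1 bits (so that the 9 stored bits have an even number of 1s), 0 otherwise. */
    unsigned even_parity(unsigned char b) {
        unsigned p = 0;
        for (int i = 0; i < 8; i++)
            p ^= (b >> i) & 1;          /* XOR of all the bits */
        return p;
    }

    int main(void) {
        printf("%u\n", even_parity(0x74)); /* 01110100 -> 0, as in the table above */
        printf("%u\n", even_parity(0xC7)); /* 11000111 -> 1 */
        return 0;
    }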

To fix it, we need more extra bits. Here's one error correction scheme that can fix a single-bit error, but at the expense of 4 extra bits for each byte of memory (50% overhead). We use 12 bits to represent the 8-bit value. We number them 12-1 and use the ones whose numbers are powers of 2 as parity bits:

    position:  12   11   10   9    8    7    6    5    4    3    2    1
    binary:    1100 1011 1010 1001 1000 0111 0110 0101 0100 0011 0010 0001

We use the parity bits as follows:

1. Position 1 stores the even parity of the odd-numbered bits.

2. Position 2 stores the even parity of bits whose number has the 2's bit set.

3. Position 4 stores the even parity of bits whose number has the 4's bit set.

4. Position 8 stores the even parity of bits whose number has the 8's bit set.

So to store the value 94 = 01011110, we first fill in the data bits (positions 1, 2, 4, and 8 are reserved for the parity bits):

    position:  12   11   10   9    8    7    6    5    4    3    2    1
    binary:    1100 1011 1010 1001 1000 0111 0110 0101 0100 0011 0010 0001
    value:     0    1    0    1    P8   1    1    1    P4   0    P2   P1

Position 1 stores the even parity of the bits at 3, 5, 7, 9, 11. 4 of those are set to 1, so we set that bit to 0.

Position 2 stores the even parity of the bits at 3, 6, 7, 10, 11. 3 of those are set to 1, so we set that bit to 1.

Position 4 stores the even parity of the bits at 5, 6, 7, 12. 3 of those are set to 1, so we set that bit to 1.

Position 8 stores the even parity of the bits at 9, 10, 11, 12. 2 of these are set to 1, so we set that bit to 0.

    position:  12   11   10   9    8    7    6    5    4    3    2    1
    value:     0    1    0    1    0    1    1    1    1    0    1    0

When we retrieve a value from memory, we can make sure it's OK by computing the 4 parity bits and comparing to the stored parity bits. If they all match, we're OK. If there's any mismatch, we know there's an error.
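
Here is a C sketch of the encoding step just described (invented for illustration; the position numbering follows the tables above, with the data bits placed most-significant-first into the non-power-of-2 positions):

    #include <stdio.h>

    /* Hamming-style encoder for the 12-bit scheme above: data bits fill
       positions 12..1 except 1, 2, 4, and 8, which hold even-parity bits.
       Position p's parity bit covers every position whose number has bit p set. */
    void encode(unsigned char value, int code[13]) {   /* code[1..12], MSB at 12 */
        int data_pos[8] = { 12, 11, 10, 9, 7, 6, 5, 3 };
        for (int i = 0; i < 8; i++)                    /* place data bits, MSB first */
            code[data_pos[i]] = (value >> (7 - i)) & 1;
        for (int p = 1; p <= 8; p <<= 1) {             /* p = 1, 2, 4, 8 */
            int parity = 0;
            for (int i = 1; i <= 12; i++)
                if (i != p && (i & p))
                    parity ^= code[i];
            code[p] = parity;
        }
    }

    int main(void) {
        int code[13] = { 0 };
        encode(94, code);                              /* 94 = 01011110 */
        for (int i = 12; i >= 1; i--)                  /* prints 010101111010 */
            printf("%d", code[i]);
        printf("\n");
        return 0;
    }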

Let's introduce an error into our stored value. We'll change the third bit (position 10) to a 1.

    position:  12   11   10   9    8    7    6    5    4    3    2    1
    value:     0    1    1    1    0    1    1    1    1    0    1    0

So we recompute the parity bits on the value we retrieved, and compare to the stored bits:

    bit   computed   stored   match?
    P1    0          0        yes
    P2    0          1        no
    P4    1          1        yes
    P8    1          0        no

We can quickly detect the mismatches in P2 and P8 (hey! XOR!).

This means that the bit at position 1010, that is, position 10, has an error and must be flipped. Convenient!

Think about how this might be implemented: the memory itself doesn't even need to know. We can drop in some XORs to generate the parity bits to be stored in memory, XOR again to regenerate the parity bits for retrieved values, and use still more XORs to do the correction.

This works even if a parity bit is the one that has an error. It just ends up fixing the parity bit. It does not work for 2-bit errors.
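
A matching C sketch of the check-and-correct step (again illustrative, using the same array layout as the encoder above): recomputing each parity over all the positions it covers, including the stored parity bit, gives 0 when that check passes and 1 when it fails, and the failing checks assemble directly into the position of the bad bit.

    #include <stdio.h>

    /* Check a retrieved 12-bit codeword (code[1..12]) by recomputing the four
       parity checks; the pattern of failing checks forms a syndrome whose value
       is the position of a single flipped bit (0 means no error detected). */
    int check_and_correct(int code[13]) {
        int syndrome = 0;
        for (int p = 1; p <= 8; p <<= 1) {       /* p = 1, 2, 4, 8 */
            int parity = 0;
            for (int i = 1; i <= 12; i++)
                if (i & p)
                    parity ^= code[i];           /* includes the stored parity bit */
            if (parity)                          /* mismatch: this parity check fails */
                syndrome |= p;
        }
        if (syndrome)
            code[syndrome] ^= 1;                 /* flip the bad bit (data or parity) */
        return syndrome;
    }

    int main(void) {
        /* code[0] is unused; code[1]..code[12] hold the stored word for 94
           with the bit at position 10 flipped, as in the example above.   */
        int code[13] = { 0,  0,1,0,1,  1,1,1,0,  1,1,1,0 };
        int pos = check_and_correct(code);
        printf("error at position %d (binary 1010)\n", pos);  /* prints 10 */
        for (int i = 12; i >= 1; i--)                         /* corrected: 010101111010 */
            printf("%d", code[i]);
        printf("\n");
        return 0;
    }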