Intro Computer Organization

Clocks 1

A clock is a free-running signal with a cycle time. A clock may be either high or low, and alternates between the two states. The length of time the clock is high before changing states is its high duration; the low duration is defined similarly. The cycle time of a clock is the sum of its high duration and its low duration. The frequency of the clock is the reciprocal of the cycle time.

[Figure: clock waveform, labeling a rising edge, a falling edge, the high and low levels, and the time axis]

State Elements 2

A state element is a circuit component that is capable of storing a value. At the moment, we are interested primarily in state elements that store logical state information about the system, rather than data storage. A state element may be either unclocked or clocked. Clocked state elements are used in synchronous logic.
- When should an element that contains state be updated?
- Edge-triggered clocking means that the state changes on either the rising or the falling edge.
- The clock edge acts as a sampling signal that triggers the update of a state element.

A signal that is to be written into a state element must be stable; i.e., it must be unchanging. If a function is to be computed in one clock cycle, then the clock period must be long enough to allow all the relevant signals to become stable.
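The cycle time and frequency relationships can be checked with a little arithmetic. A minimal sketch in Python; the 4 ns and 6 ns durations are made-up example values:

```python
def clock_stats(high_ns, low_ns):
    """Derive cycle time and frequency from a clock's high and low durations."""
    cycle_ns = high_ns + low_ns   # cycle time = high duration + low duration
    freq_ghz = 1.0 / cycle_ns     # frequency is the reciprocal of the cycle time
    return cycle_ns, freq_ghz

# Example: a clock that is high for 4 ns and low for 6 ns
cycle, freq = clock_stats(4.0, 6.0)
print(cycle)   # 10.0 ns cycle time
print(freq)    # 0.1 GHz, i.e. 100 MHz
```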

An Unclocked State Element 3

The set-reset (SR) latch:
- output depends on the present inputs and also on past inputs

Latches and Flip-flops 4

Output is equal to the value stored inside the element. Assume clocked state elements are used:
- latch: state changes whenever the inputs change and the clock is asserted
- flip-flop: state changes only on a clock edge

(Here "asserted" means "logically true", which could mean electrically low.)

A clocking methodology defines when signals can be read and written; we wouldn't want to read a signal at the same time it was being written.

Clocked D Latch 5

Two inputs:
- the data value to be stored (D)
- the clock signal (C) indicating when to read and store D

Two outputs:
- the value of the internal state (Q) and its complement

Clocked D Flip-flop 6

Here's a schematic for a D flip-flop with a falling-edge trigger:

[Figure: two clocked D latches in series, both driven by the clock signal C]

Here's a timing diagram illustrating the behavior of the circuit above:

[Figure: timing diagram for D, C, and Q]
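The latch-versus-flip-flop distinction can be illustrated in simulation. A minimal sketch of a falling-edge-triggered D flip-flop (the class and method names are invented for illustration):

```python
class DFlipFlop:
    """Falling-edge-triggered D flip-flop: Q updates only when C goes 1 -> 0."""
    def __init__(self):
        self.q = 0          # internal state Q; its complement is 1 - q
        self._prev_c = 0    # previous clock level, used to detect edges

    def tick(self, d, c):
        if self._prev_c == 1 and c == 0:   # falling edge: sample D
            self.q = d
        self._prev_c = c
        return self.q

ff = DFlipFlop()
ff.tick(1, 1)    # clock high, D = 1: no edge, Q stays 0
print(ff.q)      # 0
ff.tick(1, 0)    # falling edge: Q captures D = 1
print(ff.q)      # 1
ff.tick(0, 0)    # D changes while the clock is low: Q is unaffected
print(ff.q)      # 1
```

A latch, by contrast, would have copied D to Q at any time the clock was asserted, not just at the edge.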

Our Implementation 7

An edge-triggered methodology. Typical execution:
- read the contents of some state elements,
- send the values through some combinational logic,
- write the results to one or more state elements.

[Figure: State element 1 -> Combinational logic -> State element 2, with one clock cycle spanning the path]

4-Bit Register 8

Built using D flip-flops:

[Figure: four flip-flops sharing a common clock input]

The clock input controls when the input is "written" to the individual flip-flops. However, the design above isn't quite what we want. What's wrong with this? How can we fix it?

Register File 9

A register file is a collection of k registers (a sequential logic block) that can be read and written by specifying a register number that determines which register is to be accessed. The interface should minimally include:
- an n-bit input to import data for writing (a write port)
- an n-bit output to export read data (a read port)
- a log2(k)-bit input to specify the register number
- control bit(s) to enable/disable read/write operations
- a control bit to clear all the registers, asynchronously
- a clock signal

Some designs may provide multiple read or write ports, and additional features. For MIPS, it is convenient to have two read ports and one write port. Why?

A File of 4-Bit Registers 10

Aggregating a collection of 4-bit registers, and providing the appropriate register selection and data input/output interface:

[Figure: 4-bit registers, a decoder to select the write register, and a multiplexor to select the read register]
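The interface just described can be modeled in software. A minimal sketch with two read ports and one write port, as is convenient for MIPS (the class and parameter names are invented for illustration):

```python
class RegisterFile:
    """k registers of n bits each; two read ports, one write port, async clear."""
    def __init__(self, k=32, n=32):
        self.n_mask = (1 << n) - 1   # keeps stored values to n bits
        self.regs = [0] * k

    def read(self, ra, rb):
        # two read ports: e.g. the two source operands of a MIPS instruction
        return self.regs[ra], self.regs[rb]

    def write(self, rw, value, write_enable=True):
        if write_enable:             # a control bit gates the write
            self.regs[rw] = value & self.n_mask

    def clear(self):
        # asynchronous clear of all the registers
        self.regs = [0] * len(self.regs)

rf = RegisterFile()
rf.write(5, 0xDEADBEEF)
a, b = rf.read(5, 0)
print(hex(a), b)   # 0xdeadbeef 0
```

Two read ports are convenient for MIPS because a typical R-type instruction reads two source registers and writes one destination register in the same cycle.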

Random Access 11

Random access memory (RAM) is an array of memory elements.

Static RAM (SRAM):
- bits are stored as flip-flops (typically 4 or more transistors per bit)
- hence "static", in the sense that the value is preserved as long as power is supplied
- somewhat complex circuit per bit, so not terribly dense on chip
- typically used for cache memory

Dynamic RAM (DRAM):
- bits are stored as a charge in a capacitor
- hence "dynamic", since periodic refreshes are needed to maintain stored values
- a single transistor is needed per data bit, so very dense in comparison to SRAM
- much cheaper per bit than SRAM
- much slower access time than SRAM
- typically used for main memory

Basic MIPS Implementation 12

Here's an updated view of the basic architecture needed to implement a subset of the MIPS environment:

[Figure: basic architecture, showing RAM modules and the register file]

SRAMs 13

The configuration is specified by the number of addressable locations (the number of rows, or height) and the number of bits stored in each location (the width).

Consider a 4M x 8 SRAM:
- 4M locations, each storing 8 bits
- 22 address bits to specify the location for a read/write
- an 8-bit data output line and an 8-bit data input line

[Figure: SRAM chip interface, showing a chip-enable signal, the address input, read and write enables, and the data input and output paths]

SRAM Performance 14

Read access time:
- the delay from the time the output enable is true and the address lines are valid until the time the data is on the output lines
- typical read access times might be from 2-4 ns to 8-20 ns, or considerably greater for low-power versions developed for consumer products

Write access time:
- there are set-up and hold-time requirements for both the address and data lines
- the write-enable signal is actually a pulse of some minimum width, rather than a clock edge
- the write access time includes all of these
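The relationship between capacity and address width above is just a base-2 logarithm; a quick check in Python:

```python
import math

def sram_geometry(locations, width_bits):
    """Address bits and total capacity for a 'locations x width' SRAM."""
    addr_bits = int(math.log2(locations))   # bits needed to select one location
    total_bits = locations * width_bits
    return addr_bits, total_bits

# 4M x 8 SRAM: 4M = 2^22 locations, each storing 8 bits
addr, total = sram_geometry(4 * 2**20, 8)
print(addr)                  # 22 address bits
print(total // (8 * 2**20))  # 4 MB of storage
```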

SRAM Implementation 15

Although an SRAM is conceptually similar to a register file:
- it is impractical to use the same design, due to the unreasonable size of the multiplexors that would be needed
- the design is based instead on three-state buffers

[Figure: a multiplexor built from 3-state buffer elements, each with a data signal input and an output-enable input]

If output enable is 1, then the buffer's output equals its input data signal. If output enable is 0, then the buffer's output is in a high-impedance state that effectively disables its effect on the bit line to which it is connected.

SRAM Implementation 16

At right is a conceptual representation of a 4x2 SRAM unit built from latches that incorporate 3-state buffers. For simplicity, the chip select and output enable signals have been omitted. Although this eliminates the need for a multiplexor, the decoder that IS required would become excessively large if we scaled this up to a useful capacity.

SRAM Implementation 17

A 4M x 8 SRAM as an array of 4K x 1024 subarrays:
- a decoder generates the row addresses for the 4096 rows of each of the 8 subarrays
- each subarray outputs a row of 1024 bits
- a bank of multiplexors, each with a 10-bit select input, chooses one bit from each of the subarrays

This requires neither a huge multiplexor nor a huge decoder. A practical version might use a larger number of smaller subarrays. How would that affect the dimensions of the decoder and multiplexors that would be needed?

DRAM Implementation 18

Each bit is stored as a charge on a capacitor. Periodic refreshes are necessary and typically consume 1-2% of the cycles of a DRAM module.

Access uses a 2-level decoding scheme: a row access selects and transfers a row of values to a row of latches; a column access then selects the desired data from the latches. Refreshing uses the column latches.

DRAM access times typically range from 45-65 ns, about 5-10 times slower than typical SRAM.
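The 2-level decoding idea can be checked numerically. A sketch that splits a 22-bit address into row and column parts, matching the 4096-row by 1024-column arrangement described above (the function name and split are illustrative):

```python
def split_address(addr, row_bits=12, col_bits=10):
    """2-level decode: a 12-bit row select (4096 rows) plus a 10-bit column
    select (one bit out of each subarray's 1024-bit row)."""
    assert 0 <= addr < (1 << (row_bits + col_bits))
    row = addr >> col_bits               # high bits drive the row decoder
    col = addr & ((1 << col_bits) - 1)   # low bits drive the multiplexors
    return row, col

row, col = split_address(0x3FFFFF)   # the highest 22-bit address
print(row, col)   # 4095 1023
```

Using more, smaller subarrays would shift bits from the row decoder to the column multiplexors, shrinking the decoder at the cost of wider selects.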

Error Detection 19

Error-detecting codes enable the detection of errors in data, but do not determine the precise location of the error.
- store a few extra state bits per data word that indicate a necessary condition for the data to be correct
- if the data does not conform to the state bits, then something is wrong
- e.g., represent the correct parity (number of 1's) of the data word
- 1-bit parity codes fail if 2 bits are wrong

Example: for the data word

    1011 1101 0001 0000 1101 0000 1111 0010

the parity bit is 1 (odd parity: the data should have an odd number of 1's).

A 1-bit parity code is a distance-2 code, in the sense that at least 2 bits (among the data and parity bits) must be changed to produce an incorrect but legal pattern. In other words, any two legal patterns are separated by a distance of at least 2.

Error Correction 20

Error-correcting codes provide sufficient information to locate and correct some data errors.
- they must use more bits for the state representation, e.g., 6 bits for every 32-bit data word
- they may indicate the existence of errors when up to k bits are wrong
- they may indicate how to correct the errors when up to l bits are wrong, where l < k
- with c code bits and n data bits, we must have 2^c >= n + c + 1

We must have at least a distance-3 code to accomplish this. Given such a code, if we have a data word + error code sequence X that has 1 incorrect bit, then there will be a unique valid data word + error code sequence Y that is a distance of 1 from X, and we can correct the error by replacing X with Y.
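Both the parity check and the 2^c >= n + c + 1 bound are easy to compute; a small sketch:

```python
def parity(bits):
    """Number of 1's modulo 2 (0 = even number of 1's, 1 = odd)."""
    return sum(bits) % 2

def min_check_bits(n):
    """Smallest c with 2^c >= n + c + 1, the bound for correcting a
    single-bit error in n data bits."""
    c = 1
    while 2**c < n + c + 1:
        c += 1
    return c

word = [int(b) for b in "10111101000100001101000011110010"]
print(parity(word))        # 1: this data word has an odd number of 1's
print(min_check_bits(32))  # 6 check bits for a 32-bit word, as stated above
```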

Error Correction 21

A distance-3 code is also known as a single-error correcting (SEC) code. If X has 2 incorrect bits, then we will replace X with an incorrect (but valid) sequence. We cannot both detect 2-bit errors and correct 1-bit errors with a distance-3 code. But hopefully flipped bits will be a rare occurrence, so that sequences with two or more flipped bits have a negligible probability.

Hamming Codes 22

Richard Hamming described a method for generating minimum-length error-correcting codes. Here is the (7,4) Hamming code for 4-bit data words:

    Data bits  Check bits      Data bits  Check bits
    0000       000             1000       111
    0001       011             1001       100
    0010       101             1010       010
    0011       110             1011       001
    0100       110             1100       001
    0101       101             1101       010
    0110       011             1110       100
    0111       000             1111       111

Say we had the data word 0100 and check bits 011. The two valid data words that match that check-bit pattern would be 0001 and 0110. The latter would correspond to a single-bit error in the data word, so we would choose that as the correction. Note that if the error were in the check bits instead, we'd have to assume the data word was correct (or else we have an uncorrectable 2-bit error or worse). In that case, the received check bits would have to be a distance of 1 bit from 110 (the correct check bits for 0100), which they are not.
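The check bits in the table can be reproduced programmatically. A sketch, assuming each check bit is the XOR (parity) of a particular subset of the data bits, with the subsets inferred from the table entries above:

```python
def check_bits(d):
    """(7,4) Hamming check bits for data bits d = [d1, d2, d3, d4].
    Subsets inferred from the table: c1 covers d1,d2,d3; c2 covers d1,d2,d4;
    c3 covers d1,d3,d4."""
    c1 = d[0] ^ d[1] ^ d[2]
    c2 = d[0] ^ d[1] ^ d[3]
    c3 = d[0] ^ d[2] ^ d[3]
    return [c1, c2, c3]

# Reproduce a few table entries
print(check_bits([0, 1, 0, 0]))  # [1, 1, 0]: data 0100 -> check bits 110
print(check_bits([0, 0, 0, 1]))  # [0, 1, 1]: data 0001 -> check bits 011
print(check_bits([0, 1, 1, 0]))  # [0, 1, 1]: data 0110 -> check bits 011
```

Note that 0001 and 0110 share the check bits 011, which is exactly the ambiguity resolved in the worked example above.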

Hamming Code Details 23

Hamming codes use extra parity bits, each reflecting the correct parity for a different subset of the bits of the code word. The parity bits are stored in the positions corresponding to powers of 2 (positions 1, 2, 4, 8, etc.). The encoded data bits are stored in the remaining positions.

The parity bits are defined as follows:
- position 1: check 1 bit, skip 1 bit, check 1 bit, skip 1 bit, ...
- position 2: check 2 bits, skip 2 bits, ...
- position 2^k: check 2^k bits, skip 2^k bits, ...

Consider the data byte 10011010. Expand it to allow room for the parity bits:

    _ _ 1 _ 0 0 1 _ 1 0 1 0

Now compute the parity bits as defined above.

Hamming Code Details 24

We have the expanded sequence: _ _ 1 _ 0 0 1 _ 1 0 1 0

The parity bit in position 1 depends on the parity of the bits in positions 1, 3, 5, 7, 9, 11. Those bits have even parity, so we have:

    0 _ 1 _ 0 0 1 _ 1 0 1 0

The parity bit in position 2 depends on the bits in positions 2, 3, 6, 7, 10, 11. Those bits have odd parity, so we have:

    0 1 1 _ 0 0 1 _ 1 0 1 0

Continuing with positions 4 and 8, we obtain the encoded string:

    0 1 1 1 0 0 1 0 1 0 1 0
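The expand-and-fill procedure above can be written directly. A sketch using even parity over each power-of-2 group, as the worked example implies (function name is illustrative):

```python
def hamming_encode(data_bits):
    """Place data bits around the power-of-2 positions, then fill each parity
    position so its group has even parity. Positions are 1-based, as in the text."""
    code = {}   # position -> bit
    pos, i = 1, 0
    while i < len(data_bits):
        if pos & (pos - 1) == 0:          # power of 2: reserve for a parity bit
            code[pos] = 0
        else:
            code[pos] = data_bits[i]
            i += 1
        pos += 1
    for p in [q for q in code if q & (q - 1) == 0]:
        # parity bit p covers every position whose binary index has bit p set
        code[p] = sum(code[q] for q in code if q & p and q != p) % 2
    return [code[q] for q in sorted(code)]

encoded = hamming_encode([1, 0, 0, 1, 1, 0, 1, 0])
print(encoded)   # [0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0], matching the text
```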

Hamming Code Correction 25

Suppose we receive the string:

    0 1 1 1 0 0 1 0 1 1 1 0

How can we determine whether it's correct? Check the parity bits and see which, if any, are incorrect. If they are all correct, we must assume the string is correct. Of course, it might contain so many errors that we can't even detect their occurrence, but in that case we have a communication channel that's so noisy that we cannot use it reliably.

Checking the parity bits above:
- position 1 (covering positions 1, 3, 5, 7, 9, 11): OK
- position 2 (covering positions 2, 3, 6, 7, 10, 11): WRONG
- position 4 (covering positions 4, 5, 6, 7, 12): OK
- position 8 (covering positions 8, 9, 10, 11, 12): WRONG

So what does that tell us, aside from the fact that the string is incorrect? Well, if we assume there is no more than one incorrect bit, then because the incorrect parity bits are in positions 2 and 8, the incorrect bit must be in position 2 + 8 = 10.
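The position-2-plus-position-8 reasoning is exactly a syndrome computation: the positions of the failing parity checks sum to the index of the flipped bit. A sketch, assuming even parity as in the encoding example:

```python
def hamming_syndrome(code_bits):
    """Return the 1-based position of a single flipped bit, or 0 if every
    parity check passes. Even parity over each power-of-2 group is assumed."""
    syndrome = 0
    p = 1
    while p <= len(code_bits):
        # group p contains every position whose binary index has bit p set
        group = sum(code_bits[q - 1] for q in range(1, len(code_bits) + 1) if q & p)
        if group % 2 == 1:        # this parity check fails
            syndrome += p         # failing check positions sum to the error position
        p <<= 1
    return syndrome

received = [0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]   # the string from the text
err = hamming_syndrome(received)
print(err)                 # 10: the bit in position 10 is wrong
received[err - 1] ^= 1     # flip it back
print(received)            # [0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0]
```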