Address Accessible Memories. A.R. Hurson Department of Computer Science Missouri University of Science & Technology


Memory System Memory Requirements for a Computer: An internal storage medium to store intermediate as well as final results, an external storage medium to store input information, and an external storage medium to store permanent results for future use.

Memory System Different parameters can be used to classify memory systems. In the following we will use the access mode, defined as the way the information stored in the memory is accessed.

Memory System Access Mode Address Accessible Memory: where information is accessed by its address in the memory space. Content Addressable Memory: where information is accessed by its contents (or partial contents).

Memory System Access Mode Within the scope of address accessible memory we can distinguish several subclasses. Random Access Memory (RAM): access time is independent of the location of the information. Sequential Access Memory (SAM): access time is a function of the location of the information. Direct Access Memory (DAM): access time is partially independent of and partially dependent on the location of the information.

Memory System Access Mode Even within each subclass we can distinguish several sub-subclasses. For example, within the scope of Direct Access Memory we can recognize different groups: movable head disk, fixed head disk, and parallel disk.

Memory System Movable head disk: Each surface has just one read/write head. To initiate a read or write, the read/write head must first be positioned on the right track (seek time). Seeking is a mechanical movement and hence relatively very slow and time consuming.

Memory System Fixed head disk: Each track has its own read/write head. This eliminates the seek time. However, this performance improvement comes at the expense of cost.

Memory System Parallel disk: To respond to the growth in performance and capacity of semiconductor technology, secondary storage technology introduced RAID (Redundant Array of Inexpensive Disks). In short, RAID is a large array of small independent disks acting as a single high-performance logical disk.

Memory System RAID RAID increases performance and reliability through two techniques: data striping and redundancy.

Memory System RAID The concept of data striping (distributing data transparently over multiple disks) is used to allow parallel access to the data and hence to improve disk performance. In data striping, the data set is partitioned into equal-size segments, and the segments are distributed over multiple disks. The size of a segment is called the striping unit.
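As a small sketch (function name hypothetical), round-robin striping with a one-block striping unit maps a logical block number to a disk and an offset on that disk:

```python
def locate(block: int, num_disks: int) -> tuple[int, int]:
    """Map a logical block to (disk index, block offset on that disk)
    under round-robin striping with a one-block striping unit."""
    return block % num_disks, block // num_disks

# Logical blocks 0..7 striped over 4 disks: blocks 0-3 land on
# disks 0-3 at offset 0, blocks 4-7 on disks 0-3 at offset 1.
```

A read of consecutive logical blocks therefore touches distinct disks and can proceed in parallel.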

Memory System RAID Redundant information allows reconstruction of data if a disk fails. There are two choices for storing the redundant data: store it on a small number of separate disks (check disks), or distribute it uniformly over all disks. The redundant information can be an exact duplicate of the data, or we can use a parity scheme: additional information that can be used to recover from the failure of any one disk in the array.
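A minimal sketch of the parity idea (function names hypothetical): the parity block is the bitwise XOR of the data blocks, so any single lost block is the XOR of the parity with the surviving blocks:

```python
from functools import reduce

def parity(blocks):
    """Bitwise XOR of equal-length data blocks -> parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def reconstruct(survivors, parity_block):
    """Recover the single lost block from the survivors and the parity."""
    return parity(survivors + [parity_block])

stripe = [b"\x0f\x10", b"\xf0\x01", b"\xaa\xaa"]  # one stripe across 3 data disks
p = parity(stripe)                                # stored on the check disk
# Disk 1 fails; its block is recoverable from the rest of the stripe:
assert reconstruct([stripe[0], stripe[2]], p) == stripe[1]
```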

Memory System RAID Level 0: Striping without redundancy. Offers the best write performance, since no redundant data has to be updated. Offers the highest space utilization. Does not offer the best read performance.

Memory System RAID Level 0 (figure): non-redundant striping over the disk array.

Memory System RAID Level 1: Mirrored. Two identical copies; each disk has a mirror image. It is the most expensive solution, and space utilization is the lowest. Parallel reads are allowed. A write involves two disks; in some cases this will be done in sequence. The maximum transfer rate is equal to the transfer rate of one disk (no striping).

Memory System RAID Level 1 (figure): original disks and their mirrored copies.

Memory System RAID Level 0 + 1: Striping and Mirroring. Parallel reads are allowed. Space utilization is the same as level 1. A write involves two disks, and the cost of a write is the same as in level 1. The maximum transfer rate is equal to the aggregate bandwidth of the disks.

Memory System RAID Level 2: Error Correcting Codes. The striping unit is a single bit. Hamming coding is used as the redundancy scheme. Space utilization increases as the number of data disks increases. The maximum transfer rate is equal to the aggregate bandwidth of the disks. Reads are very efficient for large requests but bad for small requests of the size of an individual block.

Memory System RAID Level 2 (figure): data disks plus check disks holding the error-correcting-code information.

Memory System RAID Level 3: Bit-Interleaved Parity. The striping unit is a single bit. Unlike level 2, the check disk is just a single parity disk, and hence it offers higher space utilization than level 2. The write protocol is similar to level 2 (a read-modify-write cycle). As in level 2, it can process only one I/O at a time, since each read and write request involves all disks.

Memory System RAID Level 3 (figure): data disks plus a single bit-interleaved parity disk.

Memory System RAID Level 4: Block-Interleaved Parity. The striping unit is a disk block. Read requests of the size of a single block can be served by just one disk, so parallel reads are possible for small requests, while large requests can utilize the full bandwidth. A write involves the modified block and the check disk. Space utilization increases with the number of data disks.
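The write cost can be made concrete with a sketch of the small-write parity update (function name hypothetical): rather than re-reading the whole stripe, the new parity is the old parity with the old data XORed out and the new data XORed in, so a small write touches only the modified disk and the check disk:

```python
def updated_parity(old_parity: bytes, old_block: bytes, new_block: bytes) -> bytes:
    """Read-modify-write parity update: XOR out the old block, XOR in the new."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_block, new_block))

# Stripe 0x01, 0x02, 0x04 has parity 0x07; rewriting the 0x02 block
# as 0x08 yields the same parity as recomputing over the full stripe.
assert updated_parity(b"\x07", b"\x02", b"\x08") == bytes([0x01 ^ 0x08 ^ 0x04])
```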

Memory System RAID Level 4 (figure): data disks plus a single block-interleaved parity disk.

Memory System RAID Level 5: Block-Interleaved Distributed Parity. Parity blocks are uniformly distributed among all disks, which eliminates the bottleneck at the check disk. Several writes can potentially be done in parallel, and read requests have a higher level of parallelism.
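One simple round-robin placement (a sketch; real arrays use several layout variants, and the function name is hypothetical) rotates the parity disk from stripe to stripe so that no single disk holds all the parity:

```python
def parity_disk(stripe: int, num_disks: int) -> int:
    """Disk holding the parity block of a given stripe (round-robin rotation)."""
    return stripe % num_disks

# Over 5 stripes on 5 disks, each disk carries parity exactly once,
# so parity updates are spread evenly across the array.
assert sorted(parity_disk(s, 5) for s in range(5)) == [0, 1, 2, 3, 4]
```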

Memory System RAID Level 5 (figure): block-interleaved parity distributed over all disks.

Memory System RAID Level 6: P+Q Redundancy. Can tolerate a higher level of failure (up to two simultaneous disk failures). It requires two check disks and, similar to level 5, the redundant blocks are uniformly distributed at the block level over the disks. For small and large read requests, and for large write requests, the performance is similar to level 5. For small write requests, it behaves the same as level 2.

Memory System RAID Level 6 (figure): P+Q redundancy distributed over all disks.

Memory System RAID Summary

Type     Striping      Redundancy    Space  Read                          Write
RAID 0   Bit or block  No            100%   Parallel                      Fastest
RAID 1   No            Mirroring     50%    Several reads                 One write at a time
RAID 10  Bit or block  Mirroring     50%    Same as RAID 0                Same as RAID 1
RAID 2   Single bit    Hamming code  57%    Bad for small # of blocks,    All disks are involved
                                            good for large # of blocks
RAID 3   Single bit    Parity        80%    Same as RAID 2                Same as RAID 2
RAID 4   Block         Parity        80%    Overlapping several reads     No overlapping; one data
                                                                          disk and the parity disk
                                                                          are involved
RAID 5   Block         Parity        80%    Higher than RAID 4            Overlapping several writes
RAID 6   Block         Reed-Solomon  66%    Higher than RAID 5            Same as RAID 5

Memory System RAM Random access memory can also be grouped into different classes: Read Only Memory (ROM), Programmable ROM, Erasable Programmable ROM (EPROM), Electrically Alterable ROM (EAROM), and Flash Memory.

Memory System RAM Read/Write Memory (RWM): Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM, and Double-Data-Rate SDRAM. Memories can further be classified as volatile/non-volatile and destructive/non-destructive read.

Memory System Within the scope of Random Access Memory we are concerned about two major issues. Access gap: the difference between the CPU cycle time and the main memory cycle time. Size gap: the difference between the size of the main memory and the size of the information space.


Memory System Within the scope of the memory system, the goal is to design and build a system with low cost per bit, high speed, and high capacity. In other words, in the design of a memory system we want to match the rate of information access with the processor speed, and to attain adequate performance at a reasonable cost.

Memory System The appearance of a variety of hardware as well as software solutions reflects the fact that, in many cases, the trade-off between cost, speed, and capacity can be made more attractive by combining different hardware systems coupled with special features: the memory hierarchy.

Memory System Access gap The access gap problem was created by advances in technology. In early computers, such as the IBM 704, the CPU and main memory cycle times were identical (12 µsec). The IBM 360/195 had a logic delay of 5 nsec per stage, a CPU cycle time of 54 nsec, and a main memory cycle time of 0.756 µsec. The CDC 7600 had CPU and main memory cycle times of 27.5 nsec and 0.275 µsec, respectively.

Access gap How to reduce the access gap bottleneck. Software solutions: devise algorithmic techniques to reduce the number of accesses to the main memory. Hardware solutions: reduce the access gap itself, through advances in technology, interleaved memory, application of registers, and cache memory.

Access gap Interleaved Memory A memory is n-way interleaved if it is composed of n independent modules and a word at address i is in module number i mod n. This places consecutive words in consecutive memory modules. If the n modules can be operated independently, and the memory bus is time-shared among the modules, then one should expect an increase in bandwidth between the main memory and the CPU.
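The address mapping above can be sketched directly (function name hypothetical):

```python
def module_of(addr: int, n: int) -> tuple[int, int]:
    """n-way low-order interleaving: the word at address i lives in
    module i mod n, at local address i // n within that module."""
    return addr % n, addr // n

# Consecutive addresses fall in distinct modules of a 4-way
# interleaved memory, so they can be fetched in parallel.
assert [module_of(a, 4)[0] for a in range(4)] == [0, 1, 2, 3]
```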

Access gap Interleaved Memory Dependencies in programs (branches) and randomness in accessing data will degrade the effect of memory interleaving.

Access gap Interleaved Memory To show the effectiveness of memory interleaving, assume a purely sequential program of m instructions. For a conventional system in which main memory is a single module, the system must go through m fetch cycles and m execute cycles to run the program. For a system in which main memory is composed of n modules, the system executes the same program with m/n fetch cycles and m execute cycles.
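A quick worked check of the m/n claim (taking the ceiling when m is not a multiple of n; the helper name is hypothetical):

```python
def fetch_cycles(m: int, n: int) -> int:
    """Fetch cycles for m sequential instructions on an n-way
    interleaved memory: ceil(m / n)."""
    return -(-m // n)  # ceiling division

# 1000 instructions: 1000 fetch cycles with a single module,
# but only 250 with a 4-way interleaved memory.
assert fetch_cycles(1000, 1) == 1000
assert fetch_cycles(1000, 4) == 250
```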

Access gap Interleaved Memory The concept of modular memory can be traced back to the design of the so-called Harvard-class machines, where the main memory was composed of two modules: a program memory and a data memory.

Access gap Interleaved Memory The concept of interleaved memory itself was introduced in the design of the ILLIAC II system. In this machine, the memory was composed of two units: the even addresses generated by the CPU were sent to module 0, and the odd addresses were directed to module 1.