Sub Name: Computer Architecture and Organization. UNIT-IV-Memory Organization

Sub Code: EC2303    Dept: ECE    Sub Name: Computer Architecture and Organization
UNIT IV - Memory Organization

PART A

1. Give the features of a ROM cell. (AUC APR 08)
Data stored in ROM cannot be modified, or can be modified only slowly or with difficulty, so it is mainly used to distribute firmware. Strictly speaking, ROM refers only to mask ROM, which is fabricated with the desired data permanently stored in it and thus can never be modified. Despite the simplicity, speed and economies of scale of mask ROM, field programmability often makes reprogrammable memories more flexible and inexpensive.

2. List the differences between static RAM and dynamic RAM. (AUC APR 11, 08)
SRAM is static while DRAM is dynamic.
SRAM is faster than DRAM.
SRAM consumes less power than DRAM.
SRAM uses more transistors per bit of memory than DRAM.
SRAM is more expensive than DRAM.
The cheaper DRAM is used in main memory, while SRAM is commonly used in cache memory.

3. Define Locality of Reference. (AUC NOV 09, MAY 08)
The tendency of a process to access the same set of memory locations repetitively over a short period of time. There are two types: temporal locality and spatial locality.

4. What is a Translation Lookaside Buffer? (AUC MAY 06)
The translation lookaside buffer (TLB) is a cache for page table entries. It works in much the same way as the data cache: it stores recently accessed page table entries, and it relies on locality of reference. Since each TLB entry covers a whole page of physical memory (512 bytes to 8 Kbytes, commonly 4 Kbytes), a relatively small number of TLB entries can cover a large amount of program memory.

5. Define memory access time.
The time required by the processor to read data from, or write data to, a memory chip is referred to as the access time.

6. Define Cache memory.
Memory words are stored in the cache data memory and are grouped into small pages called cache blocks or lines. The contents of the cache's data memory are thus copies of a set of main memory blocks.

7. Define Virtual Memory.
Virtual memory is a memory management technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory. Main storage, as seen by a process or task, appears as a contiguous address space or a collection of contiguous segments. The operating system manages the virtual address spaces and the assignment of real memory to virtual memory.

8. What is meant by internal and external fragmentation? (AUC APR 11)
External fragmentation: external fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced: total memory space exists to satisfy a request, but it is not contiguous.
Internal fragmentation: internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.

9. Define Synchronous DRAMs.
Synchronous dynamic random access memory (SDRAM) is DRAM with an interface that is synchronous with the system bus carrying data between the CPU and the memory controller hub. SDRAM has a rapidly responding synchronous interface, in sync with the system bus, and it waits for the clock signal before it responds to control inputs.

10. What is Random Access Memory (RAM)? (AUC MAY 2013)
If storage locations can be accessed in any order and the access time is independent of the location being accessed, the memory is termed random access memory.

11. What is the use of virtual memory? (AUC MAY 12, 13)
Protection: VM is often used to protect one program from others in the system.
Base and bounds: this method allows relocation. User processes cannot be allowed to change these registers, but the OS must be able to do so on a process switch.

12. What is PROM?
Semiconductor ROMs whose contents can be changed offline, with some difficulty, are called PROMs.

13. What is EPROM?
EPROM (Erasable Programmable Read Only Memory) allows the stored data to be erased and reprogrammed, which is why it gained popularity quickly among hardware makers and hobbyists: it allows them to fully deploy their

program on the chip and then test it; once bugs are found, they can erase the EPROM and load a modified version for further testing.

14. What is EEPROM?
EEPROM is Electrically Erasable Programmable Read Only Memory. The only difference between the two is that an EEPROM can be erased with electricity. EEPROM also allows manufacturers to release patches to a program already in the field.

15. Draw the structure of the Memory hierarchy.
[Figure: the memory hierarchy, with the CPU at the top and Level 1, Level 2, ..., Level n below it; access time and memory size at each level increase with distance from the CPU.]

16. What is write-through protocol?
In a write operation, the cache location and the main memory location are updated simultaneously.

17. What is write-back (or copy-back) protocol?
In this scheme, only the block in the cache is modified; main memory is updated only when the block must be replaced in the cache. This requires the use of a dirty bit to keep track of blocks that have been modified.

18. What is RAMBUS memory?
The key feature of Rambus technology is a fast signalling method used to transfer information between chips using a narrow bus.

19. Mention two system organizations for caches.
Two system organizations for caches are:
a. look-aside
b. look-through

20. What are the categories of memories?
SRAM
DRAM

21. What is flash memory?
Flash memory is a recent semiconductor technology that offers the same non-volatility as a PROM, but whose contents can be rewritten in the field, typically a block at a time.

22. What is SRAM and DRAM?
SRAM: static random access memory. It tends to be faster and requires no refreshing.
DRAM: dynamic random access memory. Data is stored in the form of charge, so continuous refreshing is needed.

23. What is volatile memory?
A memory is volatile if the loss of power destroys the stored information. Information can be stored indefinitely in a volatile memory by providing battery backup or other means of maintaining a continuous supply of power.

24. Give the difference between EEPROM and flash memory.
The primary difference between EEPROM and flash memory is that flash restricts writes to blocks of multiple kilobytes, increasing the memory capacity per chip by reducing the area spent on control circuitry.

25. Differences between cache memory and virtual memory.
1. In caches, replacement is primarily controlled by the hardware; in VM, replacement is primarily controlled by the OS.
2. The number of bits in the address determines the size of VM, whereas cache size is independent of the address size. But there is only one class of cache.

PART B

1. Describe the organization of a typical RAM chip. (AUC MAY 07)
Random-access memory, or RAM, provides large quantities of temporary storage in a computer system. Recall the basic capabilities of a memory: it should be able to store a value, read the value that was saved, and change the stored value. A RAM is similar, except that it can store many values; an address specifies which memory value we are interested in, and each value can be a multiple-bit word.
A Chip Select input, CS, enables or disables the RAM. ADRS specifies the address or location to read from or write to. WR selects between reading from and writing to the memory.

To read from memory, WR should be set to 0; OUT will be the n-bit value stored at ADRS. To write to memory, we set WR = 1; DATA is the n-bit value to save in memory. This interface makes it easy to combine RAMs together.
A 2^k x n memory has k address lines, which can specify one of 2^k addresses, and each address contains an n-bit word. For example, a 2^24 x 16 RAM contains 2^24 = 16M words, each 16 bits long. The RAM would need 24 address lines, and its total storage capacity is 2^24 x 16 = 2^28 bits.
To read from this RAM, the controlling circuit must:
Enable the chip by ensuring CS = 1.
Select the read operation by setting WR = 0.
Send the desired address to the ADRS input.
The contents of that address appear on OUT after a short delay.
Static memory is modeled using one latch for each bit of storage. A latch can be made with only two NAND or two NOR gates, whereas a flip-flop requires at least twice that much hardware; the smaller circuit is faster, cheaper and requires less power, but the tradeoff is that getting the timing exactly right is harder.
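To make the capacity arithmetic above concrete, here is a small Python sketch (the function name and variables are illustrative, not from any standard library) that computes the geometry of a 2^k x n RAM:

def ram_geometry(k, n):
    """Return (words, address_lines, total_bits) for a 2^k x n RAM."""
    words = 2 ** k           # number of addressable locations
    total_bits = words * n   # each location holds an n-bit word
    return words, k, total_bits

# The 2^24 x 16 example from the text:
words, address_lines, total_bits = ram_geometry(24, 16)
print(words, address_lines, total_bits)   # 16777216 words, 24 address lines, 268435456 bits (2^28)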

These cells can be combined to make a 4 x 1 RAM. Since there are four words, ADRS is two bits. Each word is only one bit, so DATA and OUT are one bit each. Word selection is done with a decoder attached to the CS inputs of the RAM cells; only one cell can be read or written at a time. Notice that the outputs are connected together with a single line.
If the decoder is disabled, then all the three-state buffers will appear to be disconnected, and OUT will also appear disconnected. If the decoder is enabled, then exactly one of its outputs will be true, so only one of the tri-state buffers will be connected and produce an output. The net result is that we can save some wire and gate costs, and gain a little more flexibility in putting circuits together. Combining several such chips side by side, DATA and OUT can each be made four bits long, so you can read and write four-bit words.

2. Explain about Cache memory in detail. (AUC NOV 06, APR 08, MAY 13)
Cache Memories
The speed of the main memory is very low in comparison with the speed of the processor.

For good performance, the processor cannot spend much of its time waiting to access instructions and data in main memory, so it is important to devise a scheme that reduces the time needed to access the information. An efficient solution is to use a fast cache memory. When the cache is full and a memory word that is not in the cache is referenced, the cache control hardware must decide which block should be removed to create space for the new block that contains the referenced word.
The basics of caches:
Caches are organized on the basis of blocks, the smallest amount of data that can be copied between two adjacent levels at a time.
If data requested by the processor is present in some block in the upper level, it is called a hit.
If data is not found in the upper level, the request is called a miss and the data is retrieved from the lower level in the hierarchy.
The fraction of memory accesses found in the upper level is called the hit ratio.
Storage that takes advantage of locality of access is called a cache.

Performance of caches / Accessing a cache
Address Mapping in Cache:
Direct Mapping
In this technique, block j of the main memory maps onto block (j modulo 128) of a 128-block cache. Whenever main memory block 0, 128, 256, ... is loaded into the cache, it is stored in cache block 0; blocks 1, 129, 257, ... are stored in cache block 1, and so on.
[Figure: direct-mapped cache]
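The direct-mapping rule above is just a modulo operation. A minimal Python sketch, assuming the 128-block cache used in the example:

CACHE_BLOCKS = 128   # number of blocks in the cache, as in the example above

def direct_mapped_block(j, cache_blocks=CACHE_BLOCKS):
    """Cache block that main-memory block j maps onto under direct mapping."""
    return j % cache_blocks

# Blocks 0, 128 and 256 all compete for cache block 0; blocks 1, 129 and 257 for block 1.
print([direct_mapped_block(j) for j in (0, 128, 256, 1, 129, 257)])   # [0, 0, 0, 1, 1, 1]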

Associative Mapping
This is a more flexible mapping technique: a main memory block can be placed in any cache block position. Space in the cache can be used more efficiently, but all 128 tag patterns must be searched.
Set-Associative Mapping
This is a combination of the direct and associative mapping techniques. Blocks of the cache are grouped into sets, and the mapping allows a block of the

main memory to reside in any block of a specific set. Note: with 64 sets, memory blocks 0, 64, 128, ..., 4032 map into cache set 0.
Calculating block size.
Write-hit policies.
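In set-associative mapping the modulo operation selects a set rather than a single block; the block can then occupy any way of that set. A short sketch, assuming the 64-set organization implied by the note above:

NUM_SETS = 64   # assumed from the note: blocks 0, 64, 128, ..., 4032 share set 0

def cache_set(block_number, num_sets=NUM_SETS):
    """Set that a main-memory block maps onto in a set-associative cache."""
    return block_number % num_sets

print([cache_set(b) for b in (0, 64, 128, 4032, 65)])   # [0, 0, 0, 0, 1]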

REPLACEMENT POLICY:
On a cache miss we need to evict a line to make room for the new line. In an A-way set-associative cache the candidate policies are:
least-recently used (true LRU is too costly for high associativity)
pseudo-LRU (approximated LRU: in a four-way set-associative cache, one bit keeps track of which pair of blocks is LRU, and one bit per pair tracks which block in that pair is LRU)
fixed (useful for streaming workloads such as processing an audio stream)
For a two-way set-associative cache, random replacement has a miss rate about 1.1 times higher than LRU replacement. As caches become larger, the miss rates for both replacement strategies fall and the difference becomes small. Random replacement is sometimes better than the simple LRU approximations that can be easily implemented in hardware.
WRITE MISS POLICY:
Write allocate: allocate a new block on each write.
  fetch-on-write: fetch the entire block, then write the word into the block.
  no-fetch: allocate the block but do not fetch it; this requires valid bits per word and makes eviction more complex.
Write no-allocate: do not allocate a block if it is not already in the cache; writes go around the cache. Typically used with write-through, since main memory must be updated anyway.
Write invalidate: invalidate instead of update, for write-through caches.
Measuring and Improving Cache Performance
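Returning to the replacement policy above, here is a minimal Python sketch of true LRU bookkeeping within one set of an A-way set-associative cache. It only illustrates the policy; real hardware uses the cheaper approximations listed.

from collections import OrderedDict

class LRUSet:
    """One set of an A-way set-associative cache with true LRU replacement."""
    def __init__(self, ways):
        self.ways = ways
        self.lines = OrderedDict()   # tag -> line, ordered from least to most recently used

    def access(self, tag):
        if tag in self.lines:                 # hit: move the line to the most-recent position
            self.lines.move_to_end(tag)
            return "hit"
        if len(self.lines) >= self.ways:      # miss in a full set: evict the LRU line
            self.lines.popitem(last=False)
        self.lines[tag] = None                # install the new line
        return "miss"

s = LRUSet(ways=2)
print([s.access(t) for t in (0, 1, 0, 2, 1)])   # ['miss', 'miss', 'hit', 'miss', 'miss']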

3. Explain the concept of memory hierarchy. (AUC NOV 07, 06)
Principle of Locality: a program accesses a relatively small portion of its address space at any instant of time (the 90/10 rule: 10% of the code is executed 90% of the time).
Two types of locality:
Temporal locality: if an item is referenced, it will tend to be referenced again soon.
Spatial locality: if an item is referenced, items whose addresses are close by tend to be referenced soon.
Random access: access time is the same for all locations.
DRAM (Dynamic Random Access Memory): high density, low power, cheap, slow. Dynamic: needs to be refreshed regularly. Addresses are supplied in two halves (memory as a 2D matrix) using RAS/CAS (Row/Column Access Strobe). Used for main memory.
SRAM (Static Random Access Memory): low density, high power, expensive, fast. Static: content lasts as long as power is supplied. The address is not divided.

Used for caches.
Hit: the data appears in the upper level (Block X).
Hit rate: the fraction of memory accesses found in the upper level.
Hit time: the time to access the upper level = RAM access time + time to determine hit/miss.
Miss: the data needs to be retrieved from a block in the lower level (Block Y).
Miss rate = 1 - hit rate.
Miss penalty: the time to replace a block in the upper level + the time to deliver the block to the processor (latency + transmit time).
Hit time << miss penalty.
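These definitions are often combined into a single summary measure, the average memory access time (AMAT = hit time + miss rate x miss penalty); AMAT is not named in the text, and the numbers below are illustrative only:

def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average memory access time for a single-level cache."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Illustrative values: 1 ns hit time, 5% miss rate, 100 ns miss penalty
print(amat(1.0, 0.05, 100.0))   # 6.0 ns on average per access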

4. Describe the working principle of a typical magnetic disk. (AUC APR 08, MAY 13)
Magnetic disks provide the bulk of secondary storage in modern computers.
Drives rotate at 60 to 200 times per second.
Transfer rate is the rate at which data flow between the drive and the computer.
Positioning time (random-access time) is the time to move the disk arm to the desired cylinder (seek time) plus the time for the desired sector to rotate under the disk head (rotational latency).
A head crash results from the disk head making contact with the disk surface.
Disks can be removable.
The drive is attached to the computer via an I/O bus; buses vary and include EIDE, ATA, SATA, USB, Fibre Channel and SCSI. A host controller in the computer uses the bus to talk to the disk controller built into the drive or storage array.
Disk drives are addressed as large one-dimensional arrays of logical blocks, where the logical block is the smallest unit of transfer. The one-dimensional array of logical blocks is mapped onto the sectors of the disk sequentially: sector 0 is the first sector of the first track on the outermost cylinder, mapping proceeds in order through that track, then through the rest of the tracks in that cylinder, and then through the rest of the cylinders from outermost to innermost.
Host-attached storage is accessed through I/O ports talking to I/O buses. SCSI itself is a bus, with up to 16 devices on one cable; a SCSI initiator requests an operation and SCSI targets perform the tasks, and each target can have up to 8 logical units (disks attached to the device controller).
FC (Fibre Channel) is a high-speed serial architecture. It can be a switched fabric with a 24-bit address space, the basis of storage area networks (SANs) in which many hosts attach to many storage units, or it can be an arbitrated loop (FC-AL) of 126 devices.
Network-attached storage (NAS) is storage made available over a network rather than over a local connection (such as a bus). NFS and CIFS are common protocols, implemented via remote procedure calls (RPCs) between host and storage. The newer iSCSI protocol uses an IP network to carry the SCSI protocol.
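Putting the positioning and transfer components described above together, the time to service one request can be estimated as in the sketch below; all of the drive parameters are illustrative, not taken from the text:

def disk_access_time_ms(seek_ms, rpm, transfer_mb_per_s, request_kb):
    """Estimated time to service one request: seek + average rotational latency + transfer."""
    rotational_latency_ms = 0.5 * (60000.0 / rpm)                     # half a revolution on average
    transfer_ms = (request_kb / 1024.0) / transfer_mb_per_s * 1000.0  # time to move the data
    return seek_ms + rotational_latency_ms + transfer_ms

# Illustrative drive: 5 ms average seek, 7200 rpm, 100 MB/s sustained transfer, 4 KB request
print(round(disk_access_time_ms(5.0, 7200, 100.0, 4), 2))   # about 9.21 ms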

Disk scheduling:
The operating system is responsible for using the hardware efficiently; for the disk drives, this means achieving a fast access time and high disk bandwidth.
Access time has two major components:
Seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector.
Rotational latency is the additional time spent waiting for the disk to rotate the desired sector under the disk head.
We want to minimize seek time; seek time is roughly proportional to the seek distance.
Disk bandwidth is the total number of bytes transferred divided by the total time between the first request for service and the completion of the last transfer.

5. How does a virtual address get translated into a physical address? Explain in detail with a neat diagram. Explain the use of the TLB. (AUC APR 08, NOV 06) What is virtual memory? How is it implemented? (AUC NOV 07, 06, 08)
Virtual memory is a computer system technique which gives an application program the impression that it has contiguous working memory (an address space), while in fact its memory may be physically fragmented and may even overflow onto disk storage. Virtual memory provides two primary functions:
1. Each process has its own address space, so it need not be relocated nor required to use relative addressing modes.
2. Each process sees one contiguous block of free memory upon launch; fragmentation is hidden.
All implementations (excluding emulators) require hardware support, typically in the form of a memory management unit built into the CPU. Systems that use this technique make programming of large applications easier and use real physical memory (e.g. RAM) more efficiently than those without virtual memory.
Virtual memory differs significantly from memory virtualization in that virtual memory allows resources to be virtualized as memory for a specific system, as opposed to a large pool of memory being virtualized as smaller pools for many different systems. Note that "virtual memory" is more than just "using disk space to extend physical memory size" - that is merely the extension of the memory hierarchy to include hard disk drives. Extending memory to disk is a normal consequence of using virtual memory techniques, but it could be done by other means such as overlays or swapping programs and their data completely out to disk while they are inactive. The definition of "virtual memory" is based on redefining the address space with contiguous virtual memory addresses to "trick" programs into thinking they are using large blocks of contiguous addresses.

Paged virtual memory
Almost all implementations of virtual memory divide the virtual address space of an application program into pages; a page is a block of contiguous virtual memory addresses. Pages are usually at least 4 Kbytes in size, and systems with large virtual address ranges or large amounts of real memory (e.g. RAM) generally use larger page sizes.

Page tables
Almost all implementations use page tables to translate the virtual addresses seen by the application program into physical addresses (also referred to as "real addresses") used by the hardware to process instructions. Each entry in the page table contains a mapping from a virtual page to either the real memory address at which the page is stored, or an indicator that the page is currently held in a disk file. (Although most do, some systems may not support use of a disk file for virtual memory.)
Systems can have one page table for the whole system or a separate page table for each application. If there is only one, different applications running at the same time share a single virtual address space, i.e. they use different parts of a single range of virtual addresses. Systems which use multiple page tables provide multiple virtual address spaces: concurrent applications think they are using the same range of virtual addresses, but their separate page tables redirect to different real addresses.

Dynamic address translation
While executing an instruction, the CPU fetches an instruction located at a particular virtual address, or fetches data from a specific virtual address or stores data to a particular virtual address; in each case the virtual address must be translated to the corresponding physical address. This is done by a hardware component, sometimes called a memory management unit, which looks up the real address (from the page table) corresponding to a virtual address and passes the real address to the parts of the CPU which execute instructions.

Paging supervisor
This part of the operating system creates and manages the page tables. If the dynamic address translation hardware raises a page fault exception, the paging supervisor searches the page space on secondary storage for the page containing the required virtual address, reads it into real physical memory, updates the page tables to reflect the new location of the virtual address, and finally tells the dynamic address translation mechanism to start the search again.

Usually all of the real physical memory is already in use, and the paging supervisor must first save an area of real physical memory to disk and update the page table to say that the associated virtual addresses are no longer in real physical memory but are saved on disk. Paging supervisors generally save and overwrite areas of real physical memory which have been least recently used, because these are probably the areas which are used least often. So every time the dynamic address translation hardware matches a virtual address with a real physical memory address, it must put a time-stamp in the page table entry for that virtual address.

Permanently resident pages
All virtual memory systems have memory areas that are "pinned down", i.e. cannot be swapped out to secondary storage, for example:
Interrupt mechanisms generally rely on an array of pointers to the handlers for various types of interrupt (I/O completion, timer event, program error, page fault, etc.). If the pages containing these pointers or the code that they invoke were pageable, interrupt handling would become even more complex and time-consuming, and it would be especially difficult in the case of page-fault interrupts.
The page tables are usually not pageable.
Data buffers that are accessed outside of the CPU, for example by peripheral devices that use direct memory access (DMA) or by I/O channels. Usually such devices and the buses (connection paths) to which they are attached use physical memory addresses rather than virtual memory addresses. Even on buses with an IOMMU, which is a special memory management unit that can translate virtual addresses used on an I/O bus to physical addresses, the transfer cannot be stopped if a page fault occurs and then restarted when the page fault has been processed. So pages containing locations to or from which a peripheral device is transferring data are either permanently pinned down or pinned down while the transfer is in progress.
Timing-dependent kernel/application areas cannot tolerate the varying response time caused by paging.
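The page-table translation described earlier in this answer can be sketched in a few lines of Python; the page table contents and the 4 KB page size below are illustrative assumptions:

PAGE_SIZE = 4096   # bytes; 4 KB pages, as mentioned for the TLB in Part A question 4

# Toy page table: virtual page number -> physical frame number, or None if the page is on disk.
page_table = {0: 5, 1: 9, 2: None, 3: 1}

def translate(virtual_address):
    """Translate a virtual address to a physical address using the toy page table."""
    vpn = virtual_address // PAGE_SIZE       # virtual page number
    offset = virtual_address % PAGE_SIZE     # byte offset within the page
    frame = page_table.get(vpn)
    if frame is None:
        raise RuntimeError("page fault: virtual page %d is not in physical memory" % vpn)
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))   # virtual page 1, offset 0xABC -> frame 9 -> 0x9abc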

Compile time: if it is known in advance that a program will reside at a specific location in main memory, then the compiler may be told to build the object code with absolute addresses right away. For example, the boot sector on a bootable disk may be compiled with the starting point of the code set to 007C:0000.
Load time: it is pretty rare that we know the location a program will be assigned ahead of its execution. In most cases, the compiler must generate relocatable code with logical addresses; the address translation may then be performed on the code at load time. Figure 3 shows a program loaded at location x: if the whole program resides in one monolithic block, then every memory reference may be translated to a physical address by adding x to it. However, this fixed partitioning scheme has two disadvantages:
A program that is too big to be held in a partition needs a special design, called overlay, which places a heavy burden on programmers. With overlay, a process consists of several portions, each mapped to the same location of the partition, and

at any time, only one portion may reside in the partition; when another portion is referenced, the current portion is switched out.
A program may be much smaller than a partition, so space left in the partition will be wasted, which is referred to as internal fragmentation.
As an improvement, shown in Figure 4(b), unequal-size partitions may be configured in main memory so that small programs occupy small partitions and big programs are also likely to fit into big partitions. Although this may solve the above problems with fixed equal-size partitioning to some degree, the fundamental weakness still exists: the number of partitions is the maximum number of processes that can reside in main memory at the same time. When most processes are small, the system should be able to accommodate more of them but fails to do so because of this limitation. More flexibility is needed.
Dynamic partitioning
To overcome the difficulties with fixed partitioning, partitioning may be done dynamically, which is called dynamic partitioning. With it, the main memory portion for user applications is initially a single contiguous block. When a new process is created, the exact amount of memory space it needs is allocated to it. As time goes on, many small holes appear in main memory, which is referred to as external fragmentation: although much space is still available, it cannot be allocated to new processes.
A method for overcoming external fragmentation is compaction. From time to time, the operating system moves the processes so that they occupy contiguous sections and all of the small holes are brought together to make one big block of space. The disadvantage of compaction is that the procedure is time-consuming and requires relocation capability.

Address translation
The figure shows the address translation procedure with dynamic partitioning, where the processor provides hardware support for address translation, protection, and relocation. The base register holds the entry point of the program and may be added to a relative address to generate an absolute address. The bounds register indicates the ending location of the program and is compared with each physical address generated; if the latter is within bounds, execution may proceed, otherwise an interrupt is generated, indicating illegal access to memory. Relocation is easily supported with this mechanism: the new starting address and ending address are simply assigned to the base register and the bounds register respectively.
Placement algorithm
Different strategies may be taken as to how space is allocated to processes:
First fit: allocate the first hole that is big enough. Searching may start either at the beginning of the set of holes or where the previous first-fit search ended.
Best fit: allocate the smallest hole that is big enough. The entire list of holes must be searched unless it is sorted by size. This strategy produces the smallest leftover hole.
Worst fit: allocate the largest hole. In contrast, this strategy aims to produce the largest leftover hole, which may be big enough to hold another process.
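The three placement strategies just listed can be sketched directly in Python; the hole sizes below are illustrative:

def first_fit(holes, request):
    """Index of the first hole big enough for the request, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Index of the smallest hole big enough for the request, or None."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    """Index of the largest hole, if it is big enough for the request, or None."""
    size, i = max((size, i) for i, size in enumerate(holes))
    return i if size >= request else None

holes = [500, 120, 300, 60]        # free hole sizes in KB
print(first_fit(holes, 100))       # 0  (the 500 KB hole is found first)
print(best_fit(holes, 100))        # 1  (the 120 KB hole is the smallest that fits)
print(worst_fit(holes, 100))       # 0  (the 500 KB hole is the largest)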


Implementing Protection with Virtual Memory
To enable the OS to implement protection in the VM system, the hardware must:
1. Support at least two modes that indicate whether the running process is a user process or an OS process (also called a kernel, supervisor or executive process).
2. Provide a portion of the CPU state that a user process can read but not write (this includes the supervisor mode bit).
3. Provide a mechanism whereby the CPU can go from user mode to supervisor mode (accomplished by a system call exception) and vice versa (return-from-exception instruction).
Only an OS process can change the page tables. Page tables are held in the OS address space, thereby preventing a user process from changing them. When processes want to share information in a limited way, the operating system must assist them; the write access bit (in both the TLB and the page table) can be used to restrict the sharing to read-only sharing.
Cache Misses

6. Discuss in detail about basic memory concepts. (AUC MAY 12)
The maximum size of the memory in any computer is determined by the addressing scheme. Most modern computers are byte-addressable. Figure 4.1 shows the possible address assignments for a byte-addressable 32-bit computer: some processors use the big-endian arrangement, while Intel processors use the little-endian arrangement.
Modern implementations of computer memory are rather complex and difficult to understand at first, so we consider the traditional approach before proceeding to more recent approaches.
1. Data transfer between the memory and the processor takes place through the use of two processor registers, usually called MAR (memory address register) and MDR (memory data register).
2. If MAR is k bits long and MDR is n bits long, then the memory unit may contain up to 2^k addressable locations.
3. During a memory cycle, n bits of data are transferred between the memory and the processor.
4. This transfer takes place over the processor bus, which has k address lines and n data lines.
5. The bus also includes the control lines Read/Write (R/W) and Memory Function Completed (MFC) for coordinating data transfers.

6. Other control lines may also be added to indicate the number of bytes to be transferred. The connection between the processor and the memory is shown schematically in the figure.
7. The processor reads data from the memory by loading the address of the required memory location into the MAR register and setting the R/W line to 1.
8. The memory responds by placing the data from the addressed location onto the data lines, and confirms this by asserting the MFC signal.
9. After receiving the MFC signal, the processor loads the data lines into the MDR register.
10. The processor writes data into a memory location by loading the address of the location into MAR and loading the data into MDR. A write operation sets the R/W line to 0.
11. If read or write operations involve consecutive address locations in the main memory, a block transfer operation can be performed.
Address space:
16-bit: 2^16 = 64K memory locations
32-bit: 2^32 = 4G memory locations
40-bit: 2^40 = 1T memory locations
Terminology:
Memory access time: the time between the Read and MFC signals.
Memory cycle time: the minimum time delay between the initiation of two successive memory operations.
Internal organization of memory chips: the cells are arranged in the form of an array with word lines and bit lines; a 16x8 organization has 16 words of 8 bits each.
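As a rough illustration of the read sequence in steps 7-10 above, the toy model below mimics the MAR and MDR registers and the R/W and MFC signals in software; it is a sketch of the handshake only, not of any real bus:

class ToyMemoryBus:
    """Toy model of the processor-memory read handshake (steps 7-10 above)."""
    def __init__(self, contents):
        self.contents = contents   # address -> data word
        self.MAR = 0
        self.MDR = 0
        self.RW = 0
        self.MFC = False

    def read(self, address):
        self.MAR = address                      # step 7: load the address into MAR
        self.RW = 1                             # step 7: R/W = 1 selects a read
        data_lines = self.contents[self.MAR]    # step 8: memory drives the data lines
        self.MFC = True                         # step 8: memory asserts MFC
        if self.MFC:                            # step 9: on MFC, latch the data into MDR
            self.MDR = data_lines
        return self.MDR

bus = ToyMemoryBus({0x10: 0xABCD})
print(hex(bus.read(0x10)))   # 0xabcd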


DRAMs: data is stored as charge on a capacitor, so the cells need periodic refreshing.
[Figure: a single-transistor dynamic memory cell]

Synchronous DRAMs
SDRAM operation is synchronized with a clock signal. The cell array is the same as in asynchronous DRAMs, but the address and data connections are buffered by means of registers. The output of each sense amplifier is connected to a latch. A read operation causes the contents of all cells in the selected row to be loaded into these latches; if only refreshing has to be done, the contents of the latches are not changed. Data held in the latches that correspond to the selected column(s) are transferred into the data output register. SDRAMs have several different modes of operation, which can be selected by writing control information into a mode register.

Latency and Bandwidth
Transfers between the memory and the processor involve single words of data or small blocks of words, while large blocks, constituting a page of data, are transferred between the memory and the disk. The speed and efficiency of these transfers have a large impact on the performance of a computer system. Good performance is indicated by two parameters: latency and bandwidth. Memory latency refers to the amount of time it takes to transfer a word of data to or from the memory. When transferring blocks of data, it is of interest to know how much time is needed to transfer an entire block; since blocks can be variable in size, it is useful to define performance in terms of the number of bits or bytes that can be transferred in one second. This measure is referred to as the memory bandwidth.
Double Data Rate SDRAM
Since these SDRAMs transfer data on both edges of the clock, their bandwidth is essentially doubled for long burst transfers. Such devices are known as double-data-rate SDRAMs (DDR SDRAMs). To make it possible to access the data at a high rate, the cell array is organized in two banks, and each bank can be accessed separately. Consecutive words of a given block are stored in different banks; such interleaving of words allows simultaneous access to two words that are transferred on successive edges of the clock.
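A quick sketch of the peak-bandwidth arithmetic for a double-data-rate interface; the bus width and clock frequency below are illustrative, not from the text:

def ddr_peak_bandwidth_mb_per_s(bus_width_bits, clock_mhz):
    """Peak bandwidth of a DDR interface, which transfers data on both clock edges."""
    transfers_per_second = clock_mhz * 1e6 * 2
    return transfers_per_second * bus_width_bits / 8 / 1e6

# Illustrative: a 64-bit memory bus clocked at 133 MHz
print(ddr_peak_bandwidth_mb_per_s(64, 133))   # 2128.0 MB/s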

Memory system considerations:
Cost
Speed
Power dissipation
Size of chip
Memory controller: used between the processor and the memory; it handles the refresh overhead.
Memory hierarchy

Principle of locality:
Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon.
Spatial locality (locality in space): if an item is referenced, items whose addresses are close by will tend to be referenced soon.
Sequentiality is a subset of spatial locality.
The principle of locality can be exploited by implementing the memory of the computer as a memory hierarchy, taking advantage of all types of memories. Method: the level closer to the processor (the fastest) holds a subset of any level further away.

7. Explain in detail about associative memory.
Associative memory is also called content-addressable memory (CAM). A CAM is accessed simultaneously and in parallel on the basis of data content rather than by a specific address or location. Associative memory is more expensive than RAM because each cell must have storage capability as well as logic circuits.
The argument register holds an external argument for content matching; the key register is a mask for choosing a particular field or key in the argument word.
[Figure: block diagram of associative memory; single cell of associative memory]

[Figure: associative-mapped cache]
There are two methods of realizing an associative memory. The first is the construction of a memory with storage cells that can simultaneously perform the functions of storage, non-destructive reading, and comparison. The second is the programmed organization (modelling) of the memory: associative connections are established between the information contained in the memory by an ordered arrangement of the information in the form of sequential chains or groups (lists) connected by linkage addresses whose codes are stored in the same memory cells.
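The argument-register/key-register search described above can be modelled in software as a masked comparison against every stored word; the data values below are illustrative:

def cam_search(memory_words, argument, key_mask):
    """Indices of words that match the argument in every bit position where the key mask is 1."""
    return [i for i, word in enumerate(memory_words)
            if (word ^ argument) & key_mask == 0]

words = [0b10110010, 0b10100111, 0b00110010]
# Key register selects the upper four bits; the argument register holds 1011 in those bits.
print(cam_search(words, 0b10110000, 0b11110000))   # [0]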


More information

CISC 360. The Memory Hierarchy Nov 13, 2008

CISC 360. The Memory Hierarchy Nov 13, 2008 CISC 360 The Memory Hierarchy Nov 13, 2008 Topics Storage technologies and trends Locality of reference Caching in the memory hierarchy class12.ppt Random-Access Memory (RAM) Key features RAM is packaged

More information

Chapter 12: Mass-Storage Systems. Operating System Concepts 8 th Edition,

Chapter 12: Mass-Storage Systems. Operating System Concepts 8 th Edition, Chapter 12: Mass-Storage Systems, Silberschatz, Galvin and Gagne 2009 Chapter 12: Mass-Storage Systems Overview of Mass Storage Structure Disk Structure Disk Attachment Disk Scheduling Disk Management

More information

William Stallings Computer Organization and Architecture 6th Edition. Chapter 5 Internal Memory

William Stallings Computer Organization and Architecture 6th Edition. Chapter 5 Internal Memory William Stallings Computer Organization and Architecture 6th Edition Chapter 5 Internal Memory Semiconductor Memory Types Semiconductor Memory RAM Misnamed as all semiconductor memory is random access

More information

CS252 S05. Main memory management. Memory hardware. The scale of things. Memory hardware (cont.) Bottleneck

CS252 S05. Main memory management. Memory hardware. The scale of things. Memory hardware (cont.) Bottleneck Main memory management CMSC 411 Computer Systems Architecture Lecture 16 Memory Hierarchy 3 (Main Memory & Memory) Questions: How big should main memory be? How to handle reads and writes? How to find

More information

Chapter 5. Large and Fast: Exploiting Memory Hierarchy

Chapter 5. Large and Fast: Exploiting Memory Hierarchy Chapter 5 Large and Fast: Exploiting Memory Hierarchy Review: Major Components of a Computer Processor Devices Control Memory Input Datapath Output Secondary Memory (Disk) Main Memory Cache Performance

More information

Memory Pearson Education, Inc., Hoboken, NJ. All rights reserved.

Memory Pearson Education, Inc., Hoboken, NJ. All rights reserved. 1 Memory + 2 Location Internal (e.g. processor registers, cache, main memory) External (e.g. optical disks, magnetic disks, tapes) Capacity Number of words Number of bytes Unit of Transfer Word Block Access

More information

CpE 442. Memory System

CpE 442. Memory System CpE 442 Memory System CPE 442 memory.1 Outline of Today s Lecture Recap and Introduction (5 minutes) Memory System: the BIG Picture? (15 minutes) Memory Technology: SRAM and Register File (25 minutes)

More information

Basic Organization Memory Cell Operation. CSCI 4717 Computer Architecture. ROM Uses. Random Access Memory. Semiconductor Memory Types

Basic Organization Memory Cell Operation. CSCI 4717 Computer Architecture. ROM Uses. Random Access Memory. Semiconductor Memory Types CSCI 4717/5717 Computer Architecture Topic: Internal Memory Details Reading: Stallings, Sections 5.1 & 5.3 Basic Organization Memory Cell Operation Represent two stable/semi-stable states representing

More information

CS 61C: Great Ideas in Computer Architecture. The Memory Hierarchy, Fully Associative Caches

CS 61C: Great Ideas in Computer Architecture. The Memory Hierarchy, Fully Associative Caches CS 61C: Great Ideas in Computer Architecture The Memory Hierarchy, Fully Associative Caches Instructor: Alan Christopher 7/09/2014 Summer 2014 -- Lecture #10 1 Review of Last Lecture Floating point (single

More information

Memory. Lecture 22 CS301

Memory. Lecture 22 CS301 Memory Lecture 22 CS301 Administrative Daily Review of today s lecture w Due tomorrow (11/13) at 8am HW #8 due today at 5pm Program #2 due Friday, 11/16 at 11:59pm Test #2 Wednesday Pipelined Machine Fetch

More information

The Memory Component

The Memory Component The Computer Memory Chapter 6 forms the first of a two chapter sequence on computer memory. Topics for this chapter include. 1. A functional description of primary computer memory, sometimes called by

More information

Internal Memory. Computer Architecture. Outline. Memory Hierarchy. Semiconductor Memory Types. Copyright 2000 N. AYDIN. All rights reserved.

Internal Memory. Computer Architecture. Outline. Memory Hierarchy. Semiconductor Memory Types. Copyright 2000 N. AYDIN. All rights reserved. Computer Architecture Prof. Dr. Nizamettin AYDIN naydin@yildiz.edu.tr nizamettinaydin@gmail.com Internal Memory http://www.yildiz.edu.tr/~naydin 1 2 Outline Semiconductor main memory Random Access Memory

More information

CMSC 313 COMPUTER ORGANIZATION & ASSEMBLY LANGUAGE PROGRAMMING LECTURE 26, SPRING 2013

CMSC 313 COMPUTER ORGANIZATION & ASSEMBLY LANGUAGE PROGRAMMING LECTURE 26, SPRING 2013 CMSC 313 COMPUTER ORGANIZATION & ASSEMBLY LANGUAGE PROGRAMMING LECTURE 26, SPRING 2013 TOPICS TODAY End of the Semester Stuff Homework 5 Memory Hierarchy Storage Technologies (RAM & Disk) Caching END OF

More information

MARTHANDAM COLLEGE OF ENGINEERING AND TECHNOLOGY DEPARTMENT OF INFORMATION TECHNOLOGY TWO MARK QUESTIONS AND ANSWERS

MARTHANDAM COLLEGE OF ENGINEERING AND TECHNOLOGY DEPARTMENT OF INFORMATION TECHNOLOGY TWO MARK QUESTIONS AND ANSWERS MARTHANDAM COLLEGE OF ENGINEERING AND TECHNOLOGY DEPARTMENT OF INFORMATION TECHNOLOGY TWO MARK QUESTIONS AND ANSWERS SUB NAME: COMPUTER ORGANIZATION AND ARCHITECTTURE SUB CODE: CS 2253 YEAR/SEM:II/IV Marthandam

More information

Instruction Register. Instruction Decoder. Control Unit (Combinational Circuit) Control Signals (These signals go to register) The bus and the ALU

Instruction Register. Instruction Decoder. Control Unit (Combinational Circuit) Control Signals (These signals go to register) The bus and the ALU Hardwired and Microprogrammed Control For each instruction, the control unit causes the CPU to execute a sequence of steps correctly. In reality, there must be control signals to assert lines on various

More information

Giving credit where credit is due

Giving credit where credit is due CSCE 230J Computer Organization The Memory Hierarchy Dr. Steve Goddard goddard@cse.unl.edu http://cse.unl.edu/~goddard/courses/csce230j Giving credit where credit is due Most of slides for this lecture

More information

chapter 8 The Memory System Chapter Objectives

chapter 8 The Memory System Chapter Objectives chapter 8 The Memory System Chapter Objectives In this chapter you will learn about: Basic memory circuits Organization of the main memory Memory technology Direct memory access as an I/O mechanism Cache

More information

UNIVERSITY OF MASSACHUSETTS Dept. of Electrical & Computer Engineering. Computer Architecture ECE 568/668

UNIVERSITY OF MASSACHUSETTS Dept. of Electrical & Computer Engineering. Computer Architecture ECE 568/668 UNIVERSITY OF MASSACHUSETTS Dept. of Electrical & Computer Engineering Computer Architecture ECE 568/668 Part 11 Memory Hierarchy - I Israel Koren ECE568/Koren Part.11.1 ECE568/Koren Part.11.2 Ideal Memory

More information

CS 261 Fall Mike Lam, Professor. Memory

CS 261 Fall Mike Lam, Professor. Memory CS 261 Fall 2016 Mike Lam, Professor Memory Topics Memory hierarchy overview Storage technologies SRAM DRAM PROM / flash Disk storage Tape and network storage I/O architecture Storage trends Latency comparisons

More information

Memory systems. Memory technology. Memory technology Memory hierarchy Virtual memory

Memory systems. Memory technology. Memory technology Memory hierarchy Virtual memory Memory systems Memory technology Memory hierarchy Virtual memory Memory technology DRAM Dynamic Random Access Memory bits are represented by an electric charge in a small capacitor charge leaks away, need

More information

Computer Architecture. Memory Hierarchy. Lynn Choi Korea University

Computer Architecture. Memory Hierarchy. Lynn Choi Korea University Computer Architecture Memory Hierarchy Lynn Choi Korea University Memory Hierarchy Motivated by Principles of Locality Speed vs. Size vs. Cost tradeoff Locality principle Temporal Locality: reference to

More information

Advanced Parallel Architecture Lesson 4 bis. Annalisa Massini /2015

Advanced Parallel Architecture Lesson 4 bis. Annalisa Massini /2015 Advanced Parallel Architecture Lesson 4 bis Annalisa Massini - 2014/2015 Internal Memory RAM Many memory types are random access individual words of memory are directly accessed through wired-in addressing

More information

LECTURE 10: Improving Memory Access: Direct and Spatial caches

LECTURE 10: Improving Memory Access: Direct and Spatial caches EECS 318 CAD Computer Aided Design LECTURE 10: Improving Memory Access: Direct and Spatial caches Instructor: Francis G. Wolff wolff@eecs.cwru.edu Case Western Reserve University This presentation uses

More information

Computer Systems. Memory Hierarchy. Han, Hwansoo

Computer Systems. Memory Hierarchy. Han, Hwansoo Computer Systems Memory Hierarchy Han, Hwansoo Random-Access Memory (RAM) Key features RAM is traditionally packaged as a chip. Basic storage unit is normally a cell (one bit per cell). Multiple RAM chips

More information

COMPUTER ARCHITECTURE AND ORGANIZATION

COMPUTER ARCHITECTURE AND ORGANIZATION Memory System 1. Microcomputer Memory Memory is an essential component of the microcomputer system. It stores binary instructions and datum for the microcomputer. The memory is the place where the computer

More information

Chapter 10: Mass-Storage Systems

Chapter 10: Mass-Storage Systems Chapter 10: Mass-Storage Systems Silberschatz, Galvin and Gagne 2013 Chapter 10: Mass-Storage Systems Overview of Mass Storage Structure Disk Structure Disk Attachment Disk Scheduling Disk Management Swap-Space

More information

Cache Architectures Design of Digital Circuits 217 Srdjan Capkun Onur Mutlu http://www.syssec.ethz.ch/education/digitaltechnik_17 Adapted from Digital Design and Computer Architecture, David Money Harris

More information

Chapter 10: Mass-Storage Systems. Operating System Concepts 9 th Edition

Chapter 10: Mass-Storage Systems. Operating System Concepts 9 th Edition Chapter 10: Mass-Storage Systems Silberschatz, Galvin and Gagne 2013 Chapter 10: Mass-Storage Systems Overview of Mass Storage Structure Disk Structure Disk Attachment Disk Scheduling Disk Management Swap-Space

More information

UNIT:4 MEMORY ORGANIZATION

UNIT:4 MEMORY ORGANIZATION 1 UNIT:4 MEMORY ORGANIZATION TOPICS TO BE COVERED. 4.1 Memory Hierarchy 4.2 Memory Classification 4.3 RAM,ROM,PROM,EPROM 4.4 Main Memory 4.5Auxiliary Memory 4.6 Associative Memory 4.7 Cache Memory 4.8

More information

COMPUTER ORGANIZATION AND DESIGN The Hardware/Software Interface

COMPUTER ORGANIZATION AND DESIGN The Hardware/Software Interface COMPUTER ORGANIZATION AND DESIGN The Hardware/Software Interface COEN-4710 Computer Hardware Lecture 7 Large and Fast: Exploiting Memory Hierarchy (Chapter 5) Cristinel Ababei Marquette University Department

More information

CSF Cache Introduction. [Adapted from Computer Organization and Design, Patterson & Hennessy, 2005]

CSF Cache Introduction. [Adapted from Computer Organization and Design, Patterson & Hennessy, 2005] CSF Cache Introduction [Adapted from Computer Organization and Design, Patterson & Hennessy, 2005] Review: The Memory Hierarchy Take advantage of the principle of locality to present the user with as much

More information

Reducing Hit Times. Critical Influence on cycle-time or CPI. small is always faster and can be put on chip

Reducing Hit Times. Critical Influence on cycle-time or CPI. small is always faster and can be put on chip Reducing Hit Times Critical Influence on cycle-time or CPI Keep L1 small and simple small is always faster and can be put on chip interesting compromise is to keep the tags on chip and the block data off

More information

EEC 170 Computer Architecture Fall Cache Introduction Review. Review: The Memory Hierarchy. The Memory Hierarchy: Why Does it Work?

EEC 170 Computer Architecture Fall Cache Introduction Review. Review: The Memory Hierarchy. The Memory Hierarchy: Why Does it Work? EEC 17 Computer Architecture Fall 25 Introduction Review Review: The Hierarchy Take advantage of the principle of locality to present the user with as much memory as is available in the cheapest technology

More information

CMPSC 311- Introduction to Systems Programming Module: Caching

CMPSC 311- Introduction to Systems Programming Module: Caching CMPSC 311- Introduction to Systems Programming Module: Caching Professor Patrick McDaniel Fall 2016 Reminder: Memory Hierarchy L0: Registers CPU registers hold words retrieved from L1 cache Smaller, faster,

More information

CENG 3420 Computer Organization and Design. Lecture 08: Memory - I. Bei Yu

CENG 3420 Computer Organization and Design. Lecture 08: Memory - I. Bei Yu CENG 3420 Computer Organization and Design Lecture 08: Memory - I Bei Yu CEG3420 L08.1 Spring 2016 Outline q Why Memory Hierarchy q How Memory Hierarchy? SRAM (Cache) & DRAM (main memory) Memory System

More information