Implementation and Performance Evaluation of RAPID-Cache under Linux


Ming Zhang, Xubin He, and Qing Yang
Department of Electrical and Computer Engineering, University of Rhode Island, Kingston, RI 02881
{mingz, hexb, qyang}@ele.uri.edu

Abstract

Recent research results [1] using simulation have demonstrated that the RAPID-Cache (Redundant, Asymmetrically Parallel, and Inexpensive Disk Cache) has the potential to significantly improve the performance and reliability of disk I/O systems. To validate whether RAPID-Cache can live up to its promise in a real-world environment, we have designed and implemented a RAPID-Cache under the Linux operating system as a kernel device driver. As expected, the measured performance results are very promising. Numerical results using a popular benchmark program show a factor of two to six performance gain in terms of average system throughput. Furthermore, the RAPID-Cache driver is completely transparent to the Linux operating system: it requires no change to the OS nor to the on-disk data layout. As a result, it can be used as an add-on to an existing system to obtain immediate performance and reliability improvement.

Key Words: Disk I/O, File Cache, RAPID-Cache, Performance Evaluation

1. Introduction

Modern disk I/O systems make extensive use of nonvolatile RAM (NVRAM) write caches for asynchronous writes [2,3,4]. Such write caches significantly reduce the response time of disk I/O systems seen by users, particularly in RAID systems. Large write caches can also improve system throughput by taking advantage of both temporal and spatial locality, as data may be overwritten several times or combined into large chunks before being written to disk. However, the use of a single-copy write cache compromises system reliability, because RAM is less reliable than disks in terms of Mean Time To Failure. Dual-copy caches can overcome the reliability problem but are prohibitively costly, since RAM is much more expensive than disk storage.
We have proposed a new disk cache architecture called the Redundant, Asymmetrically Parallel, and Inexpensive Disk Cache, or RAPID-Cache for short, to provide fault-tolerant caching for disk I/O systems inexpensively. Simulation results [1] have shown that RAPID-Cache is an effective cache structure that provides better performance and reliability at low cost compared to single-copy or dual-copy cache structures. To justify the feasibility of RAPID-Cache and validate our simulation results in real-world environments, we have implemented a RAPID-Cache prototype under Red Hat Linux 7.1. Using our implementation, we carried out measurements with different cache configurations, including a single-copy unified cache, a dual-copy unified cache, and RAPID-Cache. Numerical results show that all three cache configurations provide better performance than the basic Linux file system cache. They also show that, among these configurations, the RAPID-Cache architecture provides the highest performance and reliability at the same cost.

The rest of the paper is organized as follows. The next section presents the detailed design and implementation of RAPID-Cache. Section 3 presents our performance evaluation methodology, numerical results, and analysis. We conclude the paper in Section 4.

Figure 1. RAPID-Cache on top of a disk system. (The primary unified cache in DRAM/NVRAM sits with the disk controller above the data disk; the backup cache consists of a small NVRAM on top of a log disk.)

2. Design and Implementation

The RAPID-Cache organization consists of two main parts: a unified cache and a backup cache. The unified cache in RAPID-Cache has the same structure as a normal unified cache and can use system DRAM, or NVRAM for higher reliability. RAPID-Cache's backup cache is a two-level hierarchical structure with a small NVRAM on top of a log disk, similar to the structure of our previous work [5, 6].

2.1 Backup Cache Structure

The backup cache in a RAPID-Cache consists of an LRU cache, several segment buffers, a log disk, and a disk segment table. The LRU cache and the segment buffers should reside in NVRAM, while the disk segment table can be stored in system DRAM to reduce cost. When a system needs to be recovered after a crash, the disk segment table can be reconstructed from the contents of the LRU cache and the log disk. The log disk can be either a separate disk, for better I/O performance, or just a logical partition of the data disks, to reduce cost.

The LRU cache records recently used data that arrive from the upper layer via write requests. It contains a hash table, a number of hash entries, and data blocks to store the data. The hash table is indexed by the data's LBA (logical block address). The total cache size is configurable in our implementation and can range from several MB to several hundred MB.

The segment buffers are used to construct log segments before they are written to the log disk. The number of segment buffers is configurable in RAPID-Cache, usually between two and eight. More segment buffers allow faster destaging of data to the log disk but require more costly NVRAM. Furthermore, the speed of moving data from the segment buffers to the log disk is limited by the log disk bandwidth. Currently, the number of segment buffers in our RAPID-Cache is eight, and the size of each segment buffer is 32KB.
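The LBA-indexed hash lookup described above can be sketched as follows. This is a minimal userspace illustration of the idea, not the driver's actual code; the names, the bucket count, and the omission of the LRU eviction list are our simplifying assumptions.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch of the backup cache's NVRAM index: a hash table
 * keyed by logical block address (LBA), one 1KB data block per entry.
 * A real LRU cache would also keep a recency list for eviction. */
#define HASH_BUCKETS 1024
#define BLOCK_SIZE   1024

struct cache_entry {
    uint32_t lba;                   /* logical block address (key) */
    unsigned char data[BLOCK_SIZE]; /* cached block contents */
    struct cache_entry *next;       /* hash-chain link */
};

static struct cache_entry *buckets[HASH_BUCKETS];

static unsigned hash_lba(uint32_t lba) { return lba % HASH_BUCKETS; }

/* Look up an LBA; returns the entry, or NULL on a miss. */
struct cache_entry *cache_lookup(uint32_t lba)
{
    struct cache_entry *e = buckets[hash_lba(lba)];
    while (e && e->lba != lba)
        e = e->next;
    return e;
}

/* Insert a block; a repeated write to the same LBA overwrites in
 * place, which is how overwrites coalesce in the write cache. */
void cache_insert(uint32_t lba, const unsigned char *data)
{
    struct cache_entry *e = cache_lookup(lba);
    if (!e) {
        e = malloc(sizeof(*e));
        if (!e)
            return;
        e->lba = lba;
        e->next = buckets[hash_lba(lba)];
        buckets[hash_lba(lba)] = e;
    }
    memcpy(e->data, data, BLOCK_SIZE);
}
```

An overwrite of a cached LBA therefore costs one lookup and one memcpy, with no new entry allocated.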
Each segment buffer contains 31 1KB data slots and a header recording the LBAs of the data in the slots. The size of a segment buffer can also be configured to 64KB or 128KB, in which case the number of data slots becomes 63 or 127, respectively.

The log disk stores the less frequently accessed data in the backup cache. Data on it are organized into segments, similar to a Log-structured File System such as the Sprite LFS [7] and the BSD LFS [8]. Each disk segment has the same structure as a segment buffer. A reserved area at the beginning of the log disk stores metadata about the log disk, including its size, how many disk segments it can hold, which segment is the next available one, and so on. To speed up metadata updates, a copy of the metadata is maintained in the NVRAM. During normal operation, only the copy in the NVRAM is updated; the metadata on the log disk is synchronized from the NVRAM copy when needed.

The disk segment table contains information about the log segments on the log disk. For each segment on the log disk, it has a corresponding entry recording the LBAs of all data slots in that segment. The entry also has a bitmap with one bit per data slot to indicate whether the slot's data is valid.

Figure 2. Backup cache detailed structure.

2.2 Operations

The operations performed on RAPID-Cache are Read, Write, Destage, and Garbage Collection. There is also a recovery operation, executed only during system reconstruction. We have implemented all of these operations; more detailed descriptions can be found in [1].

2.3 Interfaces with Linux

RAPID-Cache can be integrated with an existing Linux system in many different ways. We could modify the Linux kernel source code directly, or make it a stand-alone kernel module. It could also be implemented at different kernel layers: the file system layer, the block device layer, or even the lower storage device driver layer. After carefully examining the Linux kernel structure, especially the Linux md and LVM drivers [9], we decided to build RAPID-Cache as a stand-alone device driver in the block device layer. It uses one or several real disk partitions as its data device and another disk partition as its log disk, and exports itself as a virtual disk-like block device to upper file systems. After loading the RAPID-Cache module into the kernel, we can simply build file systems on it with mkfs and perform I/O operations as if on a real disk device.

This implementation has several apparent benefits. First, since it is a stand-alone device driver, it can be installed easily under Linux without recompiling the kernel, and it can be migrated to other kernel versions with little modification. Second, because it is built in the block device layer, without any modification to upper file system drivers or lower storage device drivers, it works with all kinds of file systems and storage devices, greatly broadening its usability. Third, since RAPID-Cache uses existing partitions as its data device and can be loaded dynamically without any modification to a partition's layout or the data on it, it provides immediate performance improvement at very low cost.

3. Performance Evaluation

To observe how well RAPID-Cache performs, we carry out performance evaluation by means of measurements.
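The request routing performed by the virtual block device of Section 2.3 can be sketched in userspace as below. This is a hedged illustration under simplifying assumptions (a direct-mapped presence map standing in for the real cache, illustrative names); it only shows the routing decision, not the kernel driver itself.

```c
#include <stdint.h>

/* Userspace sketch of RAPID-Cache request routing: writes complete in
 * the caches (primary copy plus backup copy, destaged to the log disk
 * later); reads are served from the primary cache when possible and
 * only misses go to the data disk.  Names are illustrative. */
#define CACHE_SLOTS 4096

enum req_type { REQ_READ, REQ_WRITE };

static uint8_t cached[CACHE_SLOTS]; /* crude primary-cache presence map */
static int disk_reads;              /* reads forwarded to the data disk */
static int pending_destages;        /* writes queued for the log disk   */

void handle_request(enum req_type type, uint32_t lba)
{
    uint32_t slot = lba % CACHE_SLOTS;  /* direct-mapped for brevity */
    if (type == REQ_WRITE) {
        cached[slot] = 1;   /* copy into the primary unified cache */
        pending_destages++; /* backup copy: NVRAM buffer -> log disk later */
    } else if (!cached[slot]) {
        disk_reads++;       /* primary-cache miss: forward to data disk */
    }
    /* read hits are served entirely from the cache */
}
```

The key property the sketch captures is that a write never stalls on the data disk, while a read touches the disk only on a primary-cache miss.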
We concentrate on measuring overall system performance under several different circumstances.

3.1 Experimental Setup

Like other operating systems, the Linux file system provides two operation modes to satisfy different reliability and performance requirements: asynchronous mode, using write-back, and synchronous mode, using write-through. Although write-through mode provides much higher reliability than write-back mode, it has much lower system throughput, especially when handling small writes. Since RAPID-Cache uses NVRAM as its primary unified cache and also provides full redundancy, it can deliver the same reliability as the original system when both run in synchronous mode, with much better performance. We run RAPID-Cache in both asynchronous and synchronous mode to evaluate system throughput.

We have chosen five target system configurations, listed in Table 1. We use two different RAPID-Cache configurations: one has the same total cache buffer size as the single-copy or dual-copy unified cache, and the other has the same primary cache size as the single-copy unified cache.

Denotation   System RAM (MB)   Cache Buffer (MB)   Meaning
256/-        256               -                   Basic system
192/64       192               64                  Single-copy unified cache
192/32+32    192               32+32               Dual-copy unified cache
192/56+8     192               56+8                RAPID-Cache
184/64+8     184               64+8                RAPID-Cache

Table 1. Measurement target configurations.

CPU: Pentium III 866MHz, 256KB L2 cache. Memory: 256MB PC133 ECC. Hard disk: Maxtor 5T010H1, ATA-5, 10.2GB, 2MB buffer, 7200RPM, average seek time < 8.7ms, average latency 4.17ms, data transfer rate (to/from media) up to 57MBytes/sec [9].

Table 2. Test environment parameters.

Table 2 shows the configuration of the test machine. We run all tests under Red Hat Linux 7.1 with a 2.4 kernel. We also added some internal counters to observe the dynamic behavior of the cache program:

- Read_Request and Write_Request are the numbers of read and write requests the file system sends to the cache.
- Read_Hit is the number of cache read hits; it is increased each time data is found in the cache on a read.
- Write_Hit is the number of cache hits for write operations; a write hit means the whole block containing the written data is in the cache.
- Write_Hold is the number of times we find an empty entry to hold an incoming write request, even though it is not a write hit.

Since either a Write Hit or a Write Hold eliminates a real write I/O operation to the data disk, we define

    WriteHitRatio = (Write_Hit + Write_Hold) / Write_Request × 100%

3.2 Benchmark

The benchmark program used in our tests is PostMark [10], a popular file system benchmark developed by Network Appliance Corp. It measures performance in terms of transaction rates in an ephemeral small-file environment by creating a large pool of continually changing files. PostMark generates an initial pool of random text files ranging in size from a configurable low bound to a configurable high bound. This file pool is of configurable size and can be located on any accessible file system. Once the pool has been created, a specified number of transactions occur. Each transaction consists of a pair of smaller transactions, i.e., Create file or Delete file, and Read file or Append file. Each transaction's type and the files it affects are chosen randomly, and the read and write block sizes can be tuned. On completion of each run, a report is generated with metrics such as elapsed time, transaction rate, and total number of files created.

In our measurements, we run PostMark in several configurations, ranging from the smallest pool with 10,000 initial files and 10,000 transactions to the largest with 10,000 initial files and 35,000 transactions.
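The PostMark transaction structure described above can be sketched as follows. This is our reading of the benchmark's behavior for illustration only, not PostMark's actual source; the function and type names are ours.

```c
#include <stdlib.h>

/* Each PostMark transaction pairs a create-or-delete with a
 * read-or-append, each chosen at random over the file pool. */
struct tx_counts { int creates, deletes, reads, appends; };

struct tx_counts run_transactions(int n, unsigned seed)
{
    struct tx_counts c = {0, 0, 0, 0};
    srand(seed);
    for (int i = 0; i < n; i++) {
        /* first half of the pair: create or delete a file */
        if (rand() % 2) c.creates++; else c.deletes++;
        /* second half of the pair: read or append to a file */
        if (rand() % 2) c.reads++;   else c.appends++;
    }
    return c;
}
```

Because every transaction performs both halves of the pair, n transactions always generate n create/delete operations and n read/append operations.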
The total data set accessed is 695.2MB (313.79MB read and 381.41MB write) for the smallest configuration and grows with the number of transactions; in every case it is much larger than the 256MB system memory and the cache memory size. All other PostMark parameters are left unchanged; the default read/write block size is 1KB.

3.3 Measurement Results

3.3.1 Asynchronous Mode

Our first experiment measures the overall system performance of the five configurations in asynchronous mode. In asynchronous mode, the file system acknowledges write completion to the host as soon as the data is written to the file system cache, without waiting for disk operations; this is similar to copy-back in cache memory terminology. Figure 3 shows the measured PostMark throughput, in transactions per second, for different request pools.

Figure 3. System I/O performance measured by PostMark using small pools (throughput in tps for 10k-35k transactions; configurations 256/-, 192/64, 192/32+32, 192/56+8, and 184/64+8).

From Figure 3 we can see that the 64MB single unified cache performs best. However, as mentioned in the introduction, a single write cache compromises system reliability since it creates a single point of failure. This is particularly true for RAID systems: not only are disks more reliable than RAM, but all modern RAID systems also provide data redundancy through parity disks for fault tolerance. If we use only a single write cache, it becomes the most critical component and compromises system reliability. Modern disk systems therefore use dual-copy write caches to guarantee reliability. From Figure 3, we can see that both RAPID-Cache configurations perform better than the dual-copy cache configuration, with up to 55% performance gain observed. Compared to the basic system, both RAPID-Cache configurations improve performance by a factor of 2. We can expect larger performance gains if we use separate memory for caching instead of the system's memory.

Table 3 lists the statistics collected in the experiment using the 10,000-transaction data set. From this table we can see that the number of read requests is much smaller than the number of write requests, implying that the file system cache did a very good job of caching reads. It is very interesting to note that almost all read requests filtered out of the file system cache also miss the disk cache and go to the data disk. This is what we expected, because data not present in the file system cache is not frequently used and is very likely to have been destaged from the disk cache to the disks. Some of the read data are also metadata of the data disk that must be read during the measurement. We also noticed that the number of read requests for the 192/32+32 configuration is slightly larger than for the other three configurations. We speculate that the reason is its higher miss ratio for write requests: a higher miss ratio gives rise to more disk operations and therefore more metadata operations. Observing the write hit ratios of the four cache configurations, we notice that the RAPID-Cache 192/56+8 configuration has about a 9% lower hit ratio than the single write cache. After making the primary unified cache the same size as the single-copy unified cache, the hit ratio comes close to the single-copy case (89% vs. 88.5%), resulting in similar performance with extra full duplicate redundancy.
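The write hit ratio used in the comparison above is the one defined in Section 3.1; as a concrete illustration, it can be computed from the internal counters as follows (a minimal sketch; the function name is ours).

```c
/* Write hit ratio: both write hits and write holds avoid a real
 * write I/O to the data disk, so the ratio counts them together. */
double write_hit_ratio(long write_hit, long write_hold, long write_request)
{
    if (write_request == 0)
        return 0.0;  /* no write requests observed yet */
    return 100.0 * (double)(write_hit + write_hold)
                 / (double)write_request;
}
```

For example, 80 hits plus 9 holds out of 100 write requests gives the 89% figure of the 184/64+8 configuration.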
In other words, the RAPID-Cache architecture achieves similar system performance while providing full redundancy, using exactly the same hardware resources as a single-copy unified cache.

Table 3. Cache internal counter results for the small pool (Read_Request, Read_Hit, Write_Request, Write_Hit, Write_Hold, and Write Hit Ratio for configurations 192/64, 192/32+32, 192/56+8, and 184/64+8).

Figure 4a. Throughput with different transaction counts (10k-40k) for configurations 192/64, 192/32+32, 192/56+8, and 184/64+8. Figure 4b. Write request miss count for the same configurations and transaction counts.

Table 4. Cache internal counter results for the small pool in synchronous mode (same counters and configurations as Table 3).

We also noticed throughput differences between the two RAPID-Cache configurations as a result of their different unified cache sizes, as shown in Figure 4. The results show that RAPID-Cache with configuration 184/64+8 always performs better than 192/56+8, except for a small number of transactions (10k). This performance difference can be attributed to the following facts. Reducing the file system cache from 192MB to 184MB results in more file system cache misses; as a result, our 64MB primary unified cache receives more requests coming out of the file system cache. However, the total number of misses from the primary cache is reduced rather than increased, as shown in Figure 4. In other words, although the total number of requests to the primary cache increases, the actual number of write requests that go to the disk is reduced because of the additional 8MB of write cache. This result indicates that our write cache does a much better job than the traditional file system cache in handling write requests. It is also the reason why configuration 192/64 performs much better than configuration 256/- in Figure 3.

3.3.2 Synchronous Mode

After evaluating overall system performance in asynchronous mode, our next experiment checks how well each cache configuration performs in synchronous mode, which provides higher reliability. After mounting the virtual disk-like device in synchronous mode, we run PostMark with the smallest pool to measure the performance of the different cache configurations. The throughput results are shown in Figure 5, and the internal statistics counter values are listed in Table 4.

Figure 5. Cache performance measured by PostMark using the small pool (throughput in tps for the five configurations in synchronous mode).

Figure 5 and Table 4 clearly show that all cache configurations perform much better than the basic configuration. Both RAPID-Cache configurations show about a six-fold (600%) performance gain over the original Linux system, indicating that our cache algorithm works very well. It is important to note that, to obtain the same system reliability, the 64MB unified cache of configuration 192/64 would have to use NVRAM as opposed to standard DRAM, which increases system cost. With the RAPID-Cache configurations, however, only the 8MB buffer needs to be NVRAM, because of the log disk right below the RAM buffer. With full duplicate redundancy, RAPID-Cache has much higher reliability than the baseline Linux system while at the same time achieving six times better performance.

Two noticeable changes in Table 4 compared to Table 3 are the hit ratio and the number of write requests. The high hit ratio of the unified cache implies the high efficiency of our cache algorithm. The reason the ratios are higher than in the asynchronous case is as follows. In synchronous mode, write data are not cached in the file system cache, which means all write requests pass through it. As a result, data locality is caught

by the unified write cache, as opposed to the file system cache in asynchronous mode. As for the Write_Request count, which changes from about 3k in asynchronous mode to over 1.5G in synchronous mode, the additional requests are mainly metadata operations. In the Linux ext2 file system, metadata such as block group descriptors and inodes record information about the file system [11]. A block group descriptor records the inode bitmap, the data block bitmap, counts of free inodes and data blocks, etc. Each inode records the file name, size, time modified, etc. For example, an append operation requests free data blocks from the file system, modifies the count of free data blocks and the data block bitmap in the block group descriptor, modifies the file size and modification time in the inode, and writes the data to a data block. This sequence results in many metadata operations, particularly for small data files. Creating and deleting files also generate large numbers of metadata modifications. Since we mount the file system in synchronous mode, all these operations go to disk instead of being cached in memory, resulting in the large number of write requests seen by the disk.

4. Conclusions

In this paper, we have presented our implementation of RAPID-Cache and carried out a performance evaluation based on the implementation. The measured results show great performance improvement compared with the original Linux system without RAPID-Cache. Compared with single-copy and dual-copy unified caches, RAPID-Cache provides better performance and reliability at low cost.

Acknowledgements

This research is supported in part by the National Science Foundation under grants MIP and CCR. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. The authors would like to thank the anonymous reviewers for their many helpful comments and suggestions.

References

[1] Y. Hu, Q. Yang, and T. Nightingale, "RAPID-Cache: a Reliable and Inexpensive Write Cache for High Performance Storage Systems," IEEE Transactions on Parallel and Distributed Systems, Vol. 13, No. 2, February 2002.
[2] J. Menon and J. Cortney, "The architecture of a fault-tolerant cached RAID controller," in Proceedings of the 20th Annual International Symposium on Computer Architecture, San Diego, California, May 1993.
[3] K. Treiber and J. Menon, "Simulation study of cached RAID5 designs," in Proceedings of the Int'l Symposium on High Performance Computer Architecture, Raleigh, North Carolina, January 1995.
[4] P. M. Chen, E. K. Lee, G. A. Gibson, R. H. Katz, and D. A. Patterson, "RAID: High-performance, reliable secondary storage," ACM Computing Surveys, Vol. 26, No. 2, pp. 145-185, June 1994.
[5] Y. Hu and Q. Yang, "DCD -- disk caching disk: A new approach for boosting I/O performance," in Proceedings of the 23rd International Symposium on Computer Architecture, Philadelphia, Pennsylvania, May 1996.
[6] X. He and Q. Yang, "VC-RAID: A Large Virtual NVRAM Cache for Software Do-it-yourself RAID," in Proceedings of the International Symposium on Information Systems and Engineering (ISE'2001), June 2001.
[7] J. Ousterhout and F. Douglis, "Beating the I/O bottleneck: A case for log-structured file systems," Technical Report, Computer Science Division, EECS, University of California at Berkeley, October 1988.
[8] M. Rosenblum and J. Ousterhout, "The design and implementation of a log-structured file system," ACM Transactions on Computer Systems, Vol. 10, No. 1, pp. 26-52, February 1992.
[9] Hard Disk Drive Specifications, Models 5T060H6, 5T040H4, 5T030H3, 5T020H2, 5T010H1, Maxtor.
[10] J. Katcher, "PostMark: A New File System Benchmark," Technical Report TR3022, Network Appliance.
[11] M. Beck and H. Böhme, Linux Kernel Internals, 2nd Edition, Addison-Wesley.


More information

Chapter 11: Implementing File Systems

Chapter 11: Implementing File Systems Chapter 11: Implementing File Systems Operating System Concepts 99h Edition DM510-14 Chapter 11: Implementing File Systems File-System Structure File-System Implementation Directory Implementation Allocation

More information

Chapter 10: Mass-Storage Systems. Operating System Concepts 9 th Edition

Chapter 10: Mass-Storage Systems. Operating System Concepts 9 th Edition Chapter 10: Mass-Storage Systems Silberschatz, Galvin and Gagne 2013 Chapter 10: Mass-Storage Systems Overview of Mass Storage Structure Disk Structure Disk Attachment Disk Scheduling Disk Management Swap-Space

More information

Contents. Memory System Overview Cache Memory. Internal Memory. Virtual Memory. Memory Hierarchy. Registers In CPU Internal or Main memory

Contents. Memory System Overview Cache Memory. Internal Memory. Virtual Memory. Memory Hierarchy. Registers In CPU Internal or Main memory Memory Hierarchy Contents Memory System Overview Cache Memory Internal Memory External Memory Virtual Memory Memory Hierarchy Registers In CPU Internal or Main memory Cache RAM External memory Backing

More information

Chapter Seven. Memories: Review. Exploiting Memory Hierarchy CACHE MEMORY AND VIRTUAL MEMORY

Chapter Seven. Memories: Review. Exploiting Memory Hierarchy CACHE MEMORY AND VIRTUAL MEMORY Chapter Seven CACHE MEMORY AND VIRTUAL MEMORY 1 Memories: Review SRAM: value is stored on a pair of inverting gates very fast but takes up more space than DRAM (4 to 6 transistors) DRAM: value is stored

More information

CSE380 - Operating Systems

CSE380 - Operating Systems CSE380 - Operating Systems Notes for Lecture 17-11/10/05 Matt Blaze, Micah Sherr (some examples by Insup Lee) Implementing File Systems We ve looked at the user view of file systems names, directory structure,

More information

CHAPTER 11: IMPLEMENTING FILE SYSTEMS (COMPACT) By I-Chen Lin Textbook: Operating System Concepts 9th Ed.

CHAPTER 11: IMPLEMENTING FILE SYSTEMS (COMPACT) By I-Chen Lin Textbook: Operating System Concepts 9th Ed. CHAPTER 11: IMPLEMENTING FILE SYSTEMS (COMPACT) By I-Chen Lin Textbook: Operating System Concepts 9th Ed. File-System Structure File structure Logical storage unit Collection of related information File

More information

EIDE Disk Arrays and Its Implement

EIDE Disk Arrays and Its Implement EIDE Disk Arrays and Its Implement Qiong Chen Turku Center for Computer Science, ÅBO Akademi University Turku, Finland Abstract: Along with the information high-speed development, RAID, which has large

More information

ZBD: Using Transparent Compression at the Block Level to Increase Storage Space Efficiency

ZBD: Using Transparent Compression at the Block Level to Increase Storage Space Efficiency ZBD: Using Transparent Compression at the Block Level to Increase Storage Space Efficiency Thanos Makatos, Yannis Klonatos, Manolis Marazakis, Michail D. Flouris, and Angelos Bilas {mcatos,klonatos,maraz,flouris,bilas}@ics.forth.gr

More information

CISC 7310X. C11: Mass Storage. Hui Chen Department of Computer & Information Science CUNY Brooklyn College. 4/19/2018 CUNY Brooklyn College

CISC 7310X. C11: Mass Storage. Hui Chen Department of Computer & Information Science CUNY Brooklyn College. 4/19/2018 CUNY Brooklyn College CISC 7310X C11: Mass Storage Hui Chen Department of Computer & Information Science CUNY Brooklyn College 4/19/2018 CUNY Brooklyn College 1 Outline Review of memory hierarchy Mass storage devices Reliability

More information

CS3600 SYSTEMS AND NETWORKS

CS3600 SYSTEMS AND NETWORKS CS3600 SYSTEMS AND NETWORKS NORTHEASTERN UNIVERSITY Lecture 11: File System Implementation Prof. Alan Mislove (amislove@ccs.neu.edu) File-System Structure File structure Logical storage unit Collection

More information

CS5460: Operating Systems Lecture 20: File System Reliability

CS5460: Operating Systems Lecture 20: File System Reliability CS5460: Operating Systems Lecture 20: File System Reliability File System Optimizations Modern Historic Technique Disk buffer cache Aggregated disk I/O Prefetching Disk head scheduling Disk interleaving

More information

Enhancements to Linux I/O Scheduling

Enhancements to Linux I/O Scheduling Enhancements to Linux I/O Scheduling Seetharami R. Seelam, UTEP Rodrigo Romero, UTEP Patricia J. Teller, UTEP William Buros, IBM-Austin 21 July 2005 Linux Symposium 2005 1 Introduction Dynamic Adaptability

More information

A Comparison of File. D. Roselli, J. R. Lorch, T. E. Anderson Proc USENIX Annual Technical Conference

A Comparison of File. D. Roselli, J. R. Lorch, T. E. Anderson Proc USENIX Annual Technical Conference A Comparison of File System Workloads D. Roselli, J. R. Lorch, T. E. Anderson Proc. 2000 USENIX Annual Technical Conference File System Performance Integral component of overall system performance Optimised

More information

Using Transparent Compression to Improve SSD-based I/O Caches

Using Transparent Compression to Improve SSD-based I/O Caches Using Transparent Compression to Improve SSD-based I/O Caches Thanos Makatos, Yannis Klonatos, Manolis Marazakis, Michail D. Flouris, and Angelos Bilas {mcatos,klonatos,maraz,flouris,bilas}@ics.forth.gr

More information

Chapter 11: File System Implementation. Objectives

Chapter 11: File System Implementation. Objectives Chapter 11: File System Implementation Objectives To describe the details of implementing local file systems and directory structures To describe the implementation of remote file systems To discuss block

More information

Introducing SCSI-To-IP Cache for Storage Area Networks

Introducing SCSI-To-IP Cache for Storage Area Networks Introducing SCSI-To-IP Cache for Storage Area Networks Xubin He, Qing Yang, and Ming Zhang Department of Electrical and Computer Engineering, University of Rhode Island, Kingston, RI 02881 {hexb, qyang,

More information

Today s Papers. Array Reliability. RAID Basics (Two optional papers) EECS 262a Advanced Topics in Computer Systems Lecture 3

Today s Papers. Array Reliability. RAID Basics (Two optional papers) EECS 262a Advanced Topics in Computer Systems Lecture 3 EECS 262a Advanced Topics in Computer Systems Lecture 3 Filesystems (Con t) September 10 th, 2012 John Kubiatowicz and Anthony D. Joseph Electrical Engineering and Computer Sciences University of California,

More information

Chapter 12: File System Implementation

Chapter 12: File System Implementation Chapter 12: File System Implementation Chapter 12: File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management Efficiency

More information

An Efficient Snapshot Technique for Ext3 File System in Linux 2.6

An Efficient Snapshot Technique for Ext3 File System in Linux 2.6 An Efficient Snapshot Technique for Ext3 File System in Linux 2.6 Seungjun Shim*, Woojoong Lee and Chanik Park Department of CSE/GSIT* Pohang University of Science and Technology, Kyungbuk, Republic of

More information

CPE300: Digital System Architecture and Design

CPE300: Digital System Architecture and Design CPE300: Digital System Architecture and Design Fall 2011 MW 17:30-18:45 CBC C316 Virtual Memory 11282011 http://www.egr.unlv.edu/~b1morris/cpe300/ 2 Outline Review Cache Virtual Memory Projects 3 Memory

More information

HP AutoRAID (Lecture 5, cs262a)

HP AutoRAID (Lecture 5, cs262a) HP AutoRAID (Lecture 5, cs262a) Ali Ghodsi and Ion Stoica, UC Berkeley January 31, 2018 (based on slide from John Kubiatowicz, UC Berkeley) Array Reliability Reliability of N disks = Reliability of 1 Disk

More information

Virtual Memory. Patterson & Hennessey Chapter 5 ELEC 5200/6200 1

Virtual Memory. Patterson & Hennessey Chapter 5 ELEC 5200/6200 1 Virtual Memory Patterson & Hennessey Chapter 5 ELEC 5200/6200 1 Virtual Memory Use main memory as a cache for secondary (disk) storage Managed jointly by CPU hardware and the operating system (OS) Programs

More information

Lecture 2: Memory Systems

Lecture 2: Memory Systems Lecture 2: Memory Systems Basic components Memory hierarchy Cache memory Virtual Memory Zebo Peng, IDA, LiTH Many Different Technologies Zebo Peng, IDA, LiTH 2 Internal and External Memories CPU Date transfer

More information

Da-Wei Chang CSIE.NCKU. Professor Hao-Ren Ke, National Chiao Tung University Professor Hsung-Pin Chang, National Chung Hsing University

Da-Wei Chang CSIE.NCKU. Professor Hao-Ren Ke, National Chiao Tung University Professor Hsung-Pin Chang, National Chung Hsing University Chapter 11 Implementing File System Da-Wei Chang CSIE.NCKU Source: Professor Hao-Ren Ke, National Chiao Tung University Professor Hsung-Pin Chang, National Chung Hsing University Outline File-System Structure

More information

Design and Implementation of a Random Access File System for NVRAM

Design and Implementation of a Random Access File System for NVRAM This article has been accepted and published on J-STAGE in advance of copyediting. Content is final as presented. IEICE Electronics Express, Vol.* No.*,*-* Design and Implementation of a Random Access

More information

I/O CANNOT BE IGNORED

I/O CANNOT BE IGNORED LECTURE 13 I/O I/O CANNOT BE IGNORED Assume a program requires 100 seconds, 90 seconds for main memory, 10 seconds for I/O. Assume main memory access improves by ~10% per year and I/O remains the same.

More information

Chapter 10: File System Implementation

Chapter 10: File System Implementation Chapter 10: File System Implementation Chapter 10: File System Implementation File-System Structure" File-System Implementation " Directory Implementation" Allocation Methods" Free-Space Management " Efficiency

More information

Storage Devices for Database Systems

Storage Devices for Database Systems Storage Devices for Database Systems 5DV120 Database System Principles Umeå University Department of Computing Science Stephen J. Hegner hegner@cs.umu.se http://www.cs.umu.se/~hegner Storage Devices for

More information

Cascade Mapping: Optimizing Memory Efficiency for Flash-based Key-value Caching

Cascade Mapping: Optimizing Memory Efficiency for Flash-based Key-value Caching Cascade Mapping: Optimizing Memory Efficiency for Flash-based Key-value Caching Kefei Wang and Feng Chen Louisiana State University SoCC '18 Carlsbad, CA Key-value Systems in Internet Services Key-value

More information

The Google File System

The Google File System October 13, 2010 Based on: S. Ghemawat, H. Gobioff, and S.-T. Leung: The Google file system, in Proceedings ACM SOSP 2003, Lake George, NY, USA, October 2003. 1 Assumptions Interface Architecture Single

More information

Disk scheduling Disk reliability Tertiary storage Swap space management Linux swap space management

Disk scheduling Disk reliability Tertiary storage Swap space management Linux swap space management Lecture Overview Mass storage devices Disk scheduling Disk reliability Tertiary storage Swap space management Linux swap space management Operating Systems - June 28, 2001 Disk Structure Disk drives are

More information

OPERATING SYSTEMS II DPL. ING. CIPRIAN PUNGILĂ, PHD.

OPERATING SYSTEMS II DPL. ING. CIPRIAN PUNGILĂ, PHD. OPERATING SYSTEMS II DPL. ING. CIPRIAN PUNGILĂ, PHD. File System Implementation FILES. DIRECTORIES (FOLDERS). FILE SYSTEM PROTECTION. B I B L I O G R A P H Y 1. S I L B E R S C H AT Z, G A L V I N, A N

More information

Chapter 6. Storage and Other I/O Topics

Chapter 6. Storage and Other I/O Topics Chapter 6 Storage and Other I/O Topics Introduction I/O devices can be characterized by Behaviour: input, output, storage Partner: human or machine Data rate: bytes/sec, transfers/sec I/O bus connections

More information

MODERN FILESYSTEM PERFORMANCE IN LOCAL MULTI-DISK STORAGE SPACE CONFIGURATION

MODERN FILESYSTEM PERFORMANCE IN LOCAL MULTI-DISK STORAGE SPACE CONFIGURATION INFORMATION SYSTEMS IN MANAGEMENT Information Systems in Management (2014) Vol. 3 (4) 273 283 MODERN FILESYSTEM PERFORMANCE IN LOCAL MULTI-DISK STORAGE SPACE CONFIGURATION MATEUSZ SMOLIŃSKI Institute of

More information

Chapter 12: File System Implementation. Operating System Concepts 9 th Edition

Chapter 12: File System Implementation. Operating System Concepts 9 th Edition Chapter 12: File System Implementation Silberschatz, Galvin and Gagne 2013 Chapter 12: File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods

More information

Discriminating Hierarchical Storage (DHIS)

Discriminating Hierarchical Storage (DHIS) Discriminating Hierarchical Storage (DHIS) Chaitanya Yalamanchili, Kiron Vijayasankar, Erez Zadok Stony Brook University Gopalan Sivathanu Google Inc. http://www.fsl.cs.sunysb.edu/ Discriminating Hierarchical

More information

Chapter 12: File System Implementation

Chapter 12: File System Implementation Chapter 12: File System Implementation Silberschatz, Galvin and Gagne 2013 Chapter 12: File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods

More information

V. Mass Storage Systems

V. Mass Storage Systems TDIU25: Operating Systems V. Mass Storage Systems SGG9: chapter 12 o Mass storage: Hard disks, structure, scheduling, RAID Copyright Notice: The lecture notes are mainly based on modifications of the slides

More information

Optimizing Flash-based Key-value Cache Systems

Optimizing Flash-based Key-value Cache Systems Optimizing Flash-based Key-value Cache Systems Zhaoyan Shen, Feng Chen, Yichen Jia, Zili Shao Department of Computing, Hong Kong Polytechnic University Computer Science & Engineering, Louisiana State University

More information

CA485 Ray Walshe Google File System

CA485 Ray Walshe Google File System Google File System Overview Google File System is scalable, distributed file system on inexpensive commodity hardware that provides: Fault Tolerance File system runs on hundreds or thousands of storage

More information

Week 12: File System Implementation

Week 12: File System Implementation Week 12: File System Implementation Sherif Khattab http://www.cs.pitt.edu/~skhattab/cs1550 (slides are from Silberschatz, Galvin and Gagne 2013) Outline File-System Structure File-System Implementation

More information

Caching and reliability

Caching and reliability Caching and reliability Block cache Vs. Latency ~10 ns 1~ ms Access unit Byte (word) Sector Capacity Gigabytes Terabytes Price Expensive Cheap Caching disk contents in RAM Hit ratio h : probability of

More information

HP AutoRAID (Lecture 5, cs262a)

HP AutoRAID (Lecture 5, cs262a) HP AutoRAID (Lecture 5, cs262a) Ion Stoica, UC Berkeley September 13, 2016 (based on presentation from John Kubiatowicz, UC Berkeley) Array Reliability Reliability of N disks = Reliability of 1 Disk N

More information

Chapter 11: Implementing File

Chapter 11: Implementing File Chapter 11: Implementing File Systems Chapter 11: Implementing File Systems File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management Efficiency

More information

Filesystem. Disclaimer: some slides are adopted from book authors slides with permission

Filesystem. Disclaimer: some slides are adopted from book authors slides with permission Filesystem Disclaimer: some slides are adopted from book authors slides with permission 1 Recap Directory A special file contains (inode, filename) mappings Caching Directory cache Accelerate to find inode

More information

File system internals Tanenbaum, Chapter 4. COMP3231 Operating Systems

File system internals Tanenbaum, Chapter 4. COMP3231 Operating Systems File system internals Tanenbaum, Chapter 4 COMP3231 Operating Systems Architecture of the OS storage stack Application File system: Hides physical location of data on the disk Exposes: directory hierarchy,

More information

CSE 451: Operating Systems. Section 10 Project 3 wrap-up, final exam review

CSE 451: Operating Systems. Section 10 Project 3 wrap-up, final exam review CSE 451: Operating Systems Section 10 Project 3 wrap-up, final exam review Final exam review Goal of this section: key concepts you should understand Not just a summary of lectures Slides coverage and

More information

CS 550 Operating Systems Spring File System

CS 550 Operating Systems Spring File System 1 CS 550 Operating Systems Spring 2018 File System 2 OS Abstractions Process: virtualization of CPU Address space: virtualization of memory The above to allow a program to run as if it is in its own private,

More information

Chapter 11: Implementing File Systems. Operating System Concepts 9 9h Edition

Chapter 11: Implementing File Systems. Operating System Concepts 9 9h Edition Chapter 11: Implementing File Systems Operating System Concepts 9 9h Edition Silberschatz, Galvin and Gagne 2013 Chapter 11: Implementing File Systems File-System Structure File-System Implementation Directory

More information

The Google File System

The Google File System The Google File System Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung December 2003 ACM symposium on Operating systems principles Publisher: ACM Nov. 26, 2008 OUTLINE INTRODUCTION DESIGN OVERVIEW

More information

File Systems Management and Examples

File Systems Management and Examples File Systems Management and Examples Today! Efficiency, performance, recovery! Examples Next! Distributed systems Disk space management! Once decided to store a file as sequence of blocks What s the size

More information

Chapter 11: Implementing File Systems

Chapter 11: Implementing File Systems Silberschatz 1 Chapter 11: Implementing File Systems Thursday, November 08, 2007 9:55 PM File system = a system stores files on secondary storage. A disk may have more than one file system. Disk are divided

More information

V. File System. SGG9: chapter 11. Files, directories, sharing FS layers, partitions, allocations, free space. TDIU11: Operating Systems

V. File System. SGG9: chapter 11. Files, directories, sharing FS layers, partitions, allocations, free space. TDIU11: Operating Systems V. File System SGG9: chapter 11 Files, directories, sharing FS layers, partitions, allocations, free space TDIU11: Operating Systems Ahmed Rezine, Linköping University Copyright Notice: The lecture notes

More information

Live Virtual Machine Migration with Efficient Working Set Prediction

Live Virtual Machine Migration with Efficient Working Set Prediction 2011 International Conference on Network and Electronics Engineering IPCSIT vol.11 (2011) (2011) IACSIT Press, Singapore Live Virtual Machine Migration with Efficient Working Set Prediction Ei Phyu Zaw

More information

CPE300: Digital System Architecture and Design

CPE300: Digital System Architecture and Design CPE300: Digital System Architecture and Design Fall 2011 MW 17:30-18:45 CBC C316 Cache 11232011 http://www.egr.unlv.edu/~b1morris/cpe300/ 2 Outline Review Memory Components/Boards Two-Level Memory Hierarchy

More information

Memory Technology. Chapter 5. Principle of Locality. Chapter 5 Large and Fast: Exploiting Memory Hierarchy 1

Memory Technology. Chapter 5. Principle of Locality. Chapter 5 Large and Fast: Exploiting Memory Hierarchy 1 COMPUTER ORGANIZATION AND DESIGN The Hardware/Software Interface Chapter 5 Large and Fast: Exploiting Memory Hierarchy 5 th Edition Memory Technology Static RAM (SRAM) 0.5ns 2.5ns, $2000 $5000 per GB Dynamic

More information

CSE 120: Principles of Operating Systems. Lecture 10. File Systems. February 22, Prof. Joe Pasquale

CSE 120: Principles of Operating Systems. Lecture 10. File Systems. February 22, Prof. Joe Pasquale CSE 120: Principles of Operating Systems Lecture 10 File Systems February 22, 2006 Prof. Joe Pasquale Department of Computer Science and Engineering University of California, San Diego 2006 by Joseph Pasquale

More information

Current Topics in OS Research. So, what s hot?

Current Topics in OS Research. So, what s hot? Current Topics in OS Research COMP7840 OSDI Current OS Research 0 So, what s hot? Operating systems have been around for a long time in many forms for different types of devices It is normally general

More information

Self-Adaptive Two-Dimensional RAID Arrays

Self-Adaptive Two-Dimensional RAID Arrays Self-Adaptive Two-Dimensional RAID Arrays Jehan-François Pâris 1 Dept. of Computer Science University of Houston Houston, T 77204-3010 paris@cs.uh.edu Thomas J. E. Schwarz Dept. of Computer Engineering

More information

UNIT-V MEMORY ORGANIZATION

UNIT-V MEMORY ORGANIZATION UNIT-V MEMORY ORGANIZATION 1 The main memory of a computer is semiconductor memory.the main memory unit is basically consists of two kinds of memory: RAM (RWM):Random access memory; which is volatile in

More information

The UNIX Time- Sharing System

The UNIX Time- Sharing System The UNIX Time- Sharing System Dennis M. Ritchie and Ken Thompson Bell Laboratories Communications of the ACM July 1974, Volume 17, Number 7 UNIX overview Unix is a general-purpose, multi-user, interactive

More information

Bitmap discard operation for the higher utilization of flash memory storage

Bitmap discard operation for the higher utilization of flash memory storage LETTER IEICE Electronics Express, Vol.13, No.2, 1 10 Bitmap discard operation for the higher utilization of flash memory storage Seung-Ho Lim 1a) and Woo Hyun Ahn 2b) 1 Division of Computer and Electronic

More information

CS 152 Computer Architecture and Engineering. Lecture 11 - Virtual Memory and Caches

CS 152 Computer Architecture and Engineering. Lecture 11 - Virtual Memory and Caches CS 152 Computer Architecture and Engineering Lecture 11 - Virtual Memory and Caches Krste Asanovic Electrical Engineering and Computer Sciences University of California at Berkeley http://www.eecs.berkeley.edu/~krste

More information

COS 318: Operating Systems. NSF, Snapshot, Dedup and Review

COS 318: Operating Systems. NSF, Snapshot, Dedup and Review COS 318: Operating Systems NSF, Snapshot, Dedup and Review Topics! NFS! Case Study: NetApp File System! Deduplication storage system! Course review 2 Network File System! Sun introduced NFS v2 in early

More information

DELL EMC DATA DOMAIN SISL SCALING ARCHITECTURE

DELL EMC DATA DOMAIN SISL SCALING ARCHITECTURE WHITEPAPER DELL EMC DATA DOMAIN SISL SCALING ARCHITECTURE A Detailed Review ABSTRACT While tape has been the dominant storage medium for data protection for decades because of its low cost, it is steadily

More information

Embedded Systems Dr. Santanu Chaudhury Department of Electrical Engineering Indian Institute of Technology, Delhi

Embedded Systems Dr. Santanu Chaudhury Department of Electrical Engineering Indian Institute of Technology, Delhi Embedded Systems Dr. Santanu Chaudhury Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 13 Virtual memory and memory management unit In the last class, we had discussed

More information

CS307: Operating Systems

CS307: Operating Systems CS307: Operating Systems Chentao Wu 吴晨涛 Associate Professor Dept. of Computer Science and Engineering Shanghai Jiao Tong University SEIEE Building 3-513 wuct@cs.sjtu.edu.cn Download Lectures ftp://public.sjtu.edu.cn

More information

Characterizing Home Pages 1

Characterizing Home Pages 1 Characterizing Home Pages 1 Xubin He and Qing Yang Dept. of Electrical and Computer Engineering University of Rhode Island Kingston, RI 881, USA Abstract Home pages are very important for any successful

More information

Reliable Computing I

Reliable Computing I Instructor: Mehdi Tahoori Reliable Computing I Lecture 8: Redundant Disk Arrays INSTITUTE OF COMPUTER ENGINEERING (ITEC) CHAIR FOR DEPENDABLE NANO COMPUTING (CDNC) National Research Center of the Helmholtz

More information

File system internals Tanenbaum, Chapter 4. COMP3231 Operating Systems

File system internals Tanenbaum, Chapter 4. COMP3231 Operating Systems File system internals Tanenbaum, Chapter 4 COMP3231 Operating Systems Summary of the FS abstraction User's view Hierarchical structure Arbitrarily-sized files Symbolic file names Contiguous address space

More information

COMPUTER ORGANIZATION AND DESIGN The Hardware/Software Interface. 5 th. Edition. Chapter 5. Large and Fast: Exploiting Memory Hierarchy

COMPUTER ORGANIZATION AND DESIGN The Hardware/Software Interface. 5 th. Edition. Chapter 5. Large and Fast: Exploiting Memory Hierarchy COMPUTER ORGANIZATION AND DESIGN The Hardware/Software Interface 5 th Edition Chapter 5 Large and Fast: Exploiting Memory Hierarchy Principle of Locality Programs access a small proportion of their address

More information