An Efficient LFU-Like Policy for Web Caches

Igor Tatarinov

Abstract

This study proposes the Cubic Selection Scheme (CSS), a new policy for Web caches. The policy is based on the Least Frequently Used (LFU) heuristic but takes into account object size and, possibly, object retrieval costs. We show that CSS can be efficiently implemented and that it results in high cache performance.

1. Introduction

The World Wide Web (WWW) is often considered a major invention of the 1990s. Since its introduction in the early 1990s, the Web has been constantly growing, and so has the load on the Internet and on Web servers. Today, many popular sites serve millions of requests every day. High network/server load is the major factor that negatively affects the response time observed by Web surfers. Caching has proved to be an easy and inexpensive way to make the Web work faster. Web objects (HTML documents, images, multimedia clips, etc.) can be cached at three different levels: on the Web server (a main-memory server cache), in the network (a proxy cache), and on the client (a built-in browser cache).

Data caching in databases and file systems has been the subject of a tremendous number of research studies. Unfortunately, the results of those studies turned out to be inapplicable to Web caching. The traditional cache policies, such as Least Recently Used (LRU) and Least Frequently Used (LFU), which showed good performance in database and file systems, do not perform well for the Web. The main reason is the different granularity of Web data items: Web caches deal with whole objects (of different sizes), whereas database and file system caches deal with data blocks (of the same size). Another important characteristic of Web caching is the possibility of using admission control, i.e., of not caching some requested objects. A file system/database buffer manager always places requested data blocks in the cache (the cache serves as an interface between the storage subsystem and the application).
On the contrary, a Web cache manager may choose not to admit an object if this can increase cache performance (hit ratio).

1.1 Web Cache Performance Metrics

Traditional storage systems use a single cache performance metric: the cache hit ratio, defined as the fraction of requests satisfied from the cache. When used for Web servers, this metric will be called the object hit ratio (OHR). Another performance metric applicable to Web server caches is the byte hit ratio (BHR): the ratio of the number of bytes of data fetched from the cache to the total number of bytes requested. In traditional caches, these two metrics are equivalent because all data items have the same size, the size of the disk block. When object retrieval time is independent of object size, i.e., excessive bandwidth is available and latency dominates (as in a Web server cache), the object hit ratio directly reflects the response time observed by the users. When object

retrieval time is proportional to object size (as in a proxy cache in a low-speed network), the byte hit ratio becomes a better metric. In many real-life situations, a more general approach may be needed. Several recent studies [CI97, S+97, WA97] proposed integrating an object retrieval cost formula into the cache management policy. This results in new performance metrics, e.g., the delay savings ratio [S+97]. It is, therefore, essential that a Web cache policy be easily adjustable to different performance metrics (retrieval costs). CSS, the cache policy proposed in this study, can easily account for various retrieval costs.

1.2 Previous Work: the Early Years

[W+96] proposed a simple taxonomy of Web cache policies. Cache policies are viewed as sorting problems that vary in the sorting key used. The following basic object attributes (sorting keys) can be used: object size, the last access time (last access recency), and the number of accesses. A combination of attributes can be used to select a victim. For example, SIZE (LRU-SIZE) removes the largest object from the cache first; objects of the same size are removed in LRU order.

This approach of using a combination of one or more basic object parameters as a sorting key may seem universal. It has a serious problem, however. If the first key parameter says that object A is a better victim than object B, it does not matter how much better object B might be by the second criterion: object B will be removed from the cache before object A. For example, suppose object A is 100 bytes and was last accessed, say, a week ago, while object B is only 1 byte larger (101 bytes) but was accessed only a second ago. If an LRU-SIZE cache manager needed to choose which of the two objects to replace first, it would replace object B only because it is 1 byte larger. This problem can be partially solved by applying the integer logarithm function to SIZE (see [W+96] for details).
(LRU-MIN) LRU-SIZE tends to discard large objects even when only a small amount of free space is needed to accommodate a new object. LRU-MIN [W+96] is a more efficient version of LRU-SIZE that takes the size of the incoming object into account. LRU-MIN first tests whether there are any objects equal to or larger than the incoming object. If so, one of them is replaced using LRU. Otherwise, all objects larger than half the size of the incoming object are considered; if such objects exist in the cache, one or more of them are removed by LRU. If not, the procedure is repeated using one quarter of the object size, and so on. This algorithm yields better cache performance but incurs more CPU overhead than the simpler LRU-SIZE. LRU-MIN, as well as LRU-SIZE, has one common problem: small objects may never be replaced, regardless of how long ago they were last accessed.

(LRU-TH) LRU-Threshold (LRU-TH), proposed in [AS+95], avoids the above problem. LRU-TH requires a parameter, the threshold, that determines the maximum size of an object that can be cached. Other than that, it is equivalent to pure LRU. The OHR performance of LRU-TH is comparable to that of LRU-MIN in many cases. However, it depends on the value of the threshold, which cannot be optimized a priori. Additionally, LRU-TH results in a very low byte hit ratio, which makes this policy unattractive for use in Web proxies.
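The LRU-MIN victim-selection loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cache representation (an `OrderedDict` of object id to size, oldest first) and the function name are assumptions.

```python
from collections import OrderedDict

def lru_min_victims(cache, incoming_size):
    """LRU-MIN sketch: try to free `incoming_size` bytes.

    `cache` maps object id -> size, in LRU order (oldest first).
    First consider objects at least as large as the newcomer, then
    halve the size threshold until enough space is found.
    """
    victims, freed = [], 0
    threshold = incoming_size
    while threshold >= 1:
        # Scan objects in LRU order, taking only those at/above the
        # current size threshold.
        for oid, size in cache.items():
            if oid in victims:
                continue
            if size >= threshold:
                victims.append(oid)
                freed += size
                if freed >= incoming_size:
                    return victims
        threshold //= 2  # halve the size threshold and retry
    return victims  # may still fall short if the cache is tiny
```

Note how a single sufficiently large object is preferred (first pass), and only when none exists does the policy start collecting several smaller ones, which is exactly why LRU-MIN spares large objects better than LRU-SIZE.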

1.3 The Knapsack Approach to Web Caching

Several studies [AS+96, S+97] independently pointed out that the task of maximizing the expected performance of a Web cache can be considered a knapsack problem. Indeed, if we assume that caching an object has a certain benefit for the cache manager (future accesses to the object will be hits) and that each object has a weight (its size), then the cache manager has to maximize the total expected benefit from the cache given that the total weight of the cached objects cannot exceed the capacity of the knapsack (the cache size). More formally, let D be the set of all objects stored on the Web server, and let C, C ⊆ D, be the set of cached objects. The task of the cache manager is to find the solution C to the following knapsack problem:

max Σ_{d∈C} benefit(d)  such that  Σ_{d∈C} size(d) ≤ CacheSize.

The benefit function determines the performance metric that is being optimized. It may also be adjusted (divided) by the cost of object retrieval [S+97]. The knapsack problem is NP-hard. However, an approximate solution can be found by putting items in the cache in value order, where value = benefit / weight. The approximate solution happens to be very precise when the number of items is large. By organizing object entries as a heap, the task of handling a request (hit or miss) can be performed in O(log n) operations, where n is the number of objects in the cache [S+97]. CSS, on the other hand, requires a constant number of operations to handle a hit (only one object entry is re-queued, see below).

(SLRU) SLRU (Self-adjusted LRU) is proposed in [AE+96]. This is a knapsack policy that defines object benefit as the ratio 1/(t − t_last), where t is the current time and t_last is the last object access time (second to last, if the object is being accessed at moment t). SLRU results in a high cache hit rate, but the algorithm is computationally very expensive since sorting has to be performed on every cache miss.
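The heap-based value-ordering approximation can be sketched as follows, using value = access counter / size as the benefit-per-weight ratio (this particular benefit function, and all class and method names, are illustrative assumptions, not the paper's). Stale heap entries are discarded lazily, a standard trick for priority queues without a decrease-key operation.

```python
import heapq

class ValueCache:
    """Approximate-knapsack cache: evict lowest value = hits/size first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.hits = {}    # oid -> access counter
        self.sizes = {}   # oid -> size
        self.heap = []    # (value, oid) entries, possibly stale

    def _value(self, oid):
        return self.hits[oid] / self.sizes[oid]

    def access(self, oid, size):
        """Return True on a hit, False on a miss (admitted or not)."""
        if oid in self.sizes:                  # hit: bump counter, re-queue
            self.hits[oid] += 1
            heapq.heappush(self.heap, (self._value(oid), oid))
            return True
        while self.used + size > self.capacity and self.heap:
            value, victim = heapq.heappop(self.heap)
            # Skip entries whose value is stale (object re-queued since).
            if victim in self.sizes and value == self._value(victim):
                self.used -= self.sizes[victim]
                del self.sizes[victim], self.hits[victim]
        if self.used + size <= self.capacity:  # admit only if room was freed
            self.sizes[oid], self.hits[oid] = size, 1
            self.used += size
            heapq.heappush(self.heap, (self._value(oid), oid))
        return False
```

Both hit and miss handling cost O(log n) amortized per request here, matching the bound cited above; the constant-time hit handling claimed for CSS is what this heap-based scheme cannot provide.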
(PSS) The Pyramidal Selection Scheme (PSS) was designed by the authors of SLRU to reap the benefits of the latter while avoiding its high CPU overhead. PSS divides cached objects into groups based on their log2 SIZE value. Each group is maintained as an LRU queue, and only the head elements of the queues can be chosen as victims. Each time a replacement has to be made, a head element with the lowest value (see SLRU's value formula) is ejected from the cache. Note that SLRU cannot be implemented using an ordinary priority queue because the object value function (non-linearly) includes the current time.

(Static caching) [TRS97] proposes static caching, a different approach to Web caching. In static caching, the set of cached objects is updated periodically. The Web server's request log is used to compute the objects that should be cached during the following period. For example, if the period is one day, the values of all accessed objects are computed at the end of each day and the most valuable objects are placed in the cache. No new objects enter the cache until the end of the following day, so the cache policy is indeed static. The object value is defined as the ratio of the number of accesses to the object size. Although the study does not mention it, static caching essentially tries to optimize cache

performance by periodically solving the above knapsack problem. Static caching has very low CPU overhead and, in many cases, outperforms other policies. Its main disadvantage is its slow adaptivity to changes in the workload. A dynamic version of the above cache policy was studied in [T98], where it was referred to as Weighted LFU (WLFU). WLFU can be implemented using a heap-based priority queue as described above.

One serious problem of cache policies based on access counters is their slow adaptivity. An object that has been accessed many times may remain in the cache for a very long time even though it is no longer accessed. Such situations occur frequently on the Web: Web documents containing news are extremely popular on the day they are published but not popular at all on the following days. A simple solution to a similar problem is described in [CI96]. That approach, however, requires that a used-to-be-popular object be requested again for its value to be downgraded. CSS, the policy that we propose, automatically downgrades the values (access counters) of stale objects in the cache regardless of whether those objects are requested again.

1.4 Cache Implementation and Admission Control Issues

Before we describe how CSS works, let us mention that any object cache requires a lookup table, usually implemented as a hash table. Given an object identifier (oid), the lookup table quickly retrieves the object handle associated with the given oid. The object handle contains a pointer to the object's location in the cache and possibly other information, e.g., an access counter. Several research studies [M96, AE+96, T98] pointed out the importance of admission control in Web caches. It has also been observed that efficient admission control requires retaining access statistics for some objects not currently in the cache. Since Web servers typically store a relatively small number of objects (<10,000), Web server caches can afford to keep access statistics for all stored objects.
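A minimal sketch of the lookup table and object handles just described. The field and class names are illustrative assumptions; the key point is that a handle can outlive the cached object itself, so its access statistics remain available for admission control.

```python
class ObjectHandle:
    """Handle stored in the lookup table (fields are illustrative)."""
    __slots__ = ("oid", "size", "access_count", "location")

    def __init__(self, oid, size, location=None):
        self.oid = oid
        self.size = size
        self.access_count = 0
        self.location = location  # pointer into the cache; None if not cached

class LookupTable:
    """Hash-table lookup: oid -> handle.  Handles may be kept even for
    evicted objects so that access statistics survive for admission
    control decisions."""

    def __init__(self):
        self.table = {}

    def handle(self, oid, size=None):
        """Return the handle for `oid`, creating one if `size` is given."""
        h = self.table.get(oid)
        if h is None and size is not None:
            h = self.table[oid] = ObjectHandle(oid, size)
        return h
```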
Web proxies and browsers, on the other hand, may potentially access any object on the Internet. Clearly, a different approach is required in this case. For example, the cache manager can maintain another, auxiliary small cache that contains fixed-size entries. Each entry retains access statistics for a single object (the object itself is not cached). The policy that governs the auxiliary cache may be different from that of the main cache.

2. Cubic Selection Scheme (CSS): an Efficient Implementation of LFU

Our implementation of LFU has a certain resemblance to the way SLRU is implemented in [AE+96]. In that study, a pyramidal selection scheme (PSS) is used to select which objects to eject from the cache. CSS/LFU, being somewhat more complex than PSS/SLRU, results in significantly better performance, especially with smaller cache sizes. CSS is based on a cube-like data structure shown in Figure 1. The cube is formed by a matrix whose elements are groups (queues) of object entries. Each object entry is a pointer to the object's handle in the cache lookup table. Object entries are pictured as smaller cubes within the cube.

Figure 1: The Cubic Selection Scheme. A matrix of LRU queues Q_ij, with rows indexed by i = log2 X (X is an object's access counter value, up to I = log2 MaxX) and columns indexed by j = log2 S (S is an object's size, up to J = log2 CacheSize). Each queue contains objects with the same log2 X and log2 S values.

Each group contains (entries of) the objects that have the same value of log2 S and log2 X, where S is an object's size and X is an object's access counter value. Accordingly, the height of the cube is log2 MaxX + 1, where MaxX is the maximum possible value of an access counter, whereas the width of the cube is log2 CacheSize + 1. (The cubes in our figures have the same width and height only for easier presentation.) The cube does not have a well-defined depth because the object groups may have different sizes (depths). MaxX is used to prevent overflows of objects' access counters. For example, if MaxX = 255 (1 byte is used to store the counter), an access counter is not allowed to increase beyond 255, and the cube has a height of 8.

The object groups within the cube may be implemented in many different ways as long as the following two operations can be performed efficiently. First, it should be possible to remove a given object from a group. Second, it is essential that groups can be quickly merged (appended to one another). (The reason why these two operations are important will be described later.) As can be seen from Figure 1, our implementation of CSS uses LRU queues for the object groups. A doubly-linked LRU queue meets both requirements and also makes the victim-selection algorithm more intelligent.

2.1 How CSS Works

When an object is requested, the cache manager uses the lookup table to get the object's handle. If the object is found, it is sent to the user and its access counter is incremented. The object's entry is then moved to the tail of the same or of a different queue, depending on whether incrementing the counter made the object eligible for the next, higher level in the cube.
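The cube's group indexing and the constant-time hit handling just described can be sketched as follows. This is an assumed illustration (Python dictionaries of `OrderedDict` queues standing in for the matrix of doubly-linked LRU queues); the paper's actual layout uses a fixed matrix.

```python
from collections import OrderedDict

MAX_X = 255  # counters capped at one byte, as in the example above

def level(x):
    """floor(log2 x) for x >= 1: the cube row/column index."""
    return x.bit_length() - 1

class Cube:
    """Matrix of LRU queues indexed by (log2 X, log2 S) -- a CSS sketch."""

    def __init__(self):
        self.groups = {}  # (i, j) -> OrderedDict of oid -> (count, size)

    def _queue(self, i, j):
        return self.groups.setdefault((i, j), OrderedDict())

    def insert(self, oid, count, size):
        self._queue(level(count), level(size))[oid] = (count, size)

    def hit(self, oid, count, size):
        """Re-queue on a hit; promote if the counter crossed a power of 2."""
        old = (level(count), level(size))
        count = min(count + 1, MAX_X)   # never overflow the counter
        new = (level(count), level(size))
        del self._queue(*old)[oid]
        self._queue(*new)[oid] = (count, size)  # tail = most recently used
        return count
```

A hit touches exactly one entry (one delete, one append), which is the constant-time behaviour contrasted with the heap-based O(log n) schemes in Section 1.3.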
Victim Selection. If the requested object is not cached, the cache manager uses conditions (*) to select a set of victims. If no suitable

Figure 2: How CSS selects victims. Queues on a diagonal d = i − j are not considered until all queues on smaller diagonals have been exhausted. Figure 3: One diagonal slice of the queue cube; the top object with the smallest value is removed first on each iteration.

set can be found, the object is passed to the user without being cached. Otherwise, the victims are ejected and the requested object is placed in the cache. To select a set of potential victims, the cache manager scans the cube in a diagonal fashion, as shown in Figures 2 and 3. A higher diagonal is only considered when all lower diagonals have been exhausted. On each diagonal, the top object with the lowest value (access counter to size ratio) is selected first. If not enough space can be freed by removing that object, another top object is selected as a potential victim, and so on until the diagonal is exhausted; then the next higher diagonal is considered. The process stops when enough cache space can be freed for the new object, or when the total benefit of the potential victims outweighs that of the requested object. The latter is an important condition that enforces admission control. This second condition also makes the handling of large objects more efficient: large Web objects are not usually popular (they have a low benefit), so such objects are rejected very quickly, without considering many cached objects.

Cube Splitting. Clearly, the above policy has a problem: if left unmodified, it will eventually result in all objects having the same (MaxX) value of the access counter, and object size will become the only criterion for victim selection. To avoid this problem, object access counters should be split (halved) periodically. The cube allows doing this very efficiently, without actually updating each and every object's counter. Whenever a split is desired, the cube is reorganized by moving all layers (but the lowest) one level down.
As a result, layer 2 queues are appended to the corresponding layer 1 queues, whereas the highest layer becomes empty; see Figure 4. The split operation essentially divides all object access counters by two. The next time an object entry is accessed, its counter has to be adjusted (divided by two). To implement this, a LastSplitTime value is stored in each object entry; if LastSplitTime < CacheSplitTime, the counter and LastSplitTime are adjusted. One may notice that if

Figure 4: A part of a typical cube (log X vs. log S) before and after a split. Numbers in the squares are queue lengths.

an object has not been accessed while the cube has been split twice, the object's access counter will only be split (divided by 2) once. Such small mistakes, however, do not prevent the algorithm from achieving very high performance.

To decide whether a split is necessary, the cache manager maintains the sum of the logarithms of the access counters of all cached objects (Total). Therefore, given a cube, one can compute the value of Total as the sum of LayerNumber × ObjectsOnThatLayer over all layers. For example, the cube in Figure 4 has a Total of 0*12 + 1*5 + 2*10 + 3*0 + 4*15 = 85. When an object is moved one layer up in the cube, Total is incremented by one. When a new object is added to the cache, Total is incremented by the number of the cube layer where the object was put. Whenever the value of Total / #ObjectsInCache (TotalAvg) becomes greater than (log2 MaxX) / 2, i.e., more weight shifts to the upper layers of the cube, the cube is split, and Total is decremented by (#objects in cache − #objects in layer 0). For example, in Figure 4 (left), TotalAvg = 85/42, which is greater than (log2 MaxX) / 2 = 2; hence, a split was necessary. After the split, the value of Total becomes 85 − (42 − 12) = 55.

Auxiliary Cache Operation. In order for CSS to work efficiently, it is essential that an auxiliary cache be maintained. As discussed in Section 1, such a cache should retain statistics for non-cached objects. If no auxiliary cache is maintained, new objects will always have their access counters equal to one and may never enter the cache. Such an auxiliary cache should also split access counters, similarly to the main cache.

3. Simulation Environment

3.1 Characteristics of Traces Used in This Study

For performance analysis, we used trace-driven simulation and request logs (traces) from the following three Web sites: 1) the Computer Science Division of the University of California at Berkeley (UCB), 2) the ClarkNet Internet service provider (this trace was also studied in [AW96]), and 3) the InfoArt content provider. Table 1 shows the characteristics of the studied traces. The data set size represents the total size of unique objects, i.e., the amount of disk space occupied by all requested objects.

Characteristic/Trace         | UCB       | ClarkNet  | InfoArt
Trace start date             | Jan /97   | Aug 28/95 | Dec /96
Trace duration               | 1 month   | 2 weeks   | 3 weeks
Total valid requests         | 2,291,646 | 2,936,945 | 1,493,689
Avg requests/day             | 73,924    | 209,781   | 64,943
Total Mbytes transferred     | 43,572    | 27,567    | ,542
Avg object size (bytes)      | 9,937     | 9,842     | 7,76
Number of unique objects     | 4,24      | 3,575     | 33,42
Data set size, DSS (Mbytes)  |           |           |
Ideal Object Hit Ratio (cache size = DSS) | | |
Ideal Byte Hit Ratio (cache size = DSS)   | | |

Table 1: Trace Characteristics

3.2 Simulation Details

To detect cache hits, we used a technique similar to that described in [W+96]. Objects are identified by their file names (paths). If an object appears in the trace with a new size, it is assumed that the object has been modified. Thus, a cache hit may only occur if the requested object is in the cache and its size has not changed. Each simulation had a one-day warm-up period, after which all statistical counters were reset to zero. Files that had "cgi" in their names were ignored, since CGI scripts often mark their output as non-cacheable.

3.3 Studied Policies and Results

The list of studied cache policies is given in Table 2. Figure 5 shows the performance of the policies.

Policy name | Object attributes used when replacing | when admitting
LRU         | last access time                      | always admits
LFU         | access counter                        | always admits
PSS         | size and last access time             | always admits
CSS, WLFU   | object value                          | object value

Table 2: Studied cache policies.

Both CSS and PSS result in a very high object hit ratio. WLFU is the best policy in terms of the byte hit ratio, which can be explained by the fact that WLFU approximates a knapsack cache policy with a benefit function of object_size × access_counter. Such a benefit function is clearly oriented towards decreasing the number of bytes missed. The resulting value function is equal to the access counter, which is the same as in LFU, except that the latter does not enforce any admission control. CSS seems to be a good policy for InfoArt-like sites (news providers), where it performs better than any other policy.
As was mentioned above, this can be explained by CSS's ability to quickly remove no-longer-popular objects from the cache.

Figure 5: Performance of the cache policies: object hit ratio and byte hit ratio for the ClarkNet, InfoArt, and UCB traces.

References

[AS+95] M. Abrams, C. Stanbridge, G. Abdulla, S. Williams, and E. Fox. Caching Proxies: Limitations and Potentials. In Proc. of the 4th Int'l Conference on WWW, Boston, USA, December 1995. ei.cs.vt.edu/~succeed/www4/www4.html

[AE+96] C. Aggarwal, M. Epelman, J. Wolf, and P. Yu. On Cache Policies for Web Objects. IBM Research Report 269.

[AW96] M. Arlitt and C. Williamson. Internet Web Servers: Workload Characterization and Performance Implications. IEEE/ACM Transactions on Networking, 5(5), 1997. ftp://ftp.cs.usask.ca/pub/discus/paper.96-3.ps.z

[CI97] P. Cao and S. Irani. Cost-Aware WWW Proxy Caching Algorithms. In Proc. of the USENIX Symposium on Internet Technologies and Systems, 1997.

[M96] E.P. Markatos. Main Memory Caching of Web Documents. In Proc. of the 5th Int'l Conference on WWW, Paris, France, May 1996. www5conf.inria.fr/fich_html/papers/p/overview.html

[RD90] J. Robinson and M. Devarakonda. Data Cache Management Using Frequency-Based Replacement. In Proc. of ACM SIGMETRICS, Boulder, CO, USA, May 1990.

[S+97] P. Scheuermann, J. Shim, and R. Vingralek. A Case for Delay-Conscious Caching of Web Documents. In Proc. of the 7th WWW Conf., Brisbane, Australia, April 1998.

[T98] I. Tatarinov. Performance Analysis of Cache Policies for Web Servers. In Proc. of the 9th Intl. Conf. on Computing and Information (ICCI '98), Winnipeg, Canada, June 1998.

[TRS97] I. Tatarinov, A. Rousskov, and V. Soloviev. Static Caching in Web Servers. In Proc. of the 6th Intl. Conf. on Computer Communications and Networks (IC3N '97), 1997.

[W+96] S. Williams, M. Abrams, C. Stanbridge, G. Abdulla, and E. Fox. Removal Policies in Network Caches for World-Wide Web Documents. In Proc. of ACM SIGCOMM '96, 1996. ei.cs.vt.edu/~succeed/96waasf/

[WA97] R. Wooster and M. Abrams. Proxy Caching That Estimates Page Load Delays. In Proc. of the 6th WWW Conf., 1997.


External Sorting. Why We Need New Algorithms 1 External Sorting All the internal sorting algorithms require that the input fit into main memory. There are, however, applications where the input is much too large to fit into memory. For those external

More information

Analytical Cache Replacement for Large Caches and Multiple Block Containers

Analytical Cache Replacement for Large Caches and Multiple Block Containers Analytical Cache Replacement for Large Caches and Multiple Block Containers David Vengerov david.vengerov@oracle.com Garret Swart garret.swart@oracle.com Draft of 2011/02/14 14:48 Abstract An important

More information

Chapter 9: Virtual-Memory

Chapter 9: Virtual-Memory Chapter 9: Virtual-Memory Management Chapter 9: Virtual-Memory Management Background Demand Paging Page Replacement Allocation of Frames Thrashing Other Considerations Silberschatz, Galvin and Gagne 2013

More information

SF-LRU Cache Replacement Algorithm

SF-LRU Cache Replacement Algorithm SF-LRU Cache Replacement Algorithm Jaafar Alghazo, Adil Akaaboune, Nazeih Botros Southern Illinois University at Carbondale Department of Electrical and Computer Engineering Carbondale, IL 6291 alghazo@siu.edu,

More information

Operating Systems. Designed and Presented by Dr. Ayman Elshenawy Elsefy

Operating Systems. Designed and Presented by Dr. Ayman Elshenawy Elsefy Operating Systems Designed and Presented by Dr. Ayman Elshenawy Elsefy Dept. of Systems & Computer Eng.. AL-AZHAR University Website : eaymanelshenawy.wordpress.com Email : eaymanelshenawy@yahoo.com Reference

More information

Greedy Algorithms CHAPTER 16

Greedy Algorithms CHAPTER 16 CHAPTER 16 Greedy Algorithms In dynamic programming, the optimal solution is described in a recursive manner, and then is computed ``bottom up''. Dynamic programming is a powerful technique, but it often

More information

Identifying Stable File Access Patterns

Identifying Stable File Access Patterns Identifying Stable File Access Patterns Purvi Shah Jehan-François Pâris 1 Ahmed Amer 2 Darrell D. E. Long 3 University of Houston University of Houston University of Pittsburgh U. C. Santa Cruz purvi@cs.uh.edu

More information

Web Proxy Cache Replacement Policies Using Decision Tree (DT) Machine Learning Technique for Enhanced Performance of Web Proxy

Web Proxy Cache Replacement Policies Using Decision Tree (DT) Machine Learning Technique for Enhanced Performance of Web Proxy Web Proxy Cache Replacement Policies Using Decision Tree (DT) Machine Learning Technique for Enhanced Performance of Web Proxy P. N. Vijaya Kumar PhD Research Scholar, Department of Computer Science &

More information

RECHOKe: A Scheme for Detection, Control and Punishment of Malicious Flows in IP Networks

RECHOKe: A Scheme for Detection, Control and Punishment of Malicious Flows in IP Networks > REPLACE THIS LINE WITH YOUR PAPER IDENTIFICATION NUMBER (DOUBLE-CLICK HERE TO EDIT) < : A Scheme for Detection, Control and Punishment of Malicious Flows in IP Networks Visvasuresh Victor Govindaswamy,

More information

Cache Controller with Enhanced Features using Verilog HDL

Cache Controller with Enhanced Features using Verilog HDL Cache Controller with Enhanced Features using Verilog HDL Prof. V. B. Baru 1, Sweety Pinjani 2 Assistant Professor, Dept. of ECE, Sinhgad College of Engineering, Vadgaon (BK), Pune, India 1 PG Student

More information

Impact of Frequency-Based Cache Management Policies on the Performance of Segment Based Video Caching Proxies

Impact of Frequency-Based Cache Management Policies on the Performance of Segment Based Video Caching Proxies Impact of Frequency-Based Cache Management Policies on the Performance of Segment Based Video Caching Proxies Anna Satsiou and Michael Paterakis Laboratory of Information and Computer Networks Department

More information

DESIGN AND ANALYSIS OF ALGORITHMS. Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES

DESIGN AND ANALYSIS OF ALGORITHMS. Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES DESIGN AND ANALYSIS OF ALGORITHMS Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES http://milanvachhani.blogspot.in USE OF LOOPS As we break down algorithm into sub-algorithms, sooner or later we shall

More information

CS6401- Operating System UNIT-III STORAGE MANAGEMENT

CS6401- Operating System UNIT-III STORAGE MANAGEMENT UNIT-III STORAGE MANAGEMENT Memory Management: Background In general, to rum a program, it must be brought into memory. Input queue collection of processes on the disk that are waiting to be brought into

More information

Report on Cache-Oblivious Priority Queue and Graph Algorithm Applications[1]

Report on Cache-Oblivious Priority Queue and Graph Algorithm Applications[1] Report on Cache-Oblivious Priority Queue and Graph Algorithm Applications[1] Marc André Tanner May 30, 2014 Abstract This report contains two main sections: In section 1 the cache-oblivious computational

More information

Chapter 4: Memory Management. Part 1: Mechanisms for Managing Memory

Chapter 4: Memory Management. Part 1: Mechanisms for Managing Memory Chapter 4: Memory Management Part 1: Mechanisms for Managing Memory Memory management Basic memory management Swapping Virtual memory Page replacement algorithms Modeling page replacement algorithms Design

More information

SOFT CACHING: WEB CACHE MANAGEMENT

SOFT CACHING: WEB CACHE MANAGEMENT SOFT CACHING: WEB CACHE MANAGEMENT TECHNIQUES FOR IMAGES A. Ortega, F. Carignano Integrated Media Systems Center University of Southern California Los Angeles, CA, USA S. Ayer and M. Vetterli' Visual Communications

More information

Efficient Resource Management for the P2P Web Caching

Efficient Resource Management for the P2P Web Caching Efficient Resource Management for the P2P Web Caching Kyungbaek Kim and Daeyeon Park Department of Electrical Engineering & Computer Science, Division of Electrical Engineering, Korea Advanced Institute

More information

On Caching Search Engine Results

On Caching Search Engine Results Abstract: On Caching Search Engine Results Evangelos P. Markatos Institute of Computer Science (ICS) Foundation for Research & Technology - Hellas (FORTH) P.O.Box 1385 Heraklio, Crete, GR-711-10 GREECE

More information

Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES

Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES DESIGN AND ANALYSIS OF ALGORITHMS Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES http://milanvachhani.blogspot.in USE OF LOOPS As we break down algorithm into sub-algorithms, sooner or later we shall

More information

Chapter 3 - Memory Management

Chapter 3 - Memory Management Chapter 3 - Memory Management Luis Tarrataca luis.tarrataca@gmail.com CEFET-RJ L. Tarrataca Chapter 3 - Memory Management 1 / 222 1 A Memory Abstraction: Address Spaces The Notion of an Address Space Swapping

More information

Chapter 6 Memory 11/3/2015. Chapter 6 Objectives. 6.2 Types of Memory. 6.1 Introduction

Chapter 6 Memory 11/3/2015. Chapter 6 Objectives. 6.2 Types of Memory. 6.1 Introduction Chapter 6 Objectives Chapter 6 Memory Master the concepts of hierarchical memory organization. Understand how each level of memory contributes to system performance, and how the performance is measured.

More information

Lecture 8 13 March, 2012

Lecture 8 13 March, 2012 6.851: Advanced Data Structures Spring 2012 Prof. Erik Demaine Lecture 8 13 March, 2012 1 From Last Lectures... In the previous lecture, we discussed the External Memory and Cache Oblivious memory models.

More information

LECTURE NOTES OF ALGORITHMS: DESIGN TECHNIQUES AND ANALYSIS

LECTURE NOTES OF ALGORITHMS: DESIGN TECHNIQUES AND ANALYSIS Department of Computer Science University of Babylon LECTURE NOTES OF ALGORITHMS: DESIGN TECHNIQUES AND ANALYSIS By Faculty of Science for Women( SCIW), University of Babylon, Iraq Samaher@uobabylon.edu.iq

More information

CS 137 Part 8. Merge Sort, Quick Sort, Binary Search. November 20th, 2017

CS 137 Part 8. Merge Sort, Quick Sort, Binary Search. November 20th, 2017 CS 137 Part 8 Merge Sort, Quick Sort, Binary Search November 20th, 2017 This Week We re going to see two more complicated sorting algorithms that will be our first introduction to O(n log n) sorting algorithms.

More information

Dynamic Load balancing for I/O- and Memory- Intensive workload in Clusters using a Feedback Control Mechanism

Dynamic Load balancing for I/O- and Memory- Intensive workload in Clusters using a Feedback Control Mechanism Dynamic Load balancing for I/O- and Memory- Intensive workload in Clusters using a Feedback Control Mechanism Xiao Qin, Hong Jiang, Yifeng Zhu, David R. Swanson Department of Computer Science and Engineering

More information

Advanced Database Systems

Advanced Database Systems Lecture IV Query Processing Kyumars Sheykh Esmaili Basic Steps in Query Processing 2 Query Optimization Many equivalent execution plans Choosing the best one Based on Heuristics, Cost Will be discussed

More information

Multimedia Streaming. Mike Zink

Multimedia Streaming. Mike Zink Multimedia Streaming Mike Zink Technical Challenges Servers (and proxy caches) storage continuous media streams, e.g.: 4000 movies * 90 minutes * 10 Mbps (DVD) = 27.0 TB 15 Mbps = 40.5 TB 36 Mbps (BluRay)=

More information

Module 5: Hash-Based Indexing

Module 5: Hash-Based Indexing Module 5: Hash-Based Indexing Module Outline 5.1 General Remarks on Hashing 5. Static Hashing 5.3 Extendible Hashing 5.4 Linear Hashing Web Forms Transaction Manager Lock Manager Plan Executor Operator

More information

DATA STRUCTURES/UNIT 3

DATA STRUCTURES/UNIT 3 UNIT III SORTING AND SEARCHING 9 General Background Exchange sorts Selection and Tree Sorting Insertion Sorts Merge and Radix Sorts Basic Search Techniques Tree Searching General Search Trees- Hashing.

More information

20-EECE-4029 Operating Systems Spring, 2013 John Franco

20-EECE-4029 Operating Systems Spring, 2013 John Franco 20-EECE-4029 Operating Systems Spring, 2013 John Franco Second Exam name: Question 1: Translation Look-aside Buffer (a) Describe the TLB. Include its location, why it is located there, its contents, and

More information

Memory Management Prof. James L. Frankel Harvard University

Memory Management Prof. James L. Frankel Harvard University Memory Management Prof. James L. Frankel Harvard University Version of 5:42 PM 25-Feb-2017 Copyright 2017, 2015 James L. Frankel. All rights reserved. Memory Management Ideal memory Large Fast Non-volatile

More information

Chapter 12: Indexing and Hashing

Chapter 12: Indexing and Hashing Chapter 12: Indexing and Hashing Database System Concepts, 5th Ed. See www.db-book.com for conditions on re-use Chapter 12: Indexing and Hashing Basic Concepts Ordered Indices B + -Tree Index Files B-Tree

More information

Chapter 8: Virtual Memory. Operating System Concepts

Chapter 8: Virtual Memory. Operating System Concepts Chapter 8: Virtual Memory Silberschatz, Galvin and Gagne 2009 Chapter 8: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating

More information

Chapter 8 Memory Management

Chapter 8 Memory Management 1 Chapter 8 Memory Management The technique we will describe are: 1. Single continuous memory management 2. Partitioned memory management 3. Relocatable partitioned memory management 4. Paged memory management

More information

Addresses in the source program are generally symbolic. A compiler will typically bind these symbolic addresses to re-locatable addresses.

Addresses in the source program are generally symbolic. A compiler will typically bind these symbolic addresses to re-locatable addresses. 1 Memory Management Address Binding The normal procedures is to select one of the processes in the input queue and to load that process into memory. As the process executed, it accesses instructions and

More information

RAID SEMINAR REPORT /09/2004 Asha.P.M NO: 612 S7 ECE

RAID SEMINAR REPORT /09/2004 Asha.P.M NO: 612 S7 ECE RAID SEMINAR REPORT 2004 Submitted on: Submitted by: 24/09/2004 Asha.P.M NO: 612 S7 ECE CONTENTS 1. Introduction 1 2. The array and RAID controller concept 2 2.1. Mirroring 3 2.2. Parity 5 2.3. Error correcting

More information

Design Issues 1 / 36. Local versus Global Allocation. Choosing

Design Issues 1 / 36. Local versus Global Allocation. Choosing Design Issues 1 / 36 Local versus Global Allocation When process A has a page fault, where does the new page frame come from? More precisely, is one of A s pages reclaimed, or can a page frame be taken

More information

Chapter 2: Memory Hierarchy Design Part 2

Chapter 2: Memory Hierarchy Design Part 2 Chapter 2: Memory Hierarchy Design Part 2 Introduction (Section 2.1, Appendix B) Caches Review of basics (Section 2.1, Appendix B) Advanced methods (Section 2.3) Main Memory Virtual Memory Fundamental

More information

Reducing Disk Latency through Replication

Reducing Disk Latency through Replication Gordon B. Bell Morris Marden Abstract Today s disks are inexpensive and have a large amount of capacity. As a result, most disks have a significant amount of excess capacity. At the same time, the performance

More information

Structure of Computer Systems

Structure of Computer Systems 222 Structure of Computer Systems Figure 4.64 shows how a page directory can be used to map linear addresses to 4-MB pages. The entries in the page directory point to page tables, and the entries in a

More information

Hashing. 1. Introduction. 2. Direct-address tables. CmSc 250 Introduction to Algorithms

Hashing. 1. Introduction. 2. Direct-address tables. CmSc 250 Introduction to Algorithms Hashing CmSc 250 Introduction to Algorithms 1. Introduction Hashing is a method of storing elements in a table in a way that reduces the time for search. Elements are assumed to be records with several

More information

Virtual Memory Outline

Virtual Memory Outline Virtual Memory Outline Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating Kernel Memory Other Considerations Operating-System Examples

More information

Caches. Cache Memory. memory hierarchy. CPU memory request presented to first-level cache first

Caches. Cache Memory. memory hierarchy. CPU memory request presented to first-level cache first Cache Memory memory hierarchy CPU memory request presented to first-level cache first if data NOT in cache, request sent to next level in hierarchy and so on CS3021/3421 2017 jones@tcd.ie School of Computer

More information

PowerVault MD3 SSD Cache Overview

PowerVault MD3 SSD Cache Overview PowerVault MD3 SSD Cache Overview A Dell Technical White Paper Dell Storage Engineering October 2015 A Dell Technical White Paper TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS

More information

Homework 1 Solutions:

Homework 1 Solutions: Homework 1 Solutions: If we expand the square in the statistic, we get three terms that have to be summed for each i: (ExpectedFrequency[i]), (2ObservedFrequency[i]) and (ObservedFrequency[i])2 / Expected

More information

Chapter 2: Memory Hierarchy Design Part 2

Chapter 2: Memory Hierarchy Design Part 2 Chapter 2: Memory Hierarchy Design Part 2 Introduction (Section 2.1, Appendix B) Caches Review of basics (Section 2.1, Appendix B) Advanced methods (Section 2.3) Main Memory Virtual Memory Fundamental

More information

Popularity-Based PPM: An Effective Web Prefetching Technique for High Accuracy and Low Storage

Popularity-Based PPM: An Effective Web Prefetching Technique for High Accuracy and Low Storage Proceedings of 22 International Conference on Parallel Processing, (ICPP 22), Vancouver, Canada, August 18-21, 22. Popularity-Based : An Effective Web Prefetching Technique for High Accuracy and Low Storage

More information

Heuristic Algorithms for Multiconstrained Quality-of-Service Routing

Heuristic Algorithms for Multiconstrained Quality-of-Service Routing 244 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL 10, NO 2, APRIL 2002 Heuristic Algorithms for Multiconstrained Quality-of-Service Routing Xin Yuan, Member, IEEE Abstract Multiconstrained quality-of-service

More information

We will give examples for each of the following commonly used algorithm design techniques:

We will give examples for each of the following commonly used algorithm design techniques: Review This set of notes provides a quick review about what should have been learned in the prerequisite courses. The review is helpful to those who have come from a different background; or to those who

More information

MEMORY MANAGEMENT/1 CS 409, FALL 2013

MEMORY MANAGEMENT/1 CS 409, FALL 2013 MEMORY MANAGEMENT Requirements: Relocation (to different memory areas) Protection (run time, usually implemented together with relocation) Sharing (and also protection) Logical organization Physical organization

More information

Project 0: Implementing a Hash Table

Project 0: Implementing a Hash Table Project : Implementing a Hash Table CS, Big Data Systems, Spring Goal and Motivation. The goal of Project is to help you refresh basic skills at designing and implementing data structures and algorithms.

More information

4 Hash-Based Indexing

4 Hash-Based Indexing 4 Hash-Based Indexing We now turn to a different family of index structures: hash indexes. Hash indexes are unbeatable when it comes to equality selections, e.g. SELECT FROM WHERE R A = k. If we carefully

More information

INTEGRATING INTELLIGENT PREDICTIVE CACHING AND STATIC PREFETCHING IN WEB PROXY SERVERS

INTEGRATING INTELLIGENT PREDICTIVE CACHING AND STATIC PREFETCHING IN WEB PROXY SERVERS INTEGRATING INTELLIGENT PREDICTIVE CACHING AND STATIC PREFETCHING IN WEB PROXY SERVERS Abstract J. B. PATIL Department of Computer Engineering R. C. Patel Institute of Technology, Shirpur. (M.S.), India

More information

Virtual or Logical. Logical Addr. MMU (Memory Mgt. Unit) Physical. Addr. 1. (50 ns access)

Virtual or Logical. Logical Addr. MMU (Memory Mgt. Unit) Physical. Addr. 1. (50 ns access) Virtual Memory - programmer views memory as large address space without concerns about the amount of physical memory or memory management. (What do the terms 3-bit (or 6-bit) operating system or overlays

More information

EXTRACTION OF RELEVANT WEB PAGES USING DATA MINING

EXTRACTION OF RELEVANT WEB PAGES USING DATA MINING Chapter 3 EXTRACTION OF RELEVANT WEB PAGES USING DATA MINING 3.1 INTRODUCTION Generally web pages are retrieved with the help of search engines which deploy crawlers for downloading purpose. Given a query,

More information

A Simulation-Based Analysis of Scheduling Policies for Multimedia Servers

A Simulation-Based Analysis of Scheduling Policies for Multimedia Servers A Simulation-Based Analysis of Scheduling Policies for Multimedia Servers Nabil J. Sarhan Chita R. Das Department of Computer Science and Engineering The Pennsylvania State University University Park,

More information

A Memory Management Scheme for Hybrid Memory Architecture in Mission Critical Computers

A Memory Management Scheme for Hybrid Memory Architecture in Mission Critical Computers A Memory Management Scheme for Hybrid Memory Architecture in Mission Critical Computers Soohyun Yang and Yeonseung Ryu Department of Computer Engineering, Myongji University Yongin, Gyeonggi-do, Korea

More information

Paging algorithms. CS 241 February 10, Copyright : University of Illinois CS 241 Staff 1

Paging algorithms. CS 241 February 10, Copyright : University of Illinois CS 241 Staff 1 Paging algorithms CS 241 February 10, 2012 Copyright : University of Illinois CS 241 Staff 1 Announcements MP2 due Tuesday Fabulous Prizes Wednesday! 2 Paging On heavily-loaded systems, memory can fill

More information

Week 2: Tiina Niklander

Week 2: Tiina Niklander Virtual memory Operations and policies Chapters 3.4. 3.6 Week 2: 17.9.2009 Tiina Niklander 1 Policies and methods Fetch policy (Noutopolitiikka) When to load page to memory? Placement policy (Sijoituspolitiikka

More information

Lecture notes for CS Chapter 2, part 1 10/23/18

Lecture notes for CS Chapter 2, part 1 10/23/18 Chapter 2: Memory Hierarchy Design Part 2 Introduction (Section 2.1, Appendix B) Caches Review of basics (Section 2.1, Appendix B) Advanced methods (Section 2.3) Main Memory Virtual Memory Fundamental

More information

A Review on Cache Memory with Multiprocessor System

A Review on Cache Memory with Multiprocessor System A Review on Cache Memory with Multiprocessor System Chirag R. Patel 1, Rajesh H. Davda 2 1,2 Computer Engineering Department, C. U. Shah College of Engineering & Technology, Wadhwan (Gujarat) Abstract

More information

Reduction of Periodic Broadcast Resource Requirements with Proxy Caching

Reduction of Periodic Broadcast Resource Requirements with Proxy Caching Reduction of Periodic Broadcast Resource Requirements with Proxy Caching Ewa Kusmierek and David H.C. Du Digital Technology Center and Department of Computer Science and Engineering University of Minnesota

More information

Dr. Amotz Bar-Noy s Compendium of Algorithms Problems. Problems, Hints, and Solutions

Dr. Amotz Bar-Noy s Compendium of Algorithms Problems. Problems, Hints, and Solutions Dr. Amotz Bar-Noy s Compendium of Algorithms Problems Problems, Hints, and Solutions Chapter 1 Searching and Sorting Problems 1 1.1 Array with One Missing 1.1.1 Problem Let A = A[1],..., A[n] be an array

More information

Chapter 12: Indexing and Hashing (Cnt(

Chapter 12: Indexing and Hashing (Cnt( Chapter 12: Indexing and Hashing (Cnt( Cnt.) Basic Concepts Ordered Indices B+-Tree Index Files B-Tree Index Files Static Hashing Dynamic Hashing Comparison of Ordered Indexing and Hashing Index Definition

More information

Physical Level of Databases: B+-Trees

Physical Level of Databases: B+-Trees Physical Level of Databases: B+-Trees Adnan YAZICI Computer Engineering Department METU (Fall 2005) 1 B + -Tree Index Files l Disadvantage of indexed-sequential files: performance degrades as file grows,

More information

Perform page replacement. (Fig 8.8 [Stal05])

Perform page replacement. (Fig 8.8 [Stal05]) Virtual memory Operations and policies Chapters 3.4. 3.7 1 Policies and methods Fetch policy (Noutopolitiikka) When to load page to memory? Placement policy (Sijoituspolitiikka ) Where to place the new

More information

Database Applications (15-415)

Database Applications (15-415) Database Applications (15-415) DBMS Internals- Part V Lecture 15, March 15, 2015 Mohammad Hammoud Today Last Session: DBMS Internals- Part IV Tree-based (i.e., B+ Tree) and Hash-based (i.e., Extendible

More information

Hashing with Linear Probing and Referential Integrity

Hashing with Linear Probing and Referential Integrity Hashing with Linear Probing and Referential Integrity arxiv:188.6v1 [cs.ds] 1 Aug 18 Peter Sanders Karlsruhe Institute of Technology (KIT), 7618 Karlsruhe, Germany sanders@kit.edu August 1, 18 Abstract

More information

Operating Systems, Fall

Operating Systems, Fall Policies and methods Virtual memory Operations and policies Chapters 3.4. 3.6 Week 2: 17.9.2009 Tiina Niklander 1 Fetch policy (Noutopolitiikka) When to load page to memory? Placement policy (Sijoituspolitiikka

More information