Space and Time-Efficient Hashing of Garbage-Collected Objects

SML document #. To appear in Theory and Practice of Object Systems.

Space and Time-Efficient Hashing of Garbage-Collected Objects

Ole Agesen
Sun Microsystems
2 Elizabeth Drive, Chelmsford, MA 01824, U.S.A.
ole.agesen@sun.com

Initial version: April. Revised: July 1997, February 1998, May.

Abstract. The hashCode() method found in the Java programming language, and similar methods in other languages, map an arbitrary object to an integer value that is constant for the lifetime of the object. We review existing implementations of the hash operation, specifying the kinds of memory systems for which they work. Then we propose a new implementation of hashing for the hardest case: memory systems with compaction and direct pointers. Our proposal uses just two bits of space per object for the (majority of) objects that are never hashed.

1 Introduction

The Java programming language defines the method hashCode() in the topmost class Object [2] (p. 64):

    public native int hashCode();

For any object, the method returns a 32-bit integer hash value, which programmers can use to build hash tables of objects. Smalltalk [4] (p. 96) and Self [1] (p. 62), as well as other object-oriented languages, define similar methods. The Self version of the method is called _IdentityHash. As this name suggests, hash values reflect object identity (by identity we mean the exact same object, not just two objects that have the same state): if obj1 and obj2 have different hash values, they cannot be identical.

The hash operation is typically implemented in the virtual machine or runtime system, although many object-oriented languages allow application programmers to override this default implementation for specific classes. For example, the java.lang.String class overrides hashCode() to compute a hash value based on the characters in the string. The resulting hash values, in this case, reflect string equality as defined by the java.lang.String.equals() method: if two strings have different hash values, they cannot be equal (and consequently cannot be identical). In this document, we discuss different implementation strategies for the identity hash operation. For brevity, we shall refer to it simply as the hash operation.

For correctness and efficiency, the hashCode() operation must: remain constant throughout the lifetime of the object, have good distribution (i.e., two different objects should, with high probability, have different hash values), be efficient to compute, and have low storage overhead. For example, a hashCode() implementation that always returns 0 would be a poor choice, since it degrades the performance of data structures that use hashing.

For most programs, the vast majority of objects are never hashed using the virtual machine's implementation of hashCode(). First, programs simply don't put each and every object they create into hash tables. Second, as explained above, important classes of objects override the virtual machine's hashCode() implementation. To illustrate, we took measurements on a version of the source-to-bytecode compiler javac, which is written in the Java programming language. Compiling the java.lang.* package, javac allocated 351,353 objects, hashing only 1,801 of these with the virtual machine's hashCode() operation. In another experiment, we used the HotJava web browser for a period of time. At the end of the run, 1,649,228 objects had been allocated and only 499 objects had been hashed.
Since most objects are never hashed, to ensure low space overhead it is essential to use as little space as possible for non-hashed objects. The problem, of course, is that in general it is difficult to predict whether the hashCode() operation will be applied to a given object (although for classes that override hashCode(), this prediction may be possible).

2 Traditional implementations

2.1 Non-copying memory systems

In systems that use a non-compacting garbage collector, objects never move. Once allocated at a certain address, an object remains there until it becomes garbage. It is therefore possible to use the address of the object as its hashCode(). This solution is fast, has no space overhead, and offers an extremely good distribution of hash values:

    obj1.hashCode() == obj2.hashCode() if and only if obj1 == obj2.

However, few modern implementations of object-oriented languages use non-compacting memory systems, since the induced fragmentation can affect performance negatively.

2.2 Handle-based memory systems

The original Java virtual machine [6] and some Smalltalk virtual machines use indirect pointers, called handles in [6], to refer to objects. Handles allow easy relocation of objects during garbage collection since, with handles, there is only one direct pointer to each object: the one in its handle. All other references to the object indirect through the handle. In such handle-based memory systems, object addresses change over the lifetime of objects and therefore cannot be used for hashing, but handle addresses remain constant. Thus, the hashCode() operation can be implemented by returning the address of the object's handle. This implementation, like the object-address implementation in non-compacting systems, is fast, has no space overhead, and gives a good distribution (satisfying the relation given above). However, other concerns, including execution efficiency of the system as a whole, may favor memory systems without handles, necessitating a different implementation of the hashCode() operation.

2.3 Direct-pointer, no-handle memory systems

Most high-performance implementations of languages use compacting garbage collection algorithms that work with direct pointers. Consequently, hash codes must be implemented in a different way than by using addresses (objects move, so their addresses are no good, and there are no handles either). A common solution is illustrated by the Self system [3]. Each Self object contains two header words: a map pointer (which is analogous to a class pointer in a class-based language) and a mark word. The mark word contains a number of bit fields related to garbage collection, in addition to a 22-bit hash field (22 is not an arbitrary number, but the number of bits remaining once the other needs have been fulfilled). The 22-bit hash field is initialized to zero, but when an object is first hashed a pseudo-random number is generated and stored in the field. While this lazy hash value assignment slows down all hash retrieval operations by an extra test, it is a net win, since object allocation is more frequent than hashing for most programs.

This hash code implementation has fast retrieval. It also has low storage overhead, if there happen to be enough spare bits in a header word. However, when hashing is the drop that overflows the bucket and necessitates adding an extra header word to every object, the space cost of this implementation of hashCode() is large. Another potential drawback is the temptation to compress the hash function range into whichever number of bits are available, to avoid paying the cost of an extra word in every object. While Self's 22 bits probably suffice for most purposes, Squeak's 12-bit hash values may not [5]. Certainly, one could imagine applications for which a full 32-bit (or larger) hash range would improve performance.
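To make the header-field approach concrete, the following is a minimal sketch in the spirit of the Self scheme, not the actual Self code. The mark-word layout, the 22-bit field position, and the pseudo_random_22() helper are our own assumptions for illustration.

    #include <stdint.h>
    #include <stdlib.h>

    #define HASH_BITS  22
    #define HASH_MASK  ((1u << HASH_BITS) - 1)   /* assumed: hash lives in the low 22 bits */

    typedef struct Object {
        uintptr_t map;    /* map (class) pointer */
        uintptr_t mark;   /* GC bits plus 22-bit hash field; hash is initially 0 */
    } Object;

    /* Hypothetical helper: a non-zero pseudo-random number that fits in 22 bits. */
    static uint32_t pseudo_random_22(void) {
        uint32_t r;
        do { r = (uint32_t)rand() & HASH_MASK; } while (r == 0);
        return r;
    }

    /* Lazy assignment: every retrieval pays one extra test, but allocation pays nothing. */
    uint32_t identity_hash(Object *obj) {
        uint32_t h = (uint32_t)(obj->mark & HASH_MASK);
        if (h == 0) {                       /* first time this object is hashed */
            h = pseudo_random_22();
            obj->mark = (obj->mark & ~(uintptr_t)HASH_MASK) | h;
        }
        return h;
    }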
Some Lisp systems have used a different technique to implement the hashCode() operation. As a way of introducing this technique, reconsider for a moment the very idea of having a hashCode() operation. Instead of defining a hashCode() operation, the virtual machine can provide a built-in hash table data type. For specificity, let us call this data type BIHashTable. A BIHashTable maps an arbitrary key object to an arbitrary value object. By virtue of being defined in the virtual machine, BIHashTables can use the key objects' addresses for hashing. When the garbage collector moves one or more key objects, it performs extra work to maintain the validity of the BIHashTables. The extra work can be done incrementally, by deleting the old (key, value) pair and inserting the new pair, or it can be done en masse, by building a replacement hash table at the end of each GC, thus saving the cost of the hash table deletion operations.
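As a rough illustration of the en-masse variant, the sketch below rebuilds a small open-addressing table from the keys' new addresses at the end of a collection. The table layout, its fixed size, and the rebuild_after_gc() entry point are hypothetical, not taken from any particular Lisp system.

    #include <stddef.h>
    #include <stdint.h>

    #define TABLE_SIZE 1024                /* hypothetical fixed-size table */

    typedef struct { void *key; void *value; } Entry;
    typedef struct { Entry entries[TABLE_SIZE]; } BIHashTable;

    static size_t bin_for(void *key) {     /* hash directly on the key's address */
        return (size_t)(((uintptr_t)key >> 3) % TABLE_SIZE);
    }

    /* Called once at the end of a GC. The collector has already updated the key
       pointers in 'old' to the objects' new addresses; we only move each
       (key, value) pair to the bin implied by the new address.
       'fresh' is assumed zero-initialized and at least as large as 'old'. */
    void rebuild_after_gc(const BIHashTable *old, BIHashTable *fresh) {
        for (size_t i = 0; i < TABLE_SIZE; i++) {
            void *key = old->entries[i].key;
            if (key == NULL) continue;
            size_t b = bin_for(key);
            while (fresh->entries[b].key != NULL)      /* linear probing */
                b = (b + 1) % TABLE_SIZE;
            fresh->entries[b].key = key;
            fresh->entries[b].value = old->entries[i].value;
        }
    }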

Lisp systems have used variations of BIHashTables for several years; see, for example, [7]. In a straightforward implementation of BIHashTables, the rehashing imposes extra work at every garbage collection, in proportion to the number of key objects that were moved. An optimized implementation may perform the rehashing lazily. Then the garbage collector will update the key addresses in the hash table data structure when keys move, but it will not rehash (i.e., it will not move the (key, value) pairs to their new bins). To detect out-of-date hash tables, a simple GCcount time stamp suffices: is the GCcount of the last rehash less than the current GCcount? The test to determine if rehashing is necessary can be performed after failing lookups (rehash and retry the lookup).

BIHashTables, while elegantly solving the problem of using addresses of moving objects as keys in a hash table, have some drawbacks:

- broader virtual machine interface: instead of providing a simple hashCode() operation, systems with BIHashTables must provide an implementation of hash tables in the virtual machine.
- high specificity: a mapping from objects to nearly unique integers, as provided by hashCode(), can be used for many purposes other than building hash tables.

These disadvantages disappear if BIHashTables are used internally in the virtual machine to implement the hashCode() operation. The idea is to use a BIHashTable, which we will call hashCodeTable, to map objects to hash codes. The only access to this table from user code is through the hashCode() operation, implemented in the virtual machine as follows:

    int hashCode(Object *obj) {
        int h = hashCodeTable->lookup(obj);  /* Returns NOTFOUND if obj is not in the table. */
        if (h == NOTFOUND) {                 /* Assumes NOTFOUND isn't used as a hash code. */
            h = ...new hash code...;
            hashCodeTable->insert(obj, h);
        }
        return h;
    }

The hashCodeTable must use weak references for the key objects, to allow them to be garbage collected when they are otherwise unreachable. (The need for weak references may add some implementation complexity, unless weak references are already present in the virtual machine for other reasons.)

Having reviewed the existing implementation techniques for the hashCode() operation, we now describe a new implementation that has a different and, we believe, competitive time/space trade-off, at least for some systems.

3 New proposal for implementing hashCode()

The following proposal for implementing the hashCode() operation works in compacting, handle-less memory systems and satisfies the four properties mentioned in Section 1. Specifically, it provides a full 32-bit hash range and uses only two bits of space in non-hashed objects.

3.1 Basic idea

We initially allocate objects without a field for hash values. Under some circumstances, the object may later be expanded by an extra field, which will hold the hash value of the object. The problem then is to support this expansion without incurring a large overhead or excessive garbage collection and tracing of objects. Indeed, we note that expanding an object is usually not possible without relocating it, since objects are packed tightly to utilize memory efficiently. Moreover, having no handles, it is difficult to relocate an object without scanning large areas of memory to locate and update all pointers to the object.
To overcome these difficulties, we reserve two bits in the header of objects, as follows (a possible header layout is sketched below):

- hasBeenHashed: 0 initially; set to 1 when the hashCode() operation is performed on the object.
- hasHashField: 0 initially; set to 1 if the object has been expanded with a hash field at the end.
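One way to realize such a header, assuming (as in Figure 1 below) a single-word object header that stores the class pointer together with the two state bits in its low-order bits, is the following sketch; the field, macro, and helper names are ours, not prescribed by the paper.

    #include <stdint.h>

    /* Single-word header: class pointer in the high bits (classes are assumed
       word-aligned, so the two low bits are free), plus the two state bits. */
    #define HAS_BEEN_HASHED  ((uintptr_t)1 << 0)
    #define HAS_HASH_FIELD   ((uintptr_t)1 << 1)
    #define CLASS_MASK       (~(uintptr_t)3)

    typedef struct Object {
        uintptr_t header;    /* class pointer | hasHashField | hasBeenHashed */
        /* non-header fields follow; a hash field may be appended at the end */
    } Object;

    static void *class_of(const Object *obj)        { return (void *)(obj->header & CLASS_MASK); }
    static int   has_been_hashed(const Object *obj) { return (obj->header & HAS_BEEN_HASHED) != 0; }
    static int   has_hash_field(const Object *obj)  { return (obj->header & HAS_HASH_FIELD) != 0; }
    static void  set_been_hashed(Object *obj)       { obj->header |= HAS_BEEN_HASHED; }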

At object allocation time, both bits are set to 0, and they remain so until the object is hashed. Using these bits, the hashCode() operation is implemented as follows:

    int hashCode(Object *obj) {
        obj->hasBeenHashed = 1;        /* Object has now been hashed. */
        if (obj->hasHashField == 0)
            return (int)obj;           /* Use the object's address. */
        else
            return obj->hashField;     /* Use the "internalized" hash value. */
    }

This method has two important aspects: it tracks which objects have been hashed, and it selects either the address of the object or a value from a hashField as the hash value (this field is set by the garbage collector; see below).

When the garbage collector relocates objects, it inspects their hash bits. If hasBeenHashed == 0, nothing special needs to be done to the object. If hasBeenHashed == 1 and hasHashField == 0, the object has been hashed, but it has no hash field yet. The garbage collector allocates an extra field, hashField, at the object's new location, stores the object's old address in the field (this address is the object's hash value), then copies the rest of the object and sets hasHashField to 1. From now on, the object will be one field larger than when it was born. We say that the hash value has been internalized. (If hasHashField is already 1, the hash value was internalized previously, and the object is simply copied, hash field included.) The new field is located at the end of the object so that the offsets of the original fields remain unchanged. Figure 1 illustrates this process for a system in which objects have a single-word header containing a class pointer and the two hash state bits.

    Fig. 1. The result of copying an object the first time after it has been hashed. Before relocation: a header word (class ptr, hasBeenHashed = 1, hasHashField = 0) followed by the non-header fields. After relocation: the header word with both bits set to 1, the non-header fields at unchanged offsets, and a hash code field appended at the end.

In summary, we use the object's address as the hash value for as long as possible (until the object is relocated). When relocation of a hashed object changes its address, we expand the object and internalize the hash value. Finally, it is interesting to note that in memory systems where certain (or all) areas are never compacted, this hash code implementation automatically reverts to a solution that uses just 2 bits of additional space over the space-wise optimal technique of using object addresses in non-relocating systems.
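The collector-side handling described above might look roughly like the sketch below. It continues the hypothetical header layout and helpers from the earlier sketch, and it assumes word-sized fields plus two hypothetical routines, base_size_in_words() (object size excluding any hash field) and allocate_in_to_space(); it illustrates the scheme rather than reproducing the paper's implementation.

    #include <string.h>
    #include <stdint.h>

    extern size_t base_size_in_words(const Object *obj);   /* size without a hash field */
    extern uintptr_t *allocate_in_to_space(size_t words);

    /* Copy one object during compaction, internalizing its hash value if needed. */
    Object *copy_object(Object *from) {
        size_t old_words = base_size_in_words(from) + (has_hash_field(from) ? 1 : 0);
        size_t new_words = base_size_in_words(from) + (has_been_hashed(from) ? 1 : 0);

        Object *to = (Object *)allocate_in_to_space(new_words);
        memcpy(to, from, old_words * sizeof(uintptr_t));    /* header + fields (+ old hash field) */

        if (new_words > old_words) {                        /* hashed, but no hash field yet */
            uintptr_t *to_words = (uintptr_t *)to;
            to_words[new_words - 1] = (uintptr_t)from;      /* internalize: old address is the hash */
            to->header |= HAS_HASH_FIELD;
        }
        return to;
    }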
3.2 Generational systems

Generational memory systems allocate objects in a small young space. They attempt to gain performance by keeping new and active objects in a small region of memory, which can be garbage collected more efficiently. Restricting new objects to a small area of memory may negatively affect the distribution of hash values: we expect that most objects will be hashed while they are still young, and they will therefore receive a hash value from the limited range of addresses in the young space. For example, a typical young space may be 512 KB. Assuming word-aligned objects, the maximal number of different hash values is 128 K, or just 17 bits.

To recover a good distribution of hash values, we can xor a pseudo-random number onto the object's address and return the result as the hash value. The random number expands the range of hash values to 32 bits (or whatever size is desired). The random number stays constant for the duration of one collection cycle of the young space. After compaction of the young space, the current random number is no longer needed, because the garbage collector has internalized the hash values for all objects that used it. The garbage collector then replaces the random number with a new one, so that in the next cycle through the young space, an object that is hashed for the first time is likely to receive a unique hash value, even if other objects occupied the same address during a previous cycle.
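A sketch of this refinement, again continuing the earlier hypothetical sketches (the per-cycle seed and the new_random_seed() helper are our own names): note that with this refinement the collector internalizes the xor-ed value, not the raw old address.

    #include <stdint.h>

    static uint32_t cycle_seed;            /* pseudo-random number for the current young-space cycle */

    extern uint32_t new_random_seed(void); /* hypothetical source of fresh random numbers */

    /* Hash value handed out while the object still has no hash field:
       xor-ing the per-cycle seed spreads the ~17 address bits over 32 bits. */
    static uint32_t hash_from_address(const Object *obj) {
        return (uint32_t)(uintptr_t)obj ^ cycle_seed;
    }

    /* Called by the collector after each compaction of the young space: every hash
       value derived from the old seed has been internalized by then, so it is safe
       to switch to a fresh seed for the next cycle. */
    void finish_young_space_gc(void) {
        cycle_seed = new_random_seed();
    }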

3.3 Evaluation

Table 1 compares the three implementations of the hashCode() operation for handle-less compacting memory systems. Column 1 describes the implementation that uses a hash field in all objects, column 2 describes the implementation that uses a weak BIHashTable, and column 3 describes the technique proposed in this paper.

Table 1. Comparison of three techniques for implementing hashCode() in compacting handle-less memory systems: (1) hash field in every object, (2) weak BIHashTable, (3) on-demand extension of objects.

    Range of hash codes        (1) n bits, often less than a full word; (2) full word; (3) full word
    Space per unhashed object  (1) n bits; (2) 0 bits; (3) 2 bits
    Space per hashed object    (1) n bits; (2) 2 words [a]; (3) 1 word and 2 bits
    Time, hashCode()           (1) load, bit masking, test, branch (optionally set hash code);
                               (2) hash table lookup and perhaps insertion;
                               (3) load bit, test, branch, load hashField or use object address
    Time overhead, GC          (1) none; (2) rehash [b]; (3) extra test to determine object size

    a. We assume that each entry (object address, hash code) in the hash table occupies two words; additional overhead may result from the hash table operating at less than 100% utilization.
    b. With the optimization described in the text, at most one rehash per GC is required. It can be argued whether this cost should be charged to the GC or the mutator; here we have arbitrarily listed it as a GC cost.

First, consider space usage. No one implementation technique is always best. If sufficiently many spare bits (n) can be found to hold the hash code within the object, the technique in column 1 wins. Otherwise, if two bits can be found, column 3 wins.

Next, consider the speed of the hashCode() operation. Columns 1 and 3 are very fast. Column 1 implements hashCode() with a load, some bit masking, a test to determine if this is the first time the object is hashed (we assume lazy setting of hash codes), and the code to set the hash code if needed. In the common case, when the object has been previously hashed, this amounts to about a handful of instructions. Column 3's implementation of hashCode(), shown in Section 3.1, compiles into eight instructions with Sun's cc compiler at optimization level -xO4. The difference between five and eight instructions is probably small when considered in relation to the subsequent use of the hash code (probing some application-level hash table, typically). Column 2 also has a fast hashCode() operation, though not quite as fast as the other two implementation techniques. In many cases, the complexity of hashCode() in column 2 will be comparable to, but no worse than, the complexity of the subsequent use of the hash value. Essentially, each application-level hash table probe will cost two hash probes: one to look up the hash code, followed by one at the application level.

Now consider garbage collection overhead. First we compare columns 1 and 3. The technique in column 3 complicates the (copying) collector slightly by imposing extra work on the relocation of objects. For example, to compute the word size of an object about to be relocated, the collector first computes the size without counting the possible extra hash code field. Next, the collector must add one to the size if the hash field is present. This adjustment requires a load of the word containing the hasHashField bit, a bit-wise and to extract the bit, possibly a shift to bring the bit to the right location, and an add to combine the bit and the unadjusted object size. The overhead on computing the object size is therefore three or four instructions.
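Expressed in C (continuing the hypothetical header layout and base_size_in_words() helper from the earlier sketches), the before-relocation size adjustment amounts to just these few operations:

    /* Word size of an object as it currently sits in from-space:
       base size plus one word if the hash value has already been internalized. */
    size_t current_size_in_words(const Object *obj) {
        size_t words = base_size_in_words(obj);   /* size without a hash field */
        words += (obj->header >> 1) & 1;          /* load, shift, and, add: the hasHashField bit */
        return words;
    }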
Similarly, computing the after-relocation object size can be done in the same number of extra instructions, but using the hasBeenHashed bit instead of the hasHashField bit. Finally, the collector compares the two sizes to determine whether it needs to internalize the hash code (i.e., store the object's old address in the hash field and turn on the hasHashField bit). With careful coding, we expect the total overhead on copying an object to be less than 20 instructions when hash codes are being internalized, and less than 12 instructions in other cases. While 12 to 20 extra instructions seem affordable, the bottom-line performance impact, we acknowledge, remains to be determined by measurements on an actual implementation.

One important fact was ignored above: the implementation technique in column 3 should be favored over column 1 only when it saves a word per unhashed object. Thus, the system in which the garbage collector performs extra work for each object it relocates (column 3) should be compared against a system in which (most) objects are one word bigger (column 1). The former system can pack more objects into a heap of a given size, resulting in fewer garbage collections, which in part, or perhaps completely, offsets the extra work required when relocating objects. While a word per object may not sound like much, the overall effect could nevertheless be significant, since the average object size in many object-oriented programs is small, perhaps as small as 10 words.

In some cases, such as memory-limited embedded systems, a 10% heap size reduction could be crucial. Even for large systems, the resulting locality improvement and d-cache hit rate increase could yield a measurable performance gain.

Finally, let us compare garbage collection overhead between column 2 and column 3. The main drawback of column 3, as explained above, is the extra work required when copying objects. Column 2 suffers from two different drawbacks. First, it requires support for weak references. (Of course, for systems that require weak references anyway, this cost should not be charged to the hashCode() implementation technique.) Second, it incurs the cost of rehashing the hashCodeTable up to once per garbage collection. The cost of rehashing is proportional to the number of live objects that have been hashed and are being moved. Thus, as the number of hashed objects grows, the performance of column 2 will decrease.

4 Conclusion

We have described a new implementation of object identity hashing for handle-less compacting memory systems. The new technique uses on-demand extension of objects, offers a good distribution of hash codes, is simple to implement, and provides a competitive time/space trade-off. It requires only 2 bits of space in the majority of objects, those that are never hashed, and 2 bits plus a word (or whatever size hash value is desired) in objects that have been hashed and subsequently relocated.

Acknowledgments. The main idea in this paper came about after discussions with the other members of the Java Topics Group: David Detlefs, Christine Flood, Steve Heller, Guy Steele, and Derek White. The anonymous reviewers provided comments that significantly improved this paper.

References

1. Agesen, O., Bak, L., Chambers, C., Chang, B.-W., Hölzle, U., Maloney, J., Smith, R.B., Ungar, D., and Wolczko, M. The Self 4.0 Programmer's Reference Manual. Sun Microsystems Laboratories, July 1995.
2. Arnold, K. and Gosling, J. The Java Programming Language. The Java Series, Addison-Wesley, 1996.
3. Chambers, C., Ungar, D., and Lee, E. An Efficient Implementation of Self, a Dynamically-Typed Object-Oriented Language Based on Prototypes. Lisp and Symbolic Computation 4(3), Kluwer Academic Publishers, June 1991. Originally published in Proceedings of the 1989 ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages & Applications (OOPSLA '89), New Orleans, LA, October 1989.
4. Goldberg, A. and Robson, D. Smalltalk-80: The Language and its Implementation. Addison-Wesley, 1983.
5. Ingalls, D., Kaehler, T., Maloney, J., Wallace, S., and Kay, A. Back to the Future: The Story of Squeak, A Practical Smalltalk Written in Itself. Proceedings of the 1997 ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages & Applications (OOPSLA '97), Atlanta, GA, October 1997.
6. Lindholm, T. and Yellin, F. The Java Virtual Machine Specification. The Java Series, Addison-Wesley, 1996.
7. Symbolics. Reference Guide to Symbolics-Lisp, release 6.0, March.

Sun, Sun Microsystems, and Java are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries.
