EMERY BERGER Research Statement


Despite years of research in the design and implementation of programming languages, programming remains difficult and error-prone. While high-level programming languages such as Java offer the promise of speeding development and helping programmers avoid errors, they often impose an unacceptable performance cost, either in execution time or memory consumption. The vast majority of today's software applications, from mail servers, database managers, and web servers to nearly all desktop applications, are written in C and C++, two unsafe languages. Unfortunately, these languages leave applications defenseless against a wide range of programmer errors. These errors cause programs to misbehave or crash, and leave them susceptible to attack.

Current architectural trends promise to further exacerbate these problems. Limitations on processor scaling due to energy consumption and heat dissipation mean that newer CPUs will not increase sequential performance, locking in the overhead of higher-level languages. The widespread adoption of multicore architectures means that programmers will be forced to use concurrency to increase performance, but multithreaded programs are notoriously difficult to write and to debug. New demands such as energy efficiency and non-uniform memory access times will pose further challenges to programmers. My work addresses all of these challenges: increasing the efficiency of high-level languages, making programs written in lower-level languages both safer and faster, and developing new languages that make it easier for programmers to write efficient and correct programs.

1 Contributions

My research agenda focuses on automatically improving application performance, security, and correctness. By working at the runtime system level, especially in the memory manager, it is possible to bring performance and reliability benefits to deployed software without programmer intervention. My work in this space has had considerable real-world impact. For example, my Hoard scalable memory manager has been downloaded over 40,000 times. It is currently in use by companies such as AOL, Business Objects, Novell, Reuters, and British Telecom, whose telephony servers Hoard sped up by a factor of five. DieHard, a system that automatically improves both reliability and security, has been downloaded over 10,000 times, and is currently being evaluated within Microsoft for deployment in an upcoming release of Microsoft Office.

These systems not only perform well empirically, but also exhibit provably good properties. For instance, Hoard's worst-case memory consumption is asymptotically equivalent to that of the best possible sequential memory manager, and DieHard's probabilistic algorithms provide quantitative guarantees of its resilience to errors. This is a consistent theme of my work: whenever possible, I develop systems that one can reason about mathematically.

My work also crosses the traditional boundaries between operating systems and runtime systems. I have developed cooperative memory managers that combine operating system and garbage collection (GC) support to avoid paging, the costly shuttling of data between memory and the disk. Because a disk access is approximately six orders of magnitude slower than a main memory access, eliminating paging can dramatically increase performance.
By avoiding paging, the bookmarking collector speeds Java programs on a modified Linux kernel by 5X-41X and reduces pause times by 45X-218X (from minutes to milliseconds).

Another line of my research focuses not on increasing performance but rather on eliminating the performance degradation that results from running contributory systems. These systems rely on a user community that donates CPU time, memory, and disk space (examples include peer-to-peer backup systems, Condor, and Folding@home). Because these applications compete with the user by triggering paging or reducing available disk space, many users are reluctant to run them. I have developed operating system support that enables the transparent execution of contributory applications, eliminating this key barrier to their widespread adoption. For example, our transparent memory manager limits the performance impact of paging to below 2% while donating hundreds of megabytes of memory.

While automatic approaches that improve performance or correctness are always desirable, sometimes programmer support is necessary. The second theme of my research focuses on the development of programming languages and software infrastructures that simplify correct and efficient programming. These include the Flux programming language, which lets programmers quickly build deadlock-free concurrent client-server applications from off-the-shelf sequential code. Flux makes it easier to write these applications, and the resulting servers match or exceed the performance of hand-written code.

Finally, I have developed new measurement methodologies to perform quantitative memory management studies. For example, memory management researchers often condemn special-purpose, custom memory managers, while practitioners advocate their use. My work demonstrates that some custom memory managers are either simpler to use or more efficient (up to 44% faster), but consume more space (up to 230% more). I introduced a new memory management abstraction called reaps that captures the performance of custom memory managers while limiting their space consumption. I also developed a measurement infrastructure called oracular memory management that, for the first time, quantifies the cost of using precise garbage collection versus explicit memory management: a good garbage collector can match the performance of explicit memory management, in exchange for 3X-5X more memory.

Roadmap

Table 1 presents a full list of my research contributions to date; for reasons of space, I describe only my key contributions in detail here. Section 2 presents my work on systems that automatically improve performance, security, and correctness. Next, Section 3 describes programming languages and software infrastructures that I have developed to simplify correct and efficient programming. Section 4 then details my quantitative memory management studies. Finally, Section 5 presents planned directions for future work.

2 Transparently Improving Reliability and Performance

2.1 Improving Reliability and Security

Nearly all applications in wide use today are written in unsafe languages such as C and C++, and so are vulnerable to memory errors such as buffer overflows, dangling pointers, and reads of uninitialized data. These errors can lead to program crashes, security vulnerabilities, and unpredictable behavior. I have developed a new approach to attack these problems, based on randomization, replication, and both probabilistic analysis and statistical inference. Systems based on these techniques either allow programs to run correctly in the face of memory errors, or detect and automatically correct memory errors, all with high probability. To my knowledge, this is the first use of randomized algorithms to improve program reliability.

DieHard: Tolerating Errors with Randomization

DieHard [3, 4] prevents heap corruption and provides probabilistic guarantees of avoiding memory errors like dangling pointers and heap buffer overflows. DieHard randomly locates program objects in a heap that is some factor M larger than required (e.g., twice as large). This scattering of objects across memory not only makes some errors unlikely to happen; it also makes it virtually impossible for a hacker to know where vulnerable parts of the program's data are, thus thwarting known heap-based exploits.
DieHard's random placement has an additional, even more important effect: it quantifiably increases the odds that a program will run correctly despite having memory errors. In particular, while DieHard prevents invalid and multiple frees and heap corruption, it probabilistically avoids buffer overflows, dangling pointer errors, and uninitialized reads. These probabilities quantify the likelihood that a program will run correctly despite memory errors, providing probabilistic memory safety.

DieHard works in two modes: standalone and replicated. The standalone version replaces the memory manager with the DieHard randomized memory manager. This randomization increases the odds that buffer overflows will have no effect, and reduces the risk of dangling pointers. The replicated version provides greater protection against errors by running several instances of the application simultaneously and voting on their output. Because each replica is randomized differently, each replica will likely produce different output if it has an error, and some replicas are likely to run correctly despite the error.
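The core mechanism fits in a few lines of C. The sketch below is illustrative only: the names (dh_malloc, dh_free), the single size class, and the fixed bounds are all hypothetical, and the real DieHard uses segregated size classes, stronger randomness, and more careful bookkeeping.

```c
/* Minimal sketch of DieHard-style randomized allocation for one size
 * class, over a heap M times larger than the assumed live-object bound. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define OBJ_SIZE 64          /* one size class                        */
#define LIVE_MAX 1024        /* assumed bound on live objects         */
#define M        2           /* heap expansion factor                 */
#define SLOTS    (LIVE_MAX * M)

static uint8_t heap[SLOTS][OBJ_SIZE];
static uint8_t used[SLOTS];  /* allocation bitmap */

void *dh_malloc(void) {
    /* Probe random slots until a free one turns up.  With M = 2, at
     * most half the slots are ever occupied, so the expected number
     * of probes is at most 2. */
    for (;;) {
        size_t i = (size_t)(rand() % SLOTS);
        if (!used[i]) {
            used[i] = 1;
            return heap[i];
        }
    }
}

void dh_free(void *p) {
    size_t i = (size_t)((uint8_t (*)[OBJ_SIZE])p - heap);
    if (i < SLOTS && used[i]) {          /* tolerate invalid/double frees */
        used[i] = 0;
        memset(heap[i], 0, OBJ_SIZE);    /* one simple stale-read mitigation */
    }
}
```

Because at most a 1/M fraction of slots is ever occupied, a small overflow is likely to land on a free slot, and a freed slot is unlikely to be reused soon; these are the observations that DieHard's probabilistic analysis makes precise.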

system                           description                                                          section

Transparently improving performance and reliability
  randomized algorithms
    DieHard [3, 4]               tolerates memory errors with high probability (PLDI 2006)               2.1
    Exterminator [17]            automatically corrects memory errors (PLDI 2007)                        2.1
    Archipelago [16]             tolerates/detects overflows with high probability                       2.1
  cooperative memory management (OS+GC)
    Bookmarking Collection [14]  garbage collection without paging (PLDI 2005)                           2.2
    CRAMM [21, 20]               dynamically adjusts heap to maximize performance (OSDI 2006)            2.2
  transparency for contributory applications
    TMM [10]                     transparent memory management (USENIX 2006)                             2.3
    TFS [12, 11]                 transparent file system (FAST 2007)                                     2.3
  efficient memory management
    Hoard [1, 2]                 scalable concurrent memory manager for multithreading (ASPLOS-IX)       2.4
    MC² [18]                     copying GC with low pause times for embedded devices (OOPSLA 2004)
    Vam [13]                     locality-improving memory allocator (MSP 2005)

Simplifying correct & efficient programming
  programming languages
    Flux [8, 9]                  PL for composing correct, predictable concurrent servers (USENIX 2006)  3.1
    Eon [19]                     PL for energy-aware perpetual systems (SenSys 2007)                     3.2
    CODE/POEMS [7]               parallel programming languages (IJHPCA 2000)
  software infrastructure
    Heap Layers [5]              high-performance framework for composing memory managers (PLDI 2001)

Quantitative memory management studies
    reaps [6]                    reconsidering custom allocation (OOPSLA 2002)                           4.1
    oracular memory management [15]  quantifying garbage collection vs. malloc (OOPSLA 2005)             4.2

Table 1: Overview of my research contributions, grouped thematically.

In exchange for space and modest runtime overhead (average 6%), DieHard, even in its standalone mode, provides significant protection against memory errors in real programs. Without DieHard, a version of the Mozilla web browser crashes every time it loads a particular page that triggers a buffer overflow; with DieHard, it runs correctly 60-70% of the time. A similar bug in the Squid web cache triggers failure on every execution; DieHard enables it to run correctly in 10 out of 10 runs.

Exterminator: Finding and Fixing Errors Automatically

While DieHard uses randomization to tolerate errors, Exterminator [17] combines randomization with statistical techniques to automatically isolate and correct memory errors. Exterminator exploits DieHard-style randomization to pinpoint errors with high precision. The insight is that, when running with a randomized memory layout, a memory error affects each execution of a program differently. By comparing the contents of the heap across replicas or executions, Exterminator can determine exactly where the error occurred. From this information, Exterminator then derives runtime patches that fix these errors both in current and subsequent executions. In addition, Exterminator enables collaborative bug correction by merging patches generated by multiple users.

We have analytically and empirically demonstrated Exterminator's effectiveness at detecting and correcting both injected and real faults. For example, after three runs, Exterminator can locate a heap overflow in the Squid web cache proxy and generate an appropriate patch that prevents the heap overflow in subsequent executions. It also detects and corrects errors in the Mozilla Firefox browser, while not perceptibly affecting performance.
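The isolation step at Exterminator's heart can be sketched as follows. Everything here is simplified and hypothetical (one canary word at the start of each slot, a slot_t/find_culprit API); the real system attributes corruption to allocation sites statistically and synthesizes runtime patches from the result.

```c
/* Sketch of intersecting overflow suspects across randomized replicas. */
#include <stdbool.h>

#define NSLOTS 1024   /* heap slots, in one replica's address order */
#define NSITES 128    /* distinct allocation sites in the program   */

typedef struct {
    int  site;        /* allocation site of the object in this slot */
    bool corrupted;   /* its leading canary was found overwritten   */
} slot_t;

typedef struct { slot_t slots[NSLOTS]; } replica_t;

/* An overflow from object X clobbers the canary of whatever slot
 * happens to sit just after X in one replica's random layout, so the
 * object preceding a corrupted slot is a suspect in that replica. */
static void suspects(const replica_t *r, bool out[NSITES]) {
    for (int s = 0; s < NSITES; s++) out[s] = false;
    for (int i = 1; i < NSLOTS; i++)
        if (r->slots[i].corrupted)
            out[r->slots[i - 1].site] = true;
}

/* Intersect suspects across replicas: a site blamed under every
 * (independent) random layout is the culprit with high probability. */
int find_culprit(const replica_t *reps, int nreps) {
    bool agree[NSITES], cur[NSITES];
    suspects(&reps[0], agree);
    for (int r = 1; r < nreps; r++) {
        suspects(&reps[r], cur);
        for (int s = 0; s < NSITES; s++) agree[s] = agree[s] && cur[s];
    }
    int culprit = -1, nagree = 0;
    for (int s = 0; s < NSITES; s++)
        if (agree[s]) { culprit = s; nagree++; }
    return nagree == 1 ? culprit : -1;  /* ambiguous: add more replicas */
}
```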

Archipelago: Isolating Objects in Vast Address Spaces

Archipelago [16] takes the notion of random placement across a larger heap to an extreme. In effect, it is a runtime system that trades address space, a plentiful resource on 64-bit systems, for reliability. Archipelago randomly allocates heap objects far apart in virtual address space, effectively isolating each object from overflows from other objects. Archipelago also protects against dangling pointer errors by preserving the contents of freed objects. Archipelago thus trades virtual address space for significantly improved program reliability and security, while limiting physical memory consumption by tracking the working set of an application and compacting cold objects. We find that Archipelago allows applications to continue to run correctly in the face of thousands of memory errors. Across a suite of server applications, Archipelago's performance overhead is 6% on average (between -7% and 22%), making it especially suitable for protecting servers that have known security vulnerabilities due to heap memory errors.

2.2 Cooperative Memory Management

Garbage collection offers numerous software engineering advantages. However, it interacts poorly with virtual memory managers. Most existing garbage collectors visit many more pages than the application itself and touch pages without regard to which ones are in memory, especially during full-heap garbage collection. The resulting paging can cause throughput to plummet and pause times to spike to seconds or even minutes.

I have developed cooperative memory managers that involve both the operating system's virtual memory manager and the garbage collector in the memory allocation process. These subsystems are generally treated as black boxes, but each side has information that is invaluable to the other. For example, the virtual memory manager knows detailed information about the reference history of each page of memory, and whether it is on disk or in transit between memory and disk. On the other hand, the garbage collector knows which pages in memory are garbage (touched but without any useful information), and which can be returned to the system without needing to have their contents written to disk.

Bookmarking Collection: Avoiding Paging

Bookmarking collection (BC) [14] is a novel garbage collection algorithm that cooperates with the operating system to eliminate paging. It records summary information ("bookmarks") about evicted pages that enables it to perform in-memory full-heap collections. Just before memory is paged out, the collector bookmarks the targets of pointers from those pages. Using these bookmarks, BC can perform full garbage collection without loading the pages back from disk. In the absence of memory pressure, the bookmarking collector matches the throughput of the best collector we tested while running in smaller heaps. In the face of memory pressure, it improves throughput by up to a factor of five and reduces pause times by up to a factor of 45 over the next best collector. Compared to a collector that consistently provides high throughput (generational mark-sweep), the bookmarking collector reduces pause times by up to 218X and improves throughput by up to 41X.
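A minimal sketch of the eviction hook follows, under simplifying assumptions: one summary bit per page, a hypothetical on_evict() upcall from the cooperative kernel, and an assumed runtime service for enumerating a page's outgoing pointers. The real collector keeps finer-grained bookmarks and works with a modified Linux virtual memory manager.

```c
/* Sketch of bookmarking a page just before the kernel evicts it. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define NPAGES     (1u << 18)   /* toy: 1 GB of 4 KB pages */

static bool evicted[NPAGES];    /* page currently lives on disk   */
static bool bookmarked[NPAGES]; /* an evicted page points into it */

static size_t page_of(const void *p) { return (uintptr_t)p >> PAGE_SHIFT; }

/* Assumed runtime service (not shown): enumerate the outgoing pointer
 * fields of all objects residing on a page. */
extern size_t pointers_on_page(size_t page, void **out, size_t max);

/* Invoked just before the (cooperative) kernel evicts a page: summarize
 * its outgoing pointers so collection never needs to touch it again. */
void on_evict(size_t page) {
    void *ptrs[512];
    size_t n = pointers_on_page(page, ptrs, 512);
    for (size_t i = 0; i < n; i++)
        bookmarked[page_of(ptrs[i])] = true;   /* drop a bookmark */
    evicted[page] = true;
}

/* During tracing: never chase a pointer onto an evicted page (that
 * would fault it in from disk).  Objects on bookmarked pages are
 * conservatively retained, since an evicted page may reference them. */
bool may_trace(const void *p)   { return !evicted[page_of(p)]; }
bool must_retain(const void *p) { return bookmarked[page_of(p)]; }
```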
CRAMM: OS Support for Garbage-Collected Applications

The performance of garbage-collected applications is highly sensitive to heap size, the amount of memory available for allocation. Choosing the appropriate heap size for a garbage-collected application (one that is large enough to maximize throughput by minimizing the number of collections, but small enough to avoid paging) is a key performance challenge. The ideal heap size is one that makes the working set of garbage collection just fit within the process's main memory allocation. However, an a priori best choice is impossible in multiprogrammed environments, since the amount of main memory allocated to each process constantly changes.

CRAMM (Cooperative Robust Automatic Memory Management) [21, 20] is a cooperative OS-GC approach that automatically adapts to available memory. CRAMM consists of two parts: (1) a new virtual memory system that collects detailed reference information for (2) an analytical model tailored to the underlying garbage collection algorithm. The CRAMM virtual memory system tracks recent reference behavior (miss curves) with low overhead. The CRAMM heap sizing model uses this information to compute a heap size that maximizes throughput while minimizing paging. This approach pushes most of the complexity of heap sizing into the operating system, allowing its use with a wide range of existing garbage collection algorithms. CRAMM transparently adjusts the amount of space devoted to a garbage-collected program, expanding it to reduce costly garbage collections (thus improving performance) but limiting it so that it remains in main memory. In the face of dynamic changes in memory availability, CRAMM increases the throughput of Java applications by up to 240%.
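Both CRAMM and the TMM system described in the next section turn a sampled miss curve into a memory-sizing decision. The toy sketch below shows both directions; the names and the 2x footprint model are illustrative placeholders, not the systems' actual models.

```c
/* Toy uses of a VM-sampled miss curve. */
#include <stddef.h>

/* A miss curve maps a main-memory allotment (MB) to the page-fault
 * rate the process would suffer at that size; it decreases as memory
 * grows.  CRAMM's virtual memory system samples such curves cheaply. */
typedef double (*miss_curve_fn)(size_t mem_mb);

/* CRAMM direction: grow the heap while a full collection's working
 * set (toy model: twice the heap) still fits in the current allotment,
 * so collections become rarer but never cause paging. */
size_t choose_heap_mb(size_t allotment_mb) {
    size_t heap_mb = 1;
    while (2 * (heap_mb + 1) <= allotment_mb)
        heap_mb++;
    return heap_mb;
}

/* TMM direction (Section 2.3): donate as much memory as possible
 * while the user's predicted fault rate stays within budget. */
size_t donatable_mb(miss_curve_fn curve, size_t total_mb, double budget) {
    size_t keep_mb = total_mb;
    while (keep_mb > 1 && curve(keep_mb - 1) <= budget)
        keep_mb--;                 /* the user is still happy with less */
    return total_mb - keep_mb;
}
```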
2.3 Transparent Execution of Contributory Applications

Contributory applications allow users to donate unused resources on their personal computers to a shared pool. Applications such as SETI@home, Folding@home, and Freenet are now in wide use and provide a variety of services, including data processing and content distribution. While users are generally willing to give up unused CPU cycles, the use of memory by contributory applications deters participation in such systems. Contributory applications pollute the machine's memory, forcing user pages to be evicted to disk. This paging can disrupt user activity for seconds or even minutes. Similarly, while several research projects have proposed contributory applications to support peer-to-peer storage systems, their adoption has been relatively limited. A key barrier to the adoption of contributory storage systems is that contributing a large quantity of local storage interferes with the principal user of the machine.

I have developed operating system support for contributory applications that require substantial memory or disk space. The first system supports the transparent contribution of memory. Our transparent memory manager (TMM) [10] controls memory usage by contributory applications, ensuring that users will not notice an increase in the miss rate of their applications. TMM uses sampling to dynamically track the miss curve of the user's applications. This curve relates the amount of memory available to the number of page faults that would be incurred with that much memory. TMM then allocates as much memory as possible to the contributory application while not forcing an excessive number of page faults in the user's applications. In practice, we have found that TMM is able to limit user page miss overhead to almost imperceptible levels (degrading performance by just 1.7%) while donating hundreds of megabytes of memory.

While TMM supports the transparent contribution of memory, the Transparent File System (TFS) [11, 12] supports the transparent contribution of disk space. TFS provides background tasks with large amounts of unreliable storage (all of the currently available space) without impacting the performance of ordinary file access operations. TFS effectively superimposes another file system on top of the empty space on the disk, which can be overwritten by the user if needed. Because peer-to-peer storage systems must tolerate arbitrary node failures, these overwrites, which TFS's allocation policy makes unlikely, do not impact correctness. Our studies have shown that TFS allows a peer-to-peer contributory storage system to provide 40% more storage at twice the performance of a user-space storage mechanism.

2.4 Hoard: Scalable Concurrent Memory Management

Concurrent, multithreaded C and C++ programs such as web servers, database managers, news servers, and scientific applications are increasingly prevalent. For these applications, the memory allocator is often a bottleneck that severely limits program performance and scalability on multiprocessor systems. Previous allocators suffer from problems that include poor performance and scalability, and heap organizations that introduce false sharing. Worse, many allocators exhibit a dramatic increase in memory consumption when confronted with a producer-consumer pattern of object allocation and freeing. This increase in memory consumption can range from a factor of P (the number of processors) to unbounded. The problem previously went unnoticed because researchers had not attempted to formally analyze the characteristics of these allocation algorithms.

Hoard [1, 2] is a fast memory manager for C and C++ programs that provably solves all of these problems, dramatically improving performance and scalability. Hoard combines one global heap and per-processor heaps with a novel discipline that bounds memory consumption and has very low synchronization costs in the common case. In the common case, Hoard lets threads allocate and free memory independently. Hoard avoids blowup by moving chunks to the global heap (making them available for reuse by other threads) only when per-processor heaps fall below a certain fullness threshold (e.g., 7/8). In addition to exhibiting memory consumption that is asymptotically identical to that of an ideal sequential memory allocator, Hoard yields low average fragmentation. It improves overall program performance over the standard Solaris allocator by up to a factor of 60 on 14 processors, and by up to a factor of 18 over the next best allocator we tested.
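The following sketch conveys Hoard's ownership discipline under drastic simplifications: fixed-size chunks, one lock per heap, and hypothetical names throughout. Real Hoard manages superblocks across multiple size classes, but the shape of the fast path and of the emptiness check is the same idea.

```c
/* Sketch of per-processor heaps donating chunks back to a global heap. */
#include <pthread.h>
#include <stddef.h>

#define MIN_HELD       8   /* never shrink below this many chunks       */
#define EMPTY_FRACTION 8   /* donate when < 1/8 of held chunks in use   */
                           /* (cf. the 7/8 fullness threshold above)    */

typedef struct chunk { struct chunk *next; char data[4096]; } chunk_t;

typedef struct {
    pthread_mutex_t lock;
    chunk_t *free_list;
    int in_use, held;
} heap_t;

static heap_t global_heap = { PTHREAD_MUTEX_INITIALIZER, NULL, 0, 0 };

/* Common case: serve allocations from this thread's processor heap h,
 * touching no global state unless the local heap is exhausted. */
void *local_alloc(heap_t *h) {
    pthread_mutex_lock(&h->lock);
    chunk_t *c = h->free_list;
    if (c) {
        h->free_list = c->next;
    } else {
        pthread_mutex_unlock(&h->lock);        /* never hold two locks */
        pthread_mutex_lock(&global_heap.lock);
        c = global_heap.free_list;
        if (c) { global_heap.free_list = c->next; global_heap.held--; }
        pthread_mutex_unlock(&global_heap.lock);
        if (!c) return NULL;                   /* real Hoard maps fresh memory */
        pthread_mutex_lock(&h->lock);
        h->held++;
    }
    h->in_use++;
    pthread_mutex_unlock(&h->lock);
    return c->data;
}

/* Free back to the owning heap; if the heap becomes sufficiently
 * empty, donate a chunk to the global heap for reuse by other
 * threads.  This is what bounds producer-consumer blowup. */
void local_free(heap_t *h, void *p) {
    chunk_t *c = (chunk_t *)((char *)p - offsetof(chunk_t, data));
    pthread_mutex_lock(&h->lock);
    c->next = h->free_list;
    h->free_list = c;
    h->in_use--;
    chunk_t *give = NULL;
    if (h->held > MIN_HELD && h->in_use * EMPTY_FRACTION < h->held) {
        give = h->free_list;
        h->free_list = give->next;
        h->held--;
    }
    pthread_mutex_unlock(&h->lock);
    if (give) {
        pthread_mutex_lock(&global_heap.lock);
        give->next = global_heap.free_list;
        global_heap.free_list = give;
        global_heap.held++;
        pthread_mutex_unlock(&global_heap.lock);
    }
}
```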

3 Simplifying Correct and Efficient Programming

General-purpose programming languages are in wide use, but their generality can make them a poor match for particular application classes. For example, some existing languages provide direct support for concurrency, but do little to ensure that the resulting programs run correctly or perform well. However, an entirely new programming language creates a whole new class of problems, especially if it requires programmers to rewrite all of their programs, or if it prevents access to libraries that they have come to depend on.

My approach has been to develop lightweight coordination languages: new programming languages whose role is to tie together existing functionality, generally written in conventional programming languages, into new programs. Using these languages, programmers can compose their programs from previously written components. This process is far faster and easier than writing entire applications from scratch in a new language. Just as importantly, because these languages expose key information to the compiler, the resulting applications can be both safer and more efficient than their hand-written equivalents.

3.1 Flux: A Language for Composing Scalable Servers

Programming high-performance server applications is challenging: it is both complicated and error-prone to write the concurrent code required to deliver high performance and scalability. Performance bottlenecks are difficult to identify and correct, and it is difficult to predict performance prior to deployment.

Flux [8, 9] is a language that dramatically simplifies the construction of scalable, high-performance client-server applications. Flux lets programmers compose off-the-shelf, sequential C, C++, or Java functions into concurrent servers, in a clean syntax inspired by Unix pipes. The Flux compiler type-checks programs and guarantees that they are deadlock-free, and we show that the resulting servers match or exceed the performance of a number of hand-written servers. In addition, the Flux compiler automatically generates discrete event simulators that accurately predict actual server performance under load and with different hardware resources. For example, using information derived from a single-processor run, the Flux-generated simulator accurately predicts the performance of a server running on a machine with 16 processors.
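A schematic example suggests the flavor. The flow declaration in the comment below is written in the spirit of Flux's pipe-inspired syntax rather than verbatim Flux, and the handler names are hypothetical; the programmer supplies only ordinary sequential C functions like these, and the compiler wires them into a concurrent, deadlock-free server.

```c
#include <stddef.h>

/* Hypothetical Flux-style flow (schematic, not verbatim Flux syntax):
 *
 *   Image(request) = ReadRequest -> CheckCache -> Compress -> Reply;
 *
 * Each stage below is an ordinary sequential C function; the compiler,
 * not the programmer, decides how the stages run concurrently. */
typedef struct { int socket; char path[256]; } request_t;
typedef struct { char *bytes; size_t len; }    image_t;

int ReadRequest(int socket, request_t *out);       /* parse the request      */
int CheckCache(const request_t *r, image_t *out);  /* serve hits immediately */
int Compress(image_t *img);                        /* CPU-bound stage        */
int Reply(int socket, const image_t *img);         /* write the response     */
```

Because the composition is explicit, the compiler can analyze it (for example, for deadlock freedom) and can also derive the discrete event simulator mentioned above from the same flow graph.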
3.2 Eon: An Energy-Aware Programming Language

Embedded systems can operate perpetually without being connected to a power source by harvesting environmental energy from motion, the sun, wind, or heat differentials. However, programming these perpetual systems is challenging. To adapt to changing energy levels, programmers can adjust the frequency of execution of energy-intensive parts of their code, or deliver higher service levels when energy is plentiful and lower service levels when energy is scarce. However, it is difficult for programmers to predict how much energy each part of their program consumes, and thus to manage adjustments to available energy. Worse, explicit energy management can tie a program to a particular hardware platform, limiting portability.

I co-led the development of Eon [19], a programming language and runtime system designed to support the development of perpetual systems. To our knowledge, Eon is the first energy-aware programming language. Eon is a declarative coordination language based on Flux (Section 3.1) that greatly simplifies the programming of perpetual systems. Eon lets programmers compose programs from components written in C or nesC. Paths through the program ("flows") may be annotated with different energy states. Eon's automatic energy management then dynamically adapts these states to current and predicted energy levels: it chooses which flows to execute and adjusts their rates of execution, maximizing the quality of service under available energy constraints. In other words, Eon pushes most of the complexity of programming perpetual systems into the runtime system, hiding these details from the programmer.

We have deployed Eon in two perpetual applications that run on widely different hardware platforms: a GPS-based location tracking sensor deployed on a threatened species of turtle (TurtleNet), and a solar-powered camera sensor for remote, ad hoc deployments. To validate our belief that Eon makes it easier to write energy-efficient programs, we conducted a user study that compared novice Eon programmers with experienced C programmers. The Eon programmers produced more efficient energy-adaptive systems in substantially less time.
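The adaptation policy that such a runtime performs on the programmer's behalf might look like the following; the flow table, units, and greedy policy are illustrative assumptions, not Eon's actual algorithm.

```c
/* Toy rate adaptation: fund flows in priority order until the
 * predicted energy income is spent. */
#include <stddef.h>

typedef struct {
    const char *name;
    double mj_per_run;   /* measured energy cost of one run (mJ)        */
    double rate_hz;      /* in: desired rate; out: rate actually funded */
} flow_t;

/* income_mw is predicted harvest in mW, i.e., millijoules per second,
 * as supplied by an (assumed) energy-prediction module. */
void schedule(flow_t *flows, size_t n, double income_mw) {
    double budget = income_mw;                /* mJ/s left to spend        */
    for (size_t i = 0; i < n; i++) {          /* flows sorted by importance */
        double affordable_hz = budget / flows[i].mj_per_run;
        if (flows[i].rate_hz > affordable_hz)
            flows[i].rate_hz = affordable_hz; /* degrade this flow's rate  */
        budget -= flows[i].rate_hz * flows[i].mj_per_run;
    }
}
```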

4 Quantitative Memory Management Studies

Academics and practitioners are often at odds when it comes to memory management. While academics favor the use of general-purpose mechanisms and advocate the use of safe memory management systems like garbage collection, practitioners often write their own memory managers and avoid garbage-collected languages because they believe these will degrade performance. I have developed novel measurement frameworks to bring science to these questions, quantifying for the first time the performance impact of different memory management approaches.

4.1 Reconsidering Custom Memory Allocation

Programmers hoping to achieve performance improvements often design custom memory allocators for their applications. I conducted an in-depth study of eight applications that use custom allocators [6]. Several of these allocators do not match the semantics of standard general-purpose allocators; I used my Heap Layers framework [5] to create efficient replacements that layer these semantics on top of a general-purpose allocator (the Lea allocator). Surprisingly, for six of these applications, this general-purpose allocator performs as well as or better than the custom allocators. The two exceptions use regions, which deliver higher performance (improvements of up to 44%). Regions also reduce programmer burden and eliminate a source of memory leaks. However, the inability to free individual objects within regions can lead to a substantial increase in memory consumption, which precludes the use of regions for common programming idioms and reduces their usefulness.

Reaps are a generalization of general-purpose and region-based allocators. They provide a full range of region semantics, allowing associated objects to be deallocated in a single operation, while adding heap-like individual object deletion and reuse. Reaps provide high performance, matching the performance of custom allocators while enabling substantial space savings.
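A toy reap, with hypothetical names, makes the combination concrete: region-style bulk teardown plus the per-object free and reuse that plain regions lack. Real reaps are built from Heap Layers [5] and handle variable object sizes and alignment.

```c
/* Sketch of a fixed-size reap: region semantics plus individual free. */
#include <stdlib.h>

typedef struct blk { struct blk *next_all; } blk_t;

typedef struct {
    blk_t *all;      /* every live block, for one-shot destruction */
    void  *freed;    /* payloads freed individually, reused first  */
    size_t size;     /* fixed object size, for simplicity          */
} reap_t;

void reap_init(reap_t *r, size_t size) {
    r->all = NULL;
    r->freed = NULL;
    r->size = size < sizeof(void *) ? sizeof(void *) : size;
}

/* Heap-like: reuse an individually freed object if one exists;
 * otherwise carve a fresh block and remember it for bulk teardown. */
void *reap_alloc(reap_t *r) {
    if (r->freed) {
        void *p = r->freed;
        r->freed = *(void **)p;      /* pop the intrusive free list */
        return p;
    }
    blk_t *b = malloc(sizeof(blk_t) + r->size);
    if (!b) return NULL;
    b->next_all = r->all;
    r->all = b;
    return b + 1;                    /* payload follows the header  */
}

/* The operation plain regions lack: free (and later reuse) one object. */
void reap_free(reap_t *r, void *p) {
    *(void **)p = r->freed;          /* link through the dead payload */
    r->freed = p;
}

/* Region-like: deallocate everything associated with the reap at once. */
void reap_destroy(reap_t *r) {
    while (r->all) {
        blk_t *dead = r->all;
        r->all = dead->next_all;
        free(dead);
    }
    r->freed = NULL;
}
```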
4.2 Quantifying the Performance of GC vs. malloc

Garbage collection yields numerous software engineering benefits, but its quantitative impact on performance has been the subject of considerable debate. While one can compare the cost of conservative garbage collection to explicit memory management in C/C++ programs by linking in an appropriate collector, such a comparison is not possible for languages designed for garbage collection (e.g., Java), since programs in these languages naturally contain no calls to free.

I introduced a novel experimental methodology that quantifies the performance of precise garbage collection versus explicit memory management [15]. Our system, called oracular memory management, can treat unaltered Java programs as if they used explicit memory management by relying on oracles to insert calls to free. These oracles are generated from profile information gathered in earlier application runs. By executing inside an architecturally detailed simulator, the oracular memory manager eliminates the cost of consulting an oracle while measuring the costs of calling malloc and free. We evaluated two different oracles: a liveness-based oracle that aggressively frees objects immediately after their last use, and a reachability-based oracle that conservatively frees objects just after they become unreachable. These oracles span the range of possible placements of explicit deallocation calls.

The results quantify the time-space tradeoff of garbage collection: with five times as much memory, a state-of-the-art garbage collector (Appel-style generational, with a non-copying mature space) matches the performance of reachability-based explicit memory management. With only three times as much memory, the collector runs on average 17% slower than explicit memory management. With only twice as much memory, garbage collection degrades performance by nearly 70%. When physical memory is scarce, paging causes garbage collection to run an order of magnitude slower than explicit memory management. This latter result, demonstrating that paging is a key challenge to the performance of garbage collection, motivated our work on the bookmarking garbage collector and CRAMM (Section 2.2).

5 Future Directions

I plan to continue to focus my research on systems that automatically improve reliability and performance.

Automatically correcting errors: I am especially interested in continuing this research direction, which I began with DieHard and then Exterminator. I am investigating domains where it is possible to ascribe reasonable semantics to buggy programs, as I did by assuming infinite-heap semantics for programs with memory errors. One area I am currently investigating is concurrency. Multithreaded programming is notoriously difficult to get right, and the shift to multicore architectures means that programmers will increasingly need to rely on such techniques to increase performance. A current hot trend is transactional memory (TM), which replaces error-prone lock statements with an atomic construct. Transactional memory promises to simplify the programming of concurrent applications, but its semantics and implementation approaches remain in flux. More importantly, while TM's performance penalties may eventually be overcome by hardware support, it will neither offer benefits to existing concurrent programs nor address the key problem of concurrency: deciding what to put into a critical section. I am working on runtime and language-based solutions that will automatically ensure correct execution with respect to an idealized semantics of any concurrent program, without the need for programmer intervention or custom hardware.

Memory use debugging: Despite years of study, memory management remains challenging for programmers. Deployed programs, whether written in C or Java, often suffer from memory leaks, where allocated memory either is never explicitly reclaimed or remains reachable but is never reused. Because memory leaks limit the uptime of systems and can exhaust available physical memory, leading to catastrophic paging, it is important to find and remove leaks. I am working on systems that can automatically discover, report, and rectify memory leaks.

Revisiting operating system design: While desktop and server systems have changed dramatically over the last thirty years, the designs of existing operating systems remain largely unchanged. I am working to bring OS support up to the challenges of modern applications. My first work in this direction was OS support for garbage-collected applications (CRAMM). I am now working on bridging the gap between CPU schedulers and the virtual memory manager to ensure the responsiveness of modern, memory-hungry interactive applications. I am also working on hypervisor-based support for systems running inside virtual machines: our goal is to intelligently consolidate VMs in order to provide high performance while minimizing energy consumption.

References

[1] E. D. Berger. The Hoard memory allocator.

[2] E. D. Berger, K. S. McKinley, R. D. Blumofe, and P. R. Wilson. Hoard: A scalable memory allocator for multithreaded applications. In Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS-IX), Cambridge, MA, Nov. 2000.

[3] E. D. Berger and B. G. Zorn. DieHard: Probabilistic memory safety for unsafe languages. In Proceedings of the 2006 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2006), New York, NY, USA, 2006. ACM Press.

[4] E. D. Berger and B. G. Zorn. DieHard: Efficient probabilistic memory safety. Technical report, Department of Computer Science, University of Massachusetts Amherst, Mar. 2007. Submitted for publication to ACM Transactions on Programming Languages and Systems.

[5] E. D. Berger, B. G. Zorn, and K. S. McKinley. Composing high-performance memory allocators. In Proceedings of the 2001 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2001), Snowbird, Utah, June 2001.

[6] E. D. Berger, B. G. Zorn, and K. S. McKinley. Reconsidering custom memory allocation. In Proceedings of the Conference on Object-Oriented Programming Systems, Languages, and Applications (OOPSLA 2002), Seattle, Washington, Nov. 2002.

[7] J. C. Browne, E. D. Berger, and A. Dube. Compositional development of performance models in POEMS. The International Journal of High Performance Computing Applications, 14(4), Winter 2000.

[8] B. Burns, K. Grimaldi, A. Kostadinov, E. D. Berger, and M. D. Corner. Flux: A language for programming high-performance servers. In Proceedings of the 2006 USENIX Annual Technical Conference, June 2006.

[9] B. Burns, K. Grimaldi, A. Kostadinov, G. Tarasuk-Levin, M. Meehan, E. D. Berger, and M. D. Corner. Flux: Composing efficient and scalable servers, Aug. 2007. Submitted for publication to ACM Transactions on Programming Languages and Systems.

[10] J. Cipar, M. D. Corner, and E. D. Berger. Transparent contribution of memory. In Proceedings of the 2006 USENIX Annual Technical Conference, June 2006.

[11] J. Cipar, M. D. Corner, and E. D. Berger. Contributing storage using the transparent file system. ACM Transactions on Storage, Nov. 2007.

[12] J. Cipar, M. D. Corner, and E. D. Berger. TFS: A transparent file system for contributory storage (Best Paper Award). In Proceedings of the USENIX Conference on File and Storage Technologies (FAST), San Jose, CA, Feb. 2007.

[13] Y. Feng and E. D. Berger. A locality-improving dynamic memory allocator. In Proceedings of the ACM SIGPLAN 2005 Workshop on Memory System Performance (MSP), Chicago, IL, June 2005.

[14] M. Hertz and E. D. Berger. Garbage collection without paging. In Proceedings of the 2005 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2005). ACM, June 2005.

[15] M. Hertz and E. D. Berger. Quantifying the performance of garbage collection vs. explicit memory management. In Proceedings of the 20th Annual ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages, and Applications (OOPSLA 2005), San Diego, CA, Oct. 2005.

[16] V. Lvin, G. Novark, E. D. Berger, and B. G. Zorn. Archipelago: Trading address space for reliability and security. Technical report, Department of Computer Science, University of Massachusetts Amherst, Aug. 2007. Submitted for publication.

[17] G. Novark, E. D. Berger, and B. G. Zorn. Exterminator: Automatically correcting memory errors with high probability. In Proceedings of the 2007 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2007), New York, NY, USA, 2007. ACM Press.

[18] N. Sachindran, J. E. B. Moss, and E. D. Berger. MC²: High-performance garbage collection for memory-constrained environments. In Proceedings of the 19th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA 2004). ACM Press, 2004.

[19] J. Sorber, A. Kostadinov, M. Garber, M. Brennan, M. D. Corner, and E. D. Berger. Eon: A language and runtime system for perpetual systems. In Proceedings of the Fifth International ACM Conference on Embedded Networked Sensor Systems (SenSys 2007), Sydney, Australia, Nov. 2007.

[20] T. Yang, E. D. Berger, S. F. Kaplan, and J. E. B. Moss. CRAMM: Virtual memory support for garbage-collected applications. In Proceedings of the 7th USENIX Symposium on Operating Systems Design and Implementation (OSDI 2006), Nov. 2006.

[21] T. Yang, M. Hertz, E. D. Berger, S. F. Kaplan, and J. E. B. Moss. Automatic heap sizing: Taking real memory into account. In Proceedings of the 2004 ACM SIGPLAN International Symposium on Memory Management (ISMM 2004). ACM Press, Nov. 2004.


More information

The Google File System

The Google File System October 13, 2010 Based on: S. Ghemawat, H. Gobioff, and S.-T. Leung: The Google file system, in Proceedings ACM SOSP 2003, Lake George, NY, USA, October 2003. 1 Assumptions Interface Architecture Single

More information

Threads SPL/2010 SPL/20 1

Threads SPL/2010 SPL/20 1 Threads 1 Today Processes and Scheduling Threads Abstract Object Models Computation Models Java Support for Threads 2 Process vs. Program processes as the basic unit of execution managed by OS OS as any

More information

Distributed Scheduling for the Sombrero Single Address Space Distributed Operating System

Distributed Scheduling for the Sombrero Single Address Space Distributed Operating System Distributed Scheduling for the Sombrero Single Address Space Distributed Operating System Donald S. Miller Department of Computer Science and Engineering Arizona State University Tempe, AZ, USA Alan C.

More information

Take Back Lost Revenue by Activating Virtuozzo Storage Today

Take Back Lost Revenue by Activating Virtuozzo Storage Today Take Back Lost Revenue by Activating Virtuozzo Storage Today JUNE, 2017 2017 Virtuozzo. All rights reserved. 1 Introduction New software-defined storage (SDS) solutions are enabling hosting companies to

More information

PROCESSES AND THREADS THREADING MODELS. CS124 Operating Systems Winter , Lecture 8

PROCESSES AND THREADS THREADING MODELS. CS124 Operating Systems Winter , Lecture 8 PROCESSES AND THREADS THREADING MODELS CS124 Operating Systems Winter 2016-2017, Lecture 8 2 Processes and Threads As previously described, processes have one sequential thread of execution Increasingly,

More information

SWsoft ADVANCED VIRTUALIZATION AND WORKLOAD MANAGEMENT ON ITANIUM 2-BASED SERVERS

SWsoft ADVANCED VIRTUALIZATION AND WORKLOAD MANAGEMENT ON ITANIUM 2-BASED SERVERS SWsoft ADVANCED VIRTUALIZATION AND WORKLOAD MANAGEMENT ON ITANIUM 2-BASED SERVERS Abstract Virtualization and workload management are essential technologies for maximizing scalability, availability and

More information

Managed runtimes & garbage collection. CSE 6341 Some slides by Kathryn McKinley

Managed runtimes & garbage collection. CSE 6341 Some slides by Kathryn McKinley Managed runtimes & garbage collection CSE 6341 Some slides by Kathryn McKinley 1 Managed runtimes Advantages? Disadvantages? 2 Managed runtimes Advantages? Reliability Security Portability Performance?

More information

Chapter 18: Database System Architectures.! Centralized Systems! Client--Server Systems! Parallel Systems! Distributed Systems!

Chapter 18: Database System Architectures.! Centralized Systems! Client--Server Systems! Parallel Systems! Distributed Systems! Chapter 18: Database System Architectures! Centralized Systems! Client--Server Systems! Parallel Systems! Distributed Systems! Network Types 18.1 Centralized Systems! Run on a single computer system and

More information

The benefits and costs of writing a POSIX kernel in a high-level language

The benefits and costs of writing a POSIX kernel in a high-level language 1 / 38 The benefits and costs of writing a POSIX kernel in a high-level language Cody Cutler, M. Frans Kaashoek, Robert T. Morris MIT CSAIL Should we use high-level languages to build OS kernels? 2 / 38

More information

Protecting Mission-Critical Application Environments The Top 5 Challenges and Solutions for Backup and Recovery

Protecting Mission-Critical Application Environments The Top 5 Challenges and Solutions for Backup and Recovery White Paper Business Continuity Protecting Mission-Critical Application Environments The Top 5 Challenges and Solutions for Backup and Recovery Table of Contents Executive Summary... 1 Key Facts About

More information

Java Without the Jitter

Java Without the Jitter TECHNOLOGY WHITE PAPER Achieving Ultra-Low Latency Table of Contents Executive Summary... 3 Introduction... 4 Why Java Pauses Can t Be Tuned Away.... 5 Modern Servers Have Huge Capacities Why Hasn t Latency

More information

Efficient Memory Allocator with Better Performance and Less Memory Usage

Efficient Memory Allocator with Better Performance and Less Memory Usage Efficient Memory Allocator with Better Performance and Less Memory Usage Xiuhong Li, Altenbek Gulila Abstract Dynamic memory allocator is critical for native (C and C++) programs (malloc and free for C;

More information

Managed runtimes & garbage collection

Managed runtimes & garbage collection Managed runtimes Advantages? Managed runtimes & garbage collection CSE 631 Some slides by Kathryn McKinley Disadvantages? 1 2 Managed runtimes Portability (& performance) Advantages? Reliability Security

More information

Robust Memory Management Schemes

Robust Memory Management Schemes Robust Memory Management Schemes Prepared by : Fadi Sbahi & Ali Bsoul Supervised By: Dr. Lo ai Tawalbeh Jordan University of Science and Technology Robust Memory Management Schemes Introduction. Memory

More information

Separating Access Control Policy, Enforcement, and Functionality in Extensible Systems. Robert Grimm University of Washington

Separating Access Control Policy, Enforcement, and Functionality in Extensible Systems. Robert Grimm University of Washington Separating Access Control Policy, Enforcement, and Functionality in Extensible Systems Robert Grimm University of Washington Extensions Added to running system Interact through low-latency interfaces Form

More information

Dynamic Memory Allocation. Gerson Robboy Portland State University. class20.ppt

Dynamic Memory Allocation. Gerson Robboy Portland State University. class20.ppt Dynamic Memory Allocation Gerson Robboy Portland State University class20.ppt Harsh Reality Memory is not unbounded It must be allocated and managed Many applications are memory dominated Especially those

More information

IX: A Protected Dataplane Operating System for High Throughput and Low Latency

IX: A Protected Dataplane Operating System for High Throughput and Low Latency IX: A Protected Dataplane Operating System for High Throughput and Low Latency Belay, A. et al. Proc. of the 11th USENIX Symp. on OSDI, pp. 49-65, 2014. Reviewed by Chun-Yu and Xinghao Li Summary In this

More information

On-Demand Proactive Defense against Memory Vulnerabilities

On-Demand Proactive Defense against Memory Vulnerabilities On-Demand Proactive Defense against Memory Vulnerabilities Gang Chen, Hai Jin, Deqing Zou, and Weiqi Dai Services Computing Technology and System Lab Cluster and Grid Computing Lab School of Computer Science

More information

Compiler Construction

Compiler Construction Compiler Construction Lecture 18: Code Generation V (Implementation of Dynamic Data Structures) Thomas Noll Lehrstuhl für Informatik 2 (Software Modeling and Verification) noll@cs.rwth-aachen.de http://moves.rwth-aachen.de/teaching/ss-14/cc14/

More information

Acknowledgements These slides are based on Kathryn McKinley s slides on garbage collection as well as E Christopher Lewis s slides

Acknowledgements These slides are based on Kathryn McKinley s slides on garbage collection as well as E Christopher Lewis s slides Garbage Collection Last time Compiling Object-Oriented Languages Today Motivation behind garbage collection Garbage collection basics Garbage collection performance Specific example of using GC in C++

More information

QUANTIFYING AND IMPROVING THE PERFORMANCE OF GARBAGE COLLECTION

QUANTIFYING AND IMPROVING THE PERFORMANCE OF GARBAGE COLLECTION QUANTIFYING AND IMPROVING THE PERFORMANCE OF GARBAGE COLLECTION A Dissertation Presented by MATTHEW HERTZ Submitted to the Graduate School of the University of Massachusetts Amherst in partial fulfillment

More information

Parallel Programming Interfaces

Parallel Programming Interfaces Parallel Programming Interfaces Background Different hardware architectures have led to fundamentally different ways parallel computers are programmed today. There are two basic architectures that general

More information

Garbage Collection Algorithms. Ganesh Bikshandi

Garbage Collection Algorithms. Ganesh Bikshandi Garbage Collection Algorithms Ganesh Bikshandi Announcement MP4 posted Term paper posted Introduction Garbage : discarded or useless material Collection : the act or process of collecting Garbage collection

More information

Introduction to Operating Systems. Chapter Chapter

Introduction to Operating Systems. Chapter Chapter Introduction to Operating Systems Chapter 1 1.3 Chapter 1.5 1.9 Learning Outcomes High-level understand what is an operating system and the role it plays A high-level understanding of the structure of

More information

Last week. Data on the stack is allocated automatically when we do a function call, and removed when we return

Last week. Data on the stack is allocated automatically when we do a function call, and removed when we return Last week Data can be allocated on the stack or on the heap (aka dynamic memory) Data on the stack is allocated automatically when we do a function call, and removed when we return f() {... int table[len];...

More information

Lock vs. Lock-free Memory Project proposal

Lock vs. Lock-free Memory Project proposal Lock vs. Lock-free Memory Project proposal Fahad Alduraibi Aws Ahmad Eman Elrifaei Electrical and Computer Engineering Southern Illinois University 1. Introduction The CPU performance development history

More information

Discriminating Hierarchical Storage (DHIS)

Discriminating Hierarchical Storage (DHIS) Discriminating Hierarchical Storage (DHIS) Chaitanya Yalamanchili, Kiron Vijayasankar, Erez Zadok Stony Brook University Gopalan Sivathanu Google Inc. http://www.fsl.cs.sunysb.edu/ Discriminating Hierarchical

More information

CS 426 Parallel Computing. Parallel Computing Platforms

CS 426 Parallel Computing. Parallel Computing Platforms CS 426 Parallel Computing Parallel Computing Platforms Ozcan Ozturk http://www.cs.bilkent.edu.tr/~ozturk/cs426/ Slides are adapted from ``Introduction to Parallel Computing'' Topic Overview Implicit Parallelism:

More information

CS61C : Machine Structures

CS61C : Machine Structures inst.eecs.berkeley.edu/~cs61c/su06 CS61C : Machine Structures Lecture #6: Memory Management CS 61C L06 Memory Management (1) 2006-07-05 Andy Carle Memory Management (1/2) Variable declaration allocates

More information

Analyzing Real-Time Systems

Analyzing Real-Time Systems Analyzing Real-Time Systems Reference: Burns and Wellings, Real-Time Systems and Programming Languages 17-654/17-754: Analysis of Software Artifacts Jonathan Aldrich Real-Time Systems Definition Any system

More information

Memory Management (Chaper 4, Tanenbaum)

Memory Management (Chaper 4, Tanenbaum) Memory Management (Chaper 4, Tanenbaum) Memory Mgmt Introduction The CPU fetches instructions and data of a program from memory; therefore, both the program and its data must reside in the main (RAM and

More information

ECE519 Advanced Operating Systems

ECE519 Advanced Operating Systems IT 540 Operating Systems ECE519 Advanced Operating Systems Prof. Dr. Hasan Hüseyin BALIK (10 th Week) (Advanced) Operating Systems 10. Multiprocessor, Multicore and Real-Time Scheduling 10. Outline Multiprocessor

More information

System Models. 2.1 Introduction 2.2 Architectural Models 2.3 Fundamental Models. Nicola Dragoni Embedded Systems Engineering DTU Informatics

System Models. 2.1 Introduction 2.2 Architectural Models 2.3 Fundamental Models. Nicola Dragoni Embedded Systems Engineering DTU Informatics System Models Nicola Dragoni Embedded Systems Engineering DTU Informatics 2.1 Introduction 2.2 Architectural Models 2.3 Fundamental Models Architectural vs Fundamental Models Systems that are intended

More information

Chapter 9 Real Memory Organization and Management

Chapter 9 Real Memory Organization and Management Chapter 9 Real Memory Organization and Management Outline 9.1 Introduction 9.2 Memory Organization 9.3 Memory Management 9.4 Memory Hierarchy 9.5 Memory Management Strategies 9.6 Contiguous vs. Noncontiguous

More information

Chapter 9 Real Memory Organization and Management

Chapter 9 Real Memory Organization and Management Chapter 9 Real Memory Organization and Management Outline 9.1 Introduction 9.2 Memory Organization 9.3 Memory Management 9.4 Memory Hierarchy 9.5 Memory Management Strategies 9.6 Contiguous vs. Noncontiguous

More information

A Correctness Proof for a Practical Byzantine-Fault-Tolerant Replication Algorithm

A Correctness Proof for a Practical Byzantine-Fault-Tolerant Replication Algorithm Appears as Technical Memo MIT/LCS/TM-590, MIT Laboratory for Computer Science, June 1999 A Correctness Proof for a Practical Byzantine-Fault-Tolerant Replication Algorithm Miguel Castro and Barbara Liskov

More information