IPv4 and IPv6 Client-Server Designs: The Sockets Performance


Teddy Mantoro, Media A. Ayu, Amir Borovac and Aqqiela Z. Z. Zay
Department of Computer Science, KICT, International Islamic University Malaysia, Kuala Lumpur, Malaysia

Abstract
Client-server systems have several design alternatives, mainly iterative servers and concurrent servers. Choosing a server design improperly can result in an inefficient use of time and process control. A server performs more process control than its clients, since it has to respond to multiple queries and processes at the same time from different client platforms such as IPv4 or IPv6. This study analyzes the performance of IPv4 and IPv6 in 5 different server designs, i.e. Iterative Server, Concurrent Fork Server, Concurrent Thread Server, Concurrent Pre-Fork Server and Concurrent Pre-Thread Server. The experiments analyzing the CPU time, including kernel-mode and user-mode time, of each server were performed over TCP sockets using several techniques, including assigning 5 to 50 clients with 500 to 5000 consecutive connections per client on each test for each server. This study compares, discusses and analyzes the time allocations of each server type in responding to client queries. The paper reveals that among the 5 server designs, the iterative server took the least time in handling clients, while the concurrent-fork server took the most CPU time in handling multiple clients. Our experimental results show that IPv4 took less time in kernel mode in all five server designs, and IPv6 took less time in user mode only under the iterative, pre-fork, and pre-thread servers. However, in overall performance, IPv4 performs better than IPv6.

Keywords: IPv4; IPv6; Iterative Server; Concurrent Server; Fork Server; Thread Server; Pre-Fork Server; Pre-Thread Server

I. INTRODUCTION

The new era of IPv6 is about to begin, as the IPv4 address space is nearly exhausted.
The recent allocation of IPv4 addresses by the Internet Assigned Numbers Authority (IANA) to the Asia Pacific Network Information Centre (APNIC) left just five blocks in the global pool of IPv4. IPv4 uses 32-bit addresses, which limits it to only 4,294,967,296 possible unique addresses. In 2009 the estimated number of Internet users had increased six-fold, to about 1,802 million. The unique addresses were estimated to be used up by the end of 2011, although since some IP addresses are reused this may be extended somewhat. The next-generation IP, however, has been defined since 1995; its address size is increased to 128 bits, yielding an address space of up to about 3.4 x 10^38 possible unique addresses, written in hexadecimal form. Unfortunately, the performance comparison between IPv4 and IPv6 is still an open question [1,2].

IPv4 and IPv6 have different structures, for example in the header format. Some header fields of IPv4 are no longer present or have been replaced in the IPv6 header: the 6-bit DSCP field and the 2-bit ECN field replace the historical 8-bit traffic class field, the 16-bit header checksum is not carried in IPv6, etc. This aims to speed up the forwarding of data and reduce delay [1].

Besides the IPv4/IPv6 issue, server design is another important issue in managing data transfer between several dedicated servers and multiple clients, especially when multiple clients send and request queries while, at the same time, requesting multiple responses as well. An example of this case is an application for tracking Hajj pilgrims in Makkah, Saudi Arabia. This type of application handles many queries from peers (pilgrims' families in their own countries) monitoring Hajj pilgrims in crowded areas, as many people send their GPS location data to slave servers and multiple slave servers forward the data to the main server [3].
This paper discusses and analyzes the performance of IPv4 and IPv6 on iterative and concurrent servers, the two major types of server. Five (5) different server designs, i.e. Iterative Server, Concurrent Fork Server, Concurrent Thread Server, Concurrent Pre-Fork Server and Concurrent Pre-Thread Server, are reviewed and tested. The concurrent server has two techniques, i.e. thread and fork. Using threads we can develop the concurrent-thread server and pre-thread server, and using fork the concurrent-fork server and pre-fork server can be developed.

The experiments for analyzing the CPU time, including kernel- and user-mode time, of each server were performed with a TCP client-server technique using several setups, i.e.:
1. Assigning 5 to 50 clients with 500 to 5000 consecutive connections for each client on each test for each server.
2. Assigning 100 clients with 10 connections and a fixed amount of data for each child.
3. Assigning 5 clients and 500 connections for every server when comparing the performance of IPv4 and IPv6.

The rest of this paper is organized as follows: Section 2 reviews multithreading mechanisms and parallelization, discussing speculative multithreading in the cases of critical instructions and exception handling. Section 3 discusses the server designs and how they handle clients. The results of the experiments, which focus on time efficiency in user mode and kernel mode when handling clients, are presented and discussed in Section 4. The discussion includes the performance of the IPv4 and IPv6 servers as well as an explanation of the conditions causing such performance.

The paper is closed by a conclusion in Section 5.

II. LITERATURE REVIEW

Choosing the best-performing server model for a specific task requires many things to be taken into consideration. Some of the important factors are the server model/design, the type of socket (IPv4 or IPv6), cache, delay, fork, thread, etc.

In 2001, Roth and Sohi worked on overcoming the critical-instruction problem, in which loads miss in the cache and indirectly lead to mispredicted branches. This condition can delay the fetch and completion of subsequent instructions [4,5]. They came up with speculative Data-Driven Multithreading (DDMT). Their experiments showed that DDMT can reduce the performance degradation caused by cache misses and branch mispredictions. Besides reducing the degradation, DDMT also reduces the number of instructions that need to be fetched and executed by the machine. Speculative DDMT advances the state of the art by overcoming the limitations of existing methods for extracting instruction-level parallelism [6]. However, before speculative DDMT can be deployed, several technologies need to be available, for instance hardware and software support as well as algorithms for thread selection and management.

In 2004, Bhowmik and Franklin developed a compiler framework for speculative DDMT [7]. This framework uses multiple hardware sequencers to fetch and execute, in parallel, speculative threads belonging to a single program. While previous compilers focused only on identifying loop-based and speculative threads, the compiler proposed by Bhowmik and Franklin can additionally identify non-speculative threads and supports nested threads. Moreover, it also tracks control-independence information as well as data-dependence and profile-based information.

Figure 1. The tree structure of server design

III. SERVER DESIGN

As mentioned earlier, 5 server designs are used to study the performance of IPv4 and IPv6. Each of these designs serves clients in a different way: either sequentially (iterative) or in parallel (concurrent). Figure 1 shows the classification of server designs based on the iterative and concurrent techniques for handling requests from multiple clients.

A. Iterative Server

An iterative TCP server serves one client at a time. It processes a client's request to completion and then moves to the next client. Since an iterative server serves only one client at a time, no process control is performed by the server. It deals with a client by allocating the socket, placing it into listen mode and setting the maximum number of connections. After accepting an incoming connection request, it creates a new socket for the connection. It then processes the incoming data, formulates the response and sends the outgoing data across the connection. When finished with a particular client, the iterative server closes the connection, de-allocates the socket and returns to listening mode to wait for the next connection request. Characteristically, it uses a single socket for every single client [8].

B. Concurrent-Fork Server

A concurrent server is a server capable of serving multiple clients at a time. The concurrent-fork server uses the fork function to handle numerous clients. The server creates one child for each client, so it has one client per process. The only limit on the number of clients is the operating system's limit on the number of children that can run under the user ID of the running server. When the server receives and accepts a client's connection, it forks a copy of itself, called a child, and lets the child handle the client. The fork function creates a new, separate process for every single client: it splits the current process into two processes, named a parent and a child.
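The fork-per-client flow just described can be sketched as follows. This is a minimal illustration, not the authors' code: a Unix socketpair() stands in for an accepted TCP connection so the fragment stays self-contained, and the helper name serve_one_fork() is invented here.

```cpp
// Minimal sketch of the fork-per-client pattern (not the authors' code).
// A socketpair stands in for an accepted TCP connection; in a real
// server the child's descriptor would come from accept().
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>
#include <string>

// Serve one "client" by forking a child that echoes a single message.
// Returns the text the parent (acting as the client here) reads back.
std::string serve_one_fork(const std::string& msg) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) return "";

    pid_t pid = fork();            // split into parent and child
    if (pid == 0) {                // child: handle the connection
        close(sv[0]);
        char buf[256];
        ssize_t n = read(sv[1], buf, sizeof(buf));
        if (n > 0) write(sv[1], buf, n);   // echo the request back
        close(sv[1]);
        _exit(0);
    }
    // parent: here it plays the client side of the connection
    close(sv[1]);
    write(sv[0], msg.data(), msg.size());
    char buf[256];
    ssize_t n = read(sv[0], buf, sizeof(buf));
    close(sv[0]);
    waitpid(pid, nullptr, 0);      // reap the child, as a real server must
    return n > 0 ? std::string(buf, n) : "";
}
```

In a real concurrent-fork server the parent loops on accept(), forking once per accepted connection, while each child serves exactly one client and exits.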
The child, i.e. the new process, has the same model/frame as the process that called it (the parent process). One concern is that the CPU time spent by the server on forking increases significantly when the number of clients gets much higher [9].

C. Pre-Fork Server

The pre-fork server follows a similar idea to the concurrent-fork server. The difference is in the way it provides a child for each client. In the concurrent-fork server, a child process is created after the client's request reaches the server, whereas in the concurrent pre-fork server there is a single control process responsible for launching child processes, and the children are created before the clients' requests reach the server. These processes listen for connections and serve them immediately when a client's connection request arrives. With this type of server, clients do not need to wait for child processes to be created, because the children are ready before the clients connect to the server. Figure 2 shows the pre-forking schema: a pool of available children is kept waiting for further connection requests to be served [10].

Figure 2. Pre-forking schema

D. Concurrent-Thread Server

The concept of the concurrent-thread server is similar to that of the concurrent-fork server; however, instead of creating one process for each client, the server creates one thread for each client. In terms of task size, a thread is lighter than a process. Each time a new client requests to connect, a new thread is created. Having threads handle the clients performs faster than having forked processes deal with them [11].

E. Pre-Thread Server

The way the concurrent pre-thread server handles clients is almost the same as the concurrent pre-fork server; however, in this design the clients are served by threads, which are lighter than forked processes. The server is designed to create a given number of threads, rather than forked processes, after the server starts, and those threads wait for incoming connections from clients. The threads are available to the server before any client connection request is received [11].

IV. RESULT AND DISCUSSION

A. Client-Server Performance

To measure client-server performance, we created multiple clients making consecutive connections, run against the various servers. Both servers and clients were developed in C++ for the Unix/Linux platform. For this purpose a socket class based on the UNIX socket API was developed as well. The experiments were done over TCP. Processing time was measured for each experiment in order to benchmark and compare performance. Figure 3 shows the CPU time of each server design. Several experiments were done for the five server designs. Each server accepts child processes, starting from 5 up to 50 child processes, with from 500 up to 5000 connections for each client.
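The pre-threaded design described in Section III-E, with a mutex serializing the calls to accept(), can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's C++ code; run_prethreaded_demo() and worker() are hypothetical names, and the demo also plays the client side so it is self-contained.

```cpp
// Sketch of a pre-threaded TCP server: threads exist before any client
// connects, and a mutex serializes accept() among them.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <atomic>
#include <mutex>
#include <thread>
#include <vector>

std::atomic<int> served{0};
std::mutex accept_mtx;

// Each pre-created thread waits its turn on the mutex, accepts one
// connection, serves it (here: just closes it), then exits so the
// sketch terminates; a real server thread would loop forever.
void worker(int listen_fd) {
    int conn;
    {
        std::lock_guard<std::mutex> lock(accept_mtx);
        conn = accept(listen_fd, nullptr, nullptr);
    }
    if (conn >= 0) { ++served; close(conn); }
}

int run_prethreaded_demo(int nthreads) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                       // let the kernel pick a port
    bind(fd, (sockaddr*)&addr, sizeof(addr));
    socklen_t len = sizeof(addr);
    getsockname(fd, (sockaddr*)&addr, &len); // learn the chosen port
    listen(fd, nthreads);

    // Pre-create the thread pool before any client arrives.
    std::vector<std::thread> pool;
    for (int i = 0; i < nthreads; ++i) pool.emplace_back(worker, fd);

    // Play the clients: open nthreads connections to the server.
    for (int i = 0; i < nthreads; ++i) {
        int c = socket(AF_INET, SOCK_STREAM, 0);
        connect(c, (sockaddr*)&addr, sizeof(addr));
        close(c);
    }
    for (auto& t : pool) t.join();
    close(fd);
    return served.load();
}
```

The pre-fork variant has the same shape, with fork()ed children in place of threads and the kernel (or file locking) arbitrating the concurrent accept() calls.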
The iterative server has the basic server functionality, accepting client requests in FIFO order, and takes the least CPU time to execute. The concurrent-fork server takes the most CPU time executing client requests. This is because the server forks a new child for each incoming connection, which is expensive; when all resources are in use, it can even cause the server to get stuck. The CPU times of the pre-fork and pre-thread servers are almost the same, but pre-thread takes slightly less time because creating a new thread is cheaper, in terms of resources used, than creating a new forked process. Similarly, in the concurrent-thread server one thread is created to accept each connection from a client's request. Based on Figure 3, the concurrent server performs better when it uses threads than when it uses fork.

Figure 3. Performance comparison of the server designs

The next set of experiments was done for the iterative, concurrent-fork, pre-fork, and pre-thread servers. In this experiment, each server was given 100 child processes with 10 connections for each child. Each server thus had to serve 1000 clients in total, with a maximum of 100 simultaneous connections at a time. Although, theoretically, the concurrent server is an extension of the iterative server, this experiment shows that forking for each client connection as it arrives can hurt performance. When handling a small number of clients there is no significant impact on performance, but when the number of clients increases the inefficiency becomes obvious. Table 1 shows that the concurrent-fork server needs more than 23 seconds (CPU kernel time plus user time) to serve all clients, compared to the other servers, which need less than 0.5 second.
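The kernel-mode and user-mode CPU times reported in these experiments can be read on Unix/Linux with getrusage(); the paper does not name the API it used, so this is just one standard way to take such measurements, with helper names invented here.

```cpp
// Sketch: reading per-process user- and kernel-mode CPU time on Unix.
#include <sys/resource.h>
#include <sys/time.h>
#include <utility>

// Returns {user seconds, kernel ("system") seconds} for this process.
std::pair<double, double> cpu_times() {
    rusage ru{};
    getrusage(RUSAGE_SELF, &ru);
    auto secs = [](const timeval& tv) {
        return tv.tv_sec + tv.tv_usec / 1e6;
    };
    return {secs(ru.ru_utime), secs(ru.ru_stime)};
}

// Burn some user-mode CPU so the counters visibly move.
double burn() {
    volatile double x = 0;
    for (int i = 0; i < 10000000; ++i) x += i * 0.5;
    return x;
}
```

To attribute time to a server design rather than the whole run, one would sample cpu_times() before and after the serving loop and report the difference; RUSAGE_CHILDREN gives the totals for reaped forked children.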
TABLE I. CPU TOTAL TIME FOR EACH SERVER DESIGN
(columns: Server Design, CPU User Time, CPU Kernel Time, Total Time)
  Iterative
  Concurrent (one fork per client's request)
  Pre-fork (each child calling accept())
  Pre-thread (mutex locking around accept())

The distribution of client connections handled by the children or threads is shown in Table 2.

TABLE II. COMPARISON OF PRE-FORK AND PRE-THREAD SERVER IN SERVING THE CLIENTS' REQUESTS
(columns: Number of Childs; Number of Clients Serviced by Pre-fork (each child calling accept()); Number of Clients Serviced by Pre-thread (mutex locking around accept()); with a Total row)

If the server has fewer than two threads, the efficiency of handling the clients will not differ much between the pre-fork and pre-thread servers, because the children do not utilize the space of the fork or the thread. The greater the number of children created, the greater the waste in the distribution of clients. In the pre-fork server the kernel is responsible for scheduling, while in the pre-thread server a thread-scheduling algorithm chooses which thread gets the mutex lock. In order to make this approach fully useful, the number of children created by the parent process should be monitored, so that it can be reduced or increased; this helps the server handle clients efficiently.

The kernel, as the core software layer between the hardware and application programs, communicates with other processes while performing operations. The way the kernel works is based on memory management. When it has multiple processes to handle, the kernel shares physical memory among them; this can lead to running low on memory, in which case the swap area is used to lighten the kernel's job. Another aspect of memory management is that it prevents processes from accessing each other's address space [12]. In this experiment, the processing time in kernel mode is measured and analyzed.

B. IPv4-IPv6 Performance

To measure the impact of IPv4 versus IPv6 on server design, we ran further experiments on the five (5) server designs using IPv4 connections and IPv6 connections. In these experiments, the clients spawned 15 children, with each child establishing 500 consecutive connections to the server. Figure 4 shows that the CPU time for IPv4 is lower than for IPv6 on those servers. As the servers handle both IPv4 and IPv6 connections, and allocate memory for both types of sockets, there should be no difference in performance from the user perspective. This means that the performance difference occurs in the connection handling at the kernel level between the IPv4 socket and the IPv6 socket.

Figure 4. CPU Time usage for the 15 x 500 tests

C. Server Designs Performance

1) Performance of Iterative Server

In this section the performance comparison between IPv4 and IPv6 in each of the five (5) server designs is discussed in detail. The comparison of IPv4 and IPv6 on the iterative server is shown in Figure 5. Under user mode, IPv6 gives less CPU time than IPv4; under kernel mode, however, IPv4 gives less CPU time than IPv6. In total time, considering kernel and user mode together, IPv4 gives better performance than IPv6.

Figure 5. Time allocated for user and kernel mode on iterative server

2) Performance of Concurrent-Fork Server

In the concurrent-fork server, where a process is created to handle each incoming connection after it is requested, the results show that IPv4 connections took less time than IPv6 in user mode, and, as shown in Figure 6, also in kernel mode. Based on these results, IPv4 takes less time than IPv6 in both kernel and user mode, which again indicates that IPv4 performs better than IPv6 on the concurrent-fork server.
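From the application's point of view, the only visible difference between the IPv4 and IPv6 cases being compared above is the address family and address structure passed to the socket API; the rest happens inside the kernel. A minimal sketch (with a hypothetical helper name) of opening and binding each kind of socket:

```cpp
// Sketch: opening an IPv4 or IPv6 TCP socket bound to loopback on an
// ephemeral port. Only the address family and sockaddr type differ.
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Returns the bound socket descriptor, or -1 on failure.
int open_bound_tcp_socket(bool use_ipv6) {
    int fd = socket(use_ipv6 ? AF_INET6 : AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    if (use_ipv6) {
        sockaddr_in6 a{};
        a.sin6_family = AF_INET6;
        a.sin6_addr = in6addr_loopback;
        a.sin6_port = 0;                      // kernel picks the port
        if (bind(fd, (sockaddr*)&a, sizeof(a)) != 0) { close(fd); return -1; }
    } else {
        sockaddr_in a{};
        a.sin_family = AF_INET;
        a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        a.sin_port = 0;                       // kernel picks the port
        if (bind(fd, (sockaddr*)&a, sizeof(a)) != 0) { close(fd); return -1; }
    }
    return fd;
}
```

Because user code is essentially identical in the two cases, any systematic timing difference between the families is attributable to kernel-level connection handling, which is consistent with the observations above.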

Figure 6. Time allocated for user and kernel mode on concurrent fork server

3) Performance of Pre-Fork Server

The results of the experiments on the pre-fork server, where the server has its children ready before requests arrive from multiple clients, show that serving the clients' requests took less time under user mode on the IPv6 connection than on IPv4, while under kernel mode IPv4 took less time than IPv6. Figure 7 shows that IPv6 performs well under user mode, but in total time IPv4 gives a better result than IPv6.

Figure 7. Time allocated for user and kernel mode on pre-fork server

4) Performance of Concurrent Thread Server

Creating threads instead of processes to serve multiple clients makes the server's operations lighter. The tests on the concurrent-thread server show that, under both user mode and kernel mode, the IPv6 connection took more time than IPv4, as shown in Figure 8. However, since the server creates threads rather than processes, it is much lighter than the concurrent-fork server.

Figure 8. Time allocated for user and kernel mode on concurrent thread server

5) Performance of Pre-Thread Server

The tests on the pre-thread server handling multiple clients show that IPv4 took more time than IPv6 under user mode, but less time under kernel mode; overall, IPv4 has better performance than IPv6. The time taken by the pre-thread server is less than that of the pre-fork server because it consumes fewer resources, as shown in Figure 9.

Figure 9. Time allocated for user and kernel mode on pre-thread server
The results of our experiments show that as the number of children increases, the time taken by the server to process them increases as well, and so does the resource usage. At some point, due to resource usage, problems occur while the server and clients are running; these problems result in the abrupt termination of the running client. The same observations hold when the number of connections increases: time and resource usage also increase, and the problems mentioned earlier occur in this case as well. The exception is the case of increasing the amount of data sent between the client and server: as can be seen from the data in Table 1, the time taken to finish processing generally stays more or less the same. This happens, perhaps, because the step size of the increment is relatively small. However, a small increment is needed because of the server's or client's inability to handle very large data transfers. Resource usage on the hardware side remains problematic, as it sometimes causes abrupt termination of the connection between server and client. This happens in trials that involve large numbers of children, connections or data transfers; however, re-running the trials after the resources have been idle for a while usually results in a successful trial. In our experiments, clients continued trying to connect to the fork server even when the trials were conducted after a long period of idleness.

V. CONCLUSION

The coming substitution of IPv4 by IPv6 raises the issue of the performance comparison between the two, which needs to be considered when choosing the right server model. The time taken to perform tasks over both IPs is the major concern in this comparison. Several experiments were done to compare performance among the servers and also between IPv4 and IPv6. The experiments analyzing the CPU time of each server were done by assigning 5 to 50 children with 500 to 5000 consecutive connections per child on each test for each server. The experiments analyzing the process-control CPU time were also done for the iterative, concurrent-fork, pre-fork, and pre-thread servers by giving each of them 100 children with 10 connections and a fixed amount of data for each child. The experiments comparing the performance of IPv4 and IPv6 were done by giving each server 5 children with 500 connections for every child. All of the tests were done over TCP only. Comparing the 5 server designs, the iterative server took the least time in handling clients, while the concurrent-fork server took the most CPU time in handling multiple clients.
The results of the performance comparison between IPv4 and IPv6 in those 5 server designs lead to the conclusion that IPv4 took less time in kernel mode, while IPv6 took less time in user mode only under the iterative, pre-fork, and pre-thread servers. However, in overall performance, IPv4 outperformed IPv6 with regard to CPU time. Our future work will include studying the performance of heavily used servers and reducing process-control CPU time by creating a pool of children.

REFERENCES
[1] H. Miyata and M. Endo, "Design and Evaluation of IPv4/IPv6 Translator for IP Based Industrial Network Protocol," Proc. IEEE Int. Conf. on Industrial Informatics (INDIN), Cardiff, Wales, UK, July.
[2] Microsoft, "Comparing IPv4 and IPv6 Addresses," technet.microsoft.com, cc780310%28ws.10%29.aspx, accessed February 5th, 2011.
[3] T. Mantoro, A. D. Jaafar, M. F. Aris, and M. Ayu, "HajjLocator: A Hajj Pilgrimage Tracking Framework in Crowded Ubiquitous Environment," Proc. 2nd IEEE Int. Conf. on Multimedia Computing and Systems (ICMCS'11), Ouarzazate, Morocco, April 2011.
[4] A. Roth and G. S. Sohi, "Speculative Multithreaded Processors," Proc. 7th Int. Conf. on High Performance Computing, Bangalore, India, December 2000.
[5] A. Roth and G. S. Sohi, "Speculative Data-Driven Multithreading," Proc. 7th Int. Symp. on High-Performance Computer Architecture (HPCA-7), p. 37, January 2001.
[6] C. B. Zilles, J. S. Emer, and G. S. Sohi, "The Use of Multithreading for Exception Handling," Proc. 32nd Annual Int. Symp. on Microarchitecture (MICRO-32), Maui, Hawaii, USA, November 1999.
[7] A. Bhowmik and M. Franklin, "A General Compiler Framework for Speculative Multithreading," IEEE Trans. on Parallel and Distributed Systems, August 2004.
[8] Kaizenlog, "Server Designs (Iterative Server Algorithm)," 26th October, accessed February 2nd.
[9] N.A., "Fork (Concurrent Server)," accessed February 5th, 2011.
[10] The Apache Software Foundation, "Apache MPM Prefork: How it Works," accessed February 2nd.
[11] W. R. Stevens, B. Fenner, and A. M. Rudoff, UNIX Network Programming: The Sockets Networking API, 3rd ed., Addison-Wesley, 2004.
[12] TuxRadar, "How the Linux Kernel Works," March 15th, accessed April 11th, 2011.


More information

Chapter 4: Multithreaded

Chapter 4: Multithreaded Chapter 4: Multithreaded Programming Chapter 4: Multithreaded Programming Overview Multithreading Models Thread Libraries Threading Issues Operating-System Examples 2009/10/19 2 4.1 Overview A thread is

More information

Major OS Achievements. Chris Collins. 15 th October 2006

Major OS Achievements. Chris Collins. 15 th October 2006 Major OS Achievements 1 Running head: MAJOR OS ACHIEVEMENTS Major OS Achievements Chris Collins 15 th October 2006 Major OS Achievements 2 Introduction This paper discusses several major achievements in

More information

A Review On optimization technique in Server Virtualization

A Review On optimization technique in Server Virtualization A Review On optimization technique in Server Virtualization Lavneet Kaur, Himanshu Kakkar Department of Computer Science Chandigarh Engineering College Landran, India Abstract In this paper, the earlier

More information

CS555: Distributed Systems [Fall 2017] Dept. Of Computer Science, Colorado State University

CS555: Distributed Systems [Fall 2017] Dept. Of Computer Science, Colorado State University CS 555: DISTRIBUTED SYSTEMS [THREADS] Shrideep Pallickara Computer Science Colorado State University Frequently asked questions from the previous class survey Shuffle less/shuffle better Which actions?

More information

Operating Systems. Lecture 09: Input/Output Management. Elvis C. Foster

Operating Systems. Lecture 09: Input/Output Management. Elvis C. Foster Operating Systems 141 Lecture 09: Input/Output Management Despite all the considerations that have discussed so far, the work of an operating system can be summarized in two main activities input/output

More information

Multithreaded Value Prediction

Multithreaded Value Prediction Multithreaded Value Prediction N. Tuck and D.M. Tullesn HPCA-11 2005 CMPE 382/510 Review Presentation Peter Giese 30 November 2005 Outline Motivation Multithreaded & Value Prediction Architectures Single

More information

Processes. 4: Threads. Problem needs > 1 independent sequential process? Example: Web Server. Last Modified: 9/17/2002 2:27:59 PM

Processes. 4: Threads. Problem needs > 1 independent sequential process? Example: Web Server. Last Modified: 9/17/2002 2:27:59 PM Processes 4: Threads Last Modified: 9/17/2002 2:27:59 PM Recall: A process includes Address space (Code, Data, Heap, Stack) Register values (including the PC) Resources allocated to the process Memory,

More information

The latency of user-to-user, kernel-to-kernel and interrupt-to-interrupt level communication

The latency of user-to-user, kernel-to-kernel and interrupt-to-interrupt level communication The latency of user-to-user, kernel-to-kernel and interrupt-to-interrupt level communication John Markus Bjørndalen, Otto J. Anshus, Brian Vinter, Tore Larsen Department of Computer Science University

More information

Chapter 4: Threads. Chapter 4: Threads

Chapter 4: Threads. Chapter 4: Threads Chapter 4: Threads Silberschatz, Galvin and Gagne 2009 Chapter 4: Threads Overview Multithreading Models Thread Libraries Threading Issues Operating System Examples Windows XP Threads Linux Threads 4.2

More information

A Framework for Space and Time Efficient Scheduling of Parallelism

A Framework for Space and Time Efficient Scheduling of Parallelism A Framework for Space and Time Efficient Scheduling of Parallelism Girija J. Narlikar Guy E. Blelloch December 996 CMU-CS-96-97 School of Computer Science Carnegie Mellon University Pittsburgh, PA 523

More information

Power and Locality Aware Request Distribution Technical Report Heungki Lee, Gopinath Vageesan and Eun Jung Kim Texas A&M University College Station

Power and Locality Aware Request Distribution Technical Report Heungki Lee, Gopinath Vageesan and Eun Jung Kim Texas A&M University College Station Power and Locality Aware Request Distribution Technical Report Heungki Lee, Gopinath Vageesan and Eun Jung Kim Texas A&M University College Station Abstract With the growing use of cluster systems in file

More information

Processes. Process Scheduling, Process Synchronization, and Deadlock will be discussed further in Chapters 5, 6, and 7, respectively.

Processes. Process Scheduling, Process Synchronization, and Deadlock will be discussed further in Chapters 5, 6, and 7, respectively. Processes Process Scheduling, Process Synchronization, and Deadlock will be discussed further in Chapters 5, 6, and 7, respectively. 1. Process Concept 1.1 What is a Process? A process is a program in

More information

Introduction. New latch modes

Introduction. New latch modes A B link Tree method and latch protocol for synchronous node deletion in a high concurrency environment Karl Malbrain malbrain@cal.berkeley.edu Introduction A new B link Tree latching method and protocol

More information

Evaluating the Effect of IP and IGP on the ICMP Throughput of a WAN

Evaluating the Effect of IP and IGP on the ICMP Throughput of a WAN Evaluating the Effect of IP and IGP on the ICMP Throughput of a WAN Burhan ul Islam Khan 1,a, Humaira Dar 2,b, Asadullah Shah 3,c and Rashidah F. Olanrewaju 4,d 1,2,4 Department of Computer and Information

More information

Lecture 2: February 6

Lecture 2: February 6 CMPSCI 691W Parallel and Concurrent Programming Spring 2006 Lecture 2: February 6 Lecturer: Emery Berger Scribe: Richard Chang 2.1 Overview This lecture gives an introduction to processes and threads.

More information

Concurrent Programming

Concurrent Programming Concurrent Programming is Hard! Concurrent Programming Kai Shen The human mind tends to be sequential Thinking about all possible sequences of events in a computer system is at least error prone and frequently

More information

AN 831: Intel FPGA SDK for OpenCL

AN 831: Intel FPGA SDK for OpenCL AN 831: Intel FPGA SDK for OpenCL Host Pipelined Multithread Subscribe Send Feedback Latest document on the web: PDF HTML Contents Contents 1 Intel FPGA SDK for OpenCL Host Pipelined Multithread...3 1.1

More information

Asynchronous Events on Linux

Asynchronous Events on Linux Asynchronous Events on Linux Frederic.Rossi@Ericsson.CA Open System Lab Systems Research June 25, 2002 Ericsson Research Canada Introduction Linux performs well as a general purpose OS but doesn t satisfy

More information

Virtual Memory COMPSCI 386

Virtual Memory COMPSCI 386 Virtual Memory COMPSCI 386 Motivation An instruction to be executed must be in physical memory, but there may not be enough space for all ready processes. Typically the entire program is not needed. Exception

More information

Resource Containers. A new facility for resource management in server systems. Presented by Uday Ananth. G. Banga, P. Druschel, J. C.

Resource Containers. A new facility for resource management in server systems. Presented by Uday Ananth. G. Banga, P. Druschel, J. C. Resource Containers A new facility for resource management in server systems G. Banga, P. Druschel, J. C. Mogul OSDI 1999 Presented by Uday Ananth Lessons in history.. Web servers have become predominantly

More information

Job Re-Packing for Enhancing the Performance of Gang Scheduling

Job Re-Packing for Enhancing the Performance of Gang Scheduling Job Re-Packing for Enhancing the Performance of Gang Scheduling B. B. Zhou 1, R. P. Brent 2, C. W. Johnson 3, and D. Walsh 3 1 Computer Sciences Laboratory, Australian National University, Canberra, ACT

More information

Decoupled Software Pipelining in LLVM

Decoupled Software Pipelining in LLVM Decoupled Software Pipelining in LLVM 15-745 Final Project Fuyao Zhao, Mark Hahnenberg fuyaoz@cs.cmu.edu, mhahnenb@andrew.cmu.edu 1 Introduction 1.1 Problem Decoupled software pipelining [5] presents an

More information

Processes and Threads

Processes and Threads TDDI04 Concurrent Programming, Operating Systems, and Real-time Operating Systems Processes and Threads [SGG7] Chapters 3 and 4 Copyright Notice: The lecture notes are mainly based on Silberschatz s, Galvin

More information

Traffic in Network /8. Background. Initial Experience. Geoff Huston George Michaelson APNIC R&D. April 2010

Traffic in Network /8. Background. Initial Experience. Geoff Huston George Michaelson APNIC R&D. April 2010 Traffic in Network 1.0.0.0/8 Geoff Huston George Michaelson APNIC R&D April 2010 Background The address plan for IPv4 has a reservation for Private Use address space. This reservation, comprising of 3

More information

WHITE PAPER NGINX An Open Source Platform of Choice for Enterprise Website Architectures

WHITE PAPER NGINX An Open Source Platform of Choice for Enterprise Website Architectures ASHNIK PTE LTD. White Paper WHITE PAPER NGINX An Open Source Platform of Choice for Enterprise Website Architectures Date: 10/12/2014 Company Name: Ashnik Pte Ltd. Singapore By: Sandeep Khuperkar, Director

More information

A Reconfigurable Cache Design for Embedded Dynamic Data Cache

A Reconfigurable Cache Design for Embedded Dynamic Data Cache I J C T A, 9(17) 2016, pp. 8509-8517 International Science Press A Reconfigurable Cache Design for Embedded Dynamic Data Cache Shameedha Begum, T. Vidya, Amit D. Joshi and N. Ramasubramanian ABSTRACT Applications

More information

International Journal of Scientific & Engineering Research, Volume 4, Issue 7, July ISSN

International Journal of Scientific & Engineering Research, Volume 4, Issue 7, July ISSN International Journal of Scientific & Engineering Research, Volume 4, Issue 7, July-201 971 Comparative Performance Analysis Of Sorting Algorithms Abhinav Yadav, Dr. Sanjeev Bansal Abstract Sorting Algorithms

More information

Pull based Migration of Real-Time Tasks in Multi-Core Processors

Pull based Migration of Real-Time Tasks in Multi-Core Processors Pull based Migration of Real-Time Tasks in Multi-Core Processors 1. Problem Description The complexity of uniprocessor design attempting to extract instruction level parallelism has motivated the computer

More information

Athanassios Liakopoulos Slovenian IPv6 Training, Ljubljana, May 2010

Athanassios Liakopoulos Slovenian IPv6 Training, Ljubljana, May 2010 Introduction ti to IPv6 (Part A) Athanassios Liakopoulos (aliako@grnet.gr) Slovenian IPv6 Training, Ljubljana, May 2010 Copy Rights This slide set is the ownership of the 6DEPLOY project via its partners

More information

EXAM 1 SOLUTIONS. Midterm Exam. ECE 741 Advanced Computer Architecture, Spring Instructor: Onur Mutlu

EXAM 1 SOLUTIONS. Midterm Exam. ECE 741 Advanced Computer Architecture, Spring Instructor: Onur Mutlu Midterm Exam ECE 741 Advanced Computer Architecture, Spring 2009 Instructor: Onur Mutlu TAs: Michael Papamichael, Theodoros Strigkos, Evangelos Vlachos February 25, 2009 EXAM 1 SOLUTIONS Problem Points

More information

Meltdown or "Holy Crap: How did we do this to ourselves" Meltdown exploits side effects of out-of-order execution to read arbitrary kernelmemory

Meltdown or Holy Crap: How did we do this to ourselves Meltdown exploits side effects of out-of-order execution to read arbitrary kernelmemory Meltdown or "Holy Crap: How did we do this to ourselves" Abstract Meltdown exploits side effects of out-of-order execution to read arbitrary kernelmemory locations Breaks all security assumptions given

More information

2 TEST: A Tracer for Extracting Speculative Threads

2 TEST: A Tracer for Extracting Speculative Threads EE392C: Advanced Topics in Computer Architecture Lecture #11 Polymorphic Processors Stanford University Handout Date??? On-line Profiling Techniques Lecture #11: Tuesday, 6 May 2003 Lecturer: Shivnath

More information

B. V. Patel Institute of Business Management, Computer &Information Technology, UTU

B. V. Patel Institute of Business Management, Computer &Information Technology, UTU BCA-3 rd Semester 030010304-Fundamentals Of Operating Systems Unit: 1 Introduction Short Answer Questions : 1. State two ways of process communication. 2. State any two uses of operating system according

More information

Performance Benchmark and Capacity Planning. Version: 7.3

Performance Benchmark and Capacity Planning. Version: 7.3 Performance Benchmark and Capacity Planning Version: 7.3 Copyright 215 Intellicus Technologies This document and its content is copyrighted material of Intellicus Technologies. The content may not be copied

More information

Process size is independent of the main memory present in the system.

Process size is independent of the main memory present in the system. Hardware control structure Two characteristics are key to paging and segmentation: 1. All memory references are logical addresses within a process which are dynamically converted into physical at run time.

More information

Chapter 8 Virtual Memory

Chapter 8 Virtual Memory Chapter 8 Virtual Memory Contents Hardware and control structures Operating system software Unix and Solaris memory management Linux memory management Windows 2000 memory management Characteristics of

More information

Quality of Service Mechanism for MANET using Linux Semra Gulder, Mathieu Déziel

Quality of Service Mechanism for MANET using Linux Semra Gulder, Mathieu Déziel Quality of Service Mechanism for MANET using Linux Semra Gulder, Mathieu Déziel Semra.gulder@crc.ca, mathieu.deziel@crc.ca Abstract: This paper describes a QoS mechanism suitable for Mobile Ad Hoc Networks

More information

Migration to 64-bit Platform Improves Performance of Growing Bank s Core

Migration to 64-bit Platform Improves Performance of Growing Bank s Core Microsoft Windows Server 2003 Customer Solution Case Study Migration to 64-bit Platform Improves Performance of Growing Bank s Core Overview Country or Region: Mexico Industry: Banking Customer Profile

More information

Perceptive DataTransfer

Perceptive DataTransfer Perceptive DataTransfer Release Notes Version: 6.4.x Written by: Product Knowledge, R&D Date: September 2016 2016 Lexmark. All rights reserved. Lexmark is a trademark of Lexmark International, Inc., registered

More information

CSCE 313 Introduction to Computer Systems. Instructor: Dezhen Song

CSCE 313 Introduction to Computer Systems. Instructor: Dezhen Song CSCE 313 Introduction to Computer Systems Instructor: Dezhen Song Programs, Processes, and Threads Programs and Processes Threads Programs, Processes, and Threads Programs and Processes Threads Processes

More information

Problem Set: Processes

Problem Set: Processes Lecture Notes on Operating Systems Problem Set: Processes 1. Answer yes/no, and provide a brief explanation. (a) Can two processes be concurrently executing the same program executable? (b) Can two running

More information

CSCE 313: Intro to Computer Systems

CSCE 313: Intro to Computer Systems CSCE 313 Introduction to Computer Systems Instructor: Dr. Guofei Gu http://courses.cse.tamu.edu/guofei/csce313/ Programs, Processes, and Threads Programs and Processes Threads 1 Programs, Processes, and

More information

SEDA: An Architecture for Well-Conditioned, Scalable Internet Services

SEDA: An Architecture for Well-Conditioned, Scalable Internet Services SEDA: An Architecture for Well-Conditioned, Scalable Internet Services Matt Welsh, David Culler, and Eric Brewer Computer Science Division University of California, Berkeley Operating Systems Principles

More information

ANALYSIS AND EVALUATION OF DISTRIBUTED DENIAL OF SERVICE ATTACKS IDENTIFICATION METHODS

ANALYSIS AND EVALUATION OF DISTRIBUTED DENIAL OF SERVICE ATTACKS IDENTIFICATION METHODS ANALYSIS AND EVALUATION OF DISTRIBUTED DENIAL OF SERVICE ATTACKS IDENTIFICATION METHODS Saulius Grusnys, Ingrida Lagzdinyte Kaunas University of Technology, Department of Computer Networks, Studentu 50,

More information

GETTING 1 STARTED. Chapter SYS-ED/ COMPUTER EDUCATION TECHNIQUES, INC.

GETTING 1 STARTED. Chapter SYS-ED/ COMPUTER EDUCATION TECHNIQUES, INC. GETTING 1 STARTED hapter SYS-ED/ OMPUTER EDUATION TEHNIQUES, IN. Objectives You will learn: Apache Software Foundation. Apache execution. Apache components. Hypertext Transfer Protocol. TP/IP protocol.

More information

CS 333 Introduction to Operating Systems. Class 3 Threads & Concurrency. Jonathan Walpole Computer Science Portland State University

CS 333 Introduction to Operating Systems. Class 3 Threads & Concurrency. Jonathan Walpole Computer Science Portland State University CS 333 Introduction to Operating Systems Class 3 Threads & Concurrency Jonathan Walpole Computer Science Portland State University 1 The Process Concept 2 The Process Concept Process a program in execution

More information

A Beginner s Guide to Programming Logic, Introductory. Chapter 6 Arrays

A Beginner s Guide to Programming Logic, Introductory. Chapter 6 Arrays A Beginner s Guide to Programming Logic, Introductory Chapter 6 Arrays Objectives In this chapter, you will learn about: Arrays and how they occupy computer memory Manipulating an array to replace nested

More information

Chapter 4: Threads. Operating System Concepts. Silberschatz, Galvin and Gagne

Chapter 4: Threads. Operating System Concepts. Silberschatz, Galvin and Gagne Chapter 4: Threads Silberschatz, Galvin and Gagne Chapter 4: Threads Overview Multithreading Models Thread Libraries Threading Issues Operating System Examples Linux Threads 4.2 Silberschatz, Galvin and

More information

1 PROCESSES PROCESS CONCEPT The Process Process State Process Control Block 5

1 PROCESSES PROCESS CONCEPT The Process Process State Process Control Block 5 Process Management A process can be thought of as a program in execution. A process will need certain resources such as CPU time, memory, files, and I/O devices to accomplish its task. These resources

More information

Evaluation of a Speculative Multithreading Compiler by Characterizing Program Dependences

Evaluation of a Speculative Multithreading Compiler by Characterizing Program Dependences Evaluation of a Speculative Multithreading Compiler by Characterizing Program Dependences By Anasua Bhowmik Manoj Franklin Indian Institute of Science Univ of Maryland Supported by National Science Foundation,

More information

Distributed Scheduling for the Sombrero Single Address Space Distributed Operating System

Distributed Scheduling for the Sombrero Single Address Space Distributed Operating System Distributed Scheduling for the Sombrero Single Address Space Distributed Operating System Donald S. Miller Department of Computer Science and Engineering Arizona State University Tempe, AZ, USA Alan C.

More information

CSE 333 Lecture server sockets

CSE 333 Lecture server sockets CSE 333 Lecture 17 -- server sockets Hal Perkins Department of Computer Science & Engineering University of Washington Administrivia It s crunch time! HW3 due tomorrow, but lots of work to do still, so...

More information

Chapter 4: Threads. Chapter 4: Threads

Chapter 4: Threads. Chapter 4: Threads Chapter 4: Threads Silberschatz, Galvin and Gagne 2013 Chapter 4: Threads Overview Multicore Programming Multithreading Models Thread Libraries Implicit Threading Threading Issues Operating System Examples

More information

IXPs and IPv6 in the Asia Pacific: An Update

IXPs and IPv6 in the Asia Pacific: An Update IXPs and IPv6 in the Asia Pacific: An Update Duncan Macintosh CEO APNIC Foundation First session of the Asia-Pacific Information Superhighway (AP-IS) Steering Committee Dhaka, Bangladesh 1-2 November 2017

More information

Chapter 4: Threads. Operating System Concepts 9 th Edition

Chapter 4: Threads. Operating System Concepts 9 th Edition Chapter 4: Threads Silberschatz, Galvin and Gagne 2013 Chapter 4: Threads Overview Multicore Programming Multithreading Models Thread Libraries Implicit Threading Threading Issues Operating System Examples

More information

Questions answered in this lecture: CS 537 Lecture 19 Threads and Cooperation. What s in a process? Organizing a Process

Questions answered in this lecture: CS 537 Lecture 19 Threads and Cooperation. What s in a process? Organizing a Process Questions answered in this lecture: CS 537 Lecture 19 Threads and Cooperation Why are threads useful? How does one use POSIX pthreads? Michael Swift 1 2 What s in a process? Organizing a Process A process

More information

PROCESS CONCEPTS. Process Concept Relationship to a Program What is a Process? Process Lifecycle Process Management Inter-Process Communication 2.

PROCESS CONCEPTS. Process Concept Relationship to a Program What is a Process? Process Lifecycle Process Management Inter-Process Communication 2. [03] PROCESSES 1. 1 OUTLINE Process Concept Relationship to a Program What is a Process? Process Lifecycle Creation Termination Blocking Process Management Process Control Blocks Context Switching Threads

More information

Paradigm Shift of Database

Paradigm Shift of Database Paradigm Shift of Database Prof. A. A. Govande, Assistant Professor, Computer Science and Applications, V. P. Institute of Management Studies and Research, Sangli Abstract Now a day s most of the organizations

More information

CSE 544 Principles of Database Management Systems

CSE 544 Principles of Database Management Systems CSE 544 Principles of Database Management Systems Alvin Cheung Fall 2015 Lecture 5 - DBMS Architecture and Indexing 1 Announcements HW1 is due next Thursday How is it going? Projects: Proposals are due

More information

Heuristics for Profile-driven Method- level Speculative Parallelization

Heuristics for Profile-driven Method- level Speculative Parallelization Heuristics for Profile-driven Method- level John Whaley and Christos Kozyrakis Stanford University Speculative Multithreading Speculatively parallelize an application Uses speculation to overcome ambiguous

More information

Summary: Open Questions:

Summary: Open Questions: Summary: The paper proposes an new parallelization technique, which provides dynamic runtime parallelization of loops from binary single-thread programs with minimal architectural change. The realization

More information

18-447: Computer Architecture Lecture 23: Tolerating Memory Latency II. Prof. Onur Mutlu Carnegie Mellon University Spring 2012, 4/18/2012

18-447: Computer Architecture Lecture 23: Tolerating Memory Latency II. Prof. Onur Mutlu Carnegie Mellon University Spring 2012, 4/18/2012 18-447: Computer Architecture Lecture 23: Tolerating Memory Latency II Prof. Onur Mutlu Carnegie Mellon University Spring 2012, 4/18/2012 Reminder: Lab Assignments Lab Assignment 6 Implementing a more

More information

Techno India Batanagar Department of Computer Science & Engineering. Model Questions. Multiple Choice Questions:

Techno India Batanagar Department of Computer Science & Engineering. Model Questions. Multiple Choice Questions: Techno India Batanagar Department of Computer Science & Engineering Model Questions Subject Name: Operating System Multiple Choice Questions: Subject Code: CS603 1) Shell is the exclusive feature of a)

More information

Virtual Memory. Chapter 8

Virtual Memory. Chapter 8 Chapter 8 Virtual Memory What are common with paging and segmentation are that all memory addresses within a process are logical ones that can be dynamically translated into physical addresses at run time.

More information

Server algorithms and their design

Server algorithms and their design Server algorithms and their design slide 1 many ways that a client/server can be designed each different algorithm has various benefits and problems are able to classify these algorithms by looking at

More information

NETWORK SIMULATION USING NCTUns. Ankit Verma* Shashi Singh* Meenakshi Vyas*

NETWORK SIMULATION USING NCTUns. Ankit Verma* Shashi Singh* Meenakshi Vyas* NETWORK SIMULATION USING NCTUns Ankit Verma* Shashi Singh* Meenakshi Vyas* 1. Introduction: Network simulator is software which is very helpful tool to develop, test, and diagnose any network protocol.

More information

Fall 2012 Parallel Computer Architecture Lecture 15: Speculation I. Prof. Onur Mutlu Carnegie Mellon University 10/10/2012

Fall 2012 Parallel Computer Architecture Lecture 15: Speculation I. Prof. Onur Mutlu Carnegie Mellon University 10/10/2012 18-742 Fall 2012 Parallel Computer Architecture Lecture 15: Speculation I Prof. Onur Mutlu Carnegie Mellon University 10/10/2012 Reminder: Review Assignments Was Due: Tuesday, October 9, 11:59pm. Sohi

More information

LESSON PLAN. Sub. Code & Name : IT2351 & Network Programming and Management Unit : I Branch: IT Year : III Semester: VI.

LESSON PLAN. Sub. Code & Name : IT2351 & Network Programming and Management Unit : I Branch: IT Year : III Semester: VI. Unit : I Branch: IT Year : III Semester: VI Page: 1 of 6 UNIT I ELEMENTARY TCP SOCKETS 9 Introduction to Socket Programming Overview of TCP/IP Protocols Introduction to Sockets Socket address Structures

More information

Chapter 4: Threads. Operating System Concepts 9 th Edition

Chapter 4: Threads. Operating System Concepts 9 th Edition Chapter 4: Threads Silberschatz, Galvin and Gagne 2013 Chapter 4: Threads Overview Multicore Programming Multithreading Models Thread Libraries Implicit Threading Threading Issues Operating System Examples

More information

Process Time. Steven M. Bellovin January 25,

Process Time. Steven M. Bellovin January 25, Multiprogramming Computers don t really run multiple programs simultaneously; it just appears that way Each process runs to completion, but intermixed with other processes Process 1 6 ticks Process 2 Process

More information

Royal Mail International Update September 2018

Royal Mail International Update September 2018 Royal Mail International Update September 2018 This update, about incidents which have affected international mail services throughout September, was issued by Royal Mail Customer Services on Tuesday 16

More information

Announcement. Exercise #2 will be out today. Due date is next Monday

Announcement. Exercise #2 will be out today. Due date is next Monday Announcement Exercise #2 will be out today Due date is next Monday Major OS Developments 2 Evolution of Operating Systems Generations include: Serial Processing Simple Batch Systems Multiprogrammed Batch

More information

APNIC Update. RIPE 59 October 2009

APNIC Update. RIPE 59 October 2009 APNIC Update RIPE 59 October 2009 Overview APNIC Services Update APNIC 28 policy outcomes APNIC Members and Stakeholder Survey Next APNIC Meetings Resource Delegations (1 Oct 09) No of /8 delegated No

More information

Report. Middleware Proxy: A Request-Driven Messaging Broker For High Volume Data Distribution

Report. Middleware Proxy: A Request-Driven Messaging Broker For High Volume Data Distribution CERN-ACC-2013-0237 Wojciech.Sliwinski@cern.ch Report Middleware Proxy: A Request-Driven Messaging Broker For High Volume Data Distribution W. Sliwinski, I. Yastrebov, A. Dworak CERN, Geneva, Switzerland

More information