Implementation and modeling of two-phase locking concurrency control - a performance study


Information and Software Technology 42 (2000)

Implementation and modeling of two-phase locking concurrency control - a performance study

N.B. Al-Jumah (a), H.S. Hassanein (b, *), M. El-Sharkawi (a)

(a) Department of Mathematics and Computer Science, Kuwait University, P.O. Box 5969, Safat, Kuwait
(b) Department of Computing and Information Science, Queen's University, Kingston, Ontario, Canada K7L 3N6

Received 10 October 1998; received in revised form 14 June 1999; accepted 12 August 1999

Abstract

Two-phase locking (2PL) is the concurrency control mechanism used in most commercial database systems. In 2PL, before a transaction can access a data item it must hold the appropriate lock (read or write) on that item, which it obtains by issuing a lock request. Although the way transactions set their lock requests and the way those requests are granted certainly affect a system's performance, these aspects have received little attention in the literature. In this paper, a general transaction-processing model is proposed. In this model a transaction is comprised of a number of stages, and in each stage the transaction can request locks on one or more data items. Methods for granting lock requests and scheduling policies for granting blocked transactions are also proposed. A comprehensive simulation model is developed, from which the performance of 2PL under our proposals is evaluated. Results indicate that performance models in which transactions request locks on an item-by-item basis and use first-come-first-served (FCFS) scheduling in granting blocked transactions underestimate the performance of 2PL. The performance of 2PL can be greatly improved if locks are requested in stages as dictated by the application. A scheduling policy that uses global information and/or schedules blocked transactions dynamically shows better performance than the default FCFS. 2000 Elsevier Science B.V. All rights reserved.

Keywords: Two-phase locking; First-come-first-served; Concurrency control; Transaction-processing model

1. Introduction

Two-phase locking (2PL) is the concurrency control mechanism implemented in most commercial database systems [1,2]. An enormous body of literature has therefore been developed to study its performance, including both simulation [3-5] and analytical modeling [6-13]. These studies have shown that 2PL possesses an overall performance advantage over non-locking concurrency control mechanisms, such as timestamp-based and optimistic concurrency control. However, it has the drawback of being susceptible to thrashing once the degree of concurrency exceeds a certain limit. To overcome this drawback, many attempts have been made to enhance the performance of two-phase locking, including the introduction of new variants of 2PL. These variants can be classified according to more than one criterion. One such criterion is conflict resolution. Conflicts can be resolved either by blocking the requesting transaction (the waiting case, referred to as B2PL) or by aborting the requesting transaction (the no-waiting case [9], also called immediate restart [3]). Between these two extremes are mechanisms that balance the two, using either blocking or restarts to resolve conflicts. Examples of such mechanisms are wait-die [14], wound-wait [14], running priority [6], cautious waiting [15], and the wait-depth-limited concurrency control mechanism [16].

* Corresponding author. E-mail address: hossam@cs.queensu.ca (H.S. Hassanein).

A similar criterion classifies 2PL according to how deadlocks are handled. Some schemes prevent deadlocks by not allowing transactions to wait. This can be achieved by aborting a transaction whenever its lock requests cannot be granted, viz. the no-waiting mechanism [9] or immediate restart [3]. An alternative is to acquire locks on all data items at the same time, as in conservative two-phase locking (C2PL). The performance of these protocols was studied and compared to that of B2PL [3,5,17]; it was shown that immediate restart outperforms B2PL under low resource utilization. In [5], B2PL was compared to both C2PL and wait-die, and C2PL outperformed both. Tay et al. [10] have shown that immediate restart and C2PL can exceed the upper bound on throughput imposed by blocking in B2PL. It was also shown that C2PL is a better policy once B2PL has reached its thrashing point.

Fig. 1. Transaction processing model.

Other enhancements to improve the performance of B2PL have involved modifications to two-phase locking itself. These modifications introduce new lock modes or add additional rules [17,18]; such schemes therefore deviate from the spirit of 2PL. While the efforts to enhance the performance of 2PL have resulted in some improvement, they have ignored the implementation of 2PL itself. Most performance studies are based on the locking implementation described in Refs. [19,20], in which each data item has a queue of transactions waiting to lock it. When a transaction's lock request cannot be granted, the transaction is appended to the end of that item's queue. Whenever a lock becomes available, transactions in the waiting queue are granted on a FCFS basis. Moreover, in modeling lock requests it is assumed that a transaction consists of a number of processing steps, each preceded by a single lock request, i.e. locks on data items are requested on an item-by-item basis. This model, however, is not suited for some applications, e.g. form-based systems [21,22] (see Section 2).

In this paper, we propose a general transaction-processing model that is capable of modeling different ways of requesting and granting locks. This is facilitated by allowing transactions to request a group of data items at the same time, rather than following the classical item-by-item model. Requesting a group of data items marks the beginning of a stage of the transaction. A transaction moves to the next stage once the locks requested in the current stage have been granted and the corresponding items processed. Since each stage may hold more than one lock request, lock-granting schemes must also be considered. We propose two lock-granting methods: total granting and partial granting. In total granting, requests on data items made within a stage are granted at the same time; this represents stage-wise conservative locking. In partial granting, a transaction is granted the available data items and waits for the ones already locked. While all existing work on the performance of 2PL assumes first-come-first-served (FCFS) scheduling in lock management, we propose other scheduling policies. These policies take into consideration lock information within a stage, or global transaction lock information, to expedite transaction processing. It is shown that the performance of 2PL can be greatly improved by adopting such a transaction model and lock management techniques.

This paper is organized as follows. In Section 2, we describe the proposed general transaction-processing model and the proposed granting methods. Section 3 provides a description of the performance model used to evaluate 2PL under our proposed implementations. The effect of stage-wise locking is studied in Section 4. As deadlocks proved to be an important factor in the performance of stage-wise 2PL, in Section 5 we analyze deadlocks under the new transaction-processing model. Extensive simulation experiments and numerical results are presented in Section 6. Finally, Section 7 summarizes the main conclusions of this work.

2. Proposed implementations

Traditionally, there are two models of lock acquisition. In the first model, which we call the item-by-item model, a transaction acquires locks on the required items one item at a time; if an item is not available the transaction waits. In the second model, known as the conservative model, a transaction acquires locks on all required data items simultaneously. If any of the required items cannot be locked, the transaction does not hold any lock, even on the available items, and waits until all items become available. In some applications, however, neither of these two approaches is acceptable. For example, in form-based systems [21,22], the database is accessed through fill-in-the-form interfaces. A form is defined as a set of cells, where each cell holds a database item's value. According to the form definition, the user can enter the values of some cells while others are retrieved from the database.

Fig. 2. Block diagram of total and partial granting.

A form can be divided into segments. The cells in a segment should be accessed simultaneously, i.e. if any of the items corresponding to a segment's cells is not available, the segment cannot be acted upon by the user. Moreover, some cells can be computed as an aggregate function applied to the values of other cells in the form; for instance, the subtotal and total cells in a purchase order form cannot be computed unless all of their parameters are available. We can therefore think of a form as a transaction divided into stages, where a stage corresponds to a segment of the form. A stage may include a single database item or several items, and all items in the stage should be accessed simultaneously.

2.1. General transaction processing model

In our proposed transaction-processing model, the transaction is divided into stages. Each stage represents a separate processing step involving the requested data items. Whenever a transaction completes a stage it can move to the next stage. In each stage, the transaction requests a group of data items; only when locks on all the requested data items within a stage are granted can the transaction process those data items. After processing the data items in a stage, the transaction moves to the next stage. At the end of the last stage, the transaction commits and releases all the locked data items. Fig. 1 illustrates this transaction-processing model, where D_i,j denotes data item j requested in stage i (S_i). By assuming that transactions are divided into stages, we are able to accommodate the different ways transactions can set their lock requests. For instance, if a transaction consists of only one stage, this represents the conservative two-phase locking mechanism; if the transaction requests one data item in each stage, this represents the conventional implementation of B2PL.
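To make the stage model concrete, the sketch below (our own illustration, not code from the paper; all names are hypothetical) represents a transaction as an ordered list of stages, each grouping the data items and lock modes that must be granted together before that stage's processing phase. A single-stage transaction corresponds to conservative pre-declaration (C2PL-like behaviour), while one item per stage corresponds to the conventional item-by-item B2PL.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Tuple


class LockMode(Enum):
    READ = "read"
    WRITE = "write"


@dataclass
class Stage:
    # All items of a stage must be locked before its processing phase starts.
    requests: List[Tuple[int, LockMode]]


@dataclass
class Transaction:
    tid: int
    timestamp: float                  # generation time (used by OTF)
    stages: List[Stage]
    current_stage: int = 0
    stage_request_time: float = 0.0   # time the current stage's requests were made (used by FCFS)
    held_locks: Dict[int, LockMode] = field(default_factory=dict)

    def remaining_in_stage(self) -> int:
        """Locks still needed to finish the current stage (the RMINRF count)."""
        stage = self.stages[self.current_stage]
        return sum(1 for item, _ in stage.requests if item not in self.held_locks)


# One stage holding every request: stage-wise conservative (C2PL-like) behaviour.
t_conservative = Transaction(
    tid=1, timestamp=0.0,
    stages=[Stage([(d, LockMode.WRITE) for d in (1, 2, 3, 4)])])

# One data item per stage: the classical item-by-item B2PL model.
t_item_by_item = Transaction(
    tid=2, timestamp=0.1,
    stages=[Stage([(d, LockMode.WRITE)]) for d in (1, 2, 5, 6)])
```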

Table 1. Scheduling policies: cost value assignment

Scheduling policy: cost value C(T_i)
FCFS: time at which the request is made
TMAXF: number of locks held thus far by the transaction
OTF: timestamp of the transaction
RMINRF: (number of locks requested for the current stage) minus (number of locks granted thus far during the stage)

2.2. Granting methods

In addition to modeling the way transactions request their data items in stages, we also propose different models for lock granting. We propose two granting methods: total granting and partial granting. In the total granting method, the locks requested in each stage are granted at the same time, as in conservative two-phase locking; lock granting is therefore stage-wise conservative. In the partial granting method, the transaction is granted locks on the available data items and waits for those that are not available. A transaction stage can thus be divided into three phases: in the first phase the transaction pre-declares its lock requirements for the stage; the next phase is the lock-granting phase, which can be total or partial; and the last phase is the processing phase. Fig. 2 illustrates the difference between total and partial granting.

2.2.1. Total granting

In the total granting method, the locks requested in a stage are granted at the same time. In each stage, the lock manager checks the availability of each of the requested data items. If all of the requested data items are available, the transaction is granted locks on them. If any of the requested locks is not available, the transaction is blocked and inserted at the end of a block queue. This block queue is served on a FCFS basis: whenever a transaction commits or aborts, the whole queue is examined, and those transactions whose lock requirements can now be satisfied are granted their requested locks and removed from the block queue. This implementation is similar to the implementation of C2PL in Ref. [1], but here the granting of requests is performed at the stage level, not at the transaction level.

2.2.2. Partial granting

In the partial granting method, for each data item requested by a transaction in a stage, if the lock cannot be granted the transaction is blocked on the wait queue of that data item. When a lock is released, one or more of the waiting transactions will be granted; the choice of transaction(s) depends on a scheduling policy. We propose the use of one of the following four scheduling policies:

1. First-come-first-served (FCFS): this is similar to the conventional implementation of the lock manager under 2PL [19], in which the request that arrives first at the data item's queue is scheduled first. However, in our case several data items may be requested simultaneously during a stage.

2. Transaction holding the MAXimum number of locks first (TMAXF): the transaction that has locked the most data items among the blocked transactions is scheduled first. This count includes locks granted in previous stages and locks granted during the current stage. This scheduling is best suited when all transactions are of comparable size, since a transaction with a large number of locks is then more likely to commit before the others.

3. Oldest transaction first (OTF): the oldest transaction is scheduled first. This is achieved by assigning each transaction a timestamp (1) value, which is the time at which the transaction was generated; the transaction with the lowest timestamp value is scheduled first. To prevent starvation, restarted transactions retain their old timestamp.

4. Request with MINimum remaining number of locks first (RMINRF): the transaction that is waiting for the minimum number of locks is scheduled first. (2)

To implement these scheduling policies we assign a cost value, denoted C(T_i), to each waiting transaction according to Table 1. Note that in FCFS, OTF, and RMINRF lower cost values imply higher priority, whereas in TMAXF a higher cost value implies higher priority. Depending on the information used in assigning transaction cost values, we can classify the four proposed scheduling policies into two categories (see Fig. 3). The first category uses global information; TMAXF and OTF belong to this category. FCFS and RMINRF belong to a second category that uses local information, i.e. information from the current stage only. This classification can be further extended to include the nature of the cost value assignment policy, i.e. how often the cost value of a transaction is computed, which can be either static or dynamic. In the category that uses global information, the cost value assignment of OTF is static, as it is computed only once when the transaction is generated and does not change during the transaction's lifetime.

Fig. 3. Classification of scheduling policies.

(1) The use of a timestamp here is fundamentally different from that in the timestamping concurrency control mechanism [1].
(2) Note that the concept of minimum remaining number of locks is only applicable with pre-declaration. This is why it may be used within a stage, but not on a whole transaction, as pre-declaration at the transaction level may be expensive.

In TMAXF, the cost value assignment is dynamic, as it is adjusted every time the transaction is granted a lock on a data item: it starts at zero and is increased as the transaction is granted new locks, until the transaction commits. In the category that uses local information, the classification into dynamic or static applies to the transaction stages: when the transaction moves from one stage to the next, its cost value is set to a new value. Here, the cost value assignment for FCFS is static, as it depends on the time at which a stage starts. In RMINRF, on the other hand, it is dynamic, as it depends on the number of locks the transaction is still waiting for in the current stage: it starts at a value equal to the number of data items requested in the stage and is decreased as the transaction is granted new locks, until it becomes zero, which means that the transaction has reached the processing phase of the stage. In the following sections we provide a performance evaluation and analysis of two-phase locking concurrency control with our proposed stage-wise locking model and lock-granting methods.
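The following sketch (ours, hedged; it reuses the Transaction class from the previous sketch) assigns the cost value C(T_i) of Table 1 and selects the next blocked transaction to be granted a released lock. Lower cost values win under FCFS, OTF, and RMINRF, while higher values win under TMAXF; FCFS and OTF costs are fixed once set (static), whereas TMAXF and RMINRF costs change as locks are granted (dynamic), matching the classification of Fig. 3.

```python
def cost(policy: str, txn) -> float:
    """Cost value C(T_i) per Table 1 (policy names as defined in Section 2.2.2)."""
    if policy == "FCFS":
        return txn.stage_request_time     # time at which the stage's request was made
    if policy == "OTF":
        return txn.timestamp              # transaction generation time (kept across restarts)
    if policy == "TMAXF":
        return len(txn.held_locks)        # locks held so far (previous stages + current stage)
    if policy == "RMINRF":
        return txn.remaining_in_stage()   # locks requested in the stage minus locks granted
    raise ValueError(f"unknown policy: {policy}")


def pick_next(policy: str, waiters):
    """Choose which blocked transaction is granted a lock that has just been released."""
    if not waiters:
        return None
    if policy == "TMAXF":                                   # higher cost = higher priority
        return max(waiters, key=lambda t: cost(policy, t))
    return min(waiters, key=lambda t: cost(policy, t))      # lower cost = higher priority
```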

Table 2. Simulation parameters

num.terminal: the number of terminals generating transactions
num.resource.units: the number of physical resource units (CPUs and disks) in the system, where one resource unit = 1 CPU + 2 disks
mpl: the multiprogramming level (the maximum number of active transactions in the system)
object.io: the service time for a single data object at the disk server
object.cpu: the service time for a single data object at the CPU server
db.size: the size of the database (the number of data items)
ext.think.time: the mean of the exponential inter-arrival time of transactions
trans.size: the size of a transaction (the number of data items requested by the transaction)
write.prob: the probability of write locks

3. Performance model

The concurrency control model used in this work consists of four components: the database model, the user model, the system model, and the transaction model. The database considered in this work is centralized. The transaction model was described in Section 2; the other three components are described below.

The database model: the database is modeled as a group of data items, with the locking granularity being a page. Data items are chosen uniformly from 1 to db.size (the database size).

The user model: transactions are generated from a fixed number of terminals. The time between the completion of one transaction and the submission of another is exponentially distributed with mean ext.think.time, a simulation parameter. This means that each terminal can have at most one pending transaction. The model is capable of generating transactions of different sizes.

The system model: the system model consists of a fixed number of terminals. The physical resources are disks and CPUs; there are multiple disk servers and multiple CPU servers. Each disk has its own queue, while all CPUs share a single queue (see Fig. 4). If a transaction requests a disk, it is uniformly assigned one of the disks; if it requests a CPU, it is assigned a free CPU. The service time at these resources is deterministic, with parameter object.io for reading or writing a data item and object.cpu for processing it. Each transaction is charged object.io and object.cpu for reading and processing each data item; items requested in write mode are charged the same amounts as read locks, but at commit time an additional object.io is charged to write the deferred updates. CPU and disk access requests are served according to the FCFS scheduling discipline.

Fig. 4. System model.

3.1. Simulation model

The simulation model used in this work is based on a closed queuing model of a centralized database system (see Fig. 5). The model is similar to that used in [5,17], with special provisions to accommodate our implementations. The model consists of a group of terminals in which transactions originate. The transaction generator is responsible for generating transactions along with their associated parameters: the number of stages, the size of each stage, the data items requested, and their lock modes. The multiprogramming level (mpl) controls the maximum number of transactions allowed to be active in the system. A transaction is considered active if it is receiving service, waiting in the queues of the lock manager, a CPU, or an I/O server, or if it is blocked. When a transaction is generated from a terminal and the system already contains a full set of active transactions, i.e. the number of active transactions equals the multiprogramming level, the transaction is delayed in the ready queue until some active transaction commits or aborts.

Fig. 5. Logical queuing model.

When a transaction is admitted to the system, it starts by making its first concurrency control request (its first stage). The lock manager is responsible for granting such requests. According to the granting model employed and the status of the database, one of the following outcomes takes place (a sketch of this granting decision follows the list):

1. The request is granted: the transaction proceeds to the object queue to read the granted data items. It is assumed that each locked data item has to be read from disk; for each data item read, the transaction is given an object.io service time. After reading all the data items requested in the current stage, the transaction is queued at the CPU queue for the processing phase, where it receives an object.cpu service time for each data item.

2. The transaction is blocked: this differs according to the lock-granting model used, as detailed in Section 2. In the total granting model, the transaction is blocked if at least one of its requested data items is not available; the transaction is inserted at the end of a queue of blocked transactions. In the partial granting model, a transaction may be granted some of the requested locks, i.e. the available ones, and waits for those that are not available; for each such data item it is inserted in the data item's wait queue according to the cost value assignment of the employed scheduling policy.

3. A deadlock is detected: one of the deadlocked transactions has to be restarted to resolve the deadlock. (A specific portion of the simulator maintains the wait-for graph and checks for deadlocks each time a transaction is blocked.) When a deadlock is detected, the current blocker is chosen as the victim to resolve it; as in Ref. [5], the current blocker is used as the victim selection criterion.
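As a sketch of the granting decision described in outcomes 1 and 2 above (our own simplification: every lock is treated as exclusive and lock compatibility is ignored, so the lock table simply maps an item to the id of its holder), total granting takes all of a stage's items or blocks the whole transaction, while partial granting takes whatever is free and queues the transaction on each unavailable item, to be scheduled later by pick_next().

```python
def request_stage_total(txn, lock_table, blocked_queue):
    """Total granting: grant every item of the current stage or none of them."""
    stage = txn.stages[txn.current_stage]
    if all(lock_table.get(item) is None for item, _ in stage.requests):
        for item, mode in stage.requests:
            lock_table[item] = txn.tid
            txn.held_locks[item] = mode
        return "granted"                       # proceed to the reading/processing phase
    blocked_queue.append(txn)                  # re-examined whenever a transaction ends
    return "blocked"


def request_stage_partial(txn, lock_table, wait_queues):
    """Partial granting: take the free items, wait per item for the locked ones."""
    outcome = "granted"
    for item, mode in txn.stages[txn.current_stage].requests:
        if lock_table.get(item) is None:
            lock_table[item] = txn.tid
            txn.held_locks[item] = mode
        else:
            wait_queues.setdefault(item, []).append(txn)   # served by the scheduling policy
            outcome = "blocked"
    return outcome
```

A deadlock check (outcome 3) would then run whenever one of these calls reports that the transaction is blocked.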

An aborted transaction is restarted by releasing all of its locked data items and inserting it at the end of the ready queue. When a transaction completes a stage it is queued again at the concurrency control queue to begin its next stage, until all stages are completed. The transaction then prepares to commit by writing the deferred updates (for write transactions only), after which it releases all locked data items and commits. When a transaction from a terminal commits, another transaction is generated in its place after an exponentially distributed time (ext.think.time). Table 2 summarizes the simulation parameters.

3.2. Experimental setting

In analyzing the simulation results, the main performance metrics used are the average transaction response time and the system throughput. The average transaction response time is the time elapsed from the generation of a transaction until it commits. The throughput of the system is the number of committed transactions per time unit. Other performance measures are also computed to help in analyzing the simulation results, among which is the restart ratio: the ratio of restarted transactions to committed transactions.

In all experiments, unless otherwise stated, the base parameter settings are those shown in Table 3.

Table 3. Base parameter settings

num.terminal: 200
num.resource.units: 1 (1 CPU + 2 disks)
object.io: time unit
object.cpu: time unit
db.size: 1000 data items
ext.think.time: 1 time unit
trans.size: 12 data items
write.prob: 0.25

These experiments are performed for a system with one class of transactions (a group of transactions belongs to the same class if they have the same size); such a system was considered to reduce randomness and to gain better insight from the obtained results. A general system with multiple classes is also considered. This parameter setting is similar to that of Ref. [3], where the number of terminals was set to 200 and the mpl was varied from 5 to 150. These values were chosen to provide a wide range of operating conditions with respect to both data and resource contention. The choice of the data item processing costs makes the system slightly I/O bound. The database size, the average transaction size, and the probability of a data item being updated were chosen so that, at high mpl values, they would yield a system with high data contention, where differences between the proposed granting methods can be observed. Each simulation run is composed of 11 batches, each of size 3000 events, where events represent committed transactions. The first batch is discarded to remove the initial transient. A 96% confidence interval is computed for each performance measure.
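As a hedged illustration of the batch-means procedure just described (the function and the normal-approximation confidence interval are our own simplifications, not the authors' simulator code), the sketch below computes throughput, mean response time, restart ratio, and approximate 96% confidence intervals over the batches, discarding the first batch as the initial transient.

```python
import statistics


def batch_metrics(batches, z=2.05):
    """batches: one dict per batch with keys 'duration', 'response_times' (one entry per
    committed transaction), and 'restarts'. z = 2.05 gives roughly a 96% two-sided
    confidence interval under a normal approximation."""
    kept = batches[1:]                      # discard the first batch (initial transient)
    throughputs = [len(b["response_times"]) / b["duration"] for b in kept]
    resp_means = [statistics.mean(b["response_times"]) for b in kept]
    committed = sum(len(b["response_times"]) for b in kept)
    restart_ratio = sum(b["restarts"] for b in kept) / committed

    def ci(samples):                        # (mean, half-width) over the batch means
        half = z * statistics.stdev(samples) / len(samples) ** 0.5
        return statistics.mean(samples), half

    return {"throughput": ci(throughputs),
            "response_time": ci(resp_means),
            "restart_ratio": restart_ratio}
```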

4. Preliminary results

As mentioned earlier, our transaction-processing model is capable of accommodating different transaction types with respect to the degree of pre-declaration a transaction can provide. Depending on what a transaction can pre-declare, it is divided into stages; if a transaction can pre-declare all of its lock requirements when it starts, then it consists of only one stage. In this experiment, we compare the performance of 2PL with different lock-request settings; the purpose of the experiment is therefore to study the effect of stage-wise locking on the performance of 2PL. This is modeled by considering transactions with different numbers of stages (see Table 4). For total granting, a transaction comprised of one stage represents conservative 2PL. With partial granting, FCFS, and a stage size of one data item, we are modeling the conventional B2PL. The experiment was conducted in a system with one class of transactions for different mpl values; however, we only show results at mpl values of 25, 75, and 150, which represent low, moderate, and high data contention, respectively.

Table 4. Stage size according to the number of stages (number of stages vs. number of data items requested in each stage).

Figs. 6, 8 and 10, respectively, plot the response time, throughput, and restart ratio for different numbers of stages under the total granting method. The first thing to notice is that the response time, the response time standard deviation, the throughput, and the restart ratio curves are proportional to each other. We also notice that the performance at low mpl values is the same for all numbers of stages. When the mpl is increased, performance degrades as the number of stages increases: the response time increases rapidly and the throughput decreases, indicating that the system is thrashing. This is due to transaction restarts. With one stage, which represents the conventional C2PL, there are no restarts, and hence the best performance is observed. As the number of stages is increased, the restart ratio increases, reaching its highest value when the transaction requests one data item per stage, which represents the conventional B2PL.

Fig. 6. Effect of number of stages, total granting (response time).
Fig. 8. Effect of number of stages, total granting (throughput).
Fig. 10. Effect of number of stages, total granting (restart ratio).

The same experiment was performed under partial granting (here we only show the results under FCFS scheduling). Figs. 7, 9 and 11, respectively, plot the response time, throughput, and restart ratio versus the number of stages for the three values of mpl. As with total granting, performance differences between the different transaction types appear only at high data contention (mpl value of 150); only at this point does the way of setting lock requests affect performance. The best performance is observed when the transaction is comprised of one stage only, because there are no deadlocks in this case. When the number of stages is increased, performance degrades up to a certain number of stages and then improves. This behavior is due to restarts (see Fig. 11). Similar observations were made for the other scheduling policies (TMAXF, RMINRF, and OTF). By examining the restart ratio results, it is noted that the ratio is highest at an intermediate number of stages; that is, deadlocks are maximized at such a number of stages. The following section analyzes deadlocks under the partial granting method.

Fig. 7. Effect of number of stages, partial granting, FCFS (response time).
Fig. 9. Effect of number of stages, partial granting, FCFS (throughput).
Fig. 11. Effect of number of stages, partial granting, FCFS (restart ratio).

5. Deadlock analysis

In 2PL, deadlocks occur when two or more transactions wait for each other. A deadlock can be resolved by aborting one or more of the transactions involved in the deadlock cycle. In our proposed transaction model, by allowing a transaction to wait for more than one data item, some additional deadlocks can occur. This can be demonstrated by the following example.

Fig. 12. An example illustrating fake deadlocks.

Consider a system with two transactions, T_1 and T_2, which for simplicity are comprised of one stage only. The write set of transaction T_1 is (D_1, D_2, D_3, D_4) and the write set of transaction T_2 is (D_1, D_2, D_5, D_6). Assume that data items D_2, D_3, and D_4 are already locked by some transaction T_i in write mode, and that data items D_1, D_5, and D_6 are free. Then the following takes place (see Fig. 12).

1. At t_1: T_1 requests locks on data items (D_1, D_2, D_3, D_4). Since data item D_1 is free, T_1 obtains a lock on D_1. Edge T_1 -> T_i is inserted in the wait-for graph (WFG) for items D_2, D_3, and D_4, since they are already locked by T_i.

2. At t_2: T_2 requests locks on data items (D_1, D_2, D_5, D_6). Edge T_2 -> T_i is inserted in the WFG for item D_2, since it is already locked by T_i. Edge T_2 -> T_1 is inserted in the WFG for item D_1, since it is already locked by T_1.

3. At t_3: T_i commits and releases data items (D_2, D_3, D_4). Every edge directed to T_i is removed from the WFG.

Now, depending on the scheduling policy used for the wait queues, the following happens.

1. If FCFS or OTF is used: T_1 is granted items D_2, D_3, and D_4, since C(T_1) < C(T_2), where C(T_1) = t_1 and C(T_2) = t_2. T_2 now waits for T_1 on item D_2, and continues to wait for T_1 on item D_1. This is a safe situation, i.e. there is no deadlock.

2. If TMAXF or RMINRF is used: T_2 is granted a lock on data item D_2, since the priority of T_2 is higher than the priority of T_1. (3) Edge T_1 -> T_2 is inserted in the WFG, which creates a deadlock cycle, since edge T_2 -> T_1 already exists in the WFG.

Note that this deadlock is not a real deadlock, i.e. it does not require restarting any of the deadlocked transactions to be resolved. This is because neither D_1 nor D_2 has been used by T_1 or T_2. (Recall that a transaction does not start the processing phase unless all the data items requested in a stage are locked.) This deadlock can be resolved by having either T_1 release its lock on data item D_1 or T_2 release its lock on data item D_2. We will call such a deadlock a fake deadlock.

(3) In TMAXF, higher cost values imply higher priority and C(T_2) > C(T_1), where C(T_2) = 2 and C(T_1) = 1. In RMINRF, on the contrary, lower cost values imply higher priority and C(T_2) < C(T_1), where C(T_2) = 2 and C(T_1) = 3.

From the previous example, if a transaction is comprised of only one stage then both FCFS and its equivalent in this case, OTF, are deadlock free, even though locks are not granted at the same time. This is because, when two transactions T_1 and T_2 are in a deadlock cycle, we have the following situation: T_1 is waiting for T_2 on item D_1, hence C(T_1) > C(T_2); and T_2 is waiting for T_1 on item D_2, hence C(T_2) > C(T_1). This is a contradiction, since C(T_i) denotes the timestamp of the transaction in the case of OTF and the time at which the stage was started in the case of FCFS.

In general, for a deadlock cycle we have T_1 -> T_2 on D_1 and T_2 -> T_1 on D_2. We can classify deadlocks into four types according to the stage in which the data items involved in the deadlock cycle were locked (see Table 5); these data items are D_1 and D_2 with respect to transactions T_1 and T_2, respectively. Each item can be either:

1. Locked in the current stage (c), i.e. the data item has not been processed yet, and hence releasing it will not affect serializability.
2. Locked in a previous stage (p). Here the data item has been processed, and releasing it must involve aborting and restarting the transaction.

Table 5. Deadlock types
Type 1: D_1 locked by T_2 in a previous stage (p); D_2 locked by T_1 in a previous stage (p); real deadlock.
Type 2: D_1 locked by T_2 in a previous stage (p); D_2 locked by T_1 in its current stage (c); fake deadlock.
Type 3: D_1 locked by T_2 in its current stage (c); D_2 locked by T_1 in a previous stage (p); fake deadlock.
Type 4: D_1 locked by T_2 in its current stage (c); D_2 locked by T_1 in its current stage (c); fake deadlock.

From Table 5 we notice the following deadlock types:

Type 1: this deadlock results when transaction T_1 is blocked on data item D_1, which has been locked by transaction T_2 in one of its previous stages, and at the same time transaction T_2 is blocked on data item D_2, which has been locked by transaction T_1 in one of its previous stages. This is a real deadlock, which can only be resolved by restarting either T_1 or T_2.

Type 2: this deadlock results when transaction T_1 is blocked on data item D_1, which has been locked by transaction T_2 in one of its previous stages, and at the same time transaction T_2 is blocked on data item D_2, which has been locked by transaction T_1 in its current stage. This is a fake deadlock, and can be resolved by having transaction T_1 release data item D_2.

Type 3: this deadlock results when transaction T_1 is blocked on data item D_1, which has been locked by transaction T_2 in its current stage, and at the same time transaction T_2 is blocked on data item D_2, which has been locked by transaction T_1 in one of its previous stages. Again this is a fake deadlock, and can be resolved by having transaction T_2 release data item D_1.

Type 4: this deadlock results when transaction T_1 is blocked on data item D_1, which has been locked by transaction T_2 in its current stage, and transaction T_2 is blocked on data item D_2, which has been locked by transaction T_1 in its current stage. This is another fake deadlock, and can be resolved either by having transaction T_1 release data item D_2 or by having transaction T_2 release data item D_1.

According to the way transactions are divided into stages and the number of locks requested in each stage, Table 6 shows the types of deadlocks that can occur under each of the proposed scheduling policies.

Table 6. Deadlock types under each scheduling policy (columns: one stage; one data item in a stage; more than one stage and more than one data item in a stage)
FCFS: none; 1; 1, 2, and 3
OTF: none; 1; all
RMINRF: 4; 1; all
TMAXF: 4; 1; all

From the table, we notice the absence of deadlocks of type 4 when FCFS scheduling is used. A deadlock of type 4 occurs when the data items involved in the deadlock cycle are requested in the current stage; since under FCFS there is always a partial order among transactions waiting for items locked during the current stage, deadlocks of this type are avoided. Also note that, under OTF and for single-stage transactions, deadlocks of type 4 do not exist either, because OTF scheduling is in that case equivalent to FCFS.
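To make the classification of Table 5 executable, the sketch below (ours; the method locked_in_previous_stage is hypothetical and simply reports whether a transaction acquired the given item before its current stage) classifies a two-transaction wait-for cycle, where T_1 waits for T_2 on item d1 and T_2 waits for T_1 on item d2, and names the resolution given in the text. A fake deadlock never forces a restart because a stage's items are not processed until every lock of that stage is held, so releasing a current-stage lock does not affect serializability.

```python
def classify_cycle(t1, t2, d1, d2):
    """T1 -> T2 on d1 and T2 -> T1 on d2 (Table 5).
    Returns the deadlock type (1-4) and a suggested resolution."""
    d1_prev = t2.locked_in_previous_stage(d1)   # did T2 lock d1 before its current stage?
    d2_prev = t1.locked_in_previous_stage(d2)   # did T1 lock d2 before its current stage?

    if d1_prev and d2_prev:
        return 1, "real deadlock: restart T1 or T2"
    if d1_prev:
        return 2, "fake deadlock: T1 releases its unprocessed lock on d2"
    if d2_prev:
        return 3, "fake deadlock: T2 releases its unprocessed lock on d1"
    return 4, "fake deadlock: either T1 releases d2 or T2 releases d1"
```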

6. Performance results

In the previous two sections we studied and analyzed the performance effects of stage-wise locking, which is the basis of our proposed transaction-processing model. In this section we present the remainder of the performance experiments. Section 6.1 considers a system with one class of transactions and Section 6.2 considers a system with multiple classes of transactions.

6.1. System with one class of transactions

In this section, we study the performance of single-class transactions. In the first experiment we compare the performance of the different scheduling policies within partial granting. Then an experiment that compares the performance of the two granting methods, total and partial granting, is presented. In yet another experiment, we study the effect of the transaction size on both total and partial granting.

6.1.1. Effect of scheduling policy

This experiment compares the performance of the partial granting method under the four scheduling policies proposed in Section 2. Since the number of stages of which a transaction is comprised is an important factor affecting performance, we conducted this experiment for all the cases shown in Table 4 and for different mpl values. Only three such cases are shown here.

Case 1: a transaction consisting of one stage, where the stage size equals the transaction size (12 data items).
Case 2: a transaction consisting of multiple stages, each of size one.
Case 3: a transaction consisting of three stages, each of size four. This represents an intermediate case.

Fig. 13. Effect of scheduling policy, case 1 (response time and response time std. dev.).

Fig. 13 shows the response time under different mpl values for the case in which transactions are comprised of one stage. Recall that here FCFS and OTF are equivalent and are deadlock free; likewise, TMAXF is equivalent to RMINRF, as the system consists of transactions of one size only. FCFS/OTF and TMAXF/RMINRF have relatively the same performance under all mpl values, which means that in a system with transactions comprised of one stage it does not make a difference which scheduling policy is used. It should be noted, however, that the standard deviation of the response time is higher under the TMAXF/RMINRF scheduling policies. In FCFS/OTF, because the cost value assignment is static, a transaction in the worst case only has to wait for some of the transactions that were active (4) on its arrival. In TMAXF/RMINRF, the cost value assignment is dynamic, so a transaction can wait for transactions that arrived later, or proceed before transactions that arrived earlier, which makes its waiting time unpredictable.

Fig. 14. Effect of scheduling policy, case 2 (response time).

Fig. 14 plots the response time for transactions with the number of stages equal to the transaction size, i.e. each stage is of size one item. Here FCFS is equivalent to RMINRF. At low mpl values, the performance of all scheduling policies is similar, because low data contention does not allow differences to appear. At high levels of multiprogramming, TMAXF has the best performance, followed by OTF and then FCFS/RMINRF. For instance, at mpl equal to 150, the response time of TMAXF was 21 and 28% lower than that of OTF and FCFS/RMINRF, respectively.

(4) More accurately, it waits only for those transactions that were holding, or waiting for, locks requested by the blocked transaction.

In FCFS/RMINRF, when a lock is released, the choice of a blocked transaction from the data item's wait queue is based on the time at which the request was made. The request that came first, however, may not be the best choice: as mentioned in Section 2, FCFS and RMINRF apply a local decision, which in this particular case does not take into consideration the global information that transactions can provide. In OTF, on the other hand, the scheduling decision is based on how old the transaction is. This is still not enough, because an old transaction might not be the transaction that is most likely to commit. TMAXF, in systems with fixed-size transactions or where the variance in transaction size is small, makes the most appropriate decision; the values of the response time standard deviation confirm this.

Fig. 15 plots the response time versus mpl for the four scheduling policies for transactions comprised of three stages, each of size four. Note that here the cost value assignments of TMAXF and RMINRF are dynamic, while those of FCFS and OTF are static. At low mpl values, no difference between the scheduling policies is observed. As the mpl increases, TMAXF outperforms RMINRF, OTF and FCFS; for instance, at mpl = 150 the response time of TMAXF was 14, 30, and 33% less than that of RMINRF, FCFS, and OTF, respectively. OTF and FCFS have relatively the same performance. As mentioned in the previous case, FCFS uses neither request nor transaction information in making a scheduling decision: it does not distinguish the stage in which the request is made (global information), nor the situation of the request within the stage (local information) as RMINRF does. OTF, on the other hand, uses only the transaction timestamp, which, as mentioned before, might not be sufficient. RMINRF showed superior performance to both OTF and FCFS: although it still uses local information, this information helps the transaction complete its current stage and, subsequently, move to the next stage until it commits. From this experiment, we can conclude that the choice of the best scheduling policy depends on how the transaction is divided into stages. Fig. 16 plots the response time for the four scheduling policies at different numbers of stages; a high mpl value (150) is chosen to allow differences to appear.

Fig. 15. Effect of scheduling policy, case 3 (response time).
Fig. 16. Effect of number of stages on scheduling policies, mpl = 150 (response time).

Here we notice that TMAXF always has the best performance, followed by RMINRF. As mentioned in Section 2, TMAXF uses global information in getting the transaction through, while RMINRF uses information that helps get requests, and hence transactions, through. Between the static scheduling policies, FCFS is better for transactions with fewer stages (large stage size), but as the number of stages increases (and the stage size decreases), OTF outperforms FCFS. Note that under FCFS there are no fake deadlocks of type 4 (Table 6). In general, OTF should perform better than FCFS, since the information it uses in scheduling blocked transactions is global; FCFS nevertheless outperformed OTF when the number of stages is low (stage size is large), because under FCFS the probability of fake deadlocks is lower than under OTF. When the stage size decreases, the probability of fake deadlocks of type 4 is reduced, and OTF outperforms FCFS.

6.1.2. Comparison between total and partial granting

In this experiment, we compare the performance of the two granting models, total granting and partial granting. We consider only the FCFS and TMAXF scheduling policies of partial granting, as TMAXF showed the best performance among the proposed partial granting scheduling policies and FCFS is similar to total granting when the transaction requests one data item per stage. Figs. 17 and 18, respectively, plot the response time and the restart ratio for different numbers of stages at an mpl value of 150.

Fig. 17. Total and partial granting, mpl = 150 (response time).
Fig. 18. Total and partial granting, mpl = 150 (restart ratio).

From the response time results we notice that total granting outperforms partial granting at all numbers of stages except the case in which the transaction locks one data item in each stage. Here TMAXF outperforms total granting since, as already mentioned in Section 6.1.1, it outperforms FCFS, which is similar to total granting in this particular case. Notice that the difference in the restart ratio curves between total granting and FCFS or TMAXF is a consequence of fake deadlocks.

6.1.3. Effect of transaction size

In this experiment, we study the performance of the proposed implementations of 2PL under different transaction sizes. The transaction size is varied from 8 to 20 in steps of four and the mpl is fixed at 75; the settings of the other parameters are the same as those shown in Table 3. Note that varying the transaction size in this way yields different data contention workloads. Using the formula introduced by Tay [10], Table 7 shows the workload (5) under each transaction size. Changing the transaction size while fixing the number of stages of which the transaction is comprised yields different stage sizes, as shown in Table 8.

Table 7. Effect of transaction size on system workload (transaction size vs. workload).
Table 8. Stage size according to transaction size, one transaction class (number of stages vs. stage size for trans.size = 8, 12, 16, and 20).

Figs. 19-22 plot the response time for total and partial granting under different transaction sizes and an mpl value of 75, for a number of stages equal to 1, 2, 4, and the transaction size, respectively. We notice that as the transaction size increases, performance degrades due to the increase in data contention. We also notice that when the transaction is comprised of one stage (Fig. 19) both total granting and partial granting, under all of its scheduling policies, have relatively similar performance. However, when the transaction requests one data item in each stage (Fig. 22), partial granting with the TMAXF scheduling policy outperforms total granting (at transaction size 20, the response time of TMAXF was 7% lower than that of total granting). In contrast, Figs. 20 and 21 show that the total granting method has superior performance, as its restart ratio is very low compared with that of the partial granting scheduling policies; hence, it has the lowest response time. In total granting, the increase in response time with transaction size is also slower than in partial granting.

Fig. 19. Effect of transaction size, 1 stage, mpl = 75 (response time).
Fig. 20. Effect of transaction size, 2 stages, mpl = 75 (response time).
Fig. 21. Effect of transaction size, 4 stages, mpl = 75 (response time).
Fig. 22. Effect of transaction size, 1 item/stage, mpl = 75 (response time).

(5) While such a workload estimate is computed under a number of simplifying assumptions, it is the relative values that are important.

In partial granting, when a transaction is blocked at any stage, then in addition to the locks it already holds from previous stages it will also be holding some locks from the current stage; this is what causes fake deadlocks to occur. Hence, in total granting, deadlocks increase with the transaction size because of the increase in data contention, whereas in partial granting the increase in deadlocks is due both to the increase in data contention and to the increase in the probability of fake deadlocks.

6.2. System with multiple classes of transactions

In this section, we consider the general case in which the transaction size and the number of stages are random. The transaction size is uniformly distributed between 4 and 20, yielding an average transaction size of 12. The number of stages and the stage size are variable. Figs. 23, 25 and 27, respectively, plot the response time, the throughput, and the restart ratio for total granting and for partial granting with its four scheduling policies, under different mpl values. As in the previous experiments, total granting showed superior performance to partial granting. Within partial granting, TMAXF showed the best performance; FCFS, OTF, and RMINRF have relatively the same performance.

Next, we consider a system in which the transaction size is uniformly distributed between 4 and 20 as in the previous experiment, but transactions request only one data item in each stage. This corresponds to the conventional implementation of two-phase locking if FCFS is used. Figs. 24, 26 and 28, respectively, plot the response time, throughput, and restart ratio for total and partial granting versus the mpl. Here we notice that total and partial granting have the same performance up to the thrashing point. At this point differences start to appear, with total granting and TMAXF outperforming OTF and FCFS/RMINRF. At mpl = 150, the response time of TMAXF is 54% lower than that of FCFS/RMINRF, which has the highest response time.

In Section 4, it was shown that for identical (6) fixed-size transactions under partial granting, requesting data items on an item-by-item basis may outperform the stage-requesting case. This is not true for systems where the transaction size, and accordingly the stage sizes, are not fixed. Indeed, requesting data items on an item-by-item basis and applying FCFS scheduling underestimates the performance of 2PL. This is illustrated in Fig. 29, which plots the average response time of total granting, TMAXF, and FCFS in the stage-wise locking case, and of the conventional item-by-item FCFS. The figure shows that requesting data items in stages as dictated by the application enhances performance. For instance, at mpl = 150, using the same scheduling policy (FCFS) with stage-wise locking results in a 30% improvement in performance, TMAXF results in an 88% improvement, and total granting improves performance tremendously, achieving a 178% improvement over conventional item-by-item FCFS 2PL.

7. Conclusions and future work

In this paper, we have proposed a general transaction-processing model that, in our opinion, represents the natural way in which transactions set their lock requirements.

(6) By identical we mean transactions with the same number of stages and the same number of data items per stage.


More information

Concurrency Control. Chapter 17. Comp 521 Files and Databases Spring

Concurrency Control. Chapter 17. Comp 521 Files and Databases Spring Concurrency Control Chapter 17 Comp 521 Files and Databases Spring 2010 1 Conflict Serializable Schedules Recall conflicts (WW, RW, WW) were the cause of sequential inconsistency Two schedules are conflict

More information

Silberschatz and Galvin Chapter 18

Silberschatz and Galvin Chapter 18 Silberschatz and Galvin Chapter 18 Distributed Coordination CPSC 410--Richard Furuta 4/21/99 1 Distributed Coordination Synchronization in a distributed environment Ð Event ordering Ð Mutual exclusion

More information

Chapter 9: Concurrency Control

Chapter 9: Concurrency Control Chapter 9: Concurrency Control Concurrency, Conflicts, and Schedules Locking Based Algorithms Timestamp Ordering Algorithms Deadlock Management Acknowledgements: I am indebted to Arturas Mazeika for providing

More information

Subject Name: OPERATING SYSTEMS. Subject Code: 10EC65. Prepared By: Kala H S and Remya R. Department: ECE. Date:

Subject Name: OPERATING SYSTEMS. Subject Code: 10EC65. Prepared By: Kala H S and Remya R. Department: ECE. Date: Subject Name: OPERATING SYSTEMS Subject Code: 10EC65 Prepared By: Kala H S and Remya R Department: ECE Date: Unit 7 SCHEDULING TOPICS TO BE COVERED Preliminaries Non-preemptive scheduling policies Preemptive

More information

Addresses in the source program are generally symbolic. A compiler will typically bind these symbolic addresses to re-locatable addresses.

Addresses in the source program are generally symbolic. A compiler will typically bind these symbolic addresses to re-locatable addresses. 1 Memory Management Address Binding The normal procedures is to select one of the processes in the input queue and to load that process into memory. As the process executed, it accesses instructions and

More information

Phantom Problem. Phantom Problem. Phantom Problem. Phantom Problem R1(X1),R1(X2),W2(X3),R1(X1),R1(X2),R1(X3) R1(X1),R1(X2),W2(X3),R1(X1),R1(X2),R1(X3)

Phantom Problem. Phantom Problem. Phantom Problem. Phantom Problem R1(X1),R1(X2),W2(X3),R1(X1),R1(X2),R1(X3) R1(X1),R1(X2),W2(X3),R1(X1),R1(X2),R1(X3) 57 Phantom Problem So far we have assumed the database to be a static collection of elements (=tuples) If tuples are inserted/deleted then the phantom problem appears 58 Phantom Problem INSERT INTO Product(name,

More information

Transaction Management

Transaction Management Transaction Management Imran Khan FCS, IBA In this chapter, you will learn: What a database transaction is and what its properties are How database transactions are managed What concurrency control is

More information

A Performance Study of Locking Granularity in Shared-Nothing Parallel Database Systems

A Performance Study of Locking Granularity in Shared-Nothing Parallel Database Systems A Performance Study of Locking Granularity in Shared-Nothing Parallel Database Systems S. Dandamudi, S. L. Au, and C. Y. Chow School of Computer Science, Carleton University Ottawa, Ontario K1S 5B6, Canada

More information

Concurrency Control! Snapshot isolation" q How to ensure serializability and recoverability? " q Lock-Based Protocols" q Other Protocols"

Concurrency Control! Snapshot isolation q How to ensure serializability and recoverability?  q Lock-Based Protocols q Other Protocols Concurrency Control! q How to ensure serializability and recoverability? q Lock-Based Protocols q Lock, 2PL q Lock Conversion q Lock Implementation q Deadlock q Multiple Granularity q Other Protocols q

More information

CS 347 Parallel and Distributed Data Processing

CS 347 Parallel and Distributed Data Processing CS 347 Parallel and Distributed Data Processing Spring 2016 Notes 5: Concurrency Control Topics Data Database design Queries Decomposition Localization Optimization Transactions Concurrency control Reliability

More information

Transactional Information Systems:

Transactional Information Systems: Transactional Information Systems: Theory, Algorithms, and the Practice of Concurrency Control and Recovery Gerhard Weikum and Gottfried Vossen 2002 Morgan Kaufmann ISBN 1-55860-508-8 Teamwork is essential.

More information

Lecture 13 Concurrency Control

Lecture 13 Concurrency Control Lecture 13 Concurrency Control Shuigeng Zhou December 23, 2009 School of Computer Science Fudan University Outline Lock-Based Protocols Multiple Granularity Deadlock Handling Insert and Delete Operations

More information

Chapter 16: Distributed Synchronization

Chapter 16: Distributed Synchronization Chapter 16: Distributed Synchronization Chapter 16 Distributed Synchronization Event Ordering Mutual Exclusion Atomicity Concurrency Control Deadlock Handling Election Algorithms Reaching Agreement 18.2

More information

Multiversion schemes keep old versions of data item to increase concurrency. Multiversion Timestamp Ordering Multiversion Two-Phase Locking Each

Multiversion schemes keep old versions of data item to increase concurrency. Multiversion Timestamp Ordering Multiversion Two-Phase Locking Each Multiversion schemes keep old versions of data item to increase concurrency. Multiversion Timestamp Ordering Multiversion Two-Phase Locking Each successful write results in the creation of a new version

More information

Concurrency Control Algorithms

Concurrency Control Algorithms Concurrency Control Algorithms Given a number of conflicting transactions, the serializability theory provides criteria to study the correctness of a possible schedule of execution it does not provide

More information

Deadlocks. Minsoo Ryu. Real-Time Computing and Communications Lab. Hanyang University.

Deadlocks. Minsoo Ryu. Real-Time Computing and Communications Lab. Hanyang University. Deadlocks Minsoo Ryu Real-Time Computing and Communications Lab. Hanyang University msryu@hanyang.ac.kr Topics Covered System Model Deadlock Characterization Methods for Handling Deadlocks Deadlock Prevention

More information

3 No-Wait Job Shops with Variable Processing Times

3 No-Wait Job Shops with Variable Processing Times 3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select

More information

Distributed Scheduling for the Sombrero Single Address Space Distributed Operating System

Distributed Scheduling for the Sombrero Single Address Space Distributed Operating System Distributed Scheduling for the Sombrero Single Address Space Distributed Operating System Donald S. Miller Department of Computer Science and Engineering Arizona State University Tempe, AZ, USA Alan C.

More information

Advances in Data Management Transaction Management A.Poulovassilis

Advances in Data Management Transaction Management A.Poulovassilis 1 Advances in Data Management Transaction Management A.Poulovassilis 1 The Transaction Manager Two important measures of DBMS performance are throughput the number of tasks that can be performed within

More information

Carnegie Mellon Univ. Dept. of Computer Science /615 - DB Applications. Last Class. Last Class. Faloutsos/Pavlo CMU /615

Carnegie Mellon Univ. Dept. of Computer Science /615 - DB Applications. Last Class. Last Class. Faloutsos/Pavlo CMU /615 Carnegie Mellon Univ. Dept. of Computer Science 15-415/615 - DB Applications C. Faloutsos A. Pavlo Lecture#21: Concurrency Control (R&G ch. 17) Last Class Introduction to Transactions ACID Concurrency

More information

Chapter 18: Distributed

Chapter 18: Distributed Chapter 18: Distributed Synchronization, Silberschatz, Galvin and Gagne 2009 Chapter 18: Distributed Synchronization Event Ordering Mutual Exclusion Atomicity Concurrency Control Deadlock Handling Election

More information

T ransaction Management 4/23/2018 1

T ransaction Management 4/23/2018 1 T ransaction Management 4/23/2018 1 Air-line Reservation 10 available seats vs 15 travel agents. How do you design a robust and fair reservation system? Do not enough resources Fair policy to every body

More information

DHANALAKSHMI COLLEGE OF ENGINEERING, CHENNAI

DHANALAKSHMI COLLEGE OF ENGINEERING, CHENNAI DHANALAKSHMI COLLEGE OF ENGINEERING, CHENNAI Department of Computer Science and Engineering CS6302- DATABASE MANAGEMENT SYSTEMS Anna University 2 & 16 Mark Questions & Answers Year / Semester: II / III

More information

11/7/2018. Event Ordering. Module 18: Distributed Coordination. Distributed Mutual Exclusion (DME) Implementation of. DME: Centralized Approach

11/7/2018. Event Ordering. Module 18: Distributed Coordination. Distributed Mutual Exclusion (DME) Implementation of. DME: Centralized Approach Module 18: Distributed Coordination Event Ordering Event Ordering Mutual Exclusion Atomicity Concurrency Control Deadlock Handling Election Algorithms Reaching Agreement Happened-before relation (denoted

More information

Incompatibility Dimensions and Integration of Atomic Commit Protocols

Incompatibility Dimensions and Integration of Atomic Commit Protocols Preprint Incompatibility Dimensions and Integration of Atomic Protocols, Yousef J. Al-Houmaily, International Arab Journal of Information Technology, Vol. 5, No. 4, pp. 381-392, October 2008. Incompatibility

More information

Synchronization Part II. CS403/534 Distributed Systems Erkay Savas Sabanci University

Synchronization Part II. CS403/534 Distributed Systems Erkay Savas Sabanci University Synchronization Part II CS403/534 Distributed Systems Erkay Savas Sabanci University 1 Election Algorithms Issue: Many distributed algorithms require that one process act as a coordinator (initiator, etc).

More information

Concurrency Control 9-1

Concurrency Control 9-1 Concurrency Control The problem of synchronizing concurrent transactions such that the consistency of the database is maintained while, at the same time, maximum degree of concurrency is achieved. Principles:

More information

Database Management Systems Concurrency Control

Database Management Systems Concurrency Control atabase Management Systems Concurrency Control B M G 1 BMS Architecture SQL INSTRUCTION OPTIMIZER MANAGEMENT OF ACCESS METHOS CONCURRENCY CONTROL BUFFER MANAGER RELIABILITY MANAGEMENT Index Files ata Files

More information

CPU Scheduling. Daniel Mosse. (Most slides are from Sherif Khattab and Silberschatz, Galvin and Gagne 2013)

CPU Scheduling. Daniel Mosse. (Most slides are from Sherif Khattab and Silberschatz, Galvin and Gagne 2013) CPU Scheduling Daniel Mosse (Most slides are from Sherif Khattab and Silberschatz, Galvin and Gagne 2013) Basic Concepts Maximum CPU utilization obtained with multiprogramming CPU I/O Burst Cycle Process

More information

Distributed Fault-Tolerant Channel Allocation for Cellular Networks

Distributed Fault-Tolerant Channel Allocation for Cellular Networks 1326 IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 18, NO. 7, JULY 2000 Distributed Fault-Tolerant Channel Allocation for Cellular Networks Guohong Cao, Associate Member, IEEE, and Mukesh Singhal,

More information

Deadlocks. Prepared By: Kaushik Vaghani

Deadlocks. Prepared By: Kaushik Vaghani Deadlocks Prepared By : Kaushik Vaghani Outline System Model Deadlock Characterization Methods for Handling Deadlocks Deadlock Prevention Deadlock Avoidance Deadlock Detection & Recovery The Deadlock Problem

More information

CSC 261/461 Database Systems Lecture 21 and 22. Spring 2017 MW 3:25 pm 4:40 pm January 18 May 3 Dewey 1101

CSC 261/461 Database Systems Lecture 21 and 22. Spring 2017 MW 3:25 pm 4:40 pm January 18 May 3 Dewey 1101 CSC 261/461 Database Systems Lecture 21 and 22 Spring 2017 MW 3:25 pm 4:40 pm January 18 May 3 Dewey 1101 Announcements Project 3 (MongoDB): Due on: 04/12 Work on Term Project and Project 1 The last (mini)

More information

Chapter 8: Deadlocks. The Deadlock Problem

Chapter 8: Deadlocks. The Deadlock Problem Chapter 8: Deadlocks System Model Deadlock Characterization Methods for Handling Deadlocks Deadlock Prevention Deadlock Avoidance Deadlock Detection Recovery from Deadlock Combined Approach to Deadlock

More information

The Deadlock Problem. Chapter 8: Deadlocks. Bridge Crossing Example. System Model. Deadlock Characterization. Resource-Allocation Graph

The Deadlock Problem. Chapter 8: Deadlocks. Bridge Crossing Example. System Model. Deadlock Characterization. Resource-Allocation Graph Chapter 8: Deadlocks The Deadlock Problem System Model Deadlock Characterization Methods for Handling Deadlocks Deadlock Prevention Deadlock Avoidance Deadlock Detection Recovery from Deadlock Combined

More information

Impact of Mobility on Concurrent Transactions Mixture

Impact of Mobility on Concurrent Transactions Mixture Impact of Mobility on Concurrent Transactions Mixture Ahmad Alqerem Abstract This paper presents a simulation analysis of the impact of mobility on concurrent transaction processing over a mixture of mobile

More information

UNIT-3 DEADLOCKS DEADLOCKS

UNIT-3 DEADLOCKS DEADLOCKS UNIT-3 DEADLOCKS Deadlocks: System Model - Deadlock Characterization - Methods for Handling Deadlocks - Deadlock Prevention. Deadlock Avoidance - Deadlock Detection - Recovery from Deadlock DEADLOCKS Definition:

More information

Oracle Database 10g Resource Manager. An Oracle White Paper October 2005

Oracle Database 10g Resource Manager. An Oracle White Paper October 2005 Oracle Database 10g Resource Manager An Oracle White Paper October 2005 Oracle Database 10g Resource Manager INTRODUCTION... 3 SYSTEM AND RESOURCE MANAGEMENT... 3 ESTABLISHING RESOURCE PLANS AND POLICIES...

More information

Chapter 9. Uniprocessor Scheduling

Chapter 9. Uniprocessor Scheduling Operating System Chapter 9. Uniprocessor Scheduling Lynn Choi School of Electrical Engineering Scheduling Processor Scheduling Assign system resource (CPU time, IO device, etc.) to processes/threads to

More information

Virtual Memory - Overview. Programmers View. Virtual Physical. Virtual Physical. Program has its own virtual memory space.

Virtual Memory - Overview. Programmers View. Virtual Physical. Virtual Physical. Program has its own virtual memory space. Virtual Memory - Overview Programmers View Process runs in virtual (logical) space may be larger than physical. Paging can implement virtual. Which pages to have in? How much to allow each process? Program

More information

Transaction Management: Concurrency Control

Transaction Management: Concurrency Control Transaction Management: Concurrency Control Yanlei Diao Slides Courtesy of R. Ramakrishnan and J. Gehrke DBMS Architecture Query Parser Query Rewriter Query Optimizer Query Executor Lock Manager Concurrency

More information

Chapter 7: Deadlocks

Chapter 7: Deadlocks Chapter 7: Deadlocks The Deadlock Problem System Model Deadlock Characterization Methods for Handling Deadlocks Deadlock Prevention Deadlock Avoidance Deadlock Detection Recovery from Deadlock Chapter

More information

Operating Systems. Designed and Presented by Dr. Ayman Elshenawy Elsefy

Operating Systems. Designed and Presented by Dr. Ayman Elshenawy Elsefy Operating Systems Designed and Presented by Dr. Ayman Elshenawy Elsefy Dept. of Systems & Computer Eng.. AL-AZHAR University Website : eaymanelshenawy.wordpress.com Email : eaymanelshenawy@yahoo.com Reference

More information

The Deadlock Problem. A set of blocked processes each holding a resource and waiting to acquire a resource held by another process in the set.

The Deadlock Problem. A set of blocked processes each holding a resource and waiting to acquire a resource held by another process in the set. Deadlock The Deadlock Problem A set of blocked processes each holding a resource and waiting to acquire a resource held by another process in the set Example semaphores A and B, initialized to 1 P 0 P

More information

An Improved Priority Dynamic Quantum Time Round-Robin Scheduling Algorithm

An Improved Priority Dynamic Quantum Time Round-Robin Scheduling Algorithm An Improved Priority Dynamic Quantum Time Round-Robin Scheduling Algorithm Nirali A. Patel PG Student, Information Technology, L.D. College Of Engineering,Ahmedabad,India ABSTRACT In real-time embedded

More information

The Deadlock Problem (1)

The Deadlock Problem (1) Deadlocks The Deadlock Problem (1) A set of blocked processes each holding a resource and waiting to acquire a resource held by another process in the set. Example System has 2 disk drives. P 1 and P 2

More information

Concurrency Control. Chapter 17. Database Management Systems 3ed, R. Ramakrishnan and J. Gehrke 1

Concurrency Control. Chapter 17. Database Management Systems 3ed, R. Ramakrishnan and J. Gehrke 1 Concurrency Control Chapter 17 Database Management Systems 3ed, R. Ramakrishnan and J. Gehrke 1 Conflict Schedules Two actions conflict if they operate on the same data object and at least one of them

More information

Chapter 6 Concurrency: Deadlock and Starvation

Chapter 6 Concurrency: Deadlock and Starvation Operating Systems: Internals and Design Principles Chapter 6 Concurrency: Deadlock and Starvation Seventh Edition By William Stallings Edited by Rana Forsati CSE410 Outline Principles of deadlock Deadlock

More information

Some Examples of Conflicts. Transactional Concurrency Control. Serializable Schedules. Transactions: ACID Properties. Isolation and Serializability

Some Examples of Conflicts. Transactional Concurrency Control. Serializable Schedules. Transactions: ACID Properties. Isolation and Serializability ome Examples of onflicts ransactional oncurrency ontrol conflict exists when two transactions access the same item, and at least one of the accesses is a write. 1. lost update problem : transfer $100 from

More information

CS54200: Distributed Database Systems

CS54200: Distributed Database Systems CS54200: Distributed Database Systems Timestamp Ordering 28 January 2009 Prof. Chris Clifton Timestamp Ordering The key idea for serializability is to ensure that conflicting operations are not executed

More information

University of Babylon / College of Information Technology / Network Department. Operating System / Dr. Mahdi S. Almhanna & Dr. Rafah M.

University of Babylon / College of Information Technology / Network Department. Operating System / Dr. Mahdi S. Almhanna & Dr. Rafah M. Chapter 6 Methods for Handling Deadlocks Generally speaking, we can deal with the deadlock problem in one of three ways: We can use a protocol to prevent or avoid deadlocks, ensuring that the system will

More information

EXTENDING THE PRIORITY CEILING PROTOCOL USING READ/WRITE AFFECTED SETS MICHAEL A. SQUADRITO A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE

EXTENDING THE PRIORITY CEILING PROTOCOL USING READ/WRITE AFFECTED SETS MICHAEL A. SQUADRITO A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE EXTENDING THE PRIORITY CEILING PROTOCOL USING READ/WRITE AFFECTED SETS BY MICHAEL A. SQUADRITO A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN COMPUTER

More information

Chapter 7: Deadlocks. Operating System Concepts 9 th Edition

Chapter 7: Deadlocks. Operating System Concepts 9 th Edition Chapter 7: Deadlocks Silberschatz, Galvin and Gagne 2013 Chapter 7: Deadlocks System Model Deadlock Characterization Methods for Handling Deadlocks Deadlock Prevention Deadlock Avoidance Deadlock Detection

More information

Resolving Executing Committing Conflicts in Distributed Real-time Database Systems

Resolving Executing Committing Conflicts in Distributed Real-time Database Systems Resolving Executing Committing Conflicts in Distributed Real-time Database Systems KAM-YIU LAM 1,CHUNG-LEUNG PANG 1,SANG H. SON 2 AND JIANNONG CAO 3 1 Department of Computer Science, City University of

More information

Future-ready IT Systems with Performance Prediction using Analytical Models

Future-ready IT Systems with Performance Prediction using Analytical Models Future-ready IT Systems with Performance Prediction using Analytical Models Madhu Tanikella Infosys Abstract Large and complex distributed software systems can impact overall software cost and risk for

More information

CHAPTER-III WAVELENGTH ROUTING ALGORITHMS

CHAPTER-III WAVELENGTH ROUTING ALGORITHMS CHAPTER-III WAVELENGTH ROUTING ALGORITHMS Introduction A wavelength routing (WR) algorithm selects a good route and a wavelength to satisfy a connection request so as to improve the network performance.

More information

Distributed Deadlocks. Prof. Ananthanarayana V.S. Dept. of Information Technology N.I.T.K., Surathkal

Distributed Deadlocks. Prof. Ananthanarayana V.S. Dept. of Information Technology N.I.T.K., Surathkal Distributed Deadlocks Prof. Ananthanarayana V.S. Dept. of Information Technology N.I.T.K., Surathkal Objectives of This Module In this module different kind of resources, different kind of resource request

More information

The Deadlock Problem

The Deadlock Problem Chapter 7: Deadlocks The Deadlock Problem System Model Deadlock Characterization Methods for Handling Deadlocks Deadlock Prevention Deadlock Avoidance Deadlock Detection Recovery from Deadlock The Deadlock

More information

FCM 710: Architecture of Secure Operating Systems

FCM 710: Architecture of Secure Operating Systems FCM 710: Architecture of Secure Operating Systems Practice Exam, Spring 2010 Email your answer to ssengupta@jjay.cuny.edu March 16, 2010 Instructor: Shamik Sengupta Multiple-Choice 1. operating systems

More information

Transaction Management Exercises KEY

Transaction Management Exercises KEY Transaction Management Exercises KEY I/O and CPU activities can be and are overlapped to minimize (disk and processor) idle time and to maximize throughput (units of work per time unit). This motivates

More information

Concurrency Control. Concurrency Control Ensures interleaving of operations amongst concurrent transactions result in serializable schedules

Concurrency Control. Concurrency Control Ensures interleaving of operations amongst concurrent transactions result in serializable schedules Concurrency Control Concurrency Control Ensures interleaving of operations amongst concurrent transactions result in serializable schedules How? transaction operations interleaved following a protocol

More information

Chapter 8: Deadlocks. Bridge Crossing Example. The Deadlock Problem

Chapter 8: Deadlocks. Bridge Crossing Example. The Deadlock Problem Chapter 8: Deadlocks Deadlock Characterization Methods for Handling Deadlocks Deadlock Prevention Deadlock Avoidance Deadlock Detection Recovery from Deadlock 8.1 Bridge Crossing Example Bridge has one

More information

Review. Review. Carnegie Mellon Univ. Dept. of Computer Science /615 - DB Applications. Lecture #21: Concurrency Control (R&G ch.

Review. Review. Carnegie Mellon Univ. Dept. of Computer Science /615 - DB Applications. Lecture #21: Concurrency Control (R&G ch. Carnegie Mellon Univ. Dept. of Computer Science 15-415/615 - DB Applications Lecture #21: Concurrency Control (R&G ch. 17) Review DBMSs support ACID Transaction semantics. Concurrency control and Crash

More information

Coordination and Agreement

Coordination and Agreement Coordination and Agreement Nicola Dragoni Embedded Systems Engineering DTU Informatics 1. Introduction 2. Distributed Mutual Exclusion 3. Elections 4. Multicast Communication 5. Consensus and related problems

More information

Management of Protocol State

Management of Protocol State Management of Protocol State Ibrahim Matta December 2012 1 Introduction These notes highlight the main issues related to synchronizing the data at both sender and receiver of a protocol. For example, in

More information

Database Tuning and Physical Design: Execution of Transactions

Database Tuning and Physical Design: Execution of Transactions Database Tuning and Physical Design: Execution of Transactions Spring 2018 School of Computer Science University of Waterloo Databases CS348 (University of Waterloo) Transaction Execution 1 / 20 Basics

More information

Datenbanksysteme II: Implementation of Database Systems Synchronization of Concurrent Transactions

Datenbanksysteme II: Implementation of Database Systems Synchronization of Concurrent Transactions Datenbanksysteme II: Implementation of Database Systems Synchronization of Concurrent Transactions Material von Prof. Johann Christoph Freytag Prof. Kai-Uwe Sattler Prof. Alfons Kemper, Dr. Eickler Prof.

More information

The Deadlock Problem

The Deadlock Problem Deadlocks The Deadlock Problem A set of blocked processes each holding a resource and waiting to acquire a resource held by another process in the set. Example System has 2 disk drives. P1 and P2 each

More information

DiffServ Architecture: Impact of scheduling on QoS

DiffServ Architecture: Impact of scheduling on QoS DiffServ Architecture: Impact of scheduling on QoS Abstract: Scheduling is one of the most important components in providing a differentiated service at the routers. Due to the varying traffic characteristics

More information

CSC Operating Systems Spring Lecture - XII Midterm Review. Tevfik Ko!ar. Louisiana State University. March 4 th, 2008.

CSC Operating Systems Spring Lecture - XII Midterm Review. Tevfik Ko!ar. Louisiana State University. March 4 th, 2008. CSC 4103 - Operating Systems Spring 2008 Lecture - XII Midterm Review Tevfik Ko!ar Louisiana State University March 4 th, 2008 1 I/O Structure After I/O starts, control returns to user program only upon

More information

Consistency in Distributed Systems

Consistency in Distributed Systems Consistency in Distributed Systems Recall the fundamental DS properties DS may be large in scale and widely distributed 1. concurrent execution of components 2. independent failure modes 3. transmission

More information

The Slide does not contain all the information and cannot be treated as a study material for Operating System. Please refer the text book for exams.

The Slide does not contain all the information and cannot be treated as a study material for Operating System. Please refer the text book for exams. The Slide does not contain all the information and cannot be treated as a study material for Operating System. Please refer the text book for exams. System Model Deadlock Characterization Methods of handling

More information

A Modified Maximum Urgency First Scheduling Algorithm for Real-Time Tasks

A Modified Maximum Urgency First Scheduling Algorithm for Real-Time Tasks Vol:, o:9, 2007 A Modified Maximum Urgency irst Scheduling Algorithm for Real-Time Tasks Vahid Salmani, Saman Taghavi Zargar, and Mahmoud aghibzadeh International Science Index, Computer and Information

More information

B. V. Patel Institute of Business Management, Computer &Information Technology, UTU

B. V. Patel Institute of Business Management, Computer &Information Technology, UTU BCA-3 rd Semester 030010304-Fundamentals Of Operating Systems Unit: 1 Introduction Short Answer Questions : 1. State two ways of process communication. 2. State any two uses of operating system according

More information