Proving Lock-Freedom Easily and Automatically


Xiao Jia, Wei Li (Shanghai Jiao Tong University)
Viktor Vafeiadis (Max Planck Institute for Software Systems (MPI-SWS))

Abstract. Lock-freedom is a liveness property satisfied by most non-blocking concurrent algorithms. It ensures that at any point at least one thread is making progress towards termination; so the system as a whole makes progress. As a global property, lock-freedom is typically shown by global proofs or complex iterated arguments. We show that this complexity is not needed in practice. By introducing simple loop depth counters into the programs, we can reduce proving lock-freedom to checking simple local properties on those counters. We have implemented the approach in CAVE and report on our findings.

Categories and Subject Descriptors. D.3.1 [Programming Languages]: Formal Definitions and Theory; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs

Keywords. Concurrency; Lock-freedom; Verification; RGSep

1. Introduction

Non-blocking synchronisation is a style of multithreaded programming that is extensively used in concurrent data structure libraries, such as java.util.concurrent, to achieve good average performance and progress even under bad schedules. Unlike blocking synchronisation primitives, such as mutual exclusion locks, which cannot guarantee progress whenever one thread fails or stalls indefinitely, non-blocking synchronisation ensures some form of system-wide progress even if some threads of the system are not scheduled for an indefinitely long time period. In detail, the literature contains three such liveness properties:

Wait-freedom [6] is the strongest of the three properties. It requires every operation to terminate provided it is scheduled, possibly intermittently, for a sufficient number of steps, irrespective of any other concurrent operations.
Such behaviour is indeed desirable, but this requirement is very strong and often results in complicated and inefficient implementations.

Lock-freedom [11] requires that from each point in time onwards, some library operation will be completed, provided that at least one thread executing a library operation is scheduled. This requirement ensures that the program as a whole makes progress and is never blocked. From the point of view of an individual thread, however, very little is guaranteed: a thread executing a certain operation might never terminate if other operations continuously get in its way and are completed instead.

Obstruction-freedom [8] is the weakest requirement, guaranteeing progress only if a thread is run in isolation for a sufficiently long amount of time. In the presence of contention, the entire system can livelock, with each thread undoing the work done by other threads.

In this paper, we concentrate on lock-freedom, for two reasons. First, it is the most practically relevant property of the three, because it provides a reasonably strong progress guarantee and allows for efficient implementations in practice.

(First-page note: the first author is currently at Google; the work was done during an internship at MPI-SWS. Published at CPP '15, January 13-14, 2015, Mumbai, India. Copyright is held by the owner/author(s); publication rights licensed to ACM.)
As a point of reference, most non-blocking concurrent algorithms in the Art of Multiprocessor Programming book [7] are lock-free, but not wait-free. Second, as noted by Gotsman et al. [4], proving lock-freedom is more challenging than the other two properties: whereas wait-freedom and obstruction-freedom concern the termination of one thread, lock-freedom is a global property concerning the progress of the system as a whole.

The literature contains a few approaches for verifying lock-freedom, but these are unnecessarily complicated and difficult to automate. We discuss them in detail in Section 6, but it suffices to say that they involve either a global termination argument (e.g., [1, 2]) or non-trivial extensions to previously known concurrent program logics (e.g., [4, 9, 10]).

We propose a simpler method for verifying lock-freedom, which does not require a new program logic or any sophisticated program analysis. We instrument the source code with assignments to certain auxiliary variables and add assertions to each back-edge of the program's control-flow graph. We prove in Coq that if all these assertions are never violated, then the program is lock-free. Using an off-the-shelf program analysis or program logic, we can then verify these assertions, thereby establishing lock-freedom.

1.1 A Simple Example: the Read-Compute-Update Pattern

To illustrate our proof technique for proving lock-freedom, as a first example, let us consider the read, compute and update (henceforth, RCU) pattern that lies at the core of many non-blocking algorithms, such as the Treiber stack [13]. This pattern consists of a loop first reading a shared variable X, then doing some computation, and finally checking whether the value of X has changed: if it has, we ignore the results of the computation and restart the loop; otherwise, if the value of X has not changed, we update X and return.
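As a minimal executable sketch of this pattern (ours, not the paper's formalism), the following Python code models the atomic check-and-update with a lock-based compare-and-swap stand-in; the names Cell, rcu_update, and compute are our own:

```python
import threading

class Cell:
    """A shared variable with an atomic compare-and-swap, modelled with a
    lock, since plain Python attributes have no native CAS."""
    def __init__(self, v):
        self._v = v
        self._lock = threading.Lock()

    def read(self):
        return self._v

    def cas(self, expected, new):
        # The equality check and the update happen in one atomic step.
        with self._lock:
            if self._v == expected:
                self._v = new
                return True
            return False

def rcu_update(cell, compute):
    """Read X, compute a new value locally, and commit only if X is
    unchanged; otherwise discard the computation and restart the loop."""
    while True:
        t = cell.read()           # t := X
        new = compute(t)          # local computation on the snapshot
        if cell.cas(t, new):      # commit iff X still equals t
            return new

X = Cell(0)
assert rcu_update(X, lambda v: v + 1) == 1
assert X.read() == 1
```

A loop iteration of rcu_update fails only when some concurrent rcu_update succeeded in between, which is exactly the observation the progress-counter instrumentation below exploits.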
(Typically, the check that X has the same value and the update both happen in one atomic step using a CAS instruction. We elide these details, as the atomicity is irrelevant for the progress argument.)

    op ≜ while (true) {
      t := X; ... ;
      if (X == t) { X := ... ; break; }
    }

Consider now a system with multiple threads that repeatedly execute this operation. We can immediately observe that the operation is not wait-free: there exist executions that always happen to change the value of X between the initial read and the equality test, thereby making the operation potentially loop forever. The operation, however, is lock-free. The only reason why the loop might fail to terminate is if another thread writes to X. But the only way a thread can write to X is by then exiting and finishing its operation. Thus, at all times, if a thread performing op is scheduled, then at least one thread is making progress towards finishing its ongoing operation.

To capture this global argument, our idea is to instrument the code with an auxiliary variable, P, counting the number of times progress towards an operation's completion is made. That is, we increment P at every program step that does not appear on a looping path. By this we mean any program step that either (1) is not inside a loop, or (2) is inside only one loop, but on a path that will exit the loop, such as the assignment to X in op. (In the instrumented pseudocode, we use angle brackets to denote that the assignment to X and the auxiliary increment of P are executed together atomically.)

    while (true) {
      r := P;
      t := X; ... ;
      if (X == t) { ⟨X := ... ; ++P⟩; break; }
      assert(P > r);
    }

For each loop, we check that every time the operation goes round the loop (and hence might not terminate), P was incremented (and hence some other operation made progress towards termination). The introduced code does not change the behaviour of the program inasmuch as the assertion checks succeed. The proof obligation that the assertion checks always succeed can be discharged by standard thread-modular reasoning techniques.

1.2 Contribution and Paper Outline

In the remainder of this paper, we formalise the construction just described and generalise it to handle operations with nested loops and more subtle local termination arguments.
The beauty of our technique is that by introducing auxiliary code, we reduce the global nature of lock-freedom proofs to checks that can be discharged by thread-modular techniques.

We have also automated our approach by building on an existing verification tool, CAVE [14], a sound but incomplete thread-modular verification tool suitable for verifying fine-grained concurrent linked-list algorithms. We have mildly adapted the frontend of CAVE to instrument the given concurrent library with the appropriate auxiliary variables and assertions needed to prove lock-freedom, before calling the backend to prove that these assertions are never violated, which in turn means that the original concurrent library is lock-free.

In more detail, in Section 2, we introduce a simple concurrent programming language and formally define lock-freedom. Then, in Section 3, we present the essence of our technique, formalise and prove its soundness, and discuss its implementation in CAVE. Next, in Section 4, we present a more refined version that is necessary for dealing with more advanced lock-freedom arguments. In Section 5, we present our experimental results; in Section 6, we discuss related work and conclude.

2. Background

2.1 Programming Language

For the purposes of this paper, we will consider a minimal imperative programming language with an unspecified set of basic commands, BC, sequential and parallel composition, non-deterministic choice, a looping construct, and a break statement that terminates the innermost loop.

    (σ, σ′) ∈ [[BC]]
    ⟨BC, σ⟩ → ⟨skip, σ′⟩

    ⟨C1, σ⟩ → ⟨C1′, σ′⟩
    ⟨C1; C2, σ⟩ → ⟨C1′; C2, σ′⟩

    ⟨skip; C2, σ⟩ → ⟨C2, σ⟩

    ⟨break; C2, σ⟩ → ⟨break, σ⟩

    ⟨C1, σ⟩ → ⟨C1′, σ′⟩
    ⟨C1 ∥ C2, σ⟩ → ⟨C1′ ∥ C2, σ′⟩

    ⟨C2, σ⟩ → ⟨C2′, σ′⟩
    ⟨C1 ∥ C2, σ⟩ → ⟨C1 ∥ C2′, σ′⟩

    ⟨skip ∥ skip, σ⟩ → ⟨skip, σ⟩

    ⟨C1 ⊕ C2, σ⟩ → ⟨C1, σ⟩

    ⟨C1 ⊕ C2, σ⟩ → ⟨C2, σ⟩

    ⟨L_n(break, C′), σ⟩ → ⟨skip, σ⟩

    ⟨L_n(skip, C′), σ⟩ → ⟨L_n(C′, C′), σ⟩

    ⟨C1, σ⟩ → ⟨C1′, σ′⟩
    ⟨L_n(C1, C2), σ⟩ → ⟨L_n(C1′, C2), σ′⟩

Figure 1. Operational semantics of the language.
Commands, C, in our language are given by the following grammar:

    C ::= skip | break | BC | C1; C2 | C1 ⊕ C2 | C1 ∥ C2 | L_n(C1, C2)

Here, BC ranges over basic commands, such as assignments and assume statements. The combination of assume statements and non-deterministic choice (⊕) allows us to encode conditionals in the standard way:

    if (B) C1 else C2  ≜  (assume(B); C1) ⊕ (assume(¬B); C2)

The final construct, L_n(C1, C2), is a somewhat non-standard looping construct that represents a partially executed loop, where C2 is the loop body and C1 is the remainder of the current loop iteration. (The n subscript will be used to record the value of a certain auxiliary program variable, but can be ignored for the time being.) The construct therefore first executes C1 and then proceeds to execute C2 repeatedly in a loop. The only way to exit a loop is by encountering a break statement. Normal while loops and standard non-deterministic loops, C*, can be encoded as follows:

    while (B) C  ≜  L_0(skip, if (B) C else break)
    C*  ≜  L_0(skip, C ⊕ break)

Operational Semantics. Figure 1 presents the small-step operational semantics for our language as a reduction relation between configurations. Configurations are pairs, ⟨C, σ⟩, consisting of a command, C ∈ Cmd, and a program state, σ ∈ State. The evaluation rules are standard. For basic commands, we assume their semantics is given by the function, [[−]] : BasicCmd → P(State × State), and we pick an arbitrary new state σ′ such that (σ, σ′) ∈ [[BC]]. This allows us to model both non-deterministic and blocking commands. The rules for sequential composition, parallel composition, and non-deterministic choice are straightforward; one point to note is that break commands propagate over sequential compositions. Finally, L_n(C1, C2) executes the current loop iteration, C1, for as long as possible. If C1 = break, the loop is exited, whereas if C1 = skip, it is restarted.
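To make the semantics concrete, here is a sketch interpreter for the sequential fragment of this language (no parallel composition, no progress counters), with commands encoded as tuples and basic commands as state-transforming functions. The left-biased treatment of ⊕, which falls back to the right branch when an assume blocks, is our simplification of the truly non-deterministic choice rule; all names below are our own:

```python
class Blocked(Exception):
    """Raised when an assume statement's condition does not hold."""

def assume(pred):
    # assume(B): block (raise) unless B holds in the current state.
    def bc(sigma):
        if not pred(sigma):
            raise Blocked()
        return sigma
    return bc

def step(C, sigma):
    """One small-step reduction <C, sigma> -> <C', sigma'> (Figure 1 rules,
    minus parallel composition)."""
    tag = C[0]
    if tag == 'bc':                       # <BC, s> -> <skip, s'>
        return ('skip',), C[1](sigma)
    if tag == 'seq':
        C1, C2 = C[1], C[2]
        if C1 == ('skip',):               # <skip; C2, s> -> <C2, s>
            return C2, sigma
        if C1 == ('break',):              # <break; C2, s> -> <break, s>
            return ('break',), sigma
        C1p, sp = step(C1, sigma)
        return ('seq', C1p, C2), sp
    if tag == 'choice':                   # left-biased resolution of (+)
        try:
            return step(C[1], sigma)
        except Blocked:
            return step(C[2], sigma)
    if tag == 'loop':                     # ('loop', C1, C2) ~ L(C1, C2)
        C1, C2 = C[1], C[2]
        if C1 == ('break',):              # exit the loop
            return ('skip',), sigma
        if C1 == ('skip',):               # restart: L(skip, C2) -> L(C2, C2)
            return ('loop', C2, C2), sigma
        C1p, sp = step(C1, sigma)
        return ('loop', C1p, C2), sp
    raise ValueError('terminal command: ' + tag)

def run(C, sigma):
    while C not in (('skip',), ('break',)):
        C, sigma = step(C, sigma)
    return sigma

# The encodings of conditionals and while loops from Section 2.1:
def If(b, C1, C2):
    return ('choice', ('seq', ('bc', assume(b)), C1),
                      ('seq', ('bc', assume(lambda s: not b(s))), C2))

def While(b, C):
    return ('loop', ('skip',), If(b, C, ('break',)))

inc = ('bc', lambda s: {**s, 'i': s['i'] + 1})
assert run(While(lambda s: s['i'] < 3, inc), {'i': 0})['i'] == 3
```

Note how While bottoms out in the loop construct exactly as in the paper's encoding: the loop restarts until the assume on ¬B selects the break branch.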
2.2 Specifying Lock-Freedom

A concurrent library consists of some initialisation code C_init and a collection of concurrent operations C_1, ..., C_n.

We define the most general client of the library to be a program that calls the library's initialisation code, and then creates k threads, each of which executes an unbounded number of the library's operations in any order inside a loop.

    MGC_k(C_init, C_1, ..., C_n)  ≜  C_init; ((C_1 ⊕ ... ⊕ C_n)* ∥ ... ∥ (C_1 ⊕ ... ⊕ C_n)*)    (k threads)

This client is most general in the sense that if we record the traces of calls to the library's operations from any concrete client of the library, the set of recorded traces would be a subset of those produced by the most general client.

Definition 1. We say that a library is lock-free if every non-terminating execution of its most general client (for any k) has an infinite number of completed calls to the library's operations.

Gotsman et al. [4] and later Hoffmann et al. [9] reduced proving lock-freedom to showing termination of a certain bounded most general client. Here, we use the theorem of Hoffmann et al. [9], as it is more general. (Gotsman et al.'s reduction is incorrect for implementations using thread identifiers or thread-local state.) The bounded most general client of a program is defined as follows:

    BMGC_{k,m}(C_init, C_1, ..., C_n)  ≜  C_init; ((C_1 ⊕ ... ⊕ C_n)^{≤m} ∥ ... ∥ (C_1 ⊕ ... ⊕ C_n)^{≤m})    (k threads)

    where C^{≤m} ≜ skip ⊕ C ⊕ (C; C) ⊕ ... ⊕ (C; ...; C)    (up to m copies of C)

The definition is very similar to that of the most general client, except that the number of calls to the library's operations per thread is bounded by m. Now we can state the reduction theorem as follows.

Theorem 1 (Hoffmann et al. [9]). A concurrent library, Lib, is lock-free iff its bounded most general client, BMGC_{k,m}(Lib), terminates for all k and m.

3. The Basic Scheme for Proving Lock-Freedom

Our proof technique for establishing termination and thereby lock-freedom consists of recording the auxiliary progress counters and proving that the program does not have any assertion violations. In this section, we first explain our technique by applying it to a more advanced example (§3.1).
We then formalise it using an instrumented operational semantics (§3.2) and prove that it suffices for verifying lock-freedom (§3.3). We finally discuss its implementation within the CAVE verifier (§3.4).

3.1 A Motivating Example: The Elimination Stack

We start with another motivating example, the elimination stack of Hendler et al. [5], which demonstrates the need for a slightly more advanced technique than what was needed for the simple read-compute-update (RCU) example of the introduction. Abstracting over all details that are irrelevant for lock-freedom, the code of the elimination stack is shown in Figure 2. Both stack operations, push and pop, have the same structure. Within a loop, they first perform the RCU pattern trying to update S, the shared pointer to the top of the stack. When the check that the shared pointer has not changed succeeds, the operation terminates. When, however, the check fails (because of contention), the algorithm does not immediately try the same pattern again, but rather takes part in the elimination scheme. This scheme consists of a (nested) RCU loop, where the operation tries to register itself at some random index i in the collision array, C.

    push/pop ≜ while (true) {
      t := S; ... ;
      if (S == t) { S := ... ; break; }
      i := ... ; ... ;
      while (true) {
        c := C[i]; ... ;
        if (C[i] == c) { C[i] := ... ; break; }
      }
      ... ;
    }

Figure 2. The essential part of the elimination stack.

    while (true) {
      r1 := P1;
      t := S; ++P2; ... ;
      if (S == t) { ⟨S := ... ; ++P1; ++P2⟩; break; }
      i := ... ; ++P2; ... ;
      while (true) {
        r2 := P2;
        c := C[i]; ... ;
        if (C[i] == c) { ⟨C[i] := ... ; ++P2⟩; break; }
        assert(P2 > r2);
      }
      ... ;
      assert(P1 > r1);
    }

Figure 3. Instrumentation of the elimination stack.

After it successfully does so, it waits a while and determines whether it collided with another operation of the opposite kind, in which case both operations terminate; otherwise, the operation goes round the outer loop and attempts to update S again.
To verify lock-freedom of this program, it is clear that we cannot simply use one auxiliary variable for counting the progress outside looping paths, because in this way we would not be able to show that the nested loop terminates. Instead, we need one auxiliary variable for each loop nesting depth. For the outer loop, we use P1, which we increment on every atomic statement that does not appear on a looping path, while for the inner loop, we use P2, which gets incremented on statements that do not appear on looping paths through nested loops. Figure 3 shows the instrumented code for the elimination stack example.

In general, our instrumentation requires one auxiliary variable per loop nesting depth. These variables get incremented on every atomic step at smaller loop nesting depths. So, for example, in Figure 3, the assignment to S increments both P1 and P2, because it is on an exit path of the outer loop, whereas the assignment to i increments only P2, because it is inside a looping path of depth one.

3.2 Instrumented Operational Semantics

We now make this auxiliary variable instrumentation formal. Given a function P : N → N representing the values of the auxiliary progress counters and a natural number d representing the current loop depth, we define Cinc(P, d) : N → N to increment all counters above d:

    Cinc(P, d)  ≜  λx. if x ≤ d then P(x) else P(x) + 1

In essence, Cinc(P, d) is the same as P except that all progress counters at depths greater than d have been incremented by one.
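As a sanity check, Cinc can be transcribed directly into code (a sketch; the functional representation of counters follows the paper's P : N → N):

```python
def cinc(P, d):
    """Cinc(P, d): leave counters at depths <= d unchanged and increment
    every counter at a depth strictly greater than d."""
    return lambda x: P(x) if x <= d else P(x) + 1

P = lambda x: 0           # all progress counters start at zero
Q = cinc(P, 1)            # an atomic step taken at loop depth 1
assert [Q(x) for x in range(4)] == [0, 0, 1, 1]
```

So a step on an exit path of the outer loop (depth 0 after the exit) bumps the counters of every strictly deeper loop, matching the ++P1/++P2 annotations of Figure 3.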

    (σ, σ′) ∈ [[BC]]
    d ⊢ ⟨BC, σ, P⟩ → ⟨skip, σ′, Cinc(P, d)⟩

    d ⊢ ⟨C1, σ, P⟩ → ⟨C1′, σ′, P′⟩
    d ⊢ ⟨C1; C2, σ, P⟩ → ⟨C1′; C2, σ′, P′⟩

    d ⊢ ⟨skip; C2, σ, P⟩ → ⟨C2, σ, P⟩

    d ⊢ ⟨break; C2, σ, P⟩ → ⟨break, σ, P⟩

    d ⊢ ⟨C1, σ, P⟩ → ⟨C1′, σ′, P′⟩
    d ⊢ ⟨C1 ∥ C2, σ, P⟩ → ⟨C1′ ∥ C2, σ′, P′⟩

    d ⊢ ⟨C2, σ, P⟩ → ⟨C2′, σ′, P′⟩
    d ⊢ ⟨C1 ∥ C2, σ, P⟩ → ⟨C1 ∥ C2′, σ′, P′⟩

    d ⊢ ⟨skip ∥ skip, σ, P⟩ → ⟨skip, σ, P⟩

    d ⊢ ⟨C1 ⊕ C2, σ, P⟩ → ⟨C1, σ, P⟩

    d ⊢ ⟨C1 ⊕ C2, σ, P⟩ → ⟨C2, σ, P⟩

    d ⊢ ⟨L_n(break, C′), σ, P⟩ → ⟨skip, σ, P⟩

    d ⊢ ⟨L_n(skip, C′), σ, P⟩ → ⟨L_{P(d)}(C′, C′), σ, P⟩

    d + ldinc(C1) ⊢ ⟨C1, σ, P⟩ → ⟨C1′, σ′, P′⟩
    d ⊢ ⟨L_n(C1, C2), σ, P⟩ → ⟨L_n(C1′, C2), σ′, P′⟩

    d ⊢ ⟨C1, σ, P⟩ → abort
    d ⊢ ⟨C1; C2, σ, P⟩ → abort

    d ⊢ ⟨C1, σ, P⟩ → abort
    d ⊢ ⟨C1 ∥ C2, σ, P⟩ → abort

    d ⊢ ⟨C2, σ, P⟩ → abort
    d ⊢ ⟨C1 ∥ C2, σ, P⟩ → abort

    d + ldinc(C1) ⊢ ⟨C1, σ, P⟩ → abort
    d ⊢ ⟨L_n(C1, C2), σ, P⟩ → abort

    P(d) ≤ n
    d ⊢ ⟨L_n(skip, C′), σ, P⟩ → abort

Figure 4. Instrumented operational semantics of the language.

We say that a command, C, is loop-free if it does not syntactically contain any loop. Formally, this is defined by structural recursion as follows:

    lpfree(L_n(C1, C2))  ≜  false
    lpfree(C1; C2)  ≜  lpfree(C1) ∧ lpfree(C2)
    lpfree(C1 ⊕ C2)  ≜  lpfree(C1) ∧ lpfree(C2)
    lpfree(C1 ∥ C2)  ≜  lpfree(C1) ∧ lpfree(C2)
    lpfree(C_other)  ≜  true

We say that a command, C, is loop exiting whenever it can be syntactically determined that all executions of C will terminate and end with break (and not skip). This is formally defined by structural recursion as follows:

    lpexit(break)  ≜  true
    lpexit(C1; C2)  ≜  lpexit(C1) ∨ (lpfree(C1) ∧ lpexit(C2))
    lpexit(C1 ⊕ C2)  ≜  lpexit(C1) ∧ lpexit(C2)
    lpexit(C_other)  ≜  false

Finally, we define ldinc(C) to return the loop depth increment needed when considering a loop whose current iteration is C. The function returns 0 if C is loop exiting and 1 otherwise.

    ldinc(C)  ≜  0 if lpexit(C), and 1 if ¬lpexit(C)

Figure 4 presents the small-step operational semantics for our language as a reduction relation between extended configurations, indexed by the current loop depth, d.
Extended configurations are triples, ⟨C, σ, P⟩, consisting of a command, C ∈ Cmd, a program state, σ ∈ State, and a function, P : N → N, representing the auxiliary progress counters. (Recall that there is one progress counter per loop nesting depth.)

The rules in Figure 4 follow those of Figure 1 and, except as noted below, simply pass the P and d components around. The exceptions are the following. The rule for basic commands calls Cinc(P, d) to increment all the progress counters at depths greater than the current one. When going round the loop, the value of the current progress counter, P(d), is recorded in the annotation of the loop command. The rule for evaluating the body of the loop does so at depth d + ldinc(C1); that is, at the same depth for loop-exiting C1 and at depth d + 1 otherwise.

In addition to normal configurations, we also have a special abort configuration for detecting assertion violations. The only assertion violation of interest here is when a loop fails to make progress. Progress is checked in the last rule of Figure 4, which fails in case the current progress counter, P(d), is not greater than the recorded one, n. The other aborting rules ensure that violations are propagated to the top level.

3.3 Soundness of the Instrumentation

We now turn to proving soundness of the instrumented semantics: that programs with no assertion violations always terminate. First, we relate the instrumented and the normal semantics. We define the erasure, ⌊C⌋, of a command C to replace the n annotations in all L_n(−, −) subterms of C with 0.

Definition 2 (Erasure). Given a command, C, we define its erasure, ⌊C⌋, by structural recursion on the command as follows:

    ⌊L_n(C1, C2)⌋  ≜  L_0(⌊C1⌋, ⌊C2⌋)
    ⌊C1; C2⌋  ≜  ⌊C1⌋; ⌊C2⌋
    ⌊C1 ⊕ C2⌋  ≜  ⌊C1⌋ ⊕ ⌊C2⌋
    ⌊C1 ∥ C2⌋  ≜  ⌊C1⌋ ∥ ⌊C2⌋
    ⌊C_other⌋  ≜  C_other

We can prove the following proposition:

Proposition 2 (Instrumentation Erasure). If d ⊢ ⟨C, σ, P⟩ → ⟨C′, σ′, P′⟩, then ⟨⌊C⌋, σ⟩ → ⟨⌊C′⌋, σ′⟩. Conversely, if ⟨⌊C⌋, σ⟩ → ⟨C″, σ′⟩, then for all d and P, there exist C′ and P′ such that d ⊢ ⟨C, σ, P⟩ → ⟨C′, σ′, P′⟩ and ⌊C′⌋ = C″.
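The three syntactic checks lpfree, lpexit, and ldinc used by the instrumented semantics can be transcribed over a tuple representation of commands (a sketch; the AST encoding is ours):

```python
def lpfree(C):
    """lpfree(C): C syntactically contains no loop."""
    tag = C[0]
    if tag == 'loop':
        return False
    if tag in ('seq', 'choice', 'par'):
        return lpfree(C[1]) and lpfree(C[2])
    return True                       # skip, break, basic commands

def lpexit(C):
    """lpexit(C): every execution of C terminates and ends with break."""
    tag = C[0]
    if tag == 'break':
        return True
    if tag == 'seq':
        return lpexit(C[1]) or (lpfree(C[1]) and lpexit(C[2]))
    if tag == 'choice':
        return lpexit(C[1]) and lpexit(C[2])
    return False                      # skip, BC, par, loop

def ldinc(C):
    """Loop-depth increment for a loop whose current iteration is C."""
    return 0 if lpexit(C) else 1

# A basic command followed by break is loop exiting, so its loop
# iteration is evaluated at the same depth (ldinc = 0):
assert lpexit(('seq', ('bc', None), ('break',)))
assert ldinc(('break',)) == 0 and ldinc(('skip',)) == 1
```

Note the disjunction in the seq case: either C1 itself always breaks, or C1 is loop-free (so it cannot swallow the break of C2) and C2 always breaks.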
We continue with some auxiliary definitions. We define the loop depth of a command to be its maximum loop nesting depth.

    C ⊑ C

    C2′ ⊑ C2    ¬lpexit(C1)
    C2′ ⊑ C1; C2

    skip ⊑ BC

    break ⊑ C1
    break ⊑ C1; C2

    C1′ ⊑ C1
    C1′; C2 ⊑ C1; C2

    C1′ ⊑ C1    C2′ ⊑ C2
    C1′ ∥ C2′ ⊑ C1 ∥ C2

    skip ⊑ C1 ∥ C2

    C ⊑ C1
    C ⊑ C1 ⊕ C2

    C ⊑ C2
    C ⊑ C1 ⊕ C2

    break ⊑ C2
    skip ⊑ L_n(C1, C2)

    C1′ ⊑ C2
    L_{n′}(C1′, C2) ⊑ L_n(C1, C2)

Figure 5. Auxiliary reduction relation, C′ ⊑ C.

Definition 3 (Loop Depth). Given a command, C, we define its loop depth, |C|_depth, by structural recursion as follows:

    |L_n(C1, C2)|_depth  ≜  1 + max(|C1|_depth, |C2|_depth)
    |C1; C2|_depth  ≜  max(|C1|_depth, |C2|_depth)
    |C1 ⊕ C2|_depth  ≜  max(|C1|_depth, |C2|_depth)
    |C1 ∥ C2|_depth  ≜  max(|C1|_depth, |C2|_depth)
    |C_other|_depth  ≜  0

Next, in Figure 5, we define when a command, C′, is a reduction of another command, C, which we denote as C′ ⊑ C. The relation is reflexive and includes a case C′ ⊑ C whenever ⟨C, σ⟩ → ⟨C′, σ′⟩. A few rules are noteworthy. The first interesting case is the rule deducing C2′ ⊑ C1; C2. This checks that C1 is not loop exiting, because otherwise C2 is never executed. The next interesting case is the second-to-last rule. According to this rule, skip ⊑ L_n(C1, C2) holds only if the loop may be exited, i.e., if break is a reduction of the entire loop body, C2. The final rule deduces L_{n′}(C1′, C2) ⊑ L_n(C1, C2) by ignoring the current loop iteration C1 and just checking that C1′ ⊑ C2. This is because the current iteration might terminate and another iteration may start from C2. For the same reason, the recorded progress counter may differ.

We show that ⊑ is transitive, relates commands as reduced by the operational semantics, and does not increase loop depths.

Proposition 3 (Properties of ⊑). If C1 ⊑ C2 and C2 ⊑ C3, then C1 ⊑ C3. If ⟨C, σ⟩ → ⟨C′, σ′⟩, then C′ ⊑ C. If C1 ⊑ C2, then |C1|_depth ≤ |C2|_depth.

We move on to some well-formedness properties of commands. We define user commands to have no partially executed loops, and well-formed commands to only have partially executed loops that could arise from a full loop.

Definition 4 (User Command).
A command, C, is a user command, denoted U(C), iff for every loop L_n(C1, C2) it contains, C1 = skip and n = 0. Formally:

    U(L_0(skip, C2))  ≜  U(C2)
    U(L_p(C1, C2))  ≜  false    (otherwise)
    U(C1; C2)  ≜  U(C1) ∧ U(C2)
    U(C1 ⊕ C2)  ≜  U(C1) ∧ U(C2)
    U(C1 ∥ C2)  ≜  U(C1) ∧ U(C2)
    U(C_other)  ≜  true

Definition 5 (Well-Formed Command). A command, C, is well-formed iff it contains partially executed loops only at evaluated positions and, for every such loop L_p(C1, C2), we have C1 ⊑ C2 or C1 = skip. Formally:

    WF(L_p(C1, C2))  ≜  WF(C1) ∧ U(C2) ∧ (C1 ⊑ C2 ∨ C1 = skip)
    WF(C1; C2)  ≜  WF(C1) ∧ WF(C2)
    WF(C1 ⊕ C2)  ≜  WF(C1) ∧ WF(C2)
    WF(C1 ∥ C2)  ≜  WF(C1) ∧ WF(C2)
    WF(C_other)  ≜  true

We say that a configuration is safe at a given loop nesting depth if it contains a well-formed command and the configuration can never reach an aborting state.

Definition 6 (Safety). A configuration, ⟨C, σ, P⟩, is safe at depth d iff C is well-formed and ¬(d ⊢ ⟨C, σ, P⟩ →* abort).

From Propositions 2 and 3, we deduce that safety is preserved by steps of the operational semantics.

Lemma 4 (Preservation). If ⟨C, σ, P⟩ is safe at loop depth d and d ⊢ ⟨C, σ, P⟩ → ⟨C′, σ′, P′⟩, then ⟨C′, σ′, P′⟩ is also safe at d.

We move on to the metric we use to prove termination. We take the domain of our termination metric to be infinite sequences over natural numbers, which we represent as functions N → N.

Definition 7 (Lexicographic Ordering). Given two sequences P, P′ : N → N, we define ≺_k to be the strict lexicographic order on the first k elements of P and P′:

    P′ ≺_k P  ≜  ∃i ≤ k. P′(i) < P(i) ∧ ∀j < i. P′(j) ≤ P(j)

It is easy to show (by induction on k and well-foundedness of < on N) that our lexicographic order, ≺_k, is well-founded. Given a command, C, counters, P : N → N, and the current depth d ∈ N, we define ‖C, P‖_d : N → N to be the size of a configuration ⟨C, σ, P⟩ at loop depth d. Our goal is to show that the size of safe configurations decreases with each execution step according to the lexicographic order we have just defined. Formally, we want to establish the following.

Lemma 5.
If d ⊢ ⟨C, σ, P⟩ → ⟨C′, σ′, P′⟩ and ⟨C, σ, P⟩ is safe at depth d, then ‖C′, P′‖_d ≺_{d+|C|_depth} ‖C, P‖_d.

We now proceed to the definition of ‖C, P‖_d. Intuitively, the size of a command records the number of nodes in the command's abstract syntax tree at each loop nesting level. The formal definition, found in Figure 6, is quite subtle in order to deal with partially executed loops and break statements, and to ensure that Lemma 5 holds. In the sequential composition case, ‖C1; C2, P‖_d, if C1 is a loop-exiting command, then C2 is dead code and hence we do not count its size. In the case for loops, ‖L_n(C1, C2), P‖_d(m), there are four subcases. If the current iteration, C1, is bound to exit, then we return its size at the current depth d. Otherwise, we return some

    ‖skip, P‖_d(m)  ≜  0
    ‖break, P‖_d(m)  ≜  1
    ‖BC, P‖_d(m)  ≜  1
    ‖C1; C2, P‖_d(m)  ≜  ‖C1, P‖_d(m) + 1,  if lpexit(C1)
                          ‖C1, P‖_d(m) + ‖C2, P‖_d(m) + 1,  if ¬lpexit(C1)
    ‖L_n(C1, C2), P‖_d(m)  ≜  ‖C1, P‖_d(m),  if lpexit(C1)
                               ‖C2, P‖_{d+1}(m),  else if m < d
                               ‖C1, P‖_{d+1}(m) + 2·‖C2, P‖_{d+1}(m) + 1,  else if n < P(d)
                               ‖C1, P‖_{d+1}(m) + ‖C2, P‖_{d+1}(m),  otherwise
    ‖C1 ⊕ C2, P‖_d(m)  ≜  ‖C1, P‖_d(m) + ‖C2, P‖_d(m) + 1
    ‖C1 ∥ C2, P‖_d(m)  ≜  ‖C1, P‖_d(m) + ‖C2, P‖_d(m) + 1

Figure 6. Definition of ‖C, P‖_d : N → N, representing the size of the configuration ⟨C, σ, P⟩ at loop depth d.

combination of the sizes of C1 and C2 at depth d + 1. More specifically, if m < d (i.e., we are calculating the size of the loop at a depth smaller than the current depth), then we return a constant response, the size of C2. (We chose the size of C2 and not some other constant because the size must decrease whenever a transition makes C1 loop exiting. So we chose C2, whose size is greater than or equal to that of C1.) Now, if m ≥ d, then we have to take the current loop iteration into account. To do so, we look at the current progress counter, P(d), to determine whether the current loop iteration is the last one or not. If P(d) is larger than n (the recorded value of the counter when the iteration started), the loop is allowed to go round once again; otherwise, the current iteration is the last one. In the latter case (i.e., P(d) ≤ n), we just take the total size of C1 and C2. We take into account the size of C2 because we also want the size of a configuration, ‖C, P‖_d, to be monotonic (with respect to m), assuming that P is also monotonic. In the former case (i.e., if n < P(d)), we must define the size to be even bigger, so that the size decreases when going round the loop (i.e., on the step L_n(skip, C) → L_{P(d)}(C, C)). That is how we end up with the size of C1 plus twice the size of C2 plus 1.

To prove Lemma 5, we perform an induction over the reduction relation of the operational semantics.
However, for the induction to work out, we first prove a stronger version of the lemma with the same premises and a stronger conclusion. The conclusion of the stronger version is that there exists i ≤ d + |C|_depth such that: for all j < i, P′(j) ≥ P(j); for all j < i, ‖C′, P′‖_d(j) ≤ ‖C, P‖_d(j); and for all k ≥ i, if P′(j) = P(j) holds for every j ≤ k, then ‖C′, P′‖_d(k) < ‖C, P‖_d(k). Lemma 5 then follows as a corollary of this stronger version.

As a consequence of Lemma 5 and the well-foundedness of ≺_k, we can prove that any program without assertion violations always terminates.

Theorem 6 (Termination). If ⟨C, σ, (λx. 1)⟩ is safe at depth 0, then ⟨C, σ⟩ always terminates.

As a corollary of this theorem and Theorem 1, we conclude that a library is lock-free if its bounded most general client has no assertion violations.

Corollary 7 (Lock-freedom). Given a concurrent library, Lib, if ⟨BMGC_{k,m}(Lib), σ_0, (λx. 1)⟩ is safe for all k and m and all valid initial states σ_0, then the library is lock-free.

3.4 Implementation within CAVE

To automate proving lock-freedom, we have mildly adapted the implementation of CAVE [14]. CAVE takes as its input a program consisting of some initialisation code and a number of concurrent methods, which are all executed in parallel an unbounded number of times each. When successful, it produces a proof in RGSep [16] that the program has no memory errors and that none of its assertions are violated at run time. Internally, it performs a thread-modular program analysis, RGSep action inference [15], that figures out a set of actions abstracting the updates to shared memory cells performed by the various operations.

We first run CAVE's action inference algorithm, which gives us a set of actions abstracting over the shared-state changes performed by the program threads. CAVE also returns a mapping from the individual atomic program commands to the actions that they are responsible for.
We then calculate the maximum loop nesting depth of the program, M, and create a fresh auxiliary variable level_{t,d} for each thread t and each loop nesting level 1 ≤ d ≤ M. We use this variable to record whether the progress variable P(d) was incremented since the start of the loop at depth d by thread t. We then perform the following instrumentation.

At each loop header, we add the auxiliary assignment level_{t,k} := false, where t is the identifier of the current thread and k is the loop's nesting depth.

At each loop back edge, we add the assertion check assert(level_{t,k}), where t and k are as before.

For each action inferred by CAVE, we calculate the maximum loop nesting depth under which it is performed. Let that level be k. Then, we modify the action to set level_{t,i} := true for every thread t and for every i such that k < i ≤ M. For example, an action performed only outside loops has k = 0, and can therefore set all the level variables to true. On the other hand, an action that may be performed inside a single loop has k = 1, and therefore cannot set level_{t,1} to true, because that would enable a loop at the same level to avoid terminating.

    typedef struct Node_s { int val; struct Node_s *tl; } *Node;
    typedef struct Queue_s { Node head, tail; } *Queue;

    Queue new_queue(void) {
      Queue Q = malloc(...);
      Node nd = malloc(...);
      nd->tl = NULL;
      Q->head = nd;
      Q->tail = nd;
      return Q;
    }

    void enqueue(Queue Q, int value) {
      Node node, next, tail;
      node = new_node();
      node->val = value;
      node->tl = NULL;
      while (true) {
        tail = Q->tail;
        next = tail->tl;
        if (Q->tail == tail) {
          if (next == NULL) {
            if (CAS(&tail->tl, next, node)) break;
          } else {
            CAS(&Q->tail, tail, next);    /* advance the lagging tail */
          }
        }
      }
      CAS(&Q->tail, tail, node);
    }

    int trydequeue(Queue Q) {
      Node next, head, tail;
      int pval;
      while (true) {
        head = Q->head;
        tail = Q->tail;
        next = head->tl;
        if (Q->head == head) {
          if (head == tail) {
            if (next == NULL) return EMPTY;
            CAS(&Q->tail, tail, next);    /* advance the lagging tail */
          } else {
            pval = next->val;
            if (CAS(&Q->head, head, next)) return pval;
          }
        }
      }
    }

Figure 7. The Michael and Scott non-blocking queue implementation.

Finally, we rerun CAVE to verify the modified program thread-modularly under the updated set of actions. This, in effect, means that CAVE will check that the appropriate level_{t,k} variable was set in every loop iteration, meaning that there was interference by a statement from a concurrent thread that was at a lower loop nesting level, which, as we have shown, suffices to prove lock-freedom.

We note that since CAVE is both thread- and procedure-modular, it suffices to check each operation on its own, under the assumption that there could be arbitrarily many other concurrent operations. We never need to explicitly quantify over the parameters k and m of the bounded most general client.

4. A Refined Scheme for Proving Lock-Freedom

The basic proof technique presented in the previous section works well for programs following the RCU pattern, such as the examples from Sections 1.1 and 3.1, but has a clear limitation that prevents its applicability to more complex examples. The main problem is that it requires loops to terminate in a single iteration when there is no contention. While this assumption holds for the RCU patterns, it is clearly not true in general.
Consider, for instance, the program:

    i := 0; while (i < 2) { i++; }

While this program clearly always terminates, instrumenting it as discussed in Section 3 leads to the following program:

    i := 0; ++P(1);
    L_0( skip,
         p := P(1);
         if (i < 2) i++; else break;
         assert(P(1) > p) )

which has an obvious assertion violation. In the remainder of this section, we discuss a refined proof technique that overcomes this evident limitation of the basic scheme.

4.1 Another Example: The Michael and Scott Queue

Before delving into our refined proof technique, let us first consider another motivating example. Figure 7 presents a concurrent non-blocking queue implementation due to Michael and Scott [12]. The queue is represented by two pointers into a null-terminated singly-linked list. The first pointer (Q->head) points to the beginning of the list and is updated by dequeue operations. The second pointer (Q->tail) is used by enqueue to locate the last node of the list. It does not necessarily point to the last node of the list, but may lag behind it. This is because enqueue first appends a node to the list with its first CAS instruction, and only then updates Q->tail with its final CAS instruction. Whenever an enqueue or a dequeue operation notices that the tail pointer lags behind, it first performs a CAS to advance the tail pointer (cf. the tail-advancing CAS instructions in Figure 7) before proceeding with its own work. What complicates the lock-freedom proof is that this advancement of the tail pointer happens inside a loop and can prevent a concurrent dequeue from terminating. However, one can easily argue that if no further elements are added to the queue, the tail pointer cannot lag behind by more than the number of elements that have ever been added to the queue, which at any given point in time is some finite number. The tail pointer can therefore be advanced at most a finite number of times.
(With a more careful argument, one can show that the tail pointer can be at most one node behind the real tail of the list, but we do not need such a precise bound for proving lock-freedom.)

4.2 The Refined Proof Method

To reason about the aforementioned examples, we therefore also need to take into account loops that do not terminate immediately in the absence of contention, but for which we can still argue that they will eventually terminate. To do so, we adopt one of the standard techniques for proving termination of sequential programs, namely the use of ranking functions. A ranking function, f, is a function from program states to a well-founded domain (typically, N with <) such that whenever executing the body of a loop changes the state from σ to σ′, we have f(σ′) < f(σ). We assume that a ranking function is attached to each loop. When executing a loop (i.e., in the L(C1, C2) construct), we now record not only the value of the progress counter at the beginning of the loop iteration, but also the state at that time, as well as the ranking function. When checking whether the loop may be restarted, we relax the check to allow re-entering the loop if the ranking of the state has decreased. Formally, we replace the last

rule of Figure 4 with the following:

    P(d) = n        f(σ′) ≥ f(σ)
    ─────────────────────────────────────
    ⟨L^d_{f,n,σ}(skip, C), σ′, P⟩ → abort

We have added another premise to the rule, thereby making programs abort less frequently. One can show that even with this stronger abort rule, if a certain program never aborts, then it always terminates. To prove this statement, however, we need a more complex termination metric: a lexicographic product that combines the metric on the progress variables with one on the recorded states.

Returning to the Michael and Scott queue, which function should we choose as a ranking function? Naturally, we could take the length of the path through the heap following the tl fields from Q->tail to NULL. The length of this path is decremented whenever the tail pointer is advanced, and is unaffected by local accesses; the only access that may increase this path length is enqueueing a new node, but this happens only on a loop-free code path. More generically, we can take as a ranking function the sum of the lengths of all distinct acyclic paths between any pairs of nodes in the heap. We can further instrument the program to track how that sum is affected by individual heap updates. Specifically, whenever we have the field assignment x.f := y, and we know (1) that x is not part of a cyclic data structure and (2) that there was a path from x to y of length at least two, then we know that the field assignment decreases the total length of heap paths. Otherwise, we conservatively say that the field assignment might increase the length of paths in the heap.

4.3 Implementation within CAVE

For each action that CAVE generates, we calculate whether the action decreases the total length of the heap paths. If so, we call the action decreasing; otherwise, we call it possibly increasing. We introduce two additional auxiliary variables, inc_{t,i} and dec_{t,i}, for each thread t and each loop nesting depth of the program, i.e. 1 ≤ i ≤ M.
We augment every decreasing action to set dec_{t,k} := true for every thread t and every 1 ≤ k ≤ M, and every possibly increasing one to set inc_{t,k} := true for every thread t and every 1 ≤ k ≤ M. At the loop header, we initialise the increment and decrement variables of the current thread's current loop nesting depth to false. At the loop back edges, we check that either the progress flag appropriate to the loop level has been set, or the heap paths have been decremented and not possibly incremented. Formally, we add the check:

    assert(level_{t,k} ∨ (dec_{t,k} ∧ ¬inc_{t,k}))

where t and k are the current thread identifier and loop nesting depth respectively.

5. Evaluation

As discussed in Sections 3.4 and 4.3, we have implemented a tool for proving lock-freedom as an extension of CAVE. We have applied our tool to verify several concurrent algorithms from the literature. These include two integer counters: a simple CAS-based one and one with a secondary counter as a back-off scheme for high-contention cases; three non-blocking stack algorithms: the Treiber stack [13], a variant of it using a DCAS operation, and the Hendler et al. [5] elimination stack; and three concurrent queues: the Michael and Scott queue [12] from Section 4.1, the DGLM queue [3], and a variant of the Michael and Scott queue with a simpler trydequeue implementation.

    Algorithm           LOC   Techniques      Time
    CAS counter          41   Level           0.02s
    Double counter       64   Level           0.11s
    Treiber stack        52   Level           0.11s
    DCAS stack           52   Level           0.11s
    Elimination stack    76   Level           1.22s
    MS queue             83   Level+HeapDec   3.01s
    DGLM queue           83   Level+HeapDec   3.92s
    MSV queue            74   Level+HeapDec   2.85s

Table 1. Experimental results for the automated lock-freedom prover. For each example, we report the number of lines of code excluding blank lines and comments, the techniques necessary for the verification, and the total verification time.

Table 1 reports on our experimental results. The tool has successfully proved lock-freedom of all of these algorithms.
The basic loop-level technique described in Section 3 was sufficient for verifying the counter and stack implementations, but in order to verify the concurrent queue algorithms, we needed the refined proof technique introduced in Section 4. In each of those cases, taking the lengths of heap paths as a termination metric was sufficient. We have also tested our tool on variants of these algorithms that include blocking methods, such as a dequeue operation that busily waits whenever the queue is empty. As expected, in all these cases, the tool failed to prove lock-freedom. We would also like to stress that our automated technique for proving lock-freedom is heuristic, and while it may work very well for standard examples, it is easy to defeat. Specifically, we calculate loop nesting depths as a heuristic way of detecting when interference by another thread makes enough progress to justify another iteration of a certain loop. This heuristic can easily be confused by taking a clone of one of the looping methods of a concurrent library and wrapping its code inside a terminating loop. Our heuristic will fail to detect this, and will not be able to prove the modified library lock-free. Needless to say, however, we have not encountered such cases in practice.

6. Related Work

The literature contains a few approaches to proving lock-freedom. The approach most closely related to ours is by Colvin and Dongol [2]. They verified the MS queue algorithm shown in Figure 7 by determining which program statements make progress towards termination of an operation and constructing a global well-founded order that is decremented whenever the program makes a step other than those statements. This approach is very nice and corresponds closely to ours in the case where the operations do not have nested loops. For programs with nested loops, such as the HSY stack, however, coming up with a global well-founded order that ensures progress can be rather difficult.
In such cases, our technique of having a counter per loop nesting level can substantially simplify the proof. Further, even for algorithms without nested loops, such as the MS queue, we do not really need to come up with a custom global well-founded order, because the default heap-path decrementing heuristic explained in Section 4.2 suffices. In an earlier paper [1], the same authors verified the Treiber stack using just a global well-founded order, but as the authors themselves admit in their subsequent paper [2], the global approach is too difficult to apply to more advanced algorithms. In a very different style, Gotsman et al. [4] presented an automatic approach for verifying lock-freedom by reducing the lock-freedom problem to termination. They then presented an advanced combination of separation logic, rely-guarantee, and linear temporal logic, and used it to prove lock-freedom of the Treiber stack,

the HSY stack, and a few more algorithms. They also reported on a prototype implementation, which is sadly not available. An instructive example for comparing their approach with ours is the elimination stack [5]. Gotsman et al. [4] prove its lock-freedom in an iterated fashion. First, they show that the bounded most general client can perform only a finite number of changes to the central stack object. Next, they show that, under the assumption that the threads eventually stop updating the central stack object, the number of updates to the collision array is also finite. Finally, assuming that updates to both the central stack and the collision array eventually end, they show that the bounded most general client program terminates. In our method, this iterative argument is unnecessary, as it is implicitly captured by the different auxiliary progress variables we introduce per loop nesting depth.

Later, Hoffmann et al. [9] showed that Gotsman et al.'s reduction is incorrect for implementations using thread identifiers or thread-local state, and proposed an alternative reduction to termination that is correct for such implementations. They then proposed an extension of concurrent separation logic with permission tokens for reasoning about termination. These tokens act like reservations from a shared pool of fuel. By construction, each loop iteration consumes one unit of fuel, and so each thread has to have enough tokens (units of fuel) in its precondition to guarantee that it will terminate successfully. Threads are allowed to transfer any unused units of fuel to another thread, thereby allowing that second thread to run for longer. While the underlying idea of this logic is nice and clear, proofs in this logic are unfortunately not entirely straightforward and currently only manual. The main difficulty is coming up with the right transfer of fuel reservations between threads and encoding it using CSL's resource invariants. More recently, Liang et al.
[10] have developed a program logic for proving termination-preserving refinement of concurrent programs, which can be used to prove lock-freedom of concurrent libraries. As far as proving termination and lock-freedom is concerned, they use a progress token assertion, wf(n), in a way similar to Hoffmann et al. [9]. While one can in principle prove lock-freedom for all our examples using any of the aforementioned approaches, the proofs one gets are more complicated than necessary and are consequently hard to derive automatically. In contrast, our approach does not aim at generality, but is geared towards automation. After the instrumentation with auxiliary variables, any existing concurrent program verifier can be used to check the absence of assertion violations, thereby establishing lock-freedom.

7. Conclusion

In this paper, we have discussed how to verify lock-freedom by instrumenting the program with suitable auxiliary variables. Using our approach, we have successfully verified lock-freedom of a number of stack and queue algorithms. In principle, one can also apply our technique to other lock-free algorithms, such as list- or tree-based implementations of sets. Our tool, however, cannot yet verify such implementations because it does not have any support for inferring suitable ranking functions. In the future, it would also be useful to see whether introducing similar auxiliary variables could assist in proving liveness properties of lock-based concurrent libraries, such as deadlock-freedom and starvation-freedom under some fairness assumptions about the scheduler.

References

[1] Robert Colvin and Brijesh Dongol. Verifying lock-freedom using well-founded orders. In ICTAC 2007: 4th International Colloquium on Theoretical Aspects of Computing, 2007.
[2] Robert Colvin and Brijesh Dongol. A general technique for proving lock-freedom. Sci. Comput. Program., 74(3).
[3] Simon Doherty, Lindsay Groves, Victor Luchangco, and Mark Moir.
Formal verification of a practical lock-free queue algorithm. In FORTE 2004: Formal Techniques for Networked and Distributed Systems, volume 3235 of Lecture Notes in Computer Science. Springer, 2004.
[4] Alexey Gotsman, Byron Cook, Matthew J. Parkinson, and Viktor Vafeiadis. Proving that non-blocking algorithms don't block. In Proceedings of the 36th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2009). ACM, 2009.
[5] Danny Hendler, Nir Shavit, and Lena Yerushalmi. A scalable lock-free stack algorithm. J. Parallel Distrib. Comput., 70(1):1–12.
[6] Maurice Herlihy. Wait-free synchronization. ACM Transactions on Programming Languages and Systems (TOPLAS), 13(1).
[7] Maurice Herlihy and Nir Shavit. The Art of Multiprocessor Programming. Morgan Kaufmann.
[8] Maurice P. Herlihy, Victor Luchangco, and Mark Moir. Obstruction-free synchronization: Double-ended queues as an example. In Proceedings of the 23rd International Conference on Distributed Computing Systems (ICDCS 2003). IEEE, 2003.
[9] Jan Hoffmann, Michael Marmar, and Zhong Shao. Quantitative reasoning for proving lock-freedom. In Proceedings of the 28th Annual IEEE/ACM Symposium on Logic in Computer Science (LICS). IEEE.
[10] Hongjin Liang, Xinyu Feng, and Zhong Shao. Compositional verification of termination-preserving refinement of concurrent programs. In Joint Meeting of the 23rd EACSL Annual Conference on Computer Science Logic and the 29th Annual ACM/IEEE Symposium on Logic in Computer Science (CSL-LICS 2014). ACM, 2014.
[11] Henry Massalin and Calton Pu. A lock-free multiprocessor OS kernel. ACM SIGOPS Operating Systems Review, 26(2):108.
[12] Maged M. Michael and Michael L. Scott. Simple, fast, and practical non-blocking and blocking concurrent queue algorithms. In PODC 1996: 15th Annual ACM Symposium on Principles of Distributed Computing. ACM, 1996.
[13] R. K. Treiber. Systems programming: Coping with parallelism.
Technical Report RJ 5118, IBM Almaden Research Center.
[14] Viktor Vafeiadis. Automatically proving linearizability. In CAV 2010: 22nd Int. Conference on Computer Aided Verification. Springer, 2010.
[15] Viktor Vafeiadis. RGSep action inference. In VMCAI 2010: 11th Int. Conference on Verification, Model Checking, and Abstract Interpretation. Springer, 2010.
[16] Viktor Vafeiadis and Matthew J. Parkinson. A marriage of rely/guarantee and separation logic. In CONCUR 2007: 18th International Conference on Concurrency Theory, volume 4703 of Lecture Notes in Computer Science. Springer, 2007.

Acknowledgements

We would like to thank Marko Doko and the anonymous CPP'15 reviewers for their helpful feedback. We acknowledge support from the EC FET project ADVENT.


More information

NON-BLOCKING DATA STRUCTURES AND TRANSACTIONAL MEMORY. Tim Harris, 31 October 2012

NON-BLOCKING DATA STRUCTURES AND TRANSACTIONAL MEMORY. Tim Harris, 31 October 2012 NON-BLOCKING DATA STRUCTURES AND TRANSACTIONAL MEMORY Tim Harris, 31 October 2012 Lecture 6 Linearizability Lock-free progress properties Queues Reducing contention Explicit memory management Linearizability

More information

Formal Specification and Verification

Formal Specification and Verification Formal Specification and Verification Introduction to Promela Bernhard Beckert Based on a lecture by Wolfgang Ahrendt and Reiner Hähnle at Chalmers University, Göteborg Formal Specification and Verification:

More information

COS 320. Compiling Techniques

COS 320. Compiling Techniques Topic 5: Types COS 320 Compiling Techniques Princeton University Spring 2016 Lennart Beringer 1 Types: potential benefits (I) 2 For programmers: help to eliminate common programming mistakes, particularly

More information

7. Introduction to Denotational Semantics. Oscar Nierstrasz

7. Introduction to Denotational Semantics. Oscar Nierstrasz 7. Introduction to Denotational Semantics Oscar Nierstrasz Roadmap > Syntax and Semantics > Semantics of Expressions > Semantics of Assignment > Other Issues References > D. A. Schmidt, Denotational Semantics,

More information

Models of concurrency & synchronization algorithms

Models of concurrency & synchronization algorithms Models of concurrency & synchronization algorithms Lecture 3 of TDA383/DIT390 (Concurrent Programming) Carlo A. Furia Chalmers University of Technology University of Gothenburg SP3 2016/2017 Today s menu

More information

Lecture Notes on Induction and Recursion

Lecture Notes on Induction and Recursion Lecture Notes on Induction and Recursion 15-317: Constructive Logic Frank Pfenning Lecture 7 September 19, 2017 1 Introduction At this point in the course we have developed a good formal understanding

More information

Mutual Exclusion: Classical Algorithms for Locks

Mutual Exclusion: Classical Algorithms for Locks Mutual Exclusion: Classical Algorithms for Locks John Mellor-Crummey Department of Computer Science Rice University johnmc@cs.rice.edu COMP 422 Lecture 18 21 March 2006 Motivation Ensure that a block of

More information

The Universality of Consensus

The Universality of Consensus Chapter 6 The Universality of Consensus 6.1 Introduction In the previous chapter, we considered a simple technique for proving statements of the form there is no wait-free implementation of X by Y. We

More information

A Scalable Lock-free Stack Algorithm

A Scalable Lock-free Stack Algorithm A Scalable Lock-free Stack Algorithm Danny Hendler Ben-Gurion University Nir Shavit Tel-Aviv University Lena Yerushalmi Tel-Aviv University The literature describes two high performance concurrent stack

More information

Lecture Notes on Queues

Lecture Notes on Queues Lecture Notes on Queues 15-122: Principles of Imperative Computation Frank Pfenning Lecture 9 September 25, 2012 1 Introduction In this lecture we introduce queues as a data structure and linked lists

More information

Hoare Logic and Model Checking. A proof system for Separation logic. Introduction. Separation Logic

Hoare Logic and Model Checking. A proof system for Separation logic. Introduction. Separation Logic Introduction Hoare Logic and Model Checking In the previous lecture we saw the informal concepts that Separation Logic is based on. Kasper Svendsen University of Cambridge CST Part II 2016/17 This lecture

More information

Semantics of COW. July Alex van Oostenrijk and Martijn van Beek

Semantics of COW. July Alex van Oostenrijk and Martijn van Beek Semantics of COW /; ;\ \\ // /{_\_/ `'\ \ (o) (o } / :--',-,'`@@@@@@@@ @@@@@@ \_ ` \ ;:( @@@@@@@@@ @@@ \ (o'o) :: ) @@@@ @@@@@@,'@@( `====' Moo! :: : @@@@@: @@@@ `@@@: :: \ @@@@@: @@@@@@@) ( '@@@' ;; /\

More information

Whatever can go wrong will go wrong. attributed to Edward A. Murphy. Murphy was an optimist. authors of lock-free programs 3.

Whatever can go wrong will go wrong. attributed to Edward A. Murphy. Murphy was an optimist. authors of lock-free programs 3. Whatever can go wrong will go wrong. attributed to Edward A. Murphy Murphy was an optimist. authors of lock-free programs 3. LOCK FREE KERNEL 309 Literature Maurice Herlihy and Nir Shavit. The Art of Multiprocessor

More information

Principles of AI Planning. Principles of AI Planning. 7.1 How to obtain a heuristic. 7.2 Relaxed planning tasks. 7.1 How to obtain a heuristic

Principles of AI Planning. Principles of AI Planning. 7.1 How to obtain a heuristic. 7.2 Relaxed planning tasks. 7.1 How to obtain a heuristic Principles of AI Planning June 8th, 2010 7. Planning as search: relaxed planning tasks Principles of AI Planning 7. Planning as search: relaxed planning tasks Malte Helmert and Bernhard Nebel 7.1 How to

More information

Definition: A context-free grammar (CFG) is a 4- tuple. variables = nonterminals, terminals, rules = productions,,

Definition: A context-free grammar (CFG) is a 4- tuple. variables = nonterminals, terminals, rules = productions,, CMPSCI 601: Recall From Last Time Lecture 5 Definition: A context-free grammar (CFG) is a 4- tuple, variables = nonterminals, terminals, rules = productions,,, are all finite. 1 ( ) $ Pumping Lemma for

More information

Concurrent library abstraction without information hiding

Concurrent library abstraction without information hiding Universidad Politécnica de Madrid Escuela técnica superior de ingenieros informáticos Concurrent library abstraction without information hiding Artem Khyzha Advised by Manuel Carro and Alexey Gotsman Madrid

More information

Constant RMR Transformation to Augment Reader-Writer Locks with Atomic Upgrade/Downgrade Support

Constant RMR Transformation to Augment Reader-Writer Locks with Atomic Upgrade/Downgrade Support Constant RMR Transformation to Augment Reader-Writer Locks with Atomic Upgrade/Downgrade Support Jake Stern Leichtling Thesis Advisor: Prasad Jayanti With significant contributions from Michael Diamond.

More information

Remaining Contemplation Questions

Remaining Contemplation Questions Process Synchronisation Remaining Contemplation Questions 1. The first known correct software solution to the critical-section problem for two processes was developed by Dekker. The two processes, P0 and

More information

Byzantine Consensus in Directed Graphs

Byzantine Consensus in Directed Graphs Byzantine Consensus in Directed Graphs Lewis Tseng 1,3, and Nitin Vaidya 2,3 1 Department of Computer Science, 2 Department of Electrical and Computer Engineering, and 3 Coordinated Science Laboratory

More information

[module 2.2] MODELING CONCURRENT PROGRAM EXECUTION

[module 2.2] MODELING CONCURRENT PROGRAM EXECUTION v1.0 20130407 Programmazione Avanzata e Paradigmi Ingegneria e Scienze Informatiche - UNIBO a.a 2013/2014 Lecturer: Alessandro Ricci [module 2.2] MODELING CONCURRENT PROGRAM EXECUTION 1 SUMMARY Making

More information

An Annotated Language

An Annotated Language Hoare Logic An Annotated Language State and Semantics Expressions are interpreted as functions from states to the corresponding domain of interpretation Operators have the obvious interpretation Free of

More information

Distributed minimum spanning tree problem

Distributed minimum spanning tree problem Distributed minimum spanning tree problem Juho-Kustaa Kangas 24th November 2012 Abstract Given a connected weighted undirected graph, the minimum spanning tree problem asks for a spanning subtree with

More information

Whatever can go wrong will go wrong. attributed to Edward A. Murphy. Murphy was an optimist. authors of lock-free programs LOCK FREE KERNEL

Whatever can go wrong will go wrong. attributed to Edward A. Murphy. Murphy was an optimist. authors of lock-free programs LOCK FREE KERNEL Whatever can go wrong will go wrong. attributed to Edward A. Murphy Murphy was an optimist. authors of lock-free programs LOCK FREE KERNEL 251 Literature Maurice Herlihy and Nir Shavit. The Art of Multiprocessor

More information

From IMP to Java. Andreas Lochbihler. parts based on work by Gerwin Klein and Tobias Nipkow ETH Zurich

From IMP to Java. Andreas Lochbihler. parts based on work by Gerwin Klein and Tobias Nipkow ETH Zurich From IMP to Java Andreas Lochbihler ETH Zurich parts based on work by Gerwin Klein and Tobias Nipkow 2015-07-14 1 Subtyping 2 Objects and Inheritance 3 Multithreading 1 Subtyping 2 Objects and Inheritance

More information

Application: Programming Language Semantics

Application: Programming Language Semantics Chapter 8 Application: Programming Language Semantics Prof. Dr. K. Madlener: Specification and Verification in Higher Order Logic 527 Introduction to Programming Language Semantics Programming Language

More information

The alternator. Mohamed G. Gouda F. Furman Haddix

The alternator. Mohamed G. Gouda F. Furman Haddix Distrib. Comput. (2007) 20:21 28 DOI 10.1007/s00446-007-0033-1 The alternator Mohamed G. Gouda F. Furman Haddix Received: 28 August 1999 / Accepted: 5 July 2000 / Published online: 12 June 2007 Springer-Verlag

More information

Harvard School of Engineering and Applied Sciences CS 152: Programming Languages

Harvard School of Engineering and Applied Sciences CS 152: Programming Languages Harvard School of Engineering and Applied Sciences CS 152: Programming Languages Lecture 19 Tuesday, April 3, 2018 1 Introduction to axiomatic semantics The idea in axiomatic semantics is to give specifications

More information

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18 601.433/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18 22.1 Introduction We spent the last two lectures proving that for certain problems, we can

More information

Software Engineering using Formal Methods

Software Engineering using Formal Methods Software Engineering using Formal Methods Introduction to Promela Wolfgang Ahrendt & Richard Bubel & Reiner Hähnle & Wojciech Mostowski 31 August 2011 SEFM: Promela /GU 110831 1 / 35 Towards Model Checking

More information

Symbolic Execution and Proof of Properties

Symbolic Execution and Proof of Properties Chapter 7 Symbolic Execution and Proof of Properties Symbolic execution builds predicates that characterize the conditions under which execution paths can be taken and the effect of the execution on program

More information

Analyze the obvious algorithm, 5 points Here is the most obvious algorithm for this problem: (LastLargerElement[A[1..n]:

Analyze the obvious algorithm, 5 points Here is the most obvious algorithm for this problem: (LastLargerElement[A[1..n]: CSE 101 Homework 1 Background (Order and Recurrence Relations), correctness proofs, time analysis, and speeding up algorithms with restructuring, preprocessing and data structures. Due Thursday, April

More information

3 No-Wait Job Shops with Variable Processing Times

3 No-Wait Job Shops with Variable Processing Times 3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select

More information

Incompatibility Dimensions and Integration of Atomic Commit Protocols

Incompatibility Dimensions and Integration of Atomic Commit Protocols The International Arab Journal of Information Technology, Vol. 5, No. 4, October 2008 381 Incompatibility Dimensions and Integration of Atomic Commit Protocols Yousef Al-Houmaily Department of Computer

More information

BQ: A Lock-Free Queue with Batching

BQ: A Lock-Free Queue with Batching BQ: A Lock-Free Queue with Batching Gal Milman Technion, Israel galy@cs.technion.ac.il Alex Kogan Oracle Labs, USA alex.kogan@oracle.com Yossi Lev Oracle Labs, USA levyossi@icloud.com ABSTRACT Victor Luchangco

More information

Lock-free Serializable Transactions

Lock-free Serializable Transactions Lock-free Serializable Transactions Jeff Napper jmn@cs.utexas.edu Lorenzo Alvisi lorenzo@cs.utexas.edu Laboratory for Advanced Systems Research Department of Computer Science The University of Texas at

More information

Static Program Analysis Part 1 the TIP language

Static Program Analysis Part 1 the TIP language Static Program Analysis Part 1 the TIP language http://cs.au.dk/~amoeller/spa/ Anders Møller & Michael I. Schwartzbach Computer Science, Aarhus University Questions about programs Does the program terminate

More information

Reuse, don t Recycle: Transforming Lock-free Algorithms that Throw Away Descriptors

Reuse, don t Recycle: Transforming Lock-free Algorithms that Throw Away Descriptors Reuse, don t Recycle: Transforming Lock-free Algorithms that Throw Away Descriptors Maya Arbel-Raviv and Trevor Brown Technion, Computer Science Department, Haifa, Israel mayaarl@cs.technion.ac.il, me@tbrown.pro

More information

Lectures 20, 21: Axiomatic Semantics

Lectures 20, 21: Axiomatic Semantics Lectures 20, 21: Axiomatic Semantics Polyvios Pratikakis Computer Science Department, University of Crete Type Systems and Static Analysis Based on slides by George Necula Pratikakis (CSD) Axiomatic Semantics

More information

Reasoning About Imperative Programs. COS 441 Slides 10

Reasoning About Imperative Programs. COS 441 Slides 10 Reasoning About Imperative Programs COS 441 Slides 10 The last few weeks Agenda reasoning about functional programming It s very simple and very uniform: substitution of equal expressions for equal expressions

More information

CS 6110 S11 Lecture 25 Typed λ-calculus 6 April 2011

CS 6110 S11 Lecture 25 Typed λ-calculus 6 April 2011 CS 6110 S11 Lecture 25 Typed λ-calculus 6 April 2011 1 Introduction Type checking is a lightweight technique for proving simple properties of programs. Unlike theorem-proving techniques based on axiomatic

More information

Operational Semantics

Operational Semantics 15-819K: Logic Programming Lecture 4 Operational Semantics Frank Pfenning September 7, 2006 In this lecture we begin in the quest to formally capture the operational semantics in order to prove properties

More information

Note that in this definition, n + m denotes the syntactic expression with three symbols n, +, and m, not to the number that is the sum of n and m.

Note that in this definition, n + m denotes the syntactic expression with three symbols n, +, and m, not to the number that is the sum of n and m. CS 6110 S18 Lecture 8 Structural Operational Semantics and IMP Today we introduce a very simple imperative language, IMP, along with two systems of rules for evaluation called small-step and big-step semantics.

More information

CS 161 Computer Security

CS 161 Computer Security Wagner Spring 2014 CS 161 Computer Security 1/27 Reasoning About Code Often functions make certain assumptions about their arguments, and it is the caller s responsibility to make sure those assumptions

More information

Proving the Correctness of Distributed Algorithms using TLA

Proving the Correctness of Distributed Algorithms using TLA Proving the Correctness of Distributed Algorithms using TLA Khushboo Kanjani, khush@cs.tamu.edu, Texas A & M University 11 May 2007 Abstract This work is a summary of the Temporal Logic of Actions(TLA)

More information

Lock-Free and Practical Doubly Linked List-Based Deques using Single-Word Compare-And-Swap

Lock-Free and Practical Doubly Linked List-Based Deques using Single-Word Compare-And-Swap Lock-Free and Practical Doubly Linked List-Based Deques using Single-Word Compare-And-Swap Håkan Sundell Philippas Tsigas OPODIS 2004: The 8th International Conference on Principles of Distributed Systems

More information

On the Importance of Synchronization Primitives with Low Consensus Numbers

On the Importance of Synchronization Primitives with Low Consensus Numbers On the Importance of Synchronization Primitives with Low Consensus Numbers ABSTRACT Pankaj Khanchandani ETH Zurich kpankaj@ethz.ch The consensus number of a synchronization primitive is the maximum number

More information

NON-BLOCKING DATA STRUCTURES AND TRANSACTIONAL MEMORY. Tim Harris, 17 November 2017

NON-BLOCKING DATA STRUCTURES AND TRANSACTIONAL MEMORY. Tim Harris, 17 November 2017 NON-BLOCKING DATA STRUCTURES AND TRANSACTIONAL MEMORY Tim Harris, 17 November 2017 Lecture 7 Linearizability Lock-free progress properties Hashtables and skip-lists Queues Reducing contention Explicit

More information

Software Engineering using Formal Methods

Software Engineering using Formal Methods Software Engineering using Formal Methods Introduction to Promela Wolfgang Ahrendt 03 September 2015 SEFM: Promela /GU 150903 1 / 36 Towards Model Checking System Model Promela Program byte n = 0; active

More information