Exceptions in Chapel


Thomas H. Hildebrandt

Abstract

This document provides a design proposal for exceptions in Chapel. It shows how synchronous and asynchronous exceptions can be added to the language, and makes recommendations on how these should be implemented. In Chapel, exceptions would have two notable properties: asynchronous exceptions can alter the normal flow of control without requiring polling or explicit blocking commands; and exceptions can be resumable, meaning that execution can continue from the point at which the exception was thrown.

1 Introduction

Exceptions are a useful construct in modern programming languages. They permit a piece of code to interact with its callers, and thus determine how to proceed when an anomaly arises. The use of exceptions to respond to error conditions is within their charter, but takes a narrow view of their power. They can also be used to alter flow under more normal circumstances. This view of exceptions is compatible with both synchronous and asynchronous exceptions.

Synchronous exceptions are those which arise in (or appear to arise in) the code being executed. Asynchronous exceptions are not directly related to the code being executed, and may occur at any point within the program. Although they are quite different in their origins, synchronous and asynchronous exceptions have many properties in common. Thus it is reasonable to consider exposing them to the programmer using the same basic constructs.

A well-implemented exception system should provide the coder with complete control over how an exception is handled, including the possibility of ignoring it completely. This is especially important in handling asynchronous exceptions. The resume statement proposed here would provide the means to ignore an exception after it has been caught. It is also desirable to implement exceptions in such a way that system throughput can be maximized.
The anticipation of an asynchronous event must not require polling a shared object for a change of state. It is also undesirable to require a blocking call or a task switch to receive an asynchronous event. The manifestation of an asynchronous event in the high-level language should mimic as closely as possible its manifestation at the hardware level.

It should be noted that asynchronous exceptions are not meant to replace signal- or event-handling provided by the OS or programming framework. Rather, asynchronous exceptions supplement signal/event handlers by hiding synchronization details and possibly providing a non-polled, non-blocking execution pathway.

2 Motivating Examples

This section provides a number of examples to motivate the design presented later in this document.

2.1 Reporting an Error

A standard example of how exceptions can be used is when the current routine cannot proceed under the set of assumptions used in its design. For example, in the implementation of a Vector class, an indexing routine cannot proceed normally if the given index lies outside the set of known indices.

 1 class Vector {
 2   type valuetype;
 3   var mydomain : range(int);
 4   var myarray : [] valuetype;
 5
 6   proc get(i:int) {
 7     if (! mydomain.member(i)) then
 8       throw new InvalidIndex();  // i lies outside my domain!
 9     return myarray[i];
10   }
11 }

This example clearly shows we want to do something different if the given index does not match any value element. But what to do in this case is best left to the caller.

2.2 Resuming an Exception

In the above example, the communication between the callee and caller is one-way. The callee says that an exceptional condition was encountered and it cannot continue. The caller may call the same function again, possibly with different arguments, but any state private to the callee is lost. When a caller instead resumes a synchronous exception, execution continues without any loss of state information. It can be interpreted as an indication that the callee should try harder, using the information it has on hand.

It is also possible that the caller's state can be modified in the catch handler before execution is resumed. The implementation can support such an idiom, but it must be used with caution. Portions of the callee state which are derived from state in the caller would have to be recalculated. Unless this is a natural part of the callee implementation, this idiom is probably best avoided. Another approach is to require that the callee assume that shared data in its environment may have changed after a throw statement. Defensive programming would encourage the use of die instead of throw in most cases, using throw only where the resumption of an exception was anticipated.

In the context of an asynchronous exception, resumption allows the event to be ignored. Execution continues from the point at which the asynchronous exception was recognized, with no loss of state information.
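Resumable exceptions along these lines are not available in mainstream languages. As a rough illustration of the underlying idea -- the caller, not the callee, decides how to proceed -- here is a Python sketch in the spirit of Common Lisp's condition/restart system. All names here are hypothetical, and this is only an approximation: a caller-supplied handler stands in for the proposed resume mechanism.

```python
# Hypothetical sketch: the caller supplies a recovery policy up front,
# approximating a "resumable" exception without language support.

class InvalidIndex(Exception):
    pass

class Vector:
    def __init__(self, data, on_invalid_index=None):
        self.data = data
        # Caller-supplied policy: return a substitute value to "resume",
        # or leave as None to get an ordinary (one-way) throw.
        self.on_invalid_index = on_invalid_index

    def get(self, i):
        if not (0 <= i < len(self.data)):
            if self.on_invalid_index is not None:
                # "Resume": continue with a value chosen by the caller's
                # handler, without losing the callee's state.
                return self.on_invalid_index(i)
            raise InvalidIndex(i)  # one-way throw: callee state is lost
        return self.data[i]

v = Vector([10, 20, 30], on_invalid_index=lambda i: 0)
print(v.get(1))   # in range
print(v.get(99))  # out of range: the handler "resumes" with a default
```

A true resume statement is more powerful than this sketch: it requires no pre-registered callback, and it returns control into the middle of the callee at the point of the throw.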
This construct could be especially useful in polymorphic implementations, where one version of a subobject is to be sensitive to a class of asynchronous exceptions while another is not.

In a typical implementation of exception handling, a throw statement causes the current dynamic scope to be exited. Its effect is similar to other flow-altering statements like break, continue and return, in that none of the statements following it in the same block will be executed. A throw statement should be the last statement in a block; any statement following it is unreachable. If we allow an exception to be resumed, then these semantics change. A routine has to be prepared to continue execution following the throw statement. That is, the statements following a throw are no longer necessarily unreachable. They will be reached if and only if the thrown exception is resumed.

 1 class Vector {
 2   type valuetype;
 3   var mydomain : range(int);
 4   var myarray : [] valuetype;
 5
 6   proc get(i:int) {
 7     if (! mydomain.member(i)) {
 8       throw new InvalidIndex();  // i lies outside my domain!
 9
10       // If the above exception is resumed, we continue execution here:
11       return new valuetype();
12     }
13     return myarray[i];
14   }
15 }

Consider the throw statement on line 8. If a catch clause or the calling routine wants this piece of code to try harder, it will resume the exception. In that case, execution will continue with the statement following the throw statement. This routine responds by constructing and returning a new element of the valuetype which is initialized to that type's default value. Alternatively, if the valuetype is known to be a class type, the routine could return nil. Another alternative would be to throw another exception.

The dialog between this routine and its caller could continue until the callee returns some reasonable value or succeeds in persuading the caller that nothing more can be done. Thus, the callee also needs a way to signal that an exception cannot be resumed. For this purpose, we propose to use the die keyword to throw an exception which cannot be resumed. To retain opacity (that is, the calling routine cannot know whether an exception came from a throw or from a die), it must be permissible for the calling routine to execute a resume statement in either case. A resume on a terminal (non-resumable) exception would simply re-throw the exception as if it originated at that point in the caller's catch clause (i.e., at the end of the resume statement). As in C++ and Java, throw without an object parameter re-throws the current exception. Similarly, a die statement without parameters re-throws the current exception as a non-resumable exception.

2.3 Asynchronous Exceptions

This example motivates the desirability of handling asynchronous exceptions without requiring explicit polling or yield statements. Let us assume that we have a mechanism for attaching handlers to signals that may be raised by asynchronous events. Then, we might have code to respond to a keyboard input event. The first implementation uses polling:

 1 record syncchar {
 2   sync var lock = false;
 3   var full : bool = false;
 4   var char : int(8);
 5
 6   proc write(c) {
 7     lock;          // Grab the lock.
 8     char = c;
 9     full = true;
10     lock = false;  // Release the lock.
11   }
12
13   proc isfull() {
14     lock;          // Grab the lock.
15     if (full) return true;
16     lock = false;  // Release the lock.
17     return false;
18   }
19
20   proc read() {
21     // Assume we already have the lock.
22     // A call to read() must occur after isfull() returns true and before release().
23     var c = char;
24     full = false;
25     lock = false;  // Release the lock.
26     return c;
27   }
28 }
29
30 var sharedchar : syncchar;
31
32 pragma "handler" proc KeyboardInterruptHandler() {
33   sharedchar.write(primitive(getchar()));
34 }
35
36 proc mykbdreader() {
37
38   while (! sharedchar.isfull()) { /* do nothing */ }
39
40   var mychar = sharedchar.read();

42 }

Here, the keyboard interrupt handler places a value into the sharedchar record. The reader (one of several, perhaps) waits until a character is available and then reads it. The busy-wait loop is clearly undesirable, and can be eliminated through a better use of sync variables.

1 sync var sharedchar : int(8);
2
3 pragma "handler" proc KeyboardInterruptHandler() {
4   sharedchar = primitive(getchar());
5 }
6
7 proc mykbdreader() {
8   var mychar = sharedchar;
9 }

The assignment in mykbdreader() waits for a character to become available before proceeding. The underlying thread may still be busy-waiting until the sync variable is filled, but at least there is the opportunity for it to switch to another task and accomplish some useful work while the sync variable is empty. However, this example still suffers from the fact that an explicit blocking statement (namely the assignment with sharedchar on the RHS) is required to receive the asynchronous message (that of a character being available through getchar()).

The usefulness of this approach breaks down completely when one considers a computation that has nothing to do with reading the keyboard. We'd still like to be able to interrupt it in case the user hits Ctrl-C, for example, or if some timer times out.

 2 pragma "handler" proc KeyboardInterruptHandler() {
 3   var c = primitive(getchar());
 4   if (c == 0x03) then
 5     die new System.Keyboard.Exceptions.CtrlC();
 6 }
 7
 9 // Write out Fibonacci numbers forever.
10 proc fibwriter()
11 {
12   try {
13     var a = (0, 1);
14     while (true) {
15       writeln(a(1));
16       a = (a(2), a(1) + a(2));
17     }
18   }
19   catch (e:System.Keyboard.Exceptions.CtrlC) { /* Extinguish it */ }
20   finally {
21     writeln("et cetera....");
22   }
23 }

There's a problem here. Depending on how writeln() is implemented, the interrupt might occur between the printing of consecutive digits. So we might get something like et cetera.... at the end of the printout, rather than seeing et cetera.... on a line by itself.
In that case, the programmer can supply his own synchronization by setting a shared variable and testing that in a loop, thus:

 1 // Write out Fibonacci numbers until someone hits Ctrl-C.
 2 proc fibwriter()
 3 {
 4   volatile var keepprinting = true;
 5   try {
 6     var a = (0, 1);
 7     while (keepprinting) {
 8       writeln(a(1));
 9       a = (a(2), a(1) + a(2));
10     }
11   }
12   catch (e:System.Keyboard.Exceptions.CtrlC) {
13     keepprinting = false;
14     resume;
15   }
16   finally {
17     writeln("et cetera....");
18   }
19 }

In this case, when the Ctrl-C exception occurs, the catch clause just sets keepprinting to false and then resumes execution from wherever the asynchronous interrupt occurred. If writeln was interrupted mid-line, it will continue, and execution will proceed to the end of the inner loop. Then, the loop condition will test false and the routine will exit normally, printing et cetera.... on a line by itself.

Note that no synchronization guards are required on updates to the variable keepprinting. That is because all exceptions have been synchronized by the time they run in user (as opposed to handler) code. Note also that the volatile keyword is not really necessary: the need to re-read keepprinting can be inferred from the fact that it is referenced from within a catch clause.

2.4 Asynchronous Messaging

The final motivating example involves sending a message from one task to another. An important construct we wish to support is to launch several search tasks in parallel and let the winner terminate all of its sibling tasks early. That so-called eureka construct presents a challenge, especially if polling is to be avoided. Some search algorithms (e.g., some sort of table-driven search) might be amenable to periodically checking a state variable to see if the search had been concluded elsewhere; others perhaps not. Either way, the polling code represents the intermingling of control and computation, so it does not fit well with the Chapel design goal of separating the two.

In any case, polling represents a potential communication bottleneck. Suppose, for instance, that it was important to preserve the first answer found (as opposed to assuming that any answer is as good as any other).
In that case, it would be necessary for each task to lock the state variable before checking it. If no answer has yet been found, the current task would write its answer and only then release the lock. Now it is clear that each task will have to wait its turn to examine the state (whether an answer has already been found). If it takes only a small bit of computation to generate and test a candidate solution, then the overhead of polling the search state will dominate, and the overall performance of the algorithm will be poor.

Supporting the implementation of a eureka through exceptions solves both problems: it removes explicit polling code from the algorithmic code and it avoids creating a performance bottleneck.

 1 proc searchall()
 2 {
 3   sync var answer: answertype;
 4   var tasks: [] chpl_taskid;
 5   tasks = coforall i in 0..#numLocales {
 6     on Locales[i] do
 7       try {
 8         answer = localsearch(i);
 9         // We get here on the one thread that finds a solution.
10         killtasks(tasks, i);
11       }
12       catch (e:killtaskexception) { /* extinguish */ }
13       finally do writeln("task ", i, " exited.");
14   }
15
16   return answer;
17 }

19 // Terminates normally only if a solution is found.
20 proc localsearch(i:int)
21 {
22   while (true) {
23     // Generate a candidate that is unique to this locale.
24     var candidate = generatecandidate(i);
25     if issolution(candidate) then
26       return candidate;
27     // otherwise, keep looking.
28   }
29 }
30
31 proc killtasks(tasks:tasklist, i:int) {
32   // i is the index of this task (the winner).
33   // We want to kill all the others.
34   forall j in tasks.dom do
35     if j != i then
36       throw (tasks[j]) new KillTaskException();
37 }

The intent here is that a copy of localsearch() is spawned on each locale, but only the first to produce an answer gets to write a value into the answer sync variable. Others which terminate later will be blocked, waiting for that lock to be released. The remainder continue their search. The first localsearch to produce a value causes searchall() to run killtasks(). That routine sends a signal to each task in the list of tasks returned by the coforall statement, except for the currently running task.[1]

The semantics of the coforall have been altered for the sake of this example: coforall now returns a list of the tasks it creates. It is assumed that the assignment of tasks is performed before the body of any of the child tasks is entered. In effect, the coforall starts to create all the tasks it needs and then there is a barrier to ensure they are all created. This is necessary to avoid the possible race in which one task shouts Eureka before one of its peers has even been created. Said sibling would be absent from the list of tasks to be killed, so it would start running after all the other siblings had been harvested. At best, it would just needlessly consume resources and eventually run to completion without finding a solution. At worst, it would overwrite the first answer with a different one, or perhaps attempt to overwrite an object that was no longer there.
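For comparison, the eureka idiom can be approximated with today's mainstream threading libraries only through cooperative cancellation. The Python sketch below (hypothetical toy search, not Chapel's mechanism) uses a shared stop event instead of cross-task exceptions, since Python provides no way to throw an exception into another thread. Note that it reintroduces exactly the polling this proposal is designed to eliminate.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Shared "eureka" flag; each worker must poll it cooperatively.
stop = threading.Event()

def local_search(i):
    while not stop.is_set():      # cooperative check, i.e. polling --
        candidate = i * i         # toy candidate, unique per "locale"
        if candidate >= 9:        # toy "solution" test
            stop.set()            # eureka: tell the siblings to quit
            return candidate
    return None                   # terminated by a sibling's eureka

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(local_search, i) for i in range(4)]
    results = [f.result() for f in futures]

answers = [r for r in results if r is not None]
print(answers)
```

Here only the worker with i = 3 can satisfy the toy test, so exactly one answer survives; the losers spin until the event is set and then exit, which is the behavior killtasks() achieves without any polling.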
We also introduce (in Chapel, at least) the concept of sending a message to a particular task. The syntax throw (tasks[j]) new KillTaskException() is intended to represent throwing a KillTaskException within the task whose ID is tasks[j]. Only the throw keyword is supported when sending an exception to another task. This is because execution continues immediately when an exception is generated synchronously and sent to another task. The exception becomes asynchronous in the context of the target task; resuming it there causes execution to continue from the point at which the target task was interrupted. The resume has no effect on the task which threw the exception.

When a task that is running localsearch() receives a KillTaskException, control will exit that routine and be transferred to the matching catch block in searchall(). The catch block merely extinguishes the exception. Since the interrupted routine is not resumed, the stack is unwound to the context of the catch block. Execution resumes with the finally block, which prints that the corresponding task has exited. A throw to another task returns immediately, so the winning thread will exit first. After all of the other tasks exit, we exit the coforall statement, print the answer, and finally return from searchall().

2.5 Blocking Asynchronous Exceptions

It is assumed that the execution environment internally prevents the reentry of handler routines. Thus, it is possible to reason about the sequence in which handlers of a given event type are executed. Different event types may have different priorities, whereby it is possible for one handler to be interrupted by another.

[1] In this example, we thought it a better design to place all of the task control logic in the process that creates the search tasks. That way, the search termination exception flows from the parent to the children rather than from one child to another.
This design is not always possible (for example, when each child can spawn other tasks), but hopefully adds to the clarity of the example.

In both handler and user code, it may be desirable to delay the receipt of asynchronous exceptions. Lacking any specific examples, it seems reasonable to provide a general language element that can provide the necessary semantics. A block library function could serve both functions, as shown.

1 proc sample() {
2   var blocked = System.Exceptions.block();
3     // Returns the current blocking state.
4     // Default argument is true,
5     // meaning blocking is enabled.
6
7   System.Exceptions.block(blocked);  // Restore the previous blocking state.
8 }

3 Added Statements

This section describes the statements that are implied by this proposal. Support for synchronous exceptions follows other modern languages (Java, C#) which do not directly support parallelism.[2] The required statements are try, catch, finally, throw, die, resume and block. Support for asynchronous exceptions requires the ability to record task IDs and throw exceptions at a particular task. These capabilities extend the current parallel constructs in the language, forcing them to return a list of spawned task IDs, and extend the throw statement to accept an optional task ID. So that asynchronous exceptions may be ignored, the resume statement allows execution to continue from the point at which an exception was issued.

3.1 The Throw Statement

In Chapel, throw and die statements have the following syntax:

throw-statement:
  throw-or-die
  throw-or-die expression
  throw ( task-id ) expression

throw-or-die:
  throw
  die

task-id:
  expression

The first form (without an expression) can appear only within a catch clause. It re-throws the current exception as if it were thrown from that point in the code. In particular, the stack is unwound to the current run-time scope before the exception is re-thrown. If the throw keyword is used, the re-thrown exception is resumable; if the die keyword is used, the re-thrown exception is not resumable. The second form may appear anywhere a statement may appear.
When such a throw or die statement is executed, the expression is evaluated, and the resulting value becomes the value of the exception. The type of the exception is the run-time type of that value. If the throw keyword is used, the thrown exception is resumable; if the die keyword is used, the thrown exception is not resumable. After the value of the exception is established, control is transferred to the end of the statement in which the throw statement appears. If this statement is a try statement, then the search for a matching catch clause begins. Otherwise, successive run-time scopes are exited until a try statement is exited. If no such try statement exists, then the task in which the exception was generated terminates.

In the third form, an exception is thrown to the task indicated by the task-id expression. Execution continues in the throwing task with the statement immediately following the throw. The exception is delivered to the target task as an asynchronous exception. If a task terminates with an active exception, that exception is sent to the spawning task as an asynchronous exception. See below for more information on how asynchronous exceptions are handled.

[2] Both have standard thread support libraries, but neither integrates the concept of a task or thread directly in the language.
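The parameterless re-throw form behaves much like Python's bare raise statement, which re-raises the exception currently being handled. A short Python sketch (not Chapel) of the control flow:

```python
log = []

def callee():
    raise ValueError("bad input")

try:
    try:
        callee()
    except ValueError:
        log.append("inner caught")
        raise          # bare raise: re-throws the current exception,
                       # like the proposal's parameterless throw
except ValueError as e:
    log.append(f"outer caught: {e}")

print(log)
```

The inner handler runs first, the bare raise reactivates the same exception object, and the search for a matching handler then continues in the enclosing scope.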

3.2 The Try, Catch and Finally Statements

A try... catch... finally statement has the following syntax:

try-catch-finally-statement:
  try-statement catch-statement-list[opt] finally-statement[opt]

try-statement:
  try statement

catch-statement-list:
  catch-statement
  catch-statement-list catch-statement

catch-statement:
  catch ( exception-declaration ) where-clause[opt] do[opt] statement

finally-statement:
  finally do[opt] statement

A try statement may be followed by zero or more catch statements, followed by an optional finally statement. In a catch-statement or finally-statement, the do keyword is optional if the controlled statement is a compound statement (i.e., it starts with an open brace).

The try-statement introduces a dynamic scope from which an exception may be thrown, directly or indirectly. If an exception causes control to exit a try statement, then the first matching catch statement is executed. Whether or not an exception is thrown within the try block, and whether or not any catch statement is executed, the finally statement is executed before control reaches the end of the try-catch-finally-statement.

To find a matching catch statement, each catch statement in the catch-statement-list is tried in turn. Each catch clause is treated like a local procedure taking one argument. If the run-time type of the exception is compatible with the type of the formal parameter and the where clause (if present) evaluates to true, then that is a matching catch clause, and its body is executed. The exception becomes inactive upon entering the body of the catch clause.

Control can leave a catch clause in one of five ways: by reaching the end of the controlled statement; by reaching a return statement; by throwing a new exception; by re-throwing the current exception; or by reaching a resume statement.
If control reaches the end of the controlled statement, then control is transferred to the beginning of the finally clause if it exists, and otherwise to the statement following the try-catch-finally statement. If a return statement is encountered within a catch statement, control is transferred to the end of the controlled statement. The finally block will still be executed (if it exists) and then control transferred to the statement following the end of the try-catch-finally-statement.[3]

If a throw statement is reached within a catch statement, control is transferred to the end of the controlled catch statement. No further catch clauses are tested, and control is transferred to the start of the corresponding finally clause if present. Execution resumes with the statement following the enclosing try-catch-finally statement. The semantics are the same for a re-thrown exception, except that the current exception is merely reactivated. See below (3.3) for the semantics of the resume statement.

[3] This is consistent with the execution of catch statements behaving like procedure calls.
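These rules match the behavior of try/except/finally in existing languages. The Python sketch below demonstrates the key property: the finally clause runs before control leaves the construct on every path, including a return from inside a catch clause.

```python
events = []

def demo(fail):
    try:
        events.append("try")
        if fail:
            raise RuntimeError("boom")
        return "ok"
    except RuntimeError:
        events.append("catch")
        return "recovered"   # finally still runs before control leaves
    finally:
        events.append("finally")

assert demo(False) == "ok"        # no exception: try, then finally
assert demo(True) == "recovered"  # exception: try, catch, then finally
print(events)
```

In both calls the "finally" entry is appended after the return value has been decided but before the caller regains control, which is exactly the ordering described above.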

3.3 The Resume Statement

When a resume statement is executed, the action taken depends on whether the current exception is resumable. With a resumable exception, control is transferred to the point from which the current exception was thrown. For a synchronous exception, this is the end of the throw statement which generated the exception. For an asynchronous exception, it is the point at which the exception took control away from the executing task. In either case, the current exception is released, meaning that the current exception is cleared from the execution context and any resources allocated to it are released.

With an exception that is not resumable, execution proceeds as if the resume statement were replaced by a die; statement. That is, the stack is unwound to the current execution context, the current exception is reactivated, and control is transferred to the end of the enclosing catch block.[4] The re-thrown exception is not resumable.[5]

4 Modified Statements

Support for asynchronous exceptions requires that task identity be revealed. Therefore, statements which create tasks now return a task ID or a list of task IDs as appropriate. The begin statement now behaves like an expression, returning the task ID (of type chpl_taskid) of the task being spawned. By the time control returns to the spawning task and the chpl_taskid result is available, the spawned task is guaranteed to have been initialized to the point that it can receive asynchronous exceptions. No other timing constraints are implied. In particular, it is possible for the spawned task to generate an asynchronous exception in the spawning task before the execution of the spawning begin statement has completed.

The cobegin, coforall and forall statements now return a list of task IDs corresponding to the tasks spawned by them.
As with the begin statement, the spawned tasks are guaranteed to be initialized to the point that they can receive asynchronous exceptions by the time the assignment is made. However, the list of tasks is updated before execution of any of the spawned tasks begins. That way the entire list is available before any of them can generate an exception. The concept of task teams has been discussed in the past. In that vernacular, coforall, cobegin and forall start by creating an anonymous task team. Then, the task team is used to implement the body of the corresponding parallel statement. It stands to reason that the identities of the task team members would be available before execution of any of those team members has begun.

If multiple exceptions reach the end of a structured parallel construct (sync, cobegin, coforall or forall), they are queued as asynchronous exceptions in the parent task. For the sync, cobegin and coforall statements, the order of the asynchronous exceptions is unspecified. For the forall statement, asynchronous exceptions are delivered in the order in which the corresponding loop indices are generated.[6]

Added Library Calls

Asynchronous exceptions also indicate the need for a block() function call, which both sets and returns the blocking state (in which asynchronous exceptions are disabled).

The Block Function

The block() function takes a boolean expression as its argument and returns a boolean value. If the expression evaluates to true, then the delivery of asynchronous exceptions is prevented until another block call with a false argument is executed. Its return value is the previous blocked state. When a new task is spawned, its blocked state is inherited from the spawning task.

[4] It is an open issue whether a try-catch-finally-statement may appear within a catch or finally statement.
[5] The rationale for this is mostly syntactical: resume exits the enclosing catch block; there is no expectation that execution can continue past the resume statement.
[6] This obeys the serializability restriction on the semantics of a forall.
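The proposed block()/restore idiom closely parallels POSIX per-thread signal masking. The Python sketch below (Unix-only; uses signal.pthread_sigmask, with SIGUSR1 standing in for the class of asynchronous exceptions) assumes the calling thread starts with SIGUSR1 unblocked, and the block() helper here is a hypothetical stand-in for the proposed library call, not an existing API.

```python
import signal

# Hypothetical analogue of the proposed block() call, built on POSIX
# per-thread signal masking. Returns the previous "blocked" state.
def block(enable=True):
    how = signal.SIG_BLOCK if enable else signal.SIG_UNBLOCK
    old_mask = signal.pthread_sigmask(how, {signal.SIGUSR1})
    return signal.SIGUSR1 in old_mask   # previous blocking state

was_blocked = block()    # defer delivery; remember the old state
# ... critical section: SIGUSR1 delivery is deferred here ...
block(was_blocked)       # restore the previous state
```

The save/restore pairing mirrors the sample() procedure shown earlier: a nested region can disable delivery without needing to know, or disturb, the state chosen by its caller.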

5 Additional Semantics for Asynchronous Exceptions

Asynchronous exceptions are so named because they can originate from threads other than the one running the target task. They must be synchronized with the target task so they can run on the same thread. Clearly, only the task or the exception can then be running, not both at once. Our concept of an exception only makes sense if the exception is allowed to interrupt the target task. The target task can expect to complete certain operations atomically, but may be interrupted at the point where one such atomic operation has completed and before the next commences. Those points are called sequence points. An exception sent to a task must wait for the task to reach the next sequence point before interrupting it. To keep the implementation efficient, the granularity of sequence points is expected to be quite fine, corresponding to the read or write of a variable of fundamental type. At that point, the state of the running task is saved and control is transferred to the exception execution path as described above.

Since asynchronous exceptions can come from multiple sources, each task has a first-in, first-out exception queue associated with it. Normal execution cannot continue until all the exceptions in the queue have been handled. Thus, a normal task exit means that no exceptions were pending. The other possibility is that the task exits with an active exception. In this case, the active exception becomes an asynchronous exception in its parent's task and remaining queued exceptions are moved from the child's queue to the parent's queue.[7]

Sequence points can be generally defined as times during the execution at which the result of a calculation or other side-effects becomes externally visible.
For example, between two successive statements which increment the variable a,

a += 1;
a += 1;

the result of the first incrementation must become visible so that the new value of a can be read and updated by the second. The semicolon at the end of the statement can be taken to demarcate the sequence point between the two. There are other places in the code where it is convenient to specify the existence of a sequence point. We will follow the C++ spec in general, and make such modifications as are necessary to fit the Chapel computational model. The definition of a sequence point is intended to guarantee that updates to variables of a fundamental type are completed atomically. Users are expected to supply additional code in class definitions if updates to larger types are similarly necessary.

6 Implementation Notes

This section presents a possible implementation for synchronous exceptions, assuming existing hardware and operating system support. It is not normative.

6.1 Overview

This subsection provides an overview of the concepts used to discuss the implementation.

Tasks and Threads

In Chapel, a task is the user-visible unit of execution. When work is to be performed in parallel (whether distributed across locales or not), a task is launched for each independent portion. Tasks run on threads, meaning that several tasks may be assigned to one thread, but at most one can be running at a given time. Similarly, threads run on the hardware. Multiple threads can run on a single core. The operating system typically mediates the assignment of threads to cores, and schedules when the threads are run. A processor chip can contain several cores. A processor will be underutilized if it is running fewer threads than it has cores. Some physical cores

[7] The child task does not necessarily have a try statement active when the first exception is taken. So this arrangement is necessary to ensure that an asynchronous event is not ignored.

can run multiple threads concurrently, effectively multiplying the number of virtual cores available. In such systems, the number of threads should be greater than or equal to the number of virtual cores.

Tasks provide another layer of abstraction on top of threads, but have a similar implementation. A queue or table of tasks is maintained, and one is selected to run when there is an unoccupied thread available. However, in the current Chapel implementation, a thread can only change between tasks when the task explicitly relinquishes control (at a yield or sleep call, or when the running task terminates). Part of the current proposal is to expand the semantics of tasks, so that control can also be transferred between tasks in response to a hardware interrupt.

Interrupts, Signals and Events

When a hardware interrupt occurs, control is transferred from the currently running thread to an interrupt service routine (ISR). In order to do any useful work, the ISR will set up a separate stack for its duration. In this sense, all hardware interrupts are asynchronous, even if they were immediately caused by the execution of an instruction (e.g., a stack overflow or other hardware trap). An ISR can communicate with the operating system and trigger the creation of one or more threads. After this is done, the interrupting thread can return, whereupon it will die and the interrupted thread will continue execution. Alternatively, the state of the interrupted thread can be stored, and control transferred to another thread. That is to say, hardware interrupts can interact with the thread scheduler and switch control from one thread to another. In either case, the interrupting thread ceases to exist after the ISR exits.

Operating systems present machine interrupts as signals. Any task can attach a signal handler to a specific signal. That code is run when an ISR is run and detects the conditions corresponding to the specified signal.
These signal handlers can modify state that is local to the task that attached the handler, providing a means for the task and its installed handler to communicate. The important concepts are that the operating system provides an abstraction layer on top of hardware interrupts, and that a signal handler runs asynchronously w.r.t. the task which installs it. Some programming environments provide an event system, which adds a layer of abstraction to the signal handlers provided by the OS. Event handlers are attached to events in a manner similar to signal handlers being attached to signals. Event handlers are still run in a thread which is distinct from the thread which installed the event handler, and the synchronization problem persists (see below).

6.2 Synchronization of Asynchronous Interrupts

As long as code which affects a given task is running in a different thread, the two are not synchronized. In the case of a signal or event handler, state which is local to the task which installed the handler can be updated by the handler, but it is still necessary for the original task to notice that the local state has changed. Only then can it modify its behavior in response to the asynchronous stimulus. To avoid polling, a handler must have the ability to inform a running task that its state has changed. For tasks which are suspended, this is easy: a bit in the task state is flipped to indicate that an event has occurred. When the task is resumed, this state is checked to determine whether the task is to be resumed normally or to take some different path. In the current proposal, resumption of an interrupted task would cause control to be transferred to the innermost exception handler. The address of the exception object is passed to this handler. It is presumed that the asynchronous event handler has created this object, so the code on the exception-handler side is the same as that used for normal (synchronous) exceptions.
Since the execution path taken by the interrupted task changes immediately (as far as it can tell) in response to the asynchronous event, such an event becomes synchronized within the user's code. A few more elements are necessary to complete the design. Operating systems typically treat all threads equally. The first tasks performed by an ISR are to preserve the state of the interrupted thread and to create a fully fledged thread for the top-level handler code to run in. At this point (it is important to note) all of the synchronous threads (on that core) are suspended. The handler code can then spawn other threads and call other handler routines as needed to complete its work. Further interrupts are disabled until the interrupting thread is fully operational. Once that is done, further interrupts can be accepted without compromising the threading model. When it is done, the handler thread is destroyed, the interrupted (or another) thread is restored, and execution resumes on the selected thread. This regularization of threads in the OS must be propagated to tasks as well. Presumably, an event handler knows which task installed it. Therefore, it can locate and modify the corresponding task table entry to indicate that it should
take the exception path upon resumption rather than the normal execution path. This will work whether or not the task was running when the event occurred. By adding code to the task scheduler to resume a task normally or via its exception handler (as appropriate), we can effect the desired behavior. It only remains to appropriately alter the behavior of the task which was running on the interrupted thread. We get an opportunity to do that when the thread is revived as the handler completes execution. As the thread was suspended, it would have stored off the address at which execution was to have resumed. Since handler code can have modified this address, reloading the task data prior to resuming the thread will propagate the desired exception-handling behavior to the running task.

6.3 Cross-task Signaling

An event handler knows which task installed it. It can look up that task ID, and knows which task's descriptor needs to be updated in order to continue execution on the exception path. It has the advantage of knowing (we assume for convenience) that the target task is either suspended, or the thread it was running on has been interrupted (directly or indirectly) by its own thread. A slight generalization would allow the event handler to select any task as its target. If the hardware supports only one active thread, this can work in just the same way as the asynchronous events described above: either the target task itself is suspended, or the thread on which it is running is. But this approach breaks down if there are multiple (physical or virtual) processors. The situation is still simple if the target task is suspended: its resumption address is modified, and when it resumes it will take the exception path.
There are (at least) two possibilities if the target task is running: one is to create a hardware exception that will interrupt the thread on which the target task is running; the other is to simply queue the exception, and wait for the task to be suspended and resumed for other reasons. It may be desirable to implement both, since this automatically provides two levels of priority. The queue idea comes into play because another asynchronous event would cause the resumption address in the interrupted task to be overwritten. If an exception were already pending on that task, the program would simply forget to ever take it. This kind of unpredictable behavior is clearly unacceptable. When an attempt is made to resume a task which has a queue of pending exceptions, it must process all of them before proceeding with normal execution. With the queue in place, it becomes very easy to implement cross-task signaling: an exception record can be added asynchronously to the exception queue of any other task. It is merely necessary for the signaling task to be able to identify the target task (using its unique task ID, for example).

6.4 Required Thread Support

In the above discussion, we noted that in a running task the immediate response to an asynchronous event depends upon some specific support in the threading implementation. Generally, as part of handling an asynchronous event, the thread must store off the location at which it expects to resume. This location may be modified in the case that the running task receives an exception while the ISR is active. Prior to resuming, the thread must reload the address from the active task. If this address has changed, then execution can continue from the start of the catch handler for the current execution context. Minimal raw support could be effected by providing a call against the threading interface, to update the address to which an ISR returns when returning control to the interrupted thread.
The original return address must also be stored. The new location supplied would jump to a handler which could examine and update the stored return address based on information stored in the exception queue attached to the executing task. This action is probably best handled in the context of the tasking layer, meaning that the tasking layer registers a handler with the threading layer. The task layer's exception notification handler then performs the actions necessary.


Shell Execution of Programs. Process Groups, Session and Signals 1

Shell Execution of Programs. Process Groups, Session and Signals 1 Shell Execution of Programs Process Groups, Session and Signals 1 Signal Concepts Signals are a way for a process to be notified of asynchronous events (software interrupts). Some examples: a timer you

More information

Top-Level View of Computer Organization

Top-Level View of Computer Organization Top-Level View of Computer Organization Bởi: Hoang Lan Nguyen Computer Component Contemporary computer designs are based on concepts developed by John von Neumann at the Institute for Advanced Studies

More information

Program Correctness and Efficiency. Chapter 2

Program Correctness and Efficiency. Chapter 2 Program Correctness and Efficiency Chapter 2 Chapter Objectives To understand the differences between the three categories of program errors To understand the effect of an uncaught exception and why you

More information

Lecture Notes on Advanced Garbage Collection

Lecture Notes on Advanced Garbage Collection Lecture Notes on Advanced Garbage Collection 15-411: Compiler Design André Platzer Lecture 21 November 4, 2010 1 Introduction More information on garbage collection can be found in [App98, Ch 13.5-13.7]

More information

6.001 Notes: Section 8.1

6.001 Notes: Section 8.1 6.001 Notes: Section 8.1 Slide 8.1.1 In this lecture we are going to introduce a new data type, specifically to deal with symbols. This may sound a bit odd, but if you step back, you may realize that everything

More information

Interrupts and Time. Real-Time Systems, Lecture 5. Martina Maggio 28 January Lund University, Department of Automatic Control

Interrupts and Time. Real-Time Systems, Lecture 5. Martina Maggio 28 January Lund University, Department of Automatic Control Interrupts and Time Real-Time Systems, Lecture 5 Martina Maggio 28 January 2016 Lund University, Department of Automatic Control Content [Real-Time Control System: Chapter 5] 1. Interrupts 2. Clock Interrupts

More information

1 Process Coordination

1 Process Coordination COMP 730 (242) Class Notes Section 5: Process Coordination 1 Process Coordination Process coordination consists of synchronization and mutual exclusion, which were discussed earlier. We will now study

More information

OPERATING SYSTEM SUPPORT (Part 1)

OPERATING SYSTEM SUPPORT (Part 1) Eastern Mediterranean University School of Computing and Technology ITEC255 Computer Organization & Architecture OPERATING SYSTEM SUPPORT (Part 1) Introduction The operating system (OS) is the software

More information

Interrupts and Time. Interrupts. Content. Real-Time Systems, Lecture 5. External Communication. Interrupts. Interrupts

Interrupts and Time. Interrupts. Content. Real-Time Systems, Lecture 5. External Communication. Interrupts. Interrupts Content Interrupts and Time Real-Time Systems, Lecture 5 [Real-Time Control System: Chapter 5] 1. Interrupts 2. Clock Interrupts Martina Maggio 25 January 2017 Lund University, Department of Automatic

More information

PROCESS CONTROL BLOCK TWO-STATE MODEL (CONT D)

PROCESS CONTROL BLOCK TWO-STATE MODEL (CONT D) MANAGEMENT OF APPLICATION EXECUTION PROCESS CONTROL BLOCK Resources (processor, I/O devices, etc.) are made available to multiple applications The processor in particular is switched among multiple applications

More information

Java Threads. Written by John Bell for CS 342, Spring 2018

Java Threads. Written by John Bell for CS 342, Spring 2018 Java Threads Written by John Bell for CS 342, Spring 2018 Based on chapter 9 of Learning Java, Fourth Edition by Niemeyer and Leuck, and other sources. Processes A process is an instance of a running program.

More information

Hardware versus software

Hardware versus software Logic 1 Hardware versus software 2 In hardware such as chip design or architecture, designs are usually proven to be correct using proof tools In software, a program is very rarely proved correct Why?

More information

The UCSC Java Nanokernel Version 0.2 API

The UCSC Java Nanokernel Version 0.2 API The UCSC Java Nanokernel Version 0.2 API UCSC-CRL-96-28 Bruce R. Montague y Computer Science Department University of California, Santa Cruz brucem@cse.ucsc.edu 9 December 1996 Abstract The Application

More information

Ausgewählte Betriebssysteme - Mark Russinovich & David Solomon (used with permission of authors)

Ausgewählte Betriebssysteme - Mark Russinovich & David Solomon (used with permission of authors) Outline Windows 2000 - The I/O Structure Ausgewählte Betriebssysteme Institut Betriebssysteme Fakultät Informatik Components of I/O System Plug n Play Management Power Management I/O Data Structures File

More information

Process Coordination and Shared Data

Process Coordination and Shared Data Process Coordination and Shared Data Lecture 19 In These Notes... Sharing data safely When multiple threads/processes interact in a system, new species of bugs arise 1. Compiler tries to save time by not

More information

Introduction to OS Synchronization MOS 2.3

Introduction to OS Synchronization MOS 2.3 Introduction to OS Synchronization MOS 2.3 Mahmoud El-Gayyar elgayyar@ci.suez.edu.eg Mahmoud El-Gayyar / Introduction to OS 1 Challenge How can we help processes synchronize with each other? E.g., how

More information

Introduction to Concurrent Software Systems. CSCI 5828: Foundations of Software Engineering Lecture 08 09/17/2015

Introduction to Concurrent Software Systems. CSCI 5828: Foundations of Software Engineering Lecture 08 09/17/2015 Introduction to Concurrent Software Systems CSCI 5828: Foundations of Software Engineering Lecture 08 09/17/2015 1 Goals Present an overview of concurrency in software systems Review the benefits and challenges

More information

Pace University. Fundamental Concepts of CS121 1

Pace University. Fundamental Concepts of CS121 1 Pace University Fundamental Concepts of CS121 1 Dr. Lixin Tao http://csis.pace.edu/~lixin Computer Science Department Pace University October 12, 2005 This document complements my tutorial Introduction

More information

Department of Electrical Engineering and Computer Science MASSACHUSETTS INSTITUTE OF TECHNOLOGY Fall 2008.

Department of Electrical Engineering and Computer Science MASSACHUSETTS INSTITUTE OF TECHNOLOGY Fall 2008. Department of Electrical Engineering and Computer Science MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.828 Fall 2008 Quiz II Solutions 1 I File System Consistency Ben is writing software that stores data in

More information

CS 167 Final Exam Solutions

CS 167 Final Exam Solutions CS 167 Final Exam Solutions Spring 2016 Do all questions. 1. The implementation given of thread_switch in class is as follows: void thread_switch() { thread_t NextThread, OldCurrent; } NextThread = dequeue(runqueue);

More information

Lecture 3: Concurrency & Tasking

Lecture 3: Concurrency & Tasking Lecture 3: Concurrency & Tasking 1 Real time systems interact asynchronously with external entities and must cope with multiple threads of control and react to events - the executing programs need to share

More information

Lexical Considerations

Lexical Considerations Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.035, Fall 2005 Handout 6 Decaf Language Wednesday, September 7 The project for the course is to write a

More information

Task: a unit of parallel work in a Chapel program all Chapel parallelism is implemented using tasks

Task: a unit of parallel work in a Chapel program all Chapel parallelism is implemented using tasks Task: a unit of parallel work in a Chapel program all Chapel parallelism is implemented using tasks Thread: a system-level concept for executing tasks not exposed in the language sometimes exposed in the

More information