Parallelizing Compilers for Object Oriented Languages. M. Sudholt


Parallelizing Compilers for Object Oriented Languages. M. Sudholt. June 10, 1992; Universität Koblenz-Landau


Contents

Part I: Theory
  Introduction
    Organization and contents
  Foundations
    Parallelizing compilers
      Parallelization for distributed memory architectures
      Analyzing data and control flow of source code
    Object oriented programming
      Object oriented concepts
      A C++ subset
      Miscellaneous
    Parallelizing compilers for OOL
      How parallelism can be handled and expressed in OOL
      The underlying parallel programming environment
  Parallelization approach
    The Global Dependency Graph
    Mapping of global objects
    Mapping of array variables and dynamic data structures
    Data Flow Analysis
      Intraprocedural analysis
      Alias computation in the presence of pointers
      Interprocedural analysis
      Cooper's and Kennedy's algorithm
      Problems with object oriented languages
      A revised interprocedural data flow algorithm
    The fronting transformation
    Loop Parallelization
      A loop parallelization theory
      Distributed data structures
  A Glossary

  B Graph theoretic notions
  C Specification of the C++ subset used
    C.1 Lex-specification
    C.2 Yacc-specification
  D Example

Part II: Implementation
  E The parser
    E.1 Description
    E.2 Listing
  F Auxiliary modules
    F.1 Module IDENT
      F.1.1 The interface ident.h
      F.1.2 The implementation ident.c
    F.2 Module GDG
      F.2.1 The interface gdg.h
      F.2.2 The implementation gdg.c
    F.3 Module IG
      F.3.1 The interface ig.h
      F.3.2 The implementation ig.c
  G Eidesstattliche Versicherung (statutory declaration)

List of Figures

1.1 Different types of loop dependencies
1.2 Example for control dependence
1.3 Structure of loops within a data flow graph
1.4 Data flow graphs of example loops
    Lattice defining regular sections
    An "object": internal state and public interface
    Example illustrating the use of virtual functions
    Substituting switch-statements using virtual functions
    Partitioning of object oriented programs with the aid of the GDG
    Structure of application xppuc
    Part of the GDG of program xppuc
    Algorithm to compute pointer aliases for function calls
    Original algorithm to solve the formal reference parameter problem
    Virtual functions may cause O(e^n) behaviour
    General algorithm to solve the virtual function problem
    Example binding-multigraph
    Solving the formal reference parameter problem
    Global side effect algorithm for object oriented languages
    Example call graph for algorithm
    Example illustrating the fronting transformation
    The fronting transformation
    Distributed data type RegSecArray


List of Tables

2.1 Syntax directed definition of intraprocedural data flow
    Aliasing by pointer assignments
    Definitions for the interprocedural side effect problem
    Example sets for definitions
    State of algorithm 2.10 at line (14) for the example
    State of algorithm 2.10 at line (25) for the example
    State of algorithm 2.10 at line (35) for the example
    Structure of the communication and computation processes


Part I: Theory


Introduction

This master's thesis deals with two subjects which are very popular at the moment: parallel computers and object oriented programming. As conventional von Neumann computers reach the theoretical limits of technology regarding speed, packing density and performance, parallel computers are becoming more and more attractive. Their attraction is mainly founded on the fact that, if the application is suitable, speed limits can be overcome without any theoretical bound simply by using more processing elements to solve a problem. Parallel computers can be divided roughly into shared and distributed memory computers. Whereas the former have access to a global address space which can be used to store global variables directly, in the latter all memory is distributed among the processors. The value held at an address in the address space of one processor can therefore not be changed by another processor directly, i.e. without interprocessor communication. This thesis concentrates on parallelizing sequential languages for distributed memory architectures. Today there are three main approaches used to program parallel computers:

1. The first approach is to use a language especially defined for parallel programming (e.g. OCCAM). Here the programmer has full control over (and the burden of controlling) parallelism and can normally make use of constructs offering access to various hardware idioms to tune efficiency. Alas, these languages are normally not widely known and require considerable knowledge about the underlying hardware platform.

2. Another approach is to use a sequential language enhanced with a number of constructs which can be used to express parallelism (e.g. ParC, C with parallel libraries, Concurrent Pascal). This approach has the advantage that programmers can rely on most of their existing knowledge and have to learn only a few new statements.
The means for expressing parallelism offered by enhanced sequential languages are more or less equal to those offered by parallel programming languages, but the effort spent in acquiring the knowledge necessary to develop applications is much smaller, since only a few new constructs have to be learnt.

3. The third approach consists of writing programs in a sequential language and using an automatic parallelizing compiler (terms set in bold face are explained shortly in the glossary, appendix A) to extract as much parallelism as possible from the sequential program. Here the programmer can rely entirely on the firm ground of the semantics of sequential programming languages, has (ideally) nothing to learn about the underlying hardware, and implicitly makes use of the most sophisticated parallel algorithms.

While the first two approaches have given rise to an abundance of parallel programming languages and language dialects, the latter has not been given very much attention until

recently. This has been due mainly to two reasons: On the one hand, the abundance of parallel hardware architectures constructed had to be programmed without any delay. Since the implementation of compilers especially tailored to these architectures, or the extension of well known sequential languages, took less time, the third approach was neglected. On the other hand, many of the theoretical and practical issues in the field of automatic parallelizing compilers were not well understood. Now, as parallel computer technology has matured, severe limitations of the first two approaches come to light:

As the programmer bears the whole responsibility of controlling parallelism, parallelism "adds another dimension of complexity to the software development process". The programmer has to structure his program not only according to its logical structure 2, but also according to its parallel execution. This does not only encompass determining which parts of the program can run in parallel; synchronization issues are also involved. Since there is no (simple) theory modelling timing issues such as synchronization in parallel systems, humans are not capable of controlling them completely. Consequently there are imminent dangers such as deadlock and starvation, which cannot be handled succinctly by human beings in all cases.

The structure of parallel algorithms depends heavily on the underlying hardware architecture. An algorithm which performs very well on a shared memory architecture can perform very badly on a distributed memory architecture. A programmer therefore has to know how to choose the right algorithmic structure as a function of the architecture he uses. Similar problems also arise in the field of parallelizing compilers because of the memory hierarchies used in shared and distributed memory architectures.

New verification, validation and debugging techniques have to be developed which take synchronization issues into account.
This point causes enormous problems. These problems have led to a software crisis in the field of parallel computing, which is much more severe than in sequential programming. None of the problems stated above remains valid in the case of automatic parallelizing compilers: Firstly, since parallelism is entirely hidden from the programmer, he does not have to bother about synchronization issues, parallel algorithms and hardware parallelism. Secondly, verification and validation of programs executed in parallel can be done with regard to their sequential originals. Thirdly, sequential debuggers and debugging techniques can also be used to examine parallel execution. The second subject this thesis deals with is object oriented programming. Object oriented languages can informally 3 be described as languages which essentially base
2 This is commonly done by using modules, functions and blocks.
3 A more formal description can be found in section 1.

upon the notion of objects encapsulating an internal state, accessible through public methods, and on object relationships such as inheritance. The notion of object oriented languages as it is employed here cannot be classified under any of the three prevailing programming language paradigms: imperative, functional and logical languages. Instead it is possible to enhance each paradigm by object oriented features, as is proven by the numerous languages which pursue just this direction. Object oriented behaviour is therefore mostly orthogonal to the conventional programming language paradigms. Programming is bound up tightly with these paradigms. As the paradigms entail different forms of explicit and inherent parallelism, parallelizing compilers have to be adapted more or less to one paradigm or another. This thesis investigates several open problems regarding parallelizing compilers for imperative languages. It is widely stated that object oriented programming is also orthogonal to parallel programming and concurrency aspects. This is certainly comprehensible if object oriented features are investigated apart from a particular language paradigm. Nevertheless, once this factor is fixed, there are various issues of object oriented programming which influence parallel programming in general, and the design of parallelizing compilers in particular, in an important manner. The interdependence between parallel programming using a parallelizing compiler and object oriented imperative programming languages is the main subject of this thesis.

Organization and contents of this thesis

This thesis is organized as follows: Chapter 1 states the foundations of parallelizing compiler theory and object oriented programming. Basic notions used in the thesis are defined there; related work is also to be found there. Chapter 2 describes the new ideas which have been developed in this thesis.
Finally, several appendices have been added, where the source code of the implemented parts, a glossary and an index can be found. Several graph theoretic notions which are used throughout this thesis are also defined in a separate appendix. Although this thesis attempts to examine closely many of the issues involved in the parallelization of object oriented languages, some of them have been given less attention and others have been left out entirely. In chapter 1 (almost) all problems will be addressed briefly, but chapter 2 will only investigate some of them. Finally, some remarks about typesetting conventions may be helpful: Terms set in bold face are also explained briefly in the glossary. Terms and phrases are emphasized by setting them in italics. Unless otherwise noted, typewriter font marks passages representing source code.


Chapter 1

Foundations

1.1 Parallelizing compilers

Parallelizing compilers represent an active and interesting research area due to a variety of problems which are specific to this branch of compiler construction. Shared memory architectures, especially vector computers, were the first target architectures considered in parallelizing compiler technology. This section first presents the main problems regarding parallelizing compilation for distributed memory architectures and subsequently the techniques which are predominantly used to analyze source code in order to expose potential parallelism.

Parallelization for distributed memory architectures

Parallelization of source code for distributed memory architectures is harder than for shared memory architectures. The main difficulties certainly arise from the need to map a global name space to distributed memory. A parallelizing compiler for distributed memory architectures therefore has to perform domain decomposition, i.e. data and the corresponding computations have to be distributed among the processors. There are, nevertheless, other differences between the two models:

The cost of referencing a shared variable depends on the processor where it resides, the interconnection topology, the communication mechanisms used, etc. This directly leads to a mapping problem: where do variables and functions have to be stored in the multiprocessor in order to minimize communication overhead?

Concurrent references to a shared variable cannot only cause memory collisions, but may also result in network overload. The collisions could even cause the complete network to be shut down for recovery.

The most recent update of a shared variable is not provided at run time by dedicated hardware, but has to be controlled explicitly in a distributed memory system. This

problem demands explicit synchronization while accessing the only copy of a global variable, or the management of multiple copies in the processor network. Domain decomposition and the shared variable update problem give rise to some restrictions which can be elucidated by the following example:

for (i = 1; i <= N; i++)
    l[i] = r[s[i]];

The values of s[i] cannot be known at compile time, but on shared memory architectures this loop can be parallelized independently of the values s[i], since all these values are accessible in shared memory. On a distributed memory architecture a careless approach to parallelization can cause heavy network overload. This loop could not even be compiled without run time support if array r[] were distributed among several processors. Similar cases are studied in more detail in [12]. This difficulty influences the design of a parallelizing compiler in one way or another. Ruhl et al. have chosen not to restrict the distribution of array elements between processors in their parallelizing compiler Oxygen (cf. [12]). Consequently, they have to check data dependencies at run time. This issue is further elaborated in section 2.2 (pp. 38 ff.).

Loop parallelization

One of the restricted areas where all the problems discussed above are present is the parallelization of loops. Parallelizing loops constitutes one of the major concerns in parallelizing compiler technology for several reasons:

Parallelizing loops is conceptually simple. Even without considering the statements in the loop, the loop itself can be parallelized by executing different loop iterations in parallel. If data dependencies exist between different iterations, this strategy has to be modified or the sequential ordering imposed by the data dependencies has to be enforced by synchronization.

Parallelizing loops can be most effective, since a perfect distribution of loop iterations guarantees a speedup linear in the number of processors.
On shared memory architectures there are no other issues which are similarly crucial to parallelization. On distributed memory architectures the generation of efficient communication statements is at least as important as loop parallelization.

There are two major problems in parallelizing sequential loops:

1. First, all data dependencies carried in the loop have to be determined. Following Wang in [25], three types of loops are distinguished according to whether dependencies are carried within the loop or not:

Definition 1.1 (DOSER-loop) A DOSER-loop (do serial) is characterized by arbitrary data dependencies between loop iterations. All iterations therefore have to be executed sequentially.

Definition 1.2 (DOACR-loop) In a DOACR-loop (do across) there are regular data dependence patterns between neighbouring loop iterations. Consequently, loop iterations can partially overlap and it is possible to parallelize them by separating sets of non-dependent iterations.

An important subclass of DOACR-loops are loops having the following structure: the index set {1, 2, ..., n} representing the corresponding iteration set can be partitioned into subsets

    {1, ..., n_1}, {n_1 + 1, ..., n_2}, ..., {n_x, ..., n},

where the subsets alternately contain dependent and non-dependent loop iterations. The subsets can then be regarded as parts of DOSER- and DOALL-loops. Most DOACR-loops can be handled by partitioning the iteration set into similar subsets, which need not contain consecutive iterations, though. Such a partition is used during loop parallelization in section 2.6 (pp. 75 ff.) to parallelize DOACR-loops.

Definition 1.3 (DOALL-loop) There are no data dependencies between different iterations of a DOALL-loop. All iterations can thus be executed in parallel.

2. After the determination of loop dependencies, the iterations which are to be executed in parallel have to be chosen. This choice can be made immediately in the case of DOSER-loops (no parallel iterations) and DOALL-loops (all iterations can be chosen without any restriction), but DOACR-loops demand a more thorough examination. In principle, subsets of the iteration set which do not interfere with other subsets can be determined and distributed, or interfering loop iterations are also distributed independently and the data dependencies have to be enforced by synchronization. Furthermore, the parallelization of loop iterations depends on the underlying hardware.
The examples shown in figures 1.1.a-c illustrate the three definitions. Figure 1.1.a shows a DOSER-loop. The variable sum is read as well as written in all loop iterations, and the loop cannot be parallelized simply by separating loop iterations. Evidently, this loop could be parallelized by using a tree-like architecture, but this type of parallelization is architecture dependent and not addressed in this paper. In the second example (figure 1.1.b) there is a data dependence between adjacent loop iterations. This DOACR-loop can be parallelized by distributing the arrays a[] and b[] according to the distribution of loop iterations. This could be done by placing the i-th element of a[] and the (i-1)-th element of b[] on the same processor. This procedure is elaborated in section 2.6 (pp. 75 ff.).

for (i=0; i<100; i++)
    sum += a[i];

(a) Example of a DOSER-loop

for (i=0; i<100; i++)
    a[i] = b[i-1];

(b) Example of a DOACR-loop

for (i=0; i<100; i++)
    a[i] = i*i;

(c) Example of a DOALL-loop

Figure 1.1: Different types of loop dependencies

The DOALL-loop of figure 1.1.c does not carry any data dependence in or between loop iterations. It can therefore simply be parallelized by assigning each array element to a different processor and distributing the iterations accordingly. There are five tasks which have to be tackled in the course of loop parallelization:

1. Data distribution: The program's data (in particular arrays and dynamic data structures) has to be distributed among the processors' memories.

2. Iteration distribution: Loop iterations are to be distributed among processors. This is the essential source of parallelism inherent in loops.

3. Communication set generation: To solve this task, all sets of data accesses to non-local objects have to be determined.

4. Communication implementation: The messaging statements 1 which transfer non-local data must be generated using the sets of the previous task.

5. Computation implementation: Finally, the statements implementing the actual computation have to be generated.

Loop parallelization approaches which attempt to solve these tasks exist in abundance. They can be distinguished by the tasks they try to solve and the types of loops (DOALL-, DOACR- or DOSER-loops) to which they are applicable. In [22] a loop parallelization theory is presented which is able to transform certain types of nested DOACR-loops for coarse or fine grained parallelism. In [15] DOACR-loops
1 The term messaging statement is used in this thesis to denote the synchronized message transferring statements (send, receive) available on transputer-like hardware.

are parallelized not by looking for parts which can be treated as parts of DOALL-loops, but by using a special form of synchronization statements. The approach presented in [23] is similar to the one presented in this thesis; it only treats, however, the parallelization of arrays using algebraic methods. Ramanujam and Sadayappan also use algebraic methods to guide the distribution of array elements in [20]. In section 2.6 (pp. 75 ff.) a loop parallelization theory is presented which deals with all DOALL-loops and most DOACR-loops. This theory is greatly influenced by the methods presented in [21].

Analyzing data and control flow of source code

Parallelization of sequential source code largely depends on information about the interference between its various parts. This information can be collected with the aid of data flow and control flow analysis.

Flow analysis basics

Most data flow analysis algorithms operate on different forms of graphs, which are traversed in order to calculate the data flow information according to recursive equations describing a data flow problem. In the following, several graph structures which are used to solve data flow problems in this thesis are defined, along with some remarks on how they are typically used. A graph structure which is often used in analyzing the data flow in sequential programs is the flow graph. It is often defined (see [2]) on the level of intermediate statements generated after parsing and semantic checking. It is, nevertheless, useful at the source statement level, as is shown in section (pp. 51 ff.), where it is used to compute pointer aliases. The notion of basic blocks defines a basic unit on the source code level.

Definition 1.4 (basic block) A basic block is a set of consecutive statements which control flow enters at the beginning and leaves at the end, i.e. there is no branching statement (a conditional or goto) except at the end of a basic block.
Note that a function call is an elementary statement, which can occur anywhere in a basic block. The flow graph models how control can flow between the basic blocks of a program.

Definition 1.5 (flow graph) A flow graph is a graph (V_f, E_f), where V_f is the set of all basic blocks and there is an edge (v_1, v_2) ∈ E_f iff 2 the control flow can enter basic block v_2 immediately after leaving v_1.
2 The abbreviation "iff" stands for "if and only if".

Conditionals and loops are analyzed within basic blocks and lead to several edges emerging at the end of a basic block (representing the condition of a conditional or iterating construct). The following steps have to be performed in order to solve a given data flow problem using the flow graph:

1. First, the effect of basic statements on the data flow problem has to be determined.

2. Second, recursive equations (recursive in the structure of the flow graph) have to be determined which describe the data flow problem.

3. Third, an iterative algorithm (iterating over the flow graph) can be used to solve the recursive equations.

Note that since function calls are simple statements which can occur in basic blocks, the data flow information also has to be determined for function calls in step 1. As a safe assumption it can be assumed that function calls alter the data flow information as in the worst case. This means that if, for example, the variables live on exit from a basic block are computed, it can be assumed that all variables are live just after a function call. Unfortunately this worst case assumption prevents much parallelism from being extracted if the data flow information is used to find statements which can safely be parallelized. Function calls therefore have to be analyzed more closely, which requires that the data flow information is computed for all statements except function calls and later propagated between all functions. In step 1 it is therefore necessary to iterate over the call graph in order to compute the (more exact) data flow information for function calls. In step 3 this information can be used while traversing the flow graph to compute the data flow information for all basic blocks. In [2] this procedure is exemplified for a variety of data flow problems: reaching definitions, live variable analysis and common subexpressions. In order to compute the data flow information for function calls, the call graph has to be examined.
It is this graph which has to be traversed in order to compute the more exact data flow information in the presence of function calls.

Definition 1.6 (call graph) The call graph is a graph (V_c, E_c), where V_c is the set of all procedures and there is an edge (p → q) ∈ E_c iff procedure p calls procedure q.

The call graph is used during the computation of aliases in section (pp. 51 ff.) and when solving for exact sets of modified variables in section (pp. 61 ff.). Besides data flow analysis, control flow dependencies have to be observed for the safe and efficient parallelization of sequential programs. The flow graph represents control flow dependencies in an unwieldy way. Another means to represent dependencies in the flow of control is the control dependence graph.

Definition 1.7 (control flow graph) A control flow graph is a directed graph G augmented with one unique entry vertex 3 START and one unique exit vertex STOP such that each vertex in the graph has at most two successors. It is assumed that all vertices with two successors have attributes "T" (true) and "F" (false) associated with their outgoing edges. It is further assumed that each node n in G can be reached from START and that each node can reach STOP.

The control flow graph can be used to represent the execution of statement sequences, but does not depend on a structuring into basic blocks as the flow graph does. To derive a definition of control dependence which is more useful for parallelization, the strict sequential ordering of all statements has to be abandoned. Post domination is used to derive a less strict definition.

Definition 1.8 (post domination) A vertex v is post dominated by a vertex w (v, w ∈ G) iff every directed path from v to STOP (not including v) contains w.

Figure 1.2 is used to illustrate this and the following definitions. Figure 1.2.a shows a sequential example program, figure 1.2.b the corresponding control dependence graph. The second statement (the for-loop) is represented in the control flow graph by the header representing the condition to be fulfilled. In the control flow graph there are two edges leaving this header: one corresponding to the jump out of the loop if the condition is not true, and a second to the first statement of the loop (statement 3). Since we do not have to pass via statement 3 (if the edge jumping out of the loop is taken), statement 2 is not post dominated by statement 3. Statement 3, however, is post dominated by statement 4, because there is no path from the node corresponding to statement 3 to STOP without passing via statement 4. The notion of "control dependence" specifies a condition which is more useful in the field of parallelization.

Definition 1.9 (control dependence) Let G be a control flow graph.
A vertex w is control dependent on a vertex v (v, w ∈ G) iff

1. there exists a directed path P from vertex v to vertex w such that every z ∈ P (z ≠ v, z ≠ w) is post dominated by w, and

2. v is not post dominated by w.

An important property is established in the following theorem, which is proven in [3]:

Theorem 1.1 If w is control dependent on v, then v must have two leaving edges, one which always results in executing w and a second which can result in not executing w.

Finally the control dependence graph can be defined.
3 The term node is used equivalently to the term vertex.

proc1();                    // 1
for (i=1; i<n; i++) {       // 2
    a[i] = 5;               // 3
    if (cond)               // 4
        proc2();            // 5
    else {                  // 6
        c = a[i]*a[i-1];    // 7
        g = c + 3;          // 8
    }
}
proc3();                    // 9

(a) A sequential program
(b) Its control flow and control dependence graph [the graph drawings are not reproducible in this transcription]

Figure 1.2: Example for control dependence

Definition 1.10 (control dependence graph) The control dependence graph is a graph (V_cd, E_cd), where V_cd is the set of vertices in the abstract syntax tree and there is an edge (v_1 → v_2) ∈ E_cd iff v_2 is control dependent on v_1.

This graph has the important property that conditional statements and loops can be identified by entry nodes which dominate all other nodes representing statements nested in the conditional or loop. Statement sequences, however, appear as independent nodes on the same nesting level in the control dependence graph. This property, which follows from theorem 1.1, is exploited to define the fronting transformation in section 2.5 (pp. 73 ff.) and to implement it efficiently. Consider the example control dependence graph in figure 1.2. It can be seen that all statements of a loop body or a branch are control dependent on the nodes representing the loop header or the branch condition. The statements of the loop body or branch forming a statement sequence are not control dependent on each other, because there cannot be a path from a statement (say s) of the statement sequence to STOP which does not pass through some statement of the statement sequence following s. All statements forming a statement sequence can therefore be identified easily, which is used to define the fronting transformation in section 2.5 (pp. 73 ff.).

Another important distinction divides all data flow problems into flow sensitive and flow insensitive problems, which are defined by

Definition 1.11 (flow (in)sensitive problems) A data flow problem is called flow sensitive if it uses information about the internal control flow of called functions to determine the data flow information. If it does not use this type of information, it is called flow insensitive.
There are a great deal more flow insensitive data flow problems (whose output is also termed may summary information) than flow sensitive problems (must summary information), because data flow analysis algorithms cannot always yield exact solutions to data flow problems; this situation would be aggravated further if flow sensitive information had to be computed.

Using flow information in parallelizing compilers

As stated above, the output of data and control flow analysis algorithms consists of information about where access to memory locations has to be sequentialized in order to meet the semantics of the sequential program. Two of the most important types of information gathered in the data flow analysis phase are the MOD- and USE-sets for a given statement s: USE(s) is the set of all variables x whose value is used on some execution path of statement s, and MOD(s) is the set of all variables modified somewhere in s. A formal definition of these sets can be given based on the structure of the statements considered. An example of how this is done can be found in section (pp. 48 ff.).

After determination of the MOD- and USE-sets, this information can be used together with the Bernstein conditions to determine pairs of statements which can be executed in parallel. If equation 1.1 (which is a reformulation of the Bernstein conditions along the lines of [11]) is satisfied, statements s_1 and s_2 can safely be executed in parallel, because neither changes a memory location which is read or written by the other:

    [MOD(s_1) ∩ (USE(s_2) ∪ MOD(s_2))] ∪ [MOD(s_2) ∩ (USE(s_1) ∪ MOD(s_1))] = ∅    (1.1)

This test can be extended to test whether a compound statement (i.e. a statement sequence [s_1, ..., s_n]) can be parallelized. To do this, the MOD- and USE-sets are extended to statement sequences (the corresponding sets are called MOD_n and USE_n):

    USE_n([s_1, ..., s_n]) = ∪_{i=1..n} USE(s_i)
    MOD_n([s_1, ..., s_n]) = ∪_{i=1..n} MOD(s_i)

Interference sets I([s_1, ..., s_n], s_j) are used to record where statement s_j interferes with the statement sequence [s_1, ..., s_n]:

    I([s_1, ..., s_n], s_j) = [MOD_n([s_1, ..., s_n]) ∩ (USE(s_j) ∪ MOD(s_j))]
                              ∪ [MOD(s_j) ∩ (USE_n([s_1, ..., s_n]) ∪ MOD_n([s_1, ..., s_n]))]

Finally, equation 1.2 specifies the condition under which a statement s_n can be executed in parallel with the statement sequence [s_1, ..., s_{n-1}]:

    ∪_{i=1..n-1} I([s_1, ..., s_i], s_{i+1}) = ∅    (1.2)

This equation has the important property that a (length-) maximal parallelizable statement sequence can be built incrementally. First, statements s_1 and s_2 are tested for interference using equation 1.1. Then statement s_3 is tested as to whether it can be executed in parallel with the compound statement [s_1, s_2] (by equation 1.2), and so forth. In [11] this interference test is further extended to test for interferences between different compound statements. Unfortunately all these tests are not appropriate to the parallel model presented in this thesis. In section 2.5 (pp. 73 ff.)
the insuciencies of this approach are marked and solved by the introduction of the fronting transformation Data ow information is also very useful in classifying loops according to the distinction made in section In general, loops can be identied within a data ow analysis graph by a structure shown in gure DOALL-, DOACR- and DOSER-loops 5 This statement is only valid if the underlying graph is reducible which is always the case if no goto and

25 1.1. PARALLELIZING COMPILERS 25 ' loop header loop body &??? $ % Figure 1.3: Structure of loops within a data ow graph? if(...) found=true;??? ;, ffoundg if(...) break; prev=help; -? fhelpg,fprevg? sum+=help->info; fsumg, fsumg? Figure 1.4: Data ow graphs of example loops All ow information which reaches the loop has to pass the header. In particular, information which is altered within the loop does ow back to the loop header. DOALL-, DOACR- and DOSER-loops can therefore be identied by the information which is propagated back from the loop body to the loop header. The following examples whose local data ow graphs are shown in gure 1.4 illustrate this: for(help = first; help!= NULL; help=help->next) if (help->info == info) found = true; The rst example shows a loop which searches a linear list for a given element info. There are no modied variables used before their redenition in the loop. This loop is therefore a DOALL-loop. for(help = first; help!= NULL; help=help->next) { break statements are used. These statements are not incorporated in the C++ subset used in this thesis (described in section (pp. 31.)). An exact denition of reducibility can be found in [2].

26 26 CHAPTER 1. FOUNDATIONS } if (help->info == info) break; prev = help; The code of the second example also searches a linear list sequentially, but calculates, moreover, a pointer to the element which precedes the element found. This solution is necessary for deletion of list elements. Here the value of prev has to be carried along with the jump out of the loop and back to the header. Since the data ow graph shows that the value of prev can reach the loop header, this loop is not a DOALL-loop and cannot simply be parallelized by distributing the loop iterations. for(help = first; help!= NULL; help=help->next) sum += help->info; The third example 6 is a variation of the summation loop already discussed. Here the data ow graph indicates that this loop is a DOSER-loop, since the value sum computed in the loop body ows back to that statement and is used there (in the next iteration). Another problem with loop parallelization is that data ow information which distinguishes DOALL-, DOACR- and DOSER-loops as described up to now is to coarse for an ecient parallelization of loops, because of the two problems illustrated by the following example: for (i=0; i<n; i++) { a[i] = i*i; insert(a, i); } First, conventional data ow analysis treats arrays as indivisible (atomic) objects. This means that the example loop modies array a[] as a whole and therefore cannot be parallelized, because all loop iterations modify this array. For an ecient parallelization it is necessary to examine array accesses more closely in order to make more precise statements about the subarrays which are accessed. The same argumentation applies to dynamic data structures. This problem is addressed in this thesis by developing a loop parallelization theory (cf. section 2.6 (pp. 75.)) which tries to resolve array accesses according to distributed loop iterations. 
Second, the call of function insert() demands for a thorough interprocedural data ow analysis, since otherwise we have to assume that all array elements may be modied in function insert(). This problem requires a powerful interprocedural data ow analysis algorithm for object oriented languages which is presented in section (pp. 61.). 6 Note that the C++ statement sum += help->info; is equivalent to sum = sum + help->info.

Regular sections

The first data flow analysis algorithms treated arrays and dynamic data structures as entities which were not further subdivided. Today parallelizing compiler technology has advanced so far that this restriction leads to results which are too coarse for an efficient parallelization. It is therefore necessary to model subarrays and parts of dynamic data structures. While there are few approaches dealing with parts of dynamic data structures (see [11], for example), there are several models dealing with subarrays: Triolet regions, linearization, atom images⁷ and regular sections, to mention some of these. Regular sections are discussed here because the data flow analysis algorithm presented in section (pp. 61.) is shown to be extendable to use regular sections, and in section (pp. 82.) regular sections are used for the modelling of distributed data structures.

Regular sections (described in [6] and [18]) can model very general subregions of arrays, including dense regions like blocks, rows and columns, and sparse regions like grids. These are the subregions most often used in existing parallelizing compilers. The lattice shown in figure 1.5 defines the subregions which can be described by regular sections. A circle is labelled by a triple denoting a range (l : u : s),

    { l + i·s | i ∈ ℕ ∧ l + i·s ≤ u },  where l, u, s ∈ ℕ.

Here parameter s, or parameters u and s, can be omitted. If s is omitted, s = 1 is assumed, and if u and s are lacking, it is assumed that u = l and s = 1. This lattice is used to define regular sections:⁸

Definition 1.12 (regular section) A regular section is a subregion of an array which has an exact representation in the lattice of figure 1.5.

A complete array with indexes ranging from 0 to n−1, for example, can be represented as (0 : n−1) or (0 : n−1 : 1). The second column of the array declared by int array[10][20]; can be represented as (1 : 200 : 20).
1.2 Object oriented programming

As stated in the introduction, the concepts of object oriented programming are not directly bound to imperative, functional or logical programming languages. This section introduces all basic concepts which are largely independent from a programming language paradigm and all relevant concepts which are bound to imperative languages. It also serves to define the subset of C++ which is used to illustrate and implement the methods and algorithms in this thesis.

⁷ The other approaches not used in this thesis are described in [18]. Further literature can also be found there.
⁸ The triple-notation envisaged for the FORTRAN90 standard supports just those subregions of arrays which correspond to regular sections.

Figure 1.5: Lattice defining regular sections

Figure 1.6: An "object": internal state and public interface

Object oriented concepts

In the introduction an object oriented language was informally characterized by the notions of "object" and "object relationships". An object is defined in this thesis by

Definition 1.13 (object) An object is a data object (i.e. covering an amount of memory space) which is composed of two parts:
1. The internal state: a set of other data objects.
2. The public interface: a set of procedures or functions which are used to access its internal state. Elements of the public interface are called methods.
No procedure or function other than those which are part of the object's public interface can access the object's internal state.

The last property, which restricts access to the object's internal state, is the fundamental property with respect to the encapsulation of data types. The notion of an object is illustrated in figure 1.6.

In most object oriented languages objects are defined by using the concept of classes. A class serves as a pattern from which objects having this class type are instantiated. An important property of classes is that they are often organized in class hierarchies by a mechanism called inheritance, which is defined as follows:

Definition 1.14 (inheritance, base class, derived class) The relation

    B ⊆ C × C,  (c_1, c_2) ∈ B ⇔ c_1 appears in the inherit-list of c_2,

where C is the set of all classes and the inherit-list specifies from which classes a class inherits state and methods as defined below in this definition,⁹ is called the "base-class relation".

⁹ In the case of the subset of C++ defined below, inherit-list is a simple list of identifiers as defined by the following EBNF-statement (which was drawn from appendix C (pp. 93)):

    inherit-list = [ id ] { "," id }

c_1 is called a base class of c_2 iff (c_1, c_2) ∈ B. c_1 is called a derived class of c_2 iff (c_2, c_1) ∈ B.

These definitions specify a syntactic condition. The semantics associated with the relation B is that a derived class d of a base class b inherits the internal state¹⁰ (the set of variables, but not the values of these variables) and the public interface from b, i.e. it includes a portion of memory space which is structurally identical to the internal state of b and responds to the methods of b the same way b does (if the methods have not been redefined). The whole mechanism, state inclusion and method reuse, is termed inheritance.

Since inheritance is essentially defined by the relation B, it can be represented by a graph structure:

Definition 1.15 (inheritance graph) The inheritance graph is an acyclic graph (V_i, E_i) where V_i is the set of all classes and there is an edge (b, d) ∈ E_i iff d is derived from b.

According to whether the inheritance graph has the simpler structure of a tree or not, different forms of inheritance are distinguished:

Definition 1.16 (single, multiple inheritance) Inheritance is called single iff the corresponding inheritance graph is a tree. Otherwise it is said to be multiple inheritance.

The restriction to single inheritance entails that at most one class is a member of the inherit-list, that is, each class is derived from at most one base class. In this case the corresponding graph is termed an inheritance tree. There are some good reasons to restrict the inheritance mechanism used in a programming language to single inheritance. Because of the simpler structure of the inheritance tree, analysis algorithms operating on it are more efficient than in the general case. Furthermore, there are some problems concerning the semantics of function resolution if two functions with the same name are defined in two independent base classes.
The language subset used in this thesis, however, supports multiple inheritance, since multiple inheritance greatly extends the usefulness of the object and class abstraction. One example of how this is done are the so-called mixin classes. These classes offer a deliberately restricted functionality, but can be used as base classes of almost all other classes in order to incorporate their functionality into the derived class. Based upon the notions defined up to here, the notion of an object oriented language can be defined more formally than in the introduction:

In a C++ class declaration the inherit-list follows the identifier naming a class, as shown in the definition of a class declaration:

    cdecl = /* class declaration */
        "class" identifier [ ":" inherit-list ]
        "{" "private:" { simple-decls }
            "public:" { method-decls-defs }
        "};"

¹⁰ Note that this internal state is private, since according to definition 1.13 only methods of the same object can access it.

Definition 1.17 (object oriented language) An object oriented language is a programming language which is characterized by:
1. Objects are used as a basic structuring means and are organized in classes.
2. (Multiple) inheritance provides a high level structuring means and a basis for code reuse.
3. Late binding enables functions to be bound at run time. In object oriented languages late binding is mainly used to express heterogeneous behaviour (for example in heterogeneous data structures) in a uniform manner.

A C++ subset

As a basis for demonstration and implementation a C++ subset will be defined whose exact syntax definition is given as a lex-yacc-specification in appendix C (pp. 93.). In the following its essential object oriented features and the features differentiating it from the full C++ language (version 2.1 as described for example in [24]) are listed:

Notion of object: This thesis supports a rather "pure" notion of objects. Consequently, the C++ subset used here does not make available all member access declarations of the full C++ language. The syntax of a class declaration has been restricted as follows:¹¹

    cdecl = /* class declaration */
        "class" identifier [ ":" inherit-list ]
        "{" "private:" { simple-decls }
            "public:" { method-decls-defs }
        "};"

Thus, an object (cf. definition 1.13) consists of zero or more simple declarations, which represent its private internal state, followed by zero or more public method declarations or definitions, which represent its interface. Objects in the narrower sense as defined here are referred to in the following as proper objects. Base classes are specified by naming them as part of the inherit-list. Objects and pointers to objects can be instantiated by the following declarations, which are part of all simple declarations (sdecl):

¹¹ EBNF (extended Backus-Naur form) is used for syntax definition and the following conventions are used: "terminals" are "double quoted" and non-terminals are set in italic.
¹² In the C and C++ language definitions a distinction has been made between declarations, which do not reserve memory space for the objects declared, and definitions, which do reserve memory space. In this thesis this distinction is not maintained, for simplicity. The notions declaration, definition and instantiation (in the case of objects) are used equivalently.

Figure 1.7: Example illustrating the use of virtual functions (classes circle and rectangle are derived from class shape)

    sdecl = /* simple declaration */
        identifier identifier ";"
      | identifier "*" identifier ";"

Pointers to objects are used as in C++ in order to implement late binding.

Support for multiple inheritance and late binding: These two issues are treated as in the full C++ language: Multiple inheritance can be achieved by specifying all base classes in the identifier list (inherit-list) in the head of a class declaration. Late binding (also called dynamic binding) is implemented through the virtual function mechanism. These functions, which are identified by the keyword virtual, are bound to calls at run time, while all other functions are bound early, i.e. at compile time. In the C++ subset late binding is coupled with inheritance in the following way: As described above, all virtual functions will be bound at run time, but uniform behaviour within data structures depends on different implementations of the same virtual function in classes derived from a common base class. Figure 1.7 shows a part of the example described in appendix D (pp. 99.) which illustrates this. Here all three classes implement the virtual function GetArea(). The semantics of C++ specify that any pointer to an object of a base class can also point to any class derived from that base class. The virtual function which is invoked at run time is then determined by the class type of the object to which the pointer points. In the example the virtual function call

    shape * ptr;
    ...
    ptr->GetArea();

can therefore invoke any of the three virtual functions.

Types: The type model of C++ has been restricted: the only simple type supported is the integer data type (int). This has been done because the addition of other simple


Metamodeling with Metamodels. Using. UML/MOF including OCL Metamodeling with Metamodels Using UML/MOF including OCL Introducing Metamodels (Wikipedia) A metamodel is a model of a model An instantiation of metamodel gives a model Metamodeling is the process of

More information

LECTURE 18. Control Flow

LECTURE 18. Control Flow LECTURE 18 Control Flow CONTROL FLOW Sequencing: the execution of statements and evaluation of expressions is usually in the order in which they appear in a program text. Selection (or alternation): a

More information

Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES

Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES DESIGN AND ANALYSIS OF ALGORITHMS Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES http://milanvachhani.blogspot.in USE OF LOOPS As we break down algorithm into sub-algorithms, sooner or later we shall

More information

CMa simple C Abstract Machine

CMa simple C Abstract Machine CMa simple C Abstract Machine CMa architecture An abstract machine has set of instructions which can be executed in an abstract hardware. The abstract hardware may be seen as a collection of certain data

More information

Parallel Programming Patterns Overview CS 472 Concurrent & Parallel Programming University of Evansville

Parallel Programming Patterns Overview CS 472 Concurrent & Parallel Programming University of Evansville Parallel Programming Patterns Overview CS 472 Concurrent & Parallel Programming of Evansville Selection of slides from CIS 410/510 Introduction to Parallel Computing Department of Computer and Information

More information

6.001 Notes: Section 8.1

6.001 Notes: Section 8.1 6.001 Notes: Section 8.1 Slide 8.1.1 In this lecture we are going to introduce a new data type, specifically to deal with symbols. This may sound a bit odd, but if you step back, you may realize that everything

More information

Topic 1: What is HoTT and why?

Topic 1: What is HoTT and why? Topic 1: What is HoTT and why? May 5, 2014 Introduction Homotopy type theory (HoTT) is a newly emerging field of mathematics which is currently being developed as a foundation of mathematics which is in

More information

Group B Assignment 9. Code generation using DAG. Title of Assignment: Problem Definition: Code generation using DAG / labeled tree.

Group B Assignment 9. Code generation using DAG. Title of Assignment: Problem Definition: Code generation using DAG / labeled tree. Group B Assignment 9 Att (2) Perm(3) Oral(5) Total(10) Sign Title of Assignment: Code generation using DAG. 9.1.1 Problem Definition: Code generation using DAG / labeled tree. 9.1.2 Perquisite: Lex, Yacc,

More information

The Global Standard for Mobility (GSM) (see, e.g., [6], [4], [5]) yields a

The Global Standard for Mobility (GSM) (see, e.g., [6], [4], [5]) yields a Preprint 0 (2000)?{? 1 Approximation of a direction of N d in bounded coordinates Jean-Christophe Novelli a Gilles Schaeer b Florent Hivert a a Universite Paris 7 { LIAFA 2, place Jussieu - 75251 Paris

More information

Programming Languages Third Edition. Chapter 10 Control II Procedures and Environments

Programming Languages Third Edition. Chapter 10 Control II Procedures and Environments Programming Languages Third Edition Chapter 10 Control II Procedures and Environments Objectives Understand the nature of procedure definition and activation Understand procedure semantics Learn parameter-passing

More information

The SPL Programming Language Reference Manual

The SPL Programming Language Reference Manual The SPL Programming Language Reference Manual Leonidas Fegaras University of Texas at Arlington Arlington, TX 76019 fegaras@cse.uta.edu February 27, 2018 1 Introduction The SPL language is a Small Programming

More information

. The problem: ynamic ata Warehouse esign Ws are dynamic entities that evolve continuously over time. As time passes, new queries need to be answered

. The problem: ynamic ata Warehouse esign Ws are dynamic entities that evolve continuously over time. As time passes, new queries need to be answered ynamic ata Warehouse esign? imitri Theodoratos Timos Sellis epartment of Electrical and Computer Engineering Computer Science ivision National Technical University of Athens Zographou 57 73, Athens, Greece

More information

CS 270 Algorithms. Oliver Kullmann. Binary search. Lists. Background: Pointers. Trees. Implementing rooted trees. Tutorial

CS 270 Algorithms. Oliver Kullmann. Binary search. Lists. Background: Pointers. Trees. Implementing rooted trees. Tutorial Week 7 General remarks Arrays, lists, pointers and 1 2 3 We conclude elementary data structures by discussing and implementing arrays, lists, and trees. Background information on pointers is provided (for

More information

III Data Structures. Dynamic sets

III Data Structures. Dynamic sets III Data Structures Elementary Data Structures Hash Tables Binary Search Trees Red-Black Trees Dynamic sets Sets are fundamental to computer science Algorithms may require several different types of operations

More information

UNIT V SYSTEM SOFTWARE TOOLS

UNIT V SYSTEM SOFTWARE TOOLS 5.1 Text editors UNIT V SYSTEM SOFTWARE TOOLS A text editor is a type of program used for editing plain text files. Text editors are often provided with operating systems or software development packages,

More information

Informatica 3 Syntax and Semantics

Informatica 3 Syntax and Semantics Informatica 3 Syntax and Semantics Marcello Restelli 9/15/07 Laurea in Ingegneria Informatica Politecnico di Milano Introduction Introduction to the concepts of syntax and semantics Binding Variables Routines

More information

CS301 - Data Structures Glossary By

CS301 - Data Structures Glossary By CS301 - Data Structures Glossary By Abstract Data Type : A set of data values and associated operations that are precisely specified independent of any particular implementation. Also known as ADT Algorithm

More information

AOSA - Betriebssystemkomponenten und der Aspektmoderatoransatz

AOSA - Betriebssystemkomponenten und der Aspektmoderatoransatz AOSA - Betriebssystemkomponenten und der Aspektmoderatoransatz Results obtained by researchers in the aspect-oriented programming are promoting the aim to export these ideas to whole software development

More information

Weiss Chapter 1 terminology (parenthesized numbers are page numbers)

Weiss Chapter 1 terminology (parenthesized numbers are page numbers) Weiss Chapter 1 terminology (parenthesized numbers are page numbers) assignment operators In Java, used to alter the value of a variable. These operators include =, +=, -=, *=, and /=. (9) autoincrement

More information

2 The Service Provision Problem The formulation given here can also be found in Tomasgard et al. [6]. That paper also details the background of the mo

2 The Service Provision Problem The formulation given here can also be found in Tomasgard et al. [6]. That paper also details the background of the mo Two-Stage Service Provision by Branch and Bound Shane Dye Department ofmanagement University of Canterbury Christchurch, New Zealand s.dye@mang.canterbury.ac.nz Asgeir Tomasgard SINTEF, Trondheim, Norway

More information

Compilation Issues for High Performance Computers: A Comparative. Overview of a General Model and the Unied Model. Brian J.

Compilation Issues for High Performance Computers: A Comparative. Overview of a General Model and the Unied Model. Brian J. Compilation Issues for High Performance Computers: A Comparative Overview of a General Model and the Unied Model Abstract This paper presents a comparison of two models suitable for use in a compiler for

More information

M301: Software Systems & their Development. Unit 4: Inheritance, Composition and Polymorphism

M301: Software Systems & their Development. Unit 4: Inheritance, Composition and Polymorphism Block 1: Introduction to Java Unit 4: Inheritance, Composition and Polymorphism Aims of the unit: Study and use the Java mechanisms that support reuse, in particular, inheritance and composition; Analyze

More information

Combining Analyses, Combining Optimizations - Summary

Combining Analyses, Combining Optimizations - Summary Combining Analyses, Combining Optimizations - Summary 1. INTRODUCTION Cliff Click s thesis Combining Analysis, Combining Optimizations [Click and Cooper 1995] uses a structurally different intermediate

More information

Institut fur Informatik, Universitat Klagenfurt. Institut fur Informatik, Universitat Linz. Institut fur Witschaftsinformatik, Universitat Linz

Institut fur Informatik, Universitat Klagenfurt. Institut fur Informatik, Universitat Linz. Institut fur Witschaftsinformatik, Universitat Linz Coupling and Cohesion in Object-Oriented Systems Johann Eder (1) Gerti Kappel (2) Michael Schre (3) (1) Institut fur Informatik, Universitat Klagenfurt Universitatsstr. 65, A-9020 Klagenfurt, Austria,

More information

9/24/ Hash functions

9/24/ Hash functions 11.3 Hash functions A good hash function satis es (approximately) the assumption of SUH: each key is equally likely to hash to any of the slots, independently of the other keys We typically have no way

More information

Enhancing Integrated Layer Processing using Common Case. Anticipation and Data Dependence Analysis. Extended Abstract

Enhancing Integrated Layer Processing using Common Case. Anticipation and Data Dependence Analysis. Extended Abstract Enhancing Integrated Layer Processing using Common Case Anticipation and Data Dependence Analysis Extended Abstract Philippe Oechslin Computer Networking Lab Swiss Federal Institute of Technology DI-LTI

More information

Conditional Branching is not Necessary for Universal Computation in von Neumann Computers Raul Rojas (University of Halle Department of Mathematics an

Conditional Branching is not Necessary for Universal Computation in von Neumann Computers Raul Rojas (University of Halle Department of Mathematics an Conditional Branching is not Necessary for Universal Computation in von Neumann Computers Raul Rojas (University of Halle Department of Mathematics and Computer Science rojas@informatik.uni-halle.de) Abstract:

More information

Generating Continuation Passing Style Code for the Co-op Language

Generating Continuation Passing Style Code for the Co-op Language Generating Continuation Passing Style Code for the Co-op Language Mark Laarakkers University of Twente Faculty: Computer Science Chair: Software engineering Graduation committee: dr.ing. C.M. Bockisch

More information

Distributed minimum spanning tree problem

Distributed minimum spanning tree problem Distributed minimum spanning tree problem Juho-Kustaa Kangas 24th November 2012 Abstract Given a connected weighted undirected graph, the minimum spanning tree problem asks for a spanning subtree with

More information

A Short Summary of Javali

A Short Summary of Javali A Short Summary of Javali October 15, 2015 1 Introduction Javali is a simple language based on ideas found in languages like C++ or Java. Its purpose is to serve as the source language for a simple compiler

More information

Design Patterns Design patterns advantages:

Design Patterns Design patterns advantages: Design Patterns Designing object-oriented software is hard, and designing reusable object oriented software is even harder. You must find pertinent objects factor them into classes at the right granularity

More information

The S-Expression Design Language (SEDL) James C. Corbett. September 1, Introduction. 2 Origins of SEDL 2. 3 The Language SEDL 2.

The S-Expression Design Language (SEDL) James C. Corbett. September 1, Introduction. 2 Origins of SEDL 2. 3 The Language SEDL 2. The S-Expression Design Language (SEDL) James C. Corbett September 1, 1993 Contents 1 Introduction 1 2 Origins of SEDL 2 3 The Language SEDL 2 3.1 Scopes : : : : : : : : : : : : : : : : : : : : : : : :

More information

Topic IV. Block-structured procedural languages Algol and Pascal. References:

Topic IV. Block-structured procedural languages Algol and Pascal. References: References: Topic IV Block-structured procedural languages Algol and Pascal Chapters 5 and 7, of Concepts in programming languages by J. C. Mitchell. CUP, 2003. Chapters 10( 2) and 11( 1) of Programming

More information

Programming Languages Third Edition

Programming Languages Third Edition Programming Languages Third Edition Chapter 12 Formal Semantics Objectives Become familiar with a sample small language for the purpose of semantic specification Understand operational semantics Understand

More information

INSTITUTE OF AERONAUTICAL ENGINEERING

INSTITUTE OF AERONAUTICAL ENGINEERING INSTITUTE OF AERONAUTICAL ENGINEERING (Autonomous) Dundigal, Hyderabad -500 043 INFORMATION TECHNOLOGY TUTORIAL QUESTION BANK Name : PRINCIPLES OF PROGRAMMING LANGUAGES Code : A40511 Class : II B. Tech

More information

Java Threads and intrinsic locks

Java Threads and intrinsic locks Java Threads and intrinsic locks 1. Java and OOP background fundamentals 1.1. Objects, methods and data One significant advantage of OOP (object oriented programming) is data encapsulation. Each object

More information

6.001 Notes: Section 4.1

6.001 Notes: Section 4.1 6.001 Notes: Section 4.1 Slide 4.1.1 In this lecture, we are going to take a careful look at the kinds of procedures we can build. We will first go back to look very carefully at the substitution model,

More information

Chapter S:II. II. Search Space Representation

Chapter S:II. II. Search Space Representation Chapter S:II II. Search Space Representation Systematic Search Encoding of Problems State-Space Representation Problem-Reduction Representation Choosing a Representation S:II-1 Search Space Representation

More information

Reverse Engineering with a CASE Tool. Bret Johnson. Research advisors: Spencer Rugaber and Rich LeBlanc. October 6, Abstract

Reverse Engineering with a CASE Tool. Bret Johnson. Research advisors: Spencer Rugaber and Rich LeBlanc. October 6, Abstract Reverse Engineering with a CASE Tool Bret Johnson Research advisors: Spencer Rugaber and Rich LeBlanc October 6, 994 Abstract We examine using a CASE tool, Interactive Development Environment's Software

More information

Principles of Programming Languages. Lecture Outline

Principles of Programming Languages. Lecture Outline Principles of Programming Languages CS 492 Lecture 1 Based on Notes by William Albritton 1 Lecture Outline Reasons for studying concepts of programming languages Programming domains Language evaluation

More information

One of the most important areas where quantifier logic is used is formal specification of computer programs.

One of the most important areas where quantifier logic is used is formal specification of computer programs. Section 5.2 Formal specification of computer programs One of the most important areas where quantifier logic is used is formal specification of computer programs. Specification takes place on several levels

More information

Chapter 3:: Names, Scopes, and Bindings (cont.)

Chapter 3:: Names, Scopes, and Bindings (cont.) Chapter 3:: Names, Scopes, and Bindings (cont.) Programming Language Pragmatics Michael L. Scott Review What is a regular expression? What is a context-free grammar? What is BNF? What is a derivation?

More information

Formal Specification and Verification

Formal Specification and Verification Formal Specification and Verification Introduction to Promela Bernhard Beckert Based on a lecture by Wolfgang Ahrendt and Reiner Hähnle at Chalmers University, Göteborg Formal Specification and Verification:

More information

Algebraic Properties of CSP Model Operators? Y.C. Law and J.H.M. Lee. The Chinese University of Hong Kong.

Algebraic Properties of CSP Model Operators? Y.C. Law and J.H.M. Lee. The Chinese University of Hong Kong. Algebraic Properties of CSP Model Operators? Y.C. Law and J.H.M. Lee Department of Computer Science and Engineering The Chinese University of Hong Kong Shatin, N.T., Hong Kong SAR, China fyclaw,jleeg@cse.cuhk.edu.hk

More information

6.001 Notes: Section 6.1

6.001 Notes: Section 6.1 6.001 Notes: Section 6.1 Slide 6.1.1 When we first starting talking about Scheme expressions, you may recall we said that (almost) every Scheme expression had three components, a syntax (legal ways of

More information

1. true / false By a compiler we mean a program that translates to code that will run natively on some machine.

1. true / false By a compiler we mean a program that translates to code that will run natively on some machine. 1. true / false By a compiler we mean a program that translates to code that will run natively on some machine. 2. true / false ML can be compiled. 3. true / false FORTRAN can reasonably be considered

More information

Chapter 3:: Names, Scopes, and Bindings (cont.)

Chapter 3:: Names, Scopes, and Bindings (cont.) Chapter 3:: Names, Scopes, and Bindings (cont.) Programming Language Pragmatics Michael L. Scott Review What is a regular expression? What is a context-free grammar? What is BNF? What is a derivation?

More information

Frank Mueller. Dept. of Computer Science. Florida State University. Tallahassee, FL phone: (904)

Frank Mueller. Dept. of Computer Science. Florida State University. Tallahassee, FL phone: (904) Static Cache Simulation and its Applications by Frank Mueller Dept. of Computer Science Florida State University Tallahassee, FL 32306-4019 e-mail: mueller@cs.fsu.edu phone: (904) 644-3441 July 12, 1994

More information

Optimizing Closures in O(0) time

Optimizing Closures in O(0) time Optimizing Closures in O(0 time Andrew W. Keep Cisco Systems, Inc. Indiana Univeristy akeep@cisco.com Alex Hearn Indiana University adhearn@cs.indiana.edu R. Kent Dybvig Cisco Systems, Inc. Indiana University

More information

G Programming Languages Spring 2010 Lecture 4. Robert Grimm, New York University

G Programming Languages Spring 2010 Lecture 4. Robert Grimm, New York University G22.2110-001 Programming Languages Spring 2010 Lecture 4 Robert Grimm, New York University 1 Review Last week Control Structures Selection Loops 2 Outline Subprograms Calling Sequences Parameter Passing

More information

Introduction to Programming Using Java (98-388)

Introduction to Programming Using Java (98-388) Introduction to Programming Using Java (98-388) Understand Java fundamentals Describe the use of main in a Java application Signature of main, why it is static; how to consume an instance of your own class;

More information

17/05/2018. Outline. Outline. Divide and Conquer. Control Abstraction for Divide &Conquer. Outline. Module 2: Divide and Conquer

17/05/2018. Outline. Outline. Divide and Conquer. Control Abstraction for Divide &Conquer. Outline. Module 2: Divide and Conquer Module 2: Divide and Conquer Divide and Conquer Control Abstraction for Divide &Conquer 1 Recurrence equation for Divide and Conquer: If the size of problem p is n and the sizes of the k sub problems are

More information