CS 6353 Compiler Construction, Homework #3
1. Consider the following attribute grammar for code generation regarding array references. (Note that the attribute grammar is the same as the one in the notes. The newtemp function returns a new temporary variable name; the name consists of a leading character t and a number, and for the i-th call the number is i. In other words, the temporary variables generated by a sequence of calls to newtemp are t1, t2, t3, ... You can start from t1. The lookup function searches the symbol table to find the corresponding id entry; the input to the function is the name of the identifier. The field arraysize is an array storing the maximum sizes of all the dimensions of the array. The field elementwidth is the size of each element in the array. The field arraydim is the number of dimensions of the array.)

S -> A := E        { if (A.array = true and E.array = false) then emit(A.place[A.tn] := E.place); }
E -> A             { E.array := true; E.tn := A.tn; E.place := A.place; }
A -> Elist ]       { A.place := arrayplace; A.tn := tempn; emit(tempn := tempn * elementwidth); }
Elist -> Elist1, E { Elist.dim := Elist1.dim + 1; emit(tempn := tempn * arraysize[Elist.dim]); emit(tempn := tempn + E.place); }
Elist -> id [ E    { Elist.dim := 0; arrayplace := id.name; tempn := newtemp(); addentry(tempn, ); p := lookup(id.name); arraysize := p.arraysize; elementwidth := p.elementwidth; emit(tempn := E.place); }
E -> id            { E.place := id.name; }
E -> num           { E.place := num; }

Symbol table entry for Test: offset = 128, arraydim = 4, arraysize = [9, 8, 7, 6], elementwidth = 8.

Consider a given input statement: Test[5, x, y, 3] := z. The entry of Test in the symbol table is given above. Following the attribute grammar, the generated three-address code is:

t1 := 5
t1 := t1 * 8
t1 := t1 + x
t1 := t1 * 7
t1 := t1 + y
t1 := t1 * 6
t1 := t1 + 3
t1 := t1 * 8
Test[t1] := z
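The emitted sequence is just a Horner-rule evaluation of the row-major offset. A small sketch (not the homework's emitter; the function name and encoding are illustrative):

```python
def array_offset(indices, sizes, elem_width):
    """Horner-style row-major offset: ((i0*s1 + i1)*s2 + i2)... * elem_width,
    mirroring the emitted sequence t1:=i0; t1:=t1*s1; t1:=t1+i1; ...; t1:=t1*w."""
    t = indices[0]
    for idx, size in zip(indices[1:], sizes[1:]):
        t = t * size + idx          # one mul + one add per extra dimension
    return t * elem_width

# Test[5, x, y, 3] with arraysize = [9, 8, 7, 6], elementwidth = 8;
# picking concrete x = 2, y = 4 lets us check the arithmetic.
offset = array_offset([5, 2, 4, 3], [9, 8, 7, 6], 8)
```

Note that arraysize[0] = 9 never appears in the code: the first dimension's size is not needed to linearize the index, which is why the emitted code starts directly with the first index.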
2. Consider the following program.

for (i=2; i<=n; i++) a[i] = TRUE;
count = 0;
s = sqrt(n);
for (i=2; i<=s; i++)
  if (a[i]) {
    count++;
    for (j=2*i; j<=n; j=j+1) a[j] = FALSE;
  }

(a) Translate the program into three address code as defined in Section 6.2, dragon book.

(1) i := 2
(2) if i > n goto (7)
(3) a[i] := TRUE
(4) t2 := i + 1
(5) i := t2
(6) goto (2)
(7) count := 0
(8) s := sqrt(n)
(9) i := 2
(10) if i > s goto (23)
(11) if a[i] != TRUE goto (20)
(12) t4 := count + 1
(13) count := t4
(14) j := 2 * i
(15) if j > n goto (20)
(16) a[j] := FALSE
(17) t6 := j + 1
(18) j := t6
(19) goto (15)
(20) t7 := i + 1
(21) i := t7
(22) goto (10)
(23) exit

(b) Identify all basic blocks in your three address code.

B1: 1
B2: 2
B3: 3-6
B4: 7-9
B5: 10
B6: 11
B7: 12-14
B8: 15
B9: 16-19
B10: 20-22
B11: 23
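The block boundaries above come from the standard leader rule: the first instruction, every jump target, and every instruction following a jump start a new block. A minimal sketch of that rule, with the code abstracted to its jump structure:

```python
def basic_blocks(n, targets, jumps):
    """n instructions numbered 1..n; targets: set of goto targets;
    jumps: indices of (conditional or unconditional) jump instructions.
    Returns the blocks as (first, last) instruction ranges."""
    leaders = {1} | set(targets) | {i + 1 for i in jumps if i + 1 <= n}
    order = sorted(leaders)
    return [(s, (order[k + 1] - 1) if k + 1 < len(order) else n)
            for k, s in enumerate(order)]

# Jump structure of the 23-instruction listing above.
blocks = basic_blocks(23, {2, 7, 10, 15, 20, 23}, {2, 6, 10, 11, 15, 19, 22})
```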
(c) Build the flow graph for the three address code.

(The flow graph figure is omitted in this transcription. Its edges are: B1-B2; B2-B3, B2-B4; B3-B2; B4-B5; B5-B6, B5-B11; B6-B7, B6-B10; B7-B8; B8-B9, B8-B10; B9-B8; B10-B5.)
(d) Build the dominator tree and identify the back edges in your flow graph in (c).

(The flow graph and dominator tree figures are omitted here.) The back edges are (10,5), (9,8), and (3,2), shown as the red edges of the flow graph.

(e) Find the entry node and the set of nodes in the natural loop associated with each back edge identified in (d).

back edge (10,5): entry node: 5; loop nodes: {5, 6, 7, 8, 9, 10}
back edge (9,8): entry node: 8; loop nodes: {8, 9}
back edge (3,2): entry node: 2; loop nodes: {2, 3}
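The natural loop of a back edge (t, h) is h plus every node that can reach t without passing through h; a short sketch, using the predecessor lists of the CFG from parts (b)-(c) (blocks named by number):

```python
def natural_loop(preds, t, h):
    """Natural loop of back edge t -> h: walk predecessors backward from t,
    stopping at the loop header h."""
    loop, stack = {h, t}, [t]
    while stack:
        n = stack.pop()
        for p in preds.get(n, []):
            if p not in loop:
                loop.add(p)
                stack.append(p)
    return loop

# Predecessor lists of the flow graph for problem 2.
preds = {2: [1, 3], 3: [2], 4: [2], 5: [4, 10], 6: [5], 7: [6],
         8: [7, 9], 9: [8], 10: [6, 8], 11: [5]}
```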
3. Consider the flow graph in Dragon book, Figure 9.10 (also given as follows; the figure is omitted here).

(a) Compute Gen and Kill sets for each block in the flow graph.

Gen[B1] = {(1),(2)}      Kill[B1] = {(8),(10),(11)}
Gen[B2] = {(3),(4)}      Kill[B2] = {(5),(6)}
Gen[B3] = {(5)}          Kill[B3] = {(4),(6)}
Gen[B4] = {(6),(7)}      Kill[B4] = {(4),(5),(9)}
Gen[B5] = {(8),(9)}      Kill[B5] = {(2),(7),(11)}
Gen[B6] = {(10),(11)}    Kill[B6] = {(1),(2),(8)}

(b) Compute the In and Out sets for each block in the flow graph.

Round 0. Initialization:
In[B1] = ∅; Out[B1] = Gen[B1] = {(1),(2)}
In[B2] = ∅; Out[B2] = Gen[B2] = {(3),(4)}
In[B3] = ∅; Out[B3] = Gen[B3] = {(5)}
In[B4] = ∅; Out[B4] = Gen[B4] = {(6),(7)}
In[B5] = ∅
Out[B5] = Gen[B5] = {(8),(9)}
In[B6] = ∅; Out[B6] = Gen[B6] = {(10),(11)}

Update:

Round 1.
In[B1] = ∅
Out[B1] = Gen[B1] ∪ (In[B1] - Kill[B1]) = {(1),(2)}
In[B2] = Out[B1] ∪ Out[B5] = {(1),(2),(8),(9)}
Out[B2] = Gen[B2] ∪ (In[B2] - Kill[B2]) = {(1),(2),(3),(4),(8),(9)}
In[B3] = Out[B2] ∪ Out[B4] = {(1),(2),(3),(4),(6),(7),(8),(9)}
Out[B3] = Gen[B3] ∪ (In[B3] - Kill[B3]) = {(1),(2),(3),(5),(7),(8),(9)}
In[B4] = Out[B3] = {(1),(2),(3),(5),(7),(8),(9)}
Out[B4] = Gen[B4] ∪ (In[B4] - Kill[B4]) = {(1),(2),(3),(6),(7),(8)}
In[B5] = Out[B3] = {(1),(2),(3),(5),(7),(8),(9)}
Out[B5] = Gen[B5] ∪ (In[B5] - Kill[B5]) = {(1),(3),(5),(8),(9)}
In[B6] = Out[B5] = {(1),(3),(5),(8),(9)}
Out[B6] = Gen[B6] ∪ (In[B6] - Kill[B6]) = {(3),(5),(9),(10),(11)}

Round 2.
In[B1] = ∅
Out[B1] = Gen[B1] ∪ (In[B1] - Kill[B1]) = {(1),(2)}
In[B2] = Out[B1] ∪ Out[B5] = {(1),(2),(3),(5),(8),(9)}
Out[B2] = Gen[B2] ∪ (In[B2] - Kill[B2]) = {(1),(2),(3),(4),(8),(9)}
In[B3] = Out[B2] ∪ Out[B4] = {(1),(2),(3),(4),(6),(7),(8),(9)}
Out[B3] = Gen[B3] ∪ (In[B3] - Kill[B3]) = {(1),(2),(3),(5),(7),(8),(9)}
In[B4] = Out[B3] = {(1),(2),(3),(5),(7),(8),(9)}
Out[B4] = Gen[B4] ∪ (In[B4] - Kill[B4]) = {(1),(2),(3),(6),(7),(8)}
In[B5] = Out[B3] = {(1),(2),(3),(5),(7),(8),(9)}
Out[B5] = Gen[B5] ∪ (In[B5] - Kill[B5]) = {(1),(3),(5),(8),(9)}
In[B6] = Out[B5] = {(1),(3),(5),(8),(9)}
Out[B6] = Gen[B6] ∪ (In[B6] - Kill[B6]) = {(3),(5),(9),(10),(11)}

No Out set changed, so the fixed point is reached.

(c) Perform constant propagation and constant folding. (⊥ below means "not a constant".)

In[B1]: (entry: nothing known)
Out[B1]: a = 1; b = 1
In[B2]: a = 1; b = ⊥; c = ⊥; d = ⊥; e = ⊥
Out[B2]: a = 1; b = ⊥; c = ⊥; d = ⊥; e = ⊥
In[B3]: a = 1; b = ⊥; c = ⊥; d = ⊥; e = ⊥
Out[B3]: a = 1; b = ⊥; c = ⊥; d = ⊥; e = ⊥
In[B4]: a = 1; b = ⊥; c = ⊥; d = ⊥; e = ⊥
Out[B4]: a = 1; b = ⊥; c = ⊥; d = ⊥; e = ⊥
In[B5]: a = 1; b = ⊥; c = ⊥; d = ⊥; e = ⊥
Out[B5]: a = 1; b = ⊥; c = ⊥; d = ⊥; e = ⊥
In[B6]: a = 1; b = ⊥; c = ⊥; d = ⊥; e = ⊥
Out[B6]: a = ⊥; b = ⊥; c = ⊥; d = ⊥; e = ⊥

Constant folding:
The only constant inside the loop is a = 1, so occurrences of a can be folded:

B1: (1) a = 1
    (2) b = 1
B2: (3) c = a * b   →  c = 1 * b
    (4) d = c - a   →  d = c - 1
B3: (5) d = b * d
B4: (6) d = a * b   →  d = 1 * b
    (7) e = e + 1
B5: (8) b = a * b   →  b = 1 * b
    (9) e = c - a   →  e = c - 1
B6: (10) a = b * d
    (11) b = a - d

(d) Perform common subexpression elimination (available expressions; the meet at a join is set intersection).

Round 1.
In[B1] = ∅; Out[B1] = ∅
In[B2] = ∅; Out[B2] = {c = a*b, d = c-a}
In[B3] = ∅; Out[B3] = ∅   -- note: d = b*d kills b*d
In[B4] = ∅; Out[B4] = {d = a*b, e = e+1}
In[B5] = ∅; Out[B5] = {b = a*b, e = c-a}
In[B6] = {b = a*b, e = c-a}; Out[B6] = ∅

Round 2.
In[B1] = ∅; Out[B1] = ∅
In[B2] = ∅; Out[B2] = {c = a*b, d = c-a}
In[B3] = Out[B2] ∩ Out[B4] = {c,d = a*b}
Out[B3] = {c,d = a*b}
In[B4] = {c,d = a*b}; Out[B4] = {c,d = a*b, e = e+1}
In[B5] = {c,d = a*b}; Out[B5] = {b,c,d = a*b, e = c-a}
In[B6] = {b,c,d = a*b, e = c-a}; Out[B6] = ∅

Common subexpression elimination:
B1: (1) a = 1
    (2) b = 1
B2: (3) c = a * b   →  t1 = a * b; c = t1
    (4) d = c - a
B3: (5) d = b * d
B4: (6) d = a * b   →  d = t1
    (7) e = e + 1
B5: (8) b = a * b   →  b = t1
    (9) e = c - a
B6: (10) a = b * d
    (11) b = a - d

(e) There is one common subexpression, defined in (4), that can actually be reused in (9) but got eliminated unnecessarily. Explain what the problem is.

Taking the In set of a block as the intersection of the Out sets of all predecessor blocks is not suitable for loops. The expression computed in (4) actually reaches the loop body, but it got eliminated due to the intersection performed at the entry point of the loop.

(f) Can you come up with a revised algorithm with greater power in finding common subexpressions, so that the unnecessarily eliminated subexpression shown in (e) can be reused? (You can assume that the three address code being processed always follows structured programming practice.)

A loop should be viewed as multiple branches, executing the loop body 0, 1, 2, 3, ... times. For each loop, we can convert the CFG so that the loop is represented by an infinite number of branches, and then perform the analysis on the converted CFG. For the specific case of finding common subexpressions, we actually only need to consider two branches: the loop body being executed 0 times and 1 time. So we can convert the CFG accordingly and perform the analysis.
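The round-robin iteration used in part (b) can be sketched mechanically. The Gen/Kill sets and predecessor lists below are the ones from parts (a)-(b); the union meet gives reaching definitions, and swapping the union for an intersection (with Out initialized to the universal set) gives exactly the available-expressions analysis criticized in part (e):

```python
gen  = {1: {1, 2}, 2: {3, 4}, 3: {5}, 4: {6, 7}, 5: {8, 9}, 6: {10, 11}}
kill = {1: {8, 10, 11}, 2: {5, 6}, 3: {4, 6}, 4: {4, 5, 9},
        5: {2, 7, 11}, 6: {1, 2, 8}}
preds = {1: [], 2: [1, 5], 3: [2, 4], 4: [3], 5: [3], 6: [5]}

def reaching_definitions(gen, kill, preds):
    """Round-robin iteration of In[B] = union of Out[P] over predecessors,
    Out[B] = Gen[B] | (In[B] - Kill[B]), until no Out set changes."""
    out = {b: set(gen[b]) for b in gen}      # Round 0: Out = Gen
    inn = {b: set() for b in gen}            # Round 0: In = empty
    changed = True
    while changed:
        changed = False
        for b in sorted(gen):                # visit B1..B6 in order, as above
            inn[b] = set().union(*(out[p] for p in preds[b])) if preds[b] else set()
            new = gen[b] | (inn[b] - kill[b])
            if new != out[b]:
                out[b], changed = new, True
    return inn, out

inn, out = reaching_definitions(gen, kill, preds)
```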
4. Consider the following flow graph.

B1: t1 = 1
B2: N = 100
    M = 1000
    s1 = 2
    t2 = t1 + s1
    if (t2 < N) goto B4
B3: s1 = 1
    t2 = t2 + N
B4: s2 = N * 2
    t3 = t2 + s2
    if (t3 < N) goto B6
B5: t3 = N
B6: d = A[t1] - A[t2]
    if (d < M) goto B8
B7: d = d + M
    M = M - 1
B8: A[t3] = d
    t1 = t1 + 1
    if (t1 < N) goto B2
B9: print A[0] to A[N-1]

(Fall-through edges: B2-B3, B3-B4, B4-B5, B5-B6, B6-B7, B7-B8, B8-B9; the loop is B2 through B8.)
(a) Identify and mark define-use links within the loop based on data flow analysis results. You need to consider the block-level define-use relations as well as define-use relations within each block. (The marked flow graph figure is omitted here.)

(b) Identify all the loop invariants based on the define-use links computed in (a). Assume that all the loop invariants can be moved out of the loop, so you need to find loop invariants repeatedly until a fixed point is reached. In each round, pretend to move out those loop invariants found in the previous rounds.

Round 1:
B2: Statement N = 100 is a loop invariant; move it out of the loop.
    Statement M = 1000 is a loop invariant; move it out of the loop.
    Statement s1 = 2 is a loop invariant; move it out of the loop.
B3: Statement s1 = 1 is a loop invariant; move it out of the loop.

Round 2:
B4: Statement s2 = N * 2 now is a loop invariant; move it out of the loop.
B5: Statement t3 = N now is a loop invariant; move it out of the loop.

(c) For each loop invariant, determine whether it can actually be moved out of the loop. If so, move it out; if not, state the reason why it cannot be moved out. Generate the new code after code motion.

Round 1:
B2: Statement N = 100 is a loop invariant and satisfies all 3 criteria; move it out of the loop.
    Statement M = 1000 is a loop invariant, but there are multiple definitions of M in the loop, so it cannot be moved out.
    Statement s1 = 2 is a loop invariant, but there are multiple definitions of s1 in the loop, so it cannot be moved out.
B3: B3 does not dominate the loop exit, so there is no point considering any statement in it.

Round 2:
B4: Statement s2 = N * 2 now is a loop invariant and satisfies all 3 criteria; move it out of the loop.
B5: B5 does not dominate the loop exit, so there is no point considering any statement in it.

The code after code motion (a preheader is added before the loop entry B2):

B1: t1 = 1
preheader: N = 100
           s2 = N * 2
B2: M = 1000
    s1 = 2
    t2 = t1 + s1
    if (t2 < N) goto B4
B3: s1 = 1
    t2 = t2 + N
B4: t3 = t2 + s2
    if (t3 < N) goto B6
B5: t3 = N
B6: d = A[t1] - A[t2]
    if (d < M) goto B8
B7: d = d + M
    M = M - 1
B8: A[t3] = d
    t1 = t1 + 1
    if (t1 < N) goto B2
B9: print A[0] to A[N-1]
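The round-by-round marking in (b) can be sketched as an iteration to a fixed point: a statement is invariant if every operand is a constant, is never assigned in the loop, or has a single definition in the loop that is already marked invariant. The tuple encoding of the loop statements below is a hypothetical simplification of the flow graph above (targets and operand names only, operator dropped):

```python
def find_invariants(stmts, defined_in_loop):
    """stmts: list of (target, operands); operands are names or int constants.
    Returns the set of indices of loop-invariant statements."""
    def_count = {}
    for tgt, _ in stmts:
        def_count[tgt] = def_count.get(tgt, 0) + 1
    invariant, changed = set(), True
    while changed:
        changed = False
        for i, (tgt, ops) in enumerate(stmts):
            if i in invariant:
                continue
            if all(isinstance(op, int)                       # constant operand
                   or op not in defined_in_loop              # defined outside
                   or (def_count.get(op, 0) == 1             # unique in-loop def,
                       and any(j in invariant               # already invariant
                               for j, (t, _) in enumerate(stmts) if t == op))
                   for op in ops):
                invariant.add(i)
                changed = True
    return invariant

# Loop body B2..B8, one tuple per statement (conditions omitted).
stmts = [
    ('N', [100]), ('M', [1000]), ('s1', [2]), ('t2', ['t1', 's1']),   # B2
    ('s1', [1]), ('t2', ['t2', 'N']),                                 # B3
    ('s2', ['N', 2]), ('t3', ['t2', 's2']),                           # B4
    ('t3', ['N']),                                                    # B5
    ('d', ['A', 't1']),                                               # B6
    ('d', ['d', 'M']), ('M', ['M', 1]),                               # B7
    ('A', ['d', 't3']), ('t1', ['t1', 1]),                            # B8
]
inv = find_invariants(stmts, {t for t, _ in stmts})
```

Round 1 marks indices 0, 1, 2, 4 (the constant assignments); round 2 adds 6 and 8 once N's unique definition is invariant, matching the rounds above.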
5. Consider the following three address code in a basic block.

(1) t1 = j + 1
(2) t2 = 4 * t1
(3) temp = A[t2]
(4) t3 = j
(5) t4 = j + 1
(6) t5 = 4 * t3
(7) t6 = A[t5]
(8) t7 = j + 1
(9) t8 = 4 * t7
(10) A[t8] = t6
(11) t9 = j
(12) t10 = j + 1
(13) t11 = 4 * t9
(14) A[t11] = temp

(b) Perform copy propagation. You need to do the necessary data flow analysis to make sure that the propagation can be done correctly. (The sets below contain the statement numbers of the copies available before and after each statement.)

In[1] = ∅; Out[1] = ∅
In[2] = ∅; Out[2] = ∅
In[3] = ∅; Out[3] = {3}
In[4] = {3}; Out[4] = {3,4}
In[5] = {3,4}; Out[5] = {3,4}
In[6] = {3,4}; Out[6] = {3,4}   → replace (6) by t5 = 4 * j
In[7] = {3,4}; Out[7] = {3,4,7}
In[8] = {3,4,7}; Out[8] = {3,4,7}
In[9] = {3,4,7}; Out[9] = {3,4,7}
In[10] = {3,4,7}; Out[10] = {4,10}   → replace (10) by A[t8] = A[t5]
In[11] = {4,10}; Out[11] = {4,10,11}
In[12] = {4,10,11}; Out[12] = {4,10,11}
In[13] = {4,10,11}; Out[13] = {4,10,11}   → replace (13) by t11 = 4 * j
In[14] = {4,10,11}; Out[14] = {4,11,14}

The code after copy propagation:

(1) t1 = j + 1
(2) t2 = 4 * t1
(3) temp = A[t2]
(4) t3 = j
(5) t4 = j + 1
(6) t5 = 4 * j
(7) t6 = A[t5]
(8) t7 = j + 1
(9) t8 = 4 * t7
(10) A[t8] = A[t5]
(11) t9 = j
(12) t10 = j + 1
(13) t11 = 4 * j
(14) A[t11] = temp
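Within a single block the same propagation can be done in one forward pass over an available-copies map, invalidating a copy as soon as either of its sides is redefined. A sketch (statement encoding is illustrative: a copy has op None):

```python
def propagate_copies(stmts):
    """stmts: list of (dst, op, operands); op is None for a plain copy dst := y.
    Returns the rewritten statements."""
    acp, out = {}, []            # acp: available copies, dst -> src
    for dst, op, ops in stmts:
        ops = tuple(acp.get(o, o) for o in ops)       # rewrite uses first
        # any copy mentioning dst (on either side) is no longer valid
        acp = {x: y for x, y in acp.items() if x != dst and y != dst}
        if op is None and len(ops) == 1:
            acp[dst] = ops[0]
        out.append((dst, op, ops))
    return out

# Fragment of the block above: t3 := j makes (6) use j directly.
res = propagate_copies([('t3', None, ('j',)),
                        ('t5', '*', ('4', 't3')),
                        ('t3', '+', ('t3', '1'))])
```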
(c) Perform dead code elimination. You need to do the necessary analysis to make sure that the elimination can be done correctly. Also, assume that after the basic block, array A[i], for all i, is live and no other variables are live.

The live set after each statement (computed backward from live-out = {A}):

t1 = j + 1                  live: {A, j, t1}
t2 = 4 * t1                 live: {A, j, t2}
temp = A[t2]                live: {A, temp, j}
t3 = j          dead code   live: {A, temp, j}
t4 = j + 1      dead code   live: {A, temp, j}
t5 = 4 * j                  live: {A, temp, j, t5}
t6 = A[t5]      dead code   live: {A, temp, j, t5}
t7 = j + 1                  live: {A, temp, j, t5, t7}
t8 = 4 * t7                 live: {A, temp, j, t5, t8}
A[t8] = A[t5]               live: {A, temp, j}
t9 = j          dead code   live: {A, temp, j}
t10 = j + 1     dead code   live: {A, temp, j}
t11 = 4 * j                 live: {A, temp, t11}
A[t11] = temp               live: {A}

Note: A is a special case. Since it is an array, we do not know the exact address being defined or used, so a definition of an element of A does not kill A. On the other hand, a definition of A[5] can kill A[5].

6. Consider the following flow graph. Perform strength reduction for the statements in the loop.

B1: t1 = 0
    read(x, y)
B2: a := x + t1
    t2 := t1 * 3 + 5
    b := y + t2
    c := t2 + 5
    d := c + b
    t1 := t1 + 2
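The backward pass used in (c) can be sketched directly: start from the live-out set and keep a statement only if its target is live at that point (stores into A are always kept, and a store to an array element does not kill A). The tuple encoding below is illustrative:

```python
def dead_code(stmts, live_out):
    """stmts: list of (target, uses, is_store); returns indices of dead stmts."""
    live, dead = set(live_out), set()
    for i in range(len(stmts) - 1, -1, -1):
        tgt, uses, is_store = stmts[i]
        if is_store or tgt in live:        # keep: stores, or target needed later
            if not is_store:
                live.discard(tgt)          # scalar def kills its own liveness
            live |= set(uses)
        else:
            dead.add(i)
    return dead

# The block of problem 5 after copy propagation (0-indexed).
stmts = [
    ('t1', ['j'], False), ('t2', ['t1'], False), ('temp', ['A', 't2'], False),
    ('t3', ['j'], False), ('t4', ['j'], False), ('t5', ['j'], False),
    ('t6', ['A', 't5'], False), ('t7', ['j'], False), ('t8', ['t7'], False),
    ('A', ['A', 't8', 't5'], True), ('t9', ['j'], False), ('t10', ['j'], False),
    ('t11', ['j'], False), ('A', ['temp', 't11'], True),
]
dead = dead_code(stmts, {'A'})
```

The indices found dead are exactly the statements defining t3, t4, t6, t9, and t10, matching the marking above.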
t1 := t1 + 2 in loop B2 makes t1 a basic induction variable. t2 := t1 * 3 + 5 is defined only once in loop B2 and satisfies the IV format, so it is an induction variable defined on the basic IV t1. We can perform strength reduction for t2, and the revised code is as follows:

B1: t1 = 0
    read(x, y)
    st2 = t1 * 3 + 5
B2: a := x + t1
    t2 := st2
    b := y + t2
    c := t2 + 5
    d := c + b
    t1 := t1 + 2
    st2 := st2 + 6

7. Consider the following CFG. (The CFG figure is omitted here.)

(a) Perform liveness analysis. Show the live sets at each point of the CFG.
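A quick check of the strength reduction in problem 6: with basic IV t1 stepping by 2, the derived IV t2 = t1 * 3 + 5 can be replaced by st2, initialized once before the loop and bumped by 3 * 2 = 6 per iteration. A sketch simulating both versions:

```python
def run_original(iters):
    """Loop B2 as written: t2 recomputed by multiplication each iteration."""
    t1, vals = 0, []
    for _ in range(iters):
        t2 = t1 * 3 + 5
        vals.append(t2)
        t1 = t1 + 2
    return vals

def run_reduced(iters):
    """After strength reduction: st2 initialized in B1, bumped by 6 in B2."""
    t1, st2, vals = 0, 0 * 3 + 5, []
    for _ in range(iters):
        t2 = st2
        vals.append(t2)
        t1 = t1 + 2
        st2 = st2 + 6
    return vals
```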
(b) Construct the interference graph. (The graph figure is omitted here.)

(c) Assume that the system has five registers. Perform color assignment. Determine the register for each variable.

5-coloring: remove a first. Then all remaining nodes have fewer than 5 edges, so the graph is 5-colorable; remove the remaining nodes in arbitrary order. The registers for each variable are listed as follows:
a: R4, b: R4, c: R3, d: R2, f: R1, v: R0

(d) Assume that the system has only three registers. Perform color assignment and determine the register for each variable. In case it is necessary to spill, spill, revise the code, and then do color assignment.

3-coloring: remove a first.
Since no node with fewer than 3 edges can be found, we need to spill; choose to spill v. Since a node with fewer than 3 edges still cannot be found, we need to spill again; choose to spill c. Now the graph is 3-colorable. Next we check whether the spilled nodes can be given registers at the points where they are loaded back.
The above interference graph is colorable with 3 colors, so spilling c and v works. The registers for each variable are listed as follows:
a: R2, b: R0, c: spilled, d: R2, f: R1, v: spilled
t1: R0 (used for temporarily loading and using v)
t2: R1 (used for temporarily loading and using v)
t3: R2 (used for temporarily loading and using c)
t4: R1 (used for temporarily loading and using c)

8. Consider the following three address code.

t1 := j
t1 := t1 * dim2
t1 := t1 + k
t1 := t1 * w
t2 := j * k
t3 := j
t3 := t3 * dim2
t3 := t3 + k
t3 := t3 * w
t4 := B[t3]
t5 := t2 + t4
A[t1] := t5

(a) Construct a dag from the three address code such that subexpressions and redundant variables are eliminated.
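The simplify/spill loop used in problem 7(d) can be sketched Chaitin-style: repeatedly remove a node of degree < k onto a stack; if none exists, pick a spill candidate; finally pop the stack and assign colors. The graphs below are illustrative, not the exact interference graph of the problem:

```python
def color(graph, k):
    """graph: {node: set(neighbors)}. Returns (color assignment, spilled nodes)."""
    g = {n: set(adj) for n, adj in graph.items()}
    stack, spilled = [], []
    while g:
        cand = next((n for n in sorted(g) if len(g[n]) < k), None)
        if cand is None:                                   # no trivially colorable node
            cand = max(sorted(g), key=lambda n: len(g[n])) # spill heuristic: max degree
            spilled.append(cand)
        else:
            stack.append(cand)
        for m in g.pop(cand):
            if m in g:
                g[m].discard(cand)
    colors = {}
    for n in reversed(stack):                              # rebuild, assigning colors
        used = {colors[m] for m in graph[n] if m in colors}
        colors[n] = next(c for c in range(k) if c not in used)
    return colors, spilled

# A complete graph on 4 nodes forces a spill with k = 3.
k4 = {'a': {'b', 'c', 'd'}, 'b': {'a', 'c', 'd'},
      'c': {'a', 'b', 'd'}, 'd': {'a', 'b', 'c'}}
colors, spilled = color(k4, 3)
```

As in the homework, a real allocator would then re-run the analysis with the spill temporaries added, to confirm the spilled values can be briefly held in registers when loaded back.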
The dag (drawn here as a list of shared nodes; superscripts denote successive values of a name):

n1: j            (t1^0, t3^0)
n2: n1 * dim2    (t1^1, t3^1)
n3: n2 + k       (t1^2, t3^2)
n4: n3 * w       (t1^3, t3^3)
n5: j * k        (t2)
n6: B[n4]        (t4)
n7: n5 + n6      (t5)
root: A[n4] := n7

t1 and t3 compute identical values, so they share nodes n1 through n4, and B is indexed by the same node n4 that addresses A.
(b) Schedule the dag based on the numbering scheme. Run a global register allocation algorithm. Generate code based on the register allocation and instruction schedule you derived. Your code should make sure that the contents of arrays A and B are permanently modified.

The code with the live variables at each point (the set before the first statement is {j, k}; each line shows the live set after the statement):

t1 := j            {t1, j, k}
t1 := t1 * dim2    {t1, j, k}
t1 := t1 + k       {t1, j, k}
t1 := t1 * w       {t1, j, k}
t2 := j * k        {t1, t2, j, k}
t3 := j            {t1, t2, t3, k}
t3 := t3 * dim2    {t1, t2, t3, k}
t3 := t3 + k       {t1, t2, t3}
t3 := t3 * w       {t1, t2, t3}
t4 := B[t3]        {t1, t2, t4}
t5 := t2 + t4      {t1, t5}
A[t1] := t5        {}

Register assignment: j: r3, k: r4, t1: r1, t2: r2, t4: r3 (reusing j's register), t5: r2 (reusing t2's register); t3 is eliminated by the dag. We need 4 registers.

r4 := load k
r3 := load j
r1 := r3
r2 := r3 * r4
r1 := r1 * dim2
r1 := r1 + r4
r1 := r1 * w
r3 := B[r1]
r2 := r3 + r2
A[r1] := r2
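The dag construction of part (a) is local value numbering: t3's whole computation hashes to the same value numbers as t1's, so the second address computation can be dropped. A sketch (statement encoding is illustrative):

```python
def value_numbers(stmts):
    """stmts: list of (dst, op, operands). Returns name -> final value number."""
    table = {}         # (op, operand value numbers) -> value number
    vn = {}            # current value number of each name or leaf
    counter = [0]
    def num(x):
        if x not in vn:
            vn[x] = counter[0]
            counter[0] += 1
        return vn[x]
    for dst, op, ops in stmts:
        key = (op, tuple(num(o) for o in ops))   # hash the computation
        if key not in table:
            table[key] = counter[0]
            counter[0] += 1
        vn[dst] = table[key]                     # dst now names that value
    return vn

# The t1 and t3 chains of problem 8 ('id' marks a plain copy).
code8 = [('t1', 'id', ('j',)), ('t1', '*', ('t1', 'dim2')),
         ('t1', '+', ('t1', 'k')), ('t1', '*', ('t1', 'w')),
         ('t2', '*', ('j', 'k')),
         ('t3', 'id', ('j',)), ('t3', '*', ('t3', 'dim2')),
         ('t3', '+', ('t3', 'k')), ('t3', '*', ('t3', 'w'))]
vn = value_numbers(code8)
```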
9. Consider the following instruction set (only those for basic blocks). Some costs for the instructions are unreasonable; they are just for the purpose of practicing the instruction selection algorithms.

load <reg> <id>              load identifier <id> from memory into register <reg>            cost 10
store <reg> <id>             store <reg>'s value into <id>'s memory location                 cost 10
aload <reg1> <reg2> <reg3>   load the content of array A[reg3] into register <reg1>,         cost 11
                             where reg2 = &A (A's base address)
astore <reg1> <reg2> <reg3>  store the content of register <reg1> into array A[reg3],        cost 15
                             where reg2 = &A
add <reg1> <reg2> <reg3>     reg1 := reg2 + reg3                                             cost 2
mul <reg1> <reg2> <reg3>     reg1 := reg2 * reg3                                             cost 5
addc <reg1> <reg2> <const>   reg1 := reg2 + constant                                         cost 2
mulc <reg1> <reg2> <const>   reg1 := reg2 * constant                                         cost 5
addx <reg1> <reg2> <id>      reg1 := reg2 + id                                               cost 11
mulx <reg1> <reg2> <id>      reg1 := reg2 * id                                               cost 16

Consider the following three address code in a basic block.

t1 := j + 1
t1 := t1 * d2
t1 := t1 + k
t1 := t1 * w
t7 := B[t1]
t2 := j * k
t2 := t2 * w
t8 := A[t2]
t9 := t7 + t8
A[t1] := t9

(a) Draw the tiles for operators aload, astore, addx, mulx. (The tiles for the remaining operators can be found in the notes and in the book. The tile diagrams are omitted here: aload tiles an array-read node with children &array-id, reg2, reg3 and result reg1; astore tiles the store statement node S with children reg1, &array-id, reg2, reg3; addx and mulx each tile a + or * node whose second operand is a memory-resident identifier.)
(b) Draw the instruction tree for the basic block given above. (The tree figure is omitted here.)

(c) Tile the tree using the maximal munching algorithm. Assign registers to the selected instructions and generate the machine code.

load R2 j
addc R2 R2 1
mulx R2 R2 d2
addx R2 R2 k
mulx R2 R2 w
load R1 &B
aload R1 R1 R2
load R2 &A
load R3 j
mulx R3 R3 k
mulx R3 R3 w
aload R3 R2 R3
add R3 R1 R3
load R2 j
addc R2 R2 1
mulx R2 R2 d2
addx R2 R2 k
mulx R2 R2 w
load R1 &A
astore R3 R1 R2
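Maximal munch walks the tree top-down and greedily picks the largest tile matching at each node, which is exactly why the addx/mulx tiles get chosen whenever an operand is a memory identifier. A minimal sketch over a tiny tuple-encoded expression tree (tile set simplified from the instruction table above; register assignment omitted):

```python
def munch(node, emit):
    """Greedy top-down tiling of ('id', name) | ('const', v) | (op, left, right)."""
    kind = node[0]
    if kind == 'id':
        emit('load ' + node[1])
    elif kind == 'const':
        emit('loadc ' + str(node[1]))
    else:
        op, left, right = node
        if right[0] == 'id':        # largest tile: fold the memory operand (addx/mulx)
            munch(left, emit)
            emit(('addx ' if op == '+' else 'mulx ') + right[1])
        elif right[0] == 'const':   # next: fold the constant (addc/mulc)
            munch(left, emit)
            emit(('addc ' if op == '+' else 'mulc ') + str(right[1]))
        else:                       # smallest tile: plain register add/mul
            munch(left, emit)
            munch(right, emit)
            emit('add' if op == '+' else 'mul')

# The address expression t1 = ((j + 1) * d2 + k) * w from problem 9.
tree = ('*', ('+', ('*', ('+', ('id', 'j'), ('const', 1)), ('id', 'd2')),
             ('id', 'k')),
        ('id', 'w'))
code = []
munch(tree, code.append)
```

The emitted sequence matches the opcodes selected for t1 in the answer above: load, addc, mulx, addx, mulx.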
(d) Tile the tree using the dynamic programming algorithm. (The cost-annotated tree is omitted here.)
CSE 501: Compiler Construction Course outline Main focus: program analysis and transformation how to represent programs? how to analyze programs? what to analyze? how to transform programs? what transformations
More informationCSC D70: Compiler Optimization
CSC D70: Compiler Optimization Prof. Gennady Pekhimenko University of Toronto Winter 2018 The content of this lecture is adapted from the lectures of Todd Mowry and Phillip Gibbons CSC D70: Compiler Optimization
More informationFinal Examination. Winter Problem Points Score. Total 180
CS243 Winter 2002-2003 You have 3 hours to work on this exam. The examination has 180 points. Please budget your time accordingly. Write your answers in the space provided on the exam. If you use additional
More informationIntermediate Representations. Reading & Topics. Intermediate Representations CS2210
Intermediate Representations CS2210 Lecture 11 Reading & Topics Muchnick: chapter 6 Topics today: Intermediate representations Automatic code generation with pattern matching Optimization Overview Control
More informationCS 406/534 Compiler Construction Putting It All Together
CS 406/534 Compiler Construction Putting It All Together Prof. Li Xu Dept. of Computer Science UMass Lowell Fall 2004 Part of the course lecture notes are based on Prof. Keith Cooper, Prof. Ken Kennedy
More informationCS202 Compiler Construction
CS202 Compiler Construction April 17, 2003 CS 202-33 1 Today: more optimizations Loop optimizations: induction variables New DF analysis: available expressions Common subexpression elimination Copy propogation
More informationLecture 21 CIS 341: COMPILERS
Lecture 21 CIS 341: COMPILERS Announcements HW6: Analysis & Optimizations Alias analysis, constant propagation, dead code elimination, register allocation Available Soon Due: Wednesday, April 25 th Zdancewic
More informationData Structures and Algorithms in Compiler Optimization. Comp314 Lecture Dave Peixotto
Data Structures and Algorithms in Compiler Optimization Comp314 Lecture Dave Peixotto 1 What is a compiler Compilers translate between program representations Interpreters evaluate their input to produce
More informationCS 701. Class Meets. Instructor. Teaching Assistant. Key Dates. Charles N. Fischer. Fall Tuesdays & Thursdays, 11:00 12: Engineering Hall
CS 701 Charles N. Fischer Class Meets Tuesdays & Thursdays, 11:00 12:15 2321 Engineering Hall Fall 2003 Instructor http://www.cs.wisc.edu/~fischer/cs703.html Charles N. Fischer 5397 Computer Sciences Telephone:
More informationCompiler Optimization Techniques
Compiler Optimization Techniques Department of Computer Science, Faculty of ICT February 5, 2014 Introduction Code optimisations usually involve the replacement (transformation) of code from one sequence
More informationCompiler Optimization
Compiler Optimization The compiler translates programs written in a high-level language to assembly language code Assembly language code is translated to object code by an assembler Object code modules
More informationCS153: Compilers Lecture 17: Control Flow Graph and Data Flow Analysis
CS153: Compilers Lecture 17: Control Flow Graph and Data Flow Analysis Stephen Chong https://www.seas.harvard.edu/courses/cs153 Announcements Project 5 out Due Tuesday Nov 13 (14 days) Project 6 out Due
More informationThis test is not formatted for your answers. Submit your answers via to:
Page 1 of 7 Computer Science 320: Final Examination May 17, 2017 You have as much time as you like before the Monday May 22 nd 3:00PM ET deadline to answer the following questions. For partial credit,
More informationProgram Analysis. Readings
Program Analysis Class #2 Readings he Program Dependence Graph and Its Use in Optimization Program slicing A Survey of Program Slicing echniques Dragon book Program Analysis Data-low Analysis (wrap up)
More informationMidterm 2. CMSC 430 Introduction to Compilers Fall Instructions Total 100. Name: November 11, 2015
Name: Midterm 2 CMSC 430 Introduction to Compilers Fall 2015 November 11, 2015 Instructions This exam contains 8 pages, including this one. Make sure you have all the pages. Write your name on the top
More informationDixita Kagathara Page 1
2014 Sem-VII Intermediate Code Generation 1) What is intermediate code? Intermediate code is: The output of the parser and the input to the Code Generator. Relatively machine-independent: allows the compiler
More informationCOMS W4115 Programming Languages and Translators Lecture 21: Code Optimization April 15, 2013
1 COMS W4115 Programming Languages and Translators Lecture 21: Code Optimization April 15, 2013 Lecture Outline 1. Code optimization strategies 2. Peephole optimization 3. Common subexpression elimination
More informationCalvin Lin The University of Texas at Austin
Loop Invariant Code Motion Last Time Loop invariant code motion Value numbering Today Finish value numbering More reuse optimization Common subession elimination Partial redundancy elimination Next Time
More informationCMSC430 Spring 2014 Midterm 2 Solutions
CMSC430 Spring 2014 Midterm 2 Solutions 1. (12 pts) Syntax directed translation & type checking Consider the following grammar fragment for an expression for C--: exp CONST IDENT 1 IDENT 2 [ exp 1 ] Assume
More informationCalvin Lin The University of Texas at Austin
Loop Invariant Code Motion Last Time SSA Today Loop invariant code motion Reuse optimization Next Time More reuse optimization Common subexpression elimination Partial redundancy elimination February 23,
More informationCompiler Design Prof. Y. N. Srikant Department of Computer Science and Automation Indian Institute of Science, Bangalore
Compiler Design Prof. Y. N. Srikant Department of Computer Science and Automation Indian Institute of Science, Bangalore Module No. # 10 Lecture No. # 16 Machine-Independent Optimizations Welcome to the
More informationregister allocation saves energy register allocation reduces memory accesses.
Lesson 10 Register Allocation Full Compiler Structure Embedded systems need highly optimized code. This part of the course will focus on Back end code generation. Back end: generation of assembly instructions
More informationInstruction Selection. Problems. DAG Tiling. Pentium ISA. Example Tiling CS412/CS413. Introduction to Compilers Tim Teitelbaum
Instruction Selection CS42/CS43 Introduction to Compilers Tim Teitelbaum Lecture 32: More Instruction Selection 20 Apr 05. Translate low-level IR code into DAG representation 2. Then find a good tiling
More information16.10 Exercises. 372 Chapter 16 Code Improvement. be translated as
372 Chapter 16 Code Improvement 16.10 Exercises 16.1 In Section 16.2 we suggested replacing the instruction r1 := r2 / 2 with the instruction r1 := r2 >> 1, and noted that the replacement may not be correct
More informationLecture 5. Partial Redundancy Elimination
Lecture 5 Partial Redundancy Elimination I. Forms of redundancy global common subexpression elimination loop invariant code motion partial redundancy II. Lazy Code Motion Algorithm Mathematical concept:
More informationUSC 227 Office hours: 3-4 Monday and Wednesday CS553 Lecture 1 Introduction 4
CS553 Compiler Construction Instructor: URL: Michelle Strout mstrout@cs.colostate.edu USC 227 Office hours: 3-4 Monday and Wednesday http://www.cs.colostate.edu/~cs553 CS553 Lecture 1 Introduction 3 Plan
More informationCode generation and optimization
Code generation and timization Generating assembly How do we convert from three-address code to assembly? Seems easy! But easy solutions may not be the best tion What we will cover: Peephole timizations
More informationControl flow graphs and loop optimizations. Thursday, October 24, 13
Control flow graphs and loop optimizations Agenda Building control flow graphs Low level loop optimizations Code motion Strength reduction Unrolling High level loop optimizations Loop fusion Loop interchange
More informationPART 6 - RUN-TIME ENVIRONMENT. F. Wotawa TU Graz) Compiler Construction Summer term / 309
PART 6 - RUN-TIME ENVIRONMENT F. Wotawa (IST @ TU Graz) Compiler Construction Summer term 2016 188 / 309 Objectives/Tasks Relate static source code to actions at program runtime. Names in the source code
More informationCompilers. Compiler Construction Tutorial The Front-end
Compilers Compiler Construction Tutorial The Front-end Salahaddin University College of Engineering Software Engineering Department 2011-2012 Amanj Sherwany http://www.amanj.me/wiki/doku.php?id=teaching:su:compilers
More informationTwo hours UNIVERSITY OF MANCHESTER SCHOOL OF COMPUTER SCIENCE. Date: Friday 20th May 2016 Time: 14:00-16:00
Two hours UNIVERSITY OF MANCHESTER SCHOOL OF COMPUTER SCIENCE Compilers Date: Friday 20th May 2016 Time: 14:00-16:00 Please answer any THREE Questions from the FIVE Questions provided This is a CLOSED
More informationProgramming Language Processor Theory
Programming Language Processor Theory Munehiro Takimoto Course Descriptions Method of Evaluation: made through your technical reports Purposes: understanding various theories and implementations of modern
More informationLecture 4. More on Data Flow: Constant Propagation, Speed, Loops
Lecture 4 More on Data Flow: Constant Propagation, Speed, Loops I. Constant Propagation II. Efficiency of Data Flow Analysis III. Algorithm to find loops Reading: Chapter 9.4, 9.6 CS243: Constants, Speed,
More informationCode Generation. M.B.Chandak Lecture notes on Language Processing
Code Generation M.B.Chandak Lecture notes on Language Processing Code Generation It is final phase of compilation. Input from ICG and output in the form of machine code of target machine. Major issues
More informationCode Generation (#')! *+%,-,.)" !"#$%! &$' (#')! 20"3)% +"#3"0- /)$)"0%#"
Code Generation!"#$%! &$' 1$%)"-)',0%)! (#')! (#')! *+%,-,.)" 1$%)"-)',0%)! (#')! (#')! /)$)"0%#" 20"3)% +"#3"0- Memory Management returned value actual parameters commonly placed in registers (when possible)
More informationIntra-procedural Data Flow Analysis Introduction
The Setting: Optimising Compilers The Essence of Program Analysis Reaching Definitions Analysis Intra-procedural Data Flow Analysis Introduction Hanne Riis Nielson and Flemming Nielson email: {riis,nielson}@immdtudk
More informationCompiler construction in4303 lecture 9
Compiler construction in4303 lecture 9 Code generation Chapter 4.2.5, 4.2.7, 4.2.11 4.3 Overview Code generation for basic blocks instruction selection:[burs] register allocation: graph coloring instruction
More informationAdvanced Compilers Introduction to Dataflow Analysis by Example. Fall Chungnam National Univ. Eun-Sun Cho
Advanced Compilers Introduction to Dataflow Analysis by Example Fall. 2016 Chungnam National Univ. Eun-Sun Cho 1 Dataflow Analysis + Optimization r1 = r2 + r3 r6 = r4 r5 r6 = r2 + r3 r7 = r4 r5 r4 = 4
More informationIntroduction to Code Optimization. Lecture 36: Local Optimization. Basic Blocks. Basic-Block Example
Lecture 36: Local Optimization [Adapted from notes by R. Bodik and G. Necula] Introduction to Code Optimization Code optimization is the usual term, but is grossly misnamed, since code produced by optimizers
More informationLow-level optimization
Low-level optimization Advanced Course on Compilers Spring 2015 (III-V): Lecture 6 Vesa Hirvisalo ESG/CSE/Aalto Today Introduction to code generation finding the best translation Instruction selection
More information