Data Flow Analysis and Computation of SSA
1 Compiler Design 1 Data Flow Analysis and Computation of SSA
2 Compiler Design 2 Definitions A basic block is a maximal sequence of three-address codes with the following properties. a. Control flows into the block only through the first three-address code (there is no label in the middle of the code). b. Control flows out of the block only through the last three-address code (no three-address code other than the last one can be a branch or jump).
3 Compiler Design 3 Example
1:  L2: v1 = i
2:      v2 = j
3:      if v1 > v2 goto L3
4:      v1 = j
5:      v2 = i
6:      v1 = v1 - v2
7:      j = v1
8:      goto L4
9:  L3: v1 = i
10:     v2 = j
11:     v1 = v1 - v2
12:     i = v1
13: L4: v1 = i
14:     v2 = j
15:     if v1 <> v2 goto L2
4 Compiler Design 4 Basic Block - b1
1:  L2: v1 = i
2:      v2 = j
3:      if v1 > v2 goto L3
3a:     goto (4)
5 Compiler Design 5 Basic Block - b2
4:      v1 = j
5:      v2 = i
6:      v1 = v1 - v2
7:      j = v1
8:      goto L4
6 Compiler Design 6 Basic Block - b3
9:  L3: v1 = i
10:     v2 = j
11:     v1 = v1 - v2
12:     i = v1
12a:    goto L4
7 Compiler Design 7 Basic Block - b4
13: L4: v1 = i
14:     v2 = j
15:     if v1 <> v2 goto L2
15a:    exit
8 Compiler Design 8 Definitions A control-flow graph (CFG) is a directed graph G = (V,E), where the nodes are the basic blocks and the edges correspond to the flow of control from one basic block to another. The edge eij = (vi, vj) corresponds to the flow of control from basic block vi to basic block vj.
9 Compiler Design 9 Control-Flow Graph
Entry
b1: L2: v1 = i; v2 = j; if v1 > v2 goto L3
b2: v1 = j; v2 = i; v1 = v1 - v2; j = v1; goto L4
b3: L3: v1 = i; v2 = j; v1 = v1 - v2; i = v1
b4: L4: v1 = i; v2 = j; if v1 <> v2 goto L2
Exit
10 Compiler Design 10 Note We assume that a CFG has a unique entry node and a unique exit node. The CFG corresponding to a function/procedure may have multiple entry and exit points. A compiler can introduce unique initial and final basic blocks, with edges to the multiple entry points and from the multiple exit points.
11 Compiler Design 11 Note A compiler may use 3-address code for each basic block and a CFG for a whole procedure or program, showing the flow of control among the basic blocks. Different global optimizations require analysis of the flow of data through the CFG.
12 Compiler Design 12 Definition In a CFG with the unique entry block B0, a basic block Bi dominates a basic block Bj if every path from B0 to Bj passes through Bi. Dominance is a binary relation on the set of nodes (basic blocks) of a CFG. The relation "Bi dominates Bj" is written Bi >> Bj. If there is an edge from Bi to Bj in the CFG, then Bi is called a predecessor of Bj.
13 Compiler Design 13 Definition For every node Bi, Bi >> Bi (dominance is reflexive). Similarly, for all nodes Bi, Bj, Bk, if Bi >> Bj and Bj >> Bk, then Bi >> Bk (transitive). For every node Bi, the set Dom(Bi) is the collection of all dominators of Bi, and the set Pred(Bi) is the collection of all predecessors of Bi.
14 Compiler Design 14 A CFG [Figure: example CFG with nodes B0-B12]
15 Compiler Design 15 Examples
Node | Predecessors | Dominators
B0   | -            | B0
B1   | B0           | B0, B1
B2   | B0           | B0, B2
B3   | B1           | B0, B1, B3
B8   | B4, B5       | B0, B2, B8
B9   | B7, B8       | B0, B9
16 Compiler Design 16 Computation The dominator set of any node Bi can be computed using the following inductive definition.
Dom(Bi) = {B0}, if i = 0
Dom(Bi) = {Bi} ∪ ⋂_{Bj ∈ Pred(Bi)} Dom(Bj), if i > 0
17 Compiler Design 17 Note This equation corresponds to a forward data-flow analysis: the dominator set of Bi depends on the dominator sets of its predecessor nodes. Similarly, there are properties of a node that are defined in terms of its successors; these give rise to backward data-flow analyses.
18 Compiler Design 18 Algorithm
1   Dom(B0) ← {B0}
2   for i ← 1 to n-1
3       Dom(Bi) ← {B0, ..., Bn-1}
4   chflg ← true
5   while chflg = true do
6       chflg ← false
7       for i ← 1 to n-1
8           temp ← {Bi} ∪ ⋂_{Bj ∈ Pred(Bi)} Dom(Bj)
9           if temp ≠ Dom(Bi) then
10              Dom(Bi) ← temp; chflg ← true
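The fixed-point iteration above can be sketched in Python. This is an illustration, not from the slides; representing the CFG as a predecessor map and using block-name strings are assumptions made for the sketch.

```python
def dominators(blocks, pred, entry):
    """Iterative fixed-point computation of Dom(B) for every block B."""
    everything = set(blocks)
    # Initialization: Dom(entry) = {entry}, all others start at the full set.
    dom = {b: ({entry} if b == entry else everything) for b in blocks}
    changed = True
    while changed:
        changed = False
        for b in blocks:
            if b == entry:
                continue
            # Dom(Bi) = {Bi} union the intersection of Dom(Bj) over predecessors Bj
            pdoms = [dom[p] for p in pred[b]]
            new = {b} | (set.intersection(*pdoms) if pdoms else set())
            if new != dom[b]:
                dom[b] = new
                changed = True
    return dom

# Small diamond-shaped CFG: B0 -> B1 -> {B2, B3} -> B4
pred = {'B0': [], 'B1': ['B0'], 'B2': ['B1'], 'B3': ['B1'], 'B4': ['B2', 'B3']}
dom = dominators(list(pred), pred, 'B0')
```

For the diamond above, Dom(B4) = {B0, B1, B4}: B2 and B3 lie on only one of the two paths each, so neither dominates B4.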
19 Compiler Design 19 Note This is an example of a fixed-point computation. The iteration of the while-loop terminates when no Dom(Bi) can be reduced any further. This is the fixed point of the monotone function defined by the equation.
20 Compiler Design 20 Liveness and Safety The algorithm terminates because in step 8 the size of Dom(Bi) may only decrease or remain unchanged, and this cannot continue indefinitely. The correctness of the algorithm comes from the correctness of the definition of Dom(Bi). The speed of termination depends on the ordering of the nodes of the CFG.
21 Compiler Design 21 Live Variable Analysis Using a variable before initialization is a logical error. If a variable x is (i) assigned a value in a 3-address code i, (ii) used as an operand in a 3-address code j, and (iii) not redefined on a path p in the CFG from i to j, then x is live at i and at all points on the path p.
22 Compiler Design 22 Liveness and Next Use For each 3-address code a = b op c, we want to know the liveness and next uses of its variables. It is easy to compute them within a basic block, so the present goal is to determine the liveness and next use for the 3-address codes of a single basic block.
23 Compiler Design 23 Liveness and Next Use in a Basic Block Input: a sequence of 3-address codes of a basic block B. All variables within B are initialized as live on exit from B, but with next use unknown: (live, -).
24 Compiler Design 24 Liveness and Next Use in a Basic Block The algorithm starts at the last 3-address code of B and works backward. It attaches liveness and next-use information to the variables of each assignment statement. This information may be stored in the symbol table.
25 Compiler Design 25 i: x = y op z 1. Get the liveness and next-use information (from the symbol table) of x, y, z and attach it to instruction i. 2. Update the symbol table: x: (not live, no next use), as x is redefined at i. 3. Update the symbol table: y, z: (live, i) - the next use of both y and z is the 3-address code i.
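The three steps above can be sketched as a backward scan in Python. The tuple encoding of 3-address codes and the 'L'/'D' tags (live/dead, with None for "no next use") are assumptions of this sketch, not notation from the slides.

```python
def next_use(block, all_vars):
    """Backward scan over one basic block.

    block: list of (index, target, operand, ...) tuples, in program order.
    Returns the symbol table at block entry and the per-instruction snapshots."""
    table = {v: ('L', None) for v in all_vars}   # live on exit, next use unknown
    info = {}                                    # info attached to each code i
    for i, x, *ops in reversed(block):
        # Step 1: attach current liveness/next-use of x and the operands to i.
        info[i] = {v: table[v] for v in (x, *ops)}
        # Step 2: x is redefined at i, so it is dead before i.
        table[x] = ('D', None)
        # Step 3: the next use of each operand is i.
        for v in ops:
            table[v] = ('L', i)
    return table, info

# Block b3 of the slides: 9: v1 = i; 10: v2 = j; 11: v1 = v1 - v2; 12: i = v1
b3 = [(9, 'v1', 'i'), (10, 'v2', 'j'), (11, 'v1', 'v1', 'v2'), (12, 'i', 'v1')]
entry, per_code = next_use(b3, {'i', 'j', 'v1', 'v2'})
# entry == {'i': ('L', 9), 'j': ('L', 10), 'v1': ('D', None), 'v2': ('D', None)}
```

The result reproduces slide 28: at the beginning of b3, i and j are live with next uses 9 and 10, while v1 and v2 are dead.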
26 Compiler Design 26 Basic Block - b3
9:  L3: v1 = i
10:     v2 = j
11:     v1 = v1 - v2
12:     i = v1
12a:    goto L4
27 Compiler Design 27 Basic Block - b3
symtab: i:(L,U), j:(L,U), v1:(L,U), v2:(L,U)
12: i = v1        # i: (L,U), v1: (L,U)
symtab: i:(D,N), j:(L,U), v1:(L,12), v2:(L,U)
11: v1 = v1 - v2  # v1: (L,12), v2: (L,U)
symtab: i:(D,N), j:(L,U), v1:(L,11), v2:(L,11)
28 Compiler Design 28 Basic Block - b3
symtab: i:(D,N), j:(L,U), v1:(L,11), v2:(L,11)
10: v2 = j        # v2: (L,11), j: (L,U)
symtab: i:(D,N), j:(L,10), v1:(L,11), v2:(D,N)
9: L3: v1 = i     # v1: (L,11), i: (D,N)
symtab: i:(L,9), j:(L,10), v1:(D,N), v2:(D,N)
29 Compiler Design 29 Note - b3
Var | Live at beginning | Next use | Live at end
i   | Yes               | 9        | Yes
j   | Yes               | 10       | Yes
v1  | No                | -        | Yes
v2  | No                | -        | Yes
30 Compiler Design 30 Live-Variable Analysis on a CFG Given a variable x and a point p in a CFG, it is important to know whether x is live at p, i.e., whether the value of x at p will be used on some path in the CFG starting at p. The computation of this information can be formulated as a data-flow equation. But in this case the information is computed opposite to the direction of the control flow - a backward data-flow analysis.
31 Compiler Design 31 Formulation of Data Flow Equation Given a basic block B, LIn(B) and LOut(B) are the sets of all variables that are live at the entry and at the exit of B, respectively. Let ufst(B) be the set of variables whose values are used in B before any definition. Let def(B) be the set of variables that are defined in B.
32 Compiler Design 32 Formulation of Data Flow Equation The variables live at the exit of B are the union of the variables live at the entries of its successor blocks: LOut(B) = ⋃_{S ∈ succ(B)} LIn(S). The variables live at the entry of a successor S include all the variables of ufst(S).
33 Compiler Design 33 Formulation of Data Flow Equation A variable of def(S) cannot be live at the entry of S unless it is also in ufst(S), i.e., used before a definition. Moreover, every variable of LOut(S) \ def(S) is live at the entry of S, so LIn(S) = ufst(S) ∪ (LOut(S) \ def(S)). Putting the two together: LOut(B) = ⋃_{S ∈ succ(B)} (ufst(S) ∪ (LOut(S) \ def(S))).
34 Compiler Design 34 Computation It is necessary to compute the def(B) and ufst(B) sets of each basic block. This is done by examining the 3-address codes of the basic block from the beginning.
35 Compiler Design 35 Computation of ufst(B) and def(B) The following steps are performed for each basic block B with k 3-address codes of assignment type, e.g., x = y op z.
1   ufst(B) ← ∅, def(B) ← ∅
2   for i ← 1 to k
3       if y ∉ def(B) then
4           ufst(B) ← ufst(B) ∪ {y}
5       if z ∉ def(B) then
6           ufst(B) ← ufst(B) ∪ {z}
7       def(B) ← def(B) ∪ {x}
36 Compiler Design 36 Computation of ufst(B) and def(B) For a 3-address code like if x < y goto L there is no change in def(B), but ufst(B) will include x and y unless they are already in def(B). Other types of 3-address codes are handled similarly.
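The ufst/def scan above can be sketched in Python. The (target, operand, operand) tuple encoding, with None marking a missing field (e.g. a copy has one operand, a conditional jump has no target), is an assumption of the sketch.

```python
def ufst_def(block):
    """One forward pass over a block's 3-address codes (target, op1, op2)."""
    ufst, defs = set(), set()
    for a, b, c in block:
        for v in (b, c):
            if v is not None and v not in defs:
                ufst.add(v)          # used before any definition in this block
        if a is not None:
            defs.add(a)              # e.g. conditional jumps have no target
    return ufst, defs

# Block b2 of the slides: v1 = j; v2 = i; v1 = v1 - v2; j = v1
b2 = [('v1', 'j', None), ('v2', 'i', None), ('v1', 'v1', 'v2'), ('j', 'v1', None)]
u, d = ufst_def(b2)
# u == {'i', 'j'} and d == {'v1', 'v2', 'j'}, matching slide 39
```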
37 Compiler Design 37 Computation of LOut(B) It is similar to the computation of Dom(B). 1. LOut(Bi) for i = 0,...,n-1 are initialized to ∅. 2. All LOut(Bi)'s are recomputed until a fixed point is reached. 3. If the CFG corresponds to a function, the formal parameters are in LIn() of the entry block.
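The LOut fixed point can be sketched as follows; the successor map and the per-block ufst/def sets are assumed to be given (they are taken from the slides in the example below).

```python
def live_out(blocks, succ, ufst, defs):
    """LOut(B) = union over successors S of ufst(S) ∪ (LOut(S) \\ def(S))."""
    lout = {b: set() for b in blocks}          # step 1: initialize to empty
    changed = True
    while changed:                             # step 2: iterate to a fixed point
        changed = False
        for b in blocks:
            new = set()
            for s in succ[b]:
                new |= ufst[s] | (lout[s] - defs[s])
            if new != lout[b]:
                lout[b] = new
                changed = True
    return lout

# The running four-block example; 'exit' is the dummy exit block.
succ = {'b1': ['b2', 'b3'], 'b2': ['b4'], 'b3': ['b4'], 'b4': ['b1', 'exit'],
        'exit': []}
ufst = {'b1': {'i', 'j'}, 'b2': {'i', 'j'}, 'b3': {'i', 'j'}, 'b4': {'i', 'j'},
        'exit': set()}
defs = {'b1': {'v1', 'v2'}, 'b2': {'v1', 'v2', 'j'}, 'b3': {'v1', 'v2', 'i'},
        'b4': {'v1', 'v2'}, 'exit': set()}
lout = live_out(list(succ), succ, ufst, defs)
# lout['b1'] == ... == lout['b4'] == {'i', 'j'}, as on slide 44
```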
38 Compiler Design 38 Basic Block - b1
1:  L2: v1 = i
2:      v2 = j
3:      if v1 > v2 goto L3
ufst(b1) = {i,j} and def(b1) = {v1,v2}
39 Compiler Design 39 Basic Block - b2
4:      v1 = j
5:      v2 = i
6:      v1 = v1 - v2
7:      j = v1
8:      goto L4
ufst(b2) = {i,j} and def(b2) = {v1,v2,j}
40 Compiler Design 40 Basic Block - b3
9:  L3: v1 = i
10:     v2 = j
11:     v1 = v1 - v2
12:     i = v1
ufst(b3) = {i,j} and def(b3) = {v1,v2,i}
41 Compiler Design 41 Basic Block - b4
13: L4: v1 = i
14:     v2 = j
15:     if v1 <> v2 goto L2
ufst(b4) = {i,j} and def(b4) = {v1,v2}
42 Compiler Design 42 Control-Flow Graph
43 Compiler Design 43
Entry
b1: L2: v1 = i; v2 = j; if v1 > v2 goto L3
b2: v1 = j; v2 = i; v1 = v1 - v2; j = v1; goto L4
b3: L3: v1 = i; v2 = j; v1 = v1 - v2; i = v1
b4: L4: v1 = i; v2 = j; if v1 <> v2 goto L2
Exit
44 Compiler Design 44 Computation of LOut(bi)
bi | LOut after iteration 0 (init) | after iteration 1
b1 | ∅ | {i,j}
b2 | ∅ | {i,j}
b3 | ∅ | {i,j}
b4 | ∅ | {i,j}
45 Compiler Design 45 Note The variables i, j are live at the exit of all four blocks; they may be kept in registers. We can also detect uninitialized variables through this analysis: if the set of live variables at the exit of the dummy entry node of a CFG is non-empty, then every variable of that set may be used uninitialized.
46 Compiler Design 46 Control-Flow Graph
Ent:              ufst = {},    def = {}
b1: j = 5         ufst = {},    def = {j}
b2: j = j + i     ufst = {i,j}, def = {j}
Ext:              ufst = {},    def = {}
47 Compiler Design 47 Computation of LOut(bi)
bi  | LOut
Ent | {i}
b1  | {i,j}
b2  | ∅
Ext | ∅
The non-empty LOut(Ent) indicates that i is uninitialized.
48 Compiler Design 48 Static Single Assignment: SSA SSA is a naming method that encodes both flow of control and flow of data in a program. Each name is defined by an operation at a particular point in the code. So each use of a name has a unique definition. The flow of control is handled by using a selection function φ.
49 Compiler Design 49 Static Single-Assignment: SSA An SSA representation looks similar to three-address code, with two main differences. Each name is defined only once, hence the name static single assignment. If the same variable is defined on different control paths, the definitions are renamed as distinct variables by adding subscripts to the base name.
50 Compiler Design 50 Static Single-Assignment: SSA When two or more flow-paths join, a φ-function is used to select different names of the same variable on these paths. A φ-function also defines a new name. The arguments of φ-functions of a join-block are ordered according to some order of incoming flow-paths.
51 Compiler Design 51 Three-Address & SSA Codes
Three-address code:
    i = 1
    f = 1
L2: if i > n goto L1
    f = f * i
    i = i + 1
    goto L2
L1: ...
SSA form:
    i0 = 1
    f0 = 1
    if i0 > n goto L1
L2: i1 = φ(i0, i2)
    f1 = φ(f0, f2)
    f2 = f1 * i1
    i2 = i1 + 1
    if i2 <= n goto L2
L1: i3 = φ(i0, i2)
    f3 = φ(f0, f2)
52 Compiler Design 52 Note A φ-function is inserted at the beginning of a basic block where different values of a program variable can arrive along different flow paths. In our example, (i0, f0) are copied to (i1, f1) when the control flows in from the top; otherwise (i2, f2) are copied to (i1, f1).
53 Compiler Design 53 Note All φ-functions at the beginning of a basic block are assumed to be evaluated concurrently, so their ordering does not matter. But this may complicate the implementation of φ-functions. The single-assignment property simplifies code optimization: as a definition is never killed, the value of a name is available on every path starting from the definition.
54 Compiler Design 54 Note A φ-function can take any number of arguments; the number of arguments equals the number of control-flow paths entering the join block. So a φ-function may need more than 3 addresses, and there should be a mechanism to accommodate this in the 3-address data structure.
55 Compiler Design 55 Note Some of the arguments of a φ-function may be undefined during its execution. In our example, i2 is undefined when the loop is entered the first time. But this does not create trouble, as the φ-function selects the argument corresponding to the control-flow path actually taken to enter the join block (where that argument is defined).
56 Compiler Design 56 Building SSA: A Simple Method For every variable name x used in the code, a φ-function x = φ(x,...,x) is inserted at the beginning of each basic block with more than one predecessor. The number of arguments of the φ-function is equal to the number of predecessors of the basic block.
57 Compiler Design 57 Note The ordering of the φ-functions inserted at the beginning of a basic block does not matter, as they are executed concurrently, i.e., they read their input parameters simultaneously and write their results simultaneously. But the process may insert many useless φ-functions.
58 Compiler Design 58 φ-functions
Entry
b1: L2: i = φ(i,i); j = φ(j,j); u = φ(u,u); v = φ(v,v); u = i; v = j; if u > v goto L3
b2: u = j; v = i; u = u - v; j = u; goto L4
b3: L3: u = i; v = j; u = u - v; i = u
b4: L4: i = φ(i,i); j = φ(j,j); u = φ(u,u); v = φ(v,v); u = i; v = j; if u <> v goto L2
Exit
59 Compiler Design 59 After Renaming: SSA
Entry
b1: L2: i2 = φ(i0,i1); j2 = φ(j0,j1); u2 = φ(u0,u1); v2 = φ(v0,v1); u3 = i2; v3 = j2; if u3 > v3 goto L3
b2: u4 = j2; v4 = i2; u5 = u4 - v4; j3 = u5; goto L4
b3: L3: u6 = i2; v5 = j2; u7 = u6 - v5; i3 = u7
b4: L4: i1 = φ(i2,i3); j1 = φ(j3,j2); u8 = φ(u5,u7); v6 = φ(v4,v5); u1 = i1; v1 = j1; if u1 <> v1 goto L2
Exit
60 Compiler Design 60 Note Both in b1 and in b4, the φ-functions for u and v are useless.
61 Compiler Design 61 Reaching Definitions For renaming variables for SSA we need to perform a data-flow analysis called reaching definitions. Each φ-function is also a new definition. A definition d of a name v (d: v = ...) reaches a use i (i: ... = ... v ...) if there is a path from d to i on which v is not redefined.
62 Compiler Design 62 Reaching Definitions Reaches(n) is the set of definitions that reach the beginning of the basic block n. Gen(n) is the set of definitions generated in the basic block n and not killed within it, so they flow out of n. Kill(n) is the set of all definitions (globally) killed in the basic block n.
63 Compiler Design 63 Gen() and Kill()
b1: d1: v1 = i; d2: v2 = j; if v1 > v2 goto L3
    Gen(b1) = {d1,d2}, Kill(b1) = {d3,d4,d5,d7,d8,d9,d11,d12}
b2: d3: v1 = j; d4: v2 = i; d5: v1 = v1 - v2; d6: j = v1; goto L4
    Gen(b2) = {d4,d5,d6}, Kill(b2) = {d1,d2,d3,d7,d8,d9,d11,d12}
b3: d7: L3: v1 = i; d8: v2 = j; d9: v1 = v1 - v2; d10: i = v1
    Gen(b3) = {d8,d9,d10}, Kill(b3) = {d1,d2,d3,d4,d5,d7,d11,d12}
b4: d11: L4: v1 = i; d12: v2 = j; if v1 <> v2 goto L2
    Gen(b4) = {d11,d12}, Kill(b4) = {d1,d2,d3,d4,d5,d7,d8,d9}
64 Compiler Design 64 Reaching Definitions Reaching definitions can be formulated as a forward data-flow problem: reaches(n) = ⋃_{m ∈ pred(n)} (gen(m) ∪ (reaches(m) \ kill(m))). We start with reaches(n) = ∅ for every basic block n.
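This forward fixed point can be sketched like the earlier ones. In the example below, the gen/kill sets are those of slide 63; the dummy entry contributes nothing to b1, so pred(b1) is just b4 (the back edge) in this sketch.

```python
def reaching_definitions(blocks, pred, gen, kill):
    """reaches(n) = union over m in pred(n) of gen(m) ∪ (reaches(m) \\ kill(m))."""
    reaches = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for n in blocks:
            new = set()
            for m in pred[n]:
                new |= gen[m] | (reaches[m] - kill[m])
            if new != reaches[n]:
                reaches[n] = new
                changed = True
    return reaches

pred = {'b1': ['b4'], 'b2': ['b1'], 'b3': ['b1'], 'b4': ['b2', 'b3']}
gen = {'b1': {'d1', 'd2'}, 'b2': {'d4', 'd5', 'd6'},
       'b3': {'d8', 'd9', 'd10'}, 'b4': {'d11', 'd12'}}
kill = {'b1': {'d3', 'd4', 'd5', 'd7', 'd8', 'd9', 'd11', 'd12'},
        'b2': {'d1', 'd2', 'd3', 'd7', 'd8', 'd9', 'd11', 'd12'},
        'b3': {'d1', 'd2', 'd3', 'd4', 'd5', 'd7', 'd11', 'd12'},
        'b4': {'d1', 'd2', 'd3', 'd4', 'd5', 'd7', 'd8', 'd9'}}
r = reaching_definitions(['b1', 'b2', 'b3', 'b4'], pred, gen, kill)
# e.g. r['b1'] == {'d6', 'd10', 'd11', 'd12'}: only the definitions of i, j
# (d6, d10) and b4's own definitions survive around the loop.
```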
65 Compiler Design 65 Building SSA: A Simple Method The φ-functions ensure that only one definition reaches each use. The variables of each use are renamed according to the definition that reaches it. For each φ-function, the argument names are ordered according to the control paths of their arrival.
66 Compiler Design 66 Note The algorithm correctly translates 3-address code to SSA form. But there are many redundant φ-functions, e.g., xi = φ(xj, xj), or the defined name may not be live (x is redefined before use). Redundant φ-functions increase the algorithmic cost.
67 Compiler Design 67 Note There is an algorithm, based on the notion of dominators, that does not generate useless φ-functions. For every block bi it finds the blocks that require φ-functions for a definition d in bi.
68 Compiler Design 68 Note If bi ∈ Dom(bj) and d: x = ... is a definition in bi, then bj does not need a φ-function for x unless x is redefined on some control path that reaches bj. If there is a redefinition of x in some intermediate block bk on a control path between bi and bj, then the redefinition forces a φ-function.
69 Compiler Design 69 Forcing a φ-function A definition d: x = ... in a block bi forces a φ-function for x in a block bj where more than one control path meets (a join point) in the following cases: bi ∈ Dom(bk), where bk is a predecessor of bj, and bi ∉ Dom(bj) \ {bj} (the case bi = bj is allowed). We call the set Dom(bj) \ {bj} the set of strict dominators of bj.
70 Compiler Design 70 Note In our example CFG, a definition in block B2 forces a φ-function in block B9. The collection of blocks where the block bi can force a φ-function is called the dominance frontier of bi, DF(bi). Here B9 ∈ DF(B2).
71 Compiler Design 71 Dominator Tree Given a block bj in a CFG, the block bi ∈ Dom(bj) \ {bj} closest to bj is called the immediate dominator of bj, IDom(bj). For every block bi of a CFG we draw a directed edge from IDom(bi) to bi. This forms a tree with the entry node of the CFG as the root, known as the dominator tree of the CFG.
72 Compiler Design 72 Dominator Tree [Figure: dominator tree with nodes B0-B12]
73 Compiler Design 73 Note The dominator tree of a CFG has all the blocks (nodes) of the CFG. For a node bi, IDom(bi) is the parent of bi in the tree. Dom(bi) is the collection of nodes (blocks) on the path from the root (entry node) to bi, e.g., IDom(B10) = B8 and Dom(B10) = {B0,B2,B8,B10}.
74 Compiler Design 74 Properties: Node in DF If a node (block) bj is in some DF, then multiple control paths meet at bj, i.e., bj is a join point. For each predecessor bk, the node bj ∈ DF(bk), as bj is the first join point after bk and bj is not strictly dominated by bk. Moreover, for each predecessor bk, the node bj ∈ DF(bl) for all bl ∈ Dom(bk) \ Dom(bj).
75 Compiler Design 75 Computing Dominance Frontier 1. For each block bj with multiple incoming control paths (a join point) do the following. 2. For each predecessor bk of bj do the following. 3. Walk up the dominator tree from bk until some bi ∈ Dom(bj) is found. 4. For each node bl on the path from bk to bi, except bi itself, put bj in DF(bl).
76 Compiler Design 76 Algorithm
1   for i ← 0 to n-1 do
2       DF(bi) ← ∅
3   for j ← 0 to n-1 do
4       if join-point(bj) then
5           for each bk ∈ pred(bj) do
6               temp ← bk
7               while temp ≠ IDom(bj) do
8                   DF(temp) ← DF(temp) ∪ {bj}
9                   temp ← IDom(temp)
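The algorithm translates almost directly into Python, given the predecessor map and the immediate dominators. The pred/idom data below are those of the example CFG and its dominator tree from the slides.

```python
def dominance_frontiers(blocks, pred, idom):
    """For each join point bj, walk up the dominator tree from each
    predecessor bk until IDom(bj), adding bj to DF of every node visited."""
    df = {b: set() for b in blocks}
    for bj in blocks:
        if len(pred[bj]) >= 2:              # join point
            for bk in pred[bj]:
                temp = bk
                while temp != idom[bj]:
                    df[temp].add(bj)
                    temp = idom[temp]
    return df

pred = {'B0': [], 'B1': ['B0', 'B6'], 'B2': ['B0', 'B10'], 'B3': ['B1'],
        'B4': ['B2'], 'B5': ['B2'], 'B6': ['B3'], 'B7': ['B1', 'B3'],
        'B8': ['B4', 'B5'], 'B9': ['B7', 'B8'], 'B10': ['B8'],
        'B11': ['B6', 'B10'], 'B12': ['B9', 'B11']}
idom = {'B0': None, 'B1': 'B0', 'B2': 'B0', 'B3': 'B1', 'B4': 'B2',
        'B5': 'B2', 'B6': 'B3', 'B7': 'B1', 'B8': 'B2', 'B9': 'B0',
        'B10': 'B8', 'B11': 'B0', 'B12': 'B0'}
df = dominance_frontiers(list(pred), pred, idom)
# Reproduces slides 81-82, e.g. df['B8'] == {'B2', 'B9', 'B11'}
```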
77 Compiler Design 77 Example CFG
78 Compiler Design 78 [Figure: the example CFG with nodes B0-B12]
79 Compiler Design 79 Example Blocks/nodes that are join points (where multiple control paths meet): B1, B2, B7, B8, B9, B11, B12.
80 Compiler Design 80 Dominator Tree [Figure: dominator tree of the example CFG, nodes B0-B12]
81 Compiler Design 81 Example Dominance Frontier:
Block | Predecessors | DF
B0    | -            | ∅
B1    | B0, B6       | B1, B9, B11
B2    | B0, B10      | B2, B9, B11
B3    | B1           | B1, B7, B11
B4    | B2           | B8
B5    | B2           | B8
B6    | B3           | B1, B11
82 Compiler Design 82 Example Dominance Frontier (cont.):
Block | Predecessors | DF
B7    | B1, B3       | B9
B8    | B4, B5       | B2, B9, B11
B9    | B7, B8       | B12
B10   | B8           | B2, B11
B11   | B6, B10      | B12
B12   | B9, B11      | ∅
83 Compiler Design 83 Example The node B1 has two predecessors, B0 and B6. As B0 = IDom(B1), for the predecessor B0 the while-loop is not entered, so it contributes no node to any dominance frontier. For the other predecessor B6, the loop is entered with temp taking the values B6, B3, B1, so B1 goes into DF(B6), DF(B3), and DF(B1).
84 Compiler Design 84 Placement of φ-function In the first (simple) algorithm, a φ-function x = φ(x,...,x) was introduced for every variable x at the beginning of each join block. In the first refinement, for every x defined in a block bi, we introduce a φ-function at the beginning of each bj ∈ DF(bi).
85 Compiler Design 85 Placement of φ-function If the life span of a variable is restricted to a single block, no φ-function is required for it. Only names global to the CFG may require φ-functions. The union over all blocks B of ufst(B), the set of variables whose values are used before any definition in B, is the set of global names.
86 Compiler Design 86 Algorithm for Global Names The following algorithm computes the set of global names, globvar, and varblklst(x), the list of blocks where the variable x is defined.
87 Compiler Design 87 Algorithm
1   globvar ← ∅
2   for all variables x, varblklst(x) ← ∅
3   for each block B
4       def(B) ← ∅
5       for each 3-address code a = b op c, from the beginning
6           if b ∉ def(B) then globvar ← globvar ∪ {b}
7           if c ∉ def(B) then globvar ← globvar ∪ {c}
8           def(B) ← def(B) ∪ {a}
9           varblklst(a) ← varblklst(a) ∪ {B}
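A sketch of this scan in Python, reusing the (target, operand, operand) tuple encoding assumed earlier (None marks a missing field):

```python
def global_names(blocks):
    """Compute globvar (names used before definition in some block)
    and varblklst (for each name, the set of blocks defining it)."""
    globvar, varblklst = set(), {}
    for name, code in blocks.items():
        defs = set()
        for a, b, c in code:          # 3-address code a = b op c
            for v in (b, c):
                if v is not None and v not in defs:
                    globvar.add(v)    # upward-exposed use: a global name
            if a is not None:
                defs.add(a)
                varblklst.setdefault(a, set()).add(name)
    return globvar, varblklst

# Tiny illustration (block contents are hypothetical):
blocks = {'B0': [('a', 'x', None)],
          'B1': [('y', 'a', None), ('a', 'y', None)]}
g, v = global_names(blocks)
# g == {'x', 'a'}: x is used before definition in B0, a in B1;
# y is defined before its use in B1, so it is not global.
```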
88 Compiler Design 88 Algorithm for Inserting φ-function For each global name x ∈ globvar there is a list of blocks, varblklst(x), where x is defined. For each bi ∈ varblklst(x), a φ-function for x is inserted in every block bj ∈ DF(bi). But then every inserted φ-function in a block bj is itself a new definition of x, and gives rise to new φ-functions in the blocks bk ∈ DF(bj).
89 Compiler Design 89 Algorithm
1   for each x ∈ globvar
2       curblklst ← varblklst(x)
3       for each bi ∈ curblklst
4           for each bj ∈ DF(bi)
5               if bj does not have x = φ(x,...,x), insert one
6               curblklst ← curblklst ∪ {bj}
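A worklist rendering of this algorithm in Python; varblklst and the dominance frontiers are assumed precomputed (below they are the example values from the slides). Note the sketch also reports the exit block B12, which the slides deliberately ignore.

```python
def insert_phis(globvar, varblklst, df):
    """Worklist φ-insertion: a φ inserted in bj is itself a new definition
    of x, so bj goes back on the worklist."""
    phi_blocks = {x: set() for x in globvar}   # x -> blocks getting x = φ(x,...,x)
    for x in globvar:
        worklist = list(varblklst[x])
        on_list = set(worklist)
        while worklist:
            bi = worklist.pop()
            for bj in df[bi]:
                if bj not in phi_blocks[x]:
                    phi_blocks[x].add(bj)
                    if bj not in on_list:      # new definition site of x
                        on_list.add(bj)
                        worklist.append(bj)
    return phi_blocks

df = {'B0': set(), 'B1': {'B1', 'B9', 'B11'}, 'B2': {'B2', 'B9', 'B11'},
      'B3': {'B1', 'B7', 'B11'}, 'B4': {'B8'}, 'B5': {'B8'},
      'B6': {'B1', 'B11'}, 'B7': {'B9'}, 'B8': {'B2', 'B9', 'B11'},
      'B9': {'B12'}, 'B10': {'B2', 'B11'}, 'B11': {'B12'}, 'B12': set()}
varblklst = {'a': {'B0', 'B2', 'B7', 'B9', 'B11'},
             'b': {'B0', 'B1', 'B6', 'B7', 'B8', 'B11'}}
phis = insert_phis({'a', 'b'}, varblklst, df)
# phis['a'] == {'B2', 'B9', 'B11', 'B12'} (slide 93, before ignoring B12);
# phis['b'] == {'B1', 'B2', 'B9', 'B11', 'B12'} (slide 98 plus B12).
```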
90 Compiler Design 90 Example It is necessary to put more flesh on the basic blocks of the example CFG.
B0: a = ...; b = ...
B1: b = ...; c = ...; (a > c)? goto 3,7
B2: a = ...; d = ...; (d > b)? goto 4,5
B3: d = ...; x = ...
B4: x = a + 1
B5: x = b + 1
B6: y = ...; b = ...; (y = b)? goto 1,11
B7: a = ...; b = ...
B8: b = b + 1; y = ...; (y = b)? goto 9,10
91 Compiler Design 91 Example
B9: z = b + d; a = z + d
B10: d = d + 1; (d < y)? goto 2,11
B11: a = b + y; b = d + x; c = a + b
92 Compiler Design 92 globvar and varblklst Global names: a, b, c, d, x, y (though, as pointed out by Ashray Sudhir, c is not actually a global name); z is local to B9. The lists of blocks where each variable is defined are:
Var | Blocks
a   | B0, B2, B7, B9, B11
b   | B0, B1, B6, B7, B8, B11
c   | B1, B11
d   | B2, B3, B10
x   | B3, B4, B5
y   | B6, B8
z   | B9
93 Compiler Design 93 Placement of φ-function Consider the variable a. It is defined in B0, B2, B7, B9, B11. These definitions induce φ-functions for a in the blocks of DF(B0) ∪ DF(B2) ∪ DF(B7) ∪ DF(B9) ∪ DF(B11) = ∅ ∪ {B2,B9,B11} ∪ {B9} ∪ {B12} ∪ {B12} = {B2,B9,B11,B12}. As B12 is the exit block, we ignore it.
94 Compiler Design 94 Placement of φ-function As {B2,B9,B11} ⊆ varblklst(a), no new block is added to curblklst. After introducing a = φ(a,a), the blocks are
B2: a = φ(a,a); a = ...; d = ...; (d > b)? goto 4,5
B9: a = φ(a,a); z = b + d; a = z + d
B11: a = φ(a,a); a = b + y; b = d + x; c = a + b
95 Compiler Design 95 Note The φ-functions in B9 and B11 are redundant, as a is not live at the beginning of these two blocks. The live-variable analysis result may be used to detect that a is not live at the beginning of B9 and B11; the check can be done at step 5 of the φ-function insertion algorithm.
96 Compiler Design 96 Placement of φ-function So the blocks B2, B9, B11 are
B2: a = φ(a,a); a = ...; d = ...; (d > b)? goto 4,5
B9: z = b + d; a = z + d
B11: a = b + y; b = d + x; c = a + b
97 Compiler Design 97 Placement of φ-function Consider the variable b. It is defined in B0, B1, B6, B7, B8, B11. φ-functions for b are introduced in DF(B0) ∪ DF(B1) ∪ DF(B6) ∪ DF(B7) ∪ DF(B8) ∪ DF(B11) = ∅ ∪ {B1,B9,B11} ∪ {B1,B11} ∪ {B9} ∪ {B2,B9,B11} ∪ {B12} = {B1,B2,B9,B11,B12}.
98 Compiler Design 98 Placement of φ-function The block B2 is added to curblklst, so DF(B2) = {B2,B9,B11} is also added to the list of blocks where b = φ(b,b) is introduced. Ignoring the exit block B12, the final set of blocks is {B1,B2,B9,B11}.
99 Compiler Design 99 Placement of φ-function Blocks B1, B2, B9, B11 after insertion of b = φ(b,b) are
B1: b = φ(b,b); b = ...; c = ...; (a > c)? goto 3,7
B2: b = φ(b,b); a = ...; d = ...; (d > b)? goto 4,5
B9: b = φ(b,b); z = b + d; a = z + d
B11: b = φ(b,b); a = b + y; b = d + x; c = a + b
100 Compiler Design 100 Placement of φ-function The variable c introduces φ-functions in B1, B9, B11; all are useless, but the given algorithm cannot detect that. The variable d introduces φ-functions in B1, B2, B7, B9, B11; they are redundant in B1 and B7, which our algorithm also cannot detect.
101 Compiler Design 101 Placement of φ-function The variable x introduces φ-functions in B1, B2, B7, B8, B9, B11; again they are redundant in B1 and B7. The variable y introduces φ-functions in B1, B2, B9, B11, out of which those in B1, B2 and B9 are redundant.
102 Compiler Design 102 Renaming of Variables Each global name is treated as a base name, e.g., x, and each new definition of it is differentiated using a subscript, e.g., x0, x1, .... One algorithm for renaming traverses the dominator tree in preorder. It works as follows.
103 Compiler Design 103 Renaming of Variables The variables defined by the φ-functions at the beginning of the block b are renamed first. After that, the 3-address codes of b are visited in order. If a 3-address code is x = y op z and the current indices of x, y, z are i, j, k respectively, then after renaming it becomes x_{i+1} = y_j op z_k, and the current indices are modified to i+1, j, k.
104 Compiler Design 104 Renaming of Variables After renaming the variables of the 3-address codes of block b, the appropriate arguments of the φ-functions in the CFG successors of b are renamed. Then renaming is called recursively on each child block of b in the dominator tree.
105 Compiler Design 105 Renaming of Variables On return from the recursive call of renaming on a block b, the state of the name indices is restored to the values that existed before entering the block. It is necessary to maintain a stack of indices for every global name.
106 Compiler Design 106 Data Structure For each global name we use a separate stack. Restoring the status at the end of the recursive call requires a second pass over the block b to pop its definitions; this keeps the total memory requirement of the stacks small. We also use a counter that holds the current index of each global name.
107 Compiler Design 107 Algorithm
1   for each x ∈ globvar
2       cind[x] ← 0
3       stack[x] ← empty
4   rename(b0)
108 Compiler Design 108 sub(x) Returns the current subscript for the global name x, pushes it on the stack of x, and increments the counter.
sub(x)
1   i ← cind[x]
2   push(stack[x], i)
3   cind[x] ← cind[x] + 1
4   return i
109 Compiler Design 109 rename(b) 1. For each φ-function x = φ(...) of block b, rewrite x as x_{sub(x)}. 2. For each 3-address code, in order: where y is an operand, change it to y_i, where i = top(stack[y]). 3. If x is the target of a 3-address code, i.e., x = ..., it is replaced by x_{sub(x)}.
110 Compiler Design 110 rename(b) 4. For each successor block of b in the CFG, fill in the appropriate arguments of its φ-functions. 5. Recursively call rename() on the dominator-tree children of b. 6. For each definition of x in block b, pop(stack[x]).
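Steps 1-6 can be sketched as a recursive Python function. The IR encoding (dicts of 'phis' and 'code', names becoming (base, subscript) pairs) is an assumption of the sketch; it also assumes every use is reached by some definition.

```python
def rename_all(blocks, entry, succ, pred, children, globvar):
    """Preorder renaming over the dominator tree.

    blocks[b] = {'phis': [{'var': v, 'args': [...]}], 'code': [(x, y, z)]};
    children = dominator-tree children; renamed names become (base, subscript)."""
    cind = {x: 0 for x in globvar}
    stack = {x: [] for x in globvar}

    def sub(x):                            # new subscript: push and increment
        i = cind[x]
        stack[x].append(i)
        cind[x] += 1
        return i

    def cur(v):                            # current name of an operand
        return (v, stack[v][-1]) if v in stack and stack[v] else v

    def rename(b):
        pushed = []
        blk = blocks[b]
        for phi in blk['phis']:            # step 1: rename φ targets
            phi['target'] = (phi['var'], sub(phi['var']))
            pushed.append(phi['var'])
        code = []
        for x, y, z in blk['code']:        # steps 2-3: operands, then target
            y, z = cur(y), cur(z)
            if x in stack:
                x = (x, sub(x))
                pushed.append(x[0])
            code.append((x, y, z))
        blk['code'] = code
        for s in succ[b]:                  # step 4: fill φ args of CFG successors
            slot = pred[s].index(b)
            for phi in blocks[s]['phis']:
                phi['args'][slot] = cur(phi['var'])
        for c in children[b]:              # step 5: recurse down the dominator tree
            rename(c)
        for x in pushed:                   # step 6: restore the stacks
            stack[x].pop()

    rename(entry)

# Tiny loop: B0 defines x, B1 (a self-loop join) has a φ for x.
blocks = {'B0': {'phis': [], 'code': [('x', None, None)]},
          'B1': {'phis': [{'var': 'x', 'args': [None, None]}],
                 'code': [('x', 'x', None)]}}
rename_all(blocks, 'B0', {'B0': ['B1'], 'B1': ['B1']},
           {'B0': [], 'B1': ['B0', 'B1']}, {'B0': ['B1'], 'B1': []}, {'x'})
# B1's φ becomes x1 = φ(x0, x2); B1's code becomes x2 = x1.
```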
111 Compiler Design 111 Example In our example we have six global names: globvar = {a,b,c,d,x,y}. The initial cind and stack[u] are
u | cind | stack[u]
a | 0    | (empty)
b | 0    | (empty)
c | 0    | (empty)
d | 0    | (empty)
x | 0    | (empty)
y | 0    | (empty)
112 Compiler Design 112 rename(b0)
Old B0: a = ...; b = ...
New B0: a0 = ...; b0 = ...
Modified cind and stack[u]:
u | cind | stack[u]
a | 1    | a0
b | 1    | b0
c | 0    | (empty)
d | 0    | (empty)
x | 0    | (empty)
y | 0    | (empty)
113 Compiler Design 113 CFG Successors of B0
B1: b = φ(b,b0); c = φ(c,-); d = φ(d,-); x = φ(x,-); y = φ(y,-); b = ...; c = ...; (a > c)? goto 3,7
B2: a = φ(a0,a); b = φ(b0,b); d = φ(-,d); x = φ(-,x); y = φ(-,y); a = ...; d = ...; (d > b)? goto 4,5
114 Compiler Design 114 rename(b1): DT Successor of B0
B1: b1 = φ(b,b0); c0 = φ(c,-); d0 = φ(d,-); x0 = φ(x,-); y0 = φ(y,-); b2 = ...; c1 = ...; (a0 > c1)? goto 3,7
115 Compiler Design 115 Modified cind and stack[u]
u | cind | stack[u]
a | 1    | a0
b | 3    | b0 b1 b2
c | 2    | c0 c1
d | 1    | d0
x | 1    | x0
y | 1    | y0
116 Compiler Design 116 CFG Successors of B1
B3: d = ...; x = ...
B7: d = φ(d,d0); x = φ(x,x0)
117 Compiler Design 117 rename(b3): DT Successor of B1
B3: d1 = ...; x1 = ...
118 Compiler Design 118 Modified cind and stack[u]
u | cind | stack[u]
a | 1    | a0
b | 3    | b0 b1 b2
c | 2    | c0 c1
d | 2    | d0 d1
x | 2    | x0 x1
y | 1    | y0
119 Compiler Design 119 CFG Successors of B3
B6: y = ...; b = ...
B7: d = φ(d1,d0); x = φ(x1,x0)
120 Compiler Design 120 rename(b6): DT Successor of B3
B6: y1 = ...; b3 = ...
121 Compiler Design 121 Modified cind and stack[u]
u | cind | stack[u]
a | 1    | a0
b | 4    | b0 b1 b2 b3
c | 2    | c0 c1
d | 2    | d0 d1
x | 2    | x0 x1
y | 2    | y0 y1
122 Compiler Design 122 CFG Successors of B6
B1: b1 = φ(b3,b0); c0 = φ(c1,-); d0 = φ(d1,-); x0 = φ(x1,-); y0 = φ(y1,-); b2 = ...; c1 = ...; (a0 > c1)? goto 3,7
B11: a = φ(a0,a); b = φ(b3,b); c = φ(c1,c); d = φ(d1,d); x = φ(x1,x); y = φ(y1,y); a = b + y; b = d + x
123 Compiler Design 123 Return from rename(b6) There are no children of B6 in the dominator tree, so the stack entries pushed in B6 are popped. Again, B6 is the only child of B3 in the dominator tree, so the stack entries pushed in B3 are also popped.
124 Compiler Design 124 stack[u] after pop-b6
u | cind | stack[u]
a | 1    | a0
b | 4    | b0 b1 b2
c | 2    | c0 c1
d | 2    | d0 d1
x | 2    | x0 x1
y | 2    | y0
b3 and y1 are removed.
125 Compiler Design 125 stack[u] after pop-b3
u | cind | stack[u]
a | 1    | a0
b | 4    | b0 b1 b2
c | 2    | c0 c1
d | 2    | d0
x | 2    | x0
y | 2    | y0
d1 and x1 are removed.
126 Compiler Design 126 Note The next step is to rename B7, and so on. Finally, it is necessary to translate the SSA form back to 3-address code.
127 Compiler Design 127 Removing φ-function Let a block b contain xi = φ(xj, xk), where the value is xj when the control path to b comes from block bu, and xk when it comes from block bv. The φ-function xi = φ(xj, xk) in block b is removed by inserting the copy xi = xj at the end of bu and xi = xk at the end of bv.
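As a sketch, φ-elimination by copy insertion might look as follows; the IR conventions (φ args listed in predecessor order, None for an undefined argument that needs no copy) are assumptions of the sketch.

```python
def remove_phis(blocks, pred):
    """Replace each φ by copies at the end of the corresponding predecessors."""
    for b, blk in blocks.items():
        for phi in blk['phis']:
            # The i-th φ argument arrives from the i-th predecessor of b.
            for p, arg in zip(pred[b], phi['args']):
                if arg is not None:               # undefined args need no copy
                    blocks[p]['code'].append((phi['target'], arg))   # xi = xj
        blk['phis'] = []

# b1 = φ(b3, b0) in B1, with predecessors B6 (giving b3) and B0 (giving b0):
blocks = {'B0': {'phis': [], 'code': []},
          'B6': {'phis': [], 'code': []},
          'B1': {'phis': [{'target': ('b', 1), 'args': [('b', 3), ('b', 0)]}],
                 'code': []}}
remove_phis(blocks, {'B0': [], 'B6': [], 'B1': ['B6', 'B0']})
# B6 gains the copy b1 = b3 and B0 gains b1 = b0, as on slide 129.
```

Two caveats this sketch ignores: the copies must be placed before the predecessor's terminating branch, and the concurrent semantics of φ-functions may require temporaries when one φ reads another's target (the classic "swap problem").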
128 Compiler Design 128 Example: B1 and Its Two Predecessors
B0: a0 = ...; b0 = ...
B6: y1 = ...; b3 = ...
B1: b1 = φ(b3,b0); c0 = φ(c1,-); d0 = φ(d1,-); x0 = φ(x1,-); y0 = φ(y1,-); b2 = ...; c1 = ...; (a0 > c1)? goto 3,7
129 Compiler Design 129 Example: Replacing φ from B1
B0: a0 = ...; b0 = ...; b1 = b0
B6: y1 = ...; b3 = ...; b1 = b3; c0 = c1; d0 = d1; x0 = x1; y0 = y1
B1: b2 = ...; c1 = ...; (a0 > c1)? goto 3,7
130 Compiler Design 130 References [KDCLT] Engineering a Compiler by Keith D. Cooper & Linda Torczon, 2nd ed., Morgan Kaufmann Pub., 2012.
MIT 6.035 Introduction to Dataflow Analysis Martin Rinard Laboratory for Computer Science Massachusetts Institute of Technology Dataflow Analysis Used to determine properties of program that involve multiple
More informationWe can express this in dataflow equations using gen and kill sets, where the sets are now sets of expressions.
Available expressions Suppose we want to do common-subexpression elimination; that is, given a program that computes x y more than once, can we eliminate one of the duplicate computations? To find places
More informationCompiler Optimizations. Chapter 8, Section 8.5 Chapter 9, Section 9.1.7
Compiler Optimizations Chapter 8, Section 8.5 Chapter 9, Section 9.1.7 2 Local vs. Global Optimizations Local: inside a single basic block Simple forms of common subexpression elimination, dead code elimination,
More informationELEC 876: Software Reengineering
ELEC 876: Software Reengineering () Dr. Ying Zou Department of Electrical & Computer Engineering Queen s University Compiler and Interpreter Compiler Source Code Object Compile Execute Code Results data
More informationWe can express this in dataflow equations using gen and kill sets, where the sets are now sets of expressions.
Available expressions Suppose we want to do common-subexpression elimination; that is, given a program that computes x y more than once, can we eliminate one of the duplicate computations? To find places
More informationCompiler Design. Fall Control-Flow Analysis. Prof. Pedro C. Diniz
Compiler Design Fall 2015 Control-Flow Analysis Sample Exercises and Solutions Prof. Pedro C. Diniz USC / Information Sciences Institute 4676 Admiralty Way, Suite 1001 Marina del Rey, California 90292
More informationECE 5775 High-Level Digital Design Automation Fall More CFG Static Single Assignment
ECE 5775 High-Level Digital Design Automation Fall 2018 More CFG Static Single Assignment Announcements HW 1 due next Monday (9/17) Lab 2 will be released tonight (due 9/24) Instructor OH cancelled this
More informationA main goal is to achieve a better performance. Code Optimization. Chapter 9
1 A main goal is to achieve a better performance Code Optimization Chapter 9 2 A main goal is to achieve a better performance source Code Front End Intermediate Code Code Gen target Code user Machineindependent
More informationMiddle End. Code Improvement (or Optimization) Analyzes IR and rewrites (or transforms) IR Primary goal is to reduce running time of the compiled code
Traditional Three-pass Compiler Source Code Front End IR Middle End IR Back End Machine code Errors Code Improvement (or Optimization) Analyzes IR and rewrites (or transforms) IR Primary goal is to reduce
More informationLecture 4 Introduction to Data Flow Analysis
Lecture 4 Introduction to Data Flow Analysis I. Structure of data flow analysis II. Example 1: Reaching definition analysis III. Example 2: Liveness analysis IV. Generalization What is Data Flow Analysis?
More informationIntroduction. Data-flow Analysis by Iteration. CSc 553. Principles of Compilation. 28 : Optimization III
CSc 553 Principles of Compilation 28 : Optimization III Introduction Department of Computer Science University of Arizona collberg@gmail.com Copyright c 2011 Christian Collberg Computing Data-Flow Info.
More informationDepartment of Computer Sciences CS502. Purdue University is an Equal Opportunity/Equal Access institution.
Reaching Definition Analysis CS502 The most fundamental dataflow information concerns the definition (or generation) and the use of values. When an operation modifies a variable at program point p, we
More informationIntroduction to Machine-Independent Optimizations - 6
Introduction to Machine-Independent Optimizations - 6 Machine-Independent Optimization Algorithms Department of Computer Science and Automation Indian Institute of Science Bangalore 560 012 NPTEL Course
More informationReview; questions Basic Analyses (2) Assign (see Schedule for links)
Class 2 Review; questions Basic Analyses (2) Assign (see Schedule for links) Representation and Analysis of Software (Sections -5) Additional reading: depth-first presentation, data-flow analysis, etc.
More informationControl Flow Analysis
COMP 6 Program Analysis and Transformations These slides have been adapted from http://cs.gmu.edu/~white/cs60/slides/cs60--0.ppt by Professor Liz White. How to represent the structure of the program? Based
More informationData-flow Analysis. Y.N. Srikant. Department of Computer Science and Automation Indian Institute of Science Bangalore
Department of Computer Science and Automation Indian Institute of Science Bangalore 560 012 NPTEL Course on Compiler Design Data-flow analysis These are techniques that derive information about the flow
More informationCompiler Optimizations. Chapter 8, Section 8.5 Chapter 9, Section 9.1.7
Compiler Optimizations Chapter 8, Section 8.5 Chapter 9, Section 9.1.7 2 Local vs. Global Optimizations Local: inside a single basic block Simple forms of common subexpression elimination, dead code elimination,
More informationCS 406/534 Compiler Construction Putting It All Together
CS 406/534 Compiler Construction Putting It All Together Prof. Li Xu Dept. of Computer Science UMass Lowell Fall 2004 Part of the course lecture notes are based on Prof. Keith Cooper, Prof. Ken Kennedy
More informationControl Flow Analysis & Def-Use. Hwansoo Han
Control Flow Analysis & Def-Use Hwansoo Han Control Flow Graph What is CFG? Represents program structure for internal use of compilers Used in various program analyses Generated from AST or a sequential
More informationLecture 2. Introduction to Data Flow Analysis
Lecture 2 Introduction to Data Flow Analysis I II III Example: Reaching definition analysis Example: Liveness Analysis A General Framework (Theory in next lecture) Reading: Chapter 9.2 Advanced Compilers
More informationAnticipation-based partial redundancy elimination for static single assignment form
SOFTWARE PRACTICE AND EXPERIENCE Softw. Pract. Exper. 00; : 9 Published online 5 August 00 in Wiley InterScience (www.interscience.wiley.com). DOI: 0.00/spe.68 Anticipation-based partial redundancy elimination
More informationTopic I (d): Static Single Assignment Form (SSA)
Topic I (d): Static Single Assignment Form (SSA) 621-10F/Topic-1d-SSA 1 Reading List Slides: Topic Ix Other readings as assigned in class 621-10F/Topic-1d-SSA 2 ABET Outcome Ability to apply knowledge
More informationComputing Static Single Assignment (SSA) Form. Control Flow Analysis. Overview. Last Time. Constant propagation Dominator relationships
Control Flow Analysis Last Time Constant propagation Dominator relationships Today Static Single Assignment (SSA) - a sparse program representation for data flow Dominance Frontier Computing Static Single
More informationSSA. Stanford University CS243 Winter 2006 Wei Li 1
SSA Wei Li 1 Overview SSA Representation SSA Construction Converting out of SSA 2 Static Single Assignment Each variable has only one reaching definition. When two definitions merge, a Ф function is introduced
More informationDataflow analysis (ctd.)
Dataflow analysis (ctd.) Available expressions Determine which expressions have already been evaluated at each point. A expression x+y is available at point p if every path from the entry to p evaluates
More informationCS577 Modern Language Processors. Spring 2018 Lecture Optimization
CS577 Modern Language Processors Spring 2018 Lecture Optimization 1 GENERATING BETTER CODE What does a conventional compiler do to improve quality of generated code? Eliminate redundant computation Move
More informationData-flow Analysis - Part 2
- Part 2 Department of Computer Science Indian Institute of Science Bangalore 560 012 NPTEL Course on Compiler Design Data-flow analysis These are techniques that derive information about the flow of data
More informationControl flow and loop detection. TDT4205 Lecture 29
1 Control flow and loop detection TDT4205 Lecture 29 2 Where we are We have a handful of different analysis instances None of them are optimizations, in and of themselves The objective now is to Show how
More informationStatic single assignment
Static single assignment Control-flow graph Loop-nesting forest Static single assignment SSA with dominance property Unique definition for each variable. Each definition dominates its uses. Static single
More informationFoundations of Dataflow Analysis
Foundations of Dataflow Analysis 15-745 Optimizing Compilers Spring 2006 Peter Lee Ingredients of a dataflow analysis direction: forward or backward flow/transfer function combining ( meet ) operator dataflow
More informationCompiler Optimization and Code Generation
Compiler Optimization and Code Generation Professor: Sc.D., Professor Vazgen Melikyan 1 Course Overview Introduction: Overview of Optimizations 1 lecture Intermediate-Code Generation 2 lectures Machine-Independent
More informationLecture 23 CIS 341: COMPILERS
Lecture 23 CIS 341: COMPILERS Announcements HW6: Analysis & Optimizations Alias analysis, constant propagation, dead code elimination, register allocation Due: Wednesday, April 25 th Zdancewic CIS 341:
More informationCompiler Passes. Optimization. The Role of the Optimizer. Optimizations. The Optimizer (or Middle End) Traditional Three-pass Compiler
Compiler Passes Analysis of input program (front-end) character stream Lexical Analysis Synthesis of output program (back-end) Intermediate Code Generation Optimization Before and after generating machine
More informationThe Static Single Assignment Form:
The Static Single Assignment Form: Construction and Application to Program Optimizations - Part 3 Department of Computer Science Indian Institute of Science Bangalore 560 012 NPTEL Course on Compiler Design
More informationCalvin Lin The University of Texas at Austin
Loop Invariant Code Motion Last Time Loop invariant code motion Value numbering Today Finish value numbering More reuse optimization Common subession elimination Partial redundancy elimination Next Time
More informationAdvanced Compilers CMPSCI 710 Spring 2003 Computing SSA
Advanced Compilers CMPSCI 70 Spring 00 Computing SSA Emery Berger University of Massachusetts, Amherst More SSA Last time dominance SSA form Today Computing SSA form Criteria for Inserting f Functions
More informationHomework Assignment #1 Sample Solutions
Homework Assignment # Sample Solutions Due Monday /, at the start of lecture. For the following control flow graph, for the purposes of a forward data flow analysis (such as available expressions), show
More informationData Flow Information. already computed
Data Flow Information Determine if Determine if a constant in loop modifies Determine if expression already computed Determine if not used later in program Data Flow Equations Local Information: Gen(B):
More informationModule 14: Approaches to Control Flow Analysis Lecture 27: Algorithm and Interval. The Lecture Contains: Algorithm to Find Dominators.
The Lecture Contains: Algorithm to Find Dominators Loop Detection Algorithm to Detect Loops Extended Basic Block Pre-Header Loops With Common eaders Reducible Flow Graphs Node Splitting Interval Analysis
More informationCOSE312: Compilers. Lecture 20 Data-Flow Analysis (2)
COSE312: Compilers Lecture 20 Data-Flow Analysis (2) Hakjoo Oh 2017 Spring Hakjoo Oh COSE312 2017 Spring, Lecture 20 June 6, 2017 1 / 18 Final Exam 6/19 (Mon), 15:30 16:45 (in class) Do not be late. Coverage:
More informationData Structures and Algorithms in Compiler Optimization. Comp314 Lecture Dave Peixotto
Data Structures and Algorithms in Compiler Optimization Comp314 Lecture Dave Peixotto 1 What is a compiler Compilers translate between program representations Interpreters evaluate their input to produce
More informationStatic Single Assignment (SSA) Form
Static Single Assignment (SSA) Form A sparse program representation for data-flow. CSL862 SSA form 1 Computing Static Single Assignment (SSA) Form Overview: What is SSA? Advantages of SSA over use-def
More informationCS202 Compiler Construction
CS202 Compiler Construction April 17, 2003 CS 202-33 1 Today: more optimizations Loop optimizations: induction variables New DF analysis: available expressions Common subexpression elimination Copy propogation
More informationDependence and Data Flow Models. (c) 2007 Mauro Pezzè & Michal Young Ch 6, slide 1
Dependence and Data Flow Models (c) 2007 Mauro Pezzè & Michal Young Ch 6, slide 1 Why Data Flow Models? Models from Chapter 5 emphasized control Control flow graph, call graph, finite state machines We
More informationLoops. Lather, Rinse, Repeat. CS4410: Spring 2013
Loops or Lather, Rinse, Repeat CS4410: Spring 2013 Program Loops Reading: Appel Ch. 18 Loop = a computation repeatedly executed until a terminating condition is reached High-level loop constructs: While
More informationCS553 Lecture Generalizing Data-flow Analysis 3
Generalizing Data-flow Analysis Announcements Project 2 writeup is available Read Stephenson paper Last Time Control-flow analysis Today C-Breeze Introduction Other types of data-flow analysis Reaching
More informationCS202 Compiler Construction
S202 ompiler onstruction pril 15, 2003 S 202-32 1 ssignment 11 (last programming assignment) Loop optimizations S 202-32 today 2 ssignment 11 Final (necessary) step in compilation! Implement register allocation
More informationFINALTERM EXAMINATION Fall 2009 CS301- Data Structures Question No: 1 ( Marks: 1 ) - Please choose one The data of the problem is of 2GB and the hard
FINALTERM EXAMINATION Fall 2009 CS301- Data Structures Question No: 1 The data of the problem is of 2GB and the hard disk is of 1GB capacity, to solve this problem we should Use better data structures
More informationIntermediate Representations Part II
Intermediate Representations Part II Types of Intermediate Representations Three major categories Structural Linear Hybrid Directed Acyclic Graph A directed acyclic graph (DAG) is an AST with a unique
More informationExample of Global Data-Flow Analysis
Example of Global Data-Flow Analysis Reaching Definitions We will illustrate global data-flow analysis with the computation of reaching definition for a very simple and structured programming language.
More informationSSA Construction. Daniel Grund & Sebastian Hack. CC Winter Term 09/10. Saarland University
SSA Construction Daniel Grund & Sebastian Hack Saarland University CC Winter Term 09/10 Outline Overview Intermediate Representations Why? How? IR Concepts Static Single Assignment Form Introduction Theory
More informationWhy Data Flow Models? Dependence and Data Flow Models. Learning objectives. Def-Use Pairs (1) Models from Chapter 5 emphasized control
Why Data Flow Models? Dependence and Data Flow Models Models from Chapter 5 emphasized control Control flow graph, call graph, finite state machines We also need to reason about dependence Where does this
More informationLive Variable Analysis. Work List Iterative Algorithm Rehashed
Putting Data Flow Analysis to Work Last Time Iterative Worklist Algorithm via Reaching Definitions Why it terminates. What it computes. Why it works. How fast it goes. Today Live Variable Analysis (backward
More informationCompiler Construction 2010/2011 Loop Optimizations
Compiler Construction 2010/2011 Loop Optimizations Peter Thiemann January 25, 2011 Outline 1 Loop Optimizations 2 Dominators 3 Loop-Invariant Computations 4 Induction Variables 5 Array-Bounds Checks 6
More informationCompiler Structure. Data Flow Analysis. Control-Flow Graph. Available Expressions. Data Flow Facts
Compiler Structure Source Code Abstract Syntax Tree Control Flow Graph Object Code CMSC 631 Program Analysis and Understanding Fall 2003 Data Flow Analysis Source code parsed to produce AST AST transformed
More informationLiveness Analysis and Register Allocation. Xiao Jia May 3 rd, 2013
Liveness Analysis and Register Allocation Xiao Jia May 3 rd, 2013 1 Outline Control flow graph Liveness analysis Graph coloring Linear scan 2 Basic Block The code in a basic block has: one entry point,
More informationCSC D70: Compiler Optimization LICM: Loop Invariant Code Motion
CSC D70: Compiler Optimization LICM: Loop Invariant Code Motion Prof. Gennady Pekhimenko University of Toronto Winter 2018 The content of this lecture is adapted from the lectures of Todd Mowry and Phillip
More informationCS153: Compilers Lecture 17: Control Flow Graph and Data Flow Analysis
CS153: Compilers Lecture 17: Control Flow Graph and Data Flow Analysis Stephen Chong https://www.seas.harvard.edu/courses/cs153 Announcements Project 5 out Due Tuesday Nov 13 (14 days) Project 6 out Due
More informationIntroduction to Optimization. CS434 Compiler Construction Joel Jones Department of Computer Science University of Alabama
Introduction to Optimization CS434 Compiler Construction Joel Jones Department of Computer Science University of Alabama Copyright 2003, Keith D. Cooper, Ken Kennedy & Linda Torczon, all rights reserved.
More information20b -Advanced-DFA. J. L. Peterson, "Petri Nets," Computing Surveys, 9 (3), September 1977, pp
State Propagation Reading assignment J. L. Peterson, "Petri Nets," Computing Surveys, 9 (3), September 1977, pp. 223-252. Sections 1-4 For reference only M. Pezzè, R. N. Taylor and M. Young, Graph Models
More informationPage # 20b -Advanced-DFA. Reading assignment. State Propagation. GEN and KILL sets. Data Flow Analysis
b -Advanced-DFA Reading assignment J. L. Peterson, "Petri Nets," Computing Surveys, 9 (3), September 977, pp. 3-5. Sections -4 State Propagation For reference only M. Pezzè, R. N. Taylor and M. Young,
More informationCSE Section 10 - Dataflow and Single Static Assignment - Solutions
CSE 401 - Section 10 - Dataflow and Single Static Assignment - Solutions 1. Dataflow Review For each of the following optimizations, list the dataflow analysis that would be most directly applicable. You
More informationIntroduction to Machine-Independent Optimizations - 4
Introduction to Machine-Independent Optimizations - 4 Department of Computer Science and Automation Indian Institute of Science Bangalore 560 012 NPTEL Course on Principles of Compiler Design Outline of
More information8. Static Single Assignment Form. Marcus Denker
8. Static Single Assignment Form Marcus Denker Roadmap > Static Single Assignment Form (SSA) > Converting to SSA Form > Examples > Transforming out of SSA 2 Static Single Assignment Form > Goal: simplify
More informationCompiler Optimisation
Compiler Optimisation 3 Dataflow Analysis Hugh Leather IF 1.18a hleather@inf.ed.ac.uk Institute for Computing Systems Architecture School of Informatics University of Edinburgh 2018 Introduction Optimisations
More informationExample (cont.): Control Flow Graph. 2. Interval analysis (recursive) 3. Structural analysis (recursive)
DDC86 Compiler Optimizations and Code Generation Control Flow Analysis. Page 1 C.Kessler,IDA,Linköpings Universitet, 2009. Control Flow Analysis [ASU1e 10.4] [ALSU2e 9.6] [Muchnick 7] necessary to enable
More informationLecture 21 CIS 341: COMPILERS
Lecture 21 CIS 341: COMPILERS Announcements HW6: Analysis & Optimizations Alias analysis, constant propagation, dead code elimination, register allocation Available Soon Due: Wednesday, April 25 th Zdancewic
More informationLecture 6 Foundations of Data Flow Analysis
Lecture 6 Foundations of Data Flow Analysis I. Meet operator II. Transfer functions III. Correctness, Precision, Convergence IV. Efficiency ALSU 9.3 Phillip B. Gibbons 15-745: Foundations of Data Flow
More informationLecture 6 Foundations of Data Flow Analysis
Review: Reaching Definitions Lecture 6 Foundations of Data Flow Analysis I. Meet operator II. Transfer functions III. Correctness, Precision, Convergence IV. Efficiency [ALSU 9.3] Phillip B. Gibbons 15-745:
More informationMore Dataflow Analysis
More Dataflow Analysis Steps to building analysis Step 1: Choose lattice Step 2: Choose direction of dataflow (forward or backward) Step 3: Create transfer function Step 4: Choose confluence operator (i.e.,
More informationProgram analysis for determining opportunities for optimization: 2. analysis: dataow Organization 1. What kind of optimizations are useful? lattice al
Scalar Optimization 1 Program analysis for determining opportunities for optimization: 2. analysis: dataow Organization 1. What kind of optimizations are useful? lattice algebra solving equations on lattices
More informationThis test is not formatted for your answers. Submit your answers via to:
Page 1 of 7 Computer Science 320: Final Examination May 17, 2017 You have as much time as you like before the Monday May 22 nd 3:00PM ET deadline to answer the following questions. For partial credit,
More informationCompiler Optimization Techniques
Compiler Optimization Techniques Department of Computer Science, Faculty of ICT February 5, 2014 Introduction Code optimisations usually involve the replacement (transformation) of code from one sequence
More informationA Bad Name. CS 2210: Optimization. Register Allocation. Optimization. Reaching Definitions. Dataflow Analyses 4/10/2013
A Bad Name Optimization is the process by which we turn a program into a better one, for some definition of better. CS 2210: Optimization This is impossible in the general case. For instance, a fully optimizing
More informationInduction Variable Identification (cont)
Loop Invariant Code Motion Last Time Uses of SSA: reaching constants, dead-code elimination, induction variable identification Today Finish up induction variable identification Loop invariant code motion
More informationChallenges in the back end. CS Compiler Design. Basic blocks. Compiler analysis
Challenges in the back end CS3300 - Compiler Design Basic Blocks and CFG V. Krishna Nandivada IIT Madras The input to the backend (What?). The target program instruction set, constraints, relocatable or
More informationGlobal Register Allocation via Graph Coloring
Global Register Allocation via Graph Coloring Copyright 2003, Keith D. Cooper, Ken Kennedy & Linda Torczon, all rights reserved. Students enrolled in Comp 412 at Rice University have explicit permission
More informationA Practical and Fast Iterative Algorithm for φ-function Computation Using DJ Graphs
A Practical and Fast Iterative Algorithm for φ-function Computation Using DJ Graphs Dibyendu Das U. Ramakrishna ACM Transactions on Programming Languages and Systems May 2005 Humayun Zafar Outline Introduction
More informationData Flow Analysis. Program Analysis
Program Analysis https://www.cse.iitb.ac.in/~karkare/cs618/ Data Flow Analysis Amey Karkare Dept of Computer Science and Engg IIT Kanpur Visiting IIT Bombay karkare@cse.iitk.ac.in karkare@cse.iitb.ac.in
More informationFORTH SEMESTER DIPLOMA EXAMINATION IN ENGINEERING/ TECHNOLIGY- OCTOBER, 2012 DATA STRUCTURE
TED (10)-3071 Reg. No.. (REVISION-2010) Signature. FORTH SEMESTER DIPLOMA EXAMINATION IN ENGINEERING/ TECHNOLIGY- OCTOBER, 2012 DATA STRUCTURE (Common to CT and IF) [Time: 3 hours (Maximum marks: 100)
More informationCompiler Construction 2016/2017 Loop Optimizations
Compiler Construction 2016/2017 Loop Optimizations Peter Thiemann January 16, 2017 Outline 1 Loops 2 Dominators 3 Loop-Invariant Computations 4 Induction Variables 5 Array-Bounds Checks 6 Loop Unrolling
More informationTopic 9: Control Flow
Topic 9: Control Flow COS 320 Compiling Techniques Princeton University Spring 2016 Lennart Beringer 1 The Front End The Back End (Intel-HP codename for Itanium ; uses compiler to identify parallelism)
More informationRedundant Computation Elimination Optimizations. Redundancy Elimination. Value Numbering CS2210
Redundant Computation Elimination Optimizations CS2210 Lecture 20 Redundancy Elimination Several categories: Value Numbering local & global Common subexpression elimination (CSE) local & global Loop-invariant
More informationIR Lowering. Notation. Lowering Methodology. Nested Expressions. Nested Statements CS412/CS413. Introduction to Compilers Tim Teitelbaum
IR Lowering CS412/CS413 Introduction to Compilers Tim Teitelbaum Lecture 19: Efficient IL Lowering 7 March 07 Use temporary variables for the translation Temporary variables in the Low IR store intermediate
More informationPlan for Today. Concepts. Next Time. Some slides are from Calvin Lin s grad compiler slides. CS553 Lecture 2 Optimizations and LLVM 1
Plan for Today Quiz 2 How to automate the process of performance optimization LLVM: Intro to Intermediate Representation Loops as iteration spaces Data-flow Analysis Intro Control-flow graph terminology
More informationIntermediate Representation (IR)
Intermediate Representation (IR) Components and Design Goals for an IR IR encodes all knowledge the compiler has derived about source program. Simple compiler structure source code More typical compiler
More information